10.5446/15342 (DOI)
We've got quite a few projects underneath us and ASLR is one of the projects that we took on. And while people are joining in, while people are still filing in, I thought I would have a little egotistical moment and talk a little bit about me. I'm a big fan of open source. When I started learning how to program, when I was like 12 or so, I didn't even have a computer. I wrote in C on like a piece of paper or anything. It really sucked. And so did my code. But I'm a big fan of open source. When we got a computer and I finally got online, I would look at other people's source code and learn from others. That was a big help. I love freeBSD and ZFS. Great technologies in freeBSD. ZFS is amazing. I'm such a fanboy of ZFS that I even have it running on my little netbook, which kind of sucks. Makes it a little bit slow, but it works. I'm passionate about security. The first time I got online, I was like 14. And I spent a whole month convincing my parents that we need a 56k dial-up modem. And we got online using that free trial of AOL. And then I found out about NetZero and Juno. So NetZero and Juno, they provide free dial-up service, or they did at the time. But they made it so that you had to install Adware on your system. So it would be a program that displays just a little rectangular image with ads. I'm like, dude, this takes up a lot of my precious dial-up bandwidth. I can't deal with this. And so my first experience with security was downloading a hex editor and looking at how the strings and stuff, how the dial-up program authenticated and grepping through using strings to look at all the 800 dial-up numbers. So we didn't have to call long distance either. And so I found out what numbers it uses and how it authenticates with their servers. And I ended up scripting my own little dial-up connection clients so that I wouldn't have to use their Adware. So that was pretty fun. I've submitted a few patches to FreeBSD. By far, ASLR is the biggest. I'm a maintainer of a few FreeBSD ports, mainly dealing with jails and Go and Drupal. I'm the author of Libhijak. Dude, I remember you. That's not necessarily good. I spoke last year about it. It's a tool that makes a shared library that makes runtime process infection easy on Linux and FreeBSD. It's a pretty cool tool. You can inject shell code in as little as eight lines of C code. It's pretty nifty. One of my friends right now is porting it to ARM for Linux. So eventually, hopefully, we'll use Libhijak for a lot of security things on Android devices. A little bit of a disclaimer. Of course, I have to put this to satisfy the legal department. Any opinions, thoughts, ideas, tools, blah, blah, blah, they reflect me only. And SoldierX, not any of my previous employers, my current employers, or future employers. So I'm here on my own accord. I am representing SoldierX here today because they've donated hardware to the project. They've donated resources. They're pretty cool peeps. So what we're going to do is we're going to start off with a few definitions. We're going to start off, you know, try to get on the same page with what ASLR is and the history behind it. Then we're going to talk about FreeBSD security strengths and their weaknesses, some of the things that need to be improved upon. And then we're going to talk about what we can learn from others because FreeBSD is pretty much the last enterprise operating system to implement ASLR. And so we have the unique opportunity to learn from the strengths and mistakes of others. 
Then I'm going to talk about how we've implemented ASLR in a very kind of high level detail because I've only got 45 minutes. In ASLR, ASLR touches quite a few different places in the system. In the source code, I don't have the time to go in depth. But I'm at least going to show you the files that are modified and what we do. Then I'm going to talk about how to use ASLR in FreeBSD and what needs to happen next. Then I've got a live demo and I did sacrifice 3.5 goats to the demo gods. And the goat BOF was born. So for definitions, security, it's what Sony and Mt. Gox have in common. They suck at it. Really, security is just an onion that's made up of ever increasing layers. And the more layers you have, the more time and resources it's going to take an attacker to successfully exploit your system. So we have many different layers. One of those layers are different exploit mitigation technologies, techniques. An exploit mitigation technique is a method or technique to prevent the successful exploitation of security vulnerabilities. Our computers have millions and millions and millions of lines of code in them. All these programs, dozens of programs, hundreds of programs we run on our computer. We don't know where the vulnerabilities lie. But an attacker, a dedicated attacker is going to use whatever means possible to pop a shell on your box. And so exploit mitigation technologies allow you to secure your system from the unknown. At least help in giving you more time to be able to address security vulnerabilities. ASLR is one such exploit mitigation technology. It is address based layout randomization. So what happens is, without ASLR, your application that you want to run like Firefox, for example, says to the operating system, I expect to be loaded at this address. My data is here. I need it here. If it's not here, I'm going to crash. And so your program is loaded in a deterministic way. So if there's a vulnerable function, if there's a piece of data that an attacker is interested in, either a vulnerable function or a password stored in memory, a social security number, credit card number, whatever it is, then the attacker knows exactly how to reach that point in memory and trigger that vulnerability. What ASLR does is it randomizes where the program and its dependencies are loaded in memory. So the attacker doesn't know, the attacker knows that there is a vulnerability, but he doesn't know where in memory it is. So the program like Firefox, I was saying, for example, will tell the operating system, hey, I can be loaded anywhere in memory. That's fine. My data can be loaded anywhere in memory. I'll make do. You just tell me where I'm loaded and we'll make amends. So ASLR helps protect against very low-level attacks, buffer overflows, return to libc, and when you mix that with full format string vulnerabilities and integer overflows. So really low-level things. ASLR doesn't help protect against PHP, LFI, and RFI, and SQL injection attacks and cross-site scripting, all that fun stuff. That's too high-level for ASLR. So we're really just talking about really low-level vulnerabilities here. You can think of exploits like Stuxnet and Flame and even Heartbleed, even though ASLR wouldn't have helped in the Heartbleed case, but same kind of low-level type of thing. To give a little bit of history, this is all on Wikipedia, so thanks Wikipedia University. 
Just to get on the same page, though: in 2001, we now know that the PaX Team is just a single person, but at the time we didn't know if it was one person or multiple people. So I still call them the PaX Team, just because that's how we've called them over the years. They implemented ASLR for Linux as a third-party patch. And in 2004, OpenBSD picked up that patch and it took open... You didn't know about it. Oh, you didn't? It wasn't in any of them. Oh, okay. Sorry. Well, then that was wrong. Well, in 2004, OpenBSD started implementing ASLR. I think it, I guess, maybe not based on PaX, but at least their current implementation is based on PaX, I believe. No? Really? Okay. All right. It may have been based on the same research. Okay. But none of the code. None of the code. None of the code. I don't think of any of the better papers. Okay. All right. Well, I learned something new every day. So it took OpenBSD four years to implement ASLR, but to their credit, they had a lot more work to do. They had to change a whole lot of stuff in order to support ASLR, whereas we're coming at this a little bit behind, so we don't have to change as much stuff. There aren't as many prerequisites for implementing ASLR; for us, it's already been done. In 2005, Linux ripped PaX's ASLR and they dumbed it down, called it more secure, and did a lot of politics bull crap there. I'll talk a bit more about Linux in my learn-from-others slide. In 2007, Microsoft introduced ASLR into Windows Vista. Their initial ASLR implementation was crap. Very easy to bypass, extremely easy. It was pretty much useless. They made some design and architecture decisions that were more of a mistake that lasts until today, because they had to add some fields to the PE header; we'll talk about that later. They introduced ASLR in 2007. It took Apple five years to finish up their ASLR work for OS X, which they started in 2007. In 2011, Sun or Oracle — I don't remember what year Oracle bought Sun — introduced ASLR into Solaris 11. In 2014, Oliver Pinter and I submitted call-for-testing emails to the mailing lists for our ASLR implementation on FreeBSD. FreeBSD has a lot of security strengths. FreeBSD's security mainly relies on policy-based security decisions: the MAC framework, NFSv4 and POSIX ACLs, securelevel, audit, and jails. You can think of jails and bhyve as sort of policy-based security, because you're saying, I don't trust this application, so I'm going to jail it. FreeBSD also has kind of a hybrid of policy-based security and exploit mitigation technology, a sandboxing framework known as Capsicum. That's a pretty good implementation. Sandboxing is pretty cool. I don't have that listed on the slide — I forgot to put it — but Capsicum seems to be pretty useful, pretty promising for the future. On the non-policy side, these are the exploit mitigation technologies: the non-exec stack for amd64 and ptrace restrictions. A fun little fact is that the ptrace restriction prevents libhijack from working. FreeBSD does have a few weaknesses. There's the non-exec stack. A non-exec stack means that you can put your shellcode on the stack — when you're exploiting an application, your end goal is to run arbitrary code, known as shellcode — you can store it on the stack, you're just not going to get it to run on the stack. FreeBSD has that, but it's not working on all platforms. I have a Raspberry Pi, and supposedly it supports a non-exec stack, but it doesn't.
Once we're finished with the ASLR patch itself, I'm going to go back and take a look and see why the non-exec stack isn't being obeyed on ARM. One of the things that goes hand in hand with ASLR is called position independent executable support. I'm going to call that PIE, because that's a lot easier to say than position independent executable. ASLR, really, to be effective, needs PIE support. That's the part that tells the operating system that the program can be loaded anywhere in memory. For ASLR to be effective, PIE support needs to happen in base. I'm actually working on that right now and hoping to get a patch up to Bryan Drewery tonight to be merged into head. That will be coming here soon. Of course, there's no ASLR in FreeBSD. That's why we're talking about it. grsecurity is a third-party patch to the Linux kernel that hardens the kernel and the userland. My end goal is to port all the features that make sense for FreeBSD to FreeBSD. To learn from others: Linux is really political. It's a territorial pissing contest when it comes to this kind of stuff. What they did was they saw PaX's work, but they didn't want an implementation from an anonymous author. We didn't know — the Linux guys didn't know — who wrote PaX's ASLR implementation. They wanted something of their own. Instead of trying to figure out who it was and contact them — there's an actual email address for them; they could have just emailed the guy — they ripped PaX's code and dumbed it down and called it more secure. The one thing that drives me to FreeBSD more than anything — I love FreeBSD's technologies — the one thing that drives me to FreeBSD over everything is the lack of politics. When it comes to this kind of stuff, I sat down with des the other day. I was kind of scared, because this is the first time that I ever had a major patch reviewed. I was kind of scared. But it was more about the technology. He had some ideas and suggestions. It's more like: you've got a patch, that's cool, we need to get this in, but first let's fix a few issues. It's more about the technology with FreeBSD than the territorial pissing contest. That's been my experience. Other people's experience may vary, but that's been my experience when submitting patches and helping out the FreeBSD project. It's not nearly as mind-numbing as it is with Linux. Technologically speaking, let's look at Linux's ASLR. I'm not going to talk about PaX, because Linux's actual ASLR implementation is pretty weak and pretty dumb. It's either turn ASLR on globally or turn it off globally. Say you want this exploit mitigation technology on your system, but you have one application that is closed source, kind of like Flash or the NVIDIA binaries, and that application misbehaves under ASLR — it's crashing or there are other bugs. Then you have to have ASLR turned off for the whole system. That's stupid. The two things that I took away from this are that we need to randomize enough bits — that's one of the problems with Linux's ASLR, they're not randomizing enough bits, and there's no way to control how many bits you want to randomize — and that there's no way to toggle it on a per-application basis. That's kind of a half-lie, but not really. This command up there, that setarch right there: you have to use that command every single time you run an application that you want ASLR disabled for. It's kind of stupid.
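The setarch invocation on the slide isn't reproduced in the transcript; as a hedged reconstruction of the kind of per-invocation toggle being criticized here (the -R/--addr-no-randomize flag is util-linux setarch's, the application name is made up):

    # Linux: disable address space randomization for this one invocation only
    setarch $(uname -m) -R ./misbehaving-closed-source-app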
If you have an application that gets started via init scripts, you have to go and modify the init scripts to disable ASLR. Every time you update the package for that, you have to back out your modifications, let the package update, and then reinsert your modifications. It's stupid. One of the things that I learned from Linux is that we want to be more flexible and more dynamic than that. Windows, now with Windows 7 and Windows 8, has a pretty decent technical implementation. The biggest issue there is that individual DLLs and EXEs can have ASLR turned on or off. Both EXEs and DLLs are just PE files, and there is a bit inside the PE file that says, for this object, ASLR is turned on or off. You can have an EXE that depends on, let's say, five DLLs. We'll say the EXE has ASLR turned on and four out of the five DLLs have ASLR turned on. That means one DLL has ASLR turned off. Windows will apply ASLR to the EXE and to the four DLLs, and ASLR won't get applied to the one DLL. So you're in this hybrid state where ASLR was applied, but not fully. That is a big issue because, for example, just recently — I can't name the vendor, but there was one vendor who, if you use Windows, you probably use their software every day or nearly every day, especially if you're looking at documentation. They had just released a new version of their software and it depended on quite a few different DLLs. It was that exact same scenario: they had compiled their application with ASLR support, and all the DLLs except for one. There was a vulnerability in one of the DLLs that had ASLR turned on, but the call stack was such that the control flow could be controlled via the DLL that had ASLR turned off. Because they could use that DLL with ASLR turned off — because that was loaded in a deterministic way — attackers were able to successfully exploit the application using nothing more than your standard exploitation techniques. That can be a major issue. We still have that kind of issue with the ELF file format on FreeBSD, but that's because the ELF file format was designed years and years ago, way before ASLR was even thought of. That's why PIE support is really important, because otherwise you're not randomizing all of the address space. If you don't have PIE support for that application, you're still randomizing where the dependencies, the shared objects, get loaded, but not the binary itself. So you still kind of have that DLL issue, just in reverse. So, ASLR for FreeBSD is available on all architectures FreeBSD supports, with some caveats... We don't have hardware for all architectures, of course, so your mileage may vary. We actively test on amd64, i386, ARM, and kind of sparc64 — vanilla FreeBSD, just FreeBSD 11-CURRENT without the ASLR patches at all, has issues on sparc64, so I haven't been able to test the ASLR stuff too much on sparc64. So we have exec base randomization via PIE. I'm still working on the PIE patch, and we're hopefully going to get that into base here by tonight. I ran into some issues with being able to compile the bootloader, so that's kind of important. So, two of my favorite features. We're going to have three ways that you can toggle ASLR for individual binaries. One isn't implemented yet: filesystem extended attributes. You can set an extended attribute for the binary in the filesystem that says, hey, don't apply ASLR when you're executing this application. Another way is per jail.
If your application is misbehaving, you can jail that application and have ASLR turned off just for that jail. So you have ASLR turned on for your host OS and all your other jails, but for just this jail, you can have ASLR turned off. And my favorite feature overall is that I've tied into the file system firewall known as ugidfw, which is part of the MAC framework, and that allows you to specify dynamic rules, kind of like IPFW-style rules, for controlling ASLR on a per-binary and even per-user and per-group basis. So it's pretty cool. [Audience] So what's the difference from Windows in the binary case, from what you do? Do you have the same problem, is what I'm asking. If Adobe were installed, we'd have a chain of SOs or whatever. Yeah — in Windows' case, you can have an individual DLL that doesn't have ASLR turned on, and in FreeBSD's case, it would be the binary itself, not the shared object. But the shared objects are handled as well? They are, but they're already... shared objects are compiled such that they're already randomizable. That's the nature of a shared object: it can be loaded anywhere in memory. Previously, without the ASLR patches, shared objects were getting loaded in a deterministic way. They were always getting loaded at the same address. Even though they could be relocated, they weren't. So part of this ASLR patch is to do that for shared objects. And would all objects that a binary loads have ASLR applied or not applied, depending on the binary? Or could you load a binary with ASLR turned on and some of the shared objects may have it turned off? No. It's the reverse. Shared objects will always get randomized unless ASLR is turned off for the binary or in the jail. If your binary is not compiled as a position independent executable, though — compiling as a position independent executable turns your executable into a shared object, pretty much — so if your executable itself is not compiled as a PIE, then the executable will be loaded at a deterministic address in a deterministic way. So we kind of have the same type of problem as Windows, but in the opposite direction. And I'll show you how that all works. So here are all the settings, all the sysctl tunables. The status tunable: there are three different values you care about. Setting it to one means opt-in: you have to have your applications opt in, right now just via ugidfw. Setting it to two means opt-out: all applications will have ASLR applied except for the ones that you say you want to opt out. And three means that it's enabled globally — we're going to force ASLR to be turned on everywhere, for every application. And really, my favorite feature is the ugidfw integration, because you can create firewall rules that apply per user, per group, per file, or per object. You can define an object not just as a file, but in different ways. You can take a look at the man page for the object stuff. It's really cool. It's very powerful. But there is some ABI and KBI breakage with this. So if you are a third-party developer and you've developed based off of the libugidfw stuff, then there is a little bit of ABI breakage. You'll need to recompile your application. API-wise, there is no breakage, so all your function calls and stuff will remain the same, but I did have to change some underlying structures. So an example rule would be, for me, if I want to disable ASLR for this test application, I could add a rule for my user only. When I run this test application, that small a means disable ASLR. So that's pretty cool. And if a different user runs that application, they get ASLR. Yeah.
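Going back to the status tunable described above, a hedged sketch of toggling it from the command line — the tunable name follows the security.pax.aslr tree shown in the demo later in the talk, and the numeric values are as described verbally, so treat the exact numbering as illustrative:

    # opt-out mode: apply ASLR to everything not explicitly excluded
    sysctl security.pax.aslr.status=2
    # check the current mode
    sysctl security.pax.aslr.status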
So this is kind of the low-level details — the higher-level version of the low-level details. In kern_pax.c, those are just generic helper functions. kern_pax_aslr.c is where the sysctls are implemented and where the math of how we apply the ASLR is done. And imgact_elf.c is where we're doing the runtime linker randomization, and it's where we've implemented PIE support. So there is a potential issue with compiling as a PIE, in that since we're randomizing the load address of the executable, you might worry about ending up with a null mapping — well, our implementation guarantees that you won't have a null mapping. So that's kind of important. We double-checked and made sure that that isn't an issue, and null mappings will never happen with our ASLR and PIE implementation. So this bullet's a little out of date as of today. There are two knobs that need to be set. One is user-controlled and one is developer-controlled. The user-controlled knob is WITH_PIE. You put WITH_PIE in your make.conf or src.conf: src.conf if you want PIE only for base, make.conf if you want it globally. But some applications don't support being compiled as a PIE, and some applications are kind of buggy when run as a PIE. So each application must — this is the part that's out of date: I'm turning this opt-in into opt-out. By default, applications in base will compile as a PIE, except for those that have the NOPIE=yes flag. So I'm adding — what's that? Are you imp? Okay. There isn't really any other way. We'll talk. We'll talk. But I don't know if there's any other way to do it other than a no flag. So there is now. Cool. So I'll utilize that. Crap, I've got to go and change all 30 different files. I'm changing them. No politics — isn't it crazy? Well, see, that's exactly it. They're helping me. They're like, hey, do it a different way, but they're helping me with it. Whereas with Linux, it'd be like, go screw yourself. But he could, but he doesn't. How does the New Yorker say, go to hell? So the next project is going to be dealing with the ports framework after this, and that's going to be a project in and of itself. It's a good thing that I work with some ports people, just 20 feet away — well, a little bit more than that. But that's going to be my next major project: adding PIE support to the ports framework. So, how to use it? Compile your kernel with the PAX_ASLR option. Right now, ASLR is not in base, so you have to apply a patch. We're still in the call-for-testing phase. So apply the patch, add PAX_ASLR to your kernel config, recompile, install the kernel. By default, ASLR will be turned on when you compile with that kernel option. And if you use jails, child jails will inherit the parent jail's settings. That means if you have ASLR turned off for the parent, then when you boot up a new jail, ASLR will be turned off for that jail when it boots up. And if you want to take advantage of PIE support, then you have to compile your applications with -fPIE and -pie in your CFLAGS, and ideally also add -pie to your LDFLAGS as well.
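A hedged sketch of those steps — the kernel option name is as spoken in the talk, and the file names are made up for illustration:

    # kernel config fragment (then rebuild and install the kernel):
    #     options PAX_ASLR
    # build an application as a position independent executable:
    cc -fPIE -pie -o test test.c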
I'm going to skip this slide, because our segfault protection, which is a recommended feature for ASLR but is not required — we're still working out the low-level details of that. Segfault protection is a very difficult thing to architect, and we're going between two or three different designs and architectures for our implementation. So we're still a little bit undecided on how we want to implement it. Basically, what segfault protection does is frustrate the attacker who is trying to determine the inner workings of ASLR, to see if there's any deterministic behavior. Say there's a vulnerability that the attacker is triggering that causes a segfault, and take sshd: sshd will restart the process automatically if it crashes or exits. An attacker will basically do something like a password brute-force attack, but in this case a segfault brute-force attack. Segfault protection just adds a delay into restarting that application. [Audience] Just one thing: Linux only handles segfaults, but you get other errors where pointers get corrupted as well, so make sure that's included too. Okay, that's really good to know. I did not know that. Danilo Egea, who is a committer, is actually working on the segfault protection feature, and so I'll bring that up with him. So, for future work: I need help with the ARM stuff. I don't know the ARM architecture very well. I attended yesterday's ARM intro presentation — that was really helpful, but I still don't know jack about ARM. So if you own a BeagleBone Black or Raspberry Pi and you're willing to test this stuff out, get with me; I'd love some help with that. Hopefully we're going to be committing the PIE support today, after I remake all my changes to not do the NOPIE thing. And we need testing. We need people to test this. I've been running the ASLR patches on my box pretty much since their inception, and our ASLR implementation, at least for amd64, is solid. I have not had a single issue. I now have Chrome — the Chromium project — compiled as a PIE, and it works great, except that HTML5 audio and video don't work anymore. I don't know why. My end goal, though, is that once ASLR is done and in base, and PIE support is done and in base, and NX support is fixed on the non-amd64 — and I think i386 — architectures, then I'm going to work on more grsecurity and PaX features. I'm probably going to work on W^X next and then MPROTECT. So, now is the demo. You can see in this test application, basically all I'm doing is I've got a pointer that points to some data, and I'm just printing out the address of that pointer. So you can see: sysctl security.pax.aslr. You can see I've got ASLR turned on. I am — wow, just had a brain fart — randomizing 21 bits of mmap calls and 21 bits of the exec base. That's PIE; the exec line is for PIE. So when I run this application, you'll see that that address gets randomized each time. Now, sysctl security.pax.aslr.status=0: I've got ASLR turned off globally, and that address isn't randomized anymore. So right now what I'm doing is adding a ugidfw — a firewall, or file system firewall — rule to disable ASLR for that one application. Actually, what I'm going to do is disable ASLR just for that application. This mode says I can read the file and execute it. The PaX flags option, which is optional: the lowercase a means ASLR is disabled for this application. The uppercase A means it's enabled.
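The test application in this demo is described only as printing the address of a pointer on each run; a minimal reconstruction — not the author's actual source — might look like this:

    #include <stdio.h>

    int
    main(void)
    {
            int data = 42;
            int *p = &data;

            /* with ASLR enabled, this address should change from run to run */
            printf("%p\n", (void *)p);
            return (0);
    }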
So if we're running on an opt-in basis, using a capital A, we'll say I'm opting this application in to ASLR. So now you can see the behavior is the same, but with security.pax.aslr.status we can see that ASLR is enabled globally, yet it is disabled for that application. So I copied test to test2, so it's a different file. We can see that ASLR is still working for other files. Can you run it just with 3D so we can see the firewall? The firewall rule? Yeah. It is? Oh, no, just run test. Oh, yes. It's a different user. Yep. So running it as a different user means ASLR is applied for that file. I chose 10 because it's a nice, easy number to remember. So now that I've removed the firewall rule, ASLR is enabled. So that's really it for the presentation. I wanted to thank a few people. Oliver Pinter is the one who initially started development on the ASLR feature. What happened was I had posted on my little tech blog that I was going to start working on ASLR for FreeBSD, and he somehow saw that post and contacted me and said, hey, let's work together. So I added the per-jail support, PIE support, and the ugidfw stuff, and he did quite a few other awesome things. Danilo Egea is working on the SEGVGUARD feature. And Ryan Steinmetz is the ports committer and ports security team member that I work with. He's awesome. I'm probably going to murder this name: Johannes Maksner, no clue how to say that last name. He convinced me to send a status report for the quarterly status report, which got some people interested and added some more testers to the CFT. And thanks to SoldierX for donating hardware and support otherwise. So these slides are online. It's just a text file that you can open up with vim, Emacs, whatever. These are some references. And thank you very much for coming. Thank you.
Address-space layout randomization (ASLR) has existed in many operating systems for a number of years. The most famous implementation is the PaX patch for the Linux kernel. This presentation introduces and announces an ASLR implementation based on PaX for FreeBSD/amd64. Details regarding how ASLR has been ported to FreeBSD and some advanced features will be presented. FreeBSD will soon be getting a port of PaX to 11-CURRENT/amd64. This presentation details changes to how ELF executables are loaded in memory and innovative workarounds for legacy applications that don't support ASLR (or misbehave under it). Jails can have their own ASLR settings. Misbehaving applications can be run in a jail with ASLR turned off, while ASLR remains turned on in the other jails and in the host.
10.5446/15337 (DOI)
Let's see if we can make this do something. Good. We're here to talk about building FreeBSD with bmake. I'm going to do a quick intro. The question came up recently on the developers list: why bmake? I'll briefly talk about that. I'll talk about the transition of FreeBSD to using bmake. And one of the reasons for giving this talk is to provide a venue for people who have questions to ask them. So if you have any questions, feel free. I'll then talk a bit about the meta mode and DIRDEPS builds, and projects/bmake, where you can find them and play with them — which was, to a large extent, the reason for this whole exercise. So, I started bmake back in the early 90s. Up until then, I'd been mostly a SunOS guy. Because Sun in their wisdom didn't provide a tool chain, everybody used gmake and GCC and so on. And in '93, my Sun workstation got hit by lightning, and I decided that I had to replace it. And I didn't really fancy spending another $20,000 on another Sun workstation when I could spend $3,000 on a PC and run 386BSD. And so that's when I discovered the BSD make. And it didn't take very long to decide that that was a winner. So bmake is basically NetBSD's make with autoconf added. Much as I liked building on BSD — NetBSD in particular at the time — most of my clients used crazy stuff like HP-UX and UTS and AIX and all sorts of nonsense. And so I wanted to be able to have a nice portable build that was as nice as the BSD build, but portable to everything under the sun. And that's what I got with bmake. I've personally run it on everything from AIX to UTS and pretty much everything in between. Back in 2000, when I started at Juniper, their build was in a fairly sorry state, and my first project was to fix it. And so it's been building with bmake since 2000. And to a large extent, Juniper has pretty much funded me to develop make for NetBSD and anybody else that wants it for the last 14 years. I put a lot of features into NetBSD's make in that interim. And one of the things that always surprised me was, although I didn't do a lot of evangelizing of these features within the NetBSD community, almost without exception, within a week of me putting some new cool feature into make, it was being absorbed into the NetBSD build without me even mentioning it. And I always thought that was pretty cool. So up until fairly recently, the Juniper build had diverged considerably from FreeBSD. At Juniper we run Junos, which is basically FreeBSD, and we modified the build considerably. And we never really bothered tracking the FreeBSD build until a few years ago, when we decided that we wanted to move to an environment where we had FreeBSD building as close to stock FreeBSD as we could, and then added our Juniper bits on top of it. And then all of a sudden, we had the situation of: well, we don't want to have to continually retrofit our build to FreeBSD, and we don't think FreeBSD is quite ready for swallowing all of our build. So let's see if we can meet somewhere in the middle. And so in 2011, I gave a talk here and we had a chat with some of the FreeBSD folk about whether there was any interest in picking up some of our build technology for FreeBSD. And there was actually considerable interest. So we started the process. And so, let's see: the first commit of bmake into FreeBSD head was October 12, 2012. We had a bit of a hiccup with the ports. We needed to get a ports run to confirm that we weren't going to destroy ports.
And they had to rebuild all their infrastructure, which delayed that for a while. But nonetheless, FreeBSD 10 shipped building with bmake by default. It's been backported to 9.3, so the ports folk can use that. And I have to say none of this would have been successful without the goodwill and cooperation of both projects. There are a number of changes that have gone into NetBSD's make to make it easier to do this project. The NetBSD folk were more than willing to absorb some of those changes. And so thanks to all of them on both sides. So, why bmake? Somebody asked this quite recently, and actually Warner answered it for me, and he put it very well. Basically, bmake is little more than NetBSD's make. NetBSD have a very active team of folk who are contributing regularly to bmake, so it's not a situation where, if I get hit by a bus, everybody's through. There are plenty of people around who understand it and can support it and maintain it. It has a plethora of cool features, not least of which is an abundance of modifiers. And quite importantly in that regard — I forget how many years ago — I reworked the modifier handling so that you could use it recursively. And what that lets you do is stick complex constructions of modifiers into variables, which you can then reuse without having to cut and paste them everywhere and get them wrong. Things like multiple iterator values for for-loops are quite nice. There's at least one for-loop in the ports stuff that would have been much nicer if it had been able to use that. Things like being able to auto-ignore stale dependencies found via .depend are actually very helpful. I did a clean on a FreeBSD tree that I've been doing production builds with for a year now, and I had to clean it for the first time a couple of weeks ago, which was really annoying. Because we'd switched branches — I had to switch to a new branch, and there was something I had to clean. It was annoying. I'm going to talk a bit about dirdeps.mk. It uses pretty much every single feature of bmake, and so it alone is a cool reason to do this. So why meta mode and/or dirdeps? It's worth noting — I usually talk about them as one and the same thing. They're not. You can actually use each of them independently. You could use dirdeps with a completely manually maintained build system. It would work very nicely. But it's certainly nicer and easier if you have the meta mode stuff as well. Together, they give you a very simple, reliable, and maintainable solution to building complex chunks of software. The FreeBSD build is a decent size. The Junos build is a lot bigger. And building large sets of software in parallel is actually a somewhat complex task, but we like to make it as simple as possible. One of the nice things about the dirdeps build in particular is that the build works exactly the same way regardless of where you start. So if you like to edit bin/cat and run M-x compile within Emacs, such that you're effectively doing a cd into bin/cat and running make from there, it will do all the right stuff even if it's a clean tree. Similarly, if you're at the top level and you want to do a quote-unquote production build, or you just want to build bin/cat and so on, it works exactly the same way. I'll probably cover it again. If you look at the top-level makefiles in, say, FreeBSD or NetBSD or Junos in the pre-meta-mode days, there are some pretty large makefiles. I think FreeBSD is currently up to about 1,800 lines. The pre-meta-mode Junos build was 5,000 lines.
And I could guarantee that there would be very few people in the company who could actually read it all and follow what was going on. When we cut over to the meta mode stuff, that was reduced to less than 200 lines — and 200 lines that virtually anybody could read. So it's affordable. I don't know about you guys, but at Juniper I have close to 2,000 developers, and believe it or not, a lot of them don't log their builds. And most of them are not build geeks. And so when something goes wrong, they send an email to build-trolls: I updated my tree and it doesn't work. Yeah, a bit more info would help. Actually — you'll see, I've got an example — we trained bmake years ago to spew mountains of useful data that is exactly what we need to see when you say, my build broke. And we document all this stuff, and it says: if you want to report a build problem, bmake, when it blew up, spewed out all this info; please include all that in your problem report and you'll get much faster resolution. Well, of course, you never get any of that. And when you ask them, well, where's your log? Oh, I don't have one. And so you're very often stuck with saying, well, go do it again, log it, and come back to me. With the meta mode stuff, we actually log the critical info whether they want us to or not. And so very often, all the important data you need to analyze a failure is there, and everything works better. All right, here's a quick teaser. It's very hard to make a build log look interesting, I'm sorry. This is building bin/sh in a clean tree. I usually use bin/cat, but some people think bin/cat's a bit simple, so bin/sh is a lot more complex. So here we are building bin/sh in a clean tree. We just cd'd into its directory — well, actually we cd'd into bin/sh — and we said make -j8. And it just went crazy and did it all. And you can see — how long did it take? It took 58 seconds. It's not a particularly fast machine. So that was building libc and all that sort of stuff. The things to note are: the object directories were created automatically. There was no make depend done anywhere. Everything ran in parallel, but in the correct order. The log was easy to read. There was no noise about compilers and all that sort of stuff. We only built that which was necessary. And we visited the leaf directories directly — there are no tree walks involved. And at the end there, in this slide, you can see the last thing it did was update Makefile.depend, or at least check whether it needed updating. All right, try it again. So this is a quick look at one of those Makefile.depend files. This is actually, I think, the one for bin/sh. In the Junos build, because we're always cross-compiling and we're always building for multiple architectures at the same time, we actually use a Makefile.depend.${MACHINE} by default, so that we can have them updating in parallel without any concerns about contention. When we looked at doing this for FreeBSD, we figured that was going to be a little bit too much, and so one of the exercises we went through was to look at making it all work with a simpler model. And it's actually worked out very nicely, I think. So what we have here — you can see from the comment, it's an auto-generated makefile. That means you don't touch it. The first thing it does is set a variable that basically represents the relative location of this directory within the source tree. And that's key to how the whole thing works. .PARSEDIR is one of those magic variables that bmake sets for you: it represents the directory where it found the file that it's currently reading. And one of the modifiers that bmake has lets you take the value that you've got here, pass it to realpath, and turn it into an absolute path, which is extremely useful for this process. The key data in this file, apart from that, is DIRDEPS, which is a list of the directories — and again, these are directories relative to the top of the source tree — that need to be built before we can build bin/sh. And I mean, it's pretty simple to read. The only complexity here, if you want to call it that, is that we substitute the CSU dir variable for what it actually was. This allows us to use the exact same list of dependencies regardless of the architecture. i386 is a bit of a wart, in that for most architectures the CSU dir is easily derivable from the machine value; not quite so for i386. And then the last thing it does is include dirdeps.mk. And also at the end there, it will capture, if necessary, any local dependencies. These are what allow us to do a parallel build in a clean tree without doing make depend. This is effectively capturing just the stuff within each directory, which is the sort of thing that would normally break a parallel build in a clean tree. We'll look at this in more detail later.
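A hedged sketch of what such a Makefile.depend might look like, pieced together from the description above — the directory list, the CSU_DIR name, the local dependency, and the exact boilerplate are illustrative and may differ from the real tree:

    # Autogenerated - do NOT edit!
    # SRCTOP is assumed to point at the top of the source tree;
    # :tA canonicalizes .PARSEDIR via realpath, giving the path relative to SRCTOP
    DEP_RELDIR := ${.PARSEDIR:tA:S,${SRCTOP}/,,}

    DIRDEPS = \
            lib/${CSU_DIR} \
            include \
            lib/libc \
            lib/libedit

    .include <dirdeps.mk>

    .if ${DEP_RELDIR} == ${_DEP_RELDIR}
    # local dependencies - needed for -j builds in a clean tree
    sh.o: builtins.h
    .endif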
So, building FreeBSD: the projects/bmake branch, as I briefly mentioned, is a test vehicle. It's an exercise in putting all of this building into a tree which is very close to head — it was synced last week from head — and it uses all the meta mode and DIRDEPS stuff and all that sort of thing. The goal is to be able to easily cross-build FreeBSD as close to stock as possible, while minimizing changes to FreeBSD itself. Within Juniper, we have a team of FreeBSD folk. We've been building head and stable/10 now since back around 9, using almost exactly the same setup. There's a bit more stuff there; for instance, we use external tool chains, and we generate packages, which are basically ISO filesystems and all that sort of stuff, and disk images for feeding into VMs. But from the pure build point of view, there's very little difference. So just quickly, on the transition for FreeBSD: we first talked about it with the FreeBSD folk in 2011. Converting the base to use bmake, which is the first step to being able to do any of this, was actually very simple. It took a patch of less than 300 lines to allow FreeBSD head to build using bmake instead of fmake — and we're going to use fmake to represent the old FreeBSD make, just to avoid confusion. Ports was a little bit more complex, because they have to support building on older versions of FreeBSD. At the time, it was 8.3, I think. And they don't branch. So that makes for a bit of a challenge. More importantly, it's reasonably simple to fix the base; it's a bit more complex to fix ports. The most difficult problem is all of the thousands of people out in the world who are using FreeBSD and things derived from FreeBSD, who we have no visibility into. And for those people, it's a bit rude to go and pull the rug out from under them. And so, as we'll see, we made a number of changes to bmake to minimize the impact on those people as we go. And so again — if anybody does have horror stories or something like that, feel free to let us know so that we can maybe do something about them. So, there were changes made to bmake to support FreeBSD.
Almost without exception, those changes go into NetBSD first and then get imported back in. One of the goals of the exercise is to minimize divergence between FreeBSD and NetBSD with respect to make, so that we all benefit. So, both bmake and fmake are descended from Adam de Boor's pmake from Sprite. They've diverged quite a bit. One of the most obvious differences is the :U and :L modifiers, which I think FreeBSD got from OpenBSD at around about the same time that I was sticking all the OSF modifiers into bmake — which, oddly enough, had :U and :L as well. Fortunately, these aren't used by the base system at all, and as of last week or this week, ports doesn't use them either. But for the purpose of our story, it's still an issue — or it was an issue. The other, I guess, key difference is that bmake is — aggressive is putting it nicely — with respect to .PATH. He will find anything via .PATH that he possibly can. And that isn't always what you want, so you sometimes have to curb his enthusiasm. And there's a .NOPATH thing to do that. Fortunately, just simply saying .NOPATH: ${CLEANFILES} covers 99% of the problem. NetBSD's bsd.own.mk marks all of the standard targets as both .PHONY and .NOTMAIN. So that was another change. One of the real fun ones is handling job tokens. In all modern makes, when you do a make -j8, you don't want every sub-make to start another 8 and so on — you get, what is it, a geometric increase in load. So they use a means of having a token pool of some sort to constrain that. Both fmake and bmake do that, but they do it differently. fmake uses a FIFO: when fmake starts up, he has a look to see if the FIFO exists, and if it doesn't, he assumes he's the master, creates the FIFO, exports its name, and everybody else then uses it. bmake, on the other hand, uses a pipe, and he passes a magic argument to his children with the descriptors for the pipe. And so if bmake doesn't get that argument, he assumes that he's the master, and so he goes and creates it and passes it on to his children. Both systems work nicely, but they don't work well together. In particular, if you had fmake as the initial instance, he would export the name of his FIFO to the environment and go and start a bunch of sub-makes. And if they happened to be bmake, each of them would think that he was the master, because he had no clue that he wasn't, and so he would start another N. It almost works the same going the other way, except that fmake would blow up if bmake passed him his -J argument. So you're sort of saved there by that incompatibility. But if you were careful to cleanse the environment that you passed to fmake to not have the -J, then each fmake would think that he was the master, and you'd have the same problem. There's other useful stuff in there. So, a quick talk about the changes to the base. Any of the bmake-specific syntax we bounded with .if defined(.PARSEDIR) — that's one of those magic variables that bmake defines for you. We added .NOPATH for all the generated files, and .PHONY and .NOTMAIN for the standard targets. We created .WAIT as a no-op target for fmake, so that you can stick .WAIT into SUBDIR lists and other things like that. And we ended up adding a WITHOUT_BMAKE option so that people who, for whatever reason, didn't want to venture into the shark pool could avoid it for a while.
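A minimal, purely illustrative sketch of the idioms just mentioned (the directory and file names are made up):

    SUBDIR= lib .WAIT bin sbin        # .WAIT: finish lib before bin and sbin start
    .NOPATH: ${CLEANFILES}            # keep .PATH from "finding" generated files elsewhere
    clean depend obj: .PHONY .NOTMAIN # standard targets: always run, never the default target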
Another change: fmake had a preference for BSDmakefile before it looked for the traditional Makefile and makefile. bmake only looked for the traditional ones, but it's configurable, so by simply sticking a list into sys.mk you can make it look for anything you want. And in all of these cases, if we could address an incompatibility by just setting a knob in sys.mk, that was the preferred solution. For the job token one, that's pretty much what we had to do. bmake, in particular, only passes his token pool descriptors to sub-makes if the target is tagged with .MAKE, the special source. In FreeBSD's fmake, the man page says exactly the same thing — it is supposed to do the same thing — but it doesn't seem to work, which is evidenced by the fact that somebody went and added this ${_+_} thing to basically do what .MAKE is supposed to do. And so what that means is that the FreeBSD makefiles — and presumably the thousands of makefiles written by people who use FreeBSD but aren't in the source tree — have not necessarily been sticking .MAKE on all the targets where they would want .MAKE if it had worked. And so the only reasonable solution we could come up with for that one was to set a little knob in sys.mk to tell bmake: you know what, just pass your descriptors to all the children. And that sort of makes the problem go away. Oh, the error token is a nice one. So normally when bmake fails — this is in jobs mode — he sticks an error token back into the token pool. And the next time all the other sub-makes go to take a token out of the pool, they get the error token, they say, oh my god, and they bail. Which is wonderful, because your build stops immediately on error, which 99.999% of the time is exactly what you want. Apparently it's not what you want for make universe. In that case, people would prefer that if some variant of MIPS is broken, the i386 and all the rest could just chug along and get as far as they could. And so we added another knob to say: you know what, if this job error token knob is set to false, then don't stick an error token into the pool when you die. And so in the top-level makefile, if it sees that you're doing make universe, it'll set that knob to false. And so you can get the make universe behavior that you want and still get the nice behavior for all the normal cases. Oh, another — this is another fun one. So in the long ago, make used set -e to fail things on statements. I don't recall exactly why, but I think it was mainly to get more consistency in target handling between jobs mode and compat mode. In NetBSD, they did away with all that, and they made it work very reliably such that the unit of failure was the command line rather than statements within the command line. So with fmake, you could do cd to some bogus directory, semicolon, rm -rf *, and it would be reasonably safe, because unless you had a directory called that, it would fail at the cd and never get to the rm -rf *. If you fed that same command to bmake, though, he would happily remove everything in the current directory, which may or may not be good — well, the object directory anyway, if you had one. The format at the bottom is, of course, safe with any version of make and is therefore preferable. And while it would have been a quite simple exercise to go through all the makefiles in FreeBSD and fix them so that they did the latter form instead of the former, again, that doesn't do anything for the thousands of people out in the world who may not get the memo. And so we had to do something about that.
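To restate that hazard as a sketch (the target names and directory variables are made up):

    # dangerous under stock bmake semantics: if the cd fails, the rm still runs right here
    sloppy-clean:
            cd ${BOGUSDIR}; rm -rf *
    # safe with either make: the rm only runs if the cd succeeded
    safe-clean:
            cd ${OBJDIR} && rm -rf *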
So again, this is one of those things that you can configure with make. So we just defined a shell for FreeBSD to use by default, which effectively makes bmake behave the way fmake does and use set -e by default. The WITHOUT_BMAKE option: I think when we originally proposed adding bmake to FreeBSD to start this transition process, we had in mind that we would just add bmake — it would get installed as bmake and people could choose whether they used it or not, and so on. That wasn't considered ideal by the project. They wanted us to do the cutover as soon as possible: effectively, one day you'd go to bed and it was fmake, and you'd wake up the next day and it's bmake. And that would have been feasible for the very first part of the project itself, but it's a bit more of a challenge, again, for the third parties. And so we ended up having to add that option so that people who didn't want to face the transition right now could avoid it. And so we added this option, and the source makefile had to do the right thing, which typically involved — it has a dance, for when you're upgrading, to recognize that the make you have on the system is not quite adequate for building the new version of the software, and it would go and bootstrap a new version of make and so on. And once that existed, it would use it. But it always called it make, and so it would say, oh, I have this temporary version of make, I'll go use it — even though it might have been six years old. So we fixed that so that if you wanted to be building with fmake, it made sure that the temporary make was called fmake, and if you wanted bmake, it made sure that the temporary make was called bmake. And if you changed your mind, it would do the right thing. That's all being ripped out now. Thank you, Warner. It apparently was broken for a while anyway, and nobody noticed. So obviously... That's OK. It's OK. It's gone. So we've burned the bridge. So that's progress. I mentioned the :U and :L for ports. I'd had a cunning plan for how to deal with bootstrapping ports, which worked brilliantly unless you were the ports folk, because of the way that they want to be able to build ports. And so that wasn't going to work. So yet again, we added a local hack for ports to be able to set a knob saying, I really want the old :U and :L. And even with this set, actually, in many circumstances bmake can do the right thing, because it can tell whether it's looking at a :U with a value after it, or whether it's looking at a :U with nothing, in which case it might be the old modifier. But again, that's due to be removed after I get back from this conference. Another thing that fmake had, which NetBSD's make didn't have, was the tolerance for quoted strings as iterator variables, and so that was added to NetBSD. And then I think one of the final ones was the -V behavior. If you do fmake -V FOO, it spits out the fully resolved value for you. If you do the same thing with bmake, it will give you the literal value, which may or may not include other variables. This is brilliant if you want to do debugging of what's going on — I use it a lot. And with bmake, you can always put ${FOO} in there to expand it fully. So this is great for command-line type stuff, but it's not necessarily ideal for the build itself. For the build itself, the fmake behavior is actually better. And so I proposed changing the default behavior and adding a debug flag so that you could get the literal value. We compromised by adding yet another knob to say which behavior you like. So FreeBSD sets .MAKE.EXPAND_VARIABLES to true, so you get the default FreeBSD behavior. But regardless of the setting, you always have the debug flag to get the literal value. And so both projects are able to have the semantics that they know and like.
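A quick sketch of that difference on the command line — CFLAGS is just an example variable, and the -dV debug flag is the one being referred to, assuming current bmake behavior:

    $ bmake -V CFLAGS        # NetBSD default: the literal value, possibly containing ${...}
    $ bmake -V '${CFLAGS}'   # fully expanded, regardless of the knob
    $ bmake -dV -V CFLAGS    # force the literal (raw) value even where expansion is the default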
Oh yes, this was the last one. So bmake, some years ago, switched the way iterator variables within for-loops are handled, to avoid namespace pollution. And so they use this :U value. What the :U does is say: if the variable doesn't have a value, use what follows as the value. And so we have, effectively, a null variable, and so we're going to use the value as the value. And this avoids, as I say, namespace pollution. Trouble is, it makes it very difficult to try and play games like escaping iterator variables within for-loops, especially when you have nested for-loops and you've got to do multiple escapes to try and escape the escapes within the for-loops and so on. And I looked at this one for a while and I thought, you know what, let's forget it. Let's just use the inline loops, which are much easier to follow. So I basically replaced a 20-line block of nested for-loops with a single line, which produces exactly the same output, of course. Oh, OK. All right. Did we have a pointer? There was a pointer at some stage. Here is one. Doesn't matter. I can't reach. All right, so this thing is the inline loop modifier. The first thing that follows it is going to be the iterator variable, and then everything between the next at-sign and a closing one. And this isn't the closing one, because this is introducing a nested loop onto the end. So the closing one is like this, followed by another colon, for example. Oh, brilliant. Thank you. Hope I don't kill anybody. Ah, excellent. All right. This is so cool. All right, so in the man page we mention that in the OSF development environment, where this came from, they actually used a convention of an uppercase variable name with a dot at the end for the iterator variable. That's really hard to say — and it's really hard to type. I'm lazy, so I prefer single-character iterator variables, because then you can just use ${p} unless you need to do modifiers on it. So anyway, this is two nested loops. The first thing we do, for the MAN stuff, is stick man/ at the start of it, and if we find that we've got an empty one, we get rid of it. And then for those — each value that we've got there is going to be an m variable. And then we're also going to iterate through the MLINKS. Hang on. What are we doing? Yeah, so each of those is going to be p. So we've got a p, so there's another loop. So we've got ${p:E}, which is the extension of p, and so this is actually referencing a man ${p:E} prefix variable. And then we've got m, which is ours from the MAN loop, and then again we stick the p with its extension at the end. And unfortunately, when I did a full screen on my terminal with the normal font, you could actually see the end of the line, and it finishes with a couple of colons and closing brackets. The :@, :U, :L, :D and :P modifiers all came from the OSF version of pmake. And the inline loop operator is particularly wonderful, because unlike a for-loop, where everything is getting expanded as you read the for-loop, in this case the loop is expanded at the time of reference. And so you can do some very cool stuff with it that you can't possibly do with normal for-loops.
So this is saying, for each m in what precedes it — rather than a for-each as in the dot-for version — and then for each p within the loop? Yes. Yeah, I figured you guys would be computer science types, so you'd probably be able to get the hang of it. So do you use the inline loops instead of dot-for loops? Yeah, both of them have their place. Don't get me wrong. Dot-for loops are definitely useful at times. But there are things that, to do with a dot-for loop, you have to do obscene things which are just quite natural with the inline loops. All right, so why is bmake so cool? Well, it can do dirdeps. That's reason enough. You can see dirdeps in the tree — it's with the bmake mk files, dirdeps.mk. I don't recommend reading it on an empty stomach. I've been doing this sort of stuff for 20 odd years, and I consider dirdeps one to read on a full stomach. But anyway, as I mentioned, we have a plethora of modifiers. One of the ones that somebody added not that long ago is this one. So I saw in FreeBSD they have a fixed value that is used for the random seed for the C++ compiler. This allows you to have a repeatable build, but have a different anonymous initializer seed for every library that you build, which is actually quite a nice thing. This is an example of using .PARSEDIR and .PARSEFILE. This just ends up being a token that you can use. If you want to create a little marker target — makefile.inc, for example, to test whether that makefile.inc has been included already — you can make a target that includes this or this, or you can spell this all out, but it's more readable to just say this. This will ensure that you'll get a target that is the same value regardless of whether you read it via an absolute path, a relative path, or any combination thereof, because the :tA turns it into an absolute path using realpath. So you canonicalize it. This one's an example of, as I mentioned, you can stick complex lists of modifiers into a variable and then reuse it. And there's an example down here. So here's a list with a number of variables, and we basically just apply this set of modifiers to it, and we get a list of exclusions. And that sort of thing gets used quite heavily by dirdeps and lots of other cool stuff. It can do the equivalent of strftime using either localtime or gmtime. In the Junos build, we like to have timestamps on the build logs so that we can see where all the time's going. This allows us to have those timestamps in the output logs for essentially free, and so we do it. And we use this format typically so that you get a human readable thing, you get something you can actually do math with, and a handy little token to be able to grep them out. And one of the tools I have in my kit bag is a little Python script which will read one of these logs, and it can basically give you a progress bar so you can see how long the build's got to go until it's done. By the way, I'm not going to get through all these slides. I've stuck a number of slides in towards the end to just cover random stuff that people may or may not be interested in, so don't be concerned that we're going to run out of time, because we will. So, when was it last synced? The projects/bmake branch is the one that hopefully most of you are interested in. This was last synced about a week ago, and as of this morning, it completely builds again.
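Stepping back to those modifiers for a moment: the include-guard and the timestamp trick look roughly like this — a sketch of the idea, not the exact slide contents:

    # Include-guard that works no matter which path the file was read by;
    # :tA canonicalizes with realpath(3).
    .if !target(__${.PARSEDIR:tA}/${.PARSEFILE}__)
    __${.PARSEDIR:tA}/${.PARSEFILE}__: .NOTMAIN

    # ... the body of the makefile ...

    .endif

    # Timestamps for build logs: the value is a strftime(3) format string,
    # and :localtime (or :gmtime) applies it to the current time.  The "@"
    # is the grep token, %s is the part you can do math on.
    TIME_STAMP_FMT = @ %s [%Y-%m-%d %T]
    TIME_STAMP     = ${TIME_STAMP_FMT:localtime}

    beforebuild:
    	@echo ${TIME_STAMP} Building ${.CURDIR:T}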
The transition to clang threw some spanners into the works, but I got some help from David last night to sort that out and finally finished getting it all building again. As I mentioned, unlike the Junos build where we use this format of Makefile.depend pretty much universally, in FreeBSD we use this format. There are, I think, about five places where we do this, and that's invariably because there are places where the locally-generated files that you need to capture the dependencies for have names that vary with the architecture. And so the way the build infrastructure here works is, if you had previously built one of these directories for, say, i386, and therefore there was a Makefile.depend.i386, and you then went and built it for MIPS, when he gets to the end he says, oh, I need to update Makefile.depend. He says, oh, I see there isn't one, but I see there's a Makefile.depend for some other machine. I'd better do the same thing, and he'll just automatically create a Makefile.depend.mips for you. And so all the right stuff happens. The packages Makefile acts as a top-level Makefile. In the Junos build, the source Makefile just includes the packages Makefile. It just makes it simpler from a transition point of view. And in the FreeBSD build, we do the same thing. We leave the FreeBSD top-level Makefiles alone so that we can still do buildworld, so we can make sure we're not breaking FreeBSD when we want to commit stuff back up from that environment. But we don't want to use it for our stuff. So this does the job. Under packages, in the normal world, you would have a whole bunch of packages — foo, to build a package called foo. We use the pseudo directory as a place to put these; they all represent targets because they happen to have a Makefile.depend file under them. And so when you go into packages and you say make bootstrap-tools, for example, it knows that that's something it can build. And it also knows, because it's got .host, that it's something that it has to build for the magic machine called host, which happens to be whatever host you're building on. The host may be i386 or amd64, but it's not necessarily the same thing as an i386 or amd64 target, because you may be building FreeBSD 10, but you might be building on FreeBSD 7, which is unfortunately what we're doing at the moment. But it doesn't matter, because it works. So for the environment — I think in most of these examples, I use a little tool called mk. Its job in life is, from wherever it is, to have a look upwards until it finds a file called sandbox-env. And wherever it finds that file defines the top of the quote-unquote sandbox, and he sets a variable SB to that location. And then he reads that file to condition his environment. This is great if you're an Emacs user — I have trees from NetBSD, Junos, FreeBSD, all over the shop, and you can be editing them all within the same Emacs and just do M-x compile in any of them. And it'll just do the right thing. But if you don't like using wrappers and so on, all you really need to do is make sure that you have MAKESYSPATH set, and everything else that needs to be done can be done by, say, local.sys.mk or anything else. projects/bmake has a local.sys.mk, which I've checked into the tree. The main reason is it contains a lot of stuff which I haven't taken care of yet. But generally speaking, you can build all of userland, kernel, and tool chains for both the host and target. I added a bootstrap target for the tool chain.
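As an aside, a machine-independent Makefile.depend in this world is just a list of directories plus an include; the directories below are only an illustration of the shape, not the contents of any particular file:

    # Autogenerated - do NOT edit!
    DIRDEPS = \
        include \
        include/xlocale \
        lib/libc \
        lib/libcompiler_rt \

    .include <dirdeps.mk>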
The reason for that bootstrap target: Clang 3.3 is unable to build Clang 3.4, it seems — or at least tablegen can't handle the headers and so on. So you very likely need to bootstrap the tool chain in order to be able to get building with the new Clang. So I added a target to do that. And since bootstrapping tool chains is not a wheel worth reinventing, I just made it leverage the existing targets in Makefile.inc1. We're now using a sysroot, which works much better with Clang than pretty much any alternative. You can do buildworld. You can have buildworld work like buildworld, but produce meta files along the way. And that's actually very useful, both for comparing against when you're scratching your head as to why something built when you did buildworld versus not building when you didn't — you can actually go look at all the syscalls involved and see what was going wrong. It also gives you a means for bootstrapping Makefile.depend if you were wanting to start this exercise all over again. So here's a quick thing about getting started. So: minimal environment setup. Once you've got that, you should be able to just do make bootstrap-tools. You'd want to do at least -j8, because it's going to take a little while. If you're not using my little mk wrapper and you want to then build the tool chain — you remember the days of building GCC, where you used the host compiler to build stage one and then used stage one to build stage two and eventually... same sort of thing. You could skip this step, but you would have to add this "with tools" bit to these ones to basically use the set of tools that you've built in this stage. It's probably neater to go through this step once and then put it all aside and be done with it. And then you can have fun cd-ing to random places and building them. Or "the-lot" is a pseudo target that just basically builds everything. And again, it should all just build in a clean tree. Debugging failure. I was talking to Kasey or somebody last night, and so I gave him this example as to how it's cool. So I mentioned before, nobody logs their builds, and they get build failures and they give you absolutely useless information. These days, though, as long as they can tell you where their tree is, we can go find the clues. And as an example, we'll edit bin/cat and stick a bogus include in there. And then we go and compile it. And lo and behold, it explodes. Oops, yeah, couldn't find that. And bmake spews out — this is some of the noise I mentioned that bmake spews out on failure. This is all configurable, of course. By default, it doesn't do any of this. You have to set some variables. But you can set it to spit out any variables you like. So we have a number that we like to see. The level is useful, the current directory, the object directory. So it spits out the target that it was building, the meta file that it was creating. And it will actually copy this meta file into an error directory at the top of the tree so they're easy to find. And if you're doing things like — I have a continuous build machine, and so he knows to go: build failed; the first thing he does is look in the error directory. If there's anything in there, that's why the build failed. That makes the analysis very simple. And so here's the meta file that got copied into the error directory up at the source top, with a name that we can easily find. And yes, there we go.
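As a rough recap of that getting-started sequence — the directory and target names here are the ones mentioned in the talk, so treat the details as approximate rather than as the canonical procedure:

    export MAKESYSPATH=$SRCTOP/share/mk   # the minimal environment setup
    cd $SRCTOP/packages                   # where the pseudo targets live
    make -j8 bootstrap-tools              # host tools first
    make -j8 toolchain                    # then a tool chain new enough for clang
    make -j8 the-lot                      # build everything...
    cd $SRCTOP/bin/cat && make -j8        # ...or cd anywhere and build just that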
So, back to that failure: even if the guy hadn't logged his build, and this error had actually happened 5,000 lines earlier and is not possibly in his scrollback anymore, we can still go find what the exact error was and why it was happening. We can go and see which version of the compiler he was running and all of the files that were being read and so on. And it may be that, oh yes, you've got the wrong version of the compiler, or no, you've just got your spelling wrong — oops, wrong include. But one way or another, you have the information that you need to be able to work out what was wrong and do something about it. For next steps, we think it would be useful to actually add a bit more to it so that you could, for instance, have a target in the projects/bmake branch that would let you go and effectively just build a little bootable VM image that you could then throw into bhyve, which would be quite useful. Another thing that may or may not be interesting is distributed building. This is actually very easy to do with dirdeps. But that's probably the subject for a whole new talk. But if anybody's interested, I can bend your ear on that. Are we going? OK, so now we're into just the almost random stuff. But one of the things I was talking about with Warner recently, and he's thankfully been doing, is separating the bsd.*.mk stuff — so you can think of these things as classes of make files. The bsd.*.mk files are things for building stuff in the BSD way, but you don't necessarily want to have them only able to build /usr/src. And you can have src.opts.mk, for example, as options which only apply to building /usr/src — they're not things you necessarily install, but they're still useful. And similarly, you can have local.*.mk — local-dot-whatever — for anything else you like, as things which are either site local or tree local. They're not part of the repository, per se, but they allow you to do things like customization without hacking the original source. If you look at dirdeps.mk and most of the meta and similar make files, I include a local-dot-whatever-the-name-of-the-thing-is, to allow customization without having to touch the original make file, which in the case of something like dirdeps.mk is very important. Because, like I said, I only touch it when I'm sober. And I would recommend others do the same. Actually, I would recommend not touching it. So src.opts.mk — this is something that has been very handy. It's now something that we can include within those Makefile.depends that we use in the pseudo targets. So for instance, this one is for the tool chain. So we can now say, well, is MK_CLANG defined? No, it's not. Oh gosh, let's go include src.opts.mk. Now we know it will be defined. And we say, is MK_CLANG yes? OK, we'll add clang to the set of stuff we're going to build. Is MK_GCC yes? We'll add GCC, and so on and so forth. And that way, you can easily handle building all the optional stuff. One of the other things that we saw earlier was auto creation of object directories. A nice feature of bmake is that any source that you give to the special target .OBJDIR sets the object directory. It has to exist, of course. So this little dance up here makes sure that it exists. And this is how we do auto creation of object directories. If you are going to do this sort of dance, though, you have to do it very early in the game. Because this has to be done before you start reading make files that are going to potentially influence .PATH. Because when you do .PATH: some-directory, if it doesn't exist, it doesn't get added. And there are cases where it will exist.
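That dance, stripped of its error handling, is roughly the following — a sketch of the sort of thing that lives in local.sys.mk, not the real code:

    .if !defined(MAKEOBJDIRPREFIX)
    MAKEOBJDIRPREFIX = /obj
    .endif
    _objdir := ${MAKEOBJDIRPREFIX}${.CURDIR}
    .if ${.OBJDIR} != ${_objdir}
    # .OBJDIR only takes effect if the directory exists, so make it first
    _mkobjdir != mkdir -p ${_objdir} 2> /dev/null; echo
    .OBJDIR: ${_objdir}
    .endif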
But if you do it too late in the game — or too early in the game — you'll get a path that's in the source tree rather than the object directory, which was what was intended. So again, if you want to play with auto object directories, you really need to do it during the sys.mk phase. But this works really well. I was talking about this again last night: ten-something years ago within Junos, we went through the whole phase of — you had the camp of people who like to dribble object directories in the source tree, you had the camp who liked to have just a single directory called obj, and then you had the other guys who were building multiple architectures, so they wanted obj.${MACHINE} and so on. So we got to a point of having five different ways to support building the tree for all these various combinations. And it just got ridiculous. So we just said, forget it. We're going to create object directories automatically, and no questions asked. And it allows you to simplify things enormously. And simple is good. All right, so since nobody's asking any questions, I'll waffle a bit about meta mode. Basically, what we do is when we build a target, we create, amazingly enough, a file called target.meta. And it collects stuff like the expanded command line, which, even if that's all you did, would be useful. Because that allows you to say, all right, last time I built this target, I used that command line. Now I'm building it, and I'm going to use this command line, and they're different — the target is out of date. And this allows you to do things like skipping... you've all seen make files where you have comments like, oh, if any of the make files change, we'd better regenerate this, just in case. And that can lead to all sorts of unnecessary compilation. By being able to be sure that the target is out of date because its command line actually was impacted, you have a much better situation. We capture any command output, which, nine times out of ten, is just the error. In most cases, it's empty. And we capture all the interesting system calls. By interesting, I mean the ones that are interesting to make. And most typically, he's interested in read and execute. He has to track changes of directory so that he can work out path names and stuff like that. But mostly read and execute is what he cares about. And why do we do this? Well, the automated capture of all this information helps us optimize the build and improve reliability. And by optimizing the build, I mean doing as little as possible, doing it in parallel, but doing it correctly. And capturing the command output, again, makes failure analysis much easier, even when people refuse to log their builds. Meta mode and dirdeps help with all of the above. As I mentioned, we don't do make depend. We still do make depend within our build for the kernel, because at least last time I looked, there were certain files that only get generated via the make depend phase of the kernel. But I'm told by O'Brien that maybe even that's not necessary anymore, so we may be ditching that soon. But for the rest of the tree, we never do make depend. And it saves you a lot of time. And while, even before we had all this, we were using GCC to dribble out info so that you could auto-gather dependency information, using the filemon kernel module lets you gather all that information for everything, not just the compiler. Let's see. And so you automatically catch tool chain changes, all that sort of stuff. Also, here's the example of an unnecessary recompile that you can avoid.
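Before going on, it may help to see the rough shape of one of these .meta files. This is reconstructed from memory, so take the exact field names with a grain of salt; the point is the three sections — the expanded command, any command output, and the filemon syscall trace:

    # Meta data file /obj/usr/src/bin/cat/cat.o.meta   (field names approximate)
    CMD cc -O2 -pipe ... -c /usr/src/bin/cat/cat.c
    CWD /obj/usr/src/bin/cat
    TARGET cat.o
    -- command output --
    cat.c:12:10: fatal error: 'no_such_header.h' file not found
    -- filemon acquired metadata --
    ... the syscall records; their format is shown a little further on ...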
You can use things like DPADD to bootstrap your dependencies if you're just getting started. And interestingly enough, you can get into problems where, if you have actually spurious entries in DPADD — you've said, oh, DPADD, I need libfoo-mumble, but you don't actually use it — it won't be captured in the auto-captured dependencies, because you didn't use it. And then you get to the ultimate directory, and it says, well, I can't do anything because foo-mumble's not there. It says, well, that's because you didn't need it. So once we've finished processing DPADD to add CFLAGS and stuff that we may or may not want as a result, we actually throw it away to avoid those sorts of errors. Some targets will need work. For instance, it's quite common to have a target that is going to include some kind of timestamp. You may or may not want that target to be regenerated every time you build. If you don't, you can actually tell make, for this particular target, I don't want you to do command line comparison, because that would make you do the wrong thing as far as I'm concerned. You can actually be clever about it. Make knows that if your target's script actually involves .OODATE — does anybody use that these days? — it basically expands to the list of sources that are out of date with respect to the target. And therefore, the expansion of it is going to be different almost every time you do it. And so there is no way that you can reasonably compare a command line that involves any use of this variable. So make knows this. And so if he spots this variable in the script, he'll say, well, I can't compare that command line, I'll skip it. Which means that you can cheat, because you can deliberately stick that variable in there, matching something that will never be in there. And so you actually have a token here which will never expand to anything in the command line. But it'll be seen by bmake, and he'll say, oh, he doesn't want to compare this command line — the rest still gets compared. So in this target here, we won't regenerate just because the timestamp changed. But if you change anything in DPADD, that changes the command line, and so we will regenerate the target. You enable it just by setting the word meta in a variable called .MAKE.MODE. It gets read by all the make files — it gets looked at after you've read all the make files, so your make files can control this variable pretty much any time. And effectively, meta.sys.mk does a bit more, but the key bit it does is it says, well, the target all depends on dirdeps, and then we're going to wait for that to finish — as in, we're not going to do anything else until dirdeps is finished. And that line there, if you like, is the magic that makes the whole thing work. Well, there's a bit more to it, but you get the idea. So when we run the commands for a target, we create target.meta. Excuse me. We skip it normally for any quote-unquote special target, like .BEGIN and .ERROR — unless, as sometimes happens, you do want a meta file for these, in which case you can always stick the .META special source on there to say, I really want a meta file for this. It's never created if the target is flagged .NOMETA — I don't care what you think, I don't want a meta file for this, please. And normally, meta files are skipped if the object directory and the current directory are the same. But in the case of, for instance, building the kernel, where we try to leverage the existing kernel build to the extent possible, we actually do the cd into the object directory and run make in there.
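Putting those two pieces together — enabling meta mode and the .OODATE trick — looks roughly like this; the stamp target is invented purely for illustration:

    .MAKE.MODE = meta verbose        # normally set for you by the mk files

    # The first echo's command line is compared against the .meta file as
    # usual; the second one is not, because it mentions ${.OODATE}
    # (filtered through :M.NOMETA_CMP so it expands to nothing).
    stamp:
    	@echo this command is compared
    	@echo built on `date` ${.OODATE:M.NOMETA_CMP}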
And in that case — the kernel being built in its own object directory — we do want meta files. And so we stick a little knob in .MAKE.MODE to say, making meta files in the current directory is OK. And lastly, if the target that you're building is outside of your object directory, he will create a meta file with all the slashes replaced with underscores, just so that you end up with a file that you can reference within that directory. I think we've covered the content. Important to note, there are two variables that are used to track meta files. .MAKE.META.FILES contains a list of all the meta files that we read or examined or in any way knew anything about while we were building this particular directory. And .MAKE.META.CREATED is all the ones that we updated. And so this allows us, when we get to the end of having built this particular directory — if that variable is empty, we did nothing. We know that we don't need to go and consider whether we should update the Makefile.depends, which can be a reasonably expensive process, so we skip it when we can. Oh, and if we're being meta verbose, then we'll expand .MAKE.META.PREFIX, which defaults to just "Building" plus the target name, but you can make it do anything you like. And in fact, I have a fairly cunning make file that sets this to a variable that actually is used to track all of the directories that you visited during the build, so that you can then go back and re-examine them to see if there are any files that you should have svn added but hadn't, so you can throw an error. That's another very cool trick. A little bit about filemon. It's a reasonably simple kernel module. The original prototype that John Birrell did of this idea used DTrace. DTrace mostly works. It doesn't seem to let you reliably get the absolute path of the program that you're invoking. There's also a little bit of overhead, but more importantly, it needed special privileges to be able to do this. And we didn't necessarily like any of those. So we did this kernel module. It's available for FreeBSD, NetBSD, and there is a Linux version, which they promise me they will make public. And I will nag them continually until they do, because it's actually quite useful. Each record in the meta file that we get from filemon — the whole thing is designed to be easily parsed by a shell script, and the original thing that did the post-processing was a shell script. So we have a single character to denote the system call that we did, and then typically you get the process ID and the data. And the data is nine times out of ten just a path name. In the case of linking and moving, it's two path names with quotes, just in case there were spaces in the names. And then in the NetBSD version, I don't bother with stat calls, because they fill the thing up tremendously, but they're not particularly useful to make. But they are interesting from a human point of view, I guess. When we read them — OK, so when we read them, and we only read them if bmake thinks that the target is possibly up to date. So if you're reading a make file and it's a clean tree, for example, all the targets are going to be out of date; he's never going to look for any meta files, because he's just going to go and do it. But if the normal rules for make — looking at the dates and all that — think that the target is up to date, then and only then will he go look at a meta file, and he'll go and read it. And one of the first things he does is compare the command line. If the command line's different, and you haven't told him not to, he'll say, it's out of date. Go do it.
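So a few of those filemon records, in the shape just described — one letter for the syscall, the process ID, then the data; the values here are invented:

    E 71645 /usr/bin/cc
    C 71645 /obj/usr/src/bin/cat
    R 71645 /usr/src/bin/cat/cat.c
    W 71645 cat.o
    M 71645 'cat.full' 'cat'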
He skips past the command output, and he gets to the system calls, and he needs to be able to parse the system call stuff. And for each file that you read or executed, if it's newer than the target, then the target's out of date. One of the interesting ones is — yeah, so we have this concept of a bailiwick (anybody know the history of that word?) that defines to bmake the sphere of his influence. So if he's responsible for populating that directory over there on the table, and he's reading a meta file that put a file over there, and it's gone, that means he needs to redo it. And so the target will be out of date as a result. Yep, that covers that. So one of the nice things that we can do with this, as I mentioned — you could write all these Makefile.depends manually if you wanted to, but it's nicer to do it automatically. So when we finish building, we have a tool that can go and read these things. And he can look at .MAKE.META.CREATED to know that he has to do something. But he actually looks at the full list to go have a look at what he needs to read to get all of the dependency stuff. And this whole process is greatly simplified by having your object directories and your source trees not overlapping. So object directories within the source tree complicate the thing. So it's much nicer to use something like the model we use, where your object directories are either completely aside using MAKEOBJDIRPREFIX, or the fancy MAKEOBJDIR that we use, where you have essentially parallel trees. So here's an example of one. I forget what this was. Oh, this was building something in bin/sh. So you can see the things that he read. Anything that he reads out of an object directory that isn't the current object directory defines something that needs to have happened before this directory can be built. And that's the essential idea. And we extract those and we stick them into DIRDEPS. You can also have me extract a variable called SRC_DIRDEPS if you're doing things like — originally within the Junos world, we support the idea of subset trees, where you can say, I just want to check out enough of the tree to be able to build bin/cat. Everybody needs to be able to build bin/cat. And so you can look at the SRC_DIRDEPS for bin/cat and say, oh, well, that means I need libc and whatever else, and you can follow exactly the same logic for checking out the tree as for building it. But we haven't done that for a while, and we decided — because those actually result in far more churn than the DIRDEPS do — we actually took them out. And we don't bother collecting them in the projects/bmake branch. Mapping object directories to source directories: if you're reading things out of the object directory where they were created, it's a very simple exercise. You typically just substitute the object top for the source top and you've got the directory. Because again, we keep everything nice and neat. If you're pulling things out of a stage tree, for example — which you can just think of as a DESTDIR that you've done an auto-install into — you need a little bit of help. Because there isn't necessarily a directory in the source tree that corresponds to usr/include. So what we do when we stage files is we stick a file called .dirdep next to them. And when the tool that processes these things goes and reads and says, oh, the guy used a file from here called unistd.h, he says, is there a unistd.h.dirdep? Oh my gosh, there is. Let's have a look. And that gives him the value that he needs to stick into DIRDEPS. And so he can proceed without any further ado.
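In other words — and the exact marker content here is from memory, so treat it as illustrative — a staged header carries a small sibling file naming the directory (and machine) that staged it:

    #   stage/usr/include/unistd.h
    #   stage/usr/include/unistd.h.dirdep     contains something like: include.host
    #
    # and that value is what the post-processing tool drops straight into
    # the consumer's DIRDEPS.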
Logs are still useful, though. In particular, we can do things like capturing the number of meta files that we examined for building libc, and how many we actually created. So you can tell from the discrepancy in the numbers that this was an update build. It's nice to see lots of directories where you spent zero seconds doing absolutely nothing, because that means that you don't have bugs in your make files that are causing things to be built unnecessarily. Over — oh my gosh, did it say something? All right. I'm curious — anybody interested, or want questions or anything like that? Because I can skip straight to the end. Where are we? Yep. So you can handle the tool chain itself as one of the dependencies? Yes. And for distributed building, everybody needs to know the same things about the build, so you're all at the same height of the build? Everything, if you want to. If you do distributed compilation, using, say, something like distcc, you can do that. But you have to trust the compiler on the other side? Yes, absolutely. And if you do distributed, there's a little more to the solution — you can effectively send the compiler over. Yeah. So the way we do that is, if I'm the master of the build and I'm going to ask you to build something, I actually make you look at my tree, and you will read the list of tool chains that you're supposed to use from that, and you will use the same tool chains by default. Otherwise you'll explode and we won't do anything. You can do the same thing with distcc, as in you send the job to a machine that you know has the tool chain that you want to have used. There's any number of ways to skin that cat. Any other questions? All right. Hang on. Whoop, whoop, whoop. There you go. All right. So it may be the greatest thing since sliced bread, but it's maybe not for everybody. But it does provide a simple solution to some rather complex problems. And if anybody has any questions, there's some further reading, and you can always catch me at sjg at FreeBSD, NetBSD, Juniper, and so on and so forth. Thank you. Thank you.
The Junos "meta mode" build is the coolest thing since sliced bread. The FreeBSD projects/bmake branch provides a proof of concept for the more general use of this technology. The latest release of FreeBSD uses bmake by default, a prerequisite for the "meta mode" build. This talk will cover the adoption of bmake and how the projects/bmake branch differs from traditional FreeBSD build.
10.5446/15335 (DOI)
I'm going to try to be quick because Dale has some really cool photos and videos in his presentation. I'm going to talk a little bit about our work in the space of digital justice and civic innovation, specifically about one case study that some of you have heard me speak about before — some of you will have already heard about the Red Hook Wi-Fi project. This is a collaboration that we've had with the Red Hook Initiative. New America Foundation is a think tank. A lot of what we're exploring, and what my team, the field team, does at OTI, is think about how to work with communities to explore alternative ideas that can lead to policy formation later. The main issue that we're looking at right now is the idea that everyone should have access to free and open communications infrastructure. What you'll see up there is examples of using some of the Form 477 data from the FCC. You might not know that the FCC has open data, but they do. It's not necessarily awesome to try and make decisions off of, but what these maps are looking at is, by census tract, how many household subscribers you have to broadband in urban Detroit and Philadelphia. The maps on the other side are something that we did as part of a series in Atlantic Cities. You may have seen a few follow-on pieces around cities and access. One of our ideas is that communities can build infrastructure to enable access and local communications on their own. One of the ways that we look at doing that is thinking about how you can model digital networks on the social networks that exist in the communities. The technology we use to do that is actually mesh networking software that we develop in-house. It's open source, available online for anybody to use. Our goal is to make it as easy to use as possible so that any community can do this in whatever the context needs to be. One of the nice things about mesh — there are lots of negatives to it, and I'm happy to talk to anyone about that — one of the things that makes it really powerful is that since mesh is an ad hoc protocol, you can remove and add nodes in a network really, really easily, and the network will adjust. Where this comes in handy is when you need to rapidly spin up new nodes in a network, say in a disaster response situation, or even in just a normal community situation. For example, I was sending emails today about why I couldn't log into the network, because one of the router hosts is actually a church, and the church is turning the power off at night, and they don't always get in — the pastor at the church doesn't get in until noon or so, because he's more focused on the afternoon set of times. He can turn the power on and off and participate when he wants to, or when it works out for the church's hours, which lets it model a lot more how our social interactions work, how our neighborhoods and communities function. So mesh is really great for matching those purposes, although there are other critiques about it from a performance perspective. So the case study that I'm going to talk about is Red Hook Wi-Fi. Red Hook, if you're not familiar with it, is a neighborhood in Brooklyn in New York City. It's separated from the rest of Brooklyn by an elevated highway. So to get to the neighborhood, you have to walk under the highway, which can feel a little bit scary sometimes — lots of speeding cars, people on the on- and off-ramps from the highway.
And one of the main features of the neighborhood is actually one of the largest public housing complexes in Brooklyn. There's about 11,000 residents of the 15,000 total that live in the neighborhood, 11,000 live in the public housing. So it's also historically been an industrial neighborhood. So this is an aerial shot from one of the buildings close to the water looking in. You can see lower Manhattan, obviously, but off to the right you can see a sort of tall red brick building. That's actually the public housing which is clustered in the center of the neighborhood. So one of the goals and what some of the takeaways from this project in general is idea of leveraging community anchors in the neighborhood and thinking about community led design processes as a way to address multiple issues. So one of the ideas behind starting this network was to think about ways to bridge the community in the neighborhood, so the public housing community and then the more commercial gentrifying district that's a couple blocks away. They call it actually the front and the back and the two don't interact that much. So one of the ideas was how could we build social cohesion through a local communications project. Another idea was also in doing that, thinking about how the community as a whole could be more resilient to disasters, would have better local networks, think about maybe even connecting people to jobs that were starting to come with the new businesses in the neighborhood. So the technology that we use, like I mentioned, is commotion wireless software. It's a free and open source and the primary development leads are actually us in the think tank, so again a weird activity for our think tank, but trying to think about other ways that technology is developed. We use ubiquity hardware. A lot of people use Marathi hardware if they're thinking about things like this. We can talk about hardware later if you want. We use ubiquity because it's really easy to put our software on it and they don't get mad at us for doing that. They're actually very interested in the projects that we're working on, so we have an open dialogue with them about it. So it started as just a router on a rooftop pointed at the housing based at the Red Hook initiative. This one was providing an internet gateway to the community and just seeing if people would notice it was there. There was another router that was sitting on top of an apartment building overlooking one of the main public parks in the neighborhood. This one didn't have an internet gateway but did have a local web page, so when you got to it there would be a way that you could leave chat messages, sort of like a Facebook wall but just a HTML page and people could leave messages for each other. Most of them were like, where's the internet? Hey what's up dude? Things like that. People talking to each other. But stuff that actually wasn't happening in the neighborhood on the street but was instead happening on this local server. So the two were actually too far apart. There was a school in the way, a couple blocks of distance, lots of trees, things that make wireless signals get interrupted. And as part of this experiment, the router's up in the neighborhood but what do we want them to do? Two leads on the project, Jonathan Baldwin who is a colleague of mine and Tony Schloss who's the director of community initiatives. 
The two of them started hosting participatory design workshops at the Red Hook Initiative with residents, thinking about what services were needed, what were the problems in the neighborhood that you might want to address, what should the goals of a project like this be. And through that they actually started developing this platform that we call TidePools, which is basically tiles that were created custom for the neighborhood using Leaflet.js and some custom icons that the community designed themselves, and created really a digital ownership of place for the neighborhood. And with it they brought in open data feeds, so they started offering, on the Wi-Fi network, this local application where you can find out when the next bus is coming. Because there's only — well, there are two bus lines through the neighborhood, one that actually runs with some amount of consistency and actually has an API for when the bus is coming, which is a project that's been slowly rolling out in New York City but was tested in Red Hook first, which is kind of amazing. So now when residents log on to the network they can go and find out when the next bus is coming. They know if they'll be waiting for 20 minutes and have time to maybe go back to their apartment, or wait until they go out to the stop, things like that. Just basic information services that weren't there already. And then Hurricane Sandy hit. So the thing that was really interesting about this was the community workshops and the local apps and the placement of Wi-Fi actually meant that a lot of people knew that they could come to the Red Hook Initiative for Wi-Fi. So they were — everybody like ran there to be able to get online. There was no power, no communication services for about three weeks after the hurricane in the neighborhood. It took a long time for them to get around to bringing the heat back on in the housing. And there were a couple more storms, which was fun. So everybody sort of flocked to the Red Hook Initiative for a variety of services, but also a lot to get online. We saw a huge amount of people jumping onto the network in the days after, and so it was clear that there's a need for alternative infrastructure in the neighborhood and that people knew to sort of cluster around the Red Hook Initiative for that access. So very quickly we spun up again on TidePools, building in a way for people to text in problems. So if they did have cell access and if their phones had power, they could say, oh, we need a generator here — let's get people responding to this. And we had this running at the Red Hook Initiative, so people who were monitoring for things coming in could reply, and it would actually send a text message back, but they could reply using the map-based pop-ups. We quickly deployed like four more routers running off of battery power so that we could cover more of the park, where FEMA was also setting up a recovery tent and the processing of getting people filling out forms. They brought us a satellite uplink so that we had internet access, and actually a local ISP plugged in their fiber — they were able to keep their services up during the storm and afterwards. So what started as just two routers, one near the park and one in that sort of cluster closer to the highway, very quickly became coverage in the whole area.
Like literally an afternoon we were able to put up more routers and expand coverage and services to the places where people were primarily gathering for response and getting new resources. So that's just a network visualization. So fast forward a couple months. The government saw it, the mayor's office saw that this was a really valuable initiative and that we should play with growing that out some more and gave the Red Hook initiative funding to start running a job training program focused on building up the network but also on just developing technology skills within the community. So these are the Red Hook digital stewards. They're all 19 to 24 years old and live in the Red Hook housing. So they were six the first year and this year there's actually 12. They're continuing the program and they started thinking about what it would mean to have a network covering the whole neighborhood. So this is some planning exercises where they're just, they have routers and they're thinking about power structures and who are good community assets and anchors and what the neighborhood actually looks like to them. This is more conceptual mapping activity. They actually went out and again using tide pools, the platform that we had in place in the neighborhood. They were texting, oh this building is tall enough to host a router. These people are really important members of the community. We want to make sure we provide access there. Everyone hangs out here. Let's make sure there's access there. So doing inventorying and site service, they started thinking about how to do outreach. So this is the flyer because call me, I'll help you set up your router and you can get internet access in the community. And they started thinking about more community anchors and partners. So again as part of that inventory process they identified good organizations that could host routers that will be really important partners that might want to leverage the network as a way to advertise services in the neighborhood. And started, so these are young adults who have never worked with a lot of these types of tools. They sort of think of the internet as what they get on their phone because they don't even necessarily have computers at home. But they were starting to learn about how to do a lot of this mapping and analysis and network planning all through this program. This is the same form of 777 data in red hood. So this is the problem that we're trying to think about. All the red that you see is the public housing. So that's where there's the lowest adoption rates. And it's sort of donuts out from there where there's more of the community lives in the yellow and green zones. And some of those are newer buildings that have been set up and wired to start with. Some of the stewards installing a router on a roof. And carefully leaning out to install a router on the other side of a metal fence so that the signal can go further. Thinking about how they actually want to set up a network switch and making sure that the router can get access. That's Kathy making an internet cable, an ethernet cable. A router hanging out with some plants on a roof. And so quickly from the seven nodes that we got to after the hurricane, they now have about 25 nodes. And this is sort of the coverage area now, which is actually covering one of the main urban farms that's in the neighborhood as well as some of the park spaces and most of the commercial corridors. And there's a couple little clusters and they're continuing to build out this year. 
And actually one of the best stories that I've heard so far from Tony, who's now teaching the program on his own and running it within the community without as much support from us. The new group of digital stewards, they were doing that planning exercise and they actually identified three new areas in the community where the network doesn't exist already, where they were thinking about it being needed without them even knowing necessarily where all the access points were. And we've sort of updated tide pools to be more mobile friendly. So there's a lot of what's happening right now in Red Hook has continued long term recovery from the hurricane. The stewards are now not only acting as network builders and community organizers, but they're gathering people together to think about resilience in the community, think about engaging with the government and getting the things that are needed for the community to actually come together and continue to come back because there's still places that haven't been fixed actually from the hurricane, businesses that are still having trouble. So this year they've got 12 young adults and they're continuing with that. They've set up an advisory board and they're really engaging with communities along the lines of disaster recovery, economic development, diversity in technology, digital justice, civic engagement, public safety. It kind of goes every week we get a new request of someone interested for a slightly different reason because some part of the story applies in some interesting way to them. So big takeaways. The social relationships in the neighborhood are really important and really useful for leveraging in technology projects and doing, basing this off of the community's needs. So thinking about going needs first and addressing, thinking about how this can wireless can be a platform for community change and community building. A big thing in a disaster, having the stuff already there is really useful. So part of what made it easy to build out the network was we had a pile of routers sitting around waiting to be flashed and installed so that we could expand the network. We just hadn't been running with it yet, but we were ready to go, which made it very easy to respond quickly. These community projects can be the organizing vehicle to bridge multiple groups in the community and even though it's challenging, one of the things that we've learned in the last year is the more that we can share the practices that we're learning as we go, the broader community can grow and now there's like four or five different networks in Brooklyn that are all trying to get started and they're talking with the Brooklyn Borough President's Office and really trying to institutionalize a lot of this in a very interesting way. We're seeing New York City's Economic Development Corporation put out interesting RFPs where they're thinking differently about how infrastructure actually gets built and a lot of it is lessons learned from seeing projects like this grow and be developed from the community. Up next is more thinking about alternative power, building in more texting so that we're not just having mobile-friendly smartphone pages and other applications to the neighborhood. They're thinking about games to engage people in playing with the network so that they're comfortable using it in the case of another disaster and things like that. That's it. Thanks.
Community-led efforts to close the digital divide and address digital justice issues, such as Red Hook WiFi, can generate economic opportunity, facilitate access to essential services and improve quality of life in communities. Understanding the ecosystem of our neighborhoods and cities and the relationships to technology and data is essential to this process. This talk will focus on how communities are using data, technology and community outreach to build resilient communities, provide access and open governance systems.
10.5446/15334 (DOI)
This is something we've been working on for the last couple of years, called ROGUE. And the team has been LMN Solutions and Boundless as well. ROGUE is actually a backronym. We came up with the name and then we decided what it means: Rapid Open Geospatial User-driven Enterprise — just because if you're an OSD program, you have to have an acronym, right? So ROGUE is a Joint Capability Technology Demonstration. And what that means is it's an Office of the Secretary of Defense mechanism for bringing somewhat mature technologies to operational problems. So basically you identify an operational problem, you say, hey, I have a technical solution for that, and then you basically have two years to proof it out in the field. So the problem was really collaboration with geospatial information. And when you put it in the context of humanitarian assistance and disaster response, then you make the problem a little bit more interesting, or complex, at the same time. The key with disaster response is you don't really know who's going to show up to the table, right? If something happens, then different countries come to the table, different NGOs, depending on what part of the world it's in, different agencies, and of course whichever military or militaries respond. Common situational awareness is obviously needed. When Haiti went down, when Yolanda happened, everybody was trying to grasp: what's the actual situation on the ground? And then everybody needs to share that information. So you're getting into radical sharing, almost, in some cases. And they each have a piece of the puzzle, right? So the folks that are, for example, USAID, know where all the different projects are and investments on the ground. The local response agency actually has an idea of where the most vulnerable areas are, and they can probably tell you where the most likely areas are where the worst impact has been. And that all needs to be combined somehow. They're all distributed just by the nature of it, and sometimes disconnected, right? So the communications have this kind of bad habit of not being available when you need them the most. And the last one was actually driven by Southern Command themselves as the operational manager: all the partners need to have direct access to the technology themselves. So one of the unique things about the Southern Command AOR is that they want all the countries to be technologically enabled. It's not a help for them if a country doesn't have technology and they can't really share information. So really that's our focus. Our focus is to share geospatial information, to do that in a very distributed fashion, and to handle disconnected cases. And I always have this slide up here to make sure I give credit where credit is due. The U.S. Army Corps of Engineers is the technical manager for this, and they're the ones that have been really driving us forward. I mentioned Southern Command as well, but the Department of State's HIU is also involved, and we're transitioning to the Pacific Disaster Center. And I'll talk a little bit more about Honduras throughout this. So we're using quite a few components. We're kind of using the OpenGeo Suite plus this thing called GeoNode. It's a portal for discovering geospatial information. It's all open source. This cool thing called GeoGit that you just heard about — so thanks to Juan for setting that up really well. And then we have a couple of custom components.
We actually modified GeoNode a little bit, because we've been adding to it so that it's GeoGit-aware. And then we have a mobile app for data collection that's basically specialized to be able to go disconnected: load the data, go disconnected, collect the data, come back, reconnect, push your changes. And then there's a thing called MapLoom for weaving all that data together in a web map. So there's this cool thing called GeoGit you already heard of. So one of the things that we do in ROGUE is we store all of our data in GeoGit databases for the most part. Some of the stuff is still in PostGIS. GeoServer lets you get away with using a lot of different data sources, and we take advantage of that. But the key for us was the multi-user, kind of multi-organizational aspect, right? And then the other key was being able to actually sync repositories, along with the conflict resolution. So that was really key. So if you've got different groups of people that are actually working on data together and they go disconnected, they're going to diverge very quickly, right? Think about Word docs before track changes. I send you the Word doc, and now you have to painstakingly go through and manually figure out what I changed. Or I have to tell you what I changed. If I did 50 things, then we're all in trouble, right? So that's kind of the state you're in now if you're shipping files around. So I'm a geo nerd, maybe even a neogeographer — I'm not sure if I qualify or not. But from my point of view, OpenStreetMap is kind of the epitome of this great concept of sharing data and making it available. I'm also an analyst, or was an analyst a long time ago. An analyst will worry about their data. So this is kind of what they worry about, right? You can laugh — this is meant to be lighthearted. So this is actually what happens when you let three developers take the UI and play with the data and start making commits. But that's actually pretty cool, because it's 2,000 commits. Everything's versioned. When this data was actually still live, you could actually go through all the different versions of that feature, because there was a little edit war over that one. So really what it gets to is: provenance is key. Provenance is key to actually being able to have trust. So if I want to give you my data, what are you going to do with it? Especially if I'm going to let you edit the same data that I'm editing, what's going to happen, right? Well, to establish trust, one: I've got to know who did it. And that's what GeoGit lets us do. And understand the lineage — actually what specifically changed and when it changed. And it's also key to being able to work distributed, because when you're distributed, when you come back online, you start pushing this information, the data, to each other, and the software itself has to know what changed. And we used a mobile app we built called Arbiter. It's open source as well. If you want to use it, you can go and grab it. Not quite as slick as the one we saw earlier, the map tool, but it's meant for distributed, disconnected data collection. So you cache the data, you can go edit it, you come back, you push your changes, you can take photos and associate them with the features. And you can see those in the map as well. And we don't care about the comms as long as it's pretty decent, right? It's got to be 3G or Wi-Fi. So I promised you we'd get to Honduras and how we actually put this to use. So each one of those nodes — you see Pacific Disaster Center on the left.
Up on the top was Joint Task Force Bravo, at Soto Cano Air Base. And at the bottom was COPECO. COPECO is the FEMA equivalent in Honduras. So they're the ones that are responsible for resilience as well as response. And so what we did is an exercise based around their Independence Day support. And we had data syncing using GeoGit between all three locations. And the back end — or the front end, I'm sorry — was GeoNode with our map embedded in it at JTF-Bravo and COPECO. But over on the PDC side, it was actually DisasterAWARE, because that's already a situational awareness tool for event management and response. So basically we could let users at SOUTHCOM and other agencies be able to see the data and take advantage of this user-generated content without having to change the tool they're using. And we tried to make the other tools as simple as possible. And the other beautiful thing about GeoGit is, because it keeps track of all the commits you're on, you can sync basically all three directions at once. And all three of these nodes can keep track with each other. If you drop the connection, for example, between PDC and COPECO, all the changes will go through JTF-Bravo. And it's okay that you're actually syncing with the two at the same time. So the operational demonstration that we're going to have next month is actually based on Hurricane Mitch. The one we did previously, last year, was actually for the Independence Day parades that they had. Going up the main boulevard there into the stadium, they had all their teams situated along the parade route, and it was very much kind of an event management thing. But the key was it was very much parallel to what they have to do manually with their logs. So if somebody opens an event — a child is lost — boom: this timestamp, child reported lost, this location, is exactly what they'd write. At this timestamp, parents were found, child was reunited — okay, keep track of that. So basically we just did the same thing using GeoGit. They would just go and actually use the tablets and change the reports. And it's only pushing WFS-T transactions, so it could be really any tool that's doing it. And on the back end, you're getting all the versioning that goes along with it. So we actually did have people reunited. We actually had a traffic accident reported. Heat exhaustion was actually more common as well. But it was actually quite successful in that case. Kind of a little bit different concept than the mapping and data management concepts that we talked about. So assessment and response is kind of one of the big focuses that they have down there. Building damage assessments — they have reports that started out as Excel spreadsheets that we turned into layers. There were 50-some-odd fields with certain conditionals on them, and you'd have to go down and do that report. And it becomes an official government report. That's now a layer. We can also leverage OSM. The picture on the right is actually the Humanitarian OpenStreetMap (HOT) layer for Yolanda. So that was one of the first experiments with pulling OSM using GeoGit. And just being able to grab that and pull that in is one of our main use cases. So this ties in with the Department of State, because they have this great program called Imagery to the Crowd, where they make data available to the volunteers for OpenStreetMap. And people can digitize, but the key is you can also grab that now and you can pull that down. And so you can go disconnected with that as well.
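At the GeoGit level, that three-node syncing is the same push/pull model as git. Something along these lines — the URLs and repository names are invented, and the exact CLI varied between GeoGit releases, so treat it only as a sketch of the workflow:

    geogit clone http://jtf-bravo.example/geogit/rogue copeco-rogue
    cd copeco-rogue
    geogit remote add pdc http://pdc.example/geogit/rogue
    geogit pull pdc master          # pick up changes that arrived via PDC
    geogit push origin master       # publish local field edits back to JTF-Bravo
    geogit log                      # who changed what, and when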
And also down in Honduras, they're going to start using it for their social development layers. There's a lot of poverty in Honduras, right? So going to a house: what do you need? Does your house need water? Do you need utilities? Do you need a roof? Do you need all three, etc.? And just keeping the history of all that. So there's a website that's got all of our slides and links available on it, so you can go and get that, and there's actually even a live demo server you can play with. So hopefully the resolution is pretty good here. This is GeoNode with MapLoom inset in it, and we've got some of the test data that we've pushed up on here. Getting to the history is actually pretty straightforward; we've got it by the layer and by the feature. So if I pull this up - this is a GeoGit layer - I've actually got a history button, and this gets me to a list where I can see all the changes that have been made, who made them, and when, all the way back to the beginning of the feature. And if I want to see a specific change, I can go in here and it'll actually tell me; I can dig down through here. This one I just changed to one. So we kind of follow a norm: green means something that was new, it was added; yellow means it was changed or modified; and red means a delete. So this one, I came in and fixed the name because it was typed in incorrectly earlier by me. And one of the cool things that GeoGit lets you do - it's actually called the blame command; we decided to call it "show author" so that people wouldn't be freaked out by it - is I can also get everything about each individual attribute within a feature. Because if you think about the lifetime of a feature, it's going to have a lot of different authors contributing to those different pieces of data, and so you can actually get that information as well. And then there's the big red button for undoing it. We also have conflict resolution in here. The conflict resolution is very similar: you have three panels - the original, then basically yours and theirs - and in the middle is the one that's going to be the final. We have the ability to set up syncs, so we could run a sync through here. And then if I go in - we wanted people to be able to add features in a pretty straightforward fashion as well - now it just gets added to the database. For the map, we're making every edit a commit, so it's going in almost immediately; and for the mobile app, it will push however many changes you made during the time you've been working. And then notifications were key, because it's always great to know that you're getting the latest and greatest changes from people - but what if you don't know what actually changed? That was kind of one of the problems. So we made it easy to get to the notifications and figure out when something was added, and it basically has all the same abilities as what I showed earlier. And then, like I said, there's actually a full layer history as well, and this lets you get through all the history of the layer itself. So that's just a real quick tour. There is actually a website online, and if you want, you could go up and play with the server yourself - there's a gentleman from the Netherlands who's playing with it. If you want to get at the code, you can just get to it on GitHub as well.
So we've been fulfilling our mandate of making sure everything was open source. Okay. Thank you.
The Rapid Open Geospatial User-Driven Enterprise (ROGUE) Joint Capability Technology Demonstration (JCTD) is focused on supporting humanitarian assistance and disaster response efforts in the SOUTHCOM area of responsibility. ROGUE is addressing some of the core challenges in the geospatial community right now -- distributed collaboration, disconnected editing workflows, and provenance of data. All of this is being delivered as open source software, and based on open standards to encourage adoption by partners. This talk will explore how the ROGUE team is using GeoNode, GeoGit, and the OpenGeo Suite to provide a collaborative editing environment that maintains provenance of the data. In addition to developing GeoGit, the ROGUE technical team has demonstrated practical application of the technology through mobile and web applications (Arbiter & MapLoom). Both of these projects are available as open source as well. The discussion will include an overview of how the technology is being used operationally in Honduras and for risk assessment and response. A short demo will wrap up the talk.
10.5446/15333 (DOI)
microphones as we speak. OK, so the presentation is about Capsicum and Casper, and how Capsicum and Casper help to avoid using hacks that are currently used to sandbox applications. So before we start, let me introduce Mariusz first. Mariusz was a Google Summer of Code student who was working on Capsicum - a successful student - and is now working with me. And my name is Paweł. I work at Wheel Systems; we do security products. And I've also been a FreeBSD committer for more than 10 years now. So, why we decided on this title, and how we came up with the idea for this talk in the first place: this is the mailing list that basically talks about capabilities, object capabilities, and a question was sent there - what do people think about Capsicum? Is it just lipstick on a pig, or maybe it's the solution? So we accepted the challenge. And some history. The biggest problem with the operating systems we have now, with Unix or Windows, is that both those systems were actually designed with a focus on separating users from each other. So Unix, even Multics, was designed as a multi-user system from the start. Then came Windows NT, and Windows NT 4.0, I think, was the first truly multi-user version - because of course we had Windows versions before, which provided some kind of separation, but it was not really for security, you just had different profiles. And of course, Mac OS X was based on FreeBSD, so it was also prepared to be a multi-user operating system. But there was no real isolation between a user's own processes. So there was a strong focus on protecting setuid programs - basically, programs that elevate their privileges when you execute them, like passwd, or tools like that which require access to root-readable files. And of course, there was a strong focus on making the network daemons secure, so your machine cannot be hacked remotely. But nobody really cared about making a PDF reader secure, or about all the tools that don't require a setuid bit and are not network daemons - nobody cared about their security, because you cannot switch to a different user and you cannot attack the machine remotely. It's not really true, but that was the idea. So what types of isolation do we have? Nothing new. We have hardware virtualization like bhyve, Xen, VMware, VirtualBox. We also have operating-system-level virtualization like jails, Solaris Zones, OpenVZ on Linux. And we have process sandboxing: Seatbelt from Mac OS X, seccomp from Linux, and Capsicum. So I'm sure you all saw this picture already, but this pretty much summarizes the shift in how we see security, how we are trying to actually protect things. Because of course, protecting the root user on your laptop is not really a good goal - root doesn't really have any meaning there, and if someone is able to gain access to your account, I'm sure he will be able to escalate to root. So do we really need this process isolation? A few weeks ago we had a security event - my company organizes a security event in Warsaw in the evenings; we just invite people, whoever is interested in security comes in the evening and has some fun. So we invited Gynvael Coldwind to give a talk. He's from the Google security team. And the stuff they do is, for example: in this period, by fuzzing they were able to find - or at least 1,120 bugs were fixed in FFmpeg; not sure how many weren't fixed yet. But even the fixes the FFmpeg developers made introduced new bugs, new crashes. It was a long process, and I'm sure it's far from being finished.
And this was only by fusing. It wasn't really security audit of the code. The same for Flash or Adobe Reader. Basically, tools like that have many bugs. Nobody cared about them, and still many doesn't care about them. And I would like to show you how techniques that are currently used are not really feasible to address those problems, because most of the hacks we use to secure those applications actually require root access. You have to be able to change your UID. And I wouldn't sleep well if I would set UID to root to my FFMpec utility. That wouldn't be a good idea. OK, but let's try to go through all those. I call them hacks. You can call them portable sandboxing techniques, if you want. So the widely used technique was to just drop the credentials. And almost every tool does use some kind of the sandboxing change credentials. For example, OpenSSH default sandbox mechanism does just that, it uses unprivileged SSH user. But why is it hard to actually use that? Because it's very error prone. All those techniques were not really designed to provide sandboxing. They are used because you can find them on almost every single operating system, or at least Unix-like operating system. But you have some things to remember about. For example, when you change your UID, you have to change your group ID as well. But when you change your group ID, you have to do it before you change your user ID. Because of course, once you change your user ID, you have no longer permission to change your group ID. And when changing your group ID, you have to remember that there can be much more groups that you should drop before actually changing user ID and group ID. That's the order I said about. Also, you have to verify that all those operations actually succeed. Because there are many problems with that. We had a really nice Sandmail security bug, where Linux introduced POSIX capabilities, which are not a capabilities. It's just the name, but those are not really capabilities, just some global privileges. There was a bug in Sandmail where Sandmail didn't really check if a set UID succeed. So when Sandmail didn't have this permission or privilege to change his own user ID, it was trying to drop privileges because it was running as root. But this operation failed. So Sandmail was running as root without actually dropping any privileges. So you have to verify if it succeeds. Another problem you can have, so resource limits, for example, this user cannot create more than that many processes. So if you try to change your user ID, you may hit resource limits. And of course, I'm not really sure, but that's a suspicion that not on all operating systems, that UID modifies all those real effective and safe user ID and user group. And of course, the biggest issue is that it requires to be root. So this is basically what I think the code should look like, and it should be secure, I think, in that order. So we drop all the groups. We set group ID. We set user ID. We verify all those operations. And just to be sure, we verify them again. Change root, another way to sandbox your applications. So basically, change root just restricts access to the directory you gave to the system call. For example, OpenSSH, use change root to change root to slash var slash empty directory, which has no content. The unprivileged process cannot write to it. And it should be pretty safe. But of course, change root requires root. Once you change your root directory, you have to remember to change your current working directory to new root directory. 
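To make that ordering concrete, here is a minimal sketch of the drop-privileges recipe just described - this is my illustration using standard POSIX calls, not the code from the slide, with error handling reduced to bailing out:

#include <sys/types.h>
#include <err.h>
#include <grp.h>
#include <unistd.h>

/*
 * Drop root privileges to the given uid/gid in the order described in
 * the talk: supplementary groups first, then the group ID, then the
 * user ID, and verify that every step actually took effect.
 */
static void
drop_privileges(uid_t uid, gid_t gid)
{
	gid_t gids[1] = { gid };

	/* Supplementary groups must go before we lose the right to change them. */
	if (setgroups(1, gids) == -1)
		err(1, "setgroups");
	/* Group ID before user ID, for the same reason. */
	if (setgid(gid) == -1)
		err(1, "setgid");
	if (setuid(uid) == -1)
		err(1, "setuid");

	/* Paranoia: verify that the real and effective IDs really changed. */
	if (getuid() != uid || geteuid() != uid ||
	    getgid() != gid || getegid() != gid)
		errx(1, "privileges were not dropped");
}

The verification at the end is exactly the kind of paranoia the Sendmail bug made necessary: if any of these calls silently fails, the process must not keep running with root credentials.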
If you don't do that, you can still access files outside of your new root. And of course, another problem: if you leave directory descriptors behind, you can use fchdir to actually escape from your chroot. Another problem is that all the path components of the root path you are trying to change to should be owned by root. If a user can write to a directory, he can rename the directory, he can create a symlink, and you can chroot to some different directory than you hoped for. So this is also very important. And of course, you have to check for chroot and chdir failures. And actually, doing that first bit race-free is also pretty challenging - it's not really easy to verify all the path components and be sure that you are chrooting to the same directory that you actually verified, with no race in between. So this is the code that implements that, although I skipped those two bits at the top because the code would be just too large to fit on the screen. And of course, checking for open directory descriptors is very expensive if you have to scan all the descriptors that can be in use; the same for ownership. It's really tricky to do it right, and also pretty expensive. setrlimit - this is a nice hack. As far as I know, only OpenSSH uses this to implement some kind of sandboxing. When you limit your number of descriptors to zero, you won't be able to make internet connections anymore, open any files, and stuff like that. So it's a really nifty trick. You can also limit the file size, and disallow forking by limiting the number of processes to zero. But it's not really widely used, because I think it's very impractical not to be able to do any kind of descriptor operation - you cannot duplicate a descriptor, you cannot receive a descriptor that someone delegates to you. So it's very limiting; that's why I think it's not widely used. The nice thing about it is that it doesn't require root access. It doesn't require root. There is also, in the kernel, this P_SUGID flag, which is set by the kernel when you do setuid or when you execute a setuid binary, and it can restrict various interesting things. For example, OpenSSH sandboxing drops privileges to the unprivileged sshd user, but every single SSH session sandbox in the system is running as the same user. So in theory, if I break into the sandbox, I can then use ptrace to jump to another sandbox, because it's run by the same user. But because of this flag, it's not possible. So this is also a nice hack. And it also restricts various signals: I cannot send all the signals to a different process that has this flag, only some subset of signals - hopefully a secure subset. And here are some proofs. We had a bug where there were missing checks for setuid or setgid or setgroups - someone forgot to check them, and it was a security bug. There was no setgroups call, so someone left behind all the groups that the root user was a member of, which allowed some interesting things. And someone called seteuid. This one only changes the effective user ID, and if you change only the effective user ID on Unix, you still have the saved user ID, so you can easily get back to your root privileges by calling setuid with zero. seteuid is not really meant to change all the IDs - it's just to change your effective user ID, so that when you create a file, the file is owned by this effective user ID.
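As a rough sketch of the chroot recipe and the descriptor-limit trick just described - again my illustration, not the slide code - something like the following is what a daemon typically does. It assumes the process is still root and that verifying root ownership of every path component has already been done, which, as noted above, is the genuinely hard part:

#include <sys/time.h>
#include <sys/resource.h>

#include <err.h>
#include <unistd.h>

/*
 * Classic chroot sandbox: chroot() first, then chdir("/") inside the
 * new root so neither the working directory nor a stray descriptor
 * points outside of it.  Finish with the OpenSSH-style trick of
 * forbidding any new file descriptors.
 */
static void
chroot_sandbox(const char *dir)
{
	struct rlimit rl = { 0, 0 };

	if (chroot(dir) == -1)
		err(1, "chroot");
	if (chdir("/") == -1)
		err(1, "chdir");
	if (setrlimit(RLIMIT_NOFILE, &rl) == -1)
		err(1, "setrlimit");
}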
But you can switch back to your previous UID, which is in this case root. Change root. There are a few bugs when people forgot to change their root after calling changeRoot. There was one bug we found where changeRoot directory was readable by the user. So you can do things like you can link setWeedBinary to your changeRoot and then create your own, for example, libc that will be loaded when you execute your setUID binary. And of course, once you have root access, you can escape your changeRoot easily. There was a bug, I think, in NetBSD where this flag was not properly checked and Petra was allowed even if the process had this flag. So it's not really application bug, but it's worth to know that those sandbox techniques are not really those sandbox techniques require this to work. We didn't found any bugs with setLimit, but I think it's mostly because only a very small number of programs actually use that because it's very limiting. So maybe that's why we weren't looking properly. And for example, this is a patch that was proposed to fix this missing changeDir. And it is wrong because ignoring the fact that nobody checks all the path components, the changeDir is done before changing root. So here, this directory can be changed. So actually, you can change root to totally different directory than you change root then too. I don't think it was commit or at least it was fixed because we checked syslocng source and it's better now. But even order of those operations is pretty important. OK. In the capability world, you don't have access to global namespaces. And in FreeBSD, you have quite a few of them. Can you see actually the text here? Somebody knows how to turn on the lights. OK. And we have all those techniques. And we will see which ones actually protect against accessing various namespaces. So for example, you have process IDs. Set UID or changeRoot won't help. Set P sugit does help a bit in a way that it restricts a number of signals you can send to the process. But you can of course always list all the processes in the system. So if you will break to OpenSage sandbox, for example, and someone is using a tool that takes password as an argument, you can list process names and all the arguments and actually get the password. Of course, it's not really a great idea to do that, but people does that, so file paths. That's of course a file system. So changeRoot helps here. Set UID helps in a way that you cannot access root-owned file systems and stuff like that. And setLimit does help a bit because you won't be able to open it because you cannot create new descriptors, although you still can list directory content and stuff like that. NFS file handles setUID helps here because you have to be root to actually be able to convert or open file by NFS file handle. But even if you change your root, you can still open files outside your root if you are a root user. FileSystemIDs, those are basically returned by system calls like getFSstat. You have this file system ID returned only for root, so if you drop your credentials, you won't get those. SysControls, some of the setUID helps, at least, in the setUID helps, at least, well, helps a lot. But it doesn't protect entire SysControl trees because there are still SysControls that can be written by everyone. Or all SysControls can be read by anyone. So it doesn't cover all the cases. 
And the P_SUGID flag helps because, for example, there are system calls that allow changing resource limits of processes with the same user ID, but they do check that the process doesn't have this flag. So it also helps, but it doesn't cover all the cases. System V IPC is not protected by any of those - so if you have shared memory or semaphores using System V IPC, there is no protection using those techniques. POSIX IPC: there is some protection when you drop your privileges, but the best one, of course, is just disallowing creation of new file descriptors. System clocks: you can, of course, read the time, but you won't be able to modify the system time if you are not root. The jail namespace is about jail IDs and stuff like that. You can still list the jails available in the system; you won't be able to attach to a jail and do stuff like that if you are not root. CPU sets - it's not a really important namespace, but nothing protects against messing with it. The same for routing tables: you can just change your routing table if you want to; there are no security checks at all. Protocol addresses - this is about making IP connections and also using Unix domain sockets for connections. So chroot does help in that you cannot connect to Unix domain sockets, but it doesn't help when you just want to be part of, I don't know, a spam botnet. For example, in FreeBSD we did use this setrlimit descriptor limit, but I don't think we use it anymore - is des@ around? We don't use the rlimit anymore, right? On FreeBSD, in the SSH sandbox, we use Capsicum now. - Yeah, but the Capsicum sandbox still uses the rlimit. - It does? Because I remember that we weren't able to use the rlimit because we were using OpenCrypto. - We can't set the file descriptor limit to zero because of the crypto support. - Yes, when we use crypto acceleration, it requires opening another file descriptor, so we weren't able to use setrlimit before Capsicum. Which means that if my only goal is not to hack into your system but just to make another box send spam for me, that's fine by me: if I can break into your SSH sandbox, I'm able to send spam, so that's fine for some attackers. In practice, we can't protect those two namespaces correctly because, as I said, setrlimit is pretty much impractical, so not many programs use it. So this is more or less how it works with the current techniques that are used for sandboxing. Enter Capsicum. Capsicum basically provides two things. One is tight sandboxing: cap_enter allows you to enter capability mode, where you have no access to any global namespaces. You cannot open files, you cannot make internet connections; all the rights or authorities you have are either inherited or delegated to you through Unix domain sockets. And the other one is capability rights, where you can limit your file descriptors to only some basic functionality. For example, if you open a file, even read-only, you can still change ownership, change the file mode, and do stuff like that. You can use capability rights to actually limit a file you open to only reads or only writes. We won't talk much about that, but we will talk about sandboxing some more. So this is our table from before. I'm not sure if you expected that, but with Capsicum it looks much better: we can protect against accessing all those namespaces, and there are no question marks in here. So if you compare this with the code you have to write to sandbox using those techniques I talked about, that code is pretty complex.
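For comparison with the hacks above, a minimal Capsicum sketch - mine, not the slide code - looks like this: open everything you need up front, restrict the descriptor with capability rights, then call cap_enter(). Note that at the time of the talk the header was <sys/capability.h>; current FreeBSD calls it <sys/capsicum.h>.

#include <sys/capsicum.h>

#include <err.h>
#include <fcntl.h>
#include <unistd.h>

int
main(int argc, char *argv[])
{
	cap_rights_t rights;
	char buf[4096];
	ssize_t n;
	int fd;

	if (argc != 2)
		errx(1, "usage: %s file", argv[0]);

	/* Acquire everything we need before sandboxing ourselves. */
	fd = open(argv[1], O_RDONLY);
	if (fd == -1)
		err(1, "open");

	/* Limit the descriptor to reads and fstat() only. */
	cap_rights_init(&rights, CAP_READ, CAP_FSTAT);
	if (cap_rights_limit(fd, &rights) == -1)
		err(1, "cap_rights_limit");

	/* From here on there is no access to any global namespace. */
	if (cap_enter() == -1)
		err(1, "cap_enter");

	while ((n = read(fd, buf, sizeof(buf))) > 0)
		(void)write(STDOUT_FILENO, buf, n);
	return (0);
}

After cap_enter() the process can still read from the descriptor it already holds, but it cannot open new files, create sockets, or touch any other global namespace.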
And again, I wasn't even trying to implement checking ownership of all the path components when you chroot, or looking for open directory descriptors, so it's much simpler than it should be. And you can replace all of that with one cap_enter call, and you get much more. So I think it's pretty nice. So, current Capsicum usage in base: we have dhclient already using Capsicum; hastd, hastctl, rwhod and rwho were sandboxed by Mariusz during Google Summer of Code; we sandboxed uniq, and auditdistd of course; sshd also now uses a Capsicum sandbox; and we also have tcpdump, kdump, and ping. And we are working on more, and of course patches are welcome, I would say. So if you'd like to sandbox some of the tools in the system, it should be pretty easy, I think. OK, so for Casper, I will switch to Mariusz. - Is this loud enough? OK, I think that works. - OK, so before I start, I just want to say that this is my first conference and my first presentation in English, so I'm pretty nervous - please don't throw any comments. - So help me welcome Mariusz. - Thank you very much. So Casper, as everybody knows, is a friendly ghost. But in FreeBSD, it's a daemon that provides functionality that is not allowed in capability mode. We try to make the API of the functions provided by Casper very similar to the original functions. So we have some services in Casper. Every service corresponds to some existing functions; for example, the system.dns service provides us functions like getaddrinfo. And as you see, we have other services as well, like system.random, and all these services can also be limited by the program. So we could, for example, limit system.dns to only be able to resolve specific types of queries. OK, so here is an example of using Casper. We have a function to init Casper - it's the function that connects to the Casper daemon. We then use the Casper service open call to open the particular service that we would like. Here is the limiting of the service, to only use certain address families. And here we have the uses of the Casper functions. We need to use these ifdef macros because we cannot know whether the system is installed with the Casper daemon or not. And we actually need to use them even more often than shown here, because we must check before including headers, before creating the connection with Casper, and so on and so on. OK, so here we have some programs that use Casper, like tcpdump. We could sandbox those applications before, but only with some of the options they provide. For example, with tcpdump, without Casper we would not be able to do IP-to-name translation, because, as we said before, we are not able to access any namespaces - so the tcpdump sandbox would only work with the -n option. And kdump would only work with certain options, because we would not be able to translate the UIDs and GIDs to names. OK. So after a while, we also developed a file-access service, fileargs, which allows you to sandbox programs that take a list of files as arguments. Before that, we didn't have any service that provided access to files. fileargs is not only a Casper service, it's also a library, and the first thing we wanted was to eliminate all those ifdefs in our programs. So fileargs is divided into two separate sets of functions: one provides the sandboxed versions, and the other provides the unprotected versions.
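Before getting to fileargs, here is roughly what the system.dns example described above looks like in code, written against the cap_dns(3) interface in today's FreeBSD (at the time of the talk the library was still called libcapsicum, so the exact names differed slightly); the service could additionally be restricted with cap_dns_type_limit() or cap_dns_family_limit(), which is omitted here:

#include <sys/capsicum.h>
#include <sys/socket.h>

#include <netdb.h>

#include <err.h>
#include <libcasper.h>
#include <casper/cap_dns.h>

int
main(void)
{
	cap_channel_t *casper, *dns;
	struct addrinfo hints = { .ai_family = AF_INET, .ai_socktype = SOCK_STREAM };
	struct addrinfo *res;

	/* Connect to the Casper daemon and open the DNS service. */
	casper = cap_init();
	if (casper == NULL)
		err(1, "cap_init");
	dns = cap_service_open(casper, "system.dns");
	if (dns == NULL)
		err(1, "cap_service_open");
	cap_close(casper);

	/* No access to global namespaces from here on. */
	if (cap_enter() == -1)
		err(1, "cap_enter");

	/* Resolution still works, proxied through the Casper service. */
	if (cap_getaddrinfo(dns, "www.freebsd.org", "80", &hints, &res) != 0)
		errx(1, "cap_getaddrinfo failed");
	freeaddrinfo(res);
	cap_close(dns);
	return (0);
}

The program links against libcasper and the DNS service library, and name resolution keeps working inside the sandbox because the actual lookups are performed by the Casper service process.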
So in sandboxed mode, we use Casper to provide us the descriptors, but in unprotected mode, we just use the open function. So we don't need to check which version we need to use. It's still not committed yet, and I will say a little bit later why. OK, so here is the API that we provide with the fileargs library. We have one function to initialize the library. The second one just tells you whether Casper is running and whether we should enter capability mode. Another one is just a remake of the open function, and the last one is just the destructor. So I would like to show you how we sandboxed one of the programs in FreeBSD. The first step was to add two headers: one was sys/capability.h, which provides all the Capsicum definitions from the system, and the second one was our fileargs header. And now we have a global fileargs variable. It's global because changing the interfaces of the functions that use it would have taken a little bit longer, but it's still not complicated. The next step is just to initialize the fileargs library: we give it argc and argv, and we define the open flags and what capability rights the files should be opened with. Then we check whether the library was initialized, and whether we're using unprotected mode or sandboxed mode - if we are using Casper, then we just enter capability mode. And the last step was just to change the open calls to the fileargs open function. So as you can see, that was very easy to do. We also have some problems with the current model of Casper. We didn't mention this, but Casper is a daemon that spawns services from a special process called the zygote, which then becomes the service that we asked for. Casper is running as root, because it needs some additional privileges. So the first problem was different credentials: Casper is running as root, but the process asking for some operation is running as a user. For example, with fileargs, if we allowed the service to run as root, it would have access to all files, so we would be bypassing the standard file permissions. We were able to resolve this problem by using functions like setuid and setgid, so now the service runs with the same credentials as the program that is asking for it. Another one is the problem of different resource limits. As was mentioned before, we have functions like setrlimit which set limits on a process, and we don't have any mechanism that allows us to get those resource limits and apply them to the Casper service. So we have this problem that Casper effectively has no limits, while our process could have some limits. Another problem is the different working directory. This is also connected to the fileargs service: if we would like to open some files using only a relative path, the Casper service is often in a different directory than the program using fileargs. So we must somehow provide Casper with information about which directory we are working in. We managed to do this by sending the current directory to Casper when we open the service. We send it as a file descriptor, because if we sent it as text there would be a big problem - the directory could be deleted, or some attributes of the directory could be changed. We also have a problem with a different umask, because the Casper daemon is running as root: the umask is taken from the root user that is running the daemon, and it could be different from the umask of the process.
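The fileargs interface being described was not committed yet at the time of the talk; it later landed in FreeBSD as cap_fileargs(3). Purely as a hedged sketch of the idea - the exact fileargs_init() argument list has changed over the years, so treat it as approximate - a cat-like loop converted to it looks something like this:

#include <sys/capsicum.h>

#include <err.h>
#include <fcntl.h>
#include <unistd.h>

#include <libcasper.h>
#include <casper/cap_fileargs.h>

int
main(int argc, char *argv[])
{
	cap_rights_t rights;
	fileargs_t *fa;
	char buf[4096];
	ssize_t n;
	int fd, i;

	/*
	 * Hand the file name arguments over to the fileargs service before
	 * sandboxing; rights says what the returned descriptors may do.
	 * NOTE: the argument list is approximate, check cap_fileargs(3).
	 */
	cap_rights_init(&rights, CAP_READ);
	fa = fileargs_init(argc - 1, argv + 1, O_RDONLY, 0, &rights, FA_OPEN);
	if (fa == NULL)
		err(1, "fileargs_init");

	if (cap_enter() == -1)
		err(1, "cap_enter");

	/* fileargs_open() replaces open(); it works with or without Casper. */
	for (i = 1; i < argc; i++) {
		fd = fileargs_open(fa, argv[i]);
		if (fd == -1)
			err(1, "%s", argv[i]);
		while ((n = read(fd, buf, sizeof(buf))) > 0)
			(void)write(STDOUT_FILENO, buf, n);
		close(fd);
	}
	fileargs_free(fa);
	return (0);
}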
The umask problem was easy to work around: every time we send a request that creates a file, we also send our umask to Casper. Another problem, which we were unable to solve, is the different MAC labels that the root user has compared to the user that ran the process. And while we were thinking about these problems, we also discovered that we have a problem with different routing tables. You can set the routing table using the setfib function, and you can set a routing table for just this one process, and we don't have any mechanism to send the correct routing table to Casper. Another one is that we can have different CPU sets, and we also have no mechanism to handle this. We also discovered with fileargs that if we use special files that refer to file descriptors, it won't work anymore, because Casper has different descriptors than the program that is running it. You can see here the example where we used process substitution with a diff-style program - the first line of the output shows which file is being compared. When we use process substitution, what we receive is a path that refers to a file descriptor, and if we try to open it, that is effectively the same as trying to duplicate the descriptor. So that is another problem that we haven't resolved. We also have a different process group, and so on. And because we are using multiple processes, with the service process running as a different user and so on, it's harder to audit or ktrace the program. Okay - just to make it clear... can you hear me? Does it work? No? How about now? Now it works. Just to make it clear: we are able to securely send credentials to the Casper daemon - the current process credentials - because Unix domain sockets allow us to do that. But we cannot securely send any other stuff, like the current working directory or mandatory access control labels or the umask, because it can be forged - we would have to send it as data, and of course the sandbox could send whatever it wants to send. So the future goal we are looking at is to eliminate this problem - the fact that the service process is forked by the Casper daemon - by just moving from a daemon model to more of a library model, where basically you just initialize the Casper library, not a daemon, when you start, and then all the service processes will be children of the process itself. There will be no separate daemon, so a service will inherit all the attributes of the original process, like the working directory, umask, and mandatory access control labels, and it will be easier to audit or ktrace such a process. And of course we are looking at how to lower the bar for new Capsicum and Casper consumers. Casper itself is a way to lower the bar, because in capability mode you cannot access global namespaces, so basically what Casper does is provide a way to access the global namespaces in a restricted manner. In the tcpdump case we were limiting the system.dns service to only answer reverse hostname lookups, so you can only translate IP addresses to hostnames, not the other way around. Another way to lower the bar is Mariusz's fileargs, where we use one API and there are no ifdefs - you just use the same API whether you have Capsicum and Casper or not. You use the same API, it discovers whether it can actually sandbox or not, and it will either open files directly or open them through the Casper service. Okay. So I think that's all we had for today. Are there any questions? Yes.
You mean trying to protect the debugging applications? - Yes, making sure that they can only access the memory of the applications being debugged. What about the other ones? - Well, kdump is a case of a debugging application that we did sandbox, just to be sure, because, for example, if you are a FreeBSD developer and you receive ktrace output, I would prefer to analyze it in a sandbox, not directly. So those are the kinds of tools we would like to sandbox, and of course we are happy to commit any patches you have. Any other questions? Yes. - So with sshd using Capsicum, does that cause trouble if I want to use Kerberos authentication, because then it has to go and open the keytab, which is a new file? - Can you tap this on the back? I don't know - the authentication isn't done in the sandboxed process that isn't supposed to be able to open files. Yes, actually the setrlimit sandbox that is also in OpenSSH doesn't even allow creating new descriptors, so the sandboxed process itself probably isn't the one that interacts with stuff like that. Yes? - Is Capsicum portable? - The question is whether Capsicum is portable. Of course we are fighting for Capsicum to be portable by trying to prove that this is the way to do this stuff, and there is an ongoing port to Linux sponsored by Google - Google is porting Capsicum to Linux. And from what I heard, OpenBSD also likes the design, so hopefully Capsicum will be ported to OpenBSD as well, and others will follow, and finally we will have a portable Capsicum which we can use instead of all those nifty tricks. Any other questions? Okay, thank you very much.
Capsicum and Casper are FreeBSD's proposal for clean, robust and intuitive application compartmentalization. Today's sandboxing techniques build on top of existing technologies that weren't really designed for this sort of protection (like chroot(2), rlimit(2), setuid(2), Mandatory Access Control, etc.). Capsicum and Casper provide rich infrastructure for breaking applications into multiple useful sandboxes and thus significantly reducing the Trusted Computing Base. Capsicum is a lightweight OS capability and sandbox framework implementing a hybrid capability system model. The Casper daemon enables sandboxed applications to use functionality normally unavailable in capability-mode sandboxes. The talk will discuss the Capsicum framework, the Casper daemon and its services. It will provide an introduction to these new FreeBSD features based on already implemented examples. The talk will also present existing portable sandboxing implementations to give a clear picture of how hacky those solutions are.
10.5446/15330 (DOI)
Turn the light on. Okay, we're good. We're good? Yeah. Yeah, yeah. Here we go. Sorry about that. So, is it okay if I get six minutes over? Should I try to finish at 5.30 or is it okay if I go to 5.36? Okay. Cool. Thanks. Thanks. You got both mics on? Yeah, I haven't turned this one on yet. They're both on? Yeah, I just turned it on. It's on now. Okay, we'll just go grab some so we can start recording. Okay, sure. Okay, cool. Is it on? I don't know. Okay. Okay. Hi, okay. So, we've resolved the recording stuff, so we're good to go. So, my name is Arun Thomas and I'm going to talk about BSD ARM kernel internals. This is my second BSD conference. I gave a talk at EuroBSDCon 2011 a few years ago and so it's good to be back at another BSD conference. They're always a good time. So, let's talk about a BSD ARM. So, I'm going to start off with a little bit of a quick demo. So, this is a previous BSD current running I guess a few weeks ago when I built it and a little bit of debug code. And so, you can see it booting. There's a mix of debug code and witnesses on, so it's going to be a little bit slower. So, you'll see that there was, and I have some screenshots later that'll kind of, you can actually see what's going on. But there were a couple versions of Uboot that ran. Uboot loader is running now and then it's going to load the kernel. So, I'll take a couple seconds. Okay, here we go. Yes, I want to start now. So, you've got all this kernel code. It's booting up, it's doing some device stuff. And then, shortly it'll boot up the rest of the way. So, this is kind of, I'm going to kind of go through the flow exactly what's going on. So, to get to this point there was a lot of like machine-dependent stuff that needed to happen. And so, here we go. You're in user land and stuff's happening around the file systems and pretty soon you get to root prompt. So, I'm running it off of Beaglebone Black. This is the board here. It's pretty cheap. It's a nice platform. It's supported by all the BSTs that have ARM ports. And so, yeah, it'll keep going. And so, this is the black. I have the white tube. So, we're looking at an SSH session from your laptop. No, no, this is just, I plugged in the serial. There's a serial cable going on. So, Adafruit has these handy little cables you can just plug in. So, just plug in root. Password, you get the free BST prompt. So, this uses UBoot. So, free BST, I think everyone uses UBoot on ARM boards basically. So, that's that. So, that's the basic process and then I'm going to kind of go into like what happens when you're booting up. I'm going to focus on the machine-dependent parts, the machine-independent parts that I don't really talk about at all, but we'll look at some of the code and see what happens. So, all right. So, these are a couple of those screenshots so you can kind of see what happens. So, there's a couple versions of UBoot running. That's one. That's another one. And then, the free BST UBoot loader runs. And then, there's something here about a DTB. It's a device tree blob. I'll talk about what that is. And then, you get the copyright notice. Before that happens, there's a whole bunch of machine-dependent stuff that needs to happen before you get to this. And I'll talk about all of that stuff that happens. So, you can tell that it's free BST, 11 current. And it's got a weird revision because I'm using git and it's dirty, so because I had some local hacks. Postname is imaginatively beasty. And it's the Beaglebone kernel right there. And I'm using crochet. 
It's a tool I'll talk about a little bit later that I used to build this image. And it's kind of cool - FreeBSD is built with Clang. And this is running on a Cortex-A8. I'll talk a little bit about the other processors that ARM has, all the Cortex processors. Here are some of the features, and then it talks a little bit about caches. And then this is the system on chip, the SOC, the TI AM3358. We'll talk a little bit about SOCs and all that stuff, in case you're not familiar with ARM. And then there's some more stuff about device tree, and this is basically how the UART gets mapped: this is the memory range, this is the IRQ. And we'll go into all of this. So, the goal of this talk is to get you hacking BSD on ARM. So, are there any people here who haven't done any ARM hacking, or have done only a little hacking? Raise your hands. Excellent. So, my goal is to get all of you at least interested in hacking ARM. That's my goal for this talk. So, we'll start off with ARM 101. We'll talk a little bit about the basics of the ARM architecture, what the assembly looks like, what the system-level stuff looks like, various boards that you could buy, the SOCs, all that stuff - just a quick bootstrap to the ARM architecture. Then, we'll go through some of the kernel code. We'll look at NetBSD and FreeBSD. I just picked those because those are the ones I've hacked. But, after watching some of the OpenBSD MIPS talks, I'm hoping to get OpenBSD running on ARM. Maybe in an updated version of this talk, I'll add some OpenBSD stuff. So, we'll focus on the machine-dependent stuff. I won't look at the machine-independent stuff, as I mentioned earlier. We'll look at how the kernel boots up, does some exception handling, sets up the initial page table, and all of that stuff. It'll still be kind of high-level, but I'll give you at least the files you want to look at, so you can dig into more detail later. Then, there's a short section with tips for hacking on BSD ARM and debugging, and more resources, so you can dig in further. I won't be able to get into everything, but there's a lot of good material out there that you can look into. So, there we go. Okay, so the ARM architecture. Hugely popular in embedded systems - you probably own several, each of you. It's also moving into general-purpose computing: you've got laptops and netbooks; some of the Chromebooks have ARM chips in them, and they're pretty nice, actually. There are servers - HP was making ARM servers at one point. It's also moving into high-performance computing, GPGPU - and NVIDIA has some pretty cool SOCs coming out. And the main reason for this is power efficiency, so it's a big push. So, ARM has an interesting business model. It doesn't manufacture chips. It basically licenses the architecture and designs to various silicon vendors like TI and Samsung and all that. They basically just come up with the architecture and some of the processor designs, and someone else packages it up into an SOC and fabricates it. Okay, so the ARM architecture. ARM stands for Advanced RISC Machine - it used to be Acorn RISC Machine. And since it's a RISC machine, it's a reduced instruction set computer: fewer instructions, simpler instructions. Because it's RISC, it's a load-store architecture.
So, if you want to operate on memory, you have to load it from memory into register, operate it on there, and then store it back out to memory, unlike, say, x86, which is not very risky. So, it's big Indian or little Indian. Little Indian is more common. So, the current versions, there's several versions of the ISA. The current version is ARMv7 and ARMv8. So, ARMv7 is 32-bit and ARMv8 is 64-bit. It's ARM, it does some cool stuff in ARMv8. They simplify the architecture. But I won't be talking about that at all in this talk. So, I'm going to focus purely on ARMv7 and 32-bit. So, each of the ISAs also has various architecture profiles. So, there's the application profile, the real-time profile, and the microcontroller profile. I'm only going to talk about the application profile. So, the real-time microcontroller profiles are more for embedded systems, and they don't have full MMU support. So, as I said, ARMv7a is what we'll talk about. 32-bit ARM processors. They're called Cortex-A, if you're looking at models. They have full MMU support, and if you look at the ARM documents, they're designed for full feature operating systems. So, things like BSD and iOS and stuff like that, not really the kind of like embedded operating systems. So, the ARMv7a has two instruction sets, actually. So, there's the ARM instruction set, and there's the thumb instruction set. So, the ARM instruction set is bigger than the thumb instruction set. So, the thumb instruction set has a mix of 16 and 32-bit instructions, where ARM is just 32-bit. So, the 32-bit instructions were added with the thumb 2, and they added thumb 2 technology. So, the good thing about thumb is that you get better code density, and that's good for your caches. So, all the ARM CPUs are packaged up with other logic into a system on ship or SOC. So, you'll hear this SOC acronym a lot. So, what's in an SOC? So, you have your interrupt controllers, your timers, your UARTs, SDMMC controllers, SATA controllers, USB controllers, GPUs, and all kinds of peripherals. So, you might have a camera controller, or GPS controller, all kinds of stuff. All that stuff that goes into your phone is going to be on the SOC. So, the SOC is actually getting better for the ARM developers. So, ARM is actually kind of standardized some of these things, like the interrupt controller and the timers. Before, each SOC vendor would basically create their own set of timers and their own set of interrupt controllers. So, they had kind of hard for OS developers, since they had to write all these new drivers. But now ARM has this generic interrupt controller and generic timers, so it makes it a little bit easier. ARM is kind of building a platform. Okay. So, this is what an SOC looks like. This is the AM335X SOC. This is the SOC that's in the Beaglebone black. So, you see that the core is actually a small part of the SOC. There's a lot of other stuff in there. So, you've got a GPU here, and you've got a whole bunch of various buses. You've got the UR at SPI, I2C stuff, timers, real-time clock, ACs, and then you've got your GPIOs. So, there's a whole bunch of logic on there, including USB and an EMAC. So, the SOC has a whole bunch of stuff in addition to the core. So, as I mentioned, I'm using Beaglebone black. It can be yours for the low-low price of $45 US. It's a popular hobbyist board, and it's supported by FreeBSD, NetBSD, and OpenBSD. Dragonfly, I don't think, has an ARM port as far as I know, but I imagine if it did, it would support it as well. 
So, it's kind of a cool little board, and it has a lot of I/O. Another popular board is the BeagleBoard-xM. It's a little more expensive. It's supported by FreeBSD in a branch from a Google Summer of Code project - I don't think it was integrated back yet, but it may be sometime. NetBSD supports it; OpenBSD supports it. And the nice thing about this board is that if you don't want to buy it, QEMU - Linaro's version of QEMU - actually has support for it, so you could just boot it up in a virtual machine. That's kind of cool. Okay, so this is a bunch of different SOCs and the boards that are associated with them, and these are all supported by one of the BSDs. At the top are the OMAP3, OMAP4, DaVinci, and Sitara. They're all from Texas Instruments, and they're kind of a family of SOCs; they're fairly similar. So you've got the BeagleBoard - there are a couple of versions of the BeagleBoard - and there are a couple of versions of the BeagleBone; Sean over there was talking about the BeagleBone White versus Black, and I have the Black here. And the PandaBoard has a couple of versions, too. These are really popular developer boards. Then you have the Allwinner A10 and A20, used in the Cubieboard and the Cubietruck, also fairly popular boards, especially now. The Freescale i.MX6 is in the Wandboard. The Samsung Exynos 5 is in the Chromebook and the Arndale board - that's a really high-end SOC with really high-end ARM processors on it. And the last one is actually really interesting: the Xilinx Zynq has these Cortex chips packaged up with some FPGA logic, so if you want to do any hardware design, it's a pretty cool platform. These boards range from 45 to, like, 400 or 500, I forget exactly - the ZedBoard is the most expensive board, and the MicroZed, I think, is fairly cheap. So, if you want to play around with that, it's pretty cool. So, these are all the Cortex CPUs. On the low end, you have the Cortex-A5; the Freescale Vybrid is an SOC that uses the A5, and that's for really kind of embedded stuff. The Cortex-A8 is more popular - you'll see that in most of the developer boards that are out: the OMAP3 SOC, DaVinci, Sitara, so your BeagleBoards and BeagleBones have this chip in there. So it's the lower end. And the Allwinner A10 as well. The Cortex-A9 is sort of the mid-range CPU; you'll find it in the OMAP4, the Freescale i.MX6, and the Xilinx Zynq - that's what's in the PandaBoard. The Cortex-A15 is the high end. As I mentioned, it's in the Exynos 5; you'll find it in a lot of the high-end phones and in the Chromebook. The A7 is actually a replacement for the A8; it's found in the Exynos 5 and in the Allwinner A20, so it'll eventually fully replace it. And the A12 and the A17 are mid-range CPUs, so they'll replace the A9 over time. I don't know what SOCs they're in, but I'm sure they'll be in a lot of SOCs - maybe some of you know. So there are a lot of CPUs, and it's hard to keep track of them all; typically the ones you'll see a lot are the A8 and A9 in the developer boards. So, now that we've talked a little bit about the boards and the hardware, I'll talk a little bit about software. ARM has several versions of ABIs, and an ABI is an application binary interface. The ARM docs' phrase is that they are rules that an ARM executable must adhere to - things like executable formats, calling conventions, alignment, what system calls look like; all that kind of stuff is set out in the ABI.
So, there are several ARM ABIs. There's the ARM EABI, the ARM Embedded ABI, and there's the ARM EABI HF, which is the ARM Embedded ABI with hardware floating point. Those are kind of the current ABIs. There's also the ARM OABI, and I don't actually know what it stands for - maybe old, I don't know - but it's sort of obsolete; it's used for older versions of ARM. Does anybody know what OABI stands for? No? Okay, just wondering. So, NetBSD and FreeBSD both support EABI and EABI HF. So, when you're building this stuff, you want to make sure you have the right toolchain - a toolchain that's built for EABI or EABI HF, depending on what you want. Okay, so let's look a little bit into what the architecture looks like. ARM has 16 general-purpose registers. Some of them are used for other things or have kind of dedicated uses: R11 is the frame pointer, R13 is the stack pointer, R14 is the link register - so when you do a call instruction, the link register will save your current PC, so you have something to go back to - and the program counter is R15. ARM also has some program status registers. There's also floating point - those are the VFP instructions - and SIMD, that's the NEON instruction set. I won't really talk about those, but if you're doing heavy vector code or GPU-style code or that kind of stuff, they'll be useful to you. So, in terms of this, ARM has these two program status registers that are fairly important when you're doing systems code: there's the current program status register, the CPSR, and the saved program status register, the SPSR, which is used for exceptions. And it holds a number of important bits, like the processor mode - for instance, SVC is a mode, and we'll talk about what the different modes are soon - the interrupt mask bits, so if you want to disable interrupts, you'll set this IRQ bit; ARM versus Thumb state; endianness, big-endian or little-endian; and various condition flags. So, here's what the assembly syntax looks like. We'll just add two numbers together - we'll add one plus two. You move one into R1, you move two into R2, then you add R1 and R2 and put the result in R3. So, the destination is on the left. That's a basic intro to data operations on ARM. The memory instructions: as I mentioned, it's a load-store architecture, so if you're going to work on stuff in memory, you have to use LDR, the load, and STR, the store instruction. So, you load what R1 points to into R0, and you store R0 into what R1 points to. There are also push and pop instructions that'll push multiple registers onto the stack and pop multiple registers off the stack. These are actually aliases - ARM has these instructions LDM, load multiple, and STM, store multiple, and they're the same thing if you have the right suffix and whatnot. And control flow - ARM, of course, has control flow. So, this is a branch-if-zero back to the loop label. And the call instruction, as I mentioned, is the bl, branch-and-link, instruction with a label, and this will save the current PC to the link register. If you're going to do a return, it's bx lr, branch-exchange of the link register - it'll just jump back to whatever's in the LR. In older versions of the ISA, this is how you do it: you just move the LR into the PC. That's, I think, deprecated in ARMv7, but it still works; they just don't recommend it, especially if you're doing Thumb stuff.
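As a small illustration of the syntax just described - my sketch, not one of the slides - here is an EABI-style assembly routine wrapped in a C file. Under the AAPCS calling convention the first arguments arrive in r0-r3 and the result is returned in r0, so the routine just adds its two arguments and returns through the link register with bx lr:

/* Build with an ARM EABI toolchain, e.g. a cross gcc or clang targeting armv7. */
#include <stdio.h>

__asm__(
    "	.text\n"
    "	.arm\n"			/* assemble as ARM (not Thumb) code */
    "	.global	asm_add\n"
    "	.type	asm_add, %function\n"
    "asm_add:\n"
    "	add	r0, r0, r1\n"	/* r0 = a + b */
    "	bx	lr\n"		/* return to the address bl saved in lr */
);

int asm_add(int a, int b);

int
main(void)
{
	printf("1 + 2 = %d\n", asm_add(1, 2));
	return (0);
}

Because the symbol is marked as a function, the linker sorts out ARM/Thumb interworking when main, which may itself be compiled as Thumb code, calls into it.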
Okay, so that's kind of like the application level stuff. So, let's look into the OS relevant bits, because that's what we're doing. We're trying to get the OS going. So, there's actually more than this, but the two privileges I would care about are PL0 and PL1. So, PL0's unprivileged user code, PL1 is privileged kernel code, which is what it's used for. And there are nine operating modes, so there's one unprivileged user mode that runs at PL0, and there's eight privileged modes that we'll talk about, and they run at PL1 and above. So, the privileged modes are used mostly for exception in interrupt handling. So, it's a little bit complicated. ARM v8 cleans some of the stuff up, but if you're going to be doing kernel programming, especially at this level, it's good to know how this stuff works. So, here are the various modes. There's the supervisor mode, which handles, which sees for system calls, and it's also the initial mode that the processor starts up in. There's interrupt mode for normal interrupts, fast interrupt mode for higher priority interrupts, faster as well as you might expect from the name. There's abort mode for memory faults, undefined mode for illegal instructions. You can also use this to emulate instructions and software. System mode, which is a privileged mode for user mode registers. It doesn't really get used that much. Hypervisor mode for VMM support and monitor mode for trust zone. The modes that you really, that we'll really care about are supervisor mode, interrupt mode, abort mode, and undefined mode. Those are the ones that typically get used. So, there's one kind of interesting feature of the architecture. There's a thing called banked registers, and it's a little bit complicated, but it's good to know about. So, most registers are shared across all the modes. So, you have one PC that applies for all the modes. But there are some registers that are dedicated for each mode. So, you can have, and so there are duplicate registers for each of these modes. So, there's a separate stack for each mode, so you're going to want to have separate stack pointers. So, there's a stack pointer for user mode, a stack pointer for a supervisor mode, for instance. And the CPU will automatically set the appropriate banked register, depending on what mode you're in. So, in user mode, the CPU knows that you want the user mode stack pointer. In SVC mode, the CPU knows you want the SVC stack pointer. So, why do they add this stuff? So, it's important for exception handling. So, basically, we wonder how do we get back to the faulting instruction. So, we need to do that. We need to save the state at the time of the exception. So, the banked link register, so usually the link register saves the PC when you do a call. Here, it will save the program counter at exception time. And the saved program status register will save the current program status register at exception time. So, it basically saves your state. So, for instance, if you're doing a system call, LRSVC and SPRSVC, the link register and SVC mode, and the SPSR register and SVC mode, save the program counter and the current program status at the time of the SVC exception. So, from the user code. So, typically, when the processor changes modes, it will do that kind of automatically on an exception. So, it will switch to interrupt triggers. It will switch to IRQ mode on SVC instructions, also called SWI, or software interrupt. If you look through the BSE code, this is actually the name that gets used. 
But if you look at the new ARM docs, they tell you everything uses SVC, because I guess that's sort of the new name for the supervisor call or system call instruction. So, the SVC instruction will switch to SVC mode. So, if it's basically how it works, each exception, you go to the recording mode. So, the OS can also change the mode using the privileged CPS instruction. So, that's the change processor state instruction. So, the CPS instruction basically what it does is it modifies the mode field in the CPSR. So, this is basically, you'll find this when we talk a little bit later about what the instruction, the kind of early boot code looks like in FreeBSD and at BSD. It'll do this. So, this is how you switch to SVC mode if you want to use the CPS instruction. And if you want to switch to SVC mode and disable interrupts, this is the SVC mode. And you use the suffix ID, which is interrupt disable, and you tell which bits, the IRQ bit and the fast IRQ bit. So, back to the status registers. So, if you'll recall these hold the current mode, the interrupt disable and endiness. This is how you read and write them. So, this is if you want to read the CPSR and if you want to write the SPSR, it looks like this. The instructions are a little bit confusing. So, it's move to register from status is read, and then write is move to status from register. So, these are in the docs and you'll get used to them. But there is a gotcha here and actually this was fixed in FreeBSD a couple months ago. So, there's if you don't give the FSXC suffix, which basically lays out each of the parts of the status to write, you might not the compiler or the assembler won't necessarily do the right thing. So, there's an suffix underscore all and it doesn't actually mean all. So, depending on the version of gas you use, bad things can happen. So, there was this bug with the wrong Indian register restore bug and so this was because all didn't really mean all. So, if you're interested you can look into the revision ID, but it was kind of an interesting bug that was fixed and I think it took a long time for it because in LaPore it fixed it. So, tricky. So, let's look over view ARM Virtual Memory. It's 32 bit address on ARM v7. ARM v8 of course is 64 bit and if you use LPA it's actually 40 bits, but we'll talk about the 32 bit ARM v7 stuff. So, with that you get a 4 gigabyte virtual address space and there's paging support, two levels of paged tables. The TLBs are hardly managed so the MMU will do the paged table walk and the TLB miss. The commonly used page sizes are 4KB small page sizes and one megabyte sections. And if you want to know more, a lot more about ARM Virtual Memory there's a talk about transparency for pages tomorrow. So, you should go check that out. You'll learn a lot more about virtual memory because I'm just covering the basics. Okay, so the other kind of key thing in the architecture is coprocessor 15. So, if you're doing kernel hacking you're going to have to look at this. So, it's a system control coprocessor. Coprocessor is a little bit of a misleading term because it's actually an integral part of the architecture. You can't really take it out, but so it's heavily used for systems programming and it's used for things like setting up the processes page table. So, this is how you write the page table to the translation table base register. 
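For illustration - my sketch, not from the slides - this is the shape of the inline-assembly wrappers you will find in the ARM machine-dependent code for this kind of access. The TTBR0 write only makes sense in privileged (PL1) kernel code, and real kernel code also has to add the barriers and TLB/cache maintenance that are omitted here:

#include <stdint.h>

/* Write a page table base address into TTBR0 (CP15 c2, c0, 0). */
static inline void
set_ttbr0(uint32_t ttb)
{
	__asm__ __volatile__("mcr p15, 0, %0, c2, c0, 0" : : "r"(ttb));
}

/* Read the current program status register. */
static inline uint32_t
read_cpsr(void)
{
	uint32_t cpsr;

	__asm__ __volatile__("mrs %0, cpsr" : "=r"(cpsr));
	return (cpsr);
}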
So, it's an MCR instruction, move to coprocessor from register, and, I don't know, you have to read the docs to figure out exactly which numbers to put in there and so on. Usually these things are wrapped up in inline assembly, so that makes it a little bit easier. The other important thing coprocessor 15 does is hold the system control register. The system control register allows you to enable the MMU, the branch predictor, and caching, and it allows the OS to tell the CPU where the exception vectors will go. So, yeah: reading is MRC, writing is MCR, move to coprocessor from register, and this stuff is in the docs, but it takes a little getting used to. I kept screwing it up, but maybe you guys will be faster. So, this is probably the most important slide in the whole presentation. This is where you go to get more information about ARM, because I'm just touching on the basics. So, this is a quote from the NetBSD code, and I'll read it out: "and thus spake the ARM ARM". So, what's the ARM ARM? It's the ARM Architecture Reference Manual, and that has all the details about the instructions and the architecture. It's almost 2,000 pages, so you're probably not going to read it cover to cover, but it'll answer all the questions you have about the ARM architecture. You want the ARMv7-A and ARMv7-R edition of that. If you want a quicker, lighter introduction to ARM, the ARM Cortex-A Series Programmer's Guide is really nice. It came out a couple of years ago and they just updated it this year, and in the latest revision they really updated and expanded a lot of the discussion. So, if you're interested in ARM, you should download that, like, now. Both of these are free; it does require registration on the ARM website, but they're free. The other resource that's good is the ARM System Developer's Guide. This is actually a printed book, and it's from 2004, so it's about a decade old, and ARM moves pretty fast, so it's definitely dated. But if you want to really dig into the systems-level kernel stuff, like exactly how exceptions work and some of that, it's a good resource. In addition to the core ARM docs, you also want to look at the manuals for your processor, your SoC, and your board. So, for the BeagleBoard, it's a Cortex-A8, so you want to look at the Cortex-A8 TRM; sorry, for the BeagleBone as well. And then for the BeagleBone, you want to look at the AM335x TRM, and then you want to look at the system reference manual for the board. That'll give you stuff like the device map. So, for the BeagleBone Black, these are some of the key addresses that you need to know: 0x44E09000 is where the UART is, and if you write to that address, you can write to the UART; the interrupt controller is at this address; the timer, DM timer 1's clock, is at this address; and the RAM is at this address. So, you want to look at your TRM to figure out exactly how to set up your board. ARM also has several migration guides. For the folks who are coming from x86 or MIPS or Power, these are really good guides, and even if you're just interested in computer architecture, they're kind of cool. I saw some MIPS talks today, so I'm thinking about reading the MIPS-to-ARM guide, except I'm going to go the other way, maybe from ARM to MIPS as well.
So, the IA-32 guide, for instance, tells you about gotchas, like the fact that characters are unsigned by default on ARM, and problems like that. There are a lot of useful tips in there. Okay. So, that's kind of a quick introduction to the ARM architecture. There's a lot of material, but as I said, there are a lot of guides that you can look into to get more details. So, now we'll start digging into the code a bit. I think George Neville-Neil calls this code spelunking in his ACM column, so we're going to start digging. So, the vast majority of the OS code is machine independent, but we're basically just going to dig into the machine-dependent code, so the ARM code. That's a mix of C, some assembly, and some inline assembly, which is why we quickly went over the assembly syntax earlier. And we'll see examples from FreeBSD and NetBSD; with more time, there would have been OpenBSD here too. Okay. So, FreeBSD and NetBSD both have great ARM support. There are a few notable differences, or at least differences that are interesting to me. So, NetBSD has this build.sh script that's used for cross-building, and it allows for cross-OS building, which is kind of cool: you can build from your Mac or your Linux machine. FreeBSD uses Clang even on ARM to build the system, so that's kind of interesting. FreeBSD uses something called device tree for hardware configuration, while NetBSD uses the autoconf framework. FreeBSD also has an extra bootloader stage that you saw when we booted up. So, these are the key directories that you're going to want to look at if you want to dig into the code some more. These are the include directories. And it's interesting that there's an arm32 directory; you might think that happened when ARMv8 was added, but it turns out ARM had a 26-bit version of the architecture a long time ago, so there's still 26-bit ARM support in some of the NetBSD files. And then these directories are basically where you get the core source files: arm, arm32, and cortex. In terms of the SoC and the BeagleBoard, you want to look at the omap directory, and evbarm, which stands for evaluation board ARM, is where the platform- and board-specific stuff goes. So, all your BeagleBoard machine-dependent code goes there. So, the configuration files you want to look at: the files.arm file tells you which files will get built for the ARM port, and the files.cortex file adds to that. std.arm is the baseline build options. So, you look at that stuff for the core files, and then for the BeagleBone and the SoC files you look at omap2, evbarm, the Beagle stuff. And this is the kernel config file, the top-level config file that will pull in the rest of the stuff. So, FreeBSD is similar. It actually has fewer paths to look at. You look at sys/arm/include and sys/arm/arm, so that's the headers and the source for the FreeBSD ARM core support. And then for the SoC and the Beagle, you look at the ti directory, which is shared code for the whole TI family of SoCs, and then you look at the am335x directory for the stuff that's specific to the Beagle. And then you've got the configuration files. These are the files for the core ARM support; that's in conf. And then you've got the Beagle stuff, so the TI family SoC files, am335x, the BeagleBone, and this is the top-level config file. So, if you want to dig into that stuff more, you kind of read through those files and see what's going on.
So, that'll tell you exactly what's getting built. So, let's talk a little bit about bootloaders. I mean, when I booted up earlier, there were a lot of different bootloader stages that ran, so we'll kind of go through what happens there. At a high level, what the bootloader does is low-level hardware initialization, so DRAM and serial; it passes boot parameters to the kernel, and it loads the kernel. That sounds simple, but it actually does a lot under the hood. But at a high level, this is what it does. So, when you power on, the first thing that happens is that the reset handler in the SoC's on-chip boot ROM runs, and then the first-stage bootloader runs. That's MLO, or SPL. It's a stripped-down version of U-Boot, and it's needed since the DRAM isn't initialized yet. This is the output from that stage, and you can tell that it's reading the U-Boot image. Then U-Boot runs, and it reads its configuration from uEnv.txt, which you can see here, and it'll run the U-Boot loader, ubldr, on FreeBSD; it doesn't do this on NetBSD. And it also reads this device tree blob that we'll talk about a little bit later. So, then the third-stage bootloader runs; again, this is only on FreeBSD. This is an implementation of loader(8), which is the loader used on other architectures as well. It will read the loader.conf configuration and then it will load the FreeBSD kernel. You can kind of see that happening here, and it's going to use that device tree blob. And if you're interested in what the sources are doing, that's where you go. So, device tree. This is used by FreeBSD on several platforms; I think it came from PowerPC originally. Basically what it is is a data structure that describes the hardware configuration. All the device tree sources are in this directory, sys/boot/fdt/dts/arm. For the BeagleBone Black, you want to look at these files: beaglebone-black.dts and am335x.dtsi. That's an include file that beaglebone-black.dts includes, so most of the logic is actually in there. So, this is the configuration for the serial port. The SoC is the AM335x, and the serial port is at this address, which we saw earlier when I mentioned the device map: it's 0x44E09000. It tells you what kind of serial port it is, the address range, the register shift, so it's four-byte access, 32-bit, and which interrupts, the clock frequency, and other stuff. So, that DTS file that we just showed, there was a fragment from it; the device tree compiler will take that and turn it into a blob, and that becomes beaglebone-black.dtb for the BeagleBone Black. It's stored in a binary format called the flattened device tree. The blob can either be compiled into the kernel, or ubldr can load it. In this case, when we saw the output earlier, what's happening is that ubldr is actually loading it separately; it's not built into the kernel. And once that happens, the kernel will parse the DTB to learn the board's hardware configuration. libfdt handles the parsing, in case you're curious about how that stuff works. Now, NetBSD doesn't use device tree; it uses autoconf, the device autoconfiguration framework. Basically, the hardware config info is generated by the kernel configuration process, when config(8) runs. So, this is an example of that from the BeagleBone's kernel configuration. This is the same UART configuration.
You can tell it's the same address that we saw earlier, 0x44E09000, and the size is the same, the same range, the same interrupt, and the register spacing of four is here, this multiplier. So, it's just a different way to represent the same information. So, at a high level, now that we've talked a little bit about bootloaders, we'll talk about kernel initialization. The early kernel initialization is basically just the low-level device stuff, and that's mostly what we'll be talking about in the coming examples. So, the first thing it does: it'll save off the boot parameters, and it'll set up an initial page table and enable the MMU. Then it'll set up the exception vector table, the exception handlers, and the exception stacks. Then it'll do some initializing of the devices, like the serial, the interrupt controller, and the timers for the clock tick. Then you'll get into your machine-independent initialization, initialize various kernel subsystems, do more device initialization. Then you'll enable interrupts and then switch to user mode and run init. So, that's the high level. I'll focus mostly on the early kernel initialization and the machine-dependent stuff. So, these are the very first instructions that FreeBSD runs when you boot ARM. It's in sys/arm/arm/locore.S, so this is shared with all the ARM SoCs. You can tell that it uses the Linux boot API, or at least that's one of the options; there are some ifdefs around this code. So, in R0 you put 0, R1 gets the machine type that the bootloader passes in, and R2 gets the DTB image pointer. So, the first thing that this code does is save off these parameters. The comment tells us that they get put in arm_boot_params and eventually passed to initarm. And the next thing it does is make sure that interrupts are disabled. Typically U-Boot does this for you, but it's good to do it anyway just to make sure. So, NetBSD is fairly similar. Each SoC or board has a separate start routine, so that's a little bit different from FreeBSD: there's a beagle start routine that gets used on NetBSD, as opposed to a common start routine on FreeBSD. So, the first thing it does is switch to SVC mode and disable interrupts. Similar to FreeBSD, except it also does the mode switch. U-Boot should already have put the processor in SVC mode, so this probably isn't necessary, but it's good to make sure. It also saves off the various parameters that it got from the bootloader; that's what this STMIA thing is, it's basically a store multiple. And the MOVW/MOVT thing doesn't really matter; it's basically because ARM can't do 32-bit immediate loads, so that's how you do that. But basically what it does is save off the parameters it got from the bootloader. So, I'll continue walking through what NetBSD does as it boots up. FreeBSD is pretty similar and I'll talk about the similarities; NetBSD has a little bit more branching, so that's part of the reason why I went with that one. So, you can kind of follow through. So, the next thing it's going to do is set up the page table. The beagle start code calls this arm_boot_l1pt_init function, and that sets up an initial page table. This is an L1 page table with one-megabyte sections, and basically all it does is identity map the kernel, so virtual address equals physical address, and then also map the serial.
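Here is a rough sketch of what that identity mapping with one-megabyte sections amounts to. The descriptor layout follows the ARM ARM's short-descriptor format, but the attribute handling is deliberately simplified and the function is illustrative; the real arm_boot_l1pt_init and initarm code also deals with domains, caching attributes, and the device mappings.

    #include <stdint.h>

    /*
     * Each of the 4096 entries in an L1 table covers 1MB of virtual space.
     * Identity mapping means entry (va >> 20) points at the same physical
     * megabyte. Descriptor bits: [1:0] = 0b10 marks a section, AP[11:10] =
     * 0b01 gives privileged read/write; caching and domain bits are left at
     * their simplest settings here.
     */
    #define L1_TYPE_SECTION  0x2
    #define L1_SECT_AP_KRW   (0x1 << 10)

    static void
    identity_map_sections(uint32_t *l1_table, uint32_t pa, uint32_t len)
    {
            uint32_t va;

            for (va = pa; va < pa + len; va += 0x100000)    /* 1MB at a time */
                    l1_table[va >> 20] =
                        (va & 0xfff00000) | L1_SECT_AP_KRW | L1_TYPE_SECTION;
    }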
It also maps the serial, so you can get your debug output to the UART. So, once it sets that up, it'll call arm_cpuinit, and arm_cpuinit, if you read the comment, is what turns on the MMU and caches. The arm_cpuinit function can be found in this directory. It's a little bit misleading because it says it's an A9 function, and this is running on a Cortex-A8, but the code is actually the same. Basically what this does is invalidate the caches and TLBs, and after it does that, it'll enable the caches, set the translation table base register, and then enable the MMU. Okay. So, once that happens, the beagle start routine will jump to "start" in locore.S, and this is actually a shared routine across all the ARM boards. And here's the branch to start. What start does is basically set up the environment for C code; up until this point, everything you've seen is assembly code. And once that's done, start can call initarm, which is the first C code that runs. initarm will call initarm_common. initarm is board specific, and initarm_common is shared across all the boards. So, after initarm runs and initarm_common runs, start will return back to the beagle start code, and then it can call main, which is the first machine-independent code. This is actually fairly similar to how FreeBSD boots as well, except main, I think, is mi_startup, and there is also an initarm function that does all the heavy lifting to set up the processor and all that stuff. So, here's what initarm looks like, or what it does, basically. As I mentioned, initarm is SoC specific; you can find it in beagle_machdep.c, I think. And then initarm_common is ARM generic; you can find it in arm32, in arm32_boot.c. It performs all the logic needed before main runs. It will set up the cpufunc structure for the basic CPU functions, so the right ones, in this case the ARMv7 CPU functions. It will map the devices and initialize the console. It will set up a real page table and switch to it; there's a lot of code to do that. Then it will also set up the exception vectors and stacks. And finally, it will also parse the boot arguments that it got from U-Boot. So, once initarm runs, you get back to the beagle start code and then main runs, and that's the machine-independent code, which we won't really talk about, but it's pretty interesting. There was an AsiaBSDCon talk about that which I'll reference later if you want to learn how all that stuff works. Okay. So, let's talk a little bit about exception handling, since we kind of went through the machine-dependent code. Basically, as a kernel hacker, what you have to do is set up the vector table, set up the exception stack pointers, and write handlers for each exception. So, here are the various exceptions that you have: reset, undefined instruction, supervisor call for SVC or SWI, prefetch abort and data abort, which are the memory faults, interrupt, fast interrupt, and hypervisor call for the HVC instruction. So, the exception vector table is a jump table with eight entries, one for each exception type. Each entry holds one ARM instruction, and that instruction can either be a branch to an exception handler or a PC load of the exception handler. And that's what it looks like; there's a rough sketch just below.
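As a stand-in for the slide, here is roughly what such a vector table looks like, written as a file-scope asm block in C; the handler label names are illustrative, and the real tables live in each tree's exception.S.

    /*
     * Sketch of an exception vector table: eight slots, one ARM instruction
     * each, every one just branching to a handler. Label names are
     * illustrative, not the exact symbols in the FreeBSD or NetBSD sources.
     */
    __asm(
    "       .text                           \n"
    "vector_table_sketch:                   \n"
    "       b       reset_entry             \n"     /* 0x00: reset */
    "       b       undefined_entry         \n"     /* 0x04: undefined instruction */
    "       b       swi_entry               \n"     /* 0x08: SVC / SWI (system calls) */
    "       b       prefetch_abort_entry    \n"     /* 0x0c: prefetch abort */
    "       b       data_abort_entry        \n"     /* 0x10: data abort */
    "       b       address_exception_entry \n"     /* 0x14: reserved/legacy slot */
    "       b       irq_entry               \n"     /* 0x18: IRQ */
    "       b       fiq_entry               \n"     /* 0x1c: FIQ */
    );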
So, this is FreeBSD's exception vector table, and you can find it in arm/exception.S, so it's shared code. Basically all this is is a branch instruction; you've got one for each of the various fault handlers. So, if you want to figure out exactly what the system call handler is doing, you look at swi_entry and see what that does. So, the OS has to tell the processor where the exception vector table is, and there are a few options. You can put it at address zero or at address 0xFFFF0000; that's the high vectors, where zero is the low vectors, and this is set based on the system control register's V bit. You can also use, on ARMv7, the vector base address register to program where the vector table goes, and NetBSD actually has an option for this if you want to do that. I think it supports all of these options, actually, depending on which config, which ifdef, you go through. So, this is what FreeBSD's exception setup code looks like; NetBSD is also similar. As I mentioned, FreeBSD has an initarm function, and what it'll do is allocate stacks for each of the modes. As I mentioned earlier, there are only a few modes that are really important to us: there's the IRQ stack for interrupts, the abort stack for memory faults, the undefined stack for undefined instructions, and the kernel stack, which is used in SVC mode. Once you've allocated all the stacks, you want to set the stack pointer for each of the modes. As I mentioned, there are those banked registers, so you have to go into each mode and set up the stack pointer for each of them: IRQ mode, abort mode, undefined mode, and SVC mode. Once you do that, you call the arm vector init routine, and you can tell it's using the high vectors, so it's that 0xFFFF0000 address, and this will set up the vectors that go in the table. Okay. So, that's a high-level overview of how the machine-dependent code works on the BSDs, specifically NetBSD and FreeBSD. There's a lot more that happens in the machine-dependent code, and in the machine-independent code; some of the machine-independent code is going to call into machine-dependent stuff, like if you need to save context or switch, that kind of thing. But that's the overall picture. If you look at the other functions and follow each of the exceptions through, you can dig into how that stuff works. So, now we'll get into a few practical things about developing BSD on ARM. BSD has actually really good cross-compilation support, which is kind of cool because you can cross-build the whole system: the toolchain, the userland, all that stuff. It's all built in, so it's kind of nice; you don't have to do that stuff yourself. You can also create bootable SD images. There's a project called Crochet for FreeBSD, and it makes it really easy to create these bootable SD images for these various boards. You can also use build.sh on NetBSD to do the same thing. When I was building for the BeagleBone Black, I used the earmv7hf ABI, but you can choose whatever ABI makes sense to you. One important thing here is that if you use this stuff, it'll give you the right version of U-Boot. If you have an old version of U-Boot or a different version of U-Boot, it can be incompatible with your kernel. Sometimes I'd run a different version of U-Boot, or the U-Boot that came with the board, and sometimes the kernel wouldn't boot, or it would panic or something.
If you use this method where you just use the U-Boot that came with the Crochet image or the build.sh image, you'll have an easier time. If you don't want to go through the hassle of building the image, you can also grab snapshots off the NetBSD and FreeBSD mirrors. If you don't want to deal with burning SD images, you can also netboot: you can TFTP boot the kernel and NFS mount the root file system. If you get tired of building SD images, that's one thing you can do, and it might keep you from spending a lot of money on SD cards if you burn them out by re-imaging them over and over. A few tips on debugging BSD on ARM. One of the things you want to do is get printf as early as possible. U-Boot usually will initialize the serial for you, and if you write the THR register at that 0x44E09000 address, you can get stuff out to the UART (there's a small sketch of this below). If you want to see early debug output, you can turn on VERBOSE_INIT_ARM on NetBSD, and there's a debug macro in FreeBSD's machdep.c. That stuff's good; it's good to see all that come out on the console. Another thing that's useful is a JTAG debugger. Some are actually kind of affordable, like the Flyswatter, and some are very, very expensive; you might need your company to buy one, but the Flyswatter is sort of reasonable. It's useful so you can single-step when you're doing low-level port stuff, and especially if you're doing work with U-Boot, you really want a JTAG debugger. The kernel debugger is also useful; DDB, it's nice to have that as a debugging facility. QEMU is also great. You can actually hack on ARM without having any ARM hardware. You need some hardware, you need to at least have a laptop, but you don't need any ARM hardware. Linaro's version of QEMU has BeagleBoard-xM support, and mainline QEMU has Cubieboard support. The nice thing about this is that you can add debug code to the emulated hardware, which is kind of cool: you can look at the code to figure out exactly what the ARM processor is doing when, I don't know, exceptions happen. It's kind of nice if you don't want to dig through the ARM ARM. There are a lot of talks on various ARM embedded topics; these are all from various BSD conferences. How FreeBSD boots is great. It goes through all the machine-dependent and machine-independent stuff. It focuses on MIPS, but most of it's generic, and it even mentions ARM in some places. That's worth reading if you want to get some more information. NetBSD on the Marvell Armada XP is great if you want to learn more about how to get NetBSD running on a modern ARM SoC. FreeBSD and NetBSD on the APM SoC is actually about PowerPC, but it's an interesting comparison of how you port on FreeBSD versus NetBSD; there are a lot of comparisons there that I found interesting. FreeBSD on the latest ARM processors, the EABI toolchain: that has a lot of interesting stuff about the new ABI and the toolchain support and how that happened. If you want to learn more about the Flattened Device Tree, that's a good talk. Then interfacing FreeBSD with U-Boot goes through all of the booting stuff in much more detail and talks a lot more about ubldr; I think maybe that talk coincided with the creation of it. Porting NetBSD to a new ARM SoC is actually an older document; it's a web page rather than a presentation, but it goes into really heavy detail on how to bring up a new SoC. Some of it's dated, but it has sort of the most complete information. If you want to port a new board, that's where you'd go.
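Picking up the early-printf tip from the debugging list above, here is a minimal sketch of an early console putc for the BeagleBone Black. It assumes U-Boot has already configured UART0, that the UART is still accessible at its physical address, and that the usual 16550-style layout applies (THR at offset 0x00, LSR at 0x14, transmit-ready bit 0x20, four-byte register spacing); check the AM335x TRM before trusting the offsets.

    #include <stdint.h>

    /* BeagleBone Black UART0, as listed in the device map earlier. */
    #define UART0_BASE  0x44e09000UL
    #define UART_THR    (*(volatile uint32_t *)(UART0_BASE + 0x00))
    #define UART_LSR    (*(volatile uint32_t *)(UART0_BASE + 0x14))
    #define LSR_THRE    0x20    /* transmit holding register empty */

    static void
    early_putc(char c)
    {
            /* Wait until the transmitter can take another byte. */
            while ((UART_LSR & LSR_THRE) == 0)
                    ;
            UART_THR = (uint32_t)c;
    }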
These are all really good presentations and overviews, so you should check them out if you want to do more hacking. As I said, there are a couple of cool ARM talks tomorrow: Transparent Superpages for FreeBSD on ARM, and FreeBSD on BeagleBone Black, a robotic application. As I mentioned, that's the same board the presenter will be using, and I imagine it's probably a cooler demo than mine; all I did was boot up FreeBSD. In summary, we discussed the basics of the ARM architecture and went into the system-level aspects. I gave you guys a shopping list of all the boards you could get; you could try to collect them all if you want. It'll be kind of expensive, but you could do it. Then what the assembly looks like and all the system-level aspects. I also gave you the resources, so you should definitely grab the Cortex-A Programmer's Guide and the ARM ARM. Then we looked at some of the machine-dependent BSD code and kind of walked through what boot looks like. It was sort of at a high level, but at the very least you know which files to look at, and you can dig in a little bit more with cscope or GNU Global or whatever you use, or ctags, to figure out exactly what happens. There is a lot more detail there; I went over it at a fairly high level. Then there were a couple of tips and further resources I added for BSD hacking on ARM. Definitely, the Crochet tool is really great, and build.sh is great if you want to just build images. If you do a lot of hacking, NFS and TFTP boot is definitely a good way to go. I want to thank all the BSD developers, especially all the ARM folks. There were a lot of really useful posts on the various mailing lists, and most of the developers have blogs, so there's a lot of really good information out there. I found that really useful when I was getting up to speed on this stuff; if you just Google something, you'll probably come across one of the developers' blogs. There's good stuff out there. So, it's a lot of information, it was kind of like drinking from the fire hose, but I hope that I gave you guys at least some interest and at least a foundation so you can start hacking on ARM for yourself. You can just grab some hardware off the shopping list and a BSD of your choice. Some of the things that you could do: you could port to a new board, or add driver support, or fix some bugs. I know that FreeBSD is working towards making ARM a tier one architecture; there was a wiki page about things they wanted done, so you could help out with that. NetBSD and OpenBSD are always looking for more driver support and board support. If you're really enterprising, you could port DragonFly to ARM. I think one of the developers is actually an ARM employee, so maybe you could talk to him; actually, I think that would be cool. Some of those things I would be interested in. Another one would be getting BeagleBoard-xM support in mainline for FreeBSD; that would be cool. Crochet had, I think, a wish list item on there; I think when you build a Crochet image you have to do it as root, and there was a thread on that, so you could help out with that. But there's a lot of stuff you can do. There's definitely a lot more development happening on Linux on ARM, just because it's a big ecosystem, but there's a lot happening on BSD too, and you could be a part of it. If you have any questions about ARM, come talk to me, I'll be around, and you can email me. With that, I'm happy to take any questions. Two questions.
I noticed you were using something; what are you using for your serial connection? Yeah, so I'm using this cable from Adafruit. After the talk I can show you where to get it. I don't know, it's like a ten-buck cable or something. Second, do you have time after this to talk privately about some problems I'm having on ARM? Sure. Sure. Absolutely. Okay. One thing that would be really nice is if we actually got an outline of this as a sort of "getting started on ARM" document in the handbook or some of the BSD documentation. Oh, cool. Sure. We should talk afterwards, and other people who want to can join in. Okay, definitely. Sounds good. What other questions are there? Okay. I'll second that one. Cool. Second, awesome. So if you're building for different ARM-based devices, is there stuff that's shared between the kernels, or should you just be trying things on ARM in general, or does it have to be built for the actual board you're running on? Yes, I mean, a lot of that is dependent on the configuration stuff. Some of the code is going to be specific to your particular hardware: there's ARM-generic stuff, there's Cortex stuff, then there's stuff that's for your specific SoC and for your specific board. So some of the code will be shared, some of it will be ifdef'd, and a lot of it's driven from the configuration files. So if you look at files.arm and std.arm, that basically defines which files will get built for each SoC and for each board. Cool. All right, thanks a lot. Thank you.
In this talk, I'll discuss how BSD kernels interface with the ARM processor. I will cover the kernel internals of the FreeBSD and NetBSD ARM ports, focusing on ARMv7 primarily. I will discuss how booting, memory management, exceptions, and interrupts work using plenty of BSD code. This talk is meant to be a quick start guide for BSD hackers who aren't familiar with the ARM architecture.
10.5446/15131 (DOI)
Stanford University. Okay, so, fermionic strings have been a part of the subject since almost the beginning, not quite the beginning. And as I said, they solve two important problems. I forgot what they are, though. So they give you fermions, and they get rid of the tachyons. They leave you with ten dimensions instead of twenty-six. So they don't solve that problem. That problem turned out to be less of a problem and more of a feature, feature means a good feature, than people had expected at the time. But we'll come to it, and it's a subject known as compactification. Using the extra dimensions and doing something with them to make them innocuous, or if not innocuous, an interesting feature of the theory. But we'll come to that not now, though. What I wanted to talk about a little bit was not really historical, but the scattering of strings. The subject really began with studying scattering of particles. Elementary particle physics was always about scattering of particles, not because it's the most interesting phenomena that can happen. It's not. It's rather dull. You send some particles together, and a bunch of junk comes out in all sorts of directions. But it's about all we can do in the way of experiment. And so we try to unravel from the scattering data what was going on inside the collision. And inside the collision, of course, means the properties of particles and so forth. So the natural tool of experiment, the scattering, and the natural thing that a theorist would ask is if I have a theory of particles, how do you compute the scattering amplitudes? The scattering amplitudes, what is a scattering amplitude? A scattering amplitude, you have some incoming particles that are part of your incoming information. I'm going to have time running this way tonight, horizontally instead of vertically. I don't know why. Variety. Particles come in, something happens inside a black box, and particles go out. Not necessarily the same number of particles. The particles come in, and they carry momentum. Of course, they carry other things. They carry spin. They carry charge. They carry labels, like, for example, is it a muon, or is it a whatever it happens to be? But let's simplify the story and ignore everything except their momentum. So particles come in, and they carry four momentum. Four momentum means energy and momentum. Let's write down what a four vector of energy momentum is. The energy and the three components of momentum. Of course, if we're working in 26 dimensions, we have 25 components here, but I'll just write down four of them. That's a four vector, a relativistic four vector. Each particle has a momentum, and we're going to call it k. k mu, where mu goes from one to four, from zero to three. This is usually called zero, one, two, three. Doesn't matter. Four components of momentum. Now, what do we know about the four components of momentum of a particle? They have something to do with the mass of the particle. Well, of course, they are the energy and the momentum. What's the relationship between energy and momentum and mass for a relativistic particle? Anybody remember? c is one. We will take c equals h bar equals one. What's the connection between the components of energy, momentum, and mass? e squared, e squared equals p squared plus m squared. Here, let's write it the following way. e squared minus p squared equals m squared. 
Just in order to keep my notation consistent with the notation physicists have used for many, many years, I am going to write this as p squared minus e squared equals minus m squared. In this formula, I've just taken the negative of it. Now, that can also be written in terms of the components of k. In terms of the components of k, what is it? It's, sorry, it's k vector squared minus k naught squared, naught standing for the time component, e: the spatial component squared minus the time component squared. This is often just called k squared. Just call it k squared. The left-hand side here is the square by definition; it's the definition of the relativistic square of a vector. It's called p mu p mu, or to simplify it, let's just call it k squared. It's the space component squared minus the time component squared; that's called k squared. And for every particle, k squared is equal to minus m squared. You can't vary k squared. Of course, k squared consists of the energy and the momentum. When you say you can't vary k squared, it doesn't mean you can't vary the momentum. It means that when you vary the momentum, the energy varies in a certain way, and the way that it varies is that p squared minus e squared, or in terms of k, the k vector squared minus k naught squared, equals minus m squared. So that's the first thing, and it's not even about collisions, it's just about particles. You characterize them by their four momentum, three components of which are independent; the fourth component is subject to this constraint. Now we put in a bunch of momenta. Let's call this k1 for the first particle. Is that the way I label them? I like to keep my notation straight. Yeah, I think I call this one k1, this one k2. And then outgoing over here: we'll call the incoming momenta k, and I'm going to call the outgoing momenta q. These are also four momenta; I'm going to call them q3, q4, dot, dot, dot. Let's take the very simple case in which two particles go to two particles: q3 and q4. These particles come in, those particles go out. How do you represent momentum conservation, momentum and energy conservation? Momentum and energy conservation are simply that k1 plus k2, thought of as four vectors, is equal to q3 plus q4. All components: the space components give momentum conservation, the time components give energy conservation. Now, because of the perversity of physicists, what physicists like to do is to redefine the outgoing momenta and think of them as incoming momenta. Now that's crazy. The outgoing momenta are outgoing, the incoming momenta are incoming, but to do it, all we really have to do, to make it symmetric with respect to incoming and outgoing momenta, is to take each q, q3 and q4, change its sign, and call it k: k3 is minus q3, and k4 is minus q4. That means changing the sign of its energy, and changing the sign of all of its momentum. It's a trick to be able to write this in a symmetric form. Instead of writing, well, let's see what it is, k1 plus k2 minus q3 minus q4 equals zero, it now becomes k1 plus k2 plus k3 plus k4 equals zero. You treat all particles as incoming, but you have to remember that the outgoing particles are labeled with minus their actual momentum. But once you do so, momentum conservation is completely symmetric between the four particles. A useful trick. It's a useful trick that keeps the labeling especially consistent.
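In symbols, the bookkeeping just described, with all four momenta treated as incoming and the outgoing ones flipped in sign, is simply:

\[
k_1 + k_2 + k_3 + k_4 = 0, \qquad k_3 = -q_3, \quad k_4 = -q_4, \qquad
k_i^2 \equiv \vec{k}_i^{\,2} - (k_i^0)^2 = -m^2 \quad (i = 1, \dots, 4).
\]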
Notice that when you change the sign of the momentum to redefine the momentum with a minus sign, it does not change the fact that the square of the momentum is equal to minus m squared. So we have, in this particular process we have four momenta, think of them as all incoming, although the energies of two of them may be negative, the outgoing ones, each one of them subject to this constraint. The question that a physicist would ask about this collision is what is the amplitude? What is the probability? The thing, the amplitude is the thing that you square, the complex number that you square to find the probability for that collision. But the collision is a function of a number of variables. It's a function, or the probability for the collision is a function of the momenta of the incoming particles and a function of the momentum of the outgoing particles. So it's a function of the k's. Let's call that amplitude a. It's a thing that you square. And it's a function of all of the k's, k1, k2, k3, k4. But there's some redundant information here. First of all, the momenta have to be conserved. Second of all, the square of each k has to add up to minus the mass squared. So there's really too many variables here. There's too many independent variables. How many independent variables are they? Before we impose any constraints, each momentum has four variables. This is four plus four is eight, plus four is 12, plus four more is 16. That's a lot of variables for something to depend on. Fortunately, you really don't depend on that many variables. Let's think about physically now how many variables there's a scattering amplitude or a scattering process to depend on. Well, the first thing you can do is whatever the momenta are, you can use relativity to go to a frame of reference where the center of mass is at rest. In other words, where the two momenta, the two space components of the momentum are equal and opposite. You can always do that. If the particles are both moving down the z-axis, well, you just move fast enough to be halfway in between them. If they also happen to be moving in some other direction, just move in that direction, you can always go to a frame of reference where the particles are equal and opposite, the space components, in other words, the spatial momenta. Next, you can always rotate the system so that the momenta are coming in along the x-axis. What's left over in the initial state? What does it depend on if you know that the momenta are equal and opposite? What does the whole thing depend on? What does the initial state depend on? The initial state only depends on the magnitude of the momentum, not the magnitude of the square of the momentum, I mean, sorry, not the magnitude of the four vector. The four vector has to have magnitude m squared, but the magnitude of the space momentum or the energy, if you like. Once you go to the center of mass, the only thing left over is the total energy of the collision in the center of mass frame. So that's one thing, e center of mass. Now let's suppose that our particles are all of the same kind for simplicity. The particles collide, what can you say about the outgoing momentum? First thing is they have to be equal and opposite. Why? That's momentum conservation. Moment conservation says they have to be equal and opposite. What about the energy of the outgoing state? Has to be the same as the energy of the incoming state. What is the only thing that can differ between the incoming and the outgoing particles? 
The angle, that's it: the angle of scattering, theta. So the scattering amplitude, although it's written in terms of 16 variables, really only depends on two, two independent variables. Those variables should be expressible in terms of relativistically invariant things. We can think of it in the center of mass, but we also ought to be able to think about it in any frame of reference. We should be able to construct two independent invariants, relativistic invariants, which are enough to completely characterize the scattering. So let's talk about the invariants describing a scattering process. Two particles come in, two particles go out. Let's think of them as all coming in, changing the sign of the momentum. What is this? K1, K2, K3, K4. What can you do with a four vector? What can you do with four vectors to construct invariants? There's only one thing you can do, really. You can square them. That's about all you can do. But you don't have to be talking about K1 squared or K2 squared. You could be talking about K1 plus K2 squared, right? That's a good invariant: K1 plus K2, as a four vector, squared. Now remember what that means. That means the space components of K1 plus K2, which are labeled with arrows, squared, minus the time components, which means the energies. Let's label them: K naught means energy, so it's K naught 1 plus K naught 2, squared. You sum the momenta and square it; you sum the energies, square it, and subtract. And that's called K1 plus K2 squared. Let's see if we can figure out what it is. It's an invariant quantity. Let's see if we can figure out what it is by going to the center of mass frame. In the center of mass frame, the momentum of this particle, what should we call it? Well, it's K1, the space component of K1. What about the momentum of this particle? It's equal and opposite, minus K1, right? So, the total space component of the momentum in the center of mass frame is zero. We don't have to worry about this one. What about this one over here? That's the energies. We didn't talk about it, but what about the energies of the two particles in the center of mass frame? They're equal. They have the same mass, and the center of mass frame is one in which the particles are perceived as moving with exactly the same magnitude of momentum, so the two energies are the same. So this just becomes minus the quantity twice K naught, squared. Very simple. This is the center of mass energy here: in the center of mass frame, you add the two energies, that's the total energy, and the square of it is just the square of the center of mass energy. So this quantity here, which is called minus s, it's just called minus s, is nothing but minus the square of the center of mass energy. Just look at it: in the center of mass frame, this part is zero, and this part is just the total energy. So s is just the square of the center of mass energy; the square of the center of mass energy we call s. Now can you think of any other invariant that you can build? How about K3 plus K4 squared? We took K1 plus K2 squared; how about K3 plus K4 squared? It's the same thing. Why? Because K1 plus K2 is minus K3 plus K4, by momentum and energy conservation. So we don't get anything new there. K1 plus K2 squared is the same as K3 plus K4 squared. It's the energy, and it just says that the energy in the initial state is the same as the energy in the final state. What about K1 plus K3 squared? That could be something new. So let's see what it is.
Let's see if we can figure out what K1 plus K3 squared is by working in the center of mass frame. In the center of mass frame, all the particles, in and out, have the same energy. Particles just get scattered through an angle. That's all that happens. They come in and they go out with the same energy, but they scatter through an angle. So let's take K1 plus K3 squared. Here it is: K1 plus K3, squared. In the center of mass frame it's the same as K2 plus K4 squared, but that doesn't help; I want to know what it is, how big it is. All the particles have the same energy, incoming and outgoing. All they do is scatter through an angle. Zero. Zero. Why is it zero? Why not twice the energy? Because K3 and K4, we flipped the sign on them, so this term isn't there, and this is just K1 plus K3 squared. So let's see what that is. K1 comes in, particle 1 comes in, and of course also particle 2. Particle 3 goes out. Here's particle 3: one comes in, three goes out. But if we label these particles with the Ks, K1, and let's call this one now Q, Q3, let's make it an outgoing particle, then really what this is is K1 minus Q3, squared. It's the difference of the momentum of the incident particle and the final particle. It's called the momentum transfer. If you were to think of particle 1 and particle 3 as the same species of particle, then in the collision it's just the momentum transferred from 1 to 3. Bing, bing, there's a momentum transfer, and that's the momentum transferred from 1 to 3. That's what this is, the momentum transferred. It can also be expressed in terms of the angle of scattering. I will tell you what the formula is. The formula for K1 plus K3 squared is just twice the energy squared minus the mass squared; we already worked out what the energy is, that's the s variable. It's twice e squared minus m squared in the center of mass frame, times 1 minus the cosine of the angle of scattering. This is the interesting thing here. The angle of scattering is the angle between 1 and 3. K1 comes in, Q3 goes out, the particle gets deflected through an angle, and it's that deflection angle, here it is, theta, the deflection angle of the particle between the incoming state and the outgoing state, and that's what's here. Why is that interesting? Well, of course, that's what's measured in an experiment. You put detectors in different places, you scatter particles, and you find out the probability for them to get deflected through an angle at different values of the energy. This variable, the K1 plus, let's see, what did we do here? I'm sorry, the definition of s, I forgot to erase the s over here, the definition of s came from K1 plus K2 squared. That was the definition of it: one plus two coming in, and it was the square of the center of mass energy. So s equals the energy of the center of mass, squared. K1 plus K3 squared, that's called, anybody think of a name for it? T. T, good, T. It is called t. I knew you were going to get that. And that is some combination of the energy and the momentum transfer. It's the center of mass energy squared minus the mass squared, big deal, we already know what the center of mass energy is, times one minus the cosine of the angle of scattering. All right, so we have K1 plus K3 squared. How about K2 plus K4 squared? Is that different? Well, if you go back up to here, K1 plus K3 is minus K2 plus K4.
So K1 plus K3 squared is the same as K2 plus K4 squared. It's also t. Is there another combination? We have K1 plus K2, we have K1 plus K3, K1 plus K4, right? That's another one. All right, so one more quantity is K1 plus K4 squared. What can that be? Well, if you were going to give it a name, what would you call it? U. Minus u equals that, just like minus t equals K1 plus K3 squared. These are the s, t, u variables. But what's going on here? There are only two quantities that the scattering depends on, the energy and the angle of scattering, yet we seem to have three independent things. Well, the answer is there are not three independent things. If you use momentum conservation and you use the fact that each k squared is minus m squared, what you'll find is that s plus t plus u is equal to four m squared, or something like that. There's a constraint among them. There are not three independent ones, only two. These are the two interesting quantities; the third one is dependent on the other two. Do you have to consider the vacuum at all in these collisions? What does that mean? Is the vacuum going to participate in the collisions? I'm not sure what that means. The vacuum is all over the place. It participates like all hell. What does it mean? What do you mean by participate? Maybe the vacuum could just slow down a particle going through free space. Well, we know that doesn't happen, don't we? It's momentum conservation. Okay, well, just a wild idea. Yeah, it could be. I mean, it's a possible world, a vacuum with friction; it's just not our world. Momentum conservation says that particles aren't slowed down by the vacuum. At some point, I think you said that Q3 and Q4 have the same energy. In the center of mass frame. In the center of mass frame. Why can't there be, I mean, K1 and K2, the center of mass frame is kind of defined that way. Yeah. But then the center of mass frame is defined as a frame where the momenta are equal and opposite. If the momenta are equal and opposite, the energies of the particles are the same, if their masses are the same. Now they scatter, but they have to go out equal and opposite for momentum conservation reasons. Yeah. So, all right, these variables, s, t, and u, the other one which is there, are called Mandelstam variables, and they're very symmetrically defined. Mandelstam is a currently active physicist in Berkeley, but this notation comes from the early 60s, the very early 60s: Mandelstam variables. And notice what they have. They have this beautiful symmetric structure. That's what's interesting about them. Now let's talk about Feynman diagrams for a moment, and I'll just tell you what the answer is for certain Feynman diagrams. Let's suppose two particles collide. This is a Feynman diagram. They create a third particle of a different mass in here; let's call it capital M. And then those particles go out like this. The particles come in together, they coalesce to form a second particle, or a third particle or whatever, and then that particle decays and it goes out. That's a Feynman diagram. That Feynman diagram has a value, and I'm going to tell you what it is. It's the product of two coupling constants; there are always coupling constants in Feynman diagrams. And then there's the propagator of the particle in between, and that has a very simple form: one divided by s minus capital M squared. It has an s minus capital M squared in it.
That's the characteristic structure of a scattering amplitude where two particles come together and merge and form another particle. And notice it's a function of s, the energy of the process. It doesn't depend on t. It doesn't depend on the other variable. You want one that depends on the other variable. You draw a similar diagram, except in which instead of one and two merging, you have, I'll draw it over here, one and three merging. One, two, three, four. It looks like exactly the same diagram except turned on its side. But it represents a very different kind of physical process. One and two come in, exchange a particle between them, and then go out as three and four. What would you guess if you had to guess the amplitude for this process? It's going to have g squared again, the g's are the vertices. What else? Exactly, one over t minus M squared. Now you can imagine a third process which would be one over u minus M squared, and it would look like this. One, let's see, that's a little hard to draw, one. It looks like this. One and four merge, one and four merge, little, you have to switch the lines like that. Let's ignore it. It's there, it's there, and it's important. But it's too hard to draw. I don't like drawing cross. They're perpendicular to the other two? They're not perpendicular, it's just a question of which prior particles came together, and I hate drawing cross lines on the blackboard so we won't. But there is, in principle, there can be another one. This one depends only on the energy. Doesn't depend on the angle of scattering altogether. T contains the angle of scattering. S is only a function of the energy. That means that when this process happens, to say that it doesn't depend on the angle of scattering means that every angle of scattering is equally probable. When the particles go in, they have equal probability of getting deflected through any angle or whatever. That seems odd, but that is the property of this process. And the reason is very simple. The particles come in from some direction, they form this compound state, and then when they decay, they've forgotten which direction they came in from. They come in, they form the composite, and then when the composite decays, it decays at an arbitrary angle uncorrelated to the initial directions. That's this process here. This one is different. It depends on the energy, but also on the angle of scattering. This is the one that depends on the angle of scattering, and if you work it out, it's easy to see that it favors small angle scattering, this favors large angles. So this one depends on angle, and this one depends. But notice how similar the two expressions are. There's a real symmetry between them. In fact, if you interchange S and T, the scattering doesn't change. This is a property of relativistic scattering amplitudes that they have this kind of symmetry. They sort of forget which were the incoming and which were the outgoing particles, but only if you express it in terms of these kind of invariance. Good. So now we have some basic idea of what it is that a theoretical physicist wants to compute about a model of particles to compare, not because he's so interested in it, but it's the thing that he can give to the experimentalist and say measure the probabilities for scattering as a function of energy and so forth, and here's my prediction. Okay. Are you saying that the dependence of S, T, and U causes this amplitude to what, shift around if you, I mean you still have three terms, but. 
But the value shifts around; it depends on the angle of scattering. All right. So if all that was there was this, this would favor small angle scattering and, not forbid large angles, but make them less likely. Let's see if we can see why. I think here it is. I think that you have a third term. The third term is there. What about it? Oh, well, you're saying you didn't want to write it up there, you didn't want to write the diagram. I didn't want to draw the diagram, but if there was a process in which one and four could come together and do the same thing, then we would want to put in a g squared times one over u minus M squared. And because the three are not completely independent. Yeah, they're connected. Yeah. Then that adds back in. One over u minus M squared; okay, so what is u? It's interesting. u is equal to the center of mass energy squared minus the mass squared, times one plus cosine theta. So they're clearly related. If you know s, you know the energy; therefore, if you also know t, you know the angle of scattering, and this is just a function of the energy and the angle of scattering. So they're all connected with each other. Another way to say it is that if you add t and u, you'll cancel out the angle altogether and you'll get just a function of the energy, which is clearly dependent only on s. But these are separate Feynman diagrams, and in general you have to add them all. Okay. But I want to focus on this here. This does not describe the scattering of mesons, for example, very well. It gives a poor description of the scattering of mesons. And the reason is simple. The reason is that there are many, many different particles that can be produced when two mesons collide, whole stacks of them with different mass and different angular momentum. We haven't described what the angular momentum would do here. What the angular momentum would do is change the dependence on the angle of scattering; let's not get into it. Formulas like this are too simple to describe the realistic scattering of mesons. It's too easy to excite higher vibrations of these particles here. And when two particles collide, they could make some ground state, they could make some first excited state, they could make some next excited state; they can make a whole raft of different particles that can go in there. And people in the 60s, basically beginning sometime around 1965, 66, 67, tried to concoct amplitudes with interesting mathematical properties that would represent all of the possible particles which could go in here. This was done by trial and error. It was just making up formulas to try to represent the different particles which could go in here and the different particles which could go in here. The first attempt tended to be to add particles. This is called an S-channel process. Why? Because it involves one over s minus M squared: particles coming together, K1, K2. That defines s. This is called the T-channel process, K1 and K3 coming together. So they began by saying, let's just add more and more stuff into here, the s-channel, and let's add more and more stuff into here, the t-channel, for all the particles that could be scattered. Well, that lasted for a certain period of time, adding up the various particles which could go in there.
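To collect the last few paragraphs into formulas, in the conventions used here (all momenta incoming, each k squared equal to minus m squared, E the energy of each particle in the center of mass frame, and theta the scattering angle), the invariants and the three single-particle-exchange contributions are, schematically:

\[
s = -(k_1+k_2)^2 = E_{\mathrm{cm}}^2, \qquad
t = -(k_1+k_3)^2 = -2\,(E^2-m^2)\,(1-\cos\theta), \qquad
u = -(k_1+k_4)^2 = -2\,(E^2-m^2)\,(1+\cos\theta),
\]
\[
s + t + u = 4m^2 \quad \text{(equal masses)}, \qquad
A(s,t,u) \;\sim\; \frac{g^2}{s-M^2} + \frac{g^2}{t-M^2} + \frac{g^2}{u-M^2}.
\]

The s-channel term depends only on the energy, so by itself it gives an angle-independent distribution, while the t- and u-channel terms carry the angular dependence through the cosine of theta; that is the distinction being drawn above.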
And then people tried to find more comprehensive formulas just by, I don't even know what the right word is, a kind of curve fitting, but a very sophisticated kind of curve fitting, and a rather dramatic formula was discovered which contained the physics of all of the particles in the s channel and all of the particles in the t channel. It just replaced this combination here, with not just one particle but many of them, whole towers of them, with a formula which, this is of historical interest mainly, with a formula which I will write down and you can explore its beauty, its great beauty. And it is very, very simple. It's called the Veneziano amplitude. Young Italian physicist; he was young at the time, he's not young anymore, but he's still Italian. I don't know how he made this guess; he just randomly wrote down some things which had some right properties and which really did look like adding up things like this. It was a function of s and t. Everybody know what the gamma function is? Now, the gamma function, you don't need to know it, but I'm just going to write down the formula for fun. The gamma function is a generalization of the factorial function. The factorial function is only defined for integers. The gamma function interpolates between the integers. For integer n, gamma of n is equal to n minus one factorial. Now, it's defined for non-integers too; it's defined by an integral, and it continuously interpolates between the integers. This is the Veneziano amplitude, multiplied by the coupling constant squared. If you examine this amplitude, you'll find out, I'm not going to go into its mathematics, its mathematics is very simple. All you have to know is about gamma functions, and if you want to explore it, it's actually quite a simple construction. But I'm going to tell you what it looks like. It looks like an amplitude that you would make by summing up a large number of particles, this representing some composite particle that could be in there, of different masses in the s channel. But it also looks like, oh, notice that it's symmetric. It's symmetric with respect to s and t, exactly like this is. The peculiar thing about it is you can represent it as a kind of sum of Feynman diagrams with all the particles in the s channel, diagrams like this. But because it's symmetric under s and t, you can also represent it, you don't add, it is also equal to the same kind of thing going this way. Something odd was afoot about this formula. It had all the important features that the scattering amplitude should have. It could be analyzed as if a whole bunch of particles were produced and then decayed, but it could also be analyzed as if a whole bunch of particles were exchanged. This was something new. This had not been seen before. Previous to this, everybody would have added contributions for s and t. And this thing replaced that. The question was, what is this? Where does this come from? What kind of physics gives rise to this? What kind of physics can you imagine would give rise to this, to a formula like this? The answer, of course, turned out to be string theory. The invention and discovery of string theory was just looking for a physical model which would give this as its answer for the scattering of two mesons or two particles. I'm going to tell you, without a lot of drama, what physical model gives rise to this scattering amplitude. I'll show you a little bit.
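For reference, since the transcript does not reproduce the board, the formula being described has the standard form below. The linear Regge trajectory alpha(x) is the conventional way to write it; in the simplified units used later in this lecture it is effectively just alpha(x) = x.

```latex
A(s,t) \;=\; g^{2}\,
\frac{\Gamma\!\left(-\alpha(s)\right)\,\Gamma\!\left(-\alpha(t)\right)}
     {\Gamma\!\left(-\alpha(s)-\alpha(t)\right)},
\qquad
\alpha(x)=\alpha(0)+\alpha' x,
\qquad
\Gamma(n)=(n-1)!\ \ \text{for integer }n .
```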
I'll show you what the logic that went into it was. It wasn't very hard to guess that this was a theory of strings. It wasn't too hard just because, well, it wasn't too hard. You said exchange particles versus… The s-channel and t-channel exchange, direct production and exchange, sometimes called the direct channel and the cross channel. The s channel is called the direct channel. That's where the particles come in, coalesce, and then go out. The cross channel is when the particle jumps across from one side to another. I'll show you what the physics of that formula is. You begin with two strings. I'm not going to do a calculation, but I'll show you everything that went into it. It's a little bit tedious, what you have to know about. You have to know a lot about harmonic oscillators, and that's all. Nothing much more than that. You start with two strings. Here's a string. Now remember, the string has a coordinate along it which we called sigma. Let's draw the sigma axis over here, sigma from zero to pi. Here's the string. But the string propagates in time. Let's draw time horizontally. Here's the end of a string at sigma equals zero, sigma equals pi. These are open strings. The string of course moves around in space-time, but it's always located between zero and pi. That's not spatial position. That's just its parameter along the string. The material of the string, the particles that make it up, are in here. Each point in the history of the string, think of the string as sweeping along, here it is. It's sweeping along some space-time sheet. It's called a world sheet instead of a world line. Each point in that world sheet is characterized by a point sigma and a time which is called tau. Tau goes this way. It's a time, and sigma is a coordinate along the string. Not a real spatial coordinate, just a label for labeling points along the string. Now, is this the time that we've used previously, the infinite momentum time? But that's not what's important. What's important is that the idea of a world line becomes a world sheet. And instead of being parameterized by a single variable tau, it's now parameterized by two variables. Each point in here has a position in space-time, x, x mu, or just x. And we've already worked out what the equations of motion of x are. They are the wave equations describing waves moving up and down the string. The wave equation. Let's write it. This is d second x by d tau squared minus d second x by d sigma squared equals zero. That's the wave equation that described the oscillations of the string. But we don't even have to think about this. We can just imagine that this string is a collection of a large number of particles. Replace the world sheet by a bunch of world lines, narrowly spaced, with springs between them. With springs between them. That's the picture of the evolution of the string. Now what we're going to do is begin with two particles. We're going to have two particles coming in from the past. And we're going to put them right next to each other. Let's see. I think we need another color for the second particle. We're not going to put them right next to each other in space necessarily, but just in the parameter space. Here this goes from zero to pi. We need to put another one in, also going from zero to pi. Here's the other string. There's no meaning to the fact that I've drawn them right next to each other. I've just drawn them. Their actual position in space-time might not be adjacent.
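The "world lines with springs between them" picture lends itself to a tiny numerical sketch. The following is only an illustration under simple assumptions (unit masses and spring constants, free ends so the endpoints of the open string can move); it integrates the discretized wave equation d2x/dtau2 = d2x/dsigma2 for the transverse displacement of the beads.

```python
# Minimal beads-and-springs model of an open string (illustration only).
import numpy as np

N, k, m, dt, steps = 50, 1.0, 1.0, 0.01, 2000
x = np.sin(np.linspace(0.0, np.pi, N))   # initial transverse displacement profile
v = np.zeros(N)                          # initial velocities

def accel(x):
    """Discretized d2x/dsigma2 with free (open-string) ends."""
    a = np.zeros_like(x)
    a[1:-1] = (k / m) * (x[2:] - 2.0 * x[1:-1] + x[:-2])
    a[0] = (k / m) * (x[1] - x[0])       # each endpoint feels only one spring
    a[-1] = (k / m) * (x[-2] - x[-1])
    return a

for _ in range(steps):                   # velocity-Verlet (leapfrog) integration
    v += 0.5 * dt * accel(x)
    x += dt * v
    v += 0.5 * dt * accel(x)
```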
So this just parameterizes, this half parameterizes the world sheet of the left particle. This half parameterizes the world sheet of the right particle. And they might be far away from each other. That means the x's over here may be very different than the x's over here. Now with any luck at all, with some probability, the end of the string might touch the end of that string. And when it does, they can coalesce. That's an assumption, that they can coalesce. But that's the basic process of string theory, that they can coalesce and then form a single string. Now they really are connected. If you like, a new spring developed: when these touched each other, a spring appeared connecting the last particle of this string with the first particle of that string, and the whole thing becomes one string. That condition persists for a while, it persists for a while until a quantum mechanical event happens and the strings separate again. Randomly, but you can transform things around cleverly to make the break be at the same place along the string. There's enough symmetry, enough symmetry of the equations, that you can put this point at the same horizontal level as this point. So what is the nature of this process? Given this, it's possible to guess what the answer is for the amplitude. The important quantity in here is the amount of time that the string spends coalesced. It's the amount of time that the compound state stays before it breaks up into its final constituents. And let's just call that tau now. Let's call that, from this time to this time here, tau. I'll tell you what we do with it. In the end, we integrate over it, but let me tell you what you put in. If you're interested in a quantum mechanical amplitude, you want to start with an initial state. The initial state, let's start with one particle. The initial state can be thought of as the state of a whole bunch of little points, namely the points that make up the string, x1 through xn. Let's start with the first string, x1 through xn. It's just a collection of mass points, x1 dot dot dot through xn. And the wave function of it is just a function of x1 through xn. That's its wave function to begin with, at the start. What do we know about the wave function? Does everybody understand why a wave function is a function of the n positions of the particles? That's quantum mechanics. Quantum mechanics says wave functions or state vectors are functions of the positions of the constituents. What's the probability to find the particles at positions x1 through xn? Psi star psi. So that's what this is. This is the wave function of the starting assemblage of particles. Of course, I'm purposefully not taking the continuum limit, to show you what goes on. At the end, you have to take the continuum limit, but let's not do that. What do we know about this? Well, the first thing we know about it is that this particle comes in with momentum k1. I'll tell you exactly what that says. That says that this wave function contains a factor e to the i k1 times the center of mass position. E to the i k x is the wave function of a particle with definite momentum. What momentum do you use? You use the center of mass momentum. So what is the center of mass position? What's the center of mass position of these points? It's the average position, the sum of the x's divided by n. So you have x1 plus x2, up to xn, divided by n. That factor, and that factor alone, tells you that this initial particle here had momentum k1.
Now, what about the rest of the wave function, which depends on the relative coordinates, not the sum of them, but the distances between the neighboring particles? That's some wave function which characterizes the ground state. It depends on everything except the sum of the x's. It's some wave function, let's call it psi naught for ground state, and it depends on the x's, the same x's, but it actually doesn't depend on the sum of them. The sum of them appears here, in the plane wave factor; the differences between them appear here. Differences of x's, neighboring x's and so forth, here; the sum of them here. And this wave function is computable. It's just the ground state, describing the ground state of all the harmonic oscillators making up the string. And it can really be worked out. With enough room on the blackboard, I could tell you exactly what this function is as a function of a collection of x's. It's not very hard. It's a bunch of exponentials. It's workable, you can do it. This is the wave function of the first particle. What about the wave function of the second particle? It's exactly the same kind of thing, except it doesn't depend on these coordinates, it depends on the coordinates of the red particles here. So let's write it down. It's e to the i k2. Now, what shall I write? Shall I write x1 through xn? No, those are the original constituents of the first string. I want the constituents of the second string. So let's write xn plus 1, xn plus 2, all the way up to x2n. The second half of the particles is grouped together into the second string, also divided by n, also times psi naught, this time of xn plus 1 through x2n, to the end of the chain. Can that be read? It is read. But can you read it? OK, you can read it. Now, that's the initial state. But now you say, with some probability the two endpoints merge. To say that the two endpoints merge simply says that you set xn equal to xn plus 1. You look for that piece of the wave function where the two endpoints are at exactly the same point. So you begin with this. The next step is to say, let the nth particle on the first chain be at exactly the same place as the n plus first, the same place as xn plus 1. So we're going to put xn in here. All the others are left unchanged. That's now the wave function of the state right at the point where the two strings have coalesced. When they're coalesced, at the point where they coalesce, they come together. Nothing happens to the rest of the chain, but the two endpoint particles touch. So this is, if you like, the amplitude that the chains touched. This is the amplitude that the two particles, the endpoints, touched. Now we have a new starting point, which is a function of x1 through x2n. It's a state of that many particles. And what do we do with it? We have to evolve it. We have to evolve it using the Hamiltonian. Remember what a Hamiltonian is. A Hamiltonian is a thing which updates you from one instant to the next. So you take this initial state. It's a well-defined thing, and you propagate it forward in time using the Hamiltonian. What's the right rule for updating a state from an instant to a later instant? You multiply the state by something: e to the i H t. So you take this wave function, and you evolve it. You solve the Schrodinger equation for it. But that's the same as multiplying it by e to the i times the total Hamiltonian times tau. But what is the total Hamiltonian?
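Schematically, and in notation introduced here rather than copied from the board, the construction just described can be summarized as follows: a product of two ground-state chain wave functions carrying plane-wave factors for the center of mass momenta, an identification of the touching endpoints, evolution by the oscillator Hamiltonian of the joined chain, and a projection onto the outgoing two-string state.

```latex
\Psi_{\rm in}(x_1,\dots,x_{2n}) \;=\;
   e^{\,i k_1\cdot \bar X_1}\,\psi_0(x_1,\dots,x_n)\;
   e^{\,i k_2\cdot \bar X_2}\,\psi_0(x_{n+1},\dots,x_{2n}),
\qquad
\bar X_1=\tfrac{1}{n}\sum_{j=1}^{n}x_j,\quad
\bar X_2=\tfrac{1}{n}\sum_{j=n+1}^{2n}x_j,

A(1,2\to 3,4)\;\propto\;
\int_0^{\infty}\! d\tau\;
\Big\langle \Psi_{\rm out}\Big|\,e^{-iH\tau}\,\Big|\Psi_{\rm in}\Big\rangle
\Big|_{\,x_n=x_{n+1}} .
```

Here H is the Hamiltonian of the single joined chain of mass points and springs, and the outgoing state is built the same way with momenta k3 and k4.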
The total Hamiltonian is just the collection of springs and mass points, a collection of harmonic oscillators. It's just a collection of harmonic oscillators. We know how to do this, and that gives us the state of the system after time tau. What's the last step? The last step is to project the final state, here it is, onto two separate particles again, to project it onto a state with two separated particles of momentum k3 and k4. It's very, very straightforward, if slightly tedious. You take the two initial states of the two particles; they're well-defined ground states of the particles. You insist that the two endpoints are at the same place, and that constrains the wave function. You let it evolve as a single string for a while, and then you let it break up again, and that simply means multiplying it again by some final-state wave function. I'm not going to write it all out. It's the same kind of thing. And that gives you the transition amplitude. To make a long story short, you start with the two particles, you constrain them so that the endpoints are at the same place, you evolve it, and then you project it onto the final state. A very, very well-defined thing to do. That gives you the amplitude that the strings coalesced for an amount of time tau. But how should you choose tau? Those things in black and red there, are they added together, and then you multiply? Those are multiplied together. Now, if you have two separate systems, each having its own wave function, you multiply to create the wave function of a composite. So this is just a wave function of a composite of two particles. To begin with, you don't set the last particle of one equal to the first particle of the other here. Then you look for the amplitude, you look for the piece of the wave function, where those two particles are at the same place. You say, aha, now that they're at the same place, fuse them together and evolve it as if it were a single string, a single collection of mass points and springs, for an amount of time tau, and then basically you take your scissors and just cut the spring in the middle and let it evolve after that. That calculation is quite doable, not even very hard. It's just too much for the blackboard for one morning, but I'm going to tell you what the answer is. I'm going to write down the answer for you. Don't try to remember. This is the answer. It's an integral over tau. Did we say that we integrate over tau? I think I said we integrate. Why do we integrate over tau, incidentally? Why do we add up the amplitudes for all possible times that this composite could exist? This is the Feynman rule of summing over all possibilities. The only parameter here is the time that it takes for this to break up again. Feynman's rule is sum over all paths, which in this case just means integrate over the time that they spent evolving together. That gives you the amplitude for the two particles in the initial state to become the two particles in the final state. The result is an integral. It's the integral over the time that they spend together. That's it. I'll tell you what the integrand is after a certain amount of calculation, which is actually not very hard. The integrand has a factor of e to the tau, that's the time, times s plus 1. This s is exactly this s. It's the s variable, the square of the center of mass energy of the two particles. You have a factor like that. You have another factor, which is one minus e to the minus tau, raised to the power minus t minus 1. t is the momentum transfer between the initial particle here and the final particle here.
Remember, those momenta are coded in this wave function, which I've erased. They were coded in the wave function in terms of those exponentials. Here is the center of mass energy; here is the momentum transfer. It appears in the formula. Then you have to integrate it, d tau. There happens to be another factor, I think, of e to the minus tau in the integrand. You're right. I have d tau twice. E to the minus tau, yeah. The first factor is tau times s plus 1. Yeah. No, yes. Yeah, that's what you get. Is that plus e to the minus tau? What's that? When you combine the e to the minus tau hanging out at the end, is that multiplied, or how do you multiply this? I've left it out here on purpose. I've left it out here on purpose, with the d tau. Now, that doesn't look particularly symmetric between t and s, but after it was computed, it took about a half hour to realize that you should change variables in this integral. Change variables from tau to something called z. Let e to the minus tau equal z. Let e to the minus tau equal z. Now we write this. E to the minus tau is z, so this factor becomes z to the minus, s plus 1. I was missing some minus sign; no, this must be minus here, it is e to the minus tau there. This thing just becomes z to the minus, s plus 1, by definition, just by changing variables. What about one minus e to the minus tau? That becomes one minus z, to the minus t minus 1. Oh, incidentally, this integral goes, I think, from zero to infinity. Is that right? From zero to infinity. Yes, it does. So this becomes one minus z to the minus t minus 1. What is d tau times e to the minus tau? It's just dz. It's just dz. I think I got it. It's just dz, up to a minus sign. Did I get that right? I've lost track of whether this is plus or minus here. I don't remember. But it's just dz. This is the formula. And where does the integral go? The integral goes from tau equals zero, where z is equal to one, to tau equals infinity. In other words, at tau equals zero it very suddenly merges and then falls apart instantly; at tau equals infinity the separation is infinite here. And what happens when tau goes to infinity? z goes to zero. So it's an integral from one to zero, or zero to one. There's probably some sign in here, of an integral that looks like this. The amazing thing about this integral, the whole upshot of it, is that this integral is completely symmetric between s and t. Can you see that? How do you see that it's symmetric between s and t? Right. You just substitute for z, one minus z. You make a change of variables between z and one minus z. And you see that this integral is completely symmetric. So although the starting point was completely unsymmetric between energy and momentum transfer, somehow it wound up giving a completely symmetric answer between the two of them. I lost track of the minus sign. The minus sign comes from e to the minus tau d tau going to dz. Yeah. I've lost track of it here. So you get a minus sign, and that's why you switched the order. There's a minus sign somewhere. I'm saying e to the minus tau d tau, yeah, and then you're going to make that just be dz. Yeah. So you get a minus sign. Yeah, there's got to be a minus sign out here, right? Right. Then you switch the order of integration and it becomes from zero to one, like this. That's right. That's correct. And it's symmetric whether or not you switch the order of integration.
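Cleaning up the change of variables just carried out, and keeping the exponents exactly as stated on the board (with the intercepts absorbed into s and t), the steps are:

```latex
\int_0^{\infty}\! d\tau\; e^{(s+1)\tau}\,\bigl(1-e^{-\tau}\bigr)^{-t-1}\,e^{-\tau}
\;\;\xrightarrow{\;\;z\,=\,e^{-\tau},\;\;dz\,=\,-\,e^{-\tau}d\tau\;\;}\;\;
\int_0^{1}\! dz\; z^{-s-1}\,(1-z)^{-t-1}
\;=\; B(-s,-t)\;=\;\frac{\Gamma(-s)\,\Gamma(-t)}{\Gamma(-s-t)}\,.
```

The substitution z goes to 1 - z leaves the integral unchanged while swapping s and t, which is exactly the symmetry being pointed out; the integral itself converges for s, t < 0 and is defined elsewhere by analytic continuation.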
Right. And that's the answer. It's symmetric between s and t. And what is it? It's a process in which two particles join, form a composite which wiggles around for a while and then breaks up. In other words, it's the analog of the Feynman diagram in which a composite is formed and then decays. A composite is formed and then decays. But it winds up being completely symmetric between s and t. In fact, this function is called the Euler beta function. It's a function of two variables, s and t. It's called beta of minus s and minus t. It's the Euler beta function, a famous function of mathematical physics. And guess what it's equal to? It's equal to the Veneziano amplitude. It's exactly equal to the Veneziano amplitude, namely this product of gamma functions over another gamma function. How did it get to be that it was symmetric between the two? Now, this is not obvious at all. It's symmetric between the s channel and the t channel. It's symmetric between the s channel and the t channel. I'm going to come to that next time. That has to do with a fundamental, extremely deep symmetry of string theory called conformal symmetry. It's a symmetry which allows you to take these world sheets and deform them in crazy ways, as if they were Turkish taffy, and stretch them out in different directions, and, for example, turn this picture into a picture which looks much more like two particles coming together and exchanging something. That's its character. Yeah? So the gamma function is reminiscent of the probability function, I think it's factorial A times factorial B over factorial A plus B, the permutations. Right. Combinatoric coefficients. Yeah. So is there a tie-in between the two? It is true that the beta function evaluated at integers is the combinatoric coefficient, the inverse of the combinatoric coefficient. You know what he's talking about. He's talking about n factorial times m factorial over n plus m factorial, which occurs all over the place in combinatorics. This just happens to be the same function. No simple connection. First of all, it's the inverse of it. The inverse of it is not a combinatoric coefficient, but it just happens. It's an integral which defines the same combination of gamma functions. There is no simple relationship. It's not that something combinatoric went on here, at least not to my knowledge. And it's certainly not the way Veneziano found it. I don't know what magic he pulled up to find it. So this was, if you like, partly historical, but part of the important logic of the theory is that, number one, you can calculate with it. All it is, is harmonic oscillators. Everything can be done with harmonic oscillators. It's a bunch of harmonic oscillators. You break the process up into pieces, and then you integrate over the time in between. You can calculate, you calculate, and you find an integral. The integral, at this point, by magic, has the property that it's symmetric between s and t, and somehow looks like, not the sum, but has the features of having processes where particles coalesce. That's the direct process I drew here, but somehow buried in it are also processes where particles are exchanged. And that was the magic of it. That was the surprise: a whole new logic of putting processes together to make new processes. Okay. Any questions about this? Yeah. I think I'm missing something. In the beginning, we described two strings, each of length n, you know, a string of n oscillators and n oscillators, and they come together into 2n, as it goes on.
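As a quick sanity check, here is a short Python sketch, not part of the lecture, that compares the beta-function integral with the ratio of gamma functions at sample values of s and t chosen below the first poles so the integral converges, and checks the s and t symmetry numerically.

```python
# Numerical check: beta-function integral vs. ratio of gamma functions.
from scipy.integrate import quad
from scipy.special import gamma

def beta_integral(s, t):
    # Integral form found above; converges for s < 0 and t < 0.
    return quad(lambda z: z**(-s - 1) * (1.0 - z)**(-t - 1), 0.0, 1.0)[0]

def veneziano(s, t):
    # Ratio of gamma functions, i.e. the Euler beta function B(-s, -t).
    return gamma(-s) * gamma(-t) / gamma(-s - t)

s, t = -0.5, -1.5
print(beta_integral(s, t), veneziano(s, t))   # both approximately pi/2
print(beta_integral(t, s))                    # unchanged: symmetric under s <-> t
```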
But remember, when you're taking the limit n goes to infinity, 2n is the same as n. Sure. But it still has the property of a boson or fermion. Oh yes, so far, yes. The question is, when you wrote down the wave equation, you had two points coalescing, which leaves 2n minus one, so it changes from even to odd; it seems inconsistent, I'm sure. You lose two, not one. Well, n and n plus one become one. No, actually n and n plus one disappear, basically. Okay. If you like, I mean, I think that's the right way to think about it. Yeah. Yeah, they eat each other. They eat each other like that. And you're left with, right? So the new spring, the new spring that forms, connects these two. Yeah, otherwise we'd have big trouble. We would lose one fermion. Right. So that's a good one. One way to think about it is that a particle and an antiparticle coalesce: if you thought of these as quarks on the ends of strings, you could think of the last one being a quark which annihilated with the first quark of the other string. So yeah, you lose two of them. But who cares about two when there's an infinite number? Yeah. Right. Yeah? Right. But you're right. You have to be careful not to lose an odd number of fermions. That's something that shouldn't happen. Yes. The next quarter, do you continue on with string theory and talk about superstrings? Well, we're not going to spend a lot of time on superstrings, although the things that I will tell you, strictly speaking, apply to superstrings and not to the bosonic string. No, I'm not going to go into the heavy mathematics of superstrings. What I think we're going to do next time is I'm going to tell you, well, I'll tell you a little bit more about string theory and about the properties of world sheets and the symmetries that allow you to stretch this thing like Turkish taffy and make different kinds of Feynman diagrams out of it. But then we're going to move on to another subject called M theory, which is another way of deriving string theory from a totally different vantage point. Well, I think we have to talk a little bit about compactification, about what you do with these extra dimensions, and we will, maybe next time. But then we will move on to a totally different origin of string theory that evolved much, much later in time, sometime around 1995, 96, which started with a completely different picture, and in which a great many of the features, the more complicated features, of string theory are completely transparent. It's called M theory, and it starts from a different starting point, which leads to exactly the same physics. So this process you just described was based on the clues from two mesons colliding? Yes, but it's also the way two photons collide in string theory. It's the way any two open strings collide in string theory. Incidentally, the analog for closed strings is very similar. You have two closed strings, and you pick a particle from here and one from here, and you require that they be at the same point so that they do this kind of thing. And then you evolve it as a single string, same rules, same kind of rules, and that would correspond to the scattering of two closed strings. If you scatter two closed strings, the intermediate thing that you make will again be a closed string. Does this only apply to strings where the particles are fermions? No, no, no, no. This calculation was originally done for the bosonic string, and of course it's more complicated when you have to keep track of the fermions also.
Extremely similar, extremely similar, some slight differences that are not important. What I should say about this is that the scattering amplitudes, we said that there was a photon and a graviton in the system. All right, it looked like there was a spin two particle and a spin one particle, and you could force them to be massless if you wanted. You didn't have much choice. There weren't enough components to make a massive particle. So you did that. Well, then you take these particles and you collide them, and you work out scattering amplitudes. Scattering amplitudes are very distinctive and characteristic for the emission of photons and gravitons. They are not just any old scattering amplitudes. They have very, very definite properties which make them extremely special. The emission and absorption of photons cannot be mistaken for the emission and absorption of scalar particles or other things. They satisfy some very, very important rules. Those rules originate from the conservation of electric charge. With conserved electric charges emitting photons, there is very, very little ambiguity in what the emission and absorption of photons looks like, or what the scattering of photons by charged particles looks like, or even the scattering of photons by photons. And it was at this point, calculating these diagrams, where it became completely clear that these things which we were calling photons were behaving exactly like photons, and the things that we were calling gravitons were behaving exactly like gravitons. That they satisfied all the rules for graviton-graviton scattering. Just saying there was a particle that looked like a graviton was a very weak thing. When all of the scatterings were constructed, and the rigorous tests of whether it was satisfying the rules for the scattering of gravitons by massive things, by massless things and so forth, fit perfectly, exactly, then people realized that they really were dealing with something that looked like the scattering of gravitons and photons. So the scattering amplitudes played a big role in establishing with precision that we were dealing with objects that did behave like photons, gravitons, and so on. Okay, I think we're finished. Yes. Where does charge come from? Where does what? Charge. Oh, yeah. We haven't talked about that, but okay, we'll talk about it next time. Remind me. Remind me.
(October 25, 2010) Leonard Susskind focuses on the different dimensions of string theory and the effect it has on the theory. String theory (with its close relative, M-theory) is the basis for the most ambitious theories of the physical world. It has profoundly influenced our understanding of gravity, cosmology, and particle physics. In this course we will develop the basic theoretical and mathematical ideas, including the string-theoretic origin of gravity, the theory of extra dimensions of space, the connection between strings and black holes, the "landscape" of string theory, and the holographic principle.
We haven't discussed units at all. And let's discuss units a little bit. Hadron physics: we've discussed the idea that a hadron is a string and that it can be excited. That it can be excited by setting it into rotation or setting it into vibration, by exciting the harmonic oscillators that make up the stringy character of the proton or whatever it happens to be. There's a certain energy scale, or a certain amount of energy, that each excitation will give you. In particular, the energy jump from the ground state to the first excited state. How much is it? What does it depend on? Well, in a string theory, there's really only one parameter. We talked about it a little bit. We discussed the idea that if you stretch a string, let me just go back a little ways, if you stretch a string from one point to another, then it behaves pretty much like a spring. It does behave like a spring. It has an energy, if this is a non-relativistic string, just an ordinary non-relativistic string, a rubber band, or an idealized rubber band. Its energy, its potential energy, let's call it E. What does Hooke's law give for the energy? Some k, which is some spring constant, times the square of the separation, the distance between the end points. I'm missing something: a factor of two. Yeah, that's the potential energy that's in a string. And we've identified energy, this non-relativistic energy, we've identified it not with the mass of the string, but with the square of the mass of the string. Why? Because we're thinking about this in a frame of reference where the string is moving very, very fast, and where non-relativistic physics actually works for the motion of the string perpendicular to the direction of motion. So let's keep that in mind. We're looking at a string that's going down the z-axis like this. It's been stretched to a distance x, or let's call it L. Distance L, so this becomes L squared. And we've identified the energy, which goes like L squared, with the square of the mass. There is some spring constant. That spring constant, I'll just call k. It says that the mass of the string, which in the rest frame, in the frame in which the string is at rest, that mass is the usual energy, is going to be the square root of that spring constant times L, perhaps divided by the square root of 2. I'm not interested in the square roots of 2 now. So the mass of a stretched string is proportional to its length times some parameter, which I've called the square root of k. The square root of k has another name. It's called the tension. It's the tension in the string, the energy per unit length. Take a string at rest; we're not looking at it in this frame in which it's moving fast. Its energy is its mass. We stretch it, we stretch it out, and the energy of it grows proportional to the length. It's like surface tension, except now it's linear tension instead of surface tension. Energy proportional to length; the coefficient is called the tension, the tension in the string. And so there's a tension, which is just the square root of k. We don't even need to call it square root of k. We can just call it the tension. Now, how much energy do we get for each oscillation? For each oscillation, oh yeah, what are the units of the square root of k? Let's work in our favorite units in which the speed of light and Planck's constant are equal to 1. In those units, energy has units of 1 over length. So if energy has units of 1 over length, what are the units of k here, of the square root of k, the string tension?
Yeah, it's energy squared, or 1 over length squared; energy squared, if you like. So this object over here, let's call it t, t for tension. That tension in the string is the thing which sets the fundamental scale. And it sets the scale for everything. It sets the scale and has units of energy squared. And I'll tell you what it does. It tells you that each excited state, each time you excite the string by one unit, it adds to its mass squared essentially that tension. So the thing which has units, the thing which determines the units of the theory, is this string tension. The bigger the string tension, the bigger the jump in mass when you excite something. The same thing is true of an ordinary spring, incidentally. If you have an ordinary spring, let's take the mass of the spring to be 1, and the spring constant, call it k. The frequency is just, well, it's the square root of k over m, which here is just the square root of k. The frequency is proportional to this tension. And of course, the energy that you bump the spring up by every time you excite it is also proportional to omega. So the energy jump, the energy jump between the ground state, the first excited state, the next excited state, and so forth, is controlled, the unit is controlled, by this string tension. OK, you could ask, what is the string tension? String tension is force per unit, or sorry, is energy per unit length. You know another name for energy per unit length? Energy per unit length. Force. Force. Force times length is energy; force times distance is work. Right. So all this tension is, is that if you pull this apart, that's how much force is pulling you back. That's how much force is pulling you back. It's the force within the string. Another way to say it is, if you took one of these strings at the surface of the earth, and you anchored it at some place and suspended a weight from it, it would be the weight that the string could support. A heavier weight would just sink, would fall down. A smaller weight would be pulled up. OK. That's the character of these strings. If you had one in the laboratory, the weight that it can support is independent of its length, incidentally. You see that from this formula. You see that from the formula that mass, or energy, same thing, is equal to the tension times the length. Force is energy per unit length. Energy per unit length is force. And so the force that this string can support is independent of its length. That's the character of these strings. OK. So then you could ask, how much weight at the surface of the earth could a hadron, a meson, support? If you could somehow anchor one of the quarks in a meson to some support over here, the ceiling, how much weight could a meson support? The answer is about a truck. About a, I can't remember if it's an 18-wheel truck or a 16-wheel truck, or maybe it's just a half-ton panel truck. I don't remember, but it's that order of magnitude. That's what a meson could support. So they're pretty strong. They're microscopic little things, but they're pretty darn strong. And the stronger they are, the higher the frequency of the oscillations, and so the larger the gap between the lowest energy state and the next energy state. How do I actually know this? Did anybody ever support a truck with a hadron? Of course not. What we actually know is the amount of energy that it takes to excite a string. And from that, we deduce what the tension is. And from the tension, I can then tell you what you can support.
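The "about a truck" claim is easy to check. The sketch below is a back-of-the-envelope version, not something computed in the lecture: the QCD string tension of roughly 1 GeV per fermi is an assumed textbook ballpark, and under that assumption the answer comes out around sixteen tonnes, which is indeed truck-sized.

```python
# Rough check of how much weight a hadronic string could support.
# Assumption: QCD string tension ~ 1 GeV/fm (standard ballpark, not from the lecture).
GeV = 1.602e-10          # joules per GeV
fm = 1.0e-15             # meters per fermi
g_earth = 9.8            # m/s^2

tension_force = 1.0 * GeV / fm             # ~1.6e5 newtons pulling back
supported_mass = tension_force / g_earth   # ~1.6e4 kg, roughly a loaded truck
print(tension_force, supported_mass)
```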
These are not the strings that string theorists imagine are associated with really fundamental particles like gravitons, photons, and so forth. Those strings are much, much stronger. Very much stronger. The tension in them is vastly larger. If, again, you took the Earth and you supported a weight from one of them, the weight you could support would be about the weight of the whole galaxy. Now, of course, this is nonsense. It's not the galaxy that's going to move. It's the Earth that's going to move. But if you could concentrate the mass of the galaxy in some small volume and make the Earth heavy enough, then you say, if I make the Earth heavier, I increase its gravitational field. Well, not necessarily so. We can make the Earth much bigger, but keep its gravitational field the same. If you could somehow keep the acceleration due to gravity the same, then the weight that you could support would be about the weight of the galaxy. So the strings that we're talking about in fundamental string theory are vastly higher in tension, much higher in tension. The meaning of that is that the energy it takes to excite the string is much, much larger. These are much stiffer strings. They have much, much larger spring constants. And therefore, much larger frequencies of oscillation; and if larger frequencies of oscillation, then larger energies to excite them. How much energy? Well, the thinking goes that it's probably somewhere up near the Planck energy. If we're talking about gravitons, it's the gravitational scale. Does everybody know what the Planck energy is? I know you've heard of it. Do you know actually what it is, first of all numerically, and second of all, where it actually comes from? Times c squared. That's a fairly big energy. It's enough to, well, that's a car bomb. Boom. How much was it? I didn't hear. How much? We didn't hear. Ten to the minus 5 grams. Ten to the minus 5 grams. And then you take that and multiply it by the square of the speed of light. But when I asked you if you know what the Planck energy is, or the Planck length, or the Planck whatever it is, I was asking a more theoretical question. I was asking, do you know what it is in terms of the other constants of nature? Maybe we should go through that. I mean, this is worth doing. Also, how long are these strings? OK, that's the other question. That's the other question. The stiffer they are, the smaller they are. Now, of course, you can stretch them out to any length. There's no limit to what you can stretch them to. But if you ask how big they are in their ground state, remember, things in their ground state oscillate. How much do they oscillate in their ground state? What's the mean fluctuation in the size of these strings, the vibrations of them? The larger the spring constant, the smaller they are. They're much stiffer, much harder to pull apart. Pulling this thing apart by one centimeter will cost a lot of energy; the bigger the spring constant is, the more it costs. So how big do people think they are? Oh, something of order of the Planck length. Planck mass, Planck length. How fast is one vibration? The Planck time. OK, so let's talk about these Planck units. There's some reason to believe it's a little bit smaller than that. Maybe a factor of 10, maybe a factor of 100 smaller, but that's a fine point. So let's talk about what the Planck length is. I didn't prepare this, so we're going to have to work it out in real time. What is the Planck length? What are Planck units, first of all? What are any units?
In physics, we need to have three units: mass, length, and time. And the usual units that we use in the laboratory are not determined by any fundamental physics. The meter is a convenient unit for measuring rope. That's where it came from, measuring rope. You measure a rope like that, a cloth, or whatever it is. So no doubt it originated from the length of a human arm. Probably. A foot, of course, is also a similar kind of unit. And I assume it came from somebody's foot, the king's foot. So what's the real physics then? Or what's the real science of what a meter is? The real science is: how many atoms does it take to make the length of an arm, a useful arm? Nothing whatever to do with any fundamental physics. It has to do with biology, how many atoms it takes to make an arm. Not a very deep unit. What about mass? Same thing: a kilogram is a kind of weight that you can manipulate in the laboratory. And a second is about the time that you could measure with a pendulum. That's it. That's where they came from. And they have no deeper meaning than that. We would like, for really fundamental reasons in physics, to choose units which have some very fundamental meaning. And if we do, then many of our equations will be much simpler. For example, if the size of a proton comes into various equations in nuclear physics, and the size of a proton is 10 to the minus 13 centimeters, if we work in centimeters, or 10 to the minus 15 meters, there's going to be 10 to the minus 15 all over. And it's not just 10 to the minus 15. It's 1.73498 something. And physics is going to be very messy. On the other hand, there's nothing to prevent us from saying, let's use the unit in which the radius of the proton is exactly 1. Then nuclear physics will turn out to look a little bit simpler. And it will. Nuclear physics will turn out to be a little bit simpler. But atomic physics won't look a hell of a lot simpler. The physics of sub-nuclear particles, sub-hadronic particles, will not look a lot simpler. There's nothing universal about the proton. The proton is just some particle, that's all. Why use the proton instead of some other particle? No good reason. So there's nothing really universal about the proton. And the rules of physics will not in any way be especially simple if we use the proton. Time, same thing. So we would like three units which have some deep fundamental significance. It's equivalent to saying we would like to choose three constants of nature and set them equal to 1. Choosing three units is completely equivalent to choosing three constants of nature and setting them equal to 1. Constants of nature now mean dimensional constants. The size of the proton is one such constant, but it's not a good one. You want to use things which are very universal, three constants. There are three constants in nature which are truly universal. And I'll tell you what they are. The first constant is the speed of light. Why do I say it's universal? Because there's a rule. And the rule is nothing can move faster than the speed of light, nothing. Every velocity of every material object is bounded by the speed of light. The use of the word every there tells you there's something fundamental about it. Whether it's protons or electrons or photons or anything else, they all are bounded by the speed of light. So there's something universal about the speed of light. Yeah? Well, I think that it's the constancy of the speed of light. The constancy of it?
Well, yes. The fact that it's the same in every reference frame, and the fact that it's the same for all particles. Now, when I say it's the same for all particles, you say, wait a minute, wait a minute. Protons don't move with the speed of light. The limiting velocity for a proton is the speed of light. So in that sense, it's very universal. So that's the first thing. We want to set c equal to 1. Sounds like a good thing to do. And we do that all the time, of course. The next thing, which is very universal, is Planck's constant. Let me give you an example. Planck's constant is not just any old constant like the radius of a proton; it's connected with the uncertainty principle. For every object in nature, the uncertainty in its momentum times the uncertainty in its position is greater than or equal to h bar. There may be some four or some two in there, I don't remember. But it doesn't matter what object you're talking about. You could be talking about an electron. You could be talking about a proton. You could be talking about a bowling ball. They are all constrained by the same uncertainty principle. And so in that sense, h bar, and it doesn't much matter whether you use h bar or h, the one without the 2 pi in it, that's not the point here. The point is that Planck's constant applies to everything in the world. It's universal. So the next thing you might want to use, not might, but we will, is to set h bar equal to 1. So what's the last really universal, is there another truly universal constant? The gravitational constant. Remember, according to Newton, every pair of objects in the universe, no matter what they're made out of, no matter what they are composed of, has a gravitational force between them, which is the product of the masses divided by the distance squared, times the universal constant G. So again, something which applies to everything. Are there units in which c, h bar, and G are equal to 1? Now that's the same as asking the question, can you combine c, h bar, and G into a combination that has units of mass? Can you combine them into a combination that has units of length? And can you combine them into a combination that has units of time? And the answer is yes. Let's see, let's do length. We want to do length. Tell you what, even better. Let's do length squared. Let's see if we can find a combination of G, h bar, and c which has units of length squared. OK, so here's what we do. Just some simple dimensional analysis. Everybody here knows how to do dimensional analysis? OK, we'll do some simple dimensional analysis. We want to find a combination G to some power, let's call it p, h bar to some other power q, and c to some other power r, p, q, r, which has units of length; I'm going to choose length squared. Let's say it has units of length squared. Now, this doesn't mean it's equal to length squared. It means it has the same units. Sometimes people put a bracket around a thing like that to indicate its units. The units of G to the p, h bar to the q, c to the r should be length squared. I'm choosing length squared to avoid a square root in the final formula, that's all. There is another reason. It is widely believed that the fundamental unit is a unit of area, not a unit of length. But we'll come to that some other time. But let's see if we can find p, q, and r. OK, so the first question is, now we have to figure out what the conventional units of G, h bar, and c are. The units of c, that's easy. What are the units of c? That's length over time, right?
What about the units of Planck's constant? Momentum times distance. L squared M over T. L squared M over T. Yeah. Momentum times distance, right? Delta x, delta p. Delta x, delta p, so that's a length. And then a momentum is a mass times a velocity. And a velocity is a length over time. So that's length squared times mass over time. Length squared mass over time, that's Planck's constant. And the last one, the one that I can never really remember, these two I remember, the one I can never remember is G. But how do we figure out what the units of G are? Acceleration equals mass times G over... Yeah, we use one of the equations that G appears in. All right, so here's an equation, let's see. Which was your equation? Acceleration equals. All right, acceleration equals. Acceleration equals. Yeah, we'll get to that in a minute, but that's equal to mass times G over r squared. Mass G over r squared. So that's a length squared there. Acceleration is what? Length per time squared. And again, equals here doesn't really mean equal; it means has the same units. So G is L cubed over t squared, divided by m, it looks like. Right? OK, let's now go through the exercise. By the way, that's like Kepler's law. You're right. You're right. You're right. It is like Kepler's law. OK, so now we have on the left side, length squared. Now let's do G to the power p. What's G again? That's L cubed over t squared, times m in the denominator. All right, so that's L to the 3p, with t to the 2p and m to the p in the denominator. Right? Do I get that right? That's G to the p. Now what about h bar to the q? h bar to the q is length squared, so length to the 2q, m to the 2q, no, m to the q, and then t to the q in the denominator. Do I get that right? OK. And then the last one is the speed of light to the rth power, which is length to the r over time to the r. Now all we have to do is find the three exponents so that everything on the right-hand side cancels out except two powers of length. OK, so the first thing you can see is mass only appears in two places, mass to the q and mass to the p, once upstairs and once downstairs. So what does that say? It says p better be equal to q. So let's just set p equal to q. We now know that. And we can cancel out the masses. That's done. Now what else do we have? We have to get rid of all the times, it looks like, huh? All right, so how many times do we have? We have 2p plus q, and q is the same as p, so that's 3p. So that's 3p plus r equals 0. 3p plus r is equal to 0. So that tells us that r is equal to minus 3p. Did I do that right? Let's just check. We have 3p plus r equals 0; r equals minus 3p. OK, I hope I'm doing this right. Minus 3p. Did I make a mistake? Yeah, yeah, yeah, but q is p. We've already figured out that q is p. So if I'm not mistaken, I think we can get rid of all the t's. p is equal to q. So let's see what we have for the lengths. We have 3p from here and 2q from here; with p equal to q that looks like 5p. 5p, and the L to the 2q is taken care of. And now r is what? r is equal to minus 3p. Yeah, and you're done. 5p minus 3p and you're done: 2p. We have 2p, right? We get 2p. All together 2p, and that has to be 2. And that now tells us what p is: p is equal to 1. OK? So let's see what we have now. p is equal to 1. It tells us that the Planck area, this is the Planck area, the Planck area L squared, is G h bar, it looks like, over c cubed, right? c cubed. c cubed. This is usually expressed by saying the Planck length is the square, the square root of it, rather. Thank you. Let's put in some numbers. My problem is I can never remember what Planck's constant is. Anybody know Planck's constant?
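The blackboard bookkeeping can be redone symbolically in a few lines. This is just a restatement of the same three linear equations for the exponents, using the unit assignments [G] = L^3 M^-1 T^-2, [h bar] = L^2 M T^-1, [c] = L T^-1; nothing beyond that is assumed.

```python
# Solve for p, q, r such that G^p * hbar^q * c^r has units of length squared.
import sympy as sp

p, q, r = sp.symbols('p q r')
eq_length = sp.Eq(3*p + 2*q + r, 2)    # powers of length must add up to 2
eq_mass   = sp.Eq(-p + q, 0)           # powers of mass must cancel
eq_time   = sp.Eq(-2*p - q - r, 0)     # powers of time must cancel

print(sp.solve([eq_length, eq_mass, eq_time], [p, q, r]))
# -> {p: 1, q: 1, r: -3}, i.e. the Planck area is G * hbar / c^3
```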
10 to the? 10 to the minus 34. Good. 10 to the minus 34? Yeah. OK. Yeah. G is 6 times 10 to the minus 11, which is about 10 to the minus 10. Order of magnitude, 10 to the minus 10. What is h bar? 10 to the minus 34? Yeah. 10 to the minus 34. 10 to the minus 34. In usual MKS units, 10 to the minus 34. And what is c cubed? c is 3 times 10 to the 8, so c cubed is 27 times 10 to the 24. Oh my god. How many? 10 to the, how big is it? 24. 10 to the 24? I think it's more like 10 to the 25, huh? Yeah. 10 to the 25. So how many square meters is this? Pretty small. Pretty small. Yeah. Some tiny, tiny number. OK. The Planck length is very small. If we had done it right, we would have gotten 10 to the minus 35 meters. So it should be about 10 to the minus 70 square meters. I'm not sure I got everything exactly right. About 70. All right. So that's small. That's smaller than anything. What's that? Good. OK. Now the Planck time. The Planck length is the square root of this. What about the Planck time? Think of the Planck length as being the size of a little thingy. There's a thing the size of the Planck length. What do you think the Planck time is? It's the time for a light ray to cross that thing. What else could it be? So it's the time for a light ray to cross that little distance. So does that mean we have to divide it by the speed of light? Yeah, we have to divide it by the speed of light. So the Planck length, L Planck, is 10 to the minus 35 meters. The Planck time is 10 to the minus 43 or 42, something like that, 42 seconds? 10 to the minus 44, 43 seconds, something like that. Now what about the Planck mass? We haven't figured out what the Planck mass is, but we could do exactly the same calculation and work out what the Planck mass is from G, h bar, and c. And the Planck mass would be about 10 to the minus 8 kilograms. So this one's big. That's a big mass. 10 to the minus 8 kilograms is an observable thing. It's a little dust grain. It's a little dust grain. These are impossibly small, small times and small distances. OK, these are Planck units. Things measured in these Planck units are measured in some very, very fundamental units. OK, I'll tell you some things. The universe has a radius. The known observable universe, the horizon size, is about 10 to the 60th Planck lengths. 10 to the 60th Planck lengths. The age of the universe is also about 10 to the 60th Planck times. And, that's a gallon of, sorry, a tank of gasoline: the energy content of a tank of gasoline is about the Planck mass. So that means you could drive across the country with 10 Planck masses of energy. Right, that's what it means. These are the units that string theory is in. The size of a vibrating string, the fluctuations in it due to quantum uncertainty: order of magnitude, the Planck length. The frequency of oscillation, or not the frequency but the period of oscillation, of order 10 to the minus 43 centimeters, sorry, sorry, 10 to the minus 43 seconds. Right, and the amount of mass that would be involved in exciting a fundamental particle such as a graviton or an electron. Let's take the electron. If the electron is a string, you could ask, how much energy does it take to excite it? The units are expected to be somewhere in this range, 10 to the minus 8 kilograms. Now, that's a huge number. How many GeV, giga electron volts, is that? One other useful number to remember is that one Planck mass is about 10 to the 19th GeV, which means 10 to the 19th proton masses.
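Plugging standard SI values into the formulas just derived reproduces the numbers quoted above. The script below is only a numerical cross-check; the conversion 1 GeV = 1.602e-10 joules is used to express the Planck mass in particle-physics units.

```python
# Planck length, time, and mass from G, hbar, c (SI values).
import math

G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8

l_planck = math.sqrt(G * hbar / c**3)       # ~1.6e-35 m
t_planck = l_planck / c                     # ~5.4e-44 s
m_planck = math.sqrt(hbar * c / G)          # ~2.2e-8 kg
E_planck_GeV = m_planck * c**2 / 1.602e-10  # ~1.2e19 GeV, i.e. ~1e19 proton masses

print(l_planck, t_planck, m_planck, E_planck_GeV)
```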
One Planck mass is about 10 to the 19th proton masses. 10 to the 23rd proton masses is a little gram of water or something. 10 to the 19th, which is 4 orders of magnitude smaller, is a tiny, tiny little droplet, but visible, quite visible. Well, if you have good eyes. So, exciting an electron by a particle physics collision. You say, well, we don't need to do a particle physics collision. We'll just explode a tank of gasoline in the vicinity of an electron, and it will start the electron vibrating. Yeah, it's just rather hard to get the energy to be concentrated in such a small distance. So these are the reasons that... Is that above its ground state? Yeah, above its ground state. At its ground state? No. Well, remember, it's mass squared which comes in integer multiples. All right, so it's mass squared which comes in integer multiples. So if you want to know what the unit is, square the Planck mass, and that's the amount that it takes to bump you up. But to go from the ground state, if the ground state is almost massless, then it's about one Planck unit of energy. That's why you can't excite the electron. Right. Right. It's just much too much energy to pump into such a small volume. And for this reason, string theory is way, way beyond direct detection. How would you detect that a thing is a string? The same way you detected that a proton is a string: you hit it, you set it into vibration, and you discover that there's a bunch of excited states where the mass squared is proportional to the angular momentum. That's the kind of thing you'd like to do with an electron. That's the kind of thing that you'd like to do, I'll be right with you, Michael, with a photon. But the increase in mass here is just way beyond what can be done. In order to have the structure of the string that we've been describing, wouldn't the string as a whole have to be somewhat bigger than this? Right. Mm-hmm. Yeah. Question? Question? This is just a bunch of arithmetic over here. What makes us think that when we get a number out, 10 to the minus 35th, that it has anything to do with the length of a string? It's a guess. It's a guess. It's a guess based on a number of things. No, no, it's not just, no, no. No, it's not just numerology. It could be wrong. It could be a thousand times less. Nobody thinks that the string length scale could be smaller than the Planck length. That looks quite impossible. But that the string length scale could be larger, and the energy scale could be lower, that is possible. How much lower? Well, I think from the things that we know about particle physics, we can't say with any precision at the current time, really at all. But from the precision with which particle physics, the standard model and so forth, seems to work, I think there's plenty of evidence that the energy scale is very, very high. Whether it's the Planck scale or a hundredth of the Planck scale or a thousandth of the Planck scale, we don't know. There is another scale in physics. There is another scale in physics which keeps coming up over and over again. We've talked about it. It's the unification scale. If you remember at all from the last quarter, we talked about the scale at which the various coupling constants seem to come together. The scale at which the electroweak forces and the QCD forces seem to merge into a common structure experimentally. And there, there are experiments. They're not experiments to get to that energy, but experiments to extrapolate.
The extrapolation of what we know about physics seems to say that ordinary quantum field theory probably holds down to a distance scale of roughly a thousand times larger than the Planck length. So we do have evidence that ordinary quantum field theory without any stringy structures and so forth, or equivalently that the electron is fundamental, holds to scales perhaps a thousand times larger than the Planck length. But that's still pretty small. Then you wouldn't be able to suspend the galaxy. Maybe you could only suspend, oh, I don't know, some globular cluster somewhere. It seems that the Planck length can be thought of as the fundamental unit of length, and the Planck time as the fundamental unit of time. But certainly a Planck mass is not a fundamental unit of mass. No. The Planck mass is believed to be the mass of the lightest possible black hole. One Planck area, one unit of entropy. And how long would it take to evaporate? One Planck time. Here today, gone today. You showed the coupling constants unifying. I believe one time you put down a fourth one for gravity, so that comes down. What's the coupling constant of gravity? No, that's okay. I'll tell you what the various pieces of evidence are. I'm not going to explain them in detail; this is just to remind you. If you plot the coupling constants, which coupling constants? The ones that we measure in the laboratory: the electric, let's just call it the electric coupling constant, E squared really, the weak interaction, sorry, the electroweak. There are three of them. There's U(1), SU(2), and SU(3). This is energy this way. And energy is the same as inverse length. So as energy gets large, wavelengths get short; high energy means small distance. Okay. We measure these coupling constants at some energy scales. Remember, coupling constants are things which change with energy, or change with distance scale. We measure them in laboratories, which means 50 GeV, 100 GeV at most; we measure them here, and we get three numbers. We also measure, slightly indirectly, the derivatives of these things. In fact, we can compute the derivatives of them purely theoretically, with good confidence, because we know the theory here very well. What we find is that if you extrapolated these, they would all cross at about a common point. That point is way, way out here at an energy which is not the Planck scale, but somewhere between 100 and 1000 times smaller than the Planck scale, I think maybe more like 100. So this could be M Planck roughly divided by 100, somewhere in between. And the Planck scale would be over here. On a logarithmic scale it looks like this, and on a logarithmic scale a given gap can mean a very, very big change in energy. Now, of course, we don't know with certainty that there's not all kinds of things going on at higher energy here which would muck this up badly. We don't. But what we do know is that if there is nothing in here to muck things up badly, and there's some reason to believe that, then these coupling constants would just come together at that scale. The meaning of that is that quantum field theory, conventional quantum field theory, seems to be working to about here. There's other evidence for this scale coming from neutrino masses and a few other places. But it's far from tight.
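As a rough illustration of the extrapolation being described, here is a small Python sketch of one-loop running. The inputs below, the inverse couplings near 100 GeV and the one-loop coefficients, are approximate textbook values quoted from memory, so treat the numbers as illustrative rather than as data.

```python
import math

M_Z = 91.0  # GeV, roughly the Z mass, where the couplings are measured

# Approximate inverse couplings at M_Z and one-loop coefficients
# (textbook values quoted from memory; illustrative inputs only).
inv_alpha_MZ = {"U(1)": 59.0, "SU(2)": 29.6, "SU(3)": 8.5}
b            = {"U(1)": 41/10, "SU(2)": -19/6, "SU(3)": -7.0}

def inv_alpha(group, mu):
    """One-loop running: 1/alpha(mu) = 1/alpha(M_Z) - (b/2pi) ln(mu/M_Z)."""
    return inv_alpha_MZ[group] - b[group] / (2 * math.pi) * math.log(mu / M_Z)

for mu in (1e2, 1e8, 1e13, 1e17):
    row = {g: round(inv_alpha(g, mu), 1) for g in inv_alpha_MZ}
    print(f"mu = {mu:.0e} GeV:", row)
```

With these inputs the three inverse couplings start out very different and drift toward each other, crossing pairwise somewhere around 10 to the 13th to 10 to the 17th GeV, a couple of orders of magnitude below the Planck energy, which is the rough extrapolation being described as far from tight.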
It will become much tighter if LHC is lucky and makes the right discoveries. It will become much tighter, and we will have a much better knowledge about this extrapolation. So this is one of the things we're going to learn from LHC: how reliable this extrapolation is. The Planck mass being out here. And this in itself doesn't tell us where the string length scale is. What it tells us is that the stringy character of particles does not become important until the energy is above this. So it's somewhere between here and here. If string theory is right, somewhere in here is the scale of strings. Remind me again what the three coupling constants are. There's, what is it, QED, QCD, and gravity? No. QED, QCD, and the weak interactions. The weak interactions. At one point you did put down, I think you put down something for gravity. Yeah. If you were to plot it, the gravitational coupling constant behaves differently, has a different character. But let's see. It starts very, very small. And it goes up something like this. And it would cross. It doesn't cross at the same point. It crosses somewhere nearby. But basically the place where this one gets to be about order one, that's the Planck scale. So it's really exactly the same statement to say that the gravitational running coupling constant crosses at roughly the same place as to say this unification scale is not too far from the Planck scale. Is that the capital G? Yes. So does capital G change as you start to measure at higher and higher energies? No. Capital G does not change. But capital G by itself is not exactly the right measure of things. All right, I'll answer your question. Let's look at the force law between two objects. In electromagnetism it's E squared divided by R squared. That's the force, and that's the electromagnetic coupling constant here. In gravitation it's G M squared over R squared. So the thing which is analogous to E squared is G times M squared. But what M should you use there? Well, the M that you use there is associated with the particular scale of physics that you're studying. All right. So you can see then, in that sense, this quantity here increases like M squared. So this is the relevant thing here, and it's a dimensionless thing once you put in the right factors of h bar and c. OK. Where on that scale, if you could build accelerators big enough, do they start producing nothing but little black holes? Real black holes? Yeah, the small ones, you know. Oh, it's all the way out there. The Planck mass. There are two ways to produce black holes. One way or another you've got to have a lot of pressure to squeeze things into a small volume. One way is kinetic energy: small objects with lots of kinetic energy, blast them together and you have a chance of making a black hole. The other way is not kinetic energy but gravitation, gravitational potential energy pulling things together. That takes a star. So to make a black hole using gravity you need something as heavy as a star. To make a black hole by the collision of particles you need energies of order of the Planck energy. Now if you think about that for a minute, imagine an accelerator. Let's explore the following question, it's kind of fun. How big, what kind of accelerator would it take to do interesting experiments at the Planck energy? Well, basically the energy of a linear accelerator scales linearly with the size of the accelerator.
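Before going on with the accelerator estimate, here is a minimal sketch of the dimensionless combination just mentioned: G times M squared, written with the energy E in place of M c squared and made dimensionless with h bar and c. The rounded constants are the only assumptions.

```python
hbar, G, c = 1.05e-34, 6.67e-11, 3.0e8   # rounded MKS values
GeV = 1.6e-10                            # joules per GeV

def alpha_grav(E_GeV):
    """Dimensionless gravitational coupling G E^2 / (hbar c^5) at energy E."""
    E = E_GeV * GeV
    return G * E**2 / (hbar * c**5)

for E in (1.0, 1e3, 1e16, 1.2e19):
    print(f"E = {E:.1e} GeV -> G E^2 / (hbar c^5) ~ {alpha_grav(E):.1e}")
```

It sits around 10 to the minus 39 at ordinary particle energies and only reaches order one at about 10 to the 19 GeV, which is the sense in which the gravitational coupling becomes strong only at the Planck scale.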
So you get a certain number of GeV per unit length. SLAC is a good example. It takes approximately two miles to accelerate a thing to a hundred GeV. So two miles for a hundred GeV. The Planck energy is about 10 to the 17th times bigger than this: it's 10 to the 19th GeV, so it's 10 to the 17th times larger. So with just the parameters of the SLAC accelerator, you can multiply this by roughly 10 to the 17th. It would take 10 to the 17th miles. How big is 10 to the 17th miles? I think 10 to the 13th miles is about a light year, 10 to the 13th kilometers or something like that, if I remember. So 10 to the 17th miles is about 10 to the 4th light years. Galaxy size. Galaxy size to accelerate something up to the Planck energy. We're not going to do it, at least not tomorrow. But suppose you could build such an accelerator. Next question, what kind of luminosity do you have to have? An accelerator has to have a luminosity. What we want to do now is collide particles head on to make a black hole. But remember, the radius of that black hole that we want to make is itself the Planck radius. So that means we've got to aim these particles to collide within a Planck distance. Well, accelerators don't do that. That's not the way accelerators work. The way accelerators work is you just get a lot of particles going to the left, a lot of particles going to the right, and you get enough of them so that some will collide at the distance scales that you're interested in. In other words, you need big luminosity. What kind of luminosity would you need? Oh, I don't know. What's the SLAC luminosity, 10 to the 13th particles per second or something, anybody know? I don't know. Well, 10 to the 13th particles, but the distance scales there are pretty big. How much better are you going to have to do? I don't know, let's say 10 to the 20th particles. 10 to the 20th particles colliding, each with a Planck energy. 10 to the 20th tanks of gasoline per second to fuel a machine as big as the galaxy. We ain't going to do it. That's what we would need to explore small black holes, and something pretty close to it to explore string theory directly, a really direct test. The question is, are there indirect tests? All right, I think the indirect tests will pretty much be very theoretical. And unless somebody constructs a very, very convincing string theory that gives rise to the standard model exactly, in some very computable way, the hope of direct tests, I think, is very remote. So that's what we're facing, that's what we're up against. Okay, we didn't get where I really had intended to go. In fact, we didn't even touch on what I intended to touch on. Especially these extra dimensions that you had talked about, aren't they in the range of this 10 to the minus 25? Which dimensions? The extra dimensions of the... Yes, it is believed, again, without really terrific evidence. But it's the same kind of evidence. It's the same: where is our plot of coupling constants? If there were extra dimensions, and the extra dimensions were at all sizable, then ordinary quantum field theory would break down when particles begin exploring the extra dimensions.
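Collecting the accelerator arithmetic from a moment ago into one place, a minimal check. The two-miles-per-hundred-GeV figure is just the SLAC-like scaling quoted above, used here purely as an illustrative input.

```python
# The arithmetic behind the "galaxy-sized accelerator" estimate, taking the
# lecture's linear scaling at face value: roughly 100 GeV per 2 miles.
miles_per_100_GeV = 2.0
E_target_GeV      = 1e19                      # roughly the Planck energy

length_miles = miles_per_100_GeV * E_target_GeV / 100.0
miles_per_lightyear = 5.9e12                  # about 9.5e12 km

print(f"length ~ {length_miles:.0e} miles "
      f"~ {length_miles / miles_per_lightyear:.0e} light years")
# ~ 2e17 miles, a few times 1e4 light years: comparable to a galaxy.
```

A few times 10 to the 4 light years is indeed galactic in scale, which is the point being made.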
The conventional use of quantum field theory, as we use it, would break down at the point where the energy became large enough to explore these extra tiny dimensions. If it's true, and if we continue to get evidence that quantum field theory makes sense as it is out to these distances here, then we'll know that the extra dimensions must lie somewhere between the Planck length and maybe a thousand times bigger than the Planck length. So exploring extra dimensions will also be something that is extremely indirect and extremely hard. Why couldn't it be significantly, orders of magnitude, less than the Planck length? What's the argument? Oh, that's another question. The theory really doesn't work at all under those circumstances. Let's put it this way. If you tried to make the strings smaller than the Planck length, you would wind up making their mass larger than the Planck mass. Making a small thing with big mass means you make a black hole. So if you're trying to think of strings which were much smaller than the Planck distance, their mass would accordingly be larger than the Planck mass, and you would be talking about black holes. You wouldn't be talking about strings. That's the problem. They would collapse. The mass and their small size would cause gravitation basically to turn them into black holes. So that's the reason that string theorists do not ever consider the possibility that the string scale is smaller than the Planck length. And I think it's a very good reason. Did you say that the mass here is derived from the choice for L and T? From your what? Well, you gave us why we have an L and why we have a T. Yeah, yeah, but you do exactly the same thing. You say, let's make a mass squared out of these parameters here. And I forget exactly what it is; we could figure it out. But the others were a physical choice. Sorry? The length and the time were a physical choice. What do you mean they're physical? They're all physical. Well, yeah, but I mean the time, for example, is the time for light to cross L. And that's a physical quantity. Well, each one is a physical quantity, too. And what do you say about mass? You just put it down. Does it follow from our choices of L and T? Well, no, it doesn't follow by itself from L and T. But let's see. Once you have L and T, you can get mass from either h bar or G. So here is the way to think about it. Take two particles whose distance apart is the Planck length. Now, because their distance is so small, the uncertainty principle says that they have a lot of energy. The uncertainty principle says if things are very well localized, then the fluctuations in their momentum are very large. So take two particles whose distance is known to be within the Planck distance. That says something about their momentum: their average momentum is extremely large, and from that you can compute what the kinetic energy of these two particles is. That kinetic energy will be the Planck mass. So if you try to take two particles and localize them to a distance comparable to the Planck distance, the answer is that the energy you have to put in to do that will be the Planck mass. It's just the uncertainty principle. We can work it out, but, you know, okay. Let's take a rest. I'm going to tell you next about some mathematics. I'm going to tell you about the mathematics of conformal transformations. Why?
Because that's the basic mathematical tool of string theory. Well, we have our choice. I can either tell you why there are 24 or 26 dimensions in string theory, or I can start to tell you about conformal transformations. We'll get to both of them at some point, but... We haven't clapped the thing together yet. Yeah, as long as it's not on Thanksgiving. Thanksgiving is always Thursday, so I think the answer is yes. Okay, we have 45 minutes, and I will try in 45 minutes to give you the easy version of why there have to be 26 dimensions in bosonic string theory. The easy version is not easy, and it doesn't really get to the root of it. The hard version is very hard. But that doesn't mean that we can't talk about it qualitatively, but I thought I would do what I call the easy version, which is not easy, and not satisfying, but nevertheless is correct. When people see this, they walk away and say, gee, that's sort of magic, but in the bad sense. But it is correct, and it is sort of the way that the need for extra dimensions was discovered. It has to do, if you remember, in order to get the photon to come out right, we had to do something odd. We had to take the mass squared of the ground state of the oscillating string. Remember, let's take the open string, take in the case of the open string. The mass squared had to be minus one unit. That was so that when we applied a creation operator to create the photon polarization, it came out massless. Minus one, one unit on top of that, gives us something massless. We haven't talked yet about how we get rid of this particle of negative mass squared. I assure you that can be gotten rid of. But let's not worry about it. Let's agree. The mass squared of the ground state is minus one. Why is it minus one? How do we get minus one? Well, harmonic oscillators have zero point energy. So the ground state energy of this vibrating string should be nothing but the sum total of all of the zero point energies of the vibrating string. Zero point energies are all positive. And in fact, they depend on the frequency. Anybody remember what the zero point energy of a harmonic oscillator is? A half h bar omega. Let's forget h bar, set it equal to one. One half omega, and do you remember the frequency of the nth harmonic oscillator? It was just n. Each one of the oscillating modes of the string, each one of the oscillating modes, the nth oscillating mode has frequency n. So the ground state energy of the nth oscillator is just n over two, in some units, is n over two. Now this doesn't sound good. We have a lot of oscillators, one for each integer. We have to add up all that energy. It's not quite easy to imagine how we can add all this up and get minus one. So there's something strange going on. But actually we don't have to get minus one. Well, we have to get to something a little bit different. Let's go back to the energy of a very, very fast moving system. The energy of a very, very fast moving system has, first of all, it has the overall momentum. Moving down the z-axis, it has the overall momentum. And then on top of the overall, and that's a conserved quantity. And we really don't care about it very much, since it never changes during the processes that take place in particle physics. And then for a particle which is not moving, whose center of mass is not moving in the plane, what's the rest of it? The mass squared divided, I think, by twice the momentum. And that's why we began to identify the energy, the non-relativistic energy, with m squared rather than m. 
This was the energy, and to make precise this idea of very rapid systems which become non-relativistic, you have to let p become extremely large. Basically you have to take the limit of an infinitely large momentum. In the limit of infinitely large momentum, this is the form that the energy takes. You say, well, in that limit this thing just goes to zero. But what you really do is you take E minus p: you subtract the momentum from the energy. That's all right. This is a perfectly conserved quantity. It never changes. If it never changes, you don't care very much about it. And then multiply the whole thing by, let's say, 2p, and that gives you m squared. This 2p here, the reason you multiply by 2p, has got to do with time dilation. The faster a thing moves down the z-axis, the slower its internal motions go. The slower the internal motions, the smaller the internal energies. For example, if you have an atom, what's the ionization energy of a hydrogen atom? 13.5 electron volts or something. Now you take that atom and you speed it up and you send it down the axis. In other words, the energy difference between the ground state of a hydrogen atom, let's not take the ionization energy, the energy difference between the ground state and the first excited state is what? Oh, 3 or 4 electron volts, something like that. Now what happens as you boost the atom? What happens to the overall energy of it? It increases, of course. Right? That's this. But what happens to the energy difference between the ground state and the first excited state? Does it get bigger or smaller? Smaller. And it gets smaller with this one over p here. And in fact, the internal energies of that atom, the differences of energy between the states, are proportional to the square of the mass divided by 2p. So that's what this formula is. Well, we don't care about this piece. That just has to do with its overall motion down the z-axis. Just subtract it off. It doesn't have to do with the internal motions. And then the p in the denominator here, that's just associated with the slowing down of the internal motions. So let's just multiply through by it. And this is the way we really think of energy. This quantity is the energy, and it's proportional to the mass squared. Okay, the point is now that the actual energy of the ground state has an infinite piece. Or let's put this piece back. Let's not subtract this; let's add it back on the side. Plus, what is it? Let me just get this right. Let's see: 2p times E is m squared plus 2p squared. Okay? Notice that first of all there is the interesting thing, which is the mass squared, which we're interested in. But there's also this infinite piece here, this infinite piece. So it's not really quite true that the energy has to add up to minus one. It would be okay if it added up to something like minus one plus an infinite constant that goes like p squared. So I'm going to show you how that works, how you can add up one plus two plus three plus four plus five, this is all the internal energy, and get something which looks like a constant, namely minus one, which is what we want to get, plus an infinite piece that can be absorbed into something that's already there. This is a trick of physics that takes place all the time.
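To keep that bookkeeping straight, here is a tiny sympy check of the large-momentum expansion being used, with c set to 1. Nothing here is specific to strings; it is just the algebra of E equal to the square root of p squared plus m squared.

```python
import sympy as sp

p, m = sp.symbols('p m', positive=True)
E = sp.sqrt(p**2 + m**2)            # relativistic energy with c = 1

# Large-momentum expansion: E ~ p + m^2/(2p) + ...
print(sp.series(E, m, 0, 4))        # p + m**2/(2*p) + O(m**4)

# The piece that survives the subtract-and-rescale step:
print(sp.limit(2 * p * (E - p), p, sp.oo))   # m**2
```

The series shows E is approximately p plus m squared over 2p for large p, and the limit confirms that 2p times (E minus p) is exactly the m squared that survives.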
It happens all the time in all kinds of contexts where you get some infinite answer and you don't know what to do with the infinite answer, but you realize that the thing which is becoming infinite already has a constant piece in it that you just add the extra infinity to the constant piece. An example is the energy of vacuum, the energy of just a mass of a particle or a self-energy, all sorts of things like that, where you can hide infinite constants into things which are already there. I'm not saying this is a satisfactory state of affairs. It's just something that we do all the time. The energy in this room, it has zero point oscillations from all of the photons and everything else, not from the photons that are here, but from the oscillations of the electromagnetic field. How big are those oscillations? The energy? Infinite. What do we do with it? We just subtract it off by saying we could add a constant to the energy of the room and it wouldn't make any difference, so just get rid of it. The same thing here. We're going to get a constant piece which is just infinite but constant. As long as it's constant, it never affects anything. Plus a term which looks like this with an additional mass squared. Let me show you the mathematics and we can then discuss another term, the physics. The mathematics is tricky but easy. The mathematics is just a question of adding up 1 plus 2 plus 3 plus 4. It's just a sum here. 1 plus 2 plus 3 plus 4. There's an overall factor of a half outside that comes from the half h bar omega, but the basic calculation is to add up all of the integers. Obviously infinite. Let's see if we can extract out of that infinite piece an infinity plus something finite. This is always the way quantum field theory and relativistic quantum mechanics works. 1 plus 2 plus 3 plus 4, this is not something we're going to easily add up, is it? Let's do something to it and then take a limit. Let's do something to it so that it makes sense and then take a limit. Let's multiply each one of these. Let's introduce a small constant, epsilon. E to the minus epsilon is a number less than 1. But when epsilon is very, very small, this number is close to 1. Let's add to it plus 2 E to the minus 2 epsilon. This is just a trick for making the sum converge. Each term in it, we're going to put in something which gets smaller and smaller, but when epsilon goes to 0, it'll just give us back the original sum. So epsilon is going to go to 0. In fact, you know what? Well, yeah, all right, let's leave it this way. 1 plus 2 epsilon, what's the next one? 3 E to the minus 3 epsilon plus 4 E to the minus 4 epsilon. But now, if epsilon is not 0, then the sequence of terms E to the minus epsilon, E to the minus 2 epsilon, E to the minus 3 epsilon do get smaller and smaller. Now, if epsilon is small, it takes a long, long time for them to shrink. So if we were just to plot the E to the minus epsilon, E to the minus 2 epsilon, as a function of the integers, it would fall very slowly if epsilon is a small number. But eventually, the E to the minus N epsilon will win, and it will make each term successively smaller and smaller, and you'll be able to add them up. So what you do is you add these up, and then after adding them up, you let epsilon go to 0. It's a mathematical trick. Let's see what we get. It takes a bunch of steps, and I'll show you, but the steps are elementary. Okay, this can be written as the sum from N equals 1 to infinity of N, E to the minus N epsilon. 
E to the minus 1 epsilon times 1, E to the minus 2 epsilon times 2, and so forth and so on. That's the sum. Now, the trick is to get rid of this n. The first trick is to get rid of this n, and the way to do it is just to take the sum without the n in it and differentiate with respect to epsilon. What happens if we differentiate with respect to epsilon? What does a derivative with respect to epsilon give? It brings down minus n. So this, with a minus sign, we really want to put a minus here, is equal to the original sum we were interested in, n e to the minus n epsilon. That's the first step. The next step is to observe that we can add the series up. This whole series is a geometric series. It starts out, let's just see what it is: it has an e to the minus epsilon plus an e to the minus 2 epsilon plus an e to the minus 3 epsilon, which can also be written in another form. It's e to the minus epsilon times 1 plus e to the minus epsilon plus e to the minus 2 epsilon and so forth. I pulled out a factor of e to the minus epsilon. Do you know what this is? Exactly. This is a geometric series: 1 plus a number plus the square of the number plus the cube of the number. And so we can add all of this up. And what do we get? We get e to the minus epsilon divided by 1 minus e to the minus epsilon. That's the thing that we want to differentiate with respect to epsilon, with a minus sign in front of it. This is something you can do as homework, but I'm going to show you roughly what happens. In the end, we're interested in small epsilon. Let's first expand it for small epsilon, then do the operation, and then let epsilon go to zero. So what is e to the minus epsilon when expanded in terms of epsilon? 1 minus epsilon plus epsilon squared over 2. We're only going to need things to the power epsilon squared; of course there are more terms, but we won't need them. Now what about the denominator? What's in the denominator? We have 1 minus e to the minus epsilon, so let's write that out. I think it's plus epsilon, minus epsilon squared over 2. Let's go to the next one; I think we're going to need the next one, as you'll see. Let me just check the signs: 1 minus 1, plus epsilon, minus epsilon squared over 2, and then the next one, epsilon cubed over what? Epsilon cubed over 6, right? Now we have trouble. The trouble is the ones cancel. The denominator is proportional to epsilon. Let's factor out one power of epsilon. This is the object whose derivative we want to take. Let's factor out a factor of epsilon in the denominator. Since everything in the denominator has an epsilon, let's factor it out, 1 over epsilon, and then this becomes 1 minus epsilon over 2 plus epsilon squared over 6. Did everybody follow that? Okay. Plus higher terms, but as I said, the higher terms are going to disappear when we go to small epsilon. They won't be important. Was it a minus sign? No, it was a minus sign. Well, I thought I got it right. All right, should we do it over? 1 minus, put in brackets, 1 minus epsilon plus epsilon squared over 2 minus epsilon cubed over 6. All right, everybody happy with that? Now the ones are going to cancel, and the minus times the minus will make plus, minus, plus. I have no idea if that's what I had written down before or not.
Then we pull out a factor of epsilon, and that turns this into 1 minus epsilon over 2 plus epsilon squared over 6. All right, now the trick is we're expanding everything in powers of epsilon. But here we have a thing in the denominator. I don't want this in the denominator. I want in the numerator. So how do you deal with a thing in the denominator like this? You use the formula that 1 minus a small number, let's just call it s, is equal to 1 plus s plus s squared plus s cubed and so forth. Here's your s. Oh, did I leave out something? No, I think I got it right, didn't I? I think I did everything right. I think I did everything right. So what we do is we take what's in the denominator here, and we expand it to get all the epsilon dependence in the numerator. Let's see what we get. I think we're going to get 1 plus epsilon over 2 minus epsilon squared over 6. But then there's a term coming from squaring the small quantity here. If I keep things to order epsilon squared, all I'm going to get is, let's see, I think plus epsilon squared over 4, I think. But you can expand it out yourself, something like this. Plus higher order in epsilon. No, I think it was epsilon squared. It was just epsilon squared, not epsilon cubed. No, the thing which was epsilon cubed in the denominator here became epsilon squared when I pulled out the epsilon here. And I was purposefully keeping everything to power's order epsilon squared. I know that the next order is not important. OK, so let's see if we can combine this together. I think the chances that I'll get this right are negligible. But all right, so we get minus d by d epsilon, 1 over epsilon times, OK, 1 times 1. Let's put everything that you get from 1. Plus epsilon over 2 minus epsilon squared over 6. Oh, can we combine these two together? What's epsilon squared over 4 minus epsilon squared over 6? Epsilon squared over 12. Right? OK. How did you do that so fast? Who did it? You know? What? Took me about 30 minutes to get that epsilon squared over 12. OK. I know how to do it. 1 plus epsilon minus epsilon, sorry, 1 plus epsilon squared over 12, right? That's this, this, this, and this. Then we get minus epsilon minus epsilon times 1. And then minus epsilon squared over 2. But then from here we get something of order epsilon cubed. And I'm not interested in keeping anything of order epsilon cubed. That's too high an order. I don't care about it. We'll see why in a moment. And then we have plus epsilon squared over 2. And nothing past that because the next one would be epsilon fourth. I think this is everything to order epsilon squared. Should it be plus? No? I think it should be plus, right? Yeah. Right. Looks right. 1 times epsilon squared over 12. OK. Now, epsilon over 2 minus epsilon, that's minus epsilon over 2. I'll get rid of this and these two cancel. So there's our formula right there. As you'll see, anything of higher order is not going to be interesting to us. OK. Look at, let's first focus on the one that's easiest. The easiest term is this epsilon over 2. It gets multiplied by 1 over epsilon. That means it's just 1 half. And what happens when you differentiate 1 half with respect to epsilon is 0. So this is, we don't have to write it. It's not there. OK. Now we have a term which is 1 over epsilon. That's bad news. This is something infinite at epsilon equals 0. And remember, we're going to go to epsilon equals 0. All right. What's the derivative of 1 over epsilon? Minus 1 over epsilon squared, right? Minus 1 over epsilon squared. 
Oh, and there's another minus sign here. So that's plus 1 over epsilon squared, I think. Yes. And then 1 over epsilon times epsilon squared over 12: one of the epsilons cancels, then you differentiate with respect to epsilon, and there's the overall minus sign. What do you get? Minus one twelfth. This is a famous formula which is usually stated half jokingly: that 1 plus 2 plus 3 plus 4 plus 5 and so on is equal to minus a twelfth. We tend to forget about this term over here. And the reason is that this term is an infinite constant which really doesn't do anything; I would have to convince you of what happens to it. It gets absorbed into the p squared term in the energy, the infinite term in the energy. And a careful analysis, a really careful analysis of it, allows you to absorb this into an additive constant in the energy that doesn't do anything. For the moment, you'll have to accept that. The really right answer is that in good string theories, properly defined string theories, this isn't there at all. But I'm not going to tell you why now. I'm just going to tell you this infinite constant is not important. I know the famous story of Dirac and Pauli, that Dirac calculated the vacuum energy and found out that it was infinite. And he said, well, since it's infinite, I don't care about it, we'll throw it away, and Pauli turned around to him and said, just because it's infinite doesn't mean it's zero. Right. In this particular case, Dirac wins. Dirac wins, but this is something we will have to come back to, to explain what happens to this one over epsilon squared. Remember, epsilon is going to go to zero, so this is going to be an infinite constant. And in this particular case, the infinite constant really is not important. I'm not going to pursue it right now. I'm going to tell you the answer is minus a twelfth. Now, this is not yet a particularly good answer. Remember, what we wanted to get was altogether minus 1. We wanted to get this minus 1, which, when we excite it by one unit of energy, will give us the massless particle. In fact, the minus a twelfth is not quite right. And the reason is because of that factor of a half from the one-half h bar omega. So it's actually minus 1 over 24. That minus 1 over 24 is the energy of the ground state, including the zero point oscillations, but throwing away a certain infinite term, and I have to explain that way later. But, okay. How do we get... oh, have I left anything out? I left out one thing. Remember, for each mode, there is not one oscillator but two oscillators. The a's and the b's. Remember the a's and the b's? The x oscillators and the y oscillators. So actually, it's twice this. Twice this, one for the x oscillators, one for the y oscillators. It's still not good. It's only minus 2 over 24. We really want to get minus 1. How do you get minus 1? You say there weren't two dimensions of oscillation. There were 24 of them. 24 directions of oscillation, plus the direction going down the z-axis, plus time, make 26. That's where the 26 is. Now, this is a crazy story. At the time, nobody believed it; it seemed a little bit too silly to be true. But in fact, the mathematics did fit together. And this, which is fairly complicated already, is the simplest way to see it.
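Since the manipulations above are easy to lose track of, here is a short sympy and numerical check of the claim that the regulated sum equals 1 over epsilon squared minus a twelfth, up to terms that vanish with epsilon. The cutoff of 200,000 terms in the brute-force sum is just a convenience; nothing else is assumed.

```python
import math
import sympy as sp

eps = sp.symbols('epsilon', positive=True)
f = sp.exp(-eps) / (1 - sp.exp(-eps))     # sum over n >= 1 of exp(-n*eps)
reg_sum = -sp.diff(f, eps)                # equals the sum of n*exp(-n*eps)

# Leading behaviour as epsilon -> 0: 1/epsilon**2 - 1/12 + ...
print(sp.series(reg_sum, eps, 0, 2))

# Brute-force cross-check: subtract the divergent piece and shrink epsilon.
for e in (0.1, 0.01, 0.001):
    s = sum(n * math.exp(-n * e) for n in range(1, 200_000))
    print(e, s - 1.0 / e**2)              # tends to -1/12 = -0.0833...
```

Both the series expansion and the brute-force numbers land on minus 0.0833..., the minus one twelfth used above.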
And it is historically the way it came about: requiring, let's say there were d dimensions of space perpendicular to the direction of motion, requiring this to come out to be minus 1, just so that when you excite it you can get the photon, that was the original argument for d having to be 24, or the total space-time dimension having to be 26. Now, as I said, this was hardly a convincing argument at the time. Its virtue is that it's relatively easy to present. What is the history? What was the history of this introduction? How did this come about? Yeah, well, I was just wondering, 1990s or... Oh, God, no. 1969. Yeah. No, 69. No, this goes back to 1969, I would say. Maybe early 70, but I think it's still 69. So it's been around for a long time? Yes. Well, by this point, people were just exploring the mathematics of string theory. So you did have a Regge trajectory? Well, experimentally, you had the Regge trajectories. But at some point, when the mathematical structure was put in place, people began to explore it for its own sake. It was realized rather quickly that there was some kind of funny thing going on: there was a spin one particle one unit up, but it didn't have all the states that it needed. It had two polarizations and not three polarizations. And it was realized rather quickly that this could only be the photon, or something like a photon. But that then left the question, what was this zero point energy that had to be minus one? And it was realized fairly quickly that that required 24 dimensions of oscillation. Now, as I said, this was by no means a convincing argument. Other much, much more convincing arguments came about, but they were highly mathematical, really highly mathematical. I'll tell you about them in words, but not now. We need one more mathematical concept. Is there a similar trick to get eight? Yes, eight. Yes, there is a similar trick to get eight. But before I tell you that, let me go back to the closed string. Remember, the closed string you'll have to excite twice. You'll have to get spin two. You'll have to hit it with two oscillators. That meant that the ground state had to have minus two units of energy. But now we have a problem. Minus d over 24, if d is 24, is only minus one. What's the answer? 48. Minus 48 over 24 is minus two. Why 48? Because you need to get the minus two, right? You need the minus two. Yeah, but wait a minute. I know, but you only have 24 directions. We can't change the number of dimensions of space when you go to closed strings. No, no, the same theory has both closed and open strings. You can't change that. Right, you can't do that. No, we'll agree: the theory has 24 transverse dimensions. So what happens for the closed string? You count the oscillations going around both ways? Well, something like that. Remember, you have twice as many oscillators for the closed strings. You've got the ones that go around to the left, and you've got the ones that go around to the right. Everything gets doubled once again. You have not only an oscillation for every direction of space, but you have one for right-going modes and one for left-going modes. So in fact, it's true that you get twice as many, which means that the ground state energy is minus 2, which is just exactly the right thing so that two units of oscillation will bring you up to massless again. That began to smell better. It began to smell better, but still far from a convincing argument.
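The dimension counting in that exchange is trivial arithmetic, but it is easy to check symbolically. A minimal sketch; the minus one over 24 per transverse direction is just the regularized zero-point energy derived above.

```python
from sympy import symbols, solve, Rational

D = symbols('D')

# Open string: each of the D-2 transverse directions contributes -1/24,
# and the total ground-state value has to be -1 for a massless photon.
print(solve(Rational(-1, 24) * (D - 2) + 1, D))       # [26]

# Closed string: left- and right-movers double the count, and the total
# has to be -2 so two oscillators give a massless graviton.
print(solve(2 * Rational(-1, 24) * (D - 2) + 2, D))   # [26]
```

Both conditions give D equal to 26, which is the consistency the argument turns on.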
The real thing has to do with something called conformal invariance. So we're going to take up conformal invariance next time. What conformal invariance means? If you want to prepare yourself, learn a little bit about complex function theory. Very little, not much. We're going to do a very little bit. Learn what the Cauchy-Riemann equations are. If not, I will explain it. And we will come to why string theory is the theory of Turkish taffy. You know what Turkish taffy? You pull it this way, you pull it that way, you stretch it out that way, you stick it together, and somehow it all stays together. It's conformal transformations. Anyway, I think we're finished for tonight. Cauchy-Riemann equations. Cauchy, Cauchy. Just learn a little bit about them.
(October 18, 2010) Professor Leonard Susskind delivers a lecture concerning Planck units and how they relate to string theory in the context of modern physics. String theory (with its close relative, M-theory) is the basis for the most ambitious theories of the physical world. It has profoundly influenced our understanding of gravity, cosmology, and particle physics. In this course we will develop the basic theoretical and mathematical ideas, including the string-theoretic origin of gravity, the theory of extra dimensions of space, the connection between strings and black holes, the "landscape" of string theory, and the holographic principle.
10.5446/15122 (DOI)
Let's begin with a few mathematical preliminaries, which, if I didn't do them now in advance, I would have to do during the course of showing you some things about string theory, and that would be a nuisance. On the other hand, there's nothing here that I think most of you don't know well. First thing, just a couple of calculus formulas that are useful to have on the blackboard. Suppose we have a function, and let's call the dependent variable x; x is a function, and what is it a function of? It's going to be a function of a variable called sigma. Why am I using x? Why am I using sigma? Because it turns out that these are the standard notations that are used in string theory for certain functions. But they could be any functions: it could be y as a function of x, it could be f as a function of g, whatever. And I'm going to allow, for the purpose of this discussion, sigma to be a variable which runs from zero to pi. In other words, half a cycle around the circle. So sigma is an angular variable, except it's not quite an angular variable; it's an angular variable that only goes halfway around the circle. I want to approximate a continuous function by a discrete function, and eventually make my description better and better by filling in the axis with more and more points. That's what calculus does for you. So we replace x of sigma by x sub i, where i runs from one to n. Later on we're going to let n get very, very big. Let's think of the difference between x at i and x at i minus one. We can call that delta x if we like, but it's the delta x between i and i minus one, the way I've set it up: it's just equal to x of i minus x of i minus one. I could have chosen x of i plus one minus x of i. That's delta x. And it's well approximated, in the limit where we put many, many points in there, assuming the limiting function is a smooth, differentiable function, by the derivative of x with respect to sigma times delta sigma. Now how big is delta sigma? Delta sigma is the sigma interval between two neighboring values of sigma. How big is that? That's the whole interval, pi, divided by n, chopped up into n little segments. So this is another formula that we'll make use of. That was for derivatives; finally, a formula for integrals. Let's imagine adding up all the x's, the sum of all the x of i's. What's the approximation for that? Well, take delta sigma, which is pi over n, and multiply it by the sum of the x's. This becomes, in the limit, the integral, in this case from zero to pi, of x of sigma d sigma. We simply replace x of i by x of sigma, and delta sigma becomes d sigma. Basically, it's the definition of an integral, but I want to have these equations on the blackboard because we'll use them several times. OK, that's the first mathematical preliminary. Let's draw a line underneath it. I'm going to try to get all the mathematical preliminaries on the blackboard over here and then use them over there. OK, next. Then we have a function, now a continuous function. Actually it doesn't even really have to be continuous, but take it to be continuous, defined on the interval from zero to pi.
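Here is a quick numerical illustration of those two facts: the finite difference approximating dx/dsigma times delta sigma, and the sum times delta sigma approximating the integral. The choice of sine squared as the test function is arbitrary; any smooth function on the interval would do.

```python
import math

n = 1000
dsigma = math.pi / n
sigma = [(i + 0.5) * dsigma for i in range(n)]   # points filling (0, pi)
x = [math.sin(s)**2 for s in sigma]              # an arbitrary smooth test function

# Sum times delta-sigma approximates the integral from 0 to pi.
print(sum(x) * dsigma, math.pi / 2)              # integral of sin^2 is pi/2

# A neighboring difference approximates dx/dsigma times delta-sigma.
i = n // 3
print(x[i] - x[i - 1],
      2 * math.sin(sigma[i]) * math.cos(sigma[i]) * dsigma)
```

The printed sum is close to pi over 2, the exact integral of sine squared over the interval, and the two difference estimates agree to several decimal places.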
Any such function, with a caveat that we'll describe in a minute, can be Fourier analyzed. Another way of saying it is that it can be written as a sum of sines and cosines, and I'll write out the examples in a moment. Though I should really say sines or cosines: before you do it, you have to establish certain features of the functions which put them in a class called either Dirichlet or Neumann. Anybody know what those are? Dirichlet? OK, boundary conditions. Boundary conditions: the behavior of the functions at the end points, the boundaries of the interval. So for example, this function could stand for the displacement of a violin string. For a violin string, you hold down the ends. The ends are firmly in place, and therefore the value of x at the ends is zero, just because the ends are being held fixed. Those are called Dirichlet boundary conditions. I'll write it once: Dirichlet, with a t at the end. Dirichlet of course was a French mathematician who studied waves moving on things like strings, and one class of boundary conditions is named after him. And Dirichlet boundary conditions mean that x at the end points is zero; that means x of zero equals zero and x of pi equals zero. Now another class of functions, which are in a certain sense the opposite, well, not quite the opposite, but another class of functions, are described by something called Neumann boundary conditions. Neumann of course was a German mathematician, N-E-U-M-A-N-N. And those, we're going to find, are the appropriate conditions for discussing the motion of a string, and I'll tell you where they come from as we go along, but at the moment I'm not doing physics, I'm just stating some mathematical facts. Neumann boundary conditions are the statement that the derivative of the function is zero at the end points. So let's first draw a Dirichlet function. A Dirichlet function might look like this. It's pinned down at the end points, and so x at the end points is zero. For Neumann functions, the derivative of x with respect to sigma is equal to zero at the ends. And those are functions which look like this; let's put a vertical axis there too. They don't necessarily go through zero at the end points, but their derivatives go to zero there. That means they have no slope at the ends; they're flat at the ends. Those are Neumann functions, or better yet, that's the Neumann boundary condition. Now why would you impose one versus the other? We'll discuss that. It's clear that you would want to impose this one if you had a string which was held down. When would you use this one? The answer is when you have a string whose ends are not held down, but we'll come to it. Both kinds of functions can be written as... yeah? I was just going to say, with a vibraphone or a vibraharp, that's where you have a resonant bar supported about a quarter of the way in from the end. I'll tell you where you would use Neumann versus Dirichlet. If you have an organ pipe, and you close up the ends of the organ pipe, then the sound waves have to have zero displacement at the ends. If you open up the ends of the organ pipe, then it's Neumann boundary conditions. What if the string goes up vertically at some point?
We're not talking about strings moving in gravitational fields; vertically and horizontally are the same thing here. No, I was talking about mathematics, really. If it comes up vertically, as an approximation, if it comes up vertically and then... Oh, you mean if it has a vertical jump in it? Yeah. Of course, physicists as a rule don't like such vertical jumps. Discontinuities. Discontinuities usually mean infinite energy. Well, I was thinking of electric currents, where you could have a sharp jump. Yeah. But even a really sharp jump, even that... Yeah, no, you're allowed to have sharp jumps in these functions, but they have to be piecewise continuous. Piecewise continuous means they can have a jump here, a jump here, but... For our purposes, in particular for a string, for a string to have a jump in it would mean the string was broken. And we don't want to break the strings. Okay. Next thing, Fourier decomposition. Or, yes, sir? Yes, John. Michael. Does the open pipe satisfy that boundary condition? What's that? The open pipe, does it satisfy that boundary condition? Well, it's more likely to satisfy this one than this one. I'm just wondering why it satisfies any boundary condition. I'll have to think about it; we can really think about it, but I'm not prepared right now. If it satisfies anything, it's going to be this one, but we can come back to it. Yeah. The pressure at the end is the atmospheric pressure, which is... You're probably right. I don't want to think about it now; I don't want to get off track. I'll give you an example of an idealized string system which does satisfy it. You have a pole over here, okay, and another pole over here, put into the ground. You have a light, almost massless ring which goes around each of these poles, and they're connected by a string. Again, x corresponds to the height of the string, let's say, above the surface of the earth. This kind of system, if the rings were very, very light, would have Neumann boundary conditions, not Dirichlet. Instead of holding the end of the string down, the end of the string is free-floating. Okay? So it's when the end of the string is free-floating like this that Neumann conditions are appropriate. Now, that's not obvious. I'm going to explain the physics of why the ends of strings, freely floating string ends, satisfy this. But for the moment, we're just dealing with mathematics. Okay? All right. Now, again, let's come back to the Fourier decomposition of such functions. Fourier decomposition plays a very, very essential role in so many things, but in particular in string theory. Functions which satisfy Dirichlet boundary conditions, those can be, let's call them, f of x, no, not f of x, x of sigma, can be written as a sum from n equals one to infinity of coefficients x of n, those are a set of coefficients, times sine of n sigma. Why sine? Because sines are the functions which vanish at the end points. Sine of n sigma for any n is zero at sigma equals zero and sigma equals pi. The most general function that satisfies Dirichlet boundary conditions, and which is continuous, and for our purposes differentiable, all the good things, can be written as a sum of oscillating functions like this. This is Dirichlet. What are the functions whose derivatives are zero at zero and pi? Cosine, cosine n sigma.
Cosine of n sigma for any n is flat at the end points, and there are enough cosines: cosine sigma, cosine 2 sigma, cosine 3 sigma. What is cosine of zero sigma, incidentally? Well, what about sine? Let's begin with sine of zero sigma. Why didn't I start at n equals zero? What is sine of zero sigma? It's just sine of zero, right? Which is zero. So there's no point in putting in a term which is zero. What about cosine of zero sigma? It's one, and if the function happens to have an average which isn't zero over the interval, it will start out with a constant term, which is just flat. In other words, a flat function is a perfectly good Neumann function. And so, in this case, x of sigma for Neumann is the sum from n equals zero to infinity, because zero is now an interesting case, of x of n times cosine of n sigma. What about the derivative of x? Let's take the Dirichlet case. What is the derivative of x of sigma with respect to sigma? Well, if you differentiate a sine, what do you get? You get a cosine. So if a function is Dirichlet, its derivative is Neumann, and if a function is Neumann, its derivative is Dirichlet. That's a fact. Good. Now, the last mathematical fact that I'm going to put on the blackboard is a fact about integrals, integrals of sines and cosines. Most of you know it. Anybody who knows anything about Fourier analysis knows it. If not, here's the fact. Let's work with the cosines. Suppose I integrate from zero to pi cosine of n sigma times cosine of m sigma. What do I get? First, what do I get if n is not equal to m? You get zero. If n is not equal to m, you're multiplying one cosine by another cosine, and as most of you know, I suspect, if you do that, integrate, and average the product of them, you'll get zero. So this is zero if n is not equal to m. What if n is equal to m? Then of course it's just the integral of cosine squared of n sigma. So how do you figure that out? What does cosine squared vary between? It varies between zero and one. In fact, if some cosine function looks like this, the square of it will, of course, wherever it's one, stay one, but where it dips below the axis here, the square will be above the axis, so the square of it looks like this. What is the average of the cosine? Zero. What is the average of the cosine squared? A half. All right. So in particular, if you integrate cosine squared, you can just say the average of cosine squared is a half, but you're integrating it between zero and pi, so you get the width of the interval times a half, and that is just pi over two. It's pi over two for any n equal to m. So we can write down now that this is equal to delta n m, that's the symbol which is zero if n is not equal to m and one if n is equal to m, times pi over two. Now I made a mistake for one special case. Anybody know what the special case is? n equals zero. So let's take n equals zero: the integral from zero to pi of cosine of zero sigma. What's cosine of zero sigma? One. All right. Cosine of zero is one, so it's just the integral of one d sigma, and that's equal to pi. So the one exceptional case is when n and m are both equal to zero. Okay? The same sort of thing is true for the sines; I won't bother writing it. Is it exceptional? I think so. No, no, I was going to say: what if only one of them is zero?
Well, yeah, if one of them is zero and the other is not, it's zero. Yeah, okay. Then it's not exceptional that way. Except n equals m equals zero, and then we get pi. Just pi. These are some mathematical facts that we will need. The other thing that I had planned to write out is a bunch of mathematical facts about harmonic oscillators, but I think I'll hold off on it; we'll come back to it. Properties of harmonic oscillators? Yes, let me just write down one fact about harmonic oscillators. If you have a harmonic oscillator, a Hooke's law oscillator, described by a coordinate x, the coordinate of the oscillator you could write as x. Whether or not it's the same x as here, we'll come back to. But for the moment, let's just call the displacement of the oscillator x. What is the kinetic energy? What's the kinetic energy of the point? One half m x dot squared, right? One half m times the square of the time derivative of x. But you can always work in units in which the mass is equal to one. You can always rescale things: you take the m, take the square root of it, absorb it in here, and rescale x. So you can always choose units so that the kinetic energy is x dot squared over two. It's conventional to put the two there. That's the kinetic energy of a point mass. What about the potential energy of a Hooke's law oscillator? So this is the energy. It's plus one half times the spring constant, call it kappa, times x squared. What's the frequency of this oscillator? Can anybody tell me what the frequency is? Square root of k over m. But m is one. I've chosen m to be one; I'm working in a formulation where m is equal to one. So what is the frequency? The frequency is the square root of k. Or in other words, k is the frequency squared. Let's just put it there: omega squared. This is the formula for a harmonic oscillator. This is its energy: kinetic energy plus potential energy. If we wanted to work with Lagrangians, we would write kinetic energy minus potential energy. That's the basic formula that we'll need for harmonic oscillators. The quantum oscillator we will come back to; the quantum oscillator is of course quantized with energies in units of omega. So if you see this, you say, uh-huh, the energy levels of this harmonic oscillator are any integer times omega times h bar. I think we're all on the same page on that. Next question I want to address. This is perhaps a philosophical question, but I think we have to answer it. What is a particle? What do we mean by a particle? We're going to be talking about strings. Strings are not particles. Strings are assemblages of large numbers of particles with springs between them. So in what sense do we mean, what do we mean when we say a certain particle is a string? That raises the question, what do we mean in general by a particle? All right, so what properties do particles have? They have location, but no particle is known to be a point. In fact, no particle that is known is a point, period. No particle that we know of is a point particle. Even the electron is not a point particle. It has a little cloud of photons around it; those photons are virtual. The electron, if you could look at it through a powerful microscope, would have some fuzz around it, and that fuzz would be virtual photons and virtual pairs, electrons and positrons. So an electron is not a point. Certainly a proton or a neutron is very far from a point.
Protons are big, gigantic objects that are, why do I say big? Big is a relative term, and for this class, protons are very, very big. So a particle is not a thing which is a point particle. It can be composite. It can be made of things. Could you take, let's take a thing which I think most of us would not call a particle ordinarily. Let's take a box filled with particles, a box, just an ordinary box tin can filled with a large, a gas of a large, large number of particles. It has a mass. It has a position, namely the center of mass position. When we speak about the position, we usually mean the center of mass position. So the, here it is, here's the thing we're talking about. Is it a particle? Of course not, it's a cup of coffee. It has a position. It has a mass. Why don't we call it a particle? Apart from the fact that it's big, and since I already said big is a relative term, so why is this not a particle? What other ingredient do we add when we speak of particles, when physicists speak of particles? Now I don't expect you to know the answer. I will tell you the answer. Anybody got an answer? Indivisibility. Indivisibility, nope. A proton is certainly not indivisible. Is it spin? Spin. Well. If you try it over, the coffee's going to fall out. I'm not going to try it. This particle happens to have a hole in it. As a matter of fact, I'm going to make the hole bigger, and I'm going to. OK, I will tell you what added ingredient is an important distinction between things which we usually think of as particles and things which we usually think of as highly composite. It has to do with their energy spectrum. If you remember that energy is equal to mass, E equals MC squared, then it has to do with their mass spectrum. An electron has a more or less, well, has a unique mass. The electron, you cannot add to the mass of the electron. I could add to the mass of this very easily. Just shake it. Shake it and stop shaking it. Why have I added to the mass of it? Because I've put in some energy into it that's probably stirred it up and added to its energy, and energy is mass. Because there is nothing that you can do to increase the mass of an electron. I shouldn't say nothing. Nothing that we can do in the laboratory at present can excite the electron into a state of higher mass. It's discrete. It's all by itself. It's energy spectrum. If you were to plot vertically the energy, here would be the electron right over there. The photon would be down at the bottom with zero mass. There would not be a whole bunch of excited states of the electron just above it. The proton does have excited states, but the excited states are pretty discretely different than the proton itself. It takes a couple of hundred MEV, million electron volts, to spin up a proton or to cause it to oscillate. A proton also has to some extent this discreteness and the excitations of it are well above it. It's got a kind of isolation in energy or in mass. What about a cup of coffee? A cup of coffee has a mass of some fraction of a kilogram, but what if I stir it up a little bit? It's over here. It's up here at one kilogram. Now, it was less than a kilogram. What about the first excited state of the coffee? If I were to cool the coffee down to zero temperature, it would have some mass. How much energy can I add to it? I can add an incredibly tiny amount of energy to it. Why? Well, one way is just to poke one molecule and give it, but even less energy by making a sound wave through it, a phonon through it, an incredibly tiny amount of energy. 
So there's a neighboring state right there, which is so close that you really can't distinguish it as a separate individual quantum state, and there are zillions of them, zillions of states very close by, practically forming a continuum of energy levels. That is in practice how we distinguish particles from mush. Mush can be excited by tiny, tiny little bits of energy. Particles, it takes a significant, discrete amount of energy, an identifiable, discrete amount of energy to excite them. That's really the only difference. Is a string, a quantum string, a particle or not? That depends on the excitation spectrum of the energy levels above the ground state. If they're well separated for some reason, then the answer is it will behave like a particle. If they're extremely close together, so close together that the experiment, whatever it happens to be, can't distinguish them, then it won't behave like a particle. That's the characteristic difference between mush, as I said, and particles. The energy that it takes to perturb a proton is a couple of hundred MEV. Oh, some 15%, 15% of its own mass to get the excited state of the proton, maybe more like 20%, oh, about 50% of its mass, 40% of its mass to get up to the next energy level. So that's pretty significant. What about a string, a quantum string? Well, the strings that we're going to talk about, the particular ones we're going to talk about, the separation between the first string and its excited state is more like the Planck mass. That's a huge mass from an experimental point of view, so huge that we have no hope of in a laboratory exciting the excitations. So that's why we call strings particles. Yeah. So is this like an issue of scale then, because if you zoomed in, you would see those different energy levels being no far if you're looking close enough? The energy levels are the energy levels. If you zoomed in, you might be able to see that it was made out of a lot of pieces, but nevertheless, you would discover that even though it's made out of a lot of pieces, to recite them would cost a very large amount of energy on the scale of particle physics energy levels. In other words, if an electron is truly a string, and its mass is what? Half an MEV or something like that, and the excitation energy to rotate it or to vibrate it or to do anything else to it is the Planck mass, which is how many times bigger than the electron? Ten to the 23, ten to the 24 times bigger. So there's a big gap in the spectrum. The existence of gaps in the spectrum is what defines what makes particles different than how about a violin string? Very little energy by comparison with its actual mass. If a violin string is made out of cat gut, what do they make of violin? Do they still kill cats to make violins? Yeah. Yeah, okay. Less than a gram, I don't know what, but it's a significant mass. And the energy that it would take to excite one quantum worth of energy to that violin string when translated into mass is unmeasurably small. So you've got a lot of quantum states and a tiny little interval of violin string is not something we would call a particle. Okay, so now we have to begin to explore the mathematics. We have the mathematics on the blackboard. I suppose we should say we're exploring the physics, but the physics is fairly mathematical. So a photon is not a point particle? A photon is a point particle. It's not a question of point. It's a particle. There's no similar particle with a mass very close to it. 
No, no, I was agreeing that it's a particle, but the question is whether you said none of the particles are point particles. No, a photon is not a point particle. A photon can dissociate into an electron and a positron. An electron and a positron. So if you were to look at a photon carefully, you would find that it's also a fuzzy structure with electron-positron pairs in it, more virtual photons. It's pretty darn small, but not infinitely small. And it even has a measurable size to it. You can actually measure the size of it. The fact that an electron, for example, is not a point is associated among other things with its anomalous magnetic moment. So a photon's size depends on its frequency? No, no, not in the correct definition of it. No. Its wavelength does, but not its physical size. OK. Now, we talked last time about describing systems at very, very large momentum. This is an important part of the logic of the approach to string theory we're going to take. We call it the light cone frame or the infinite momentum frame, and I'll just remind you about it for the moment; we can come back to it at some later time. But at the moment, the main fact that we established, or that we talked about last time, is about a system which could be highly relativistic. What I mean by highly relativistic now is that it has parts, pieces, which move relative to each other with close to the speed of light. OK. A baton twirled around so fast that the ends of the baton are moving with close to the speed of light. That's a very relativistic system, and the question is, can you in any sense at all describe it using something like non-relativistic physics? If the baton is moving slowly, so that all parts of it are moving with only a small fraction of the speed of light, then of course you use non-relativistic physics to describe it, but it's an approximation. And as it spins faster and faster, as the parts of it get up to relative velocities close to the speed of light, that approximation breaks down. It's not a good approximation. So it's in no sense exact, even if the center of mass of the baton is at rest. The issue of using non-relativistic physics to describe the relative motions of the pieces of it is not a question of whether the center of mass of it is at rest. It's a question of the relative motions. So there's no good approximation, no good non-relativistic approximation, to a baton which is spinning fast enough that its parts are in relativistic motion relative to each other. On the other hand, the trick that I showed you last time is exact. In the following sense: you could take the baton twirling around an axis going in this direction and now boost it. All you have to do in order to bring it into the non-relativistic form that I talked about last time is just boost it to a momentum where the momentum of the whole thing is much larger than the momentum going around in the plane. Or equivalently, it's much more relativistic along the axis that you boosted it than it is in the other directions. Then the formulas I showed you last time become exact. And what do they say? They say that with respect to the two-dimensional motion, the motion in the plane perpendicular to the direction that you boosted it, the description is completely and exactly non-relativistic. So that was the trick that we used, or that we're going to use, to describe the properties of strings.
And it's that trick which gives me the courage to explain it to this class because I wouldn't try to explain it in the fully relativistic form. Yeah? Last week you referred to the infinite momentum. That's something I'm talking about. Yeah. So if it truly was infinite, then that would imply that it's traveling at the speed of light. Are you just slightly off? Slightly off. Slightly off. Do you want me to go through that again just very briefly? I don't know. I think you might have done the concepts just in question whether it's truly infinite or really, really big. No, no. You take limits. You take things which have limits. All right. Let me show you the sorts of things you can define which, yeah. What's the axis that the top is rotating? You just pick an axis. Any axis. Any axis, no matter how the system is moving, pick any axis and boost it along that axis. What if it's being boosted along the same axis that it's rotating perpendicular to it? If it's rotating this way, the axis is that way. No, what if it's being boosted in the same axis that it's rotating? Then it's moving this way. It still has an unrelativistic description. What it would look like, nonrelativistically, from a nonrelativistic point of view, is a rod which is doing this. Project it onto the plane. The projection onto the plane is nonrelativistic. And the curious thing about strings, it's completely enough to know their projection onto the plane to know everything about them. This is a bizarre and interesting fact about strings that they do not have any independent coordinates. If you boost them along an axis, they have no independent degrees of freedom along the axis. All the degrees of freedom are perpendicular to the axis. It's a rather remarkable fact about them. Do you amplify that? That means they can't arbitrarily move in that? They can't arbitrarily move in that direction. The motion in that direction is completely determined by the motion in the other directions. Yeah, that's a... They're trapped in a sandwich sort of. Well they are trapped in a sandwich in that direction, but it's more than that. It's not just that they're trapped in a sandwich. That their motion, well first of all they're Lorentz contracted. But if you find examined within that Lorentz contraction, yes they would be moving, but their motion is completely and entirely dictated by the motion in the other directions. That's a curious fact. Is that the source of the holographic? Yeah, well it was one of the things that the speculation began. Yeah, it is. Yeah, it is closely connected with that. And I don't think we're going to try to get into that now for sure. Instead we're just going to... Well, all right, let me just remind you about that light cone story. Let's spend two minutes at it. If you take any system, collection of particles, moving, doing whatever they're doing, it has a center of mass. In the frame of reference at which the center of mass is at rest, it has an energy. That energy is called its rest energy. And it's the thing which when divided by c squared, you call the mass, multiply or divide it. The energy at rest means the energy when it's momentum is zero. In a frame of reference in which it's momentum is zero, that's called the rest mass. It doesn't have to refer to a single particle. It can refer to any system whatever in the frame of reference in which it has zero momentum. In other words, in the frame of reference which it's at rest, in that frame of reference its energy is called rest energy or rest mass. So it has a mass. 
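In formulas, with the speed of light set to one (as is done just below), the rest mass of any system, composite or not, is defined by

m^2 \;=\; E^2 - \vec{P}^{\,2},

so m is just the energy measured in the frame in which the total momentum \vec{P} vanishes.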
That mass is in general composed not only of the masses of the constituents, but the relativistic motion, the kinetic energy, and even their relativistic kinetic energy, whatever it is, it all adds up to the mass. Now what if it's moving? If it's moving, it has an energy which is not just m. It's the square root of the momentum squared plus the mass squared. I've set c equal to one, speed of light equal to one. That's the formula for the energy of it. Let's write that in the form square root of pz squared. That's the direction we're going to boost it along. Plus, I could write px squared plus py squared, but I'm not. I'm just going to write p squared p, well let's just call it the px squared. Why not? px squared plus py squared plus m squared. When you boost a system along an axis, let's say the z-axis, the other components of momentum don't change. So as long as these components of momentum are finite, you can always make pz much bigger than anything else in the problem, bigger than px and py, and much bigger than m. And in that limit, this becomes equal to pz plus px squared plus py squared divided by twice pz plus m squared divided by twice pz. What I've done is expand this quantity here, thinking of pz as the big thing and the rest of it as being much smaller. And this is what you get. First thing is, well this does not look like it has a good limit as it stands when pz gets large. First of all, it gets large. It gets infinite. But what is this infinity? This infinity is, notice that this infinity does not depend on the x momentum, the y momentum, or even all the internal motions which are making up the mass. It's just a constant which doesn't depend on the internal structure of what's going on inside the object. It's just a momentum. You can subtract it off. This is a rule. If you have a conserved quantity, a quantity which is conserved and which whatever it is, you can subtract it from the energy because in the end of the day the only things you're interested in are energy differences. Generally speaking, at least before you worry about gravity, the only things we worry about is energy differences between systems. So you can either subtract it off or bring it to the left-hand side. You can write e minus pz. And not worry about adding a pz there because it simply never contributes to differences of energy of different things. For example, if this were an atom, an atom might decay. What an atom does when it decays is related to the energy difference between the states which decay. This as long as the atom's momentum along the z-axis doesn't change during the decay, this term here doesn't amount to anything. Now what about the pz in the denominator? Why is that there? Keep in mind that energy is related to time. That energy is Hamiltonian. And Hamiltonian in quantum mechanics has to do with the rate at which the wave function changes. Energy is Hamiltonian and Hamiltonian is I d by dt, a time derivative of the wave function. Why is the right-hand side going to zero? If the right-hand side is going to zero, the left-hand side must be going to zero. In other words, the wave function must be changing very, very slowly for a particle moving with a large momentum. The internal wave function, the wave function having to do with its internal motions, why is it that the internal motions look like they're going slow? Time dilation. Time dilation. So, what you want to imagine then is you want to rescale the time. The faster the system goes down the z-axis, the slower its internal motions. 
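Written out, the expansion just described, valid when p_z is much bigger than p_x, p_y, and m, is

E \;=\; \sqrt{p_z^2 + p_x^2 + p_y^2 + m^2} \;\approx\; p_z \;+\; \frac{p_x^2 + p_y^2}{2\,p_z} \;+\; \frac{m^2}{2\,p_z},

so the subtracted combination E minus p_z falls off like one over p_z, which is exactly the time-dilation slowdown just described.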
But we don't want to throw away the internal motions. We want to keep track of them. We want to understand them. So we rescale the time. In other words, we go to a time variable which as we boost the system, we go to a time variable which is suitably redefined so that the internal motion, if it was an atom, we would want to rescale a time so that the rate of revolution of the electrons around the atom would not go to zero. And that simply means throwing away the Pz in the denominator. The Pz is the time dilation factor. If we don't care about it and we want to speed up everything by going to a new time variable, all we have to do is throw away the Pz in the denominator. I suppose that's equivalent to multiplying by Pz. And now you see we have an expression on the right-hand side which does not, which, oh, sorry, which does have a limit as Pz goes to infinity. It's the Hamiltonian or the energy function which keeps track not only of the internal motions but the motions along the x and y direction. The motions along the x and y direction as well as the internal motions are kept track of by a Hamiltonian over here and it has a nice limit as Pz goes to infinity. Number one. Number two, it looks very non-relativistic. The sums of the squares of the momentum, components of momentum of the square, the momentum divided by two, that's the relative, yeah. What happened to the peculiar Pz? What? Peculiar Pz. It's dropped out. The individual component? Well, no, no, we have to worry about the relative Pz to some extent. If the system is composed of a bunch of parts, the other degrees of freedom that we have to keep track of is the ratio of momentum along the z-axis of the different parts. But in string theory, you don't have to worry about it. This is quite a marvel of string theory. You don't ever have to worry about it. It's all constrained in the... But that, it is missing there. It is missing there. Yeah, well, it's not missing here because here I'm thinking about a total system. But if the system is made up out of parts, then each part would have, would give a contribution that looks like this. Well, yeah, last week you were summing, so I couldn't see the parts. Here I've written the expression for the entire composite system, okay? But the composite system of made of parts, this would be the center of mass motion. This would correspond to the center of mass motion, the center of mass motion in the two-dimensional plane. If the system is made of a bunch of parts so that the Pz here are sums of momentum of other particles, then yes, in principle, you should have to worry about the relative components along the z-axis. And if you really had to worry about them, it would be very complicated. String theory, for magical reasons that are only partially understood, but no, they're understood. But still a little bit magical. The relative motion of the parts along the z-axis is completely constrained and completely determined by the other motions. So you don't even have to think about it. Just throw it away. Don't worry about it. Yes, it's not intuitively obvious, but it is true. Okay, so the main message here is that the physics of a very fast-moving system, as it moves in the perpendicular plane, has the form, has the exact, this is not an approximation, exactly has the form of two-dimensional non-relativistic physics. 
But what you have to keep track of is that the portion of the energy which is independent of the state of motion, the thing you would non-relativistically think of as the binding energy of a system, just the energy at rest, is not proportional to the mass in this formulation, but is proportional to the square of the mass. Okay, so that's a review, that's a one-hour review. I will try not to make one-hour reviews for everything we do, but I thought this time it was worth the effort. Okay, in fact, I don't even think we've quite finished what we said the last time. Let's go on to strings now. The right model for what we're talking about now is strings moving in two dimensions, all right, in a plane perpendicular to the direction of the large momentum. So let the dimension, let the large momentum be along the perpendicular to the blackboard, then we want to make a model of a relativistic string which is wiggling around, moving, stretching, doing all the things that strings do, but moving only in two dimensions. To do that, again, I think I'm still reviewing, but I think it's worth it, we think of the string as a collection of mass points. We can begin by thinking of the string as a collection of mass points. And just going back to this approximation here, capital N mass points, capital N mass points, N minus 1 springs between them. Think of a string as a collection of mass points with little springs between them, N minus 1 springs. In energy, let's write down its energy. This is now non-relativistic physics. Each particle has a kinetic energy. It's a sum of all of the particles. The kinetic energy is x dot squared over twice its mass, no, times twice its mass, no, times its mass over 2. I'll get that right eventually. Mass over 2, this is the ith spring. And when I write x here, I mean you have to add, every place I write x, add in also y. X goes to y. There are two directions that everything can move in, and I'll call them x and y. I here does not stand for x and y. It stands for which particle along here we're talking about. Same thing plus my dot squared over 2. That's the kinetic energy. And what about the potential energy? The potential energy is something like kappa times xi, or we could call it delta x. Did I call it delta x before? Yeah, delta x. Let's just call it delta x. Delta xi squared over 2. Our kappa is the spring constant between the springs between neighboring mass points. That's the energy. What about the Lagrangian? That's the difference. This is kinetic energy, potential energy. Lagrangian is kinetic energy minus potential energy, so we might want to write that down at some point. And exactly the same kind of expression except where x everywhere is replaced by y. That's a string of point masses. Now we want to take the limit in which n goes large. So first of all, we need a parameter to label the positions or to label the mass points. That parameter will take to be called sigma, and it goes from zero to pi. That's arbitrary. I could let it go from zero to one. I could let it go from zero to two. I could let it go from zero to two pi. It's just a labeling device. We label going from zero to pi. The reason for labeling from zero to pi instead of from zero to two pi is because we're going to save going from zero to two pi for closed strings, rubber bands. Think of this string as a rubber band that's been cut open and has two ends. A rubber band which is closed together will allow to go from zero to two pi. 
So it's just an arbitrary reminder that this is not a closed cyclic string. We'll do closed cyclic strings too. Now what about the masses? We want the mass of the whole string to remain fixed as we take the limit. If each mass point weighed a gram and I let the number of mass points go to infinity, what would happen to the string? It would get incredibly massive. That's not what we want to do. We want to keep the mass of the whole string fixed, but subdivided into smaller and smaller pieces. So that means that the mass should go as one over n. In fact, I think we can just let it be one over n. We can let it be something else over n. We can let it be two over n. This is actually a choice of units. This is an arbitrary choice of units. In fact, all the choices I'm going to make now can be absorbed into units. So we let the mass go as one over n. And what will the total mass of the string be? No, oh, I'm sorry. Just a moment. Let me go back a second. This mass here is not the true mass of the mass point. It's the non-relativistic analog mass. I shouldn't call it m because then you'll identify it with the relativistic mc squared mass. Let's just call it mu. Mu is just a parameter, the analog mass of the mass point. All right, so let mu be one over n. What would the total analog mass of the system be? One. One. OK. So this is a string with analog mass one. All right, let's just leave it at that for the moment, with analog mass one. What about the spring constant? What happens if you have a bunch of very, very stiff springs? Imagine you have a bunch of very stiff springs, and because of that you can't stretch any one of them very much. Now we combine in series a large number of these springs. What's the spring constant of the whole thing? Is it easier to stretch or harder to stretch? It's much easier to stretch. The spring constant of the composite spring is like one over n, one divided by the number of springs. The meaning of that is that if we want to keep the spring constant of the composite spring fixed, then we have to let the spring constants of the individual little springs here get large. Large. If we have a rubber band and we stretch it, it might have a Hooke's constant of some normal number, easy to stretch. But if you try to take two of the neighboring molecules and stretch them, I guarantee you would have to put in a lot more force to stretch it between two neighboring molecules. So the spring constant, k, I think I will take to be, let's see, this is arbitrary, this is convention, k equals n over pi squared. The pi squared is put there only for the purpose of minimizing the number of pi's in formulas at the end. If you don't want to worry about the pi's, just ignore them. They play no essential role in anything. I put them in in order to get rid of pi's at some later stage, so don't worry about them. You want to start by setting the square root of 2 equal to 1? I set nothing equal to 1 except 1. Sometimes I set some things like 3 equal to 2. But that, no. Right. The normalization I want is in the spring constant as well, but you can ignore that. All right, let's rewrite these things as integrals using exactly the formulas we have up there. OK. Oh, we need one more formula. Delta sigma is equal to pi divided by n. I think I actually have that up there, do I? Yeah. Delta sigma is pi divided by n, taking the pi interval and breaking it up into n little pieces. OK, let's take the kinetic energy first.
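So the starting point, with all the choices just made in place (mu the analog mass of each point, kappa the spring constant of each little spring, and Delta x_i, Delta y_i the stretches of the i-th spring), is

E \;=\; \sum_{i=1}^{N} \frac{\mu}{2}\left(\dot{x}_i^{\,2} + \dot{y}_i^{\,2}\right) \;+\; \sum_{i=1}^{N-1} \frac{\kappa}{2}\left[(\Delta x_i)^2 + (\Delta y_i)^2\right], \qquad \mu = \frac{1}{N}, \quad \kappa = \frac{N}{\pi^2}, \quad \Delta\sigma = \frac{\pi}{N},

with the Lagrangian being kinetic minus potential, the total analog mass N mu equal to one, and the label sigma running from 0 to pi.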
The kinetic energy is going to be 1 over n times the sum of xi dot squared. I'm only going to do this for kinetic energy, and then I'll tell you what the answer is for the potential energy. All right. Now, 1 over n, let's see, 1 over n is delta sigma over pi. I've just divided this equation by pi. So let's see. So this is equal to sum delta sigma 1 over pi times xi dot squared. In fact, I want xi dot squared over 2. So that gives me a 1 over 2 pi out here. But now we use the connection between integrals and sums. A sum times delta sigma is the same as an integral. So this just becomes integral 1 over 2 pi, the integral of derivative of x with respect to sigma squared, with respect to time squared. The derivative of x with respect, let's call it tau squared. Tau is time. Tau is this analog or this time that appears in this, in this light cone or infinite momentum frame physics. X dot means the x dt, or the x d tau. Sum over sigma times a function is related, is simply the integral. And where does the integral go from? The integral goes from 0 to pi. That's the kinetic energy. What about the potential energy? I'm going to tell you now what it's equal to. You can work it out yourself. It uses exactly the same things. The new thing that we need, we have delta x here. And delta x is related to the derivative of x with respect to sigma. All right? Now the other term that we get here is plus derivative of x with respect to sigma squared, all times d sigma. Have you seen the energy function like that anywhere as before? Well instead of calling x x, I called it a phi. And instead of calling sigma sigma, oh god, I would call it x. In other words, if I thought of sigma as a line in space and x as a field, this would be the energy of a simple wave field. And it would satisfy a wave equation. This physics here is exactly the same physics as all waves like physics. Waves run up and down this sigma interval. Now the only thing that you have to keep track of is what happens, let's suppose we have a wave. Let's suppose we have zero. And supposing we have a wave. The wave either moves to the left or the right. What happens to the wave when it gets to the end? It doesn't keep going. It bounces off. The boundary conditions, either Dirichlet or Neumann boundary conditions will cause the wave to bounce off. Do you know what happens to the wave when you have Dirichlet boundary conditions? Yeah, it gets flipped over. It gets flipped upside down in order to keep the field. What happens with Neumann boundary conditions? Then it just reflects left to right, but it doesn't reflect up to down. So that's the basic wave equation. We could write down the wave equation that corresponds to this. How would you write down the wave equation? Well, you'd write the Lagrangian. The Lagrangian is kinetic energy minus potential energy. And then you would work out Lagrange's equations. Lagrange's equations would just be a simple wave equation. I'm not even going to write it because we're not even going to need it. But it would just describe waves moving up and down and reflecting off the ends. What was tau? What was tau? Tau is time. OK, I'll tell you exactly what tau is. Remember that we had to rescale the time because of the slowing down of clocks. Tau is really proper time. It's time, the time of a clock that's moving with our infinite. Imagine there's a clock that we boost together with the system. That clock slows down. 
So we're not measuring real time in the rest frame; we're measuring the time of a clock that moves along with the boost, and the system of interest is moving next to the clock. And so we can compare the internal motions with the clock that moves with the system. That's what this tau is. So it's time, except slowed down so that we can keep track of the time-dilated internal motions. OK, good. But that's what tau is. It's proper time. But you can just think of it as time from the point of view of this non-relativistic picture. All right, so that's the energy of the string. And it has some energy. If the string is at rest in this plane, in other words, if it doesn't have any net motion in the plane, then its total energy will be the square of the mass of the string. OK, so we'll eventually use this formula here to identify this energy with the square of the mass of something. We'll come back to that. A question on mu again. Mu is the mass of each point? Of each mass point, yes. And it's not m because m is the rest mass, and this is the analog mass? Yeah, yeah, yeah. It's a completely analog object that we don't even need to specify. We can just write down this formula here. We just write down this formula. Yeah. Does that quantity have a name, e minus pz times pz? It's called the light cone energy. Yeah. The reason I don't want to call it light cone is because it has nothing to do with cones. It's a misnomer. Well, no, there are people who picture it where you rotate the axes; you get the same results as you've gotten, but some people find it... No, no, no. The point is cones. Cones. These are not light cones. Cones look like this. The axes are at 45 degree angles instead of... Right, those aren't cones. Those are wedges. Cones go around like that. So the surface of revolution of the... I mean, no, it's not a surface of revolution. It's a light front. The right word would be light front. Yeah. It's the wrong terminology. The boosted direction is not a cone. Right. Right. It's two light planes intersecting. Well, don't worry about it. I purposely didn't want to get into that. Right. There's no point in giving you wrong terminology and then explaining why it's wrong terminology. My picture of a cone in four dimensions is an expanding sphere. But this is not that. Not that. Right. Right. Nothing to do with cones. As I said, it was a misnomer. Light front would have been a better word. But didn't Dirac use the term? I don't think so. I'm not sure. I don't know. I don't think so. No. He just did the transformation. Right. He did have light cone coordinates. But his light cone coordinates were like cone coordinates. He also talked about something which I would call light front coordinates, and that was this. So let's not belabor it. Okay. One last point before we... well, I think all of this we went over last time, and I've gone over it more completely now. Now I think I start with something that we didn't go over last time. I don't think I did anyway. Did we talk about the boundary conditions at the end of the string and why they're Neumann and not Dirichlet? Good. Let's do that now. Okay, let's leave this up on the blackboard here. But now we want to come to boundary conditions. We want to come to exactly this issue. Oh, incidentally, let me reiterate again. There's another term, and the other term is of the same form except with y replacing x. Okay. The motion of the string involves knowing both x and y as functions of sigma.
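Putting the kinetic and stretching pieces together, the continuum energy assembled above is, written once for x with an identical copy for y,

E \;=\; \frac{1}{2\pi}\int_0^\pi \left[\left(\frac{\partial x}{\partial\tau}\right)^2 + \left(\frac{\partial x}{\partial\sigma}\right)^2\right] d\sigma \;+\; \bigl(\,x \to y\,\bigr),

with the Lagrangian carrying a relative minus sign between the two terms. The simple wave equation this leads to, which was not written out above, would be \partial^2 x/\partial\tau^2 = \partial^2 x/\partial\sigma^2, with the Neumann or Dirichlet conditions deciding how waves reflect at sigma equals 0 and pi.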
A point on the string is labeled or is an x and a y. Okay, so we have both of them and we mustn't forget about them, about the two coordinates. Incidentally, real string theory often makes use of many more coordinates. Okay. The many more coordinates would just go into x, y. Give me some more letters. Don't use z because we already use z, w, v, and so forth. Gamma, olive. All right. Let's talk about boundary conditions on the ends of strings. The boundary conditions are actually determined by nothing more complicated than Newton's law. Now, you say, why are we allowed to use Newton's law? We're talking about relativity. Well, it's because of this two-dimensional analogy. In the two-dimensional motion of the string, Newton's laws are correct. Now, Newton's law is applied to what? Newton's laws applied to the mass points. Newton's laws apply to the mass points. Tell us how the mass points accelerate given the force on them. What's the force on a mass point? Now a typical mass point is being pulled from the left and it's being pulled from the right. So the forces, they won't necessarily balance, but it will be getting a force from the left and a force from the right. There are two special points which are only getting forces from one side and not the other. Those are the mass points at the end of the string. So let's concentrate on the mass points at the end of the string. And then here's the last point on the string, the ultimate point on the string, that's n, and here's the penultimate point, n minus one. And they're connected by a spring. Now what is the force, the x component of force, on the end point of the string? Hooke's law. Hooke's law tells you that the force on the end of the string is the displacement or the distance between the nth point and the n minus first point. In other words, the force on the string, the force on the nth point here is proportional to delta x, the last, let's just call it delta x, but it means the separation between the nth point and the n minus first point. That's the force on, oh sorry, what about the spring constant, the spring constant, k. And if you remember, k is very large. It grows like n. I won't bother putting in the pies. So the force on the end of the last molecule on the chain there is delta x, it scales like delta x times n. Now what is delta x? Delta x is approximately, and in the limit of a large number of points, it becomes exactly, the derivative of x with respect to sigma times delta sigma. And what is delta sigma? Delta sigma goes like 1 over n. So the n's are going to cancel. I don't care about the numerical constant factor. The force on the end of the string is going to be proportional, let's just put proportional, to the derivative of x with respect to sigma. What is the derivative of x with respect to sigma? It's the amount that the string is stretched near the end. It's the stretching factor near the end. If the x d sigma was zero, it would mean that the nth point and the n minus first point are right on top of each other. If they're separated, then there's the x by d sigma. Now what does that have to equal by Newton's equations? By Newton's equations, it has to equal the analog mass times the acceleration, m times, let's call it x double dot, the acceleration. But what is the analog mass? This isn't m. This is mu. But what is mu? Because we've chopped up the string into lots of little pieces, the end point is very, very light. It has a mass which is only one over n. Well now we have something a little bit bizarre. 
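In symbols, keeping the pi's that were being dropped above and using the scalings chosen earlier (kappa = N over pi squared, mu = 1 over N, Delta sigma = pi over N), the force on the last mass point and Newton's law for it read

F_{\text{end}} \;\approx\; \kappa\,\Delta x \;\approx\; \frac{N}{\pi^2}\,\frac{\partial x}{\partial\sigma}\,\frac{\pi}{N} \;=\; \frac{1}{\pi}\,\frac{\partial x}{\partial\sigma}, \qquad \frac{1}{\pi}\,\frac{\partial x}{\partial\sigma} \;=\; \mu\,\ddot{x}_{N} \;=\; \frac{1}{N}\,\ddot{x}_{N},

so the acceleration of the end point is \ddot{x}_N = (N/\pi)\,\partial x/\partial\sigma, which grows without bound as N gets large unless \partial x/\partial\sigma goes to zero at the ends; that is the Neumann condition arrived at just below.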
If I multiply this by n, let's get rid of the intermediate thing here, multiply it by n. We find that the acceleration will go off to infinity as n gets larger and larger. The acceleration shouldn't go to infinity. That doesn't make sense. That the acceleration of the end point of the string is wildly, wildly violent. The answer is that the right boundary condition at the end of the string is Neumann boundary conditions. So in order to prevent infinite accelerations, which are quite unphysical, in order to prevent the string from having an infinite acceleration of the end point, you impose Neumann boundary conditions. The x by d sigma is zero at the end points. This was in fact the original argument about all of this. It was at this level that it was first understood. Now we have a fairly complete system, a system with a Lagrangian and well-defined boundary conditions. The question which we will take up after about five minutes is how do you do the quantum mechanics of it? That's what we were after. We were after the quantum mechanical energy levels. We want to compute the quantum mechanical energy levels. And having computed the quantum mechanical energy levels, we will know something about the masses of these objects. That's the goal. So we mount, we can throw away everything else off the blackboard and say here is a system, it's a classical system at the moment. How do we make quantum mechanics out of it and how do we find its energy levels? Now what do you do to study this string and in particular to study it quantum mechanically? Well, the first thing you do is you write x as a sum over cosines. Remember what x is? X is a function of sigma. So is y. X is a function of sigma. Y is a function of sigma. Sigma goes from zero to pi. What can we do to make a concrete investigation of this? We can Fourier analyze x. Dirichlet, no sorry, Neumann boundary conditions mean cosines. And so we write x of sigma in this form, also y of sigma. And then we take these two expressions, let's concentrate on x, and we plug them into the Lagrangian. Or the energy, it doesn't matter. We plug them into the energy or the Lagrangian, into the kinetic energy and the potential energy. And we re-describe the system in terms of the kinetic and potential energy of a set of new degrees of freedom. The new degrees of freedom are just these x-ends and y-ends. So let's straightforward. Let's begin with the kinetic energy. Let's see. I don't think we need what's up above. So let's pull it down and start with the kinetic energy. I'll work it out for you. And then I'll probably, well, maybe we'll do the potential energy too. Once we have that, we'll have something that we can work with more easily than this form here. So we take xn, we write it this way. What's xn dot? Dot meaning time derivative. Sorry, not what's xn dot. What's x dot? X dot of sigma. X dot of sigma, we just get by differentiating this. Now the cosine of n sigma doesn't depend on time at all. It's just a cosine of n sigma. It has no time dependence. Does this have time dependence? Yeah. Something has to have time dependence. The string is going to wiggle. What has time dependence is the Fourier coefficients. The sigma dependence is coded in these cosines. The time dependence, if x is also a function of time. In other words, it's a wiggling string. It's a wiggling string. The position of every point on it is time dependent. The string wiggles. Why do we put time dependence in these formulas? Well, we put them in here. 
So think of the xn's as being objects which have a time dependence. OK, how about xn, sorry, x of sigma, dot? What is that? That's going to be a sum on n. We're just going to time differentiate. We're going to differentiate this with respect to time term by term. So this is n equals 0 to infinity of, I'll write it as, xn dot, the time derivative of xn, times cosine n sigma. Now let's write the kinetic energy. To write the kinetic energy, we first have to write the square of this. This is the same as dx by d tau at the point sigma. We have to square it. The square will give us a double sum. So dx by d tau squared will be a double sum, a sum from n equals 0 and m equals 0 to infinity, of x dot n cosine n sigma. I've just rewritten this. But then what do I multiply it by? Should I do this in two steps? No? OK. By xm dot cosine m sigma. So it's xn dot times xm dot. Let's group them together: xn dot xm dot cosine n sigma cosine m sigma. But now we're also instructed to integrate it over sigma, integral d sigma. The only sigma dependence is in these cosines. So we can bring the integral over to here, d sigma. And the last step is to multiply by 1 over 2 pi. The 1 over 2 pi is a convention. It's a convenient convention. How about this integral? The integral has two kinds of terms. It has terms with n and m not equal to 0 and another term with n and m equal to 0. We have to be a little bit careful about them. First of all, after we've integrated over sigma, there will be no terms with n not equal to m. So in other words, we're going to get a single sum, not a double sum. What will be in the single sum? There will be a 1 over 2 pi out front. Now, the first term is going to come from n equals 0 and m equals 0. And that is just x0 dot squared divided by 2. No pi at all, just the 2. Where did that come from? It came from the pi in the integral cancelling the pi downstairs. That's it. What is this? What is this x0, incidentally? Do you have any idea what x0 stands for? It's the average position, right? And that makes it the center of mass position. And the center of mass position of the string, the string, in addition to everything else, in addition to all of its wiggles, it has a center of mass motion. It has a center of mass, which is its average position. This is just the kinetic energy of the center of mass. Everything else is associated with the internal vibrations and the internal motions. This is the piece that has to do with the overall motion of the string. So x0 and y0 are nothing but the center of mass. OK, let's suppress y and just work with x for a minute. Now, what about the other terms? The other terms are a sum from n equals 1 to infinity. Let's see what they have. Let's put them in. Plus 1 over 2 pi, xn dot squared. There are no terms with n not equal to m. And then what's the integral of cosine squared of n sigma? Pi over 2, right? Pi over 2. So the pi's will cancel, and we'll get an extra factor of 2 downstairs, which will make it overall a one-quarter. OK, so this is center of mass. And this here has to do with the internal relative motions of these little constituent elements of the string. Now, what about the other term, minus dx by d sigma squared? Let's work that one out. I think I can erase this. Let's get rid of this for a minute. Let's work it out. How about dx by d sigma? That's not x dot. It's just dx by d sigma. We start with this formula here. That's x. And now we want to differentiate it with respect to sigma.
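Before doing that, the mode expansion and the kinetic term just computed can be written compactly as (x_0 being the center of mass coordinate identified above)

x(\sigma,\tau) \;=\; \sum_{n=0}^{\infty} x_n(\tau)\,\cos(n\sigma), \qquad \frac{1}{2\pi}\int_0^\pi \left(\frac{\partial x}{\partial\tau}\right)^2 d\sigma \;=\; \frac{\dot{x}_0^{\,2}}{2} \;+\; \sum_{n=1}^{\infty} \frac{\dot{x}_n^{\,2}}{4},

with an identical set of terms in y.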
What happens when you differentiate a cosine of n sigma with respect to sigma? You've got a minus. Cosine becomes sine, but there's also a factor of n. So this is an n here. Cosine becomes sine. So that's the derivative of x with respect to sigma. Notice if x involves cosines, derivative of x with respect to sigma involves sines. If one is Dirichlet, the other is Neumann and vice versa. OK, so now let's write down the square of the x by d sigma. We'll be adding then plus a sum over n and m again, because we're going to square this. When we square it, the minus sign goes away. And we'll have n times m times xn xm sine n sigma sine n sigma sine m sigma. And this gets integrated. What am I missing? Anything? Yeah, one over 2 pi. OK, what's the integral of sine n sigma times sine m sigma d sigma from zero to pi? Zero unless n equals m. And if n equals m? Pi over 2, I heard it. I just want to say it, pi over 2. OK, so again, this all adds up. We're constrained to set n equals to m by the integral. The integral will be zero unless n equals m. OK, so it's just n squared. Xn squared. Then pi over 2, right? Pi over 2. That cancels the pi and puts a 4 downstairs. So it looks like this, same factor of a quarter, but instead of xn dot squared, it has n squared times xn squared. If we're doing the Lagrangian, it's minus. For the Lagrangian, it's minus. For the energy, it's plus. OK, so we have three terms altogether. The first term involves x naught. The second term involves time derivatives. It's kinetic energy. This, of course, is also kinetic energy, kinetic energy of the whole string. And then there's something which involves x squared. Let's go back to here. This is the energy of a harmonic oscillator. The energy of a harmonic oscillator has an x dot squared and an x squared. Evidently, for each n, the Lagrangian is the Lagrangian of a harmonic oscillator. The Lagrangian of a harmonic oscillator for each n. What are the frequencies of these oscillators? This system is a collection, an infinite collection of infinite number of harmonic oscillators. It's as though you had a discrete, countable infinity of springs. What are these? Of course, these are just the harmonics of the string. They're the harmonics of the string. And what we've done is to rewrite the Lagrangian in terms of the coefficients of the individual harmonics. And what do you find? First of all, they're not coupled to each other. The nth harmonic is not coupled to the mth harmonic. It's a collection of noninteracting spring, noninteracting harmonic oscillators. What's the frequency of each one of these oscillators? What's the frequency of the nth oscillator? Yeah, n. You can read that off from here. It wouldn't change if you put a 4 here. The frequency would still be in both places, as long as you put it in both places. Omega is the frequency, and omega squared is the coefficient of x squared. Here's the coefficient of x squared. And so we can write, there's a collection of oscillators, each one with a frequency omega n equal to n. What about omega zero? What happened to it? You realize, of course, that this is the restoring force of the oscillators. This is the restoring, this is the Hooke's law stretching energy of the oscillators to restoring force. There's no restoring force for the center of mass. Of course, there's no restoring force for the center of mass. The center of mass is free to fly away. There's nothing holding on to the center of mass. So there's no restoring force here. That's just x naught dot squared. 
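Doing the same bookkeeping for the stretching term, and collecting the two, gives mode by mode

\frac{1}{2\pi}\int_0^\pi \left(\frac{\partial x}{\partial\sigma}\right)^2 d\sigma \;=\; \sum_{n=1}^{\infty} \frac{n^2\,x_n^{\,2}}{4}, \qquad L_n \;=\; \frac{\dot{x}_n^{\,2} - n^2 x_n^{\,2}}{4},

so the n-th mode is a harmonic oscillator with frequency \omega_n = n, the y modes supply a second identical tower, and x_0 and y_0 have no restoring term at all.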
And the internal energy, this is the vibrational internal energy of the string. It's this which has to be identified, where is it? We lost it, with the mass squared, the true mass squared of the relativistic string, the internal energy. This is the internal energy, and it's to be identified with the square of the mass. Okay, we might as well get rid of this. This is, it's trivial. It's just the overall center of mass motion. This is the interesting part here, and as I said, it's a collection of harmonic oscillators. Now, I've written only half of it. The other half are the y's. So another identical, it's a doubly infinite collection of harmonic oscillators. Each one with frequency n. So the first one has frequency one. It's a nice slow frequency. The next one has twice the frequency. The next one has three times the frequency. And of course, these are simply the different harmonics of the string. And what do we do with them? We quantize them. We quantize them. We subject them to the process of quantization, which is very easy for a harmonic oscillator. In fact, we don't even really have to go through the whole exercise. We've done it before. We just have to remember that the energy of harmonic oscillators are quantized in certain forms. Let's see if we can see what the energy levels are going to be. There's going to be a lot of energy levels, an infinite number of course, but we can make some energies in a, we can start organizing them. I think for tonight I won't write down creation and annihilation operators. And let's for the moment suppress y. We'll come back to y. Let's just concentrate on x. Maybe we'll include that. Let's do y2. So for each n, there are two kinds of oscillators. An oscillator along the x-axis and an oscillator along the y-axis. So let's see what the, and they have the same frequencies. The x-oscillator, the x, the x-ends don't have the frequency, same frequencies. The x's and the y's have the same frequency. So there's another replica of exactly the same thing. And of course they correspond to whether the harmonic is vibrating along the x-axis or the y-axis. By combining them together, you can make oscillations along different axes. So let's see what kind of energy levels the system will have. There'll be a ground state where none of the oscillators are excited. They're all in their ground state. Let's call that O, ground state. This does not correspond to the vacuum. It doesn't correspond to an empty space. It corresponds to a single string with no energy excitation. So it's a string. It's a form of particle. It's got an energy. It's got a momentum. And it has a gap in its spectrum. It has a gap to the next energy level because harmonic oscillators have gaps to the next energy. The quantum harmonic oscillator, it takes some energy, some finite energy to excite it. So it's a particle. All right, what's the first excited state? The first excited state you can create by creating one unit of x energy. How much does that correspond to? It corresponds to the frequency. Remember, the energy levels, I should write down a formula, the energy of a harmonic oscillator is equal to n h bar omega. This is different n, different n. Number of quanta, what shall I call it? Q, number of quanta. Number of quanta. It's an integer. It's an integer. Different integers than these integers. These integers label different oscillators. But each oscillator itself can be excited. All right, so we take the energy of an oscillator is Q, the integer Q, times the frequency. 
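As a formula, setting Planck's constant to one (as is done just below) and simply dropping the unspecified ground state constant, the internal vibrational energy, the quantity playing the role of the mass squared here, is

M^2 \;\sim\; \sum_{n=1}^{\infty} n\left(Q_n^{(x)} + Q_n^{(y)}\right),

where Q_n^{(x)} and Q_n^{(y)} count the quanta in the n-th x and y oscillators, and the overall units are the ones fixed by the arbitrary normalizations chosen earlier.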
We're going to set Planck's constant equal to one. Let's forget Planck's constant. Set it to one. And omega is just n. So for the nth oscillator, it's just Q times n. The nth oscillator, the energy of the nth oscillator. All right, so what's the first excitation? You can excite, you know, you can excite either the x or y oscillator, the lowest oscillator, the n equals 1 oscillator. You can excite it one time. You can give it one quantum of energy. How much energy does it have then? Q is one, n is one. It has one unit of energy. And some units, we haven't specified units yet. So whatever the ground state is, there's two states above it. Two states above it, whatever the energy of the ground state, let's not specify the ground state energy. It is whatever it is. Above it, this is the ground state. Above it are two states. Why do I say two states? Either x, you can excite it along the x-axis or the y-axis. Because it just corresponds to the string vibrating or to the first oscillator vibrating along the x-axis or the y-axis. And of course, linear superpositions of them can vibrate along any axis. So that's the first thing, one unit up. Two states, a multiplicity of two. I'll put here the multiplicity. The multiplicity of this one is unique. The multiplicity of this one is two possible states. What comes next? Well, you can do a number of things. You can excite the lowest oscillator twice. You can excite the lowest x-oscillator twice. You can excite the lowest y-oscillator twice. You can excite the lowest x-oscillator and the lowest y-oscillator each once. But what else can you do? You can excite the second oscillator. We'll work this out more carefully next time when we introduce creation and annihilation operators. But you can excite either the x- or the y-oscillator lowest oscillator once. That will give you two units of energy up here. Or you can excite the x- lowest oscillator once and the y- lowest oscillator once. That will also give you two units of energy. But you can also excite the second oscillation once, either x or y. That will give you two more. So altogether there are five states here. Five states, x excited twice, y excited twice, x once, y once, or the second oscillator of either the x or the y, or the second oscillator twice. OK, so let's count. Let's see. In terms of creation and annihilation operators, A for x, B for y. We can take the lowest oscillator, A plus, sorry, A1 twice squared. B1 plus squared on O. This is exciting. The x-oscillator twice, the y-oscillator twice. A plus one, B plus one, O, that's three. And next, A2 plus or B2 plus. Five states, right? Exercise, count the next two levels. How many states are there all together? Exercise for the next time. But the point is, the most important point is that there are gaps between significant gaps between the states. So they're discrete, they're particle-like. We'll work out in a little more detail the spectrum, and then having done all of that, we'll try to identify some of these particles. Some of them are bad guys that we want to get rid of. Others are good guys that we recognize and we say, those are particles that we like. Having done that, we want to do closed strings. Let me just tell you right now in the next five minutes, or the next two minutes, what the relationship, these are called open strings. They're open strings because they have ends. They have end points. Now, the basic process of interaction of string theory is string ends coming together and joining to make longer strings. 
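For reference, the levels counted above, written in the creation operator notation used there (a_n^\dagger and b_n^\dagger for the A and B operators acting on the x and y oscillators, and |0\rangle the single-string ground state):

\text{level } 0:\;\; |0\rangle \;\;(1\ \text{state}); \qquad \text{level } 1:\;\; a_1^\dagger|0\rangle,\ b_1^\dagger|0\rangle \;\;(2\ \text{states});

\text{level } 2:\;\; (a_1^\dagger)^2|0\rangle,\ (b_1^\dagger)^2|0\rangle,\ a_1^\dagger b_1^\dagger|0\rangle,\ a_2^\dagger|0\rangle,\ b_2^\dagger|0\rangle \;\;(5\ \text{states}).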
Strings can come together, if you like, in the context of Hadron physics, meson physics, the strings have quarks at the ends, quarks and quark antiquarks. Now, that's not true for the string theories that we're going to be interested in, but for ordinary Hadrons, quarks and antiquarks. And what can happen is a quark and an antiquark can come together, annihilate, and create a longer string. So the basic interaction process, which we haven't taken up yet, we haven't discussed it yet, is the joining and splitting. If joining can happen, splitting can happen. Joining and splitting of strings. Now, here's a string all by itself. And as it happens, this string executes a fluctuation, which happens to bring the other end around close to the original end. This end of the string over here sees another end of the string over here. It doesn't know it's part of the same string. All it knows is it's found another string, another end, another end to annihilate with. So once you introduce string interactions, you are committed to something new. You're committed to closed strings. Every theory of strings always has closed strings. You can write theories of closed strings that don't have open strings, but you can't write theories of open strings that don't have closed strings. So there's something more general. Well, I don't have to write words general. There's something, if you had to make a prediction about string theory, it would not be that there are open strings, but it would be that there are closed strings. There's no way around the closed strings in string theory. There are ways around the open strings. So the next thing we're going to have to do, after we work out this and figure out what open strings are like, we're going to work out closed strings. I will tell you the answer right now. Open strings often behave like photons. Closed strings behave like gravitons. There is no theory of strings which doesn't have gravitons. There are theories of strings which don't have photons. But we'll come to that next time. For more, please visit us at stanford.edu.
(September 27, 2010) Professor Leonard Susskind discusses how the forces that act upon strings can affect the quantum mechanics. He also reviews many of the theories of relativity that contributed to string theory today. String theory (with its close relative, M-theory) is the basis for the most ambitious theories of the physical world. It has profoundly influenced our understanding of gravity, cosmology, and particle physics. In this course we will develop the basic theoretical and mathematical ideas, including the string-theoretic origin of gravity, the theory of extra dimensions of space, the connection between strings and black holes, the "landscape" of string theory, and the holographic principle.
10.5446/15120 (DOI)
What I had prepared for tonight was a lesson on how string theory gave a resolution to the question of the entropy: what is it that carries the entropy of a black hole? I don't know if we'll make it all through it because it's late already, but let's start it. To say that a system has an entropy, a large entropy in particular, is another way of saying that there's a large number of microscopic degrees of freedom which are too small and too numerous for you to keep track of. So when somebody says the bathtub full of hot water has an entropy of 10 to the 30th or whatever it happens to be, they're implicitly making a statement that there is microscopic structure there which you can't see (well, maybe you can see it but you choose not to see it, too many particles, too numerous) and that they are carrying 10 to the 30th bits of information which for practical purposes is hidden. So it then becomes a question that you can ask: all right, you do the thermodynamics, you study water, you heat it a little bit, you measure its energy, you measure its temperature, you use the laws of thermodynamics and you discover that it has a certain entropy and so forth. It then becomes a question, what is it? What are those microscopic degrees of freedom that are carrying the entropy? Given that it has an entropy, you know they're there, but that doesn't tell you what those degrees of freedom are, it doesn't tell you what those objects are. And the same is true of a black hole, the same is true of general relativity in general, but of a black hole in particular: when Bekenstein put forward the idea that black holes have entropy, the natural question was what are these tiny microscopic things which apparently in some way are on the surface of the black hole, because he said the entropy was proportional to the area, but there was nothing in the general theory of relativity which gave any clue as to what they might be. The reason is not so different than the statement that if you study fluid dynamics, fluid dynamics makes no real reference to the microscopic nature of fluids. What does it do? It's a sort of coarse-grained description of the macroscopic flow of fluids. There's a velocity at every point in the fluid, there's a density of the fluid, a few things like that, and it makes no reference to the microscopic theory. But when you change the temperature of the fluid, you change the energy of the fluid, you discover there's an entropy, and it tells you there's a microscopic structure there. Just as fluid dynamics doesn't tell you what the fluid is made of, fluid dynamics together with some thermodynamics does not tell you what that fundamental microscopic structure is; it tells you it's there. And, you know, it invites you to start to consider the question of what is there beyond the simple fluid dynamics questions you could ask. In the same way, the entropy of black holes invited us to start asking whether there is a microscopic structure to the black holes, or the gravitational fields, or the spacetime, or whatever it is that is explaining this entropy. Now string theory did provide an explanation of the entropy of black holes. It may not be the only explanation, string theory might not be the right theory of black holes, might not be the right theory of nature, we don't know, but within the context of string theory, it provided a picture and an explanation of what the microscopic objects are that carry the entropy of the black hole.
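For reference, the definition being used throughout this discussion, with Boltzmann's constant set to one so that entropy is a pure number (a restatement of what is said, not new material):

\[ S = \ln\big(\text{number of microscopically distinct states that are macroscopically indistinguishable}\big) . \]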
So I thought I would take you through the very, very simplest way of, and it's not so simple, but the simplest example of counting the entropy of a black hole using string theory. Fortunately, you don't have to know too much about string theory, very little in fact. The argument is basically qualitative. Unfortunately, it's a little bit too crude to see any absolute exactness, but it's good enough to get the order of magnitude answer right. So we'll go through that. The first thing you need to know is some facts about the constants of string theory and gravity. First thing is both in gravitation and in string theory, there's a unit of length. The unit of length that's important for the structure of strings is different than the unit of length that's important to the structure of gravity. The gravitational length scale is called the Planck length. Let's call it L sub p. It's some combination, somebody remember it, that it contains g, h bar, and c. I can't remember what it is. Square root of g, h bar over c cubed, does that ring a bell? That sounds right. That's the Planck length, 10 to the minus 33 centimeters, very, very small. We're going to work with units in which h bar equals c equals 1, so in those units g, or the square root of g, is just the Planck length. That's all it is. Newton's constant in units like this, the square root of it is the Planck length. Or to write it another way, the Planck length squared is Newton's constant. Now, Newton's constant has another meaning, of course. Newton's constant is the constant of gravitation, tells you the force law between massive objects. We'll come back to this in a minute. Main point now is that Newton's constant has units of area. That's why we say that the entropy of a black hole is its area measured in Planck units. It's its area measured in units of g. Okay, that's one statement. Now, what about string theory? String theory also has a unit of length. It's not the Planck length, and I'll tell you what it is. If you were to take a typical string in string theory, heat it up, you would find out that it wiggles all over the place. But you will find out that if you looked at it through a microscope, that the size of the wiggles would be of a particular characteristic size scale. You would find out that the wiggles were never on a smaller scale than a certain scale, and that scale is called the string length scale. It's not terribly important exactly how it's defined. It is roughly speaking, it's the size of an oscillating string if it has one unit of oscillation. So it's a certain length scale that goes into string theory. It has to do with the wiggliness of strings and the length scale along which it wiggles. But whatever it is, it's called L string. All right, so there's something called L string, also something called L string squared, but for the moment there's L string. And finally, there's another constant in string theory, and it's the string coupling constant. Anybody remember what the string coupling constant is? First of all, what its meaning is? The probability that if a string crosses itself, it breaks. So it's the probability that if you have a string and it pinches off like this, it's the probability that it breaks. And it's called G. It's the amplitude for the string to break. It plays the same role in string theory as the electric charge does in electrodynamics. In electrodynamics, the electric charge is the amplitude for a charged particle to emit a photon. 
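A compact summary of the constants just introduced, in the units \(\hbar = c = 1\) used in the lecture (nothing here beyond what is said above):

\[ L_P = \sqrt{\frac{\hbar G}{c^3}} \approx 10^{-33}\ \text{cm}, \qquad G = L_P^2 , \]

together with the string length scale \(L_s\) (the characteristic size of the wiggles of a string) and the dimensionless string coupling \(g\) (the amplitude for a string to split when it crosses itself).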
The string coupling constant is the amplitude for an oscillating string to emit a small string. The small string is, of course, a graviton. So it's the amplitude for a oscillating string, whatever it's doing, to emit a graviton. Finally, let's talk about gravitational forces. How do we think first electromagnetic forces? How do we think about electromagnetic forces? We draw a Feynman diagram and a photon is exchanged. Now, this should be taken with a little bit of a grain of salt. It's not exactly really a photon being shot from one charged particle to another, but the diagram has some real meaning. There is an emission of a photon over here. The amplitude for that is E, the electric charge. And there's an absorption of a photon over here. Another factor of electric charge. Two factors of electric charge, the meaning of that in common language, is that the force between two charged particles caused by the photon is proportional to the product of the two charges. One for emission, one for absorption. Well, what about gravity? If gravity is controlled by string theory, then the analogous thing would be two objects, both of which are made out of string. Everything is made out of string and string theory. One emits a graviton. Here it is. It's wiggling around. It emits a graviton. The amplitude for that is G. The graviton goes over to this side here and reconnects with that. In string theory, that is the source of the gravitational interaction between two objects. It's analogous to the Feynman diagram of electrodynamics. And the main point is that the force between two massive objects is proportional to the square of the string coupling constant. One for absorption, one for emission. It also contains, incidentally, the product of the masses of the object and the distance between them squared. But that's not so important. The important thing is that this G squared appears in the force law. Well, there's another object that sometimes appears in the force law. It's not little g squared. What is it? It's big g. The usual Newtonian force, you put big g, not big g squared, but little g. So it sounds very much like little g squared must be big g. But that's not quite true because they have different dimensions. Big g has units of length squared. Little g, that's just a probability. It's the probability that two strings will separate. Probabilities are dimensionless. They have no dimensions at all. So it must be that little g squared must be connected to big g by something that carries dimensions. What's the units of big g again? Length squared. What's the probability of little g? What goes here is the string length squared. Now it's dimensionally consistent. Now it's dimensionally consistent. Area on one side, area on the other. But it's also true that the Planck length squared is L Planck squared. So we've just derived the relationship between the Planck length, the string length, and the coupling constant. Removing the square root, it's the coupling constant times the string length is equal to the Planck length. Now typically we imagine that g is a small number, that the probability for strings breaking up is a small number. If it's a big number, we don't even know how to get started in doing string theory. We typically think of as a small number. And so the Planck length is typically a small fraction of the string length. But that's, we just keep that in mind. This is one of our basic equations. 
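The dimensional matching just described can be written in one line; comparing graviton exchange with Newton's force law gives, up to numerical factors,

\[ G = g^2 L_s^2 = L_P^2 \quad\Longrightarrow\quad L_P = g\, L_s . \]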
L Planck squared is the Newton constant and L Planck is equal to the string coupling constant times the string length scale. These are two important equations. Write them down. I don't know if we'll get through this derivation tonight. We may. All right, that's one set of facts. One fact. No, actually there's two facts there. Excuse me, did you say that g, I'm trying to figure if g is greater or less than one or... Well g is usually less than one. Yeah. So that means that the string scale is typically bigger than the Planck scale. In general the Planck scale is the smallest scale that you wind up thinking about. String scale can be bigger. Okay, next. Entropy. Let's start with the entropy of black holes. What's the symbol for entropy? S. So I won't write entropy, I'll write S. What does B.H. stand for? Isn't it Bekenstein-Hawking? Okay. S black hole is equal to... Now we're not going to worry very much about the numerical constants. First of all, it's equal to the area of the horizon divided by g. Does that make sense? Is this dimensionally consistent? Sure. Area has units of area and g has units of area. There's a 4 in there but we're not going to worry about the 4. That's... All right, another way we could write it is to remember that the area is proportional to the square of the Schwarzschild radius, divided by g, and the Schwarzschild radius is 2mg. So this becomes proportional to m squared g squared. mg is the radius, m squared g squared is the area, and now we can factor out a g. So the entropy of a black hole is the mass squared times Newton's constant. But it's also the mass squared times the Planck length squared. All of these are the same thing. Mass squared times Planck length squared, that comes from here. So that's the black hole entropy. Now let's think about a string. Imagine that we have a string, a little string, a little one, a graviton. But now we hit it hard. We hit it hard and we give it a lot of energy. We heat it up. How do we heat it up? By bombarding it with a lot of particles or just putting it in a frying pan or boiling it, whatever we do with it, we pump a lot of energy into it. What happens when we pump a lot of energy into it? It starts to vibrate, it starts to oscillate and typically, unless we do it in an especially careful way, it'll form a big tangle of string. There it is, all over the place. If we heat it up, that's about what we'll get. It's like a big tangled ball, a big ball, you know, if you go fishing, you remember what your reel looks like after, you know, the big tangled mess. Big tangled messes have entropy. They have entropy just because there are many, many tangles which are hard to tell apart. What's the entropy? The entropy of a thing is the number of configurations which are hard to tell apart, which are too hard to tell apart because the pieces are too small and numerous. So a string, a vibrating string like this also has entropy. Let's try to guess. Let's try to guess what the entropy of a vibrating string is. To do that, I'm going to make a model, a very, very simple model of a string. It's an oversimplification. It gives the right answer. It's certainly an oversimplification, but qualitatively, it's the right picture. And here's the idea. Instead of thinking of string theory in an ordinary space, let's imagine that space is replaced by a lattice, a cubic lattice that looks like that. And then what is a string? A string is a collection of links, the lines of the lattice, just to fix terminology.
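The black hole entropy relations stated above, with the factor of 4 and other numerical constants dropped as in the lecture, and using the Schwarzschild radius \(R_s = 2MG\):

\[ S_{BH} \sim \frac{A}{G} \sim \frac{R_s^2}{G} \sim M^2 G = M^2 L_P^2 . \]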
The lines of the lattice are called links. The nodes are called sites, S-I-T-E, sites, links. The squares, you know what the fancy name for the squares of a lattice are? The plaquettes. The plaquettes. But we're not going to have anything to do with plaquettes. All right. So there's the links of the lattice and a string would be a connected set of links. In other words, a line or a curve through space is replaced by a sequence of links. The links can go back over themselves. No rule that says the link can't go back over itself, but we follow it. And let's say we follow it from beginning to end. Imagine now a big jumbled string, kind of random walking string. How many configurations of it are there? All right, let's start at one end. We're going to ignore the question of whether it's a closed or open string. It doesn't matter for those purposes. You get the same answer pretty much. Supposing you start at one end, how many possibilities are there? I've drawn a two-dimensional world. If it were a three-dimensional world, we would have some lattice coming out of the black board for a four-dimensional world, but the basic picture would be the same. Okay, let's work with a two-dimensional world for simplicity. How many ways are there of starting out? Four. Okay, we get to this one over here. Could have gone to any four. What's the next number of ways to continue? Four. We're allowed to cut back on ourselves times four. Supposing altogether the string has little in links. Incidentally, in this model, I want you to imagine the size of a link is the string length, L string. The size, we're doing string theory. So what else could it possibly be if we're doing string theory? Each one of these is size L string. Supposing the total length of the string was in units, how many states are there? Four to the end, right? Which happens to be two to the two end. Let's just write four to the end. Supposing it was a three-dimensional lattice, what would it be? Six to the end. It doesn't, whatever it is, it's some ordinary number to the nth power. Supposing it was a lattice which wasn't a cubic lattice, but was a hexagonal close pack lattice or some other ridiculous lattice. The number would still be some number to the nth power. Now, what is the definition of the entropy? That's how many random states, and in fact, it's an interesting fact. Of course, some of them are very special. There's the one state which is just a straight line. That's a pretty rare possibility. The chances that you would get a straight line are pretty remote. One out of four to the end if n is large. If you run this on a computer and just random, you know, get a random number generator and just start generating string configurations, and n is relatively large, let's say a hundred, they'll all look about the same. They'll all look at it. They'll form a ball or a random walking stuff. They'll all look about the same. The chances that you will get a untypical looking string are very, very small, with a hundred links, boy, every day they really look very similar. So that means that the number of states which are all similar to each other, hard to tell apart, is four to the n. What's the entropy of that? N log four, right? The entropy is by definition a logarithm of the number of, let's call them, macroscopically indistinguishable states. So now we know the entropy of a random string. Let's put it over here. S string is equal to the number of links times logarithm of four or some ordinary number. Let's not worry about logarithm of four. 
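Restating the counting argument: on a cubic lattice in d space dimensions each new link can continue in 2d directions, so a string of N links has roughly \((2d)^N\) configurations, and

\[ S_{\text{string}} \sim \ln\big[(2d)^N\big] = N \ln(2d) \qquad (= N\ln 4 \text{ in the two-dimensional example}) , \]

which is the statement that the entropy is proportional to the number of links.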
It's proportional to the number of links of the string. In other words, it's proportional to the length of the string. But n is dimensionless. It's just an integer. Length is dimensional; it has units of whatever the units are. So how do I make this, well, let's see, what am I writing? Oh no, no, this is, this is correct. This is, this is correct. This is correct. But there's another way that we can write this. Supposing the length of the string, and by the length of the string, I mean following it around, is L. L is the length of the string. How many little links does it have? Well, n, I know. But what is n? The question is what is n? L over L string, right? That's n. The total length of the string divided up into these little unit string lengths, that's n. So we can also write that the entropy of a string is its length divided by, times, no, divided by L string. Let me rewrite this formula over here. It's A over G, but G is L Planck squared. There's something both similar and totally different about these formulas. Area is replaced by length, L Planck squared is replaced by L string. They are different. They're not the same. They contain different powers. This is linear in the length. This is quadratic in the size of the other thing. It contains L string downstairs. This one contains L Planck downstairs. But there is some similarity, but they are different. Now, let's rewrite it another way. In string theory, each one of these little links has a mass. The mass of one link, let's consider it. Let's call it little m for mass, the mass of one link, l-i-n-k. What is that equal to? Okay, in our units, in units, where are our units? Our units are up here. C equals h bar equals one. What's the relationship between units of mass and units of length? Inverse. A mass is an inverse length. So, the mass of one of these links in string theory, since we're doing string theory, there's only one thing it could be, one over L string. So, that's the mass of a link. Okay, what about the mass of the whole string? Let's take the mass of the whole string now. I think we can erase this. What's the mass of the whole long string, or the whole ball of it? n over L string. But what is n? I think I lost n. n was L over L string. n itself was L over L string. And so, the mass of the string is the length of the string divided by the square of, the length of the string divided by the string length squared. Did that make sense? This is a constant of nature, with the units of length. This is the length following the string, following the string around on its curve. Alright, so that's m string. And now, let's eliminate L. L is equal to m string times L string squared. And let's plug it in to the formula for entropy. What do I get? It's the mass of the string times L string, right? Did I get it right? I got it right. Well, the mass of the, let's not even call it the mass of the string, the mass of the whole thing, whatever it is. In string theory, strings can be anything. You're a string, I'm a string. A star is a string, or a collection of strings. So again, we see something else that's sort of similar. The entropy of the black hole is the square of the mass times the Planck length squared. The entropy of a string is the mass of the string times the string length. There's a pattern here. There is a pattern here. It's the wrong pattern. It's not the pattern we want. Because after all, what we'd really like to say is this big ball of tangled string is a black hole. Why not?
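Collecting the spoken steps into formulas (the same relations written on the board, just gathered in one place):

\[ m_{\text{link}} \sim \frac{1}{L_s}, \qquad N = \frac{L}{L_s}, \qquad M \sim \frac{N}{L_s} = \frac{L}{L_s^2}, \]

so that

\[ S_{\text{string}} \sim N = \frac{L}{L_s} = M L_s . \]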
I mean, you know, you take two strings, you collide them together with a terrific, terrific force. You make a big jumble of string. What could it possibly be? It must be a black hole. But it's not working. It's not working. Something's wrong. Not working, but it's almost working. Well, not almost working, but it's working. Some pattern is right. Let's take a little break. I need a break. No, no, I don't know if we'll get there tonight. I'm not sure, but you asked too many questions. I didn't intend to leave you in suspense, but... All right, here are the basic formulas that you need to follow what I may not finish tonight. But they're simple formulas, so write them down. Look at them, and if we don't finish tonight, remember them. Newton constant, that's nothing but the Planck area. Newton constant is the Planck area. The relation between the Planck length and the string length is through the string coupling constant, and it goes this way, not the other way. Black hole entropy is area divided by Planck area, and it's also mass squared times Planck length squared. We'll forget the G here if we like. Entropy of a string is the length of the string, instead of the area, divided by one power of L string, and it also happens to be one power of mass, not times the Planck length, but times the string length. All right, those are the basic working equations that we'll use. Now, in string theory, G is something that you can change. You can imagine that the coupling constant G is something that somebody could have a knob that could change. Let's not ask how you build such a knob. It's built into string theory that things like coupling constants are things that could vary. So let's imagine that they could vary. And I'll imagine, but I'll assume that they can. What happens, and let's start. Let's start our game with the coupling constant G being very small. If it's very small, negligible, maybe even zero. If it's zero, strings don't split at all when they cross each other. Strings don't split and jump across to other strings. There is no gravity. Gravity is infinitely weak if the string coupling constant is zero. So that's a starting point. And now let's start with a big tangled string. So much energy has been put into it that it's massively big, as big as a star or bigger. It can't be a black hole. Why can't it be a black hole? Because the relationships between the entropy of a string and the entropy of a black hole are not right. All it is is a string, entropy proportional to its length. Now let's imagine increasing G, little G. What starts to happen? What starts to happen is, let's see where are we? Well, what does start to happen as we increase little G? Well, let's hold L string fixed. It's convenient to work in units in which L string is held fixed. So if G gets bigger, L Planck gets bigger, but that's another way of saying that the gravitational coupling constant gets bigger. That's not too surprising. If we increase G, it means little strings can break off. If they can break off, they can jump across and gravitation becomes important. So as little G starts to increase, gravity starts to become important. What happens to this thing as gravity starts to become important? Gravity starts to pull it together. It pulls it together, shrinks the size of it. What happens to the energy of it, incidentally? Does it increase or decrease? It decreases. It decreases.
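The four working equations the lecture asks you to write down, collected in one place (numerical factors dropped, as above):

\[ G = L_P^2, \qquad L_P = g L_s, \qquad S_{BH} \sim \frac{A}{G} \sim M^2 L_P^2, \qquad S_{\text{string}} \sim \frac{L}{L_s} \sim M L_s . \]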
That's because the gravitational attraction, the potential energy is negative, and when it gets pulled together, it lowers the energy. When an object gets pulled together by gravity, typically its energy decreases. It decreases. But it becomes denser. At some point, just like any other object, if you squeeze it more and more, at some point, its structure will pass its Schwarzschild radius. It'll become smaller than a Schwarzschild radius. What happens? It becomes a black hole. Let's imagine doing this very slowly. We slowly, slowly, slowly change G. The string begins to shrink and shrink and shrink. A shrink, I mean to say, that the whole ball of thing begins to contract. At some point, it turns into a black hole. I'll try to tell you when it turns into a black hole. I'll try to give you an idea of what the crossover is between string and black hole. So, this end is G equals zero. No gravity. Gravity has not turned on yet. Now we start increasing G. The string starts to shrink and eventually, when G is large enough, becomes a black hole. I don't know. This is some value of G. The value of G depends on the mass of the object. The more massive it is, the smaller G has to be in order to make it a black hole. That's pretty clear. The more massive it is, the more easily it'll become a black hole. So, at some point, depending on what we started with, it will become a black hole. Now. The question is, is G between zero and one, or is it a big number? Let's say it's between zero and one. I think it's convenient to say. I think it's useful to think of it as being between zero and one. The story is not pretty when G gets bigger than one. We don't want to go there. In fact, we're going to continue to think of G as small. If G is small, it may take a very big object in order to turn into a black hole. It will. It may take a rather large thing to turn into a black hole. But whatever G is, whatever G is, basically there's an object which is big enough that by the time you turn G up to that value, it will turn into a black hole. That's what happens if you slowly, there's another word for slowly, adiabatic. Adiabatic is a physicist's fancy term for changing a control parameter very slowly, turning a dial very slowly. So under an adiabatic change of G, this turns into a black hole. What would you guess happens if you then take the black hole and turn G back again? You turn it to be so small that there's not enough gravity to hold the black hole together. What does it become? A ball of string. A ball of string. What else could it be? What else could it be? Everything is a ball of string when you turn off gravity. And so it also goes this way. Now there's actually a technical point here. It's a technical point about the relative entropy of a single string and many strings with the same total amount of mass. You might think, well, when you go back again, perhaps it doesn't turn into a single string, perhaps it turns into three disconnected pieces of string or a hundred thousand disconnected pieces of string. No, there is a technical point about counting the number of states of strings. And the answer is there are many, many more states of a single string than there are of multiple strings. That's a little bit counterintuitive. You might have thought if I have two strings of the same mass, you take a certain amount of mass and you divide it up into ten strings, how many configurations are there relative to if you took the whole mass and made one big string out of it? 
There are many more configurations of one string. That's surprising. It's a fact. So the most likely thing that you'll get when you go back is another single string. And so you can imagine somebody with this dial turning it one way, string becomes black hole. Turning it back again, black hole becomes string and so forth. Now, what happens to the stringy stuff when you turn it into a black hole? Does the string disappear? Does it get sucked into the black hole? We're watching this thing from the outside. It's inside the horizon. It's inside the horizon. But if we're watching it from the outside, nothing ever crosses the horizon. So the answer is it's not inside the black hole. It's on the horizon or near the horizon of the black hole. Here it is. Of course, it's only a good macroscopic size black hole when the black hole is much bigger than the string length scale. And so when the black hole is really much bigger than the string length scale, the string is just a little bit of fuzziness on the horizon. And that's what you would see from the outside. As you turned up gravitation, the object would collapse, but you would see the string sort of collect on the horizon of the black hole. Okay? Let's think about black holes of different size relative to the string length scale. The string length scale is basically the size of these loops of string here. Let's imagine a smaller black hole. Here's a smaller black hole. And here's the loops of string. Do you think it makes sense to think about a black hole which is itself much smaller than the lengths of the loops of string which make up the black hole? Well, there's no way you can answer that without knowing a little bit more about string theory than we've talked about. But the answer is no. The point at which the Schwarzschild radius gets as small as the loops of string, that's the transition point. Alright? So if you imagined varying the constants so that the relative size of the Schwarzschild radius and the string length scale varied, somehow you did that, then at the point where the string length scale is comparable to the Schwarzschild radius, that's the point at which the black hole becomes a string or the string becomes a black hole. There's another way to say it. It's also the point at which the black hole does not have enough mass to create a gravitational field to hold the string down onto it. The fluctuations of the string will simply cause it to float up off the horizon. So one more element. In this going back and forth between strings and black holes, the point, the point of separation between strings and black holes, let's call it the transition point. The transition point, transition, is when the Schwarzschild radius is equal or approximately equal to the string length scale. If the Schwarzschild black hole radius is smaller than these little loops of string, it doesn't mean anything anymore to say it's a black hole, it just becomes a string. This is the transition point. Let's rewrite it. The Schwarzschild radius is what? m times g, 2mg; 2's don't count, we don't care about 2's, set 2 equal to 1. And g is equal to L-Planck squared. Actually, we can go one step further, can't we? We can relate L-Planck to L-string. Let's see, which way shall we do this? Let's get rid of L-Planck in this formula here. So this would be, yeah, let's rewrite this. m times L-Planck squared, that's m times g squared L-string squared, is L-string. Let's go back over what I did. I'm just going to do it again for you. Do the steps again. Schwarzschild radius equals L-string.
Schwarzschild radius is mg, equals L-string. mg, that's mass times L-Planck squared, equals L-string. And now we use that L-Planck is g times L-string. So this is m times g squared L-string squared, equals L-string. Did I do that right? Yeah. I think we can cancel something here, can't we? Equals 1. I didn't write a 1 on the other case. I will now do it. Okay, so let's see what that means. For a given mass, as you start to decrease the coupling constant g, this quantity here will eventually go down to about 1, and then that's the point at which the transition between strings and black holes takes place. Alright, so we need to know this equation here. Over there, you wrote that the entropies are equal? Of what? You wrote mLp squared, you thought? It was equal to Ls. Well so far I haven't even talked about entropy, I mean I talked about entropy earlier. No, I think that you have m squared Lp squared, and if you factor out m from that, you get the lower equation. Which one is right? It looks like the entropies will be equated. We're going to, you know, I just want to lay down all the equations for you. We're not going to finish it tonight, and then I want you to go and study those equations. Remind yourselves about that, and then we're going to put them together. We're going to put them together, and out of them we're going to see a derivation of the entropy of a black hole. And then coming from the entropy of strings, we're basically going to derive this formula from this formula. String theory derives black hole entropy. Okay, there's one more element, one more moving part. I'm trying to lay out all the parts. It's a, it's a, in a sense it's a simple argument. We don't even need an integral. Not even a derivative, let alone an integral. But on the other hand, there's some logical pieces there which are kind of intricate. So I'll try to go slowly and spell them out. One more. It's the fact that when you change a system adiabatically, you have a control parameter, coupling constant, a magnetic field, an electric field, and you, and you have some system in it. It could be a box of a, a container of fluid, gas, and the whole thing is in a huge magnetic field, and you slowly, slowly, slowly vary the magnetic field. What happens to the entropy? It's a constant. It doesn't change. You turn the magnetic field up, all sorts of things happen. The energy changes, the volume might change because the gas might press against the, all kinds of stuff happens. The one thing that doesn't happen is the entropy doesn't change. This is partly a consequence of something which I've emphasized over and over in these lectures, the conservation of information. Entropy is hidden information. When you change, when you change the environment, not the environment in terms of the atmosphere and things like that, but when you take a control parameter and you slowly change it, the amount of hidden information doesn't change. That's the same as saying entropy doesn't change. So in an adiabatic process, like we're talking about here, where you slowly change the coupling constant, the entropy doesn't change. So that's one more thing. Let's just write it down. I'll just write it. Adiabatic change of anything, but in particular g, does not change the entropy. Question, would that be contingent on, let's say, going through a phase transition, like steam to water or water to gas? Yeah, it is contingent. I'm not going through a phase transition, but phase transitions are characteristic of infinite systems.
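The transition condition derived at the start of this passage, written out cleanly (again dropping factors of 2):

\[ R_s = M G = M L_P^2 = M g^2 L_s^2 \approx L_s \quad\Longrightarrow\quad M g^2 L_s \approx 1 . \]

One can already anticipate the check the lecture defers to next time: right at this point the two entropy formulas agree, since \(S_{BH} \sim M^2 g^2 L_s^2 = (M g^2 L_s)(M L_s) \approx M L_s \sim S_{\text{string}}\).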
Phase transitions, really strict phase transitions only happen when a system has infinite volume. Systems of finite volume never have strictly speaking phase transitions. If you go slow enough, if you go slow enough with the change, the entropy doesn't change for a finite system. The statement that said that a long string has more entropy than having a smaller string. In a certain volume, in a volume, you fix a volume, you put enough string in there to more or less fill a volume, and now you think about the possibility of one long string versus several small strings. It would seem so, doesn't it? Right? Happens not to be true. Like explicit calculation. There is a reason. I'll tell you what. Let's do it next time. Remind me next time. There is a reason, and it's actually simple. It's a surprising but simple thing. Okay, so what's the strategy going to be? The strategy is going to be to start with a black hole. Start with a black hole with a given mass, start with a black hole with a given mass at a given coupling constant. Let's call it M naught and G naught. This is our target object. This is what we're interested in. The black hole of mass M naught when the coupling constant is G naught. And now we're going to start slowly changing G naught. As we slowly change G naught, the character of this object, its mass, its whole structure is going to change, and it's going to morph, and eventually when G is small enough, namely at this transition point, it's going to turn into a string. Okay? Once it turns it... Sorry, is it a string or a black hole? No, it starts as a black hole. Oh, it starts as a black hole? It starts as a black hole, and now we start decreasing G until it turns into a string. Once it turns into a string, we use string theory to tell what the entropy is. But the entropy couldn't have changed during that process because entropy just doesn't change during an adiabatic process. So once it's turned into a string, we can then use string theory to say what the entropy is, and in that way figure out what the original black hole entropy was. As simple as that. Okay? We're going to do it. What we're going to show is when you do it, this formula for the string becomes this formula for the black hole. Okay? So that's a remarkable fact that... And it's been checked for many, many, many different kinds of black holes, rotating black holes, charged black holes, higher dimensional black holes. It's not an accident of a little bit of an accidental agreement. It's something which has been checked for a wide, wide variety of different kinds of black holes. Rotating, not rotating, charged, uncharged, magnetically charged, higher dimensional, you name it. So in that sense, there's a good candidate for the understanding of the entropy of a black hole, and if you like, in some rough and ready sense, it's just the stringy stuff that's trapped near the horizon when the black hole forms. Good question. Yeah. So the string with the greatest entropy is one string. The string of a given length or a given mass. Remember, the mass is proportional to its length. So if you have a given mass of string where you've turned off gravity, in the process of turning off gravity, it became a string. Once it becomes a string, the string that would fill a certain volume of a total mass m, the entropy is greatest for the single string. But you also said that if there's a finite probability, it may end up being two strings instead of one. There's a finite probability that may end up being two strings. 
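The strategy just described, in one line: dial g down adiabatically from g naught until the black hole of mass M naught crosses the transition point and becomes a single long string; since an adiabatic change does not change the entropy,

\[ S_{BH}(M_0, g_0) = S_{\text{string at the transition point}} , \]

and the right-hand side is something string theory knows how to count.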
And in fact, the answer would not change very much if it were two strings. One string, you get an answer. Two strings, you get a similar answer. It doesn't break up into a lot of strings. If it broke up into a lot of strings, you'd get a very different answer. Right. So the statistical counting. This is a fancy kind of statistical mechanics of counting configurations and saying when you go back and forth, the black hole, which is in a sense the maximum entropy that you can squeeze in a region of space, morphs into the maximum entropy that you can contain on a string. So it's a very interesting logic. And we can carry it out for the simplest case beyond the simplest case. It's too hard. Okay, let's. I will. We'll finish this argument next week. For more, please visit us at stanford.edu.
(February 14, 2011) Leonard Susskind gives a lecture on string theory and particle physics that focuses on how string theory gives a resolution to the question regarding the entropy in a black hole. In the last of course of this series, Leonard Susskind continues his exploration of string theory that attempts to reconcile quantum mechanics and general relativity. In particular, the course focuses on string theory with regard to important issues in contemporary physics.
10.5446/15118 (DOI)
Stanford University. Today what we're going to do, oh you had homework, right? You had homework to build a black hole and bring it in, right? Did you build one? Show and tell? No, okay. All right. Well I tried that, and of course she had applied, I wrote it to Mr. Google, and pointed me to Wolff's website, which has the calculation. Calculation of what? Really the radius of a black hole that had the mass of, or had the temperature of three degrees. Oh. Okay. The difference was that they put both of them constant in the denominator. Yeah, it doesn't matter. Suppose you didn't know the answer. Let's try to estimate it. Did we do this last time? All right. We want a black hole whose temperature is in three degrees. Why three degrees? Well, three degrees is the temperature of empty space, due to the cosmic background radiation. So if we had a black hole, which was three degrees, it would be an equilibrium with its surroundings. Now, would that be particularly interesting? No, because if it got a little bit hotter by accident, then it would just proceed to get yet hotter. If it got a little bit cooler by accident, by emitting too many photons, it would suddenly get cooler. So there's no real equilibrium, nevertheless, just to try out some numbers. A three-degree black hole, three degrees Kelvin black hole. All right. Does anybody know the wavelength of light, which is about three degrees? If you know it, tell me. Otherwise, I will have to work it out. It's about a centimeter? No, it's not 21 centimeters. No, no, no, no, no. It's not 21 centimeters. It's microwave. Yeah, it's microwave. Microwave background. Okay. So what is it if it's a... Oh, no, it's not 10 meters. No, no, no, no, no. 10 meters. No, no. Microwave, that microwave is a beautiful one millimeter to one centimeter. Yeah. But it's certainly a harder piece. It's in the order of centimeters. Yeah, it's in the order of centimeters. So let's just say a centimeter. Okay. Let's say... It's got a fit in the microwave oven. What? Yeah. It's got a fit in the microwave oven. That's right. That's right. Good point. So it's got to be less than a foot. Yeah. But it's order of centimeter or so. Okay. Now, here's a fact, and if you haven't noticed it already, that the wavelength of light at the temperature of the black hole is about the same size as the radius of the black hole. That's... There aren't too many parameters around that you can shuffle around. And the radius, the wavelength of light emitted by a black hole of size r, that wavelength is about r. So we can read off from that what the radius of a black hole which emits one centimeter light is. And the answer is a centimeter. So we now know it's Schwarzschild radius. Now, what was the question again? How big was it or how massive? How massive? Okay. So let's go with how massive. If it's one centimeter, how many centimeters are there in a kilometer? A thousand times a hundred, a hundred thousand, right? So this tenth of the five centimeters in a kilometer, and a solar mass black hole is about a kilometer big. I don't know, maybe it's two kilometers, maybe it's a little bit bigger, but it's a few kilometers big. So the thing we're thinking about is a hundred thousand times smaller in radius. How about in mass? The mass and the radius are proportional to each other, 2mg. So it's a hundred thousand times lighter than the sun. About how massive is a hundred thousand times lighter than the sun? It's about the earth. It's roughly order of magnitude, the earth, the mass of the earth. 
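As a rough check on this estimate, using the standard Hawking temperature formula, which is not derived in this lecture: setting

\[ T_H = \frac{\hbar c^3}{8\pi G M k_B} = 3\ \text{K} \quad\Longrightarrow\quad M \approx 4\times 10^{22}\ \text{kg}, \]

a bit over half the Moon's mass and a couple of percent of the Earth's. So the centimeter-wavelength argument above gets the right ballpark, while the exact answer lands closer to the Moon, consistent with the Wolfram result mentioned in the discussion that follows.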
So if the mass of the earth were compressed to a black hole, it would be roughly a centimeter. That's... But why did that come up? I can't remember now. Somebody asked about it. Does that agree with what Wolfram says? How do you... I put the number then I got about a moon mass. Moon mass? Sounds a little small to me. Moon sounds a little bit small, but the point is it's not a thing that you're going to hold in your pocket. Right, okay. That's pretty heavy. I'm surprised it's moon mass. I would have spent a thought line. But okay, that's not important. It's a factor of what? No, no, no. The moon is about a quarter of the radius of the earth. But the gravity on the moon is one-sixth of that. That doesn't... No, no, no. Okay, let's not get into it. Let's not get into it. We want to build ourselves a black hole now. We have some moving parts or some working parts. And let me remind you what the working parts are. The first two working parts are the Penrose diagrams for flat space and for a Schwarzschild black hole. So let's draw them. We don't want to do any real calculation. Calculation is too hard. We want to draw pictures and get the basic mathematical facts from the pictures. All right, so let's start with flat space. First of all, over here, just to remind you what a Penrose diagram looks like, I'll draw a flat space over here. There's time. Now, the other coordinate in the problem is not going to be x. It's going to be radial distance. So it never goes negative. Radius is never negative. Radius goes this way. I seem to have made it a little bit negative over here. It goes that way. And at every point at t and r, there are also the angular directions. So here are my coordinates. My coordinates are polar coordinates centered at me. I'm right over here. I also have a clock, so I'm moving vertically. My clock is ticking off time. And at different distances from me, that corresponds to different r. But at every r, there's a sphere of points. So an object could be at any point on that sphere, but we're going to suppress that sphere. We're not going to draw it. We're just going to draw r and t. Light rays that move toward the origin come in along 45 degree axis. Light rays that move out from the origin go out at 45 degrees and so forth. All right, the Penrose diagram that goes with this is just gotten by smushing it. Smushing is a mathematical operation. And look it up in Wolfram. Look, Wolfram smushing. We smush it this way and that way until we get it on a finite plane. And when we do so, and there's more than one way to do it, incidentally, but they all look the same. There's more than one functional form that you can use for the coordinate transformations, but they all look very much the same. The point over here, that's sort of the origin of both space and time, there's nothing special about this point versus this point versus this point. They're just a different time. But let's pick a origin of time and put it right over here. This is time equals infinity. And it corresponds to the location, the squished, the scrunched location of all the people at different radii who just stand still. If they just stand still, they just wind up vertically up at t equals infinity. But on this figure, they all wind up at what looks like a point. But of course, that point is not a point. It's just the fact that we've squished everything, squished, squished. 
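One standard choice for the "smushing" (conformal compactification), there being more than one as noted, is the following, in my notation rather than anything written in the lecture:

\[ u = t - r,\quad v = t + r, \qquad U = \arctan u,\quad V = \arctan v , \]

so that the infinite ranges of t and r are squeezed into finite ranges of U and V, while radial light rays, which run along constant u or constant v, remain 45-degree lines in the new coordinates.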
And they get denser and denser as you go out here, because after all, there is an infinite distance out to infinity, and that's all being plotted on this Penrose diagram. So the vertical lines, which become kind of hyperbolas, not necessarily exactly hyperbolas, but looking hyperbolic-ish, they accumulate, meaning to say they get closer and closer over here and here. And that's the vertical lines. The horizontal lines, which are the lines of constant time, well, the one at t equals 0, that's pretty obvious. It starts here, and it goes out to infinity. The one at t equals 1 also goes out to infinity. Looks like that. These are not necessarily, in fact, they're not straight lines as a rule. And again, they get closer and closer and closer and closer as you get up toward the infinite time. Same thing in the lower part of the diagram, which I won't draw. If you've drawn it correctly, then light rays move on 45 degree lines. As I said, that's partly just definition of the way you draw it. And as somebody pointed out last time, had I drawn it accurately, a light ray would then move, well, apparently I didn't draw it very accurately, because it's going to wind up over here. Maybe I can just put another one in. That's better. And so forth. That's a light ray from the origin out, a light ray from out back to the origin is that way. You can have light rays that started later and hit the origin later and then go back out. That's fine. They're all pretty much the same, except one is later than the other. You can also imagine light rays which don't hit the origin. So again, I'm the origin. You flash your laser at me, and it goes by me at a meter. From the point of view of radial distance, it comes in from a long distance away, and it sure looks like it's aimed toward me. So it's hard to tell that it isn't the radial light wave. But then as it gets closer, instead of getting into r equals 0, it gets a certain minimum distance in and then goes back out. So a light ray that misses the origin is another one that misses the origin. I'll draw this from bold and bold, comes in pretty much the same and then goes back out. I didn't draw it with any great accuracy, but you get the idea. They all look the same incidentally far away. Every light ray, no matter how much it misses me by, if it's far enough away, it looks like it's aimed toward me. OK, that's flat space. That's one of the moving parts, or one of the tinker toy elements that we'll put together to build our black hole. And when I say build a black hole now, I mean, at least mathematically or in imagination, create a black hole. As opposed to just take the mathematical structure and examine it, how would you go about creating one in the laboratory? Well, we need a pretty big laboratory. But all right, the next thing was the Penrose diagram for the Schwarzschild black hole. I drew it for you last time, and I'll redraw it now. It is a little bit odd. Here's the way you draw it. You begin by drawing a square but on the diagonal, you know, a diamond, a square diamond like that. Then you draw another square diamond next to it of the same size. And then from the top of this diamond, you draw a wiggly line to the top of this diamond, and from the bottom of this diamond, you draw a wiggly line to the bottom of that diamond. The wiggly lines are the singularity, future singularity. Let's call it singularity plus, singularity minus. People outside the black hole, they're over here. This side we're going to find out doesn't really mean anything. 
So don't obsess about this side. Don't even ask me about it for the moment. We're going to find out what happens to it. Here and here are somewhat unphysical. But this region over here, far away from the black hole, it ought to look like flat space. If the black hole were a million miles away, and we're far away from it, it ought to look like good old flat space. So it ought to look like what flat space looks like out beyond a certain distance here. Well, that's not too hard to imagine. Here's r equals infinity. That matches this point over here. Here's t equals infinity. Here's t equals minus infinity. Light rays come in from here. Light rays go out here. Lines of constant r, and that can mean either Schwarzschild r or rho. It doesn't matter. Proper distance. Oh, here is r equals 2mg right here. If you come in along a line like that, you get to r equals 2mg right over there. Let me draw that line a different color. This is the line t equals 0. And t equals 0 starts far away and comes in, and comes into the horizon. Right at r equals 2mg. So that's r equals 2mg. Here's a later time. Here's a later time, and so forth, earlier time, earlier time. Now, this is r equals 2mg. How about r equals 3mg? In other words, farther out from the horizon, that's going to look something like this. r equals, I don't know, 6mg. I'm not sure exactly where it would be, but on this diagram, there's r equals 6mg. r equals 100mg, far from the black hole. That's out here, and so forth and so on. So as you can see, this figure and this figure look pretty similar as long as you're far away from the black hole. But this figure looks quite different than this one does near r equals 0. Here's r equals 0. Light rays bounce off it. There's no r equals 0 that light rays bounce off here. Instead, a light ray will just move right through the horizon and hit the singularity. So near the black hole, they're different. Far from the black hole, they're similar. Once you're in here, as we've talked about before, you cannot escape. You're doomed. You hit the singularity. OK. Now, one more ingredient that goes in is a theorem. I called it Birkhoff's theorem. It is called Birkhoff's theorem. And Birkhoff's theorem is a generalization of Newton's theorem. So let me remind you about Newton's theorem now. We're going to put these elements together in a minute. Newton's theorem, let me first express it in a general way. If you have a spherically symmetric distribution of mass, in other words, it's isotropic, the same in every direction. Here it is. It doesn't have to be a shell. It could be a lump. It could be a shell. It could be a solid sphere with uniform density, like the Earth is not quite uniform density. Or it could be a sphere like Jupiter with varying density, more dense in the center until you get out to the outer boundary where it has almost no density. It doesn't even have to come to an end. It's just a spherically symmetric distribution. Then what Newton's theorem says, and the analog we'll discuss in a moment, what Newton's theorem says is if you take any sphere that the gravitational force of a test mass of a point at that distance on that sphere is exactly the same as the gravitational field of the same amount of mass, not the total amount of mass, but the mass contained within that sphere as if it were located right at the center. The force on this particle here is as if the entire mass inside the sphere were concentrated at the center. 
And the mass outside the sphere has no effect on the gravitational force on that particle at all. That's Newton's theorem. The special case of it, which I'll be interested in, oh, yeah, all right. So what about, let's go slowly, supposing the mass distribution terminates at some radius so that out beyond that radius there is no mass. Then the theorem says that if you're out beyond that mass, the entire mass of the object gravitates, as far as you're concerned, exactly as if the entire mass were at the center. Now let's take the case of a shell. Let's apply that to a shell. Let's apply that to a shell. Now the blue line is a shell of mass, the blue circle, or the blue sphere, if I want to be accurate, is a symmetric shell of mass. If we wanted to think really mathematically with too much precision, no doubt, we would imagine that it was an infinitely thin shell, but we don't have to do that. All right, it has a certain mass, m. And the gravitational field of a particle on the inside, well, how do we find it? We take that point, we put it on a sphere, and we say that the gravitational force on that point is exactly the same as the gravitational mass of all, gravitational force of all the mass inside the sphere. But there is no mass inside the sphere. So the gravitational field or gravitational force on something inside the sphere is zero. The interior of the sphere is exactly the same as if there had been no shell of mass there at all. And what about outside the sphere? Outside the sphere, it behaves exactly as if the same amount of mass was concentrated at the center. OK, now let's take Birkhoff's version of this theorem. B-I-R-K-H-O-F-F. Birkhoff's theorem says a very similar thing. It says, we'll take the case of a shell of mass, but it doesn't really have to be a shell. You can say very similar things for any mass distribution. But for the special case of a shell, it says that the metric tensor, the metric tensor is the gravitational field, or better yet, the geometry, the spacetime geometry inside the shell is exactly the same as if nothing was there. In other words, it's just good old flat spacetime, Minkowski spacetime, the simplest thing with no curvature, no black hole, just empty flat space. And outside the shell of mass m, the metric is exactly the same as if there were a black hole near the center here of mass m. So if it's a big shell, let's take, for example, the mass of the Earth. I'm not going to make a black hole out of it. Let's take the mass of the black hole, sorry, the mass of the Earth, and leave it as the mass of the Earth. The only thing I'm going to do with it is smooth it out so that it's a perfectly symmetric, perfectly symmetric sphere. Then it says that if you're outside the surface of the Earth, the geometry is exactly the same as the Schwarzschild metric corresponding to exactly the same mass. In other words, it behaves exactly the same way, or the gravitational field that you would feel, the gravitational field that you would feel would be the same as that little one centimeter nugget that formed the black hole of the same mass. Okay, so that's the, that's Birkhoff's theorem. Now we want to put together, we want to, we're going to create ourselves a black hole, and this is the simplest of all ways to create a black hole. It's the simplest of all ways to create a black hole. What do you imagine? You try to do this at Livermore, but I don't think it would work.
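The lecture does not write the metrics out, but for reference (standard forms, quoted here rather than derived): with c = 1, Birkhoff's theorem for a spherical shell of mass m says the geometry inside is flat Minkowski spacetime, while outside it is the Schwarzschild geometry

\[ ds^2 = -\left(1 - \frac{2mG}{r}\right)dt^2 + \frac{dr^2}{1 - 2mG/r} + r^2\, d\Omega^2 \qquad (r \text{ outside the shell}) , \]

which is the same line element with m set to zero on the inside.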
Livermore, you know they have these lasers that, they shoot them off all simultaneously and radiation, a shell of radiation comes in and hits something and squeezes it. I have no idea whether it'll ever work or not. I really don't know to do what they want to do. But if you imagined a much more symmetric field of lasers here, where you really filled this thing up and made it a very, very smooth shell of lasers, and then did your flash thing with the lasers, it would create a shell of light coming in. Alright, so we would begin then, let's, let's undraw the lasers and only draw the sphere of light coming in. So a sphere of light is coming in with the velocity C. It's propagating in with the velocity C. Light has energy. How much energy does this light wave have? Whatever it has, let's call it E. In fact, from the outside, we would say it has mass. E equals mc squared. So whatever the energy of it is, divided by c squared and that's the mass of this object is seen from, let's say, far away. Let's not distinguish energy and mass. E equals m. Alright, so this object now is a collapsing shell of mass, collapsing with the speed of light, and Birkoff's theorem, or Newton's theorem, Birkoff's theorem applies to it. What does it say? It says, oh, incidentally, what I should have told you when we were talking about Newton's theorem and Birkoff's theorem, it doesn't matter whether the mass is moving. As long as it stays spherically symmetric, the theorem is still correct. So here the mass is moving inward, and what we can say is on the interior of the shell, at any given time, of course, what may have been the interior of the shell at one o'clock may be the exterior of the shell at two o'clock, but take the interior of the shell at any time, the interior of the shell at any time, the geometry, the metric, the gravitational field is just empty space, as if nothing was there. On the outside, what do you have? The Schwarzschild metric, the good old black hole metric. So you're piecing them together. We're imagining taking a region of space with nothing in it and sewing it together with a region of space on the outside, and the seam, the place where they're sewed together is the mass distribution. Let's try to draw a Penrose diagram for that. Let's start with empty flat space. Let's begin our drawing with empty flat space. We don't need this. Empty flat space. And into that empty flat space, we're going to throw a shell of light. So we begin by drawing empty flat space. Here it is over here. It's just I'm going to get rid of some of the lines. We don't need all those lines on it. So here's our empty flat universe. Just a triangle. Now, from far away, we're going to throw in this shell of light. Instead of throwing it in from Livermore, let's throw it in from infinity. So the light comes in from very, very far away, along a thin shell. And what does that look like? That looks something like this. It's going to be the trajectory of the light ray, or the light shell. Remember, each point in the diagram is really a two-dimensional sphere. So every point on this diagram represents a two-dimensional surface, and every point along the light ray here really represents a shell of light. So at this point, the shell is very big. It's far away, and it's moving toward smaller and smaller r. And when it gets to the center, the shell has shrunk to a point. That's a lot of energy that we've put into a small volume. 
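In symbols, the piecewise geometry that Birkhoff's theorem gives for the imploding shell looks as follows. This is a sketch in units with c = 1, writing m for the shell's total energy, as in the lecture:

\[
ds^{2} =
\begin{cases}
-dt^{2} + dr^{2} + r^{2}\, d\Omega^{2} & \text{inside the shell (flat spacetime)},\\[4pt]
-\Big(1-\dfrac{2mG}{r}\Big)\, dt^{2} + \Big(1-\dfrac{2mG}{r}\Big)^{-1} dr^{2} + r^{2}\, d\Omega^{2} & \text{outside the shell (Schwarzschild)},
\end{cases}
\]

with the two regions sewn together along the infalling shell itself, the "seam" just mentioned.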
Whatever the energy is, if we squeeze it into a nothing volume, we've created a huge energy density, we've created a lot of energy within a small volume, you might think we make a black hole. Okay, and you'll be right. Let's begin. Let's take the first half of Burkhoff's theorem. It says inside the shell, space-time is perfectly flat. Where is inside the shell? Let's imagine drawing one of these surfaces. Along the surfaces, the radius is increasing. Where is the inside of the shell? This is not the inside of the shell. That's the outside of the shell, right? Here's the inside of the shell, from here to here. All right. Let me draw all the places which correspond to the interior of the shell. That's all of this. All of that is the interior of the shell. The interior of the shell, Burkhoff tells us, is just flat space-time. This diagram here was all flat space-time. It means I got the picture right inside the shell. Just by drawing flat space-time, inside the shell, I got it right. Outside the shell, I don't have it right. What does Burkhoff's theorem tell me outside the shell? It says outside the shell here, the metric should be the metric of the black hole. Let's draw the black hole metric. Let's draw the black hole metric. Well, here it is again, but I'm going to simplify it. Now, again, we're throwing in a light ray from infinity. It starts out here, and it comes in, and boom, it's the singularity. Where now is inside the shell and where is outside the shell? Inside the shell is to the lower left. Outside the shell is to the upper right. So where did I get the metric right on this diagram? Outside, right? Out here. That's what it should look like outside the shell, and this, that's wrong. Also, this is wrong. So let's erase it. This part was wrong. Burkhoff tells us that's the wrong thing for here, and this part is wrong. Notice that among other things, I'm throwing away all this problematic region with another universe on the left, a white hole in the singularity in the past, and I'm left with only this. That's the portion of the black hole metric outside of the black hole. So here's the light ray, and it intersects. It hits the singularity right over here. Here's the light ray. It comes in and gets to r equals 0 right over there. These two points must be the same point, the end of the light ray trajectory. If I want, now what I want to do is I want to imagine taking this and simply pasting it on top of this, pasting the two together, and here's what it looks like. Out here, that's the Schwarzschild metric. I've erased the rest of it, but here it is. It's everything above this red line. That's what goes here. Everything below the red line, that's here. That's the simplest formation of a black hole, and the reason that I spent some time at it was to show you among other things what happens to these crazy regions, the other half of the wormhole, beyond the wormhole, and what happened to the white hole. White hole's not there. The wormhole's not there, but there is a singularity there. Singularity is there. Having established what, any questions up for now? It seems like one of the pieces is the horizon and the other is... Well, so far we haven't even talked about the true horizon of this geometry yet. We're going to do that in a minute, but go ahead, that's the question anyway. That's the point where it makes them together on the upper left, or maybe not. On the interior space, one, we had an R equal to zero. That was right over here. Yeah, but on the exterior one, we had only the right. 
You know, at the singularity, R is equal to zero. Right, good. That's a worthwhile point now. If you remember, if you go back to the Schwarzschild metric, the singularity was exactly where that little two-sphere shrunk to zero, where R went to zero. So in fact, you're matching R equals zero along here with R equals zero along here, and they do match. That's the construction of a black hole. That's the way you would do it. This is the kind of black hole you would get. So let's now ask, where is the horizon? Where is the horizon of this black hole? The definition of the horizon now is not R equals 2mg. That was a temporary definition of the horizon. The horizon is the surface which separates the region. Let me say this clearly. There are those places where it's possible to send a message out to infinity. In other words, to send a light ray out to infinity. If you have enough fuel in your engine, you have a chance of escaping from those regions. But in particular, light rays can escape from certain regions, and from other regions, they cannot escape. The definition of the horizon is the surface which separates those two regions. Period. It's the surface in space and time which separates those two regions. That from which you can escape the singularity and from that what you can't. Well, it's very easy to see where that is. You just draw a light cone back to here. If you're over here, can you get out? No. You'll wind up at best over here. What about if you're out here? Can you get out? Sure. You can send a light ray out. Anywhere to the lower right of the green line, a light ray can get out. Now, a light ray can also get in. You can also throw a light ray in. That's just to say, from the outside, you can throw things into the black hole. You cannot from the inside throw things out of the black hole. So the green line here is the separator that separates the places where you can escape from those where you can't escape. Now, notice something curious. Below the red line here is just empty flat space. There was nothing there. It was just good old empty black space, flat space. The signal of the incoming light shell hadn't even gotten there yet. And yet there is a horizon there. What's going on? Well, just think about somebody who happens to be over here. He doesn't know anything at all about this light shell which is going to come in. The light shell, he never, he receives no message that this light shell is coming in. The light shell originated over here. A signal cannot get to him, knows nothing about it. The light shell is coming in, but he doesn't know it, has no knowledge of it at all. Nevertheless, the fact that that light shell is coming in means he's trapped. He cannot escape to infinity. That light shell is going to get him. You can see why. I don't have to, words, a picture is work, worth a thousand words here. He's behind this line. He cannot get out of that line without exceeding the speed of light. So he's going to hit the singularity. Even though there has, nothing has happened there yet. The geometry is flat space, and yet the horizon is over here. The horizon forms before the light shell gets there. Does anything really happen there? Can anybody see anything? Is somebody moving over here? Do they see something happening? No. Nothing special happens over there. But whether they like it or not, they're doomed by the time they get past that light shell. I'll give you an example. I've used the example many times of a drain hole as a black hole. 
The drain hole, a lake, a big, big lake, with a drain hole in the center where the water is being sucked out so fast that at some distance away, the drain hole is draining away water so fast that a rower who is trying to outrow the current can't outrow it because the velocity inward is too fast. So now imagine that the drain hole is shut off. You've plugged up the drain hole. You're sitting a couple of meters away from where the drain hole is. Nothing is happening. The water isn't moving. You're perfectly happy. If it stayed like that forever, you would have no trouble rowing away. But now, unbeknownst to you, somebody pulls the plug from underneath and all of a sudden, the water starts rushing toward the center, you are doomed. Whether or not you felt anything going on, you're doomed because all of a sudden the velocity of the water is such that you can't outrow it. So there's an example of you've passed the point of no return even though there's no flow of the water at that point. Same is true here. Pulling the plug is analogous to this light sheet coming in. Once the light sheet comes in, anybody who happens to be in here is trapped. What happens to the horizon? Let's think about the horizon a little further. Here's the horizon and remember at each point on this diagram there is a two-dimensional sphere. So each point along here represents a two-dimensional sphere. What's the size of the sphere over here? Zero. So we start over here with a zero-sized sphere. There it is. Now we move out a little bit along the horizon. How big is the horizon over here? Well, it's slightly bigger. The horizon has grown even though nothing's happened. Nothing has been going on in here. And the horizon keeps growing up until this point here when how big is it at this point? 2 mg. 2 mg where the mass, remember, we've just taken a black hole geometry and sort of welded it on over here. And the black hole geometry on this side had a horizon whose size was 2 mg. So it just grows until it becomes a size 2 mg. And then it just stays that way unless somebody throws in some more stuff. If somebody throws in some more stuff, the horizon will grow. So that's a picture of the creation of a black hole from an incoming light. So you're saying the reason that the plug gets pulled out of the drain is because of the sphere as well as getting smaller and smaller. But I only use that example to show you that even though nothing has happened yet, you could nevertheless be beyond the point of no return. Or you could be like on the edge of Niagara Falls where the time when Niagara Falls was so frozen up that it wasn't moving at all and you think you're perfectly safe, there is no point of no return, the water's not moving, and then all of a sudden the ice gives way and the water starts going. You're doomed, it doesn't matter that nothing seemed to be happening. Same kind of thing. Okay, shall we take a break? We're going to tell them that it's trapped in there, not knowing it's coming. Yeah. In an earlier class we said that all the masses of the water did say this thing. I'm wondering if he gets, is he on the outside of the horizon from a point of view of a distant observer? Everything that falls on the outside? No, no, the distant observer can't see him. The distant observer's out here, can't see him. So does he go right into the center? So what is the status of the water that's inside? There he is. What can happen to him? He can hit the singularity. He can't avoid it. 
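Putting numbers on the final state: once the shell has passed, the horizon settles down to the usual Schwarzschild size. A quick reference, in the lecture's units with c = 1:

\[
r_{\text{horizon}} \;\to\; 2mG, \qquad A_{\text{horizon}} \;\to\; 4\pi\,(2mG)^{2} = 16\pi\, m^{2} G^{2},
\]

having grown continuously from zero area at the point where the horizon first appears, before the shell has even arrived.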
So anything that's trapped into the forming horizon goes to the singularity and stuff that falls in subsequently appears to be stuck on the horizon forever or somehow. Well, okay, so he got in here somehow, didn't he? I mean, well, he was trapped, but he didn't begin existence over here. He may have come from here, huh? There's particles. There's particles. Maybe he was, maybe his mother was over here and his grandfather was over here and so forth and so on, or maybe he just accidentally collected, came to existence as a funny fluctuation of a bunch of molecules, but those molecules were there before atoms or something was there. The energy was there beforehand. So the energy came in here in areas. And yes, somebody looking back can see his grandfather, or his father, or whoever it is, or his particles, but can't see him. In fact, what he sees is he sees these particles just collecting on this horizon, which is peculiar because in some sense the black hole hasn't even formed yet. Look at that. This is weird. Now, this is really weird. The observer up here looks back and he sees grandfather falling in and then there he is. He's born over here. He looks back and what does he see? He sees grandfather getting closer and closer and closer to something which doesn't exist yet, but can't get it, but never sees him go past it. It goes through that point very close to the horizon intersect at the rest. So I mean, it's a space-time diagram. We're asking what somebody on the outside sees. We're asking for measures. But that sight line looking back at where that photon has come from, it goes through the red line. It goes through the red line, yes. So it's actually close to where the core is. The person out here knows that a black hole formed. He knows a black hole formed. He sees the light coming from... He has no problem knowing that a black hole formed. He can look back and he can see the shell coming in. So the person out here knows that the black hole is formed. The odd thing is that he sees this person over here getting trapped onto the horizon even before the shell got in. Looks back. Looks through the shell. It's just light. He can see through light. He looks back through the shell and sees... So that continues the idea that everything from an outside observer will see. Any mass that's in there will be somehow smushed onto that horizon. Right. Always. Right. Forever and ever and ever. Or until the black hole evaporates. Right. All right, now we're going to come to the really bizarre questions that have to do with the combination of black holes, gravity and quantum mechanics, and information, and all that kind of thing. Let's draw a picture of the horizon now as a different picture. The horizon is just a surface. Here it is. We're going to fall downward now. Here's the horizon. It's really a great big sphere. But just like the surface of the Earth is a big sphere, let's take the flatter of the approximation, or the flat horizon approximation, and just think of the horizon being over here. Things fall down in this direction toward the horizon. And let's talk about the properties of the region near the horizon and what happens to things as they fall to the horizon, through the horizon, whatever. Let's talk about what happens to them. Now, one of the things we discovered earlier on was that black holes have temperature. Black holes have temperature. We even discussed the temperature. I think we even roughly calculated it. They have entropy. The entropy is proportional to the horizon area. 
So it's almost as though the entropy was carried by things which were sort of wall-papered onto the horizon. Maybe not so surprising, since from outside everything that fell in does look like it's sort of plastered onto the horizon. So maybe it's not all that surprising that the entropy of the black hole is proportional to the area. We discovered that if we tried to throw in more bits, it made the black hole bigger. Every bit increased the area of the horizon by one Planck unit, and we went through that. That's a quantum phenomenon. It involves H-bar. So there was entropy and there was also temperature. Anybody remember what the Hawking temperature of a black hole is? Let's get rid of some of these pictures over here and go back to... Hawking temperature of a black hole. Let's call it TH. Remember what it was? You're going to look at your notes. You're going to cheat. H-bar c cubed over 8 pi mg. Is it c cubed? I think it's c cubed. H-bar c cubed over 8 pi mg, which we could also write in terms of the Schwarzschild radius. 2mg divided by c squared is the Schwarzschild radius. So this is also equal, I think, to H-bar c over, looks to me like 4 pi r. Do I have that right? 2mg, I'm slow. I have to put a 2 here and then divide by 2. Yeah. Yeah. In particular, in units in which H-bar and c are equal to 1, which are natural units for us, the temperature of the black hole is proportional to 1 over the radius of the horizon, and the radius of the horizon of a big black hole is big, and so black holes are cold. They're cold. Now, what exactly does this temperature mean? Can temperature depend on position? In some sense, yes, and in some sense, no. What is the sense in which the temperature can depend on position? I can tell you right now, the temperature in New York City is not the same as the temperature here. Okay. But you said the temperature can't depend on position, right? Somebody says, explain yourself. That's true in empty space, and in truly empty space, the temperature is zero. In thermal equilibrium, temperature is really only accurately defined with infinite precision in equilibrium. Temperature is a thing that has to do with thermal equilibrium. Thermal equilibrium is what happens when everything gets equilibrated. In other words, when all the variations in temperature have evened out, when the heat flows in such a way as to increase the entropy to the maximum amount, then the temperature in thermal equilibrium is exactly constant. Now, this is a true statement, but one has to understand in a gravitational field exactly what it means. It does not mean, here's a gravitational field, it does not mean that in thermal equilibrium, if you took an ordinary standard thermometer and lowered it down or raised it up, the temperature would be the same. Here's why. Temperature, what does temperature measure? Roughly speaking, very roughly speaking, it measures the kinetic energy of particles, right? Now, supposing the surface of the Earth was being heated from underneath, here's some flames, these are green flames, heating the surface of the Earth from underneath, heating the surface of the Earth hot, and it was ejecting thermal molecules into the atmosphere. In fact, those molecules were the atmosphere. Let's go to an extreme situation where the molecules are so rare that they don't bang into each other.
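Collecting the formulas being recalled here in one place, with all the constants restored; these are the standard expressions (k_B is Boltzmann's constant, l_P the Planck length, A the horizon area), not anything newly derived in this lecture:

\[
T_{H} = \frac{\hbar c^{3}}{8\pi G M k_{B}} = \frac{\hbar c}{4\pi k_{B}\, r_{s}}, \qquad r_{s} = \frac{2MG}{c^{2}},
\]

\[
S = \frac{k_{B}\, A}{4\, l_{P}^{2}} = \frac{k_{B}\, c^{3} A}{4 G \hbar},
\]

so a big black hole (large r_s) is cold, and the entropy counts roughly one bit per Planck-sized patch of horizon.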
The free path between them is very large or the molecules are so weakly interacting that basically what happens when a molecule gets ejected is it falls back down. So, above the surface of the Earth there are molecules, their gravitation is pulling them back down, they jump up depending on their kinetic energy, in other words, they jump up by an amount which depends on the thermal fluctuations that they might have gotten hit with. The hot surface here is bombarded, vibrating, shaking, doing whatever has the fluctuating, and it'll kick a molecule up and the molecule will fall back down. Do you expect the kinetic energy of the molecules to be the same no matter how high you are? No. Why not? Because they lose kinetic energy when they rise up. Supposing you wanted to know, so here's an example where even though in some very technical sense, which I'm not going to try to describe for you exactly, the temperature is the same everywhere. There is a sense in which that is true, but for our practical purposes where what we mean by temperature is that which is measured by a thermometer, and what is measured by thermometer, the kinetic energy of the particles that are hitting the thermometer, the temperature would be higher down low than it would be up high. So, or the kinetic energy of the particles would be higher down low. The same would be true incidentally if these were not molecules, but we were radiating photons. Photons would also get radiated and photons are also subject to gravitational forces. Photons which are radiated will also fall back down. In fact, as they rise, they lose energy. As they fall, they gain energy. So it's the same with photons. Photons are no different. All right, now let's suppose you were very, very far away. You didn't want to get too close to this thing, but you wanted to know what the temperature was down below. What would you do? Well, you'd measure the temperature high up, and you say, well, it's half a degree temperature high up here, but you could extrapolate down and say if the typical kinetic energy of a particle or a photon that you see up 100 miles from the surface is such and such, then you could just use some straightforward extrapolation and figure out what the temperature is down near the horizon, or horizon, or near the surface of the Earth in this case. Now, what is the Hawking temperature? The Hawking temperature is the temperature seen by a thermometer far from the black hole. It's the actual temperature that a thermometer would register if it were very far from the black hole. It would be registering the presence of photons, thermal photons, black body thermal photons, which somehow were ejected from the black hole, and which were out there as a kind of atmosphere far from the black hole, which were exciting, the thermometer far away, and the temperature is pretty cold. But in the same way, you can extrapolate downward and ask what the temperature must have been close to the surface of the black hole. There actually is a formula for doing it. I think it's a little bit too late now, maybe we can go through it another time, how you actually calculate the variation of the actual measured temperature in a gravitational field. But I'm going to tell you the answer. The answer, first of all, is that very far away the temperature is the Hawking temperature. Where is it? Here it is right over here. As you get closer and closer to the black hole, the temperature arises. 
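One standard formula that does exactly this extrapolation is the Tolman relation for the locally measured temperature in a static gravitational field; it is quoted here for reference, on the assumption that this is the formula being alluded to:

\[
T_{\text{local}}(r)\,\sqrt{-g_{tt}(r)} = \text{const}, \qquad \text{so for Schwarzschild} \quad T_{\text{local}}(r) = \frac{T_{H}}{\sqrt{1 - \dfrac{2MG}{r c^{2}}}},
\]

which goes to the Hawking temperature far from the black hole and grows without bound as r approaches the horizon.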
And as you get very close to the black hole, as you get close to the horizon, we won't call this the Hawking temperature. It's kind of the extrapolated Hawking temperature, what you would extrapolate into. Let's give it a name, T of rho. It's the temperature measured at a proper distance rho from the horizon. It is 1 over 2 pi rho. 1 divided by 2 pi rho, the closer you are, the higher the temperature. In fact, the horizon is a very difficult place to escape from. In fact, if you're behind the horizon, you can't escape at all. If you're right at the horizon or very close to the horizon, it's extremely difficult to get away, which is another way of saying you lose a lot of energy. A photon loses a lot of energy when it tries to escape from very close to the horizon. So if you see a finite temperature a distance above the horizon, then extremely close to the horizon, the kinetic energy must have been very large. That's what's going on here. The actual temperature of the photons or of a thermometer that would be lowered on a cable, here's our experiment, we're going to lower a thermometer on a cable. It's going to be a tiny, tiny thermometer because for one reason or another we'll have to use tiny thermometers, but we'll lower a tiny thermometer on a cable. The cable will send back a signal telling us what the temperature recorded on the thermometer is, and as you lower it down, you'll find the temperature goes up. Up and up and up and up. Well, if you really believed this, then you would say that the temperature close to the horizon was very, very hot. Now, we have a funny situation now. Let's suppose we take an atom and we lower it down closer and closer to the horizon. What happens to that atom as it gets close to the horizon? Forget general relativity. Now, it's just moving into a very hot region. What happens to it? It becomes excited, but it's getting hot. It's getting really, really hot. It gets ionized. It's getting really, really hot. The nucleus gets torn to pieces by the high temperature. The quarks get liberated. It's defeated. It's defeated. Well, yeah. Right. Yeah, that's right. If we put a cable around you and lowered you down and put your feet to the surface of the black hole, you would get a hot foot. That's what this picture says, and how do you get away from it? You've done a calculation. You've done a legitimate calculation, and this is a legitimate calculation. The calculation is correct. By now, it's been tested over and over again. If you do lower something down on a cable and then pull it back up, it will give all of the evidence of having been in a very, very hot region. So this is not to be questioned. But there's something a little funny about this, because from this point of view and from this picture over here, the horizon is nothing very special. It's just a green line that I drew there, but there was nothing there. It was just a mathematical line that I drew to separate the region from which you can send a signal out to a sailor. You can't send a signal out. But on the other hand, nothing special was going on in this horizon over here. The curvature didn't get large. The metric near the horizon was close to flat space. We've already established that. This is nothing very dramatic going on. And so you might expect that if you dropped something through the horizon and now think about it in the frame of reference of the falling object, the falling object doesn't say it took an infinite amount of time to go through. 
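With the constants put back in, the near-horizon temperature quoted here reads as below; writing it in terms of the acceleration of the hovering thermometer is a standard gloss rather than something stated at this point in the lecture:

\[
T(\rho) = \frac{\hbar c}{2\pi k_{B}\, \rho} = \frac{\hbar\, a}{2\pi k_{B}\, c}, \qquad a \approx \frac{c^{2}}{\rho},
\]

where a is the proper acceleration needed to hover at proper distance rho from the horizon, in the near-horizon approximation.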
Here's the falling object going through the horizon. The clock ticks off seconds. It took exactly seven seconds to go from here to here. In the frame of reference of the falling object, the object says, I fell through, or I fell past the point where I originally calculated there was a horizon. It doesn't say bump, I just felt the horizon. But when it was outside the black hole, he made a calculation of where the horizon was, estimated how long it would take him to fall through it, fell through it, nothing happened. So from the point of view of somebody who falls through the horizon, nothing happens at that point. That sounds contradictory. It sounds either he does get burned up or he doesn't get burned up. And if he doesn't get burned up, how did he avoid the consequences of this very, very high temperature? Or it could be an atom. Does the atom get ionized? It does get ionized, though its particles get spread over the entire horizon by the extremely high heat, which causes things to diffuse. Or does it fall through here? There's got to be an answer. It's one or the other. It can't be both, right? I think one way of trying to get at this is to try to think as much as possible in what I would call an Einsteinian kind of way, or maybe it's a Heisenbergian kind of way. To try to be operational about it. To try to say, look, let's see if we can prove or disprove this or that by doing an experiment. Now here's the kind of experiment that you might try to think about. The temperature of this atom here increases as it falls down. In fact, it gets very hot before it passes through the horizon. It doesn't have to go through the horizon to get good and hot. How hot does it have to get to ionize a hydrogen atom? 13 electron volts, right? How hot is that? 3,000 degrees. Something like that? Yeah, a few thousand degrees. So at some point it reaches 3,000 degrees. It's outside the horizon. At that point, if all makes sense, then it should get ionized. And that's before it's self-rule. Therefore, it ought to be able to send the signal back out, or somebody stationed over here ought to be able to send the signal back out and said the atom either did or didn't get ionized. From one point of view, the atom ought to be able to say, nothing's happening to me. So freely through the horizon, I'm within a certain distance of the horizon. You told me the temperature was going to be 3,000 degrees, but I don't feel a darn thing, and I'm going to send out the message to you and tell you. From the other point of view, you've done a mathematical calculation which says that the temperature gets hot. You've got a conflict, a real conflict. So let's see if we can turn it into an experimental conflict. Before we do, I want to remind you of the Heisenberg uncertainty principle. And the reason is the logic that goes on here and that is important in understanding this better is very close to the logic of that Heisenberg used. What Heisenberg said, he said you can't measure both the position and velocity of a particle. In fact, in some sense, it can't have both a position and a velocity. It has one or the other, or you can measure one or the other. If having a certain property means that that property can be measured, then it has one or the other and not both. And that seems contradictory. Why does it seem contradictory? If it makes sense to talk about the position of a particle at any time, then it makes sense to talk about the sequence of positions of a particle. 
If it makes sense to talk about the sequence of positions of a particle, then it has a trajectory and that trajectory can be used to extrapolate and find the velocity for the particle. So if it's really true that the particle has a position at every instant, then it might not have a velocity at every instant. Heisenberg said, I don't care what you say about whether it does or doesn't. Let's ask if we can really measure both simultaneously. I just want to very quickly remind you of the logic here. So supposing you have a particle, an electron, and you want to measure its position to high precision, you want to measure its position to high precision, how do you do it? You hit it with a photon. You create an image of it on a screen. And in order to have the resolving power to see the location of that electron to within a resolution delta x, you have to have a photon, you have to hit it with a photon or a light wave, whose wavelength is no larger. It's less than or equal to delta x. But then, if the wavelength is less than delta x, that says that the momentum is larger of the photon, is going to be larger or equal to Planck's constant over delta x. Is that Planck's constant with the across or without the cross? That doesn't matter. That means that this particle is going to get bombarded by a photon in order to measure it. You might have said, look, I believe that particle is standing still. I've created that particle in such a way that it's standing absolutely still. And I want to confirm that it's standing absolutely still at the origin. So what I will do is measure its position very quickly and then instantaneously afterwards measure its velocity and try to check that it was both at the origin and at rest. Well, you know the story. I mean, the story is extremely well known. I'm just reminding you of it. That if you have to hit it with a particle of momentum p, it's going to get hit. It's going to recoil. The particle is going to go off in some direction with a momentum more or less of order p. In what direction? Well, that direction is somewhat random. And so if you try to measure the velocity afterwards, it will not be what you thought it was in the first place. In other words, the experiment creates exactly the condition that it was designed to show wasn't the case. The experiment was designed to show that the velocity was zero, let us say. And just the experiment itself created the condition which was the opposite of what it was designed to confirm. That's the Heisenberg logic. It's deeply, of course, connected with quantum mechanics. H bar appears in it. In classical electrodynamics, you can make an image with an arbitrarily small amount of arbitrarily low intensity electromagnetic wave. You don't have to have one whole quantum. In quantum mechanics, you can't subdivide the energy less than a quantum. So if you want to do the experiment, you're stuck. You're going to give that electronic kick. Okay, let's now analyze. Let's be Heisenberg and analyze the question, what happens when an atom falls onto the horizon? Does it or does it not get ionized? Does it or does it not get really mutilated by the high temperatures above a certain line here? Sorry, below a certain line. Below a certain line, here's the line rho, at a distance rho, where the temperature T is 1 over 2 pi rho. Now, what we're interested in is when the atom falls below the line where the temperature has exceeded the ionization temperature. 
We're not interested in what happens when the atom is above the region where it's nice and cold. We're interested in what happens to the atom when it gets within the region where the temperature is of order, the ionization temperature. Once it gets down that low, it's in this huge gravitational field. It fell from above. The huge gravitational field near the black hole has accelerated it. It's accelerated it by that point up to close to speed of light. Just because the gravitational field is so intense near the horizon, so hard to get away, big gravitational field, this thing is falling downward now with close to the speed of light, how much time does it have before it actually falls through the horizon? Once it's through the horizon, you cannot find anything out about it. Once it's through the horizon. So how much time does it have before that distance? Well, the answer is what is it? Rho over C, if this distance is rho, the time that it has is rho over C, which is proportional to one over the temperature, one over the ionization temperature. The higher the ionization temperature, the shorter the amount of time that the particle has in that region. Okay, now how are we going to do an experiment from the outside to determine whether the, before it falls through, it's too late, once it falls through the horizon too late, can't get any information out. We want to find out during this period, the short period of time here, while it's falling in the high temperature region, we want to try to find out if it was ionized. How do we find out if it's ionized? Well, we shine some light on it. Here might be a source of light over here. That light is going to illuminate it, bounce off, and create an image. This image over here is either going to be an atom unionized, or it's going to be an atom splattered all over the place, one or the other, we don't know which. Okay, we have a very short amount of time to do the experiment, namely the amount of time that it takes to fall from the region where the temperature first got hot enough to ionize it until it falls through the horizon. That's a short amount of time. That means, what does it mean in terms of resolution power? Another way to say it is we have a small distance that we have to resolve. We have to see that atom within a small distance, that small distance is rho. What does it mean? Well, it means that that photon had better have a wavelength, which is smaller than the distance rho. So lambda has to be smaller than rho, and that says that the energy of that photon has to be larger. I'm going to, let's see, h bar over lambda times c, divided by c. Let's see, e is p times c, this is p c. Right? Or h, h. It's actually never rho over c. The energy has to be larger than this. The smaller the wavelength, the larger the energy, and lambda is of order rho. So the energy has to be larger than this. Well, this is essentially exactly the energy that corresponds to this temperature, one over rho. We didn't put in the h bars and c's here, one over rho. One over rho, same thing here. In other words, you have to hit the particle, you have to hit the atom with a photon, which is energetic enough to ionize it. In order to see it in that period, you have to hit it with a photon, which is energetic enough to ionize it. In other words, you have to create exactly the conditions which the experiment was designed to show do not happen. The experiment will ionize the atom. And there is no way to check whether it was the experiment that ionized the atom. 
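The chain of estimates in this argument fits in a couple of lines. A sketch, dropping factors of 2 pi just as the spoken argument does, with E_ion standing for the atom's ionization energy:

\[
t_{\text{available}} \sim \frac{\rho}{c}, \qquad \lambda_{\gamma} \lesssim \rho \;\Longrightarrow\; E_{\gamma} \gtrsim \frac{\hbar c}{\rho} \sim k_{B}\, T(\rho),
\]

and the atom first enters the region where k_B T(rho) is of order E_ion at rho of order hbar c / E_ion, so any photon able to resolve it there carries at least enough energy to ionize it.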
You try to do it again after the experiment and see if it's still ionized. Yes, it will be still ionized. Any experiment that you do, which is designed to prove that the atom wasn't ionized, will ionize the atom. And ionize the atom in an uncontrolled way, in a way that you can't go back and check that the atom was not ionized. So, yeah. What would be that high energy ionize it, even in the frame of the falling object? When we get hot. Yes. In the frame of the falling object, yes? It's just hot. Yes, that atom gets whacked. That atom doesn't like what happened. And if somebody out here were to monitor you and see whether you can fall through the horizon or not, if they did an experiment to check, they would evaporate you. But if I'm just falling through. But if you're falling through and nobody does the experiment, you're fine. How does the falling through in somebody does do an experiment? Then you're in trouble. And you're in trouble. And couldn't you measure far away from the black hole and look for gradients and from that extrapolate? No, you'd have exactly the same problem. If you try to check that the temperature is rising, what are you going to check? What are you going to check about it? Let's say you just drop a thermometer down. You drop a thermometer, let the thermometer freely fall, and you want to find out if the thermometer has gone up a little bit. How do you find out? You can't ask the thermometer if the thermometer is falling down. You've got to get some radiation back from the thermometer. So you have to hit it with some radiation in order to, you have to look at it. You have to look at the thermometer. I have to record while it's going down. Well, something has to interact with the thermometer. And what you'll always find inevitably. Do you pull it back up after it has recorded its descent? Or then it will show that it was hot. Then it will show that it was hot. Just the acceleration of turning it around will make it hot. No, but it records the time. So during the free fall, you have to look at that. As long as you can be getting information back, that information will be coming back in the form of quanta, and those quanta will either have been emitted by the object or have been interacted with the object. There's no way you can get information without getting quanta back from the object. Either the object radiated those quanta and therefore recoiled, or you hit it with quanta. There's just no way to do it without influencing the object. And the amount of influence is always, always exactly what the temperature at that point would have done. So it's a sort of catch-22. You cannot do the experiment without causing the thing you were trying to show to the... Yes, Kevin. So the high energy in the photon there is to resolve a small distance. Yeah. Or a small time. Does that end up being the same thing as saying that to get that reflected observation back out of the black hole takes a tremendous amount of energy because you're getting it... Strictly you're not trying to get it out of the black hole. It hasn't fallen into the black hole yet. Yes. Out of the region near the horizon takes a huge amount of energy to get... Does that really end up being the same thing? Yeah. Yeah. Yeah, it does. It does. In the end of the day, it does. Yeah. That photon did not have to be a high energy photon by the time it got out. Because in traveling out, it will lose some energy. But it did have to be a high energy photon in the region in here. 
And this atom doesn't care what the photon energy was out here. It wants to know what the energy that was... You can still wait over here. If it's going to admit the measurement or the information itself, it has to admit it's such a high energy. Absolutely. Yeah. It doesn't have to be an external photon hitting it. It could be something that it naturally emitted. But that's right. It seems that the atom falling into the black hole has a preferred direction. Namely, the horizon. Typically, we don't call a temperature......directed kinco energy into a baseball. We don't call it hot. We just say it has kinetic energy. Well, it's not the kinetic energy of the object which is falling in. The object is falling in. It's kinetic energy is in temperature, but it's interacting with an environment which is getting hotter and hotter. It's the environment that it's interacting with which is very hot. It's the radiation from the black hole. It's the radiation from the black hole. Could you check with a magnetic field if the ion is... What's that? The ionization. Can you check with a magnetic field? If you... Look. You could do an experiment. You could imagine an experiment in a laboratory that's falling with the atom. It could be magnetic fields, whatever's in there. And that experiment, as you fall into the atom, could be doing the experiment. The problem is getting the information back out of the black hole. Right. So whatever happens in that laboratory, you have to interact with the laboratory with something external and then get the information back out. And it will always be the case that the energy of the photons that are necessary to get that information back out are the same, or at least as big as the energy of the environmental photons in the thermal bath. So if I have a transmitter in the thermometer, by the time I get close to the event of rising that temperature, the amount of energy I have to generate to set the signal out would evaporate at any rate. Exactly. Isn't that distance really very small? Very small. No. That's the point. You don't. The distance over which the temperature is high is very small. So you can either think that you have to have a photon of high spatial resolution or high time resolution. Either way, you'll either say the photon has a high momentum or it has a high energy. This is all equivalent to the usual Heisenberg uncertain principle for time and energy, right? Well, that's what we're used. Yes. Yes. Yes. So this puzzle about what goes on at the surface of a black hole, whether it does or doesn't get hot, gets all mixed up with these questions of quantum uncertainties and so forth. And that's the result. There's no way to answer the question other than to do the experiment. And when you do the experiment, the experiment will do exactly what you were trying to show didn't happen. That's all. Yeah. Okay. So, what's the solution to catch it in that window? Is it possible to use a lower resolution, lower energy, and do it a bunch of times and hopefully you'll just catch it, right? Well, you only get one shot at each particle falling through the horizon. Is the probability just as ridiculous to you? You say, supposing I did the same experiment over and over with many, many atoms and use a low resolution. I don't know. I never thought about that one. It's less energetic photons. Of course, each one to be questioned. It's a sensible question and it ought to be answered. We have to have enough energy to get something to come out. Right. Same problem you have. 
Right, but you don't have to do, you know. You have to have enough energy to check it out. There are not going to be any loopholes to get around the uncertainty principle. Right. You can't beat the uncertainty principle by what you're trying to do. Are we saying we can predict that person falling in long field after a track? No. We can only say that the experiment will in itself cause the conditions that I was trying to prove didn't happen. That's all we can do from this kind of analysis. In other words, if somebody comes and says, look, I'm convinced that for somebody who falls through the horizon, nothing happens to them, then my answer will be, yes, but can you do an experiment that shows that? And the answer will be no. You cannot do an experiment that shows that. That's all this is designed to show, that at the level of operational experimental physics, it would be impossible to confirm that the atom did not experience a high temperature. And we'll come back. We'll look at a couple more Gedanken experiments and try to see what they say. Of course, there's no way to say what happens if no experiment is done. It's very much like the two-slit experiment, where if you don't do an experiment, there's no meaning to the question of which slit you went through. So if you don't do an experiment, there are no answers to the question. Okay, let's call it a night and I'll see you next week.
(February 7, 2011) Leonard Susskind gives a lecture on string theory and particle physics that focuses again on black holes and how light behaves around a black hole. He uses his own theories to mathematically explain the behavior of a black hole and the area around it. In this last course of the series, Leonard Susskind continues his exploration of string theory, which attempts to reconcile quantum mechanics and general relativity. In particular, the course focuses on string theory with regard to important issues in contemporary physics.
10.5446/15116 (DOI)
Stanford University. Okay, let's go back to, I'm going to try to, we've done this before incidentally, but there's only so much physics out there and I can't keep making it up fast enough to, which I could but I can't, when I was younger I could. So we're going to do things that we've done before. Alright, let's go back to what we learned about the geometry of a black hole near the horizon. Near the horizon we found out that it was just pretty much good old flat spacetime, approximately, very, very close to the horizon. And we described it by a particular metric. For this purpose we started of course with the good old Schwarzschild metric, but we made a bunch of changes of variables. In particular, instead of the Schwarzschild coordinate R, there's never enough ink here, instead of the Schwarzschild coordinate R that goes into the Schwarzschild metric in the usual way, we replaced it, oh good, we replaced it by another variable which was actually the proper distance from the horizon. R is not the proper distance from the horizon, but rho is. That's its definition, that's the definition of rho, proper distance from the horizon, and I won't bother writing down the relationship between them, not terribly important to know, but rho, one thing is important: at the point when R is equal to 2MG, that's the Schwarzschild radius, that's the point where the horizon is, at that point rho is equal to 0, proper distance from the horizon. We also made another change of variables, omega, which is a time variable, and it's a dimensionless time variable, equal to the Schwarzschild time divided by 4MG. So roughly speaking, oh, in every place here, C is equal to 1. Roughly what omega is: time and length are measured in the same units when C is equal to 1. 2MG is the Schwarzschild radius of the black hole, so apart from a factor of 2, omega is time measured in units of the Schwarzschild radius. So that's what omega is, and that's why it's dimensionless, because it's a ratio of things, time divided by the Schwarzschild radius. And when we did that, we went through a little exercise, it was a number of steps, more steps than I really like to do, but nevertheless we did it, and we found out that near the horizon of the black hole, we call it the near-horizon geometry, the proper distance squared, ds squared, was equal to minus rho squared d omega squared plus d rho squared. I can write this in terms of proper time. The difference between proper distance squared and proper time squared is just a change of sign. You can write d tau squared and put a plus sign here: rho squared d omega squared minus d rho squared. And then there was another piece coming from angles on the horizon, but I'm not terribly interested in that. I'm interested in distance from the horizon and time, a two-dimensional picture of the black hole, just time and distance from the black hole. This is what the metric is like. And then I pointed out to you that this metric is very, very similar to the metric of ordinary flat space, conventional flat space, the surface of this table here, in polar coordinates. In polar coordinates, we would ordinarily write, I don't want to write it here because I don't want to confuse it with this, but ordinarily in polar coordinates theta, rho, we would write that ds squared is equal to rho squared d theta squared plus d rho squared. So they look very similar to each other. The only difference is this minus sign, and omega is analogous to theta. It's a hyperbolic angle. Just to finish, yeah, all right, that's good.
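For reference, here are the near-horizon formulas being recalled, written out explicitly. The rho(r) relation is the standard one that the lecture skips; c = 1 throughout:

\[
\rho = \int_{2MG}^{r} \frac{dr'}{\sqrt{1 - \dfrac{2MG}{r'}}} \;\approx\; 2\sqrt{2MG\,(r - 2MG)} \quad (r \to 2MG), \qquad \omega = \frac{t}{4MG},
\]

\[
ds^{2} \approx -\rho^{2}\, d\omega^{2} + d\rho^{2} \qquad \text{near the horizon},
\]

which is just flat spacetime written in hyperbolic polar coordinates, often called Rindler coordinates.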
This is flat space, ordinary flat space. This happens to be nothing but ordinary flat space time in a kind of polar coordinate reference frame. Everybody remember this? And omega is T over 4 mg. Now, this is the metric close to the black hole horizon. Here's a picture of this. This is just flat space in polar coordinates. So it looks something like this. Omega is a kind of hyperbolic angle variable, which way down the remote past, omega is equal to minus infinity, way up here, omega is equal to plus infinity, and rho, lines of constant rho are just hyperbolas which look like this. And here's the horizon. Right over here is the horizon, or right over there, if we come in along a line of constant omega, fixed time, that point right over there is r equals 2 mg. It's rho equals 0, but it's r equals 2 mg. That's what we set up last time. We also said that the horizon of the black hole, well, the general horizon, there's two notions of horizon, incidentally. One is called bifurcate horizon, I'll tell you the words. The bifurcate horizon is simply this point over here. And the bi, I don't know what furcate means, but bi means 2, this are the intersection of these two light cones here. That's called bifurcate horizon. But there's another notion of horizon. The other notion of horizon is a more physically interesting one. It's this whole light like surface here. Why is that interesting? Well, if somebody gets stuck behind that light like surface, they cannot send a message to the outside. They cannot send a message out beyond that light like surface. So this is a kind of horizon. And once you're behind it, you can't get any information out to distances beyond the r equals 2m, or rho equals 2m, or rho greater than 0. Okay, so that's the geometry of the near horizon. What about when we go very, very far from the black hole? When we go very far from the black hole, we see also it looks like space-time, it looks like flat space-time. But here's the metric that we would see. We take the Schwarzschild metric, the good old Schwarzschild metric. Let's write it down. ds squared is equal to 1 minus 2mg over r dt squared plus 1 over 1 minus 2mg over r dr squared. And now we go far from the black hole, which means r very large. What happens to this when r is very large? This is dt squared, excuse me. It just becomes dt squared plus or minus dt squared plus dr squared. r and rho, far from the black hole, r and rho are the same thing. Once you're way far away from the black hole, the proper distance to the black hole is approximately just r. It's only near the black hole that r and rho are very different from each other. But when you're far from the black hole, radius is just radius. Distance from the horizon is basically the same as it would be in flat space. So far from the black hole, rho and r are basically the same thing. This factor here is not interesting. It goes to 1. This factor down here also goes to 1. So we get minus dt squared plus dr squared. So far from the black hole then, the s squared is minus rho squared. But let's write, sorry, not minus rho squared, excuse me. I just want to write dt squared. I just want to write dt squared because there is no interesting factor. So what is dt squared? dt squared is 16m squared g squared d omega squared. What I've done here is just multiply by 4mg so that omega, so that t is equal to 4mg times omega. That's this piece here. And then the other piece is just plus dr squared. 
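So the two limits of the same metric, read off the board here, can be summarized in the same notation, with the angular part dropped:

\[
ds^{2} \approx
\begin{cases}
-\rho^{2}\, d\omega^{2} + d\rho^{2}, & \rho \ \text{small (near the horizon)},\\[4pt]
-16\, M^{2} G^{2}\, d\omega^{2} + d\rho^{2} = -dt^{2} + dr^{2}, & \rho \ \text{large (far from the black hole)},
\end{cases}
\]

using t = 4MG omega and the fact that rho and r coincide far away.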
The d rho squared is just this over here making use of the fact that rho and r are the same thing far from the black hole. So look what's happening. That near the black hole, the only interesting difference between these two is not in the d rho squared term, but it's in the time term. Omega is a kind of time. It's related to time through this equation over here. And on the one hand, near the black hole, there's a factor rho squared. When rho is very, very small, it means that time runs very, very slowly. Near the horizon of the black hole, there is very little proper time in any appreciable d omega. But far from the black hole, this is what the formula says. Okay, this is curious. What's going on? Think about this. What kind of geometry? It's a little hard to envision. In fact, it's quite hard to envision. But as you move away from the horizon of the black hole, this here thing goes to this thing. Let me give you, I'll try to give you the best picture I can of what this geometry is like. Minus. Yeah, minus. All right, let's come over to here, to this picture over here. It's much easier to understand space than it is space-time. So let's come back to this picture and now imagine a geometry which has the following property. Near the origin, it looks like rho squared d theta squared plus d rho squared. But far from the origin, it looks like instead of rho squared, let's just say 16 m squared, g squared, d theta squared plus d rho squared. Very similar to what's going on over here, except we're not dealing with space-time, we're dealing with just ordinary space. What kind of geometry is this? So let me show you what it looks like. Can you guess what it looks like? This is, I'll show you. Near the origin, it looks like polar coordinates. Let me draw this plane on its edge like this. Here's this plane drawn edge on. As I go around theta, I just go around here, near the origin, it looks like the plane. But now as I move away from the origin, what happens, look at it, instead of having this rho squared here, we just get a constant squared. The meaning of that is that the distance around theta, between two neighboring values of theta, instead of depending on rho, just becomes constant, becomes 16 m squared, g squared. I'll show you what it looks like. It looks like a cigar. It looks like a geometry that looks like this. Where the distance around the cigar at this end here is, well, theta goes from zero to two pi. The distance around the whole cigar in this direction, the cigar band, is 4 mg times 2 pi. Theta goes from zero to 2 pi as you go around. That's what's at this end. On the other hand, very near the origin over here, it really just does look like polar coordinates. Theta goes around, looks like polar coordinates, very close to the tip of the cigar, the cigar looks flat. Any geometry in the vicinity of a point looks pretty flat. But as you go out and as you move around this point here, close to the tip of the cigar, this is the metric. Far from the tip of the cigar, the distance around the theta axis does not depend on how far away you are. Same here. The distance in omega, when you're far away, doesn't depend on rho. When you're close in, the distance for a given omega does depend on rho. The main message here is that this is a curved space. It's a curved space with a curvature near the horizon. And how does that curvature depend on mg? Well, the distance around this cigar gets very, very big when mg gets big. 
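The space version of this, the cigar, can be summarized the same way; the two limiting forms below just restate what was described in words, and the closing remark about the circumference is an aside, not something pursued in the lecture at this point:

\[
ds^{2} \approx \rho^{2}\, d\theta^{2} + d\rho^{2} \quad \text{(near the tip)}, \qquad ds^{2} \approx (4MG)^{2}\, d\theta^{2} + d\rho^{2} \quad \text{(far from the tip)},
\]

so with theta running from 0 to 2 pi, the asymptotic circumference of the cigar is 2 pi times 4MG = 8 pi MG, numerically the inverse of the Hawking temperature quoted in the previous lecture when natural units are used.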
So when the mass gets big, the distance around the cigar gets big, and that means that the curvature at the tip of the cigar gets less. The bigger the mass is, the less curvature that you have at the tip of the cigar, the less curved it is; a small mass would look like this. Highly curved at this end. This is also true of the Schwarzschild black hole. Highly curved when the mass is small near the black hole tip, near the horizon. This point is the horizon. It maps or becomes this point over here. So you can get some feel for what this geometry is like by looking at its ordinary space version. But still, I think that's not completely satisfying. It just is what it is. It is what it is. Here's the geometry. The coefficient of the d omega squared varies from near the horizon where it's rho squared to far from the horizon where it just becomes a number, the number being 16m squared g squared. That's really all you really need to know about the black hole. What is this factor here? We look at given angular separations. The factor in front of the d omega squared tells us how much proper time there is from one omega, let's call this omega equals zero, and let's call this omega equals one. Near the horizon, very close to the horizon, where rho is very small, the proper time distance between omega equals zero and omega equals one is very small. So near the horizon, as time flows, as omega flows, very, very little proper time between neighboring values of time. Far from the black hole, neighboring values simply have times, time differences, which are just this number here, which is nothing but dt squared. Okay, so clocks, this is a statement about clocks. Clocks far away from the black hole behave exactly as you would expect. They read this time here. Clocks very close to the black hole read rho times omega. Omega is a kind of time variable, but it scales differently close to the black hole. That's what a black hole is like. Again, outside the black hole corresponds to rho greater than zero. Rho greater than zero is all of these curves here. Here's rho equals one, rho equals two, rho equals three, rho equals four. That's outside the black hole. Anybody who's stationed at rest outside the black hole, if you could, if you are at rest outside a black hole, and just let go, you'll slowly fall toward the black hole if you're far away. If you're very, very far away, it'll take a long time, but it doesn't take much effort to stay out of the black hole. You can orbit around the black hole. You can just have a little jet propulsion device that can keep you out of the black hole, and that's moving along one of these trajectories here, just staying a certain distance away from the black hole. On the other hand, once somebody passes through, we'll now call this the horizon, the event horizon. This is the bifurcate horizon. This is the event horizon. Once you pass the event horizon, you cannot send a message to the outside world. That's the character of the black hole. That's what makes it a black hole. Good. On that diagram, is the speed of light a straight line? Oh, speed of light, radial speed, radial light waves always move on 45 degree axes. In any drawing I ever draw, unless I tell you otherwise, and I may tell you otherwise sometimes, the speed of light is a 45 degree line, right? So a light ray falling in that way, a light ray trying to get out that way, but it won't get out. So light rays can fall in, but they can't fall out. What about the singularity?
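The statement about clocks, written out (same conventions; a reconstruction, not spoken): the proper time d tau that elapses per unit of omega is
\[
d\tau \approx \rho\, d\omega \quad (\text{near the horizon}), \qquad d\tau \approx 4MG\, d\omega = dt \quad (\text{far away}),
\]
or, for a clock held at rest at radius \(r\), \(d\tau = \sqrt{1 - 2MG/r}\; dt\), which goes to zero at the horizon and to one far from the black hole.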
Let's go back to the Schwarzschild metric, which I've erased off the blackboard someplace, and write it down again, and see if we can trace. Well, first of all, let's redraw this diagram. Just a question about the terminology you're using. You're saying when the light ray falls in, light ray shines into the black hole. Right, what do you mean? Because does the light ray stay at the horizon? We could write across the horizon of the black hole, it doesn't actually fall in. We'll see. Well, we can answer that right now. Here's Bob. Bob is writing on his trajectory. He's writing on his world line. Let's move him out a little bit. Okay, Bob is quite far from the black hole. Here are omega equals zero, omega equals one, omega equals two. What about the Schwarzschild time? The usual time that Bob measures on his wristwatch, he measures Schwarzschild time, the t. Well, t and omega are basically the same thing except for a factor of 4 mg. So this would be his time equals zero, this would be his time equals 4 mg, this would be his time equals 8 mg, and so forth. So writing along here, Bob is looking at his wristwatch and time is ticking off. There's an infinite number of ticks along his trajectory. He never falls into the black hole, why not? Because I tell you, he doesn't fall into the black hole. He stays outside, he's orbiting it. So he stays at a fixed proper distance, that means on a fixed hyperboloid here. And he looks back. Let's draw in Alice. Now Alice doesn't have to be Alice, Alice could be a proton, it could be a photon, it could be anything falling or moving in this direction. Let's say a photon, you asked me about a photon. Here's a photon, that photon moves 45 degrees and it sure looks to me like it passes the horizon. What about Bob? What does Bob say? So Bob's wristwatch reads off time. Time infinity is way, way out there. So he looks back from wherever he is, let's get another color onto this problem. Red. Bob looks back and he sees the photon over here. Why do I say he looks back? He looks back and he sees the light coming to him from, light doesn't usually come from, yeah, strictly speaking Bob can't see this photon, the photon's going the other way, so let's not make it a photon, let's make it Alice, okay, it's Alice. Alice is falling through. She's waving her hand as she falls through. And Bob looks at her, here's her waves. Let's make them closer together. She's waving pretty rapidly along her world line. She's waving, hi Bob. Bob is looking back and from this point he sees Alice over here. We all see each other slightly in the past. Takes a light time to get to us. So Bob sees Alice over here. Next, after another interval of time he sees Alice over here. She's waved a certain number of times. Bob sees Alice wave a few more times. Bob keeps looking back. And according to him, each one of these measurements, each one of these observations is separated, let's say, by one second. Each one is separated by one second along this trajectory here. But look what happens. He sees fewer and fewer of Alice's waves per unit time. Eventually as he gets way out there, a huge amount of time of his time transpires, but he only sees one wave. And then as he goes further and further into the distant future along his accelerated trajectory, he sees the last wave right over here, but he never sees Alice cross the horizon. He cannot see Alice cross the horizon. 
So from his point of view, Alice waves slower and slower and slower, and eventually just comes to a frozen halt where she appears to be frozen on just outside the horizon. How close to the horizon? Well, let's take that point over here. That corresponds to some hyperbola, and that hyperbola tells you how far she is from the horizon at that point. So that's Alice. From Bob's perspective, he's going to see Alice in the exact same place, right? He sees Alice. She's frozen in the exact same place. So that means there's some information that's being contained. This is a classical picture of the black hole. There's not accounting for the quantum mechanics of the black hole. So from a completely... look, what would Bob actually see? Imagine Alice, you see her waving, getting slower and slower. She's sending out light waves. It's the light waves that Bob sees. How does she send out light waves? She sends out light waves by taking electric charges and wiggling them, right? It's our atoms that do the wiggling, of course. Bob sees Alice's wiggles of her charges slow down. What does that say about the light that Bob sees? At first, it's optical waves. Then it becomes infrared. Eventually, it becomes long, wavelength radio waves. Bob doesn't see Alice anymore. But maybe his long wavelength radio can pick up a few of her waves. But then her wavelengths get so long, or the frequency gets slow, so low, that Bob just doesn't see anything. So really strictly speaking, Alice just fades away. She just fades away from Bob's point of view. Her radiation goes from being optical radiation to being thousands of miles long infrared, but radio waves, and even longer. So Alice fades away. How about Alice? What does she see? You might think maybe that means that Alice sees Bob speed up. So let's look at what Alice sees. A quick question before you get to it. If I'm flowing onto the horizon, I slow down from your point of view. I don't get squashed this way. I get squashed this way. It's a form of Lorentz contraction, but yes, that's what it sees. How about Alice? What does Alice see? Alice looks back. Let's draw, let me draw a little cleaner Alice here, and now in green. This is Alice's clock. Alice's clock doesn't do anything very special. She looks back and sees Bob. And she sees Bob. And she sees Bob. She sees, even after she passes the horizon, she sees Bob. What she sees is Bob accelerate away from her. Bob doesn't disappear from her perspective when she crosses the horizon. Not at all. What does happen is she sees him accelerating away from her. So that's what happens to Alice's view of things. She doesn't see Bob speed up. Bob sees her slow down, but she doesn't see Bob do anything special when she passes the horizon. Bob looks perfectly normal. She can send him, no, sorry, he can send her signals. He can send her signals from, yeah, but she can't send, he can send her signals. That goes this way. He's sending her signals, but she cannot send him signals. So it's a kind of one-way barrier. The horizon is a one-way barrier. Is his light blue shifted? No, actually not. Well, I mean, as seen by her? As seen by her. No. As a matter of fact, since he's accelerating away from her, her light is redshifted. His light is redshifted. He's more and more Doppler shifted toward the red. No. His light is redshifted. She sees him redshifted. He sees her redshifted until she just fades away. Okay, but it's completely asymmetric. All right, that's the picture of Alice and Bob falling into the black hole. Yeah, question. Yeah. 
Does this apply to gravitons? Yeah. Yeah. So Alice's gravitons are never seen by Bob. Right. And for colleagues. Right. And all the infinite number of colleagues. Okay. Gravitons are just particles like photons. Much harder to see, but much harder to detect, but still, conceptually, they're just photons. But once Alice falls in, the mass increases, so the radius increases. Once Alice falls in, why? Once Alice falls in the black hole. Yes. Like, if you increase the mass of the black hole, so the radius does increase a bit. We'll come to that. We'll come to the increase of the mass of the black hole. Yeah. We'll come to that. If the gravitons can't escape, I would think that just like the dark hole, black hole not only dark, it would be gravitation free. But that's what happens. No. No. You don't feel gravitational fields because they're a really genuine graviton, gravitons coming at you from the Earth. I thought that was the force, the field that carried the gravitational forces. Virtual gravitons, which have no trouble exceeding the speed of light. Sorry. Virtual gravitons have no trouble exceeding the speed of light. What's true is, I've been standing here a long time. You can feel my gravitational field. What in a sense you're feeling is the gravitational field that was created sometime in the past relative to you. Now I move suddenly. Constraints on the speed of light of gravitons tell you that you cannot see me move or you won't feel me move. Your gravitational field that I exert on you will not change until light gets a chance to get to you. That's all it says. Doesn't that say that light in the gravitational field moves at the same speed? Everything I just said is also true of electromagnetic fields. If I were highly charged and you were also highly charged, we might either repel each other or attract each other. You're supposed to laugh at that. That would be a steady state situation. But if I suddenly moved, you would only detect my field changing after light got to you. Light speed, and light speed. But you just said virtual gravitons go at infinite speed. I don't know the distinction between virtual gravitons and what you've just described, I guess. If it's not your main, then that's fine. It's not your main. Virtual gravitons you can just think of as the units of gravitational field which make up the static gravitational field of a stationary object. They don't have to escape from the black hole. They're there. They're there. They're there. They're there. Let's talk about this puzzle of what you see when you watch your feet fall through the black hole as you're falling. Everybody always gets confused about that. What happens when I see my feet pass through the black hole horizon? How can I see my feet pass through the black hole horizon? Let's set up that problem. Alice is falling into the black hole. Here's a feet. Her feet fall through first. She's falling feet first. Here's Alice's feet. I'm not going to try to draw feet. Yes, I will. Here's her feet. I can't draw feet. Okay, here's her feet. That's the world line of her left foot. The world line of her head is over here. It follows her in, falls in later. So here's Alice's feet. Here's Alice's head. Alice is looking at her feet. So at this point over here, she certainly sees her feet. But she sees her feet over here. She doesn't see her feet at the instant when she looks at them. She sees her feet slightly in the past. Alice sees her feet from here. There's Alice's foot. Alice sees her feet from here. 
Alice sees her feet from here. And she sees them from here. Yeah, right. She looks back and sees the light coming to her from her feet. Here's the light coming to her from her feet along a 45, my 45 degree angles are not particularly accurate, but they're supposed to be. So light comes from here. Alice sees it. She's wiggling her toes, and her toes are electrically charged so they're sending light. She sees light here. She sees light here. She sees light here. There's no point at which she doesn't see her feet. Of course, she doesn't see her feet from this point at the same time. From this point over here, she doesn't see her foot at that same instant of time. She sees her feet from slightly in the past. So there's nothing special happens when she passes through the horizon. Okay, now suppose that her head stays out of the black hole in her feet, which have already passed through too late. A foot has passed through, but she decides at the last minute, no, I don't want to go there. Then what happens? Well, that's bad. That's not good. Let's see what happens. Alice's head now, whoops, now takes off and separates from her feet. There's no way that she can avoid her feet being torn off by the acceleration. So to keep out of the black hole, she has to accelerate away from her feet. Now, her feet might get pulled. Okay, her feet might get pulled, but they can't get pulled fast enough to keep up with her. It looks like they're keeping up with her, but they're not. And the reason is the following. As Bob moves faster and faster in, Bob is accelerating away from the black hole, he's not really accelerating, he's standing still. But when you stand still in a gravitational field, it's the same as accelerating. So Bob is accelerating away. Think about this now. Think Bob accelerating away, and the distance to Alice, or Bob and Alice, this is now Bob and Alice. Alice is dragging behind Bob at a fixed distance in your reference frame. What is it like in Bob's reference frame? If the distance to Alice in your reference frame at rest is fixed, and Bob is going faster and faster, what does Bob say about Alice? Well, she's not so much going the opposite way as she's just getting separated farther and farther because of Lorentz's contraction. So Bob says, oh my goodness, Alice is getting farther and farther away from me, even though the observers at rest say that Alice is a fixed distance away. She can't get closer because she can't get past this light cone. She can't get up to the speed. She can't go faster than the speed of light. So eventually from Bob's point of view here, Alice just drags further and further and further behind. And if it happens not to be Alice and Bob, but happens to be Alice's head and Alice's feet, then Alice's feet are just stretched away from her until bad things happen, until they break. So once Alice's feet are behind the horizon, she has two alternatives. Fall in with them, no problem, nothing unusual, or stay out of the black hole, and she will be defeated. What? Defeated, that was good. Now so far, we haven't discussed anything about the singularity of the black hole. So let's start to talk about the singularity of the black hole. Where is it on this chart here? Where is it on this diagram? Let's bring up the diagram again. Let's get rid of the head and the foot. And let's come back to the Schwarzschild metric. Let's rewrite the Schwarzschild metric. d squared times 1 minus 2 mg over r, with a minus sign, and with a plus sign, 1 over 1 minus 2 mg over r, the r squared. 
And then there's another term, r for the angular coordinates. Let's suppress that, not worry about it. The term which has the negative sign is always time. You always recognize time, or the time-like variables, by a minus sign in front of their metric. You recognize the space-like variables, ordinary space, by a positive sign in the metric. But something happens when r crosses 2 mg. When r is greater than 2 mg, this is positive. 2 mg over r is small, 1 minus a small number is positive. But when r gets smaller than 2 mg, suddenly the sign changes here. The sign also changes here. It means that somehow r is becoming a time-like variable, and t is becoming a space-like variable. This does not mean that anything dramatic, it doesn't mean space turns into time behind the black hole horizon. This is a feature of a peculiar set of coordinates. When you go past the horizon, the coordinates interchange. What's happening here is something like this. If you were to move in along a surface of constant time toward the horizon, you would get to the horizon, you would, r would be decreasing, r decreasing, decreasing, decreasing, until it gets to 2 mg. And then where does it go from there? Then suddenly when r gets past the horizon, it becomes a time-like coordinate. r goes up that way. What about if omega is a little bigger, or t is a little bigger? So we come in also to the horizon. This is omega equals zero. This is omega equals one, one unit of time later. We also come in and we hit this point here. And then these coordinates, these Schwarzschild coordinates, which are peculiar coordinates, suddenly make another turn and go off here. So they're funny coordinates. There's nothing special happening over here. It's just a choice of coordinates that you've used where on one side of the horizon, r is a space-like variable. On the other side of the horizon, it's a time-like variable. And vice versa. Time turns into space, space turns into time. But nothing really is happening at this point. It's just an ordinary generic point of space-time. Okay, now let's go to where the singularity lies. The singularity is at r equals zero. That's a nasty place. There's something very dramatic happening there. Tidal forces become infinite, curvature becomes infinite. All hell breaks loose at the singularity. That's r equals zero. Well, as we start coming in, we go to r equals 2mg. That's not zero. Then we continue along the r-axis. Think of it as the r-axis. But the r-axis makes a right-angle turn until we get to r equals zero. And that's over here. This is r equals zero. We do the same thing on another curve. And we come to r equals zero over here. If we connect all the points that correspond to r equals zero together, it looks like this. It looks like a hyperbola. It is a hyperbola. That hyperbola is the singularity. Let's draw it having nasty, wicked points because it's highly curved and very unpleasant. You sort of thought originally that the singularity was a place. A place means a place is a vertical world line. But it's not a place. It's a time. Moving along here, the singularity occurs after a certain amount of time. Not really a place. It's a kind of time. There's the singularity. That's the nasty place. Okay, let's think about now poor Alice who has fallen behind the horizon here. She's over here. Where does she go next? She wants to escape the terrible fate of falling into the singularity. She can try to get out or she can try to go that way. Either way, she cannot escape. 
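Spelling out the signs being discussed (a reconstruction, same conventions, angular part suppressed):
\[
ds^2 = -\left(1-\frac{2MG}{r}\right)dt^2 + \left(1-\frac{2MG}{r}\right)^{-1}dr^2.
\]
For \(r > 2MG\) the coefficient of \(dt^2\) is negative and that of \(dr^2\) is positive, so \(t\) is time-like and \(r\) space-like; for \(r < 2MG\) both signs flip and the roles interchange, while the genuine curvature singularity sits at \(r = 0\).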
There's no way that she can, without exceeding the speed of light, escape the fate of falling into the singularity. So that's another property. Once you pass the horizon, that's it. You have no chance but to fall into the singularity. Not only can't you communicate to the outside, but you're doomed. You're doomed by the singularity. And that's one more feature of the black hole. Now, the mathematics of this metric. Now we come to a real curiosity, but it's a mathematical artifact, which we're going to find doesn't really mean anything physical, but still is a mathematical artifact. This geometry here is the same if you turn time to minus time. In fact, these coefficients here don't depend on time altogether. And what happens to dt squared if we take time to minus time? It stays the same. This metric here is what a physicist would call time reversal invariant. It looks the same going forward as going backward. Well, that's a little bit peculiar, because you can see over here that sort of forward in time, there's a singularity. What it implies is that there must be a singularity also in the past. And if you study this metric in its mathematical detail, you'll find out that what it really represents is a time symmetric situation with a singularity in the past and a singularity in the future. Now Alice doesn't have to worry about the singularity in the past. There's no way she can get back to the past. You can't get back to the past. She has to worry about the singularity in the future. She's going to fall into the singularity over here. But let's just think of Alice's history for a moment. Incidentally, the meaning of saying that something is time reversal invariant is the same as saying that a particular history, if you were to run it backward, if you were watching the events taking place and you were to run them backward, meaning to say you would take a film of them and run the film backward, that the backward film would also be a possible history which makes sense as a solution of the equations of the theory. That's what it means for something to be time reversible. The equations of general relativity are completely time reversible, so anything that can happen forward can also happen backward. So let's think about what that means. Here's a history. Alice jumps into the black hole and winds up on the singularity there. As she passes the horizon, Bob loses track of her, can't see her. Well, if that's a possible history, then we should be able to fold it over and say there's another possible solution of the equations in which Alice is created at the singularity and jumps out of the black hole. Well, there's Bob looking back at the black hole. Oh, she has no trouble going this way. No trouble at all. She can't cross out from here? Well, the time reverse of that has nothing to do with her ability to get out of this. This is called a past horizon. Alice has no trouble at all getting out of the past horizon. It's simply the time reversal of the statement that Alice had no trouble falling in through the horizon. The time reverse of passing in is passing out. So if she can pass into the horizon in the future, she can pass out of the past horizon in the past. What does Bob see? Bob looks back and sees something silly. He sees Alice ejected out of this black hole. That's silly, of course, right? I mean, that's nonsense. So there's something wrong with this picture. We're going to find out what it is. No, no, no, no, no, no.
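The symmetry invoked here is simply that nothing in the metric depends on t, and dt squared is even in t (a one-line check, same conventions):
\[
t \to -t: \qquad dt^2 \to (-dt)^2 = dt^2, \qquad g_{tt}(r),\; g_{rr}(r)\ \text{unchanged},
\]
so the Schwarzschild geometry is time reversal invariant, which is why the full mathematical solution contains a past singularity as well as a future one.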
Here's Bob over here. He looks back. He actually seems to be able to see the singularity. That's in itself a little bit annoying. He looks back and he sees the singularity. And then he suddenly sees Alice jump out from the singularity. What does that call? Anybody know what that's called? This nonsensical configuration? Fiction. Yes, it is. Well, the first thing is, yes, it is possible. Is it likely? No, it's unlikely. This is called a white hole. A white hole. This is what a white hole is. Now, white holes don't exist. They don't really make sense. Doesn't Bob also have a history going in reverse at this point? No, Bob is out here. He's just doing what he ordinarily does. This is not, this is fictional. This is not something that makes a lot of sense. It's about as likely as, well, what's a good example? You evaporate water from a puddle into space. Instead, you see water materializing from space. This can happen, candidate. It's called just a compensation. I better be careful. All right. A bomb explodes. A bomb explodes and sends garbage all over the place. That's a possible history of, let's say, Newton's equations for a bunch of particles where you start with a bomb. One of these old-fashioned bombs that Bolsheviks carried around with a fuse. My grandfather would have had, I'm sure. You wait a little while, boom, and stuff gets scattered all over the place. You'd be pretty damn surprised to see a bunch of stuff flying inward, reassembling the bomb and doing it very delicately so that eventually it just had a little bit of a flame at the end of it. That doesn't happen, right? It's not going to happen. But wait a minute. It's a possible solution of the equations of Newton's theory. Newton's theory is completely time reversible. So what's wrong? Both are possible, but we know very well that one of them is not possible. Okay? What the... just to throw words doesn't help. Yeah, it's entropy and it's... One of them is clearly extremely unlikely. The other is rather likely. They're both possible. With the inverse process, where stuff comes in and creates the bomb, all you would have to do is move one molecule, a tiny, tiny fraction of molecular distance in the incident state, to completely undo the ability to create the bomb. It's extremely unstable, the backward trajectory. It's sort of like... imagine I have a very high mountain that comes to a point. It's very easy to roll a ball off the top, starting at rest. Just give it that tiniest little kick. Where will it go? It'll fall down the mountain. The probability that it will fall down the mountain is extremely high. The reverse trajectory is possible, throwing the ball up to and just getting up to the top. It's extremely unlikely, the tiniest little bit of error, and you'll wind up missing the top. Another example, everybody here play pool when they were kids, billiards, you know? Yeah, of course you did. You start with all the billiard balls, all the balls racked up in a triangle, perfect triangle. How many balls are there? 15. Right. And you take a cue ball, you give it a good hard shot, and they go scattering all over the place. High probability that they'll just scatter all over the place. It would be rather amazing to see 15 billiard balls, even if we didn't worry about friction on the table, it would be rather amazing to see 15 billiard balls come together and form a perfect triangle, with the cue ball going out that way. Pretty unlikely, right? But both of them are solutions of the same equations. 
So both are possible from the point of view of the equations, but the difference is that if you started those 15, 16 balls in a configuration which was just perfect so that they reassembled into the triangle, and then the tiniest little error, you would miss the triangle completely and all you would make would be a table full of billiard balls going in random directions. Yes, but also is it just as unlikely that you would have precisely the same thing going the other way every time you did it? Yes. Each of these is equally unlikely. Yes, that is correct. But if you were just to say, I'm watching this thing and what I see is billiard balls scattered all over the place. Yeah, if you did it over and over and over again, you would get more or less the same pattern every time, a billiard ball scattered all over the place, and unless you were following the billiard balls carefully, you wouldn't notice there was particular differences. But you would notice if the billiard balls came together and formed a triangle. So yes, you're right. I mean, any given, I think your point is any given configuration is very unlikely. That's where entropy comes in. But what Bob is seeing here, and we're ultimately going to understand this in terms of the entropy of the black hole, but what Bob is seeing here, what we now understand is he's seeing one of these very, very unlikely processes, not an impossible process, but unlikely where a black hole just sitting there suddenly spurts out a full-fledged version of Alice, very unlikely, but not unlikely that Alice falls into the black hole. So it really is very, very much the same thing. We're going to find out that this is one of the aspects of black holes that has to do with their entropy. We discovered black holes have entropy. They're in thermal equilibrium. They have thermal properties. You could say the same thing about a pot of hot water. What's the likelihood that if you drop an ice cube into a pot of hot water that it will form a melted configuration with the water, getting a little more water in it and filling up the pot a little more, very high, right? What's the probability to sit and wait and watch that pot that an ice cube will get ejected out of it? But both of them satisfy the same equations, both of quantum mechanics and classical mechanics. So in that sense, they are both possible. The same thing going on here. The horizon of the black hole has entropy, just like the pot of water, hidden information. Very rarely, very rarely, will the entropy of the black hole of the pot of water assemble itself into an ice cube. Same thing here. Very rarely will the garbage on the horizon of the black hole assemble itself into ejecting Alice. Is that probability all of the quantum mechanical probability? It could be both. It's a combination of quantum mechanical and thermal. Thermal fluctuations and quantum mechanical fluctuations. Both are there. But in the case of the hot water, of course, it's mostly thermal. Okay. Okay. Let's take a five minute break. And then we're going to talk about Penrose diagrams and we're going to talk about actually making black holes. How you actually make a black hole. And why half of this diagram, namely the bottom half, is completely unphysical. It's not something you ever really have to worry about. The bottom half, which is the peculiar half of the white hole. Michael asked me a question and I realized when he asked the question that I was using a bit of jargon. So let me just clarify the jargon. 
I used, I told you that rho was proper distance from the horizon. The jargon there was proper distance. Let me just remind you what proper distance means. It is a piece of jargon. It's similar to proper time. What is proper time along a observer's trajectory measure? It measures wristwatch time. Right. But it's wristwatch time along a curve. Right. All right. Now we have over here Bob. And let's take Bob right at time equals zero. He draws the straightest path he can to the horizon. In this case, a straight line. What does it mean to say that this is a given proper distance away from the horizon? Yeah. Yeah. It's the distance measured by real yard sticks or meter sticks in this room. We don't allow yard sticks in this room. We allow meter sticks. Distance measured with honest meter sticks. That's all. That's all it means. That to say, now, little r, where is it? Little r up there, I think I've erased it. Little r over here does not measure proper distance. It's just a mathematical coordinate which is not proper distance. It doesn't measure real meter sticks, the distance. Rho is the thing that measures real meter sticks. So I apologize for jargon, but sometimes I forget. OK. Now let's talk about Penrose diagrams. Penrose diagrams are great. They really are the way to understand all kinds of things in general relativity, and they're easy. They're easy to think about, easy to draw. A little bit difficult to actually work out in detail, but we don't need to work them out in detail. Let's draw a space time on the blackboard. In particular, let's draw just ordinary flat space time on the blackboard. Well, we don't have enough dimensions on the blackboard to draw four dimensional space time. So let's pick out two coordinates. One will be time, and the other will be distance from the origin. Now this is ordinary flat space time, so it could be t and rho. Time runs vertically upward. T. And rho is just distance from the origin here, so it runs sort of this way. Here's the rho axis. We can make a grid out of this by chopping up the time axis into intervals and then extrapolating them horizontally. And we can make a vertical grid. So the first point here is rho equals zero incidentally, nothing to the left, because by definition rho is always positive. It's the proper distance from the sort of polar coordinate kind of coordinate. So it's always positive. And here it is. Here's all of space time. This is rho equals zero along here. Rho equals one, rho equals two, rho equals three. Here's t equals zero, t equals one, t equals minus one. Time can get negative in this picture. And this is the history of the whole universe. How about light waves? Let's take a light wave that comes in from the remote past, comes in, hits the origin. What happens after it hits the origin? It just goes back out. It's not really bouncing off anything. It's just here's the origin. My fist is the origin. It just goes right through the origin. But in polar coordinates, it comes in and goes out. So there's a light wave that comes in and goes out. A later light wave might come in later and bounce out and so forth. Again, that's the history of the world on a small blackboard. But it's not the history of the whole world. The blackboard isn't big enough to describe the whole world. The whole world meaning all infinite distances out to infinity, and all infinite times from minus infinity to plus infinity. So what relativists do is they make coordinate transformations. That's what relativity is all about. 
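For reference, the proper distance rho from the horizon, measured with meter sticks along a surface of constant t, is not written out in the lecture, but in these conventions it is
\[
\rho(r) = \int_{2MG}^{r} \frac{dr'}{\sqrt{1 - 2MG/r'}} \;\approx\; 2\sqrt{2MG\,(r - 2MG)} \quad \text{near the horizon},
\]
which is why \(\rho\), and not the coordinate \(r\), is the honest meter-stick distance from the horizon.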
It's about coordinate transformations. But in particular, coordinate transformations which take the whole expanse of space and time and simply squash it by coordinate transformation onto a finite region of the blackboard. What you get is called a Penrose diagram. There's a rule. And the rule is that however you draw the Penrose diagram, basically only two rules. The first thing is use coordinates that gets the whole thing onto the finite blackboard. Squish it and squash it until you get the whole thing on the blackboard. But subject to the rule that light rays, radially going light rays always move at 45 degree angles. This can be done for a very large variety of geometries that you can squish it down, squish it down in such a way that the light rays continue to move at 45 degrees. The result is called a Penrose diagram. And don't worry about the mathematics of how you derive the particular Penrose diagram. Just learn to read them because they're easy to read. Here's what it looks like. Here's what the Penrose diagram of flat space looks like. There's the time axis, but it doesn't go from minus infinity to plus infinity. Well, it does, but we draw the whole thing between some point in the past and some point in the future. Just a whole time axis. The space axis also goes out to some point. That's really r equals infinity or rho equals infinity, but we squash it in. And the rest of it is just a triangle in here. There it is. All right, let's see if we can take this grid and re-plot it on here. Start over here and work your way outward. That's t equals 0 and work your way outward. Here's rho equals 1. Here's rho equals 2. Rho equals 3. But we've got to crowd an infinite number of them into here. So they're going to crowd up like that. Likewise, let's take time. Time 1. And they're going to crowd up as we go up here and likewise down here. Let's try drawing this grid. Time-like lines, vertical time-like lines, all go up to time infinity. That's up here. That's time infinity up here. So all these vertical lines look like this. More and more and more of them out here. What about the horizontal lines? They all go out to rho equals infinity. That's all over here. It's all been squished or smushed into a point over there. It looks like this. Curved. Okay, what about light rays? In particular, light rays that are aimed from far away toward the origin. Light rays come in from, first of all, they move on 45-degree lines. And the incoming light ray would look like that. That would be the image in this diagram of the light ray which came in like this. It bounces off the origin and goes back out. You could have earlier light rays that came in earlier and bounced back earlier. All light rays, all incoming light rays, originate on this side of the triangle. Now that's really way, way out at infinity, way down here. All outgoing light rays pass out of the diagram up here. That's just asymptotically far away in that direction. What about all time-like lines? Time-like lines are these observers, for example, these people who are standing still. They might be moving a little bit. They all wind up up here. Where do they come from? They come from here. If you walk out, you can't walk out, but if you imagined moving outward along one of these timelines here, to larger and larger row, you would eventually get over to here. This diagram and that diagram are the same diagram. 
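One standard choice of squishing coordinates that accomplishes this for flat space-time is the following (not spelled out in the lecture; included as a sketch):
\[
U = \arctan(t - \rho), \qquad V = \arctan(t + \rho), \qquad -\tfrac{\pi}{2} < U \le V < \tfrac{\pi}{2}.
\]
All of space-time is mapped into a finite triangle, the metric is only multiplied by an overall position-dependent (conformal) factor, and radial light rays, which are lines of constant \(U\) or constant \(V\), remain 45 degree lines.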
This one has just been schmushed or squashed or whatever your favorite word is, so that it can all be drawn on the blackboard, but subject to the constraint that the speed of light stays. The light rays move at 45-degree angles. This is called the Penrose diagram. This is the simplest Penrose diagram, and that's all it is. What else is there to say about this particular Penrose? Yeah. And those light lines squish to meet the corners of each of those squares. As it goes through, is that light line curved, or is it always a point of line? As it passes through each of those squares, is it supposed to be passing through the corners of each one or is it always drew so it would slightly curve as it goes through the graph? I suppose it would be if we drew it accurately. If we drew it accurately, it would be, yeah, this is true, since it's passing through the corners here. May all end up at 90 degrees to that outside boundary? So, it's not a light ray that was pointed not right at the origin, but missed the origin. How would you draw that? We could come back here first and draw it here. You still have to be 45 degrees, so just intersect the higher, lower point, wouldn't it? No. It misses the origin, it never gets to the origin. So it comes in when it's very, very far away, it looks like it's aimed straight toward the origin, so it looks, it follows this route here. But it doesn't quite get to the origin. It comes into a distance of closest approach and then goes back out. So in fact, it would look like this. It gets to a distance of closest approach and then goes back out. But of course, there's angles involved. It doesn't come out in the same direction that it came in in, but on this diagram, all right, it's the same thing here. A light ray that misses the origin just simply misses it, it never quite gets to it, it gets to a distance of closest approach over here and then goes back out. But then it's not going at 45 degrees? Yeah, I said radial light rays are drawn at 45 degrees. Radial light rays, ones which are aimed radially, right, ones which are not radial, you have to worry about especially. You have to take that into account. Okay, these different places, these different extreme limits of the Penrose diagrams have names. Of course, this is just a time axis. This one up here is called future time-like infinity. And it's usually just called T plus. Or T equals infinity. How about just T equals infinity? This one down here is called T equals minus infinity. Time-like, past infinity, time-like, future infinity. What would you call this one? Space-like infinity. So this is rho equals infinity over here. What about these? Got a name for those? They're the places where light rays come in from. They're called light-like infinity. This one's called past light-like infinity. This one's called future light-like infinity. And for reasons that I have no idea, this was originally labeled I minus, minus for past and I for the future. It was always labeled with a script I and known as scry, scry minus and scry plus. That's not my notation, so I don't have to apologize for it. Good. So that's the Penrose diagram of flat spacetime. What about a black hole? What does a black hole look like? So to get to a black hole, let's redraw the diagram that we had earlier. Try to see what it would look like if we did some smushing and squishing to squeeze it onto the blackboard. Here's what we saw. We saw way out here. That was space-like infinity, that's way out large distances from the black hole. 
Here are our time coordinates like that. Here was the singularity over here. There was also a past singularity over here. And we never discussed what was on this side. I hesitated to tell you what was down here, but I eventually did. What's on this side? Nothing of physical interest, but nevertheless the mathematics of the Schwarzschild Metric contains all of this. It contains an outside world over here and another outside world over here. This outside world is not the same as this outside world over here. You can't get a message from here to here, but the mathematics of Schwarzschild has both of these. We're going to find that half of this is really quite meaningless, but for the moment this is what we found the metric look like. This is what the geometry look like. So let's ask what happens if we squish this? We're going to do the same kind of squeezing to get everything onto the finite plane. I'm just going to draw it for you. I'll show you what it looks like. I think you'll recognize it pretty fast. I need some guidelines to do it with. I'll show you how to read it in a minute. Let's start out here at r equals infinity. That's like r equals infinity or rho equals infinity. That's like way out here. Where do you think it ought to go? Way out here. This horizon over here, the bifurcate horizon, is right at the center over here. If I start at the horizon and go out to infinity, I can go this way, but I can also go along this line. This is omega equals zero. This is omega equals one. Where is that on here? Notice in either case you wind up sort of in the same place. Same place sort of. You wind up going out to r equals infinity. r equals infinity is all been smushed down to here. So where do you go? You go out to here. Likewise from here. That's the sequence of different time. Now let's look at different radial distances from the black hole. Different radial distances, I'm going to change color. Different radial distances, all, these are observers if you like, who are stationary outside the black hole. They go up to time infinity over here and they look like this. So this guy is 200 miles from the black hole. I don't know. This one's 500 miles from the black hole. That's 1,000 miles from the black hole. That's 10,000 miles from the black hole and so forth. This one over here is hovering right above the horizon. In fact, because he's trying to hover above the horizon, he must be accelerated. To hover above the horizon, take some rockets that are going to keep him from falling through. So that's him over here. The other side is a perfect reflection of this side. And this diamond here, that's the outside of the black hole. That's the outside of the black hole. That's where Bob lives. He lives out here and if he doesn't want to cross the horizon, where's the horizon? That's this. Well, this is the bifurcate horizon. This is sort of the point of no return over here. What is this? That's the singularity. That's the image of the singularity over here. So here we go. Here's the singularity. What's this one? That's the past singularity. That was down there. Past singularity. Future singularity, past singularity. This is either r or rho equals infinity. Either way. How about light rays which come in? They come in from here? But now they don't bounce off the origin. What happens to them if they're radially inward? What happens? Do they bounce off and go back out? No. They fall into the black hole. They pass through here and when do they wind up? The singularity. 
There are light rays which go out to infinity. Of course, they could have been created by somebody here. One of these people here could have had a flashlight and send a light ray out. But there are also mathematical light rays which begin on the past singularity and get radiated out. This is called t equals infinity. And it's the analog of the point up here. This is called t equals minus infinity. This is called r equals infinity. This is called past light like infinity. This is called future light like infinity. And that's it. That's the black hole. That's the Schwarzschild black hole in a nutshell. Drawn all of space-time, drawn on a finite piece of a plane. Radially moving light rays always 45 degrees. From this picture, you can see a lot of what the properties of black holes. Well, we've talked about them already. We've talked about what happens when Alice falls in, when Bob stays outside. Here's Bob. Here's Alice, except here Alice is a photon, but if Alice was moving more slowly. But same thing would happen. And once you cross into this triangle here, you're doomed. You have no choice but to go to the singularity. Now, what is this region out here? That seems like a second world. It looks like the black hole, this is the source of a lot of science fiction. It looks like there are two worlds connected together by a black hole. A wormhole. Right. A wormhole. However, it's not possible to go from this world to this world, because even a light ray, once it passed this horizon, would wind up over here. So they're completely out of communication with each other. Nobody on this side can signal anybody on this side. So from the point of view of anybody in this diamond here, this doesn't exist. Well, that's a little bit glib, isn't it, to say it just doesn't exist. In fact, the real story is a little bit different. The real story has to do with the way black holes are made in nature. This diagram doesn't tell you how this black hole was made. This black hole was sort of always there. From the remote past to the remote future, the Schwarzschild metric was there. So it's a black hole which is eternal. Black holes are not eternal. They're made. They're made by collapsing stars. So the next question that we want to understand is how is a black hole made? Now, a black hole before there's a black hole, it's not completely empty space. Stuff was coming in. We can imagine. Let's take an imaginary scenario. Some particles come in from very far away. They might even be photons. When enough photons come in to a region of space, photons have energy. Therefore, they have mass. Enough photons and mass, and therefore, they gravitate. If you can get enough photon energy into a region of space, you can make a black hole. So we can start with, not with nothing, but with just a very diffuse cloud of photons coming in from far away. That's practically empty space. Looks a lot like empty space. Where is the diagram that represents empty space? That's here. That's empty space. No black hole. Just plain empty space. Maybe with some photons coming in. After the photons come in, if they make a black hole, then later what we should see is the black hole. The question is, how do you put these together? How do you put this together in the past with this in the future to represent the assembling of a black hole? So we want to go through that. We want to spend a little bit of time assembling a black hole. To do so, I have to tell you a theorem. 
I think the theorem we'll do for tonight and we'll go through the construction next time. But it's called, in Newtonian physics, it's called Newton's theorem. In general relativity, it's called Birkhoff's theorem. And it's a very simple fact about Newtonian gravity or about general relativity. It's easy to understand. Hard to prove. Let's begin with Newton's theorem. Newton proved this. It's a theorem about 1 over r squared forces. 1 over r squared forces such as the Coulomb force and electromagnetism or the Newtonian force of gravity, 1 over r squared forces can be described by lines of force. I'll just remind you about lines of force. Here's your mass. Let's say it's a point mass. And imagine lines of force coming out of it. The lines of force could also be flow lines of a fluid coming out of a source at the origin. It could be two-dimensional with fluid spreading out over two dimensions. Or it could be a three-dimensional source over here where fluid flows away from the source. And you draw lines of force. The lines of force begin on the charged particle or the massive object. And just go out and they don't end. They don't end, of course, unless there's another mass someplace or another charge someplace. But let's say there's just one mass. That's the, and what's the rule for the either electric field or the gravitational field? The rule is at any point the gravitational field, oh, I think I will draw the gravitational lines pointing inward. Why is that? Yeah, because gravity attracts. So it's pulling on things. So the rule is at any point the force on a test object points along the direction of the line of force that passes through that point. And what about the magnitude of the force, the magnitude of the gravitational field? It's proportional to the density of the lines. If you think of if space is being filled up with a network of lines, the closer they are together, the stronger the force. So lines of force, it's an imaginary mathematical construction. You fill up space with these lines of force or lines of flow of a fluid. That is everything that you can say about the Newtonian field, gravitational field, or a point object. If you put two point objects together, then you just add the fields, which is exactly the same as saying I have two sources of fluid. In this case, I guess this would be a sink of fluid because the fluid is flowing in. You have two sinks of fluid. The fluid is being pulled in, and so you have two sources, and you simply sum the field of both of them, which means you study the fluid flow of the two sources, ducts opposed. If you work out that, you figure out the field of an arbitrary distribution of masses. Now, the Newton theorem is the following. It says, if I take a spherically symmetric distribution of mass, and I'm going to take a very simple case, I'm going to take a shell. The theorem is more general than I'm going to say, but I'm going to take a shell, a shell, a thin shell, spherically symmetric of mass. Here it is. You can think of every point on that shell as producing fluid. If you look on the inside of the shell, there is no flow. There could be. Which way would the flow flow if you were inside the shell? Well, you might say it flows radially. By symmetry, it would flow radially. But if there was flow inside the shell, where would it go? There's no mass at the center. I'm assuming all of the masses on the shell, there's no place for the fluid to go. If there's no mass at the center, there can't be any radial flow, there can't be any flow at all. 
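In formulas, Newton's theorem for a thin spherical shell of total mass M and radius R says (a sketch; G is Newton's constant):
\[
g(r) = 0 \quad (r < R), \qquad g(r) = \frac{GM}{r^2}\ \text{pointing inward} \quad (r > R),
\]
exactly as if all of the mass sat at the center; the density of the lines of force falls off like \(1/r^2\), which is the picture being drawn.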
So the interior of a shell of sources of fluid has no flow at all in it, which is another way of saying there's no gravitational field at all inside a spherical shell like this. What about the outside? The outside has a flow which is radially outward. The lines of force look exactly as if they had emanated from the point at the center. And the total amount of flow is whatever the total amount of flow coming out of the shell is. In the language of gravitation, what it says is the gravitational field of a shell of mass m inside the shell is absolutely identical to a gravitational field of a point mass at the center. But inside the shell is no gravitational field at all. So inside the shell, exactly the same as empty space. Outside the shell, exactly the same as the point mass. There's a general relativity version of this. Incidentally, let me say something else. This is true even if the shell is time dependent. For example, as long as the total amount of fluid coming out of it is conserved, if the shell were to change size in a way that did not change the total amount of fluid coming out of it with time, it would still be true at any given instant the gravitational field inside the shell would be that of empty space. And the gravitational field outside would look exactly like a point mass. It would just be the place where you go from outside to inside, which would change with time. So in particular, if you had a time dependent shell of mass which was contracting with the mass staying fixed, then outside it, you would see the point mass gravitational field inside, you would see nothing. Only the boundary between the two regions would change with time. The same exact thing is true in general relativity, almost the same thing. Close, not quite, but close. So let me tell you what is called Birkhoff's theorem. And then the next time we will combine Birkhoff's theorem with these two diagrams and put together the theory of how a black hole is formed. So get Birkhoff's theorem, remember it. What does it say? It says if you have a spherically symmetric distribution of mass, what do you see inside the spherically symmetric distribution of mass? Exactly what you would see if there was no mass at all. Flat space time, empty space, flat, minkowski, whatever you want to call it. Just the solution of Einstein's theory with absolutely nothing there, just good old empty space inside. What about on the outside, what do you see? You see a Newtonian gravitational field? No, not quite. What do you think you see on the outside? It's a spherical mass, but what's the gravitational field on the outside? Schwarzschild. In other words, the metric on the inside is exactly that of flat space and the metric on the outside is exactly that of the Schwarzschild black hole. We won't, well we can write it. On the inside we have good old the t squared minus the x squared, the y squared, the z squared. On the outside we have Schwarzschild, I'm not going to write it. I think I could have written the metric faster than I could have written Schwarzschild. Right. In other words, we piece together two things, both of which we know very well, connected together by a shell. How about the mass of the Schwarzschild solution? Schwarzschild solution has a mass associated with it. It's just the mass of the total shell. Does the mass of the shell change with time, even if it moves, even if the shell contracts? Does the total mass of the shell change with time? Why not? 
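Written out, the patching that Birkhoff's theorem allows is (a reconstruction of what is being described, same conventions):
\[
ds^2 = -dt^2 + dx^2 + dy^2 + dz^2 \quad (\text{inside the shell}),
\]
\[
ds^2 = -\left(1-\frac{2MG}{r}\right)dt^2 + \left(1-\frac{2MG}{r}\right)^{-1}dr^2 + r^2\, d\Omega^2 \quad (\text{outside}),
\]
with \(M\) the total mass of the shell; as the shell moves, only the location of the boundary between the two regions changes.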
Well, supposing I tell you, supposing I just mutter the words e equals mc squared. Conservation of energy. Right. The mass, the total mass of that shell, even if the shell, and can the shell change? Of course the shell can change. You build yourself a shell, but the shell isn't strong enough to maintain itself against the gravitational field. What happens? It starts to collapse. As it collapses, the mass doesn't change, but the radius of it changes. What was inside was just good old flat space. What was outside was Schwarzschild, and the boundary between the two behaviors might change with time. So what we're going to do next time is we're going to take the flat space solution. That's the inside. And patch it together with the outside, which is the Schwarzschild solution. I'll show you how to do that. And in that way we will learn, in a very simple example, how a black hole is formed, we'll find that half of this diagram doesn't mean anything, but we'll come to it next time. So formation of a black hole next time. Okay. For more, please visit us at stanford.edu.
(January 31, 2011) Leonard Susskind gives a lecture on string theory and particle physics that focuses on the geometry of a black hole near the horizon. He describes how standard concepts from quantum physics can explain the physics that occur at this point. In the last of course of this series, Leonard Susskind continues his exploration of string theory that attempts to reconcile quantum mechanics and general relativity. In particular, the course focuses on string theory with regard to important issues in contemporary physics.
10.5446/15109 (DOI)
Stanford University. One thing that I'm confused about is vacuum energy. I know what you said, when you get to the Planck scale it explodes. My question in particular, does electron-positron pair production and annihilation actually occur, or is that excluded? Actually occur means what? That you can actually see electrons and positrons coming out of the vacuum? Let's say the measured vacuum density, if that would be a fair measure. Electron-positron virtual pair production. Virtual pairs do contribute to the vacuum energy. That doesn't mean that you should see electron-positron pairs popping out of nowhere. It does not mean that. It means they pop out and go back in a very short time, subject to the uncertainty principle. Quantum fluctuation, and not real, genuine particles being produced. It's much like the zero point energy of a harmonic oscillator. Classically, the harmonic oscillator just sits still at the bottom of its potential well. Quantum mechanically, it cannot both sit still and be at the bottom of the potential. There's competition, and it's resolved by a little bit of quantum fluctuation. That quantum fluctuation has energy. How much is it? A half h-bar omega. It's always there. Can't get rid of it. It's the least energy that the oscillator can have. And of course, that doesn't mean looking at the oscillator. Most quantum oscillators are too small to see. It is the ground state. And as the ground state, it means it's the state in which nothing is happening to the extent that nothing can happen. Same is true with electron-positron pairs. Same is true with virtual photons. And in fact, it's a closely related phenomenon. The electromagnetic field, for example, in a cavity, the electromagnetic field in a cavity is just a collection of harmonic oscillators. Each oscillator has a little bit of zero point energy. Add them all up. You call that the vacuum energy. I was saving that for a little bit later because it plays an important role in cosmology. Are you also going to talk again about the 120 orders of magnitude? No, I think I won't spend a lot of time on that. I'm not going to mention it, but no, I thought tonight, in any case, we'd mainly concentrate on cosmic horizons. What a horizon is, why there are horizons, and a little bit about the cosmology, the equations of cosmology, how they lead to an accelerated expanding universe and why the accelerated expanding universe has event horizons. I thought that would be a good thing to do tonight. Since we spent a lot of time learning about what a horizon is, we're not quite finished learning what a horizon is. No four minutes. You mentioned last week, I guess, the ultraviolet and infrared relationship, and you said you were going to be getting to that tonight? Well, we can talk about it. Maybe we'll start with that. Maybe. It's not really on my list of things to do tonight. Maybe we'll get to it, though, if I run out of things. Okay. Wait three minutes and sit and think. That's a question. It relates to the question back there about vacuum energy. If there's a vacuum, there is a positive energy density. Wouldn't that have some kind of refractive effect on it? Refractive. Optically refractive effect. What is refractive? Refractive, remind me, which is refraction, which is reflection, with respect to the speed of light. The speed of light, for example, would be ever so slightly less in the quantum vacuum than in free space. Yeah. The vacuum energy, it's a good question. It's a good question. 
Most energy that you think about, if you have that energy present, it creates a situation where the world is not Lorentz invariant. I don't mean that the theory is not Lorentz invariant. I mean that the configuration of the world is not Lorentz invariant. What does it mean? What exactly does that mean? Let's first talk about something simpler. Translation invariance. Every place is like every other place, right? No, it's not true. Stanford University is not like Berkeley. The Earth is not like the Sun, and it's certainly not like interstellar space. So what do we mean when we say the world has translation invariance? We don't mean that the configuration of the world, every place, looks the same. We mean if we took the whole thing and moved it, it would look the same. But moving it means moving us, moving Berkeley, moving everything else. Now, in the same way, you talk about Lorentz invariance, is the world Lorentz invariant? No, I'll tell you why in a minute. Are the equations of physics Lorentz invariant? Yes. So what does it mean for the world not to be Lorentz invariant? Well, it means that for whatever reason, the stuff in the universe does not look the same from every frame of reference. If I were to whiz by you at a thousand miles a second, you would look different to me than if I was standing still. So stuff in the world, stuff in the world, breaks the symmetries that we talk about. They create special locations. They create special directions. The equations of physics are supposed to be rotationally invariant. That means the same, no matter how we rotate. True enough, the equations of physics are rotationally invariant, but is this room rotationally invariant? Certainly not. If I look that way, I see you. If I look that way, I see something else. So the presence of stuff breaks symmetries, and the presence of stuff breaks Lorentz invariance. The world is not Lorentz invariant by virtue of the fact that there's stuff in it. Now, most of the time the stuff, energy and so forth, as I said, is not Lorentz invariant, and the result of that is in particular a result on the motion of light waves. Light waves moving through materials have a different velocity than they have moving through empty space. The rule about the speed of light is that light always travels with the same velocity in empty space, but that's another way of saying that empty space is Lorentz invariant. Empty space is Lorentz invariant. Empty space is translation invariant, it's rotation invariant. Empty space has no preferred frames of reference. And the statement that light always travels with the same velocity is a statement about a Lorentz invariant world where there is no preferred frame of reference. When there's a preferred frame of reference due to the presence of material, light can travel at any velocity lower than the speed of light, than the usual speed of light. Now, vacuum energy is very special. Vacuum energy is the one exceptional situation where the presence of that energy is Lorentz invariant. It does not pick out a reference frame. If you weren't here and the wall wasn't here and I was in absolutely empty space, nevertheless, there would be vacuum energy in this room. Vacuum energy has a certain value, whatever its numerical value is, a certain number of joules per cubic meter, very, very tiny, but nevertheless, finite. And so you might think, well, that's like a material being here, it's like a stuff being here, and a stuff being here would define a frame of reference where the stuff would be at rest. 
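The compact way to say why vacuum energy escapes this, if you want a formula (standard conventions, with c = 1 and metric signature (−,+,+,+)): its stress tensor is proportional to the metric itself,

T^{\text{vac}}_{\mu\nu} = -\,\rho_{\text{vac}}\, g_{\mu\nu},

so every inertial observer assigns it the same energy density \rho_{\text{vac}} and a pressure p = -\rho_{\text{vac}}. There is no rest frame hiding in it, unlike a gas of particles, whose stress tensor singles out the frame in which the gas is at rest.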
Vacuum energy is very special that way. It does not define a frame of reference. I could detect the vacuum energy with a vacuum energy detector. I won't tell you how to build one. I'll tell you how to build one in a little while. We'll talk about how you build a vacuum energy detector. I could detect the vacuum energy and see how much is here. Or we get some number. I would then go whizzing by as fast as I can go and do the same experiment. If it was ordinary material, I would discover that the energy changes, the energy that material changes with my velocity. In particular, if I whizz past a gas, then of course the gas looks higher energy. Why? Because all the molecules appear to be moving past me. Vacuum energy is the one special case where my vacuum energy detector will give the same answer no matter how I'm moving. So vacuum energy is the special case where it's present, does not pick out a special frame of reference, does not violate Lorentz invariance, and it's a special case where light will move with the speed of light independently of the observer. Now moving with the speed of light means something. It means when the light ray passes you, passes right in front of your nose, and you have your clocks and your detectors locally, right near you, you measure the same velocity of light. Because of the expansion of the universe and so forth, it becomes a more problematic question of whether light moves relative to us far, far away when the light is far away from us. Does it move with the speed of light? And we're going to talk about that. The answer is no, not in general. It moves relative to local observers where that light is passing by them with the same speed of light, but it doesn't necessarily move relative to us with the speed of light. Faster or slower? In this discussion, is vacuum energy synonymous with cosmological? Yes, absolutely. All right, so let's begin there. Why don't we begin there? If we take the universe and fill it with full of particles, then let's say ordinary particles, protons, fill it up. I don't mean fill it up so that every proton is against every other proton. I just mean create some uniform density of protons. And of course, the universe is something like that. It's not so different. There's a more or less uniform energy density throughout all of space. It tends to be clustered into stars and stuff like that, but it's still bigger than superclusters of galaxies. It's homogeneously distributed, rather diffuse, something if I remember, something like about 50 protons per cubic meter on the average. And there's some energy density. Let's give that energy density a name. Standard terminology for a density is rho. And now that could stand for the mass density, but it's not being the same thing. All right, so that's the mass density. That's consistent with a bunch of particles. Now what happens, and it's the number of particles per unit volume, or the number of particles times the mass of each particle divided by a volume of space. Take a volume of space. The density of energy is the mass of each particle, let's suppose there's only one kind of particle. Particles are standing still, and not photons. Let's not get involved with photons right now. They're just protons standing still, protons, neutrons, electrons, atoms. The mass of an individual particle times the number of particles in a volume divided by the volume, that's the density. Now what happens to that density as the universe begins to expand? 
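As a formula, the answer he's about to give: if every distance gets stretched by the overall expansion factor a(t) (the scale factor he introduces shortly), the physical volume of a comoving box grows like a(t)^3, so for N particles of mass m,

\rho_{\text{matter}}(t) = \frac{N m}{V_0\, a(t)^3} \;\propto\; \frac{1}{a(t)^3},

with V_0 the box's comoving volume, while the vacuum energy density \rho_{\text{vac}} keeps the same value no matter what a(t) does. (For radiation the redshift costs one more factor, \rho \propto 1/a^4, which comes up a bit later.)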
Well, the number of particles doesn't change, at least not if they're ordinary particles. The number of particles doesn't change, their mass doesn't change. What does change is the volume of a region of space occupied by those particles. So we take some particles that started out in a region of space, a square on a side, a hundred light years on a side, a hundred light years, a hundred light years the other way too. It has a volume of a million cubic light years, which is some number in cubic meters. That's the volume that goes in here. The number of particles stays the same, and now the universe expands, everything expands with it on sufficiently big scales. The volume of space, let's say over some period of time, doubles. When the volume of space doubles, what? I'm sorry, I meant the radius of, I didn't mean the volume. When the radius of this cube, the linear dimension of the cube, doubles, what happens to the density? Well, when the radius doubles, the volume gets multiplied by eight, and the density becomes one-eighth of what it started. OK? So ordinary material has the behavior, and this is true for photons, for anything, not the exact formula is a little bit different for photons, but the general rule is that as the universe expands, the energy density dilutes. It decreases. The density of energy decreases just because everything separates. Again, vacuum energy is the exception. Vacuum energy is the exception. Vacuum energy has the property, vacuum energy density. Vacuum energy density has the property that as the universe expands, as the volume of space expands, the amount of, the density of energy does not change. The density, not the amount in the box, but the amount per cubic meter, always stays the same. If the universe were to expand by a factor of a hundred in every direction, the volume of the region of space would go up by a million. Nevertheless, the density of vacuum energy in this room would stay exactly the same. So it's a property of empty space, which just doesn't change as time goes on. It doesn't change as the observer moves past it. It doesn't change when the observer rotates. It's not clustered. Uniformly filling space picks out no direction, picks out no Lorentz frame, and picks out no special time in the sense that as the universe expands, it doesn't change. That's, energy of that type is called vacuum energy. If it exists at all, and it does seem to exist. What do we care? Why should we care if the universe were filled with something that we call vacuum energy? We do experiments, and the only thing we're interested in this room, if we're doing experiments in this room, is differences of energy. The zero point of energy really usually doesn't matter for anything. The thing which usually counts is differences of energy. What's the energy difference between an excited atom and an unexcited atom? What's the difference of having of the energy with or without the atom? The energy of an atom counts, but that's because we can compare it to the energy of a configuration without the atom. But just empty space, who cares? We could just put as much energy into an empty space as we like as long as that energy cannot turn into any other kind of energy. As long as it doesn't come into the questions of differences of energy, why should we care if there's something called vacuum energy in the room? Well, the answer is what I said is we don't care about the zero of energy until gravity becomes important. 
Once gravity becomes important, what's the source of the gravitational field? The source of the gravitational field is mass. Mass is energy. Vacuum energy gravitates. It has a gravitational effect. It has an effect on spacetime. And it has an effect on spacetime, which is different than if it weren't there. So let's talk about, well, the first question is, what's the origin of vacuum energy? And as I said, or somebody asked me earlier, vacuum energy is, it wasn't quite asked to me in these terms, but vacuum energy is a feature of quantum field theory. Quantum field theory, quantum electrodynamics, quantum, the standard model of particle physics, all quantum field theories have vacuum energy. It's just a zero point energy of the oscillating fields, if you like. There's a contribution to it from the electromagnetic field. There's a contribution to it from the electron-positron field. There's a contribution to it from quark fields, even if there are no quarks present. The point is, you don't need to have anything really present to have that energy. What does it do to? It's due to virtual creation and annihilation. Virtual, remember, means that it happens so fast that a little bit of energy that is created and annihilated satisfies the uncertainty principle. If you create some particles out of nothing, then they can only last for a very short time because of the energy time uncertainty principle. Let's leave it at that. But the net result is that there is energy density called rho. And it fills up space. It's everywhere. It's Lorentz invariant. It doesn't matter how you're moving and so forth. Vacuum energy. Let's call it rho vacuum. Now, what does it do? It has an effect on the right-hand side of Einstein's equations. Let me remind you what Einstein's equations are. The details of them we don't need to know. I will write down as much detail as you need to know, which is very little. But let me just remind you what Einstein's equations say. On the left-hand side of Einstein's equations is a tensor called g mu nu. g stands for gravity, I think. It's a tensor. And it's made up out of the metric components of space, the metric of space. It's made up out of the metric tensor, the metric tensor is little g mu nu of x. It's on the left-hand side. It's got to do with curvature. It's got to do with derivatives of the metric, Christoffel symbols, all sorts of awful stuff. And by the time you write it down, it takes up what g mu nu is. It takes up a fair piece of space. But it's got to do with, oh, I said it has to do with gravity. I'm not sure it has to do with gravity. I think maybe it has to do with geometry, the g. It's the geometric side of Einstein's equations. It says some property of the geometry of space-time, this is space-time geometry, not the geometry of space alone. And on the right-hand side, in other words, it's the gravitational field. Gravitational field, geometric field, however you want to think about it. Einstein's field equations are that this on the left-hand side is equal to something, a numerical constant, but I'm not interested in numerical constant. It's 8 pi over 3, but not important. On the right-hand side is the source of the gravitational field. And the source of the gravitational field is energy and momentum. Energy, also known as mass, and the motion of masses, which is also known as momentum. So on the right-hand side, there's another tensor. It's called T mu nu. I don't know why it's called T. 
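Schematically, the field equation he's describing is

G_{\mu\nu} = 8\pi G\, T_{\mu\nu} \qquad (c = 1),

with the left side built out of the metric g_{\mu\nu} and its derivatives, i.e. curvature, and the right side containing the energy and momentum of everything other than gravity itself, including, if it's there, the vacuum piece T^{\text{vac}}_{\mu\nu} = -\rho_{\text{vac}} g_{\mu\nu}. The 8\pi G/3 combination he mentions is what this constant turns into once the equation is specialized to cosmology below.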
But it's a thing which is made up out of particles, out of ordinary fields, not the gravitational field, the electric field, the magnetic field. And if you know what this is, you know what the ordinary energy density is, the density of particles and so forth. That's Einstein's field equation. And on the right-hand side is all of various kinds of energy, both those which dilute particles, ordinary stuff, light, the stuff which dilutes when the universe expands, but also the stuff which doesn't dilute, the vacuum energy. So that's the reason we have to know about vacuum energy, to know how the geometry of spacetime behaves. If there's only vacuum energy here, I'll tell you what, I'm going to write these equations, I'm going to write them out in full, but for one very, very special case. I told you that the left-hand side is made up out of the metric tensor. And let's take a very, very special case for what geometry, for what spacetime geometry is like. In some sense, it's the simplest geometry that's not just static space just sitting there. I think we actually wrote it out last time. It was the expanding, uniformly expanding space that I told you to think of as an infinite rubber sheet, where the infinite rubber sheet is being stretched so that the coordinate marks on it, we take a rubber sheet, we mark it with coordinate marks, x equals one, x equals two, x equals three, x equals four, and then we start to stretch it. The actual distance between neighboring points increases with time, and that's represented by writing a metric, the metric ds squared, that's the spacetime distance between pairs of points, neighboring pairs of points, just as it would be in ordinary Minkowski space, we start with a minus, this is pen, this pen seems weak, minus dT squared, that's all we would have in ordinary Minkowski space, meaning flat spacetime, and then what we would have in Minkowski space would be plus dx squared. Now, dx squared here stands for dx squared plus dy squared plus dz squared, which I won't bother writing, it stands for the ordinary Euclidean metric of ordinary space. There would be some speeds of light in here, if I kept the speed of light around but I'm not, but that's not the expanding spacetime, the expanding spacetime you introduce one more thing, a thing called a scale factor, A of t. A of t is a function of time, is not a function of space, and it tells you, oops, I'm sorry, this should be A squared of t, A squared of t. What it says is that if you take two points, let's say x equals zero and x equals one, what's the distance between them? Well, it's one unit. No, it's not one unit because x doesn't actually measure physical distance, it's this which measures physical distance, the actual distance between them at any instant of time is A. And as A changes with time, the distance between these marked points on the Rubber sheet world, the Rubber sheet world, x equals zero, x equals one, the marked points separate, so A grows with time. It doesn't matter if they're separated by one unit, two units, if we want to separate them by, let's say this is x equals seven, x equals 13, then the distance between them is delta x, that's 13 minus seven, which is four, thank you. 13 minus seven times A, A of t, four. 13 minus seven is A, is it four? No, 13 minus seven is six, thank you. Right, okay, that's the meaning of this metric. They don't have to be on the x-axis, they can be off the x-axis, but you get the idea. Okay, this is a non-trivial geometry. 
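Written out, the metric he has on the board is the spatially flat expanding-universe (FRW) form,

ds^2 = -\,dt^2 + a(t)^2\left(dx^2 + dy^2 + dz^2\right),

so two comoving points with coordinate separation \Delta x sit a proper distance D(t) = a(t)\,\Delta x apart. In his example, \Delta x = 13 - 7 = 6, so D = 6\,a(t).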
It has space-time curvature, unless A is constant, then it's just flat space, good old flat space. If A is varying with time doing things, maybe just expanding, that's enough to make the space-time not flat. Space may be flat, but space-time is a different story. So space-time, expanding space-time is a new kind of geometry that's good. Okay, now let's write down Einstein's equations. What do Einstein's equations have to do with? They have to do with how A varies with time. A is the only thing in this geometry that's not known. It's the only degree of freedom for this kind of geometry. For a geometry of this type, all of the geometry is coded in the properties of A of t, which means A is a function of time. So Einstein's field equations must be simply equations for how A varies with time, and they are, and here's what they say. They say that the A by dt squared is equal. I'm going to put in all the factors now. 8 pi over 3 times Newton's constant times the energy density of matter. The assumption here is that the energy density of matter is completely uniform in space. You could have such a thing if space was filled up with a uniform radiation bath all at the same temperature. You could have such a thing if the universe were filled up with atoms at a uniform density. So we don't have to write rho as a function of x because rho is not a function of x, but of course we do have to write rho as a function of time. So rho is some function of time, but what is rho as a function of time? This, if we, oh, I'm sorry, there's a 1 over A squared here. It's 1 over A squared. It's A dot over A squared. Do you remember from last time what A dot over A is? It's the Hubble constant, but let's leave it there. A dot over A squared, that's it. OK, now this equation also has another significance in general relativity. It's one which confuses people. I get zillions of emails. I get so many emails about this particular point that I have got, can't keep up with them, and I don't even try anymore. This equation is also an equation for energy conservation. It expresses energy conservation in general relativity. The right-hand side is all of the energy of ordinary stuff, but it does not include the energy of the gravitational field. The gravitational field, because it's varying with time, A of t, also has energy, and that energy is a kind of kinetic energy. It's a kinetic energy of motion or a kinetic energy of the time dependence of the metric of spacetime. What is that energy? That energy is simply the energy of the varying gravitation. It's called the varying scale factor. Energy of the varying scale factor, A is called the scale factor. One odd thing about it is that it's negative in general relativity. Minus A dot over A squared is probably a factor of 3 over 8 pi g. 3 over 8 pi g, I think. It's not the number that's important. It's important that in the formal structure, in the mathematical structure of general relativity, gravity has energy, and some piece of the gravitational energy is negative, and it corresponds to a negative kind of kinetic energy of the gravitational field. So with this equation, it can be written as minus this plus this is equal to zero. Minus this plus this is equal to zero. Well, zero is conserved. Zero doesn't change with time. Zero is zero. It doesn't change with time. So among other things, among the many things that this says, that energy doesn't change with time. It also says the total energy of the world is zero. 
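Collected into one line, the equation he's written and its "zero total energy" reading:

\left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho(t)
\qquad\Longleftrightarrow\qquad
-\,\frac{3}{8\pi G}\left(\frac{\dot a}{a}\right)^{2} + \rho(t) = 0,

the first term of the second form being exactly the negative "kinetic energy" of the scale factor he's talking about.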
But it includes this piece coming from expanding space time, which you're not used to counting. When we count the energy of this room, we don't normally think about the fact that the universe is expanding. We just ignore it, and we say the energy in the room is positive. Okay. Good. Let's put it back the way that it was. This equals this. Let's go a little further now. Let's ask what row of T is like. What the time dependence of the energy density in the world is like. So let's come back to this box. There's not a real box. I've taken a region of space and just drawn a mathematical box. The mathematical box has a side delta x equals one on each side. How much energy is in it? Well, it's the number of particles inside it, number of particles inside it, which is not going to change. As the box expands, the particles move with it, so that if this room was part of the expanding universe, there would be a lot of particles or a bunch of protons. The protons wouldn't expand, but the distance between them would expand so as to keep the proton density uniform. So there's n protons in there. Each one has mass m. That's how much total mass is in there. What's the volume of space? The volume of space is delta x times delta x times delta x. That's one. Times a cubed. Why a cubed? Because the actual length here is delta x times a. So the side of the box here, I've chosen a box whose side at any given time is a of t. And so to find the density, I have to divide it by a of t cubed. The only important thing here is that at any time, if I know the energy, this is the energy density, if I know the energy at any given time, then I know it at all times, not because it stays the same, but because it always scales like 1 over a of t cubed. So we can say that rho of t then is some number, we could call it rho zero, it's some particular instant of time divided by a of t cubed. So now we can come back to this formula over here. We can say, uh-huh, I now have an equation for a. It's Einstein's equation for an expanding universe. And what goes over here is some constant rho naught over a cubed. When we can solve this equation, maybe we should solve it. Maybe we should solve it. It's not a difficult equation. It's much easier to solve if we take all the constants, 8 pi g over 3 times rho naught, and absorb them into another constant. Let's just call that other constant, let's just call it rho naught. And thereby erase this. OK, how do we solve this equation here? Let's write out what it is. I bet you can solve it. I'll have you solve it. But first I'll manipulate it a little bit. This is the adt squared. There's an a squared in the denominator, but I'm going to multiply through and put it on the other side. So we multiply by a squared and we get rho naught divided by a, is that right? Right. And now I take the square root of both sides, rho naught, divided by a, I take the square root of both sides, square root of rho naught, a to the one-half, and I multiply so that it reads a to the one-half times dA equals square root of rho naught times dt. a to the one-half on this side times dA, or just we could write it this way. Can you solve that? Yep, say it again. Right, what we're doing is integrating both sides of this equation. What Michael suggested is to integrate both sides of this equation on the right-hand side we get square root of rho naught times time. The integral of dt is just t. And this one is what? Three-halves, a to the one-half, so let's integrate it. That's three-halves. 
No, two-thirds, two-thirds, a to the three-halves. Is that correct? And now we know a is a function of time. Forgetting all the numerical constants, two, three, rho naught, what it says is that a grows like t to the two-thirds power. The scale factor grows like t to the two-thirds power because the energy density has a particular behavior. That's cosmology. That's standard, absolutely standard cosmological expansion. A growing like t to the two-thirds. Let's just draw it. A growing, let's first draw a growing like t. There's a growing like t. A t. t to the two-thirds is smaller. It grows less rapidly than t itself. This large t is smaller, and it looks like this. Not quite a parabola, but it looks like that. It slows down. A uniform line like this would be an a, which grows linearly with time. It would be like a coordinate of a particle with linear motion. That's not what happens. What happens is it turns over and it appears to go slower and slower. The universe decelerates in this kind of cosmology. This is a decelerating cosmology. What is it that causes the universe to decelerate? Gravity is pulling everything. Yeah. Another way to think about it is stuff started out moving away from each other, and gravity just grabbed the hole of it and slowed it down. And eventually it will get slower and slower. OK, now let's ask what happens if we have a different kind of energy on the right-hand side. We could put in, just for fun, I'm not going to do it tonight, we could put in radiation energy, energy solely due to radiation as if there were no particles, only photons in the universe. That would correspond, you can work it out yourself, that would correspond to putting A to the fourth here. We don't have to go into that now, it's just a fact. You can solve the equations and find out how the universe would expand if it were radiation dominated. The particular version that I gave you is particle dominated. It's usually called matter dominated, matter as opposed to radiation where the energy density decreases like 1 over A cubed. OK, but what happens now if there is vacuum energy, if there really is vacuum energy? Vacuum energy has the property that it never changes under any circumstances, the density itself. The density itself does not change under any circumstances. We need to give it a name. We'll put back, for the moment, we'll put back the 8 pi G's and so forth. Theory tells us that it doesn't change. Quantum field theory tells us it doesn't change. It tells us that it's a sort of universal constant that can be calculated. Now, does that mean that the universe that we know with certainty that the universe is filled up with a kind of energy which doesn't dilute as the universe expands? Theory doesn't tell us that. What tells us that is experiment. And we'll talk about the experiment in a minute. We'll talk about the observation in a minute. Theory tells us only what to put on the right-hand side for various different hypotheses about what the energy density is. If the energy density is vacuum energy in the sense of quantum field theory, then it really is just constant. So that means what goes on on the right-hand side here does not vary with A. Does not vary as the universe expands. And let's just call it row vacuum. Let's call it the vacuum energy density. Row vacuum is 8 pi g over 3, sorry, A dot over A squared is 8 pi g over 3 times this constant value of the energy density. 
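To collect the little derivation in one place (absorbing the 8\pi G/3 into \rho_0, as he did):

\left(\frac{\dot a}{a}\right)^{2} = \frac{\rho_0}{a^{3}}
\;\Rightarrow\; \dot a = \sqrt{\rho_0}\, a^{-1/2}
\;\Rightarrow\; \int a^{1/2}\,da = \sqrt{\rho_0}\int dt
\;\Rightarrow\; \tfrac{2}{3}\,a^{3/2} = \sqrt{\rho_0}\,t
\;\Rightarrow\; a(t) \propto t^{2/3}.

For radiation, \rho \propto 1/a^{4}, and the same steps give a(t) \propto t^{1/2}. Either way the expansion decelerates; the vacuum case he has just turned to behaves completely differently.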
Now, first of all, if this is correct, then one thing it says is that the Hubble constant, A dot over A is the Hubble constant. This is also H squared. It tells us that H squared, or H also, 8 pi g over 3 times the vacuum energy, that this doesn't change with time. Remember what the Hubble constant is. The Hubble constant is the relationship between distance and velocity. Velocity is Hubble constant times distance. So it says for all time in this kind of world, and this kind of world for all time, the relationship between velocity and distance is unchanged and always the same numerical value. That incidentally would not be true in the other solution we wrote down. Normally, for other kinds of cosmology, the Hubble constant decreases with time. Here, it just stays constant. An implication is that the distance out to which you have to go in order to get to the speed of light, where stuff is moving away from you with the speed of light, that distance is always the same. It's given by C equals H times D, or equivalently C over H is equal to D. C is a constant. H is a constant. The distance out to the place where things are receding with the speed of light from us is fixed. It doesn't change with time. So that's one implication of what this means to say that there's a constant value here. But let's see now if we can now solve Einstein's equations for an expanding universe. Under the assumption that the only energy density now, the only energy density is vacuum energy density. So what do we have? We have A dot over A squared is equal to 8 pi g over 3 times rho, but that's just the number H squared. Let's just call it H squared. Now, I'm using H now not to mean definition. It's not definition now. Previously, A dot over A was the definition of the Hubble constant. Now, it's the numerical value of 8 pi g over 3 times rho v. It's a number. It's a number that we don't know how to calculate it. We don't know the theory of it. But it's a number. H squared is now a number. Everybody understand the difference between saying H squared is A dot over A as a definition and H squared is A dot over A squared as a numerical relationship. Here, H is not defined this way. It just is a number. It's measured? It is measured. It is measured. It's a measured number. And it's not expected. And there's evidence that it is not changing with time. Right? So it's a number. How do we solve this equation? Well, first of all, we take its square root A dot. We multiply it by A. And it reads A dot is equal to H times A. I think some fraction of you can solve this equation. Nobody can solve it? Right. A dot A is equal to E to the Ht. This is the equation that tells us that the growth rate of A is proportional to A itself, exponential growth with time. A thing which changes by an amount proportional to its own value exponentially grows. And the growth factor, the time factor, the time constant for this is H. E to the Ht is the way this universe is growing. Exponentially. You can put a constant in front of it. The constant doesn't matter much. So this is an exponentially growing universe. That's the result of vacuum energy. Vacuum energy really is important. And the way to detect it, well, I'll give you two ways to detect it. One is astronomical. Measure the Hubble constant. Measure the Hubble constant over a period of time and see whether it's staying constant. Or you can measure the scale factor A over a period of time and see whether it's expanding exponentially. There's another way that you can measure it in the laboratory. 
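The vacuum-only case, in one line (with H defined by H^2 \equiv 8\pi G\,\rho_{\text{vac}}/3):

\left(\frac{\dot a}{a}\right)^{2} = H^{2}
\;\Rightarrow\; \dot a = H a
\;\Rightarrow\; a(t) = a_0\, e^{H t},

so H really does stay constant, and the distance out to which recession reaches the speed of light, D = c/H, never changes. That unchanging exponential rate is what both the astronomical measurement and the laboratory thought experiment that follows are sensitive to.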
There's no way you're actually going to do this. But if you have, the laboratory now means way out and out of space, take two objects which are very light and therefore their gravitational between them is not very important. Start them out at rest. And if they're light enough so the gravity between them is unimportant, they will just expand with the expansion. And you can measure the rate at which the distance between them expands with time. Make sure there's nothing else present. Complete vacuum. So the only possibility is vacuum energy distorting space. And you would discover that these begin to accelerate and begin to expand, separate from each other exponentially. So that's how you would build a vacuum energy detector. Now of course the effect in the real world is so minuscule that you couldn't even begin to hope to detect this in the laboratory, but you can detect it cosmologically. How do you detect it cosmologically? A number of ways, but you say, well we have to check that H is constant with time. That's going to take a long, long time. But no, not really. When we look out at the distant universe, we see it at different times. And we can actually measure the Hubble constant at different times by looking at it at different length scales. Or we can look, what comes to the same thing is we can measure the expansion rate or we can measure how A varies with time. All right, so I'll tell you now how it varies with time. Oh, wait. Let's point something else out. The real world doesn't just have an H squared. It also has some other energy density. Let's add then the other energy density. It was called rho naught divided by A cubed, for example. When A is small, when the universe is small, when the scale factor is small, this term is going to be much bigger than this. Why is that? Because A is in the denominator. That's why. So early on in the real world, this is not very important. And this may be very important when A is small. In that case, we worked it out a few minutes ago, and we found out that A goes like T to the two thirds. It goes like this. But now as A grows, eventually this term will get small, whereas this one is not getting small. This one's not diluting. This one is diluting. So eventually the diluting term gets smaller than the non-diluting term. And then after that, you can basically forget it. Once this term gets smaller than this term, you can forget it. And how does A behave? A behaves like E to the HT. So what you expect in a world that has both kinds of energy is that early on the scale factor, or the radius of the universe, or the distance between distant galaxies, grows this way. But then at some time, when the vacuum energy becomes more important than the other forms of energy, it takes off exponentially, exponentially something like that. So when you saw that differential equation, you end up with a C in front of the integration constant in front of the indian. Yeah, there's an integration constant here. How do you know that that integration constant isn't really small? Well, you measure it. You measure it. You measure it. You measure it. And here's T. Here's A. And of course, when you measure A, you don't just measure A now, as I said. You measure A by looking out deeper and deeper into distant galaxies. You can measure the distance between different distant galaxies at earlier times. And so at one time, you can measure this whole curve up to some point. You can't measure what it's going to be in the future, but you can measure what it was in the past. 
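With both kinds of energy present, the right-hand side just adds (with the 8\pi G/3 restored), roughly:

\left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G}{3}\left(\frac{\rho_0}{a^{3}} + \rho_{\text{vac}}\right),

so while a is small the diluting 1/a^3 term dominates and a \propto t^{2/3}, and once a grows past roughly (\rho_0/\rho_{\text{vac}})^{1/3} the constant term takes over and a \propto e^{Ht}. It is this crossover-shaped curve that the observations trace out.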
So if this is now, we get to measure all of this curve. And it looks something like that. It looks something like that, approximately something like this. With this branch of it here being fit rather accurately to an exponential, I won't try to tell you how accurately. It's about a 5%. The proper figure of merit is about a 5%. It's not a deviation. It's exponential plus or minus 5%. OK, so that looks to most cosmologists, that looks fairly convincing that we live in a world with vacuum energy. Certainly, we don't know anything else that would have much chance of doing precisely this. And it's not a fact in the same sense that we know that the electron exists so that the fine structure constant is 1 over 137 point blah, blah, blah, blah. But by now, it's pretty much considered a fact of cosmology that we live in a world with vacuum energy and that we live in an exponentially expanding world. That's pretty remarkable in a way. A question? A good question. In some books, I've seen if you take our position right down and draw the slope down, now we can, if that slope intersects to the left of the origin, you can see back to the big bang as it goes up, there will be a time where you can't see past the big bang. Is that, in other words, we'll expand it away so far that I don't quite understand that pointer is it important? Well, I'm not sure it's an issue of expanding back to, tracing back to the big bang. Or to the cosmic background radiation. In the sense that the Hubble distance is right into the age of the universe. In the sense that the Hubble, yeah. Right, the Hubble, right, in this exponentially increasing world here, the Hubble scale is not related to the age of the universe. It's just a constant, right? But it's someone you, I don't know, there's a horizon. There's a horizon. We're going to talk about horizons. We're going to talk about event horizons. There's a lot of different kinds of horizons on the market. But the only ones which are deep and fundamental interest are event horizons. So we're going to talk about event horizons. So let's now just put in our back pocket this piece of knowledge that the world is exponentially expanding. That we can summarize everything that was on the blackboard. Oh, there is an interesting fact. If the world continues to exponentially expand like this, then all of the dilutable matter, the kind, the energy that dilutes with expansion, is just going to get more and more and more diluted. So in some number of billions of years, the material in the universe will stretch out so much that it will become of negligible density. And the only energy left over will be the vacuum energy. It will become a very dull world. But so be it. Does that mean that the galaxies which aren't expanding today because gravity doesn't? No, they won't expand. They won't expand. What will happen is the few galaxies which are close enough to us and which happen just by accident more or less to be falling toward us. There's one or two galaxies which are actually falling toward us. That's a sort of, when we say that everything uniformly expands, of course it uniformly expands plus or minus little bits of fluctuation. It happens that there's a little bit of fluctuation in our neighborhood which happened to cause the Andromeda galaxy to be somewhat moving toward us. So us and the Andromeda are not going to separate from each other. But if you go out a few more galaxies, they're certainly moving with the flow. Anything which is moving with the flow will eventually depart. 
It will just grow and separate and separate and separate. And after a few hundred billion years, all that will be left for us to see is us, our galaxy, and that's it. We will be living in a world where we'll be able to see out to enormously large distances but you won't see anything. Astronomers who were born at that time and don't have a good record of history will come to the amazing conclusion that they are alone in the universe. Yeah. Doesn't that say that you eventually exceed the gravitational attraction because our galaxy is gravitationally bound to a group? Yeah. And if the group expands then... No, no, no. If the group expands then we're not gravitationally bound. Right. So when you said all we could see is our galaxy. And the Andromeda. Which by that time probably will have crashed into us and formed some sort of single structure. Are each of the clusters gravitationally bound to each other also? Correct. Yeah. So... No, no, no, no, no. It depends on what you mean by a cluster. If you look at a few galaxies out, not very many, just a few, one or two more, they're moving away from us. Even though they form what's called a cluster. No, they're not gravitationally bound. No, they're not gravitationally bound. Gravitationally bound means that they won't separate. So the measured gravitationally bound depends on this expansion rate? Yeah, but not very much. But not hardly at all. Hardly at all. For nearby things, if they're gravitationally bound, can just mean whether... Just forget this expansion and just ask whether they're moving relative to each other with less than or greater than the escape velocity. That's what gravity... Now, it happens, there's a more or less accidental fact that the Andromeda happens to be moving toward us. A few galaxies out, they're moving away from us. And they will continue to move away from us. They're moving away from us with greater than the escape velocity. And they'll continue. Yeah, it's a big galaxy. The gravitational constant is a constant that never changes in time. But the outward force due to the vacuum energy is a function of time. It grows... It grows with distance. It grows linearly with distance. It grows with distance, but not with time. No, not with time. The acceptance of far as things separate with time. Two objects will separate with time. The distance will become larger. And for that reason, the repulsive force will increase. But the repulsive force does not increase the... OK, you have a small repulsive... Where is the repulsive force now? And over a period of billions of years, the repulsive force... It seems to me that equation is sort of cumulative. There's a bigger amount of repulsive force. That's only because things get further away from each other. But a galaxy won't get farther away from each other. If the galaxy... If two galaxies are at the same distance at a later time as they are now, the component of the little repulsion from the vacuum energy will be the same. What I was asking is that the net gravitational force is G minus the expansion force. And if the force, the tendency towards the band increases, it wouldn't be that effect be that G very gradually decreases. That doesn't make sense. What are you saying is that the force is a function of distance, not a function of... It's a function of distance, not a function of time. If you have two objects, if you have two objects at distance r between them at any time, doesn't matter what time, there are two components to the force. 
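Those two components he's about to spell out, written as a formula in the Newtonian approximation (a sketch of what a constant \rho_{\text{vac}} does in that limit, not something written on the board): the force on a mass m_2 at distance r from a mass m_1 is roughly

F(r) \simeq -\,\frac{G\,m_1 m_2}{r^{2}} \;+\; \frac{8\pi G\,\rho_{\text{vac}}}{3}\, m_2\, r.

The attraction falls like 1/r^2, the repulsion grows linearly with r, and neither coefficient changes with time; they balance only at r \sim (3 m_1/8\pi\rho_{\text{vac}})^{1/3}, which for galactic masses is a cosmologically large distance.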
One of them is G times M1, M2 divided by r squared. And then there's another force on each of them, there's another force which is proportional to... It has a G in it, but mainly it's proportional to and repulsive. This is minus meaning attractive, plus a repulsive force proportional to r itself to the distance between them. Coefficient doesn't change. Coefficient don't change. What does change is the distance between things. So as the distance grows, this will become less important, this will become more important. But you're going to ask, where does this become important relative to this? The answer is when the distance scales are pretty cosmological. You know, five billion light years or something like that. The crossover, the coefficient of this piece is so small that the crossover between the normal gravitational attraction and the bit of repulsion is on the billions of light years scale. So that's why you can't measure it by an laboratory. So it's not just a function of expanding space that is represented as a repulsive force. It's actually... It is the expanding space, but you can fake it by a force which is pushing everything else apart from each other. It's a slight fake, but it sort of works the right way. Yeah. Okay. We've now... Now, does that kind of... I think satisfies my curiosity here that in the early universe these galaxies were gravitational bound and the expansion has now taken them to where the vacuum expansion, this other factor, are. In the early universe they were not gravitation. Yes, it's crossed over. So the gravitational attraction field is small and below. When they get far enough apart, then the repulsion becomes more important than the attraction. Right. So in the early universe they were gravitational. Yeah, this was not important in the early universe. Right. That's another statement. It's two different statements to say that they're gravitationally bound and that the important force was the ordinary gravitational force. If we shoot a rocket off the surface of the Earth at one and a half times the escape velocity, it is not gravitationally bound. But still the most important force on it is the ordinary gravitational force. These are two different statements. One statement, gravitational bound, means it's going to come back. The other statement is this may be more important than this in the early universe than it was, but it doesn't mean that the galaxies were gravitationally bound. They were still moving apart from each other with greater than the escape velocity. For the most part, I mean, pretty much. The galaxies were shot out pretty much greater than the escape velocity between them. Roughly, very close to the escape velocity, very, very close to the escape velocity, extremely close to the escape velocity. And then they either turn around and fall back in or they accelerate away from each other, depending on whether there's this kind of new energy. Okay, let's take a little break. And when we come back, I will tell you more about the geometry of this kind of space and why it has horizons, what it means for it to have horizons. So now that we know that in a couple of hundred billion years, the universe is going to be described. And it's getting that way pretty quickly, actually. On the scale of tens of billions of years, we're getting there pretty fast. We're getting to a condition where the geometry of space-time is described by a metric. And the metric is e to the 2HT, that's a squared, times dx squared. 
And again, dx squared stands for dx squared plus dy squared plus dz squared, and this is the scale factor. This space-time, it's a space-time, it has a geometric property, it has a name. Anybody know what the name is? De Sitter space. De Sitter space. It's de Sitter space. It should be called de Sitter space-time, obviously, but it's called de Sitter space. And we'd like to examine its properties a little bit. Not because we're going to be sending rocket ships out there to explore it. That's going to take a while, but just for the intellectual value of understanding the space-time that we live in. That's pretty weird. It's pretty weird. Of course, it looks weird, but it's weirder than it looks. It's weird in that it has horizons. Before we get into horizons of de Sitter space, let's just talk for a moment about what horizons are in general. And I'll give, as the illustration, the horizon of a black hole. Remember, we worked out what a black hole formation, a black hole which is created by an infalling collection of material, what it looks like, and we drew a Penrose diagram. Remember the Penrose diagram? The Penrose diagram looked like that. Before we do that, we're going to use Penrose diagrams, because it's helpful. But let's first draw the Penrose diagram of ordinary flat space. There's ordinary flat space. There's points far away. It's spatial infinity. Here's time infinity. Light rays come in from here, and they go out from here, and all that sort of stuff. And that's just empty space. A black hole space-time looks like this. Oh, one other point. Imagine an object, a massive object, not a photon, a massive object in this space. How does a massive object move through space-time? Well, all massive objects wind up at the same place. If they were massless objects, photons, they'd go out the sides here, moving with the speed of light. Massive objects, they all go to this point up here. Now, that's not really a point. It's not really a point. It's a whole big space-time that's been smooshed onto the diagram to look small. Obviously, if we wanted to take all of space-time and squeeze it onto the blackboard, we're going to have to distort it pretty badly. And it's very distorted up here, in that there's an infinite amount of space and time up in this corner. Likewise here, likewise here, in fact, on all of the edges. And all of the massive objects eventually wind up at this corner. That's not to say that they get close together. It's just to say that this is a very big place up here. Okay. Now, let's think of the massive objects in the black hole space. In the black hole space, you can have one of two things happening. The massive object can fall into the horizon and hit the singularity. Let's avoid that. Let's avoid that. Let's talk about the objects which don't fall into the singularity. Where do they go? They go to here. They all do. Again, that doesn't mean that they literally get close together. This just means there's a lot of space time out there. All of these people wind up here. Let's draw the horizon. The horizon's over here. Okay. Now, let's ask what these people here can see as they look back into the past. They can see only those things which are inside their backward light cone. The backward light cone means take all the light rays which can get to this point over here. All the light rays come in from here and take that region. That's the region that this guy over here can be aware of. When he looks back, he can only see the things inside his light cone. Same for this guy over here. Same for this observer. 
They can see the things in their backward light cone. Now, how much can they see given arbitrary amounts of time? Given arbitrary amounts of time, they can look back from later and later times. And eventually, they can see all of this blue region here. What they cannot see is what's in here. A horizon is by definition the place which separates the region that these observers can see from where they can't see. Given enough time, you draw the backward light cone of a very future point along that observer's trajectory, and that is the region he can see. Everything on the other side is behind the horizon, and the horizon is the separation between them. Here, there's no horizon. An observer way up here gets back, looks back, and sees everything. Sees the entire space-time. No horizon. Here, there's a horizon. So, I just remind you of that to tell you what a horizon is. We're going to take this metric, and we're going to do a little bit of mathematics on it. It's not hard mathematics. It's pretty easy mathematics. It's not particularly abstract, but it helps us visualize the space-time. What we're going to do with it is we're going, I'm not going to try to take all of space and squeeze it onto the blackboard. I'm going to leave space infinite. But what I'm going to take do is take the time axis, and I'm going to smush it. I'm going to make a coordinate transformation of time so that I can put the entire time from minus infinity, well, not from minus infinity, minus infinity will be deep down in the floor, but plus infinity will be in some finite place here, because we're going to want to study what this world is like at late times, and I'd like to get the whole infinite future onto the blackboard. Not the whole infinite future of space, but the whole infinite future of time. So, we're going to make a coordinate transformation, and we're going to do this in a way that makes light rays move on 45-degree angles in this space. It's always good to do that. Always get the light rays moving at 45-degree angles, and it's easy to see what can communicate with what. All right, so here's the trick. We're going to change coordinates. We're going to introduce a new coordinate. I'm going to call it capital T, and it's a function of the old T, and it's going to have the property. Here's the property it's going to have. It's going to say such that dT squared, what appears here, is going to be equal to e to the 2HT, exactly what appears here, times d capital T squared. This is the definition of capital T. It's a new time coordinate, and it's related to the old time coordinate by a pretty simple equation. We're going to solve it. And then we're going to rewrite the metric with the new time in place of the old time, and the metric will have a nice appearance that will allow us to explore it very nicely. Okay, how do I solve for big T as a function of little t? The first thing I do is take the square root of both sides of the equation. dT is equal to e to the hT d capital T. And now divide by e to the hT. What do I get on the left-hand side if I divide by e to the hT? e to the minus hT, right? So I'll put an e to the minus hT here, and now we have dT. So all we have to do to find the relationship between big T and little t is to integrate. The integral on the right-hand side just gives us T. What about the left-hand side? Anybody can do that integral? Dime for anybody? One over h. One over h. Minus sign? Minus sign. e to the minus hT, right? Plus a constant, but we can fix the constant by just shifting T here. 
Right, there's a constant, but I can shift the constant. Suppose I put a constant. There's a constant. Now I just redefine T so that it's T plus C. It doesn't make any difference, right? Okay, now notice first of all capital T is negative. Let's just think about what happens to it. This one over h here is not so important. Let's just think about what happens to it. What is it like at very remote past times? Let's say when little t here is extremely negative, what happens to e to the minus hT? It becomes very big. If t is negative, and we have e to the minus, then this becomes e to a positive number, this gets very big, but there's a negative sign out here. So that means the remote past where little t is very, very negative is also the remote past from the point of view of big t. When little t gets very, very negative, so does big t. All right? So in both variables, the remote past is way down through the floor and the basement and even deeper. What about the remote future? What happens in the remote future to capital T? The remote future is when little t is very, very big. What happens to this when little t is very big? It goes to zero. So big t goes to zero in the remote future. Now we can draw it. The time axis, let's draw the time, the big t axis, the big t axis goes into the very remote past in the negative region, deep past. But where is the infinite future? The infinite future is right at capital T equals zero. Capital T equals zero. That also is little t equals infinity. So big t runs from minus infinity up to zero. That's one way of getting not the whole geometry on the blackboard, but getting the remote future onto the blackboard. That was a goal. Okay, let's see if we can rewrite the metric. Let's see if we can rewrite the metric in terms of big t instead of little t. So here we have, let's, little t squared. What's the relation between little dt here? Little dt, here it is. Little dt is equal to e to the big ht times big t, right? Good. I just multiplied by e to the ht to get it over on the side over here. And now we come to this metric over here. We have minus dt squared. So this is equal then to minus dt squared, but that's e to the 2ht, the big t squared. And then we have plus e to the 2ht dx squared. In other words, I've engineered it so that the e to the 2ht would come on the outside. It would be equal to e to the 2ht times the common Minkowski space metric. This would be the metric of ordinary flat spacetime minus dt squared plus dx squared plus dy squared plus dz squared, but it has this factor in front of it. What can we do with it? I want to re-express it in terms of capital T. So let's do that. Let's first multiply by h. e to the minus ht equals h times big t. All I did was multiply by h. And now let's square it. If I square it, the minus sign goes away and I get e to the minus 2ht. And on this side, I get h squared t squared. And finally, I take one over it so that e to the 2ht is equal to 1 over h squared t squared. Go through it, check the arithmetic. You'll find it works that way. And now we can write this metric just by substituting for e to the 2ht, 1 over h squared t squared. So let's put that there. In fact, let's write it over here. The s squared is equal to 1 over h squared t squared times minus dt squared plus dx squared. Now this is a long story, nice simple result, but the main point of it, the main point of it is that we've written the metric as an ordinary Minkowski space metric times a function with the same function appearing multiplying all of this. 
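The change of variables he just went through, gathered up:

dt = e^{Ht}\,dT
\;\Rightarrow\;
T = -\frac{1}{H}\,e^{-Ht},
\qquad
ds^{2} = e^{2Ht}\left(-dT^{2} + dx^{2}\right) = \frac{1}{H^{2}T^{2}}\left(-dT^{2} + dx^{2}\right),

with T running from -\infty in the deep past up to T = 0, which is t = +\infty. The prefactor 1/(H^2 T^2) blows up there, which is how an infinite amount of future time gets squeezed into a finite strip of the diagram.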
How do light rays move in this geometry? What's the rule for a light ray? Ds is zero. Right, light rays move on paths of zero proper time. What does that say? That says that you can forget this factor altogether when you're thinking about a light ray. If you want to know how a light ray moves, you don't need to know about this factor. All you need is this factor. Right? Everybody see that? If you want to know how a light ray moves, it moves exactly as it would if this were just ordinary flat spacetime. How do light rays move in flat spacetime? On straight lines tilted at a 45 degree axis to the vertical. We now know in this geometry how light rays move. Let's draw the geometry. Here it goes. It starts out deep in the past. We can't go down that deep into the basement, but it ends. This is space. Space now goes this way and this way. It ends at capital T equals zero. How can it end at capital T equals zero? Little t went on and on forever. Well, it's just that the metric, when big t gets small, the metric here blows up. It gets big. So that means there's a lot of time in here. The metrical distance between neighboring points gets very large when t goes to zero. So it's highly distorted. I mean, it's a highly distorted geometry. The one advantage of it is that light rays move on 45 degree axes. So we now know how light rays move. They move like that. If they're moving out of the blackboard, they also move on 45 degrees, but I'm not going to try to draw it. I can't draw it. All right, good. Now let's ask what an observer can see. Imagine an observer in this world. I'll make them blue. Here she is. She's standing still in her own reference frame. Time is going on. More and more and more time is unfolding. And eventually, she gets up to the top. Now how long does that take according to her watch? Forever. Forever, right? It takes forever for that to happen. But on this diagram, on this formal mathematical abstract description of this interspace, the terminal point is over there. That's the end of her existence. It doesn't bother her at all because her clock has gone off to infinity. Let's ask what she can see. What she can see is she looks back. All right, let's draw that in red. Here she is right over here. That's Alice. And Alice looks back and sees everything that she can see within her light cone. Remember, the whole purpose of this coordinate transformation was to make it easy to follow light rays, 45 degrees. If we want more dimensions, we'll have to extend this out to make a cone out of it. But you can see all of the main features from here. She can see everything behind her here. But if she waits a little while, she sees more. And if she waits a little longer, she sees more. But no matter how long she waits, the most that she can ever see is this. What about things out here? Nothing. If there's somebody out here, she can never know about it in principle. She has a horizon. The red line out here is her event horizon. It is her private event horizon. Other people have different event horizons. But this is her private event horizon. She cannot see anything out beyond it. In principle, as a matter of real principle, she cannot, at least by classical relativistic reasoning, by classical light reasoning and so forth, she cannot see anything. All kinds of things could be going on there. Her friend Bob might have originated over here, and she might have been able to see Bob in the past. From here, she could even, Bob is moving, is Bob. Alice, Bob. 
In the past, Alice was able to send messages to Bob. Bob was able to send messages to Alice. And now Bob passes Alice's horizon. From there on in, Bob cannot send the message to Alice. Bob may be able to see Alice, but Alice cannot see Bob. So from this point on, Bob has passed out of her horizon. He's gone. That's what's going to happen. Unless, of course, if she's holding hands, what does it look like if she's holding hands with Bob and she doesn't want to let him go? Yeah. Then it looks like this. Now, does this mean they really get closer and closer together? No, it doesn't. It doesn't because the metric is getting bigger and bigger out here. So the actual metrical distance between them, the actual proper distance between them can stay constant. And if it does stay constant, they would appear to join up here, but they're not really getting any closer together. So if they're bound together, if they're holding hands, this is what it looks like. And they can continuously forever and ever send messages back and forth. No limits. But if Bob lets go of Alice and goes with the flow, if he's, let's go implies, among other things, that he gets far enough away that the gravitational pull between them doesn't hold them together. If he lets go and passes outside her horizon, they simply fall out of causal contact. That's the expression for it. They've fallen out of causal contact. In this particular instance, Bob can send a message, sorry, can receive a message from Alice, but he can never send a message to Alice. Alice up here cannot send a message to Bob either, but Bob also has his own private event horizon. Let's make Bob's event horizon in green. Here's Bob's event horizon. Bob cannot see anything outside of his event horizon, so he can see Alice early on, but he can't see Alice after she passes out of her horizon. So he won't be able to see Alice at his time. Bob won't be able to see Alice at his time. That's right. Well, you can never see anything at the same time, period. Right. You can never see. The only thing you can never see is those things which are back inside your past light cone. So you never can see anybody at the same time. But once Bob has passed through here, Alice can never see him. If this really was flat space, if it really wasn't this peculiar, the sitterspace with this factor here, and then Alice would have to separate it, then Alice could keep going and look back at Bob. But Alice comes to the end of her world line over here, and simply cannot look back. So this is the character of the sitterspace. It has horizons. In fact, every single observer has his own private event horizon. In this case, Alice and Bob constitute one observer. And maybe we'll talk a little bit more about these event horizons next time. I'm getting a little tired, so I think I'll quit soon. But we apparently live in such a world. And in such a world, it's not Alice and Bob, but it's galaxies. Lots of galaxies. Here we are now. We can see about 10 to the 22 galaxies around us. As time goes on, they will pass through the horizon and pass out of, let's say we're Alice. They will pass out of Alice's horizon, and Alice will not be able to see them anymore. They will be gone. So as I said, Alice will be alone in the world for all practical purposes. Excuse me. Is there a theory that in the far distance, we would not be able to see any galaxies at all? That's right. That's right. They will have a... Galaxy, Alice, and Bob will... You might... 
Okay, let's ask whether Alice at a very, very late time here can see Bob back here. In a sense, no. It looks like she can, but in a sense, no, and I'll explain why. This is Alice here. Alice's time steps look like this. Many, many time steps here. Now, imagine that Bob sends her a signal, and that signal is going to come up to here. Alice is going to look back and look for Bob back here. Well, you say she can see him. But in fact, because her time clocks are so bunched up over here, it means the radiation that she sees from Bob is incredibly redshifted. So many beats of Alice's clock get squeezed in up here that one beat of Bob's clock gets stretched out over an enormous time scale in Alice's world. So when Alice looks back and sees Bob from way, way up here, he's enormously redshifted. You can't see very, very redshifted things. His wavelength, the wavelength of his light and the energy of his light will be so stretched out. So in fact, really, Alice has no chance of really seeing Bob over here. He's just been redshifted to hell and... Yeah, like a black hole. Like on the horizon. That's right. It's exactly the same as in the horizon of a black hole. So in that sense, at very, very late time, Alice will not be able to see any of the galaxies, except in this exceedingly redshifted sense, which is useless. She can't detect such low energy radiation, and she will just find herself alone. And so will Bob, all other galaxies having passed out of their horizon. Is it the same with a black hole, where it takes forever to reach the horizon? Yeah, yeah, but okay. It is the same. It is the same. Well, it is the same. Alice looks back. Has Bob crossed her... Where is Bob? Here's Bob over here. Has Bob crossed the horizon? No. Alice looks back. Has Bob crossed the horizon? No. Alice looks back. Has Bob crossed the horizon? No. She never sees him cross the horizon. But do objects falling into a black hole become infinitely redshifted as well? Yes. Yes. It's exactly the same thing. It's exactly the same thing. Here's the black hole. Here's who stays outside. Let's put Alice outside. Yeah, let's put Alice outside. Let's keep the color coding. Blue? Blue for Alice? Alice is red. Bob is blue. No, here's Alice. Alice is blue. Bob is green. I don't know. Here's Alice. I usually send Alice into the black hole, but this time Alice gets to stay outside. And here's her clock. Her clock is going faster and faster and faster and faster and faster. It's not going faster and faster, but on this diagram, all her time intervals are bunched up, same as they are over here. And Bob, who is now green, is falling through the horizon. Alice looks back. Does she ever get to see Bob fall through? No. Bob's signals, his light waves or his waves of his hand that he sends out, appear to get slower and slower to her because they're drawn out over longer and longer time scales, which is another way of saying Bob becomes infinitely redshifted. The light from Bob becomes infinitely redshifted from Alice's perspective. So it's exactly the same thing. Yeah. So in Alice's real time, in her proper time, she's slowing down. She's going at her proper time, but on the scale, it's getting compressed, right? Well, on the scale, not on her scale. Well, she's just ticking along happily. No, that's Bob. That's Bob. You're right. That's Bob, yes. So Alice's scale time, right? Her proper time is still her proper time. So she looks back and she sees Bob slowing down.
But she not only sees him slowing down, she sees his atom slowing down, she sees the radiation that's coming out. Yeah, she sees not only Bob slowing down, but she sees the atom slowing down, she sees the emission of photons getting slower and slower, the photons getting longer and longer wavelength, smaller and smaller frequency, and eventually his photons become so monumentally long wavelength that they can't be seen at all. The energy of them gets diminished to negligible. Yeah. How would she measure the temperature of the surface right above the horizon? Well, to do that, she has to send down a thermometer and make sure the thermometer doesn't fall into the black hole. So to keep it from falling through the black hole, she'd send down a thermometer, measure the temperature, pull it back up and see what it recorded. But that's not what she's doing with Bob. And the same incidentally, same thing here. If Alice, so here would be the experiment. Alice has a very long rope, a long cable. On the end of the cable is a thermometer. She throws it out hard enough that it's beyond the escape velocity from a local cluster of galaxies and waits a long time for it to migrate out toward the horizon, her horizon. But she doesn't let it go through the horizon. She then pulls it back in. And after she's pulled it back in, she sees what the temperature that it recorded was. She will also see a high temperature, but we'll come to that. Horizons are always hot in a sense, but we'll come to that. This is so far the classical description of the sitter space. Full of horizons. There's horizons everywhere. No matter where Alice is, there's somebody, she's passing through somebody's horizon. So wherever you are, you're passing through somebody's horizon. Say goodbye because you won't see them again. I'm trying to understand what it looks like in the untransformed, in the little thing. They're at the same distance. I'm going to leave that for you to do an exercise. It's an exercise to see what it looks like. No, it's a good exercise. Just undo what I did. I'm going to just say that my assumption is they're not moving apart. The x is not expanding there. No, no, no, no, no. The whole metric is blowing up, including the x parts of it. So the distance between points, if you keep the points at fixed coordinate separation, here Bob and Alice are at fixed x's. The distance between them is getting bigger and bigger. It goes to e to the infinity. That's right. The distance between them blows up when they get up to, right, that's another fact of the sitter space. Now, something you can work out is you can compare the distances between these two points, or the distance between these two points, between the distance comparing. How far apart are these two? How far apart are these two? What do you think the answer is? Are they growing? Are they shrinking? No, it's staying the same. Staying the same. If you draw a horizontal line, work it out. Here's the metric. Draw a horizontal line here and ask for the distance, let's say, from the center to the horizon over here. And compare it with the distance from the center to the horizon over here. Compare it with the distance from the center to the horizon over here. You'll find they're all the same. The coordinate distance is shrinking, but the metric is increasing, and the distance to the horizon is always the same. 
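One way to do the exercise just assigned, sketched under the assumption that the observer sits at x equals zero and that her horizon on a slice of constant T is the 45-degree light ray at coordinate distance |T|: using the metric above, the proper distance out to the horizon is

```latex
D = \int_{0}^{|T|} \frac{dx}{H\,|T|} = \frac{1}{H},
```

the same on every slice, which is the statement that the distance to the horizon never changes.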
That is connected with the fact that the Hubble constant is constant, and so the distance to where the speed of light is, where the things are moving away with the speed of light is constant. So Alice doesn't see things changing with time. Everything stays the same. She just sees a horizon around her, and that horizon stays at a fixed distance from her. So would she be seeing the projection of all things in the past be projected onto that horizon space? Because they'll never go away, right? They'll never go away, but she sees it harder and harder to see them because they get redshifted. Now, that doesn't take into account the Hawking radiation coming from the horizon, but what you say is right. Okay, so assuming that we don't pass through each other's horizons, we'll meet here next week.
(March 7, 2011) Leonard Susskind gives a lecture on string theory and particle physics that dives into the idea of cosmic horizons as well as the relationship between ultraviolet and infrared light. In the last course of this series, Leonard Susskind continues his exploration of string theory that attempts to reconcile quantum mechanics and general relativity. In particular, the course focuses on string theory with regard to important issues in contemporary physics.
10.5446/15106 (DOI)
This program is brought to you by Stanford University. Please visit us at stanford.edu. I want to make sure we're all on the same page with respect to some very elementary mathematics, complex numbers. It's an important concept. So I just want to go through them very quickly, extremely quickly. A complex number has a real part and an imaginary part. And you can plot it on a two-dimensional surface. The horizontal axis of the two-dimensional surface represents the real part of the function, not the function, just the real part of the number. And the imaginary part of the number is plotted on the vertical axis. And a point on this plane is a complex number. If this is x and the height is y, then the complex number is z, which is x plus iy. OK. We could also represent it another way. We can represent it in terms of a distance, which we can call r, and an angle that we can call theta. And now, just from elementary trigonometry, the distance x is r times the cosine of theta. So that's x is r cosine theta. And y is r times sine theta. So iy is i times r sine theta. Or in other words, we can take the r out of it, factor the r out, and write that z is equal to r times cosine theta plus i sine theta. The combination cosine theta plus i sine theta is called e to the i theta. e to the power i theta. So know that definition. e to the i theta is cosine theta plus i sine theta. The reason we call it e to the i theta is because it satisfies certain rules of exponentials. The rule of exponentials is that when you multiply exponentials, you add the exponents. e to the 3 times e to the 5 is e to the 8. 3 and 5 is 8. You can check using elementary trigonometry that if you have two angles, theta and phi, and you write e to the i phi, which is cosine phi plus i sine phi, and multiply them, you get something with a real and an imaginary part. The real part is cosine theta cosine phi minus sine theta times sine phi. Remember, i times i is minus 1. So if you multiply these two, what you get is cosine theta. Well, I'll tell you what you get. Let's simplify it. You get cosine theta cosine phi minus sine theta sine phi. Anybody know what that is? Cosine theta plus phi. Right. Cosine of theta plus phi. That's a high school trigonometry formula. The cosine of the sum of two angles, cosine cosine minus sine times sine. And then, plus the imaginary part, the imaginary part is cosine theta sine phi plus cosine phi times sine theta, and that's just sine of theta plus phi. And by definition, if e to the i theta is cosine plus i sine, then this is just e to the i theta plus phi. So from elementary trigonometry, we discover that this combination, cosine theta plus i sine theta, has all of the properties of the exponential of i theta. Now, all of trigonometry, anything you ever wanted to remember about trigonometry, couldn't remember, is all stored in this formula. For example, if you couldn't remember what the cosine of the sum of two angles is, all you need to do is multiply e to the i theta times e to the i phi in this form, and you'll discover that cosine theta plus phi is cosine theta cosine phi minus sine theta sine phi. So all of trigonometry, I don't know why they take a year to teach trigonometry in high school. And then at the end of it, they come here to Stanford and they don't know this formula. Another fact, we could check this in two ways. We could just use the formula for multiplying exponentials. e to the i theta, first of all, there's e to the i theta.
That's cosine theta plus i sine theta. Remember what the complex conjugate of a thing is. The complex conjugate is just the same number except with the imaginary part changing sign. So it's the reflected number in the lower half plane. If the number is in the upper half plane, then it's reflected into the lower half plane or vice versa. The complex conjugate of e to the i theta is cosine theta minus i sine theta. But that's the same thing, that's e to the minus i theta. That's the same thing as cosine theta minus i sine theta. And if you remember that minus the sine of theta is the same thing as sine of minus theta, then you realize this is just e to the minus i theta. So the complex conjugate of e to the i theta, you just get by changing the sign of the i here. And what happens if you multiply these two? You get one. I think that it's changing the sign of the theta because that's what happens when you rotate in the clockwise direction instead of the counterclockwise direction. Right. This is true, I don't know if everybody caught that, but if you multiply e to the i theta times e to the minus i theta, you get one. So the class of numbers, which are the class of complex numbers, which are of the form e to the i theta, are numbers which have the special property that when they're multiplied by the complex conjugate, the result is one. And there's another way to say that. Another way to say it is that x square plus y square is equal to one. The class of numbers for which x square plus y square, remember, a number times its complex conjugate is just the sums of the squares of the real and imaginary part. And so the numbers which have the form e to the i theta, let's specialize to those, are numbers for which x square plus y square is equal to one. In other words, they're numbers on the unit circle, on the circle of radius one. They're simply called numbers, I don't know what they're called, unitary numbers. Of the form e to the i theta, you can call them unitary numbers. And they're numbers that have the property that they lie on the unit circle. And the number times its complex conjugate is exactly equal to one. Every complex number is a unitary number times a real number, a real positive number, and the real positive number is just the radius, or the distance to that point on the complex plane. And that's all we need to know. That's all there is to complex numbers, there is nothing else. Nothing else I can think of offhand. I want to make sure everybody knows. Ah, the other word for a number which is of the form e to the i theta is that it's called a pure phase. The angle in here is called the phase angle, the phase, and the radius is called the modulus, I guess. The modulus means the size of it. So a number which has no r in front of it, or where r is one, a number on the unit circle, is called a pure phase, or just a phase. Sorry, I swallowed the chocolate chip wrong. When I refer to a pure phase, that means a number of the form cosine theta plus i sine theta. Okay, now. All right, now, the postulates of quantum mechanics. We want to now today go through the basic postulates of quantum mechanics. Now, what quantum mechanics is, is it's a calculus for calculating probabilities. After a while, you get some intuition for it, and maybe you get some pictures of what's going on. But think of it as a calculus, in other words, a calculational procedure for calculating probabilities. Probabilities of what? Probabilities for different measurements, different values of measurements.
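Before the postulates, the complex-number facts just reviewed, collected in one place in the notation used here:

```latex
z = x + iy = r(\cos\theta + i\sin\theta) = r\,e^{i\theta},\qquad
z^{*} = r\,e^{-i\theta},\qquad
e^{i\theta}e^{i\phi} = e^{i(\theta+\phi)},\qquad
e^{i\theta}e^{-i\theta} = 1,
```

so a pure phase is a number with r equal to one, sitting on the unit circle, and multiplying e to the i theta by e to the i phi and comparing real and imaginary parts reproduces the addition formulas for cosine and sine.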
The things you measure are called observables. You prepare systems in certain states, not in California or New York, but in certain configurations that are called states. And those states are labeled by or described by vectors in a complex vector space. We've called them alpha, or A, I guess we've called them A. You can add them, you can subtract them, you can multiply them by numbers, and they form a complex vector space, number one. Number two, we normalize them. In fact, in the particular simple case that we studied, which was the two-level system, we could describe this as alpha times the upstate, for example, plus beta times the downstate, where alpha and beta are complex numbers. The coefficients here are such that alpha star alpha and beta star beta are the probabilities that if you were to measure up or down, you would measure up with probability alpha star alpha, down with beta star beta. And so, just to make the probability altogether equal to one, we require that the coefficients are such that the sums of the squares, or the sums of the, I'll call this just the square, it's the square of the, is just equal to one. All right, there's another way to write the same equation. It's that the inner product of a real state with itself is just equal to one. That is equal to alpha star alpha plus beta star beta. And so, this is the abstract way of writing that the total probability adds up to one. So states, state vectors are normalized, this is called normalized. Means that the length of the vector in some sense is equal to one, and altogether the sums of the probabilities are one. That's the basic posture with the states. This is the example of the two-dimensional space of states. Two-dimensional doesn't mean the world is two-dimensional, it just means that there are two independent possibilities, up and down, heads and tails. We discussed how you might measure whether a spin was up or down by putting it into a magnetic field and seeing if it emits a photon or not. So, given an arbitrary state of electron, we can measure its spin. If it gives off a photon, then we say it's down, I guess. And if it doesn't give off a photon, then we say it's up. And the relative probabilities or the probabilities for the two are just the coefficients here. Now, those probabilities, of course, do not completely determine the complex numbers here. Why not? Well, if I multiply any complex number by a pure phase, it doesn't change its magnitude. If I take a complex number and I multiply it by e to the i theta, where theta is any angle, I get a new number whose magnitude, whose alpha star alpha, is exactly the same as the original starting number. And so, knowing the probabilities is not enough to completely determine what these coefficients are. So, there's more information in these coefficients than just the probabilities. And we're going to find out what that additional information is. But for the moment, this is... And, of course, if there's more than two possibilities, up, down, and sideways, or up, down, and chocolate chip, then we add another one, gamma, chocolate chip, and whatever. Those are orthonormal vectors. And we're going to... And we represented them in a particular basis. The basis of up and down, we represented them as... Up was just one zero, and down is equal to zero one. They're orthogonal to each other. They're normalized, both of them, the sums of the squares of each individual one is one, so they're called orthonormal vectors. Orthogonal and normal. 
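In symbols, for the two-level system just described, with up and down as the orthonormal basis:

```latex
|A\rangle = \alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle,\qquad
P(\uparrow) = \alpha^{*}\alpha,\quad P(\downarrow) = \beta^{*}\beta,\qquad
\langle A|A\rangle = \alpha^{*}\alpha + \beta^{*}\beta = 1,
```

which is the normalization condition: the probabilities add up to one. This is written in the bracket notation for inner products of column and row vectors used throughout.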
If we have a higher dimensional vector space with more possibilities, chocolate chip, then we might have three vectors altogether, zero, zero, one. They are still orthogonal and normalized. This zero should go here. Up and down or at right angle? We're going to have to remember the difference between these vectors, these abstract vectors which represent states, and vectors in real space, because that's going to come up today. Pointers. These are pointers, they're not vectors. And we'll label them slightly differently. All right, now, what about the quantities that you measure? They are called observables. They're the things that you measure. And by assumption, there may be more than one thing that you can measure about a system, and you can always reduce them to real numbers. The results of a measurement are always a collection of real numbers. You may combine them together, for example, into a complex number, but the results of real measurements, you know, pointers on a dial or whatever, are real numbers. And so observables are things which when you measure them, among other things, you get real numbers as a result. They are represented by matrices or operators. Strictly speaking, they're represented by linear operators, which you can take to just being matrices for our purpose. But a special class of matrices, the Hermitian matrices. Hermitian, H-E-R-M-I-T-I-A-N, Hermitian for Hermite, and Hermite was a mathematician. Hermitian matrices, and to define them, let's first define a couple of concepts about matrices. First of all, there's the concept of the transpose of a matrix. The transpose of a matrix, you have diagonal elements and off diagonal elements. And there's no reason why it has to be two by two. Let's take it to be three by three. There's M-1-1, M-2-2, M-3-3. And then there's the off diagonal elements. Let's just put one of them over here. Here's M-1-2. And here's M-2-1 and so forth. And there's other ones, other, they're all there. The transpose of a matrix is just, you act on the matrix to interchange rows and columns. All it means is you reflect it about the diagonal. You just imagine it's written on a piece of paper and you turn over the piece of paper so that M-1-2 replaces M-2-1 and so forth. The way to write that, the transpose is just written by putting a T up here. And it's just the diagonal elements are unchanged. But the off diagonal elements, M-1-2 becomes M-2-1 and M-1-2, they just interchange. And likewise over here and so forth. Okay, that's called the transpose. We can write that in the form, the transpose matrix, we can call it ij, the ijth element of it is just the j-i-th element of the original matrix and we can get the change of i and j, of rows and columns, it's called transpose. The next concept is the Hermitian conjugate. I'll write it out, Hermitian conjugate. And it's a kind of complex conjugation, but complex conjugation for matrices, Hermitian conjugate. It involves two operations. The first is to transpose and then the complex conjugate. And it's represented by a dagger. It's represented by a dagger. It's represented by, first of all, transposing, but then complex conjugating everything. All the elements, the diagonal elements as well as the off diagonal elements. And we write it by saying that the Hermitian conjugate, the dagger of a matrix, is you interchange rows and columns and then you complex conjugate. So let's do an example. Here's an example. 4 plus 2i, 7 minus i, 6 plus 4i, and 9 minus 2i. 
What's the dagger, the Hermitian conjugate of that matrix? Well, the diagonal elements, they don't move when you transpose. Transposing a diagonal element stays in the same place, but we have to complex conjugate it. So we get 4 minus 2i. Now, the element up here, first we have to transpose, what was this, 4? 4 minus 2i? Yeah, okay, we have to transpose, which means we put, we interchange rows and columns, and then complex conjugate. So then we get 4 plus 2i up here. This one complex conjugated. This one complex conjugated goes down here, 6 minus 4i, and the diagonal matrix element stays where it is, but it gets complex conjugated. 7 plus i. Alright, so that's the idea of Hermitian conjugation. Flip the rows and columns, and then complex conjugate. A Hermitian matrix, well first let's talk about symmetric matrices. Symmetric matrices, let's just go back to the idea of transpose without the complex conjugation. Just transpose. A symmetric matrix is one that is equal to its own transpose. In other words, when you transpose it, you get back the same thing. In order for that to be the case, all that you need is that the elements, the reflected elements are the same as each other. So it would say that m to 1 has to be equal to m12. We'll put the same m21 here. That's the idea of a symmetric matrix, and it just is exactly what it says. It's sort of symmetric with respect to flipping it or to reflecting it about the main diagonal. A symmetric matrix satisfies mij equals mji. That's a symmetric matrix, very simple concept. A Hermitian matrix is a little more complicated, and what it says, a Hermitian matrix is a matrix which is equal to its own Hermitian conjugate. So if you have a matrix and your Hermitian conjugate it, if it is equal to the original matrix, it's called Hermitian. It's just called Hermitian. That's the definition of a Hermitian matrix. Let's write a Hermitian matrix down. Write the formula for it. Yes, yes. There's the formula for it right here. mij is mij complex conjugate. Say it again. I think it's written correctly. mij is mji conjugate. It's correct. What it says, first of all, is that the diagonal elements are real. For example, this says if you interchange i and j on a diagonal element, you just get the same diagonal element, so it says that m11 is equal to m11 star. That means that m11 is real. So real numbers on the diagonal. Not all is the same real number. I just wrote real, real, real, real, meaning any real number in any one of these places. Then on the off diagonal elements over here, let's suppose that this off diagonal element over here is z, the complex number z, then the reflected off diagonal element has to be the complex conjugate. Then it's called Hermitian. So 2, 7, 1 plus i and 1 minus i is Hermitian. Real elements on the diagonal and complex conjugates when you reflect. This is a kind of reality property. A thing being its own complex conjugate for a number says that it's real. For a matrix, the concept of reality is not that the matrix elements are all real. Not that each matrix element separately is its own complex conjugate, but each matrix element is the complex conjugate of the reflected matrix element. We'll find out. We're going to see very shortly why this is a good definition. Okay, I'm going to write down some theorems. These are your homework to prove. They're very, very simple. But just they don't have names because they're too simple. Very elementary theorems. 
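Before the theorems, the two examples just worked, written out as matrices. The entries are the ones quoted above; their arrangement row by row is an assumption made here, since the blackboard itself is not in the transcript:

```latex
M = \begin{pmatrix} 4+2i & 7-i \\ 6+4i & 9-2i \end{pmatrix},\qquad
M^{\dagger} = \left(M^{T}\right)^{*} = \begin{pmatrix} 4-2i & 6-4i \\ 7+i & 9+2i \end{pmatrix},
```

transpose first, then complex conjugate every entry. And the Hermitian example: a matrix equal to its own Hermitian conjugate, real numbers on the diagonal, reflected off-diagonal entries complex conjugates of each other,

```latex
M_{ij} = M_{ji}^{*},\qquad
\text{for example}\quad \begin{pmatrix} 2 & 1+i \\ 1-i & 7 \end{pmatrix}.
```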
First of all, if you take two vectors, a and b, and you take the inner product of a with b. Now, remember, when you form the row vector b, you use the complex conjugate elements. This is equal to b star, b star 1, b star 2, complex conjugate times a1, a2. The first theorem is that if you interchange a and b, that will put the a's here and the b's here. What do you get? Is this correct? No. Complex conjugate. The way to think of it is that the row vector is like the complex conjugate of a certain column vector. And if you interchange the order of these, you're taking the complex conjugate of b and the complex conjugate of a, and if you multiply complex conjugates, it's just complex conjugates. So the first elementary theorem to check is that the inner product of b with a is the complex conjugate of the inner product of a with b. Now, it's extremely elementary. Anybody who's seen it, of course, is very familiar with it. Next statement. Now, this one is less obvious. Take any matrix, Hermitian or not Hermitian, and I'll, and, all right. Now, this is an operation that I haven't defined yet, but I'm going to define it for you right now. It's very simple. You have a matrix m and you multiply it by a. That gives you a new vector. This is a new vector. And you take that new vector and you take the inner product with b. And you can work out what this is in terms of the matrix elements of m and the entries that go into a and b. It's a very definite expression. First, multiply the matrix by a, or multiply a by the matrix. You'll be left over with a new vector that you can call a prime, and you take it in a product of b. This is related to, but not equal to, I'm going to write equal, but then we're going to change it, and we're going to change everything. A, put a on the left. That's kind of the complex conjugate of a. Put b on the right and put the Hermitian conjugate of m. Everybody has been complex conjugated, and all rows and columns have been interchanged. When you interchange a and b, you're interchanging a column vector with a row vector, and you're complex conjugating. The same for m. When you take a Hermitian conjugate, you interchange rows and columns, and you complex conjugate. This is a number, incidentally. This quantity is a number, a complex number in general. m on a gives you a vector. The inner product of a vector with another vector is a number. A simple number, it's not a matrix, it's not a vector, it's just a number. This equation is not right. What do I have to do to it? Complex conjugate it. I have complex conjugated a, I have complex conjugated b, and I have complex conjugated m, so I have to take the whole thing and complex conjugate it. That's a simple fact about complex conjugation of matrices and vectors, but let's suppose that our matrix is Hermitian. Let's suppose that the matrix in question is Hermitian. If it is Hermitian, I can erase this conjugation sign here. If it's Hermitian, it's equal to its own Hermitian conjugate, definition of Hermitian. So for a Hermitian matrix, and from now on, let's think about Hermitian matrices. For Hermitian matrices, for Hermitian matrices, this is called a matrix element, this is called the BA matrix element of m. The BA matrix element of m is the same as the AB matrix element of m, complex conjugated. Now, one last thing that will make the point for you, if a and b happen to be the same vector, if a and b happen to be the same vector, now let's let b equal a. Let b equal a. Look what this says. 
It says that the matrix element of m evaluated between sandwiched, between two states which are the same, and the state a on both sides is equal to its own complex conjugate. What does that say about AMA? That it's real. So if you have a Hermitian matrix, this was a theorem that I was going to ask you to prove, but I just proved it. If you have a Hermitian matrix, then we should give this object a name. It's called the expectation value. Why it's the expectation value will become, or what that has to do with anybody's expectations is something else. It just has a name. It's the expectation value of m in the state a. Now remember, just keep in mind, the way we're going to be describing states is through vectors. The way we describe observables is through matrices. So you can think of this basically as the average value of m when the state of the system is a. Yeah. The bracket notation for average, does it come from block and matrix? Yeah, sure. Of course, it comes from Dirac. So what do we learn? We learn that if we have a Hermitian matrix, its expectation value is real. Well, we haven't gone through the details yet, but that sounds like a good thing. If we have an observable, which is represented in which when you measure it, always gives you a real number, then obviously its average value should be real. And that's what we find, that for a Hermitian matrix, its expectation value in a given state is real. That's what makes the Hermitian matrix special, and what makes the Hermitian matrix natural candidates to have something to do with observable quantities, because their expectation values in any state doesn't have to, in many, many states, any one of them will satisfy this. Okay, any questions up till now? If there is a physical value in a particle or qm, where you, I think a unit time length is a mass, or you have an imaginary number of atoms, is there a value in the unit? Anything that you really measure is always a number. Now, you can decide to say, I will describe points, locations of particles on the surface of the Earth by giving a complex number. I'll give you an address in Manhattan, by giving you a, you know, Manhattan is laid out on the grid. I'll give you a complex number. But you don't have to do that. You can represent any observable quantity as a collection of real numbers. You can always represent the results of an experiment by a series of real numbers. And those real numbers are called observables. When they're complex, when you add them together in complex combinations, just because of the technical historical definition, we don't call it an observable. The observables are the collection of real numbers. The same question. What's a good way to think about measuring numbers when they show up in physical formulas? Like, is it like these things exist, but they're not observable? But then, how do we know that they exist if they're not observable? I think there's a better... I think the question that you really want to ask is why do we have to use complex numbers in quantum mechanics? That's, I think, what you really want to ask. Why is it that we're driven to complex numbers? This will become clear, but it has to do with time and reversibility. And we're not there yet. We're not ready for it. We could have thought about doing quantum mechanics with all real numbers and never having gotten complex numbers. We've missed some very important things, as you will see, as you will find out. 
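Before moving on, the chain of little facts proved above, written compactly:

```latex
\langle B|A\rangle = \sum_i B_i^{*}A_i = \langle A|B\rangle^{*},\qquad
\langle B|M|A\rangle = \langle A|M^{\dagger}|B\rangle^{*},\qquad
M = M^{\dagger},\ B = A \;\Rightarrow\;
\langle A|M|A\rangle = \langle A|M|A\rangle^{*}\in\mathbb{R},
```

that is, the expectation value of a Hermitian matrix in any state is a real number.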
But let's write down the postulates, examine their consequences, and then I will tell you what would go wrong if you restricted yourself to real numbers. It would have to do in particular with the evolution of state vectors. What we have not talked about at all is how things change with time. And it really has to do with how things change with time, where that I comes from. So, hold on, hold on to the question, and it'll come back again. I realize it's a different question than you asked, but I think it's the right question. What complex numbers are, is they're just a mathematical formalism for describing pairs of real numbers in certain ways. Why are we driven to use them in quantum mechanics? It has to do with time evolution, so we'll come to it. We're still doing mathematics, we cannot do this subject without mathematics. I mean, I can fake it. I can tell you lots of stuff about quantum mechanics and give you all sorts of mystical experiences about entanglement. And when you walk out of here, you will know no more about entanglement than you knew when you walked in, but you'll have this fuzzy feeling about it and you'll say, oh, isn't it wonderful, and it's a mysterious, but you won't have any idea what it is. So, if we want to really do it, honestly, we are forced to do this mathematics. We want to do better than that. We want to really do it right. In the simplest possible context, I mean, I'm giving you the simplest possible context to discuss quantum mechanics, basically the two-level system, and we'll do two-two-level systems, and then we'll talk about the entanglement of two-two-level systems, and we'll understand it in some detail, and you'll know what the words really mean. The next concept, and it's basically the last mathematical concept that we'll need for today, and possibly for quite a while, is the concept of eigenvalues and eigenvectors, particularly eigenvalues and eigenvectors of Hermitian operators. So, what is an eigenvalue and an eigenvector? Well, if I give you a matrix or an operator, a matrix or an operator, M, if it's a, in general, there may or may not be certain vectors that when I apply the matrix to them, don't change the vector except to multiply it by a number. In other words, there may or may not be, and typically there will be, and in fact, in the case of Hermitian matrices, there always are, certain vectors, we take a matrix, there will exist certain vectors which have the property, not every vector, there will exist certain vectors that are associated with that matrix, which are called its eigenvectors, which have the property that you just multiply by a number. The number depends on the vector. I will give you a very, very simple example right now. Supposing we have a diagonal matrix. In general, it will be called an eigenvalue and an eigenvector even if it's complex. But if M is a Hermitian matrix, then it follows that lambda is real. Let's prove that. Let's suppose we have a Hermitian matrix, we have found an eigenvector and an eigenvalue. Let's see what we learn by taking the inner product of this equation with A itself. We have M, A, A, and multiplying the equation on the left by A. Lambda is a number, it just comes on the outside, A, A. Now if you go back in your notes about five and a half minutes, remember that we proved that for any Hermitian matrix, A, M, A, its expectation value is real. We also proved that the inner product of A with itself is real. So we have an equation that a real number is equal to a real number times an eigenvalue. 
Well, that means that the eigenvalue is real. Or another way to write it is to divide it by A, A, and the ratio of two real numbers is certainly real. So the first statement is if we've managed to succeed and find an eigenvector and an eigenvalue of a Hermitian matrix, the eigenvalue will be real. Now I'm going to tell you the secret. The secret is that if M is an observable, then the values of it that you can measure when you do an experiment are its eigenvalues. That means the result of an experiment can measure M will be one of the eigenvalues of the matrix M. That's the significance of eigenvalues. They're the possible measurable values of the observable that's represented by M. Let me give you some examples. Oh, yes. The lambdas, the eigenvalues are the values that you measure. The eigenvector that's associated with that particular eigenvalue, that eigenvector represents a state where when you measure the quantity M with certainty, you get value lambda. So the eigenvalues and the eigenvectors go together. There are certain states for any given observable. There are certain states which when you measure that particular observable, you will with probability one get a particular answer and that answer is the eigenvalue. So let's just do an example or two. The easiest examples involve diagonal matrices. Here's a diagonal matrix. Let's just give it some letters. M11, zeroes, just two by two is good enough for us. M22, there's a diagonal matrix. What are its eigenvectors? Remember, they're vectors which when you multiply them by the matrix, you just get the original vector back times a number. I'll just tell you the answer. There are two eigenvectors of this matrix. One of them has a one in the upper place here. Let's check. What does this matrix do when it multiplies this vector? Well, we take the row, multiply it by the column. That puts in the upper place here M11. But how about the lower place? Zero times one plus M22 times zero. Zero. But this vector is just M11 times one zero. So you see what's happened. The matrix times the vector gives you a number times the original vector. In other words, in particular, it doesn't change. Yeah, it just gives you the back the original vector times a numerical number. In this case, the numerical number happens to be M11. So the vector one zero is an eigenvector of this matrix with eigenvalue M11. There's another eigenvector and the other eigenvector is to put a zero here and a one here. Let's see what we get. In the top place, we get M11 times zero plus zero times one is zero. And then we get zero times zero plus M22 times one, M22. And that's just M22 times zero, one. So we found another eigenvector, matrix times vector equals number times the same vector. And this time, the eigenvalue is M22. So the two vectors, zero one and one zero, are both eigenvectors, but with two different eigenvalues, M11 and M22. You can see where this is going. For example, if I'm interested in up, down, and the spin of an electron, I might describe it by a matrix with a one and a minus one. Then the eigenvalues would just be one and minus one. The eigenstates would be one zero. That's the correct equation. So the state with the electron up is an eigenvector of a certain observable, which is represented by a matrix. And the other eigenvector is zero one, and that is equal to minus zero one. The minus comes from this minus sign over here. 
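The two results just obtained, side by side. First, the reality of eigenvalues of a Hermitian matrix:

```latex
M|A\rangle = \lambda|A\rangle \;\Rightarrow\;
\lambda = \frac{\langle A|M|A\rangle}{\langle A|A\rangle}\in\mathbb{R}
\quad\text{when } M = M^{\dagger},
```

since the numerator and denominator were both shown to be real. Second, the diagonal example:

```latex
\begin{pmatrix} M_{11} & 0 \\ 0 & M_{22} \end{pmatrix}\!\begin{pmatrix}1\\0\end{pmatrix}
= M_{11}\!\begin{pmatrix}1\\0\end{pmatrix},\qquad
\begin{pmatrix} M_{11} & 0 \\ 0 & M_{22} \end{pmatrix}\!\begin{pmatrix}0\\1\end{pmatrix}
= M_{22}\!\begin{pmatrix}0\\1\end{pmatrix},
```

so for a diagonal matrix the eigenvectors are the basis vectors and the eigenvalues are the diagonal entries.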
So the two states of the electron, up and down, are eigenvectors of the spin operator, the spin operator, which is usually called sigma three. This is definition, sigma three is just this. They're eigenvectors with two different eigenvalues, and you see the pattern that the value that you measure, either plus one or minus one, is the eigenvalue. The eigenvector is the state in which the eigenvalue is the desired measured quantity. Any questions? And in fact, these are normalized. Room, yeah, rule. We normalize vectors, yes. Let me give you another example, which is less, a little bit less trivial. Incidentally, if it were three by three or n by n matrices, the same pattern would be there that if the matrices are diagonal, then the diagonal entries are the eigenvalues, and the eigenvectors are just vectors with zeros everywhere except in one place, a one. So you can check that out for yourself, experiment around the diagonal matrices. Let me give you one other example. The other example involves an off-diagonal matrix. These matrices, incidentally, will play an important role in the ones we're talking about, but from time being, they're just arbitrary matrices. It's Hermitian. It has real diagonal elements. Zero is a real number, and the off-diagonal elements are complex conjugates of each other. One is its own complex conjugate. It also happens to be symmetric, but it's both Hermitian and symmetric. Now let's see if we can find the eigenvectors. Rather than to do any algebra, I'm going to show you what they are, and then we'll see that they are eigenvectors. The two eigenvectors in this case, okay, now what's wrong with this eigenvector? It's too big. It's not normalized. One squared plus one squared is two. But here's a statement. If a vector is an eigenvector, it doesn't matter what its normalization is. If you double it or triple it or anything else, it's still an eigenvector. But if you wanted to normalize it, you would write one over square root of two, one over square root of two. It doesn't really matter in checking whether it's an eigenvector or not, but let's normalize it just to remind ourselves to always normalize vectors. What is this matrix times this vector? We take the zero, one, and we multiply it by these two elements. And what do we get? We get zero times, well, let me come on the side, one over square root of two plus one over one over square root of two. One over square root of two, and likewise in the bottom. We got back exactly the same vector. So this is an eigenvector of this operator with eigenvalue plus one. What about zero, one, one, zero? Can I find another eigenvector? Yes, I can. The reason I can find it is because I know the answer. But it's not hard to find. It's not hard to find. I just want to illustrate the idea of eigenvectors. Another time I can tell you how you would go about finding them, but I do want to illustrate it. The other possibility is one over square root of two minus one over square root of two. In other words, instead of having the same element here and here, we have opposite sign. Let's check it out. Zero times this, zero, one times this is minus, minus one over square root of two. And then here we have one over square root of two plus zero. So we have one over square root of two. And that's the same thing as minus the vector that we started with. We have a minus sign here and the opposite sign here. It is minus the original vector. So we found another eigenvector, and this time the eigenvalue is minus one. 
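The off-diagonal example just worked, written out:

```latex
\sigma_1 = \begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}:\qquad
\sigma_1\,\tfrac{1}{\sqrt{2}}\!\begin{pmatrix}1\\1\end{pmatrix}
= +1\,\tfrac{1}{\sqrt{2}}\!\begin{pmatrix}1\\1\end{pmatrix},\qquad
\sigma_1\,\tfrac{1}{\sqrt{2}}\!\begin{pmatrix}1\\-1\end{pmatrix}
= -1\,\tfrac{1}{\sqrt{2}}\!\begin{pmatrix}1\\-1\end{pmatrix}.
```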
We found two eigenvectors with eigenvalue plus one and eigenvalue minus one. I defy you to find any other eigenvector. There are none. There are none, and that's a theorem. There are no more eigenvalues or eigenvectors. Right. Right, right, right. You can always multiply an eigenvector by a number, but it's still an eigenvector, but no more, no more, which are not just numerical multiples of the same eigenvector. Right. All right, I'm now going to prove a fundamental theorem of quantum mechanics. Very fundamental about Hermitian matrices. Am I going too fast? Well, if I am, yell out. What did you call the spin matrix? This one, say it again. Yeah, yeah, yeah. Okay, sigma three, and this is definition, one, one, sorry, one, minus one, zero, zero, and it has eigenvalue plus one and minus one. Sigma one is equal to this one, one, zero, zero. I'll show you two interesting things, or, well, let's just say, one interesting thing about both of these. The square of each one of these matrices, does everybody know how to square a matrix? How to multiply a matrix by itself. Good. The square of each of these matrices is the same, and it's just plus one. The square of this matrix, you just square the elements, one squared and minus one squared are both plus one. The square of sigma one, let's square sigma one, one, one, zero, zero, and multiply it by one, one, zero, zero. Alright, the first element in the upper left-hand corner is the inner product, this times this, the one times the one gives us a one, then the zero times one plus one times zero, zero, one times one plus zero times zero, now sorry, one times zero plus zero times one is zero, and one times one plus zero times zero is one. So both sigma one squared and sigma three squared are both equal to one. Now that's kind of an interesting fact. Notice that it's also true of the eigenvalues of the matrices. They were both plus and minus one. If the square of a thing is one, it stands to reason that it's either plus or minus one. So for matrices, in particular two by two matrices like this, but matrices in general, if the square of a matrix is the unit matrix, this matrix here is the unit matrix, it's called one. Alright, it's just a unit matrix with ones on the diagonal. Sigma three squared, sigma three squared is the same as sigma one squared, equal to one. And it's not independent of the fact that the square of each of the eigenvalues is equal to one. It means that the measured values that you can measure are such that the square is equal to one. And it means that you can only measure plus one or minus one. There's one more matrix I'll write down, and it's called sigma two. And it's equal to zero, minus i, i, zero. We will come back to the sigma matrices. In fact, we'll probably come back to them tonight. We found the eigenvectors of sigma one and sigma three. We have not yet found the eigenvectors of sigma two. And I will tell you in due time what their significance is. Yeah. You can see all this geometrically. It's a very simple geometric interpretation. Okay. If you're going to say that, then tell us what it is. Oh, okay. I was going to write something up, but I can put it on the internet. Tell me how. Basically, if you look at the unit vectors, if you look at the unit vector, well, think of e1 and e2, like the zero, one and one, zero, as the unit vectors. If you look at a matrix, if you look at the column vectors, those are the images of the unit vectors. I know what you're saying. So forget it. So one of them is reflected, you're changing e1 and e2.
That's called sigma one. And so the diagonal is fixed. And also the anti-diagonal is fixed. And when you're reflecting in the main diagonal, so the normal to it is also reflected. Those are eigenvectors, those directions, I think, and that's what you got. And then the, that's one of the square root of two things, those are pointing in the diagonal directions. So they're fixed. And then the other one is you're sending, you're keeping the e1 fixed, and e2 is going into its negative. So you're reflecting in the x-axis. So the vertical vector is an eigenvector going into its negative. And also the squares of these things, if you apply them twice, you come back to where you say it. Not so completely obvious for this one, but the square of this one is also one. Yeah, well then I take the i out in front, and I have i times something very obvious. Right. Sigma 2 squared is also one, not minus one, but one. They all have that property. They're all, in that sense, they're similar. That also proves incidentally that the eigenvalues of sigma 2 are also plus or minus one. It doesn't tell us what the eigenvectors are, but it tells us what the eigenvalues are. All right, let me prove to you now the following interesting theorem. Suppose, and this theorem is significant, highly significant. Supposing you have an observable, and it has more than one eigenvector, in general, will. The number of eigenvectors that it has is usually equal to the dimensionality of the matrix. It is the dimensionality of the matrix. So if you have three by three matrices in a Hermitian, they will typically have three eigenvalues and three eigenvectors. Four by four will have four eigenvectors and four eigenvalues. But here's the theorem. If I have a Hermitian matrix M, and it has two different eigenvalues, two different eigenvalues, that means that there's an eigenvector A with eigenvalue lambda A, and there is an eigenvector B with a different eigenvalue lambda B B, then if lambda A and lambda B are different, what it says is that A and B will be orthogonal to each other. Before I prove it, let me say what it means. What it means is that if there is any observable distinct quantity that you can measure, which will distinguish between two distinctly different possible measurable answers that you can get, for some observable, then that's equivalent to saying that the eigenvectors, remember the eigenvectors are the states in which you definitely measure lambda A. The eigenvectors are the states where if you make a measurement of M, you will get lambda A. B is the state in which if you measure M, you will definitely get lambda B. These states which correspond to distinctly different measurable quantities, its distinctly different measurable values of the same quantity, they are orthogonal. The basic idea is orthogonal states are distinguishable, measurably distinct, and you can distinguish between them by measuring appropriate observables. For example, we already have some examples. The two examples that we had, one of them was 1, 0 and 0, 1. These were the two eigenvectors of sigma 3. Notice that they're orthogonal because they don't share any elements, this times this plus this times this is 0. The orthogonality is the thing that tells you that in the unique experiment, you can distinguish between them. You distinguish between them by measuring the quantity which is represented by the Hermitian operator that has too many words but you get the idea. 
Let me prove to you that if this is so with two different lambdas, then A and B must be orthogonal to each other. This is not very hard, we just have to juggle a few definitions. We first of all multiply this equation, the first equation on the left, by B. So we'll rewrite it then as B M A equals lambda A times B A. So I'm going to multiply both sides by B, by the vector B. I'm going to do a similar thing here. I'm going to multiply both sides by A. So we get A M B equals lambda B times A B. I would like to switch. Let's see, so what do we have to do next? This always confuses me. We have the two equations. Now let's take this equation. We want to conjugate one of these equations. We want to take the complex conjugate of one of these equations. Which one should we choose? The second one. This is B M A, complex conjugated, is what it is. Remember, since M is Hermitian, we don't have to complex conjugate it. That's equal to lambda B star times B A. Now I forgot what to do next. Yes? Yes? Yeah, now we'll just conjugate everything. Oh, did I do something? Which? Did I forget to conjugate something? What are we doing? Sorry, this is conjugated, right? Yeah, good. But before we do that, let's complete. What I've done here is said if I interchange, let's see, this one, if I interchange A and B, and at the same time take the Hermitian conjugate of M, which I don't have to do because it's Hermitian, and take the complex conjugate, I get the complex conjugate of the right-hand side. Now just complex conjugate the whole equation. Let me just throw away the star. Throw away the star. Now I have two equations. Let's get rid of the middle one here. One of them says B M A equals lambda A times B A. The other one says B M A equals lambda B times B A. It will not surprise you that there are two possible conclusions. One conclusion is that lambda A equals lambda B. But I told you already, I'm talking about two eigenvalues which are known to be different. So by assumption lambda A and lambda B are not the same. If lambda A and lambda B are not the same, there's only one way this can be true, and it's that B A is zero. In other words, let's subtract these two equations. We subtract the two equations. On the left side we get zero. On the right-hand side we get lambda A minus lambda B times B A. If a product is equal to zero, that means one or the other factor is equal to zero. The product of two things can only be zero if one or the other factor is equal to zero. One possibility is that lambda A is equal to lambda B. True, this is true. It's a possibility. But I explicitly said let us consider two different eigenvalues, two different values of the same observable. If they're different, then lambda A minus lambda B is not equal to zero, and the only possibility is that B and A are orthogonal to each other, that the inner product of B with A is equal to zero. So it follows for Hermitian matrices, if I have two distinct eigenvalues, I can immediately conclude that these are orthogonal vectors. Let's check it out for the eigenvectors of sigma two. Remember where are the eigenvectors of sigma two? Here they are. Here they are right here. No, I'm sorry, sigma one is what I meant. Sigma one. Let's check it out for sigma one. Notice that the entries here are real, and we don't have to complex conjugate anything. All the entries here are real.
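For reference, the argument just given, written compactly in bra-ket notation (my summary of the blackboard; the star can be dropped at the end because the eigenvalues of a Hermitian matrix are real):

M|A\rangle = \lambda_A |A\rangle, \qquad M|B\rangle = \lambda_B |B\rangle, \qquad M = M^{\dagger},

\langle B|M|A\rangle = \lambda_A \langle B|A\rangle, \qquad \langle B|M|A\rangle = \overline{\langle A|M|B\rangle} = \lambda_B^{*} \langle B|A\rangle = \lambda_B \langle B|A\rangle,

0 = (\lambda_A - \lambda_B)\,\langle B|A\rangle \quad\Longrightarrow\quad \langle B|A\rangle = 0 \ \text{ whenever } \ \lambda_A \neq \lambda_B .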
To take the inner product of two vectors, all of whose entries are real, all we have to do is take the product of the first two entries, sorry, the product of this entry with this entry, and add to it the product of this one with this one. So we get 1 over square root of 2 times 1 over square root of 2, that's a half, and then 1 over square root of 2 times minus 1 over square root of 2, that's minus a half, a half plus minus a half is zero, this vector is orthogonal to this vector. So there's an example of the orthogonality of eigenvectors with different eigenvalues for Hermitian matrices. As I said, the significance of this is deep, and it says that whenever there is an observation that you can do that will uniquely distinguish between two possible states of a system or two possible values of an observable, the states associated with them are orthogonal to each other. Or you can say it the other way, if two states are orthogonal, it means that there exists some measurement you can do which will uniquely tell you which one of the two is realized in your system. So that's one of the fundamental theorems of quantum mechanics, and let's take a little break. The eigenvectors of sigma 2 are so that you can check them yourself. The two eigenvectors of sigma 2 are 1i, stick a 1 over square root of 2 to normalize it in front of it, and the other one is 1 minus i, 1 over square root of 2. One of them has eigenvalue plus 1, and the other has eigenvalue minus 1. So you can check that yourself. It's a little exercise. Please do it. And you see that sigma 1, sigma 2, and sigma 3 are all similar to each other and that they have the same eigenvalues, plus 1 and minus 1. They all square and give 1. They have another interesting property that we'll come to in a moment, I think. But let me tell you what their physical meaning is. They are observable quantities, and with respect to thinking about the electron spin. Now, I told you the first time, or was it the second time, the electron spin classically in classical physics is like a magnet. It has a north pole and a south pole, and it's a pointer. It points along an axis. If we were talking about, and it has what's called a magnetic moment, and that magnetic moment is a, I hesitate to use the word vector, but the kind of vector that points in ordinary space, I guess we called it a pointer, right? A magnetic moment. Or we can call it the spin. It's the same, apart from a numerical constant, the spin and the magnetic moment are the same thing. It's a pointer in space. As such, it has components. It has components in the vertical axis, for example, as vector has a component which is about over here in the vertical direction. It has a component in the horizontal direction over here and a component over here in the other horizontal direction. And those components characterize the pointer. They are measurable quantities for a magnet. You could decide to measure the x component of the magnetic moment. You could decide to measure the y component, or you could decide to measure the z component. You might think, classically, you'd be right, that you could measure all three of them simultaneously. You could do an experiment to measure all three of the components of the magnetic moment simultaneously, and in that way figure out exactly where the magnetic moment is pointing. Let's save that question whether you can measure all of them simultaneously for an electron or not, but the answer is no. 
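The "little exercise" suggested here, checking that (1, i) over the square root of two and (1, minus i) over the square root of two are the eigenvectors of sigma two with eigenvalues plus one and minus one, takes only a few lines; a numerical sketch (again mine, not from the lecture):

```python
import numpy as np

sigma2 = np.array([[0, -1j], [1j, 0]])
v_plus  = np.array([1,  1j]) / np.sqrt(2)   # claimed eigenvector with eigenvalue +1
v_minus = np.array([1, -1j]) / np.sqrt(2)   # claimed eigenvector with eigenvalue -1

assert np.allclose(sigma2 @ v_plus,  +1 * v_plus)
assert np.allclose(sigma2 @ v_minus, -1 * v_minus)

# They are also orthogonal; remember to complex-conjugate one of them (vdot does this).
assert np.isclose(np.vdot(v_plus, v_minus), 0)
```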
But you can measure any one of them, the x component, the y component, or the z component. How do you do it? Suppose I wanted to measure the x component. The x is this way. I put it in a big magnetic field, and I check whether or not it emits a photon. If it emits a photon, then it had one component; if not, it had the other component. And the only possible answers are minus one and plus one. In other words, you'll either see a photon or you won't see a photon. So each of the components of the electron spin, the x component, the y component, and the z component, has only two possible values. And what's more, we're going to see in a minute that the component in any old direction, if you measure it, will only have two possible values. Now, that's a little hard to think about, but let me tell you right now what sigma one, sigma two, and sigma three are: they represent the observable values of the components of the electron spin along the three axes of space, the three axes of ordinary space. I'll show you how that works and how we can construct the component along any direction in a moment. But notice that they do have sort of very similar properties, same eigenvalues. So if you measure the possible values that you can get in an experiment, for sigma one, you get one and minus one. For sigma three, you get one and minus one. For sigma two, you get one and minus one. That's all you can ever get when you actually measure them. And we're going to see the same is true along any axis. But before we do, let me tell you the last, I think it's the last postulate of quantum mechanics. And here is the probability interpretation. Suppose you prepare a system in a state, a particular state, and let's see, let's call that state B. Somebody gave you an electron or whatever it happens to be in a particular quantum state B. Now, there is some observable. Let's call it M, some observable, M that has an eigenvalue lambda A, which means that if you measure M, you can get the value lambda A, or you may not get that value, you may get some other lambda. But if you measure M, you get one of its eigenvalues. And the associated eigenvector is A. I use different letters just to indicate that there's a difference between them. M is an observable, meaning to say it's a Hermitian operator. Lambda A is one of the possible values that you can measure. And the associated eigenvector is A. B is just any old state, any old state, whatever, not particularly an eigenstate of M, it could be or not. It's just any old arbitrary state. Then we could ask the question, if the system is prepared in B and we measure M, what's the probability that the answer will be lambda A? Lambda A is one of the possible values that you can measure. M is the thing you're measuring. And A is the eigenvector of M with eigenvalue lambda A. The answer is that the probability is given by the expression, the inner product of A with B times its complex conjugate. The easy way to write the complex conjugate is just to write it as B A. Which is just the same as multiplying this by its complex conjugate. Remember that when you multiply a number by its complex conjugate, you always get a real positive number. If A and B are normalized, it's an easy theorem to prove that this is less than or equal to one. If A and B are unit length, then the inner product between them has a magnitude less than or equal to one.
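Stated as a formula rather than in words: if the system is prepared in the normalized state B and the observable M is measured, the probability of getting the eigenvalue lambda A whose normalized eigenvector is A is

P(\lambda_A) \;=\; \langle A|B\rangle\,\langle B|A\rangle \;=\; \bigl|\langle A|B\rangle\bigr|^{2} \;\le\; 1 .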
So this number is always less than or equal to one, and it is the probability, it's the probability, again I'll state what it is, it's the probability that if you start with an arbitrary state A, a system prepared in an arbitrary state A, and you measure the measurable quantity M, this is the probability that you get lambda A. I'm prepared to say it. I'm prepared to say it wrong. Sorry, what did I say? Did I say it wrong? You said it was prepared to state A. Oh, sorry. Let me say it again. I thought I said it the right way, but I probably said it wrong. It's prepared in the state B, arbitrarily prepared in the state B, and the probability that you measure lambda A will be the inner product between B and A squared, times its complex conjugate, a real positive number given by the square, or I call it the square, but I mean times the complex conjugate, of A with B. That's the probability. Now notice it's sort of symmetric in A and B. It is symmetric in A and B, but for the moment think of it asymmetrically. You prepare the system in A, you prepare the system in B, and you measure M and ask what the probability for lambda A is. That's the basic postulate of quantum mechanics, always assuming that all of your vectors are normalized. Always assuming that your vectors are normalized, that your state B is normalized, and that the eigenvector A is normalized. Is that the cosine squared of the phase angle between them? If these were real vectors in a real vector space, a two-dimensional vector space, it would be the cosine of an angle. Since they're both complex vectors, they have an angle between them. Is that equal to the cosine squared of that angle? What do you mean they have a cosine angle between them? No such concept. What is true is that if these are just ordinary three-dimensional vectors, real vectors, and ordinary three-dimensions, that the dot product of two vectors, two unit vectors, is equal to the cosine of the angle between them. So it's kind of like the cosine of the angle, but it's not a real thing, not real. In general, this won't be real. That's why we multiply it by its complex conjugate. Yeah. By the theorem, it just proves that B is the eigenvector of the different eigenvalues that are part of the equation. Good, good, good, good, good, excellent. If B happens to be one of the eigenvectors of M with a different eigenvalue, then A and B are orthogonal. And so what it says is that if you prepare the system in one eigenvector corresponding to a different eigenvalue, then the probability that you get A is zero. Good, perfect. So if you prepare it in one eigenvector and you measure the probability that you get a different eigenvalue is zero. Good. So that's part of the probabilistic interpretation here. And all it says is if a quantity is definitely equal to one thing, it's surely not equal to the other thing, but expressed in terms of inner products. A little more complicated, but yes. Yeah, we're going to do many, many such things. But this is the simplest of the things we could do. And for the moment, this is our basic quantum postulative probabilities. Let's give an example. What's the probability? Let's take the case where sigma three is equal to one. Let's see, let's say it this way. Here we prepare B is equal to one zero, which means, by the way, that sigma three is equal to plus one. But now we're not going to measure sigma three. We're going to measure sigma one. We're going to measure sigma one. 
And we're going to ask what's the probability that when we measure sigma one that we get plus one or minus one, as the case may be. So let's ask what's the probability that if we prepare, here's the experiment, we prepare the system with the spin up by putting it in a big magnetic field pointing upward. And then we measure sigma one, which is the component of the spin along the x-axis. What's the probability that we get plus one? What's the probability that we get minus one? You can guess, but we'll work it out. So what is A? A is an eigenvector of sigma one with eigenvalue, let's say, plus one. So A is equal to one over root two, one over root two. What's the inner product between these? Quick, quick, quick, quick. One over square root of two, right? One over square root of two, what's the square of it? One half. So A, B is one over square root of two, and times its complex conjugate, that's usually written this way, times its complex conjugate, its absolute value squared, which is the same as multiplied by its complex conjugate, is one half. So what this tells us is if we prepare an electron in the upstate and we measure the x component of spin, we have a probability one half for getting plus, and it will also be true that we will have probability one half for getting minus. To see that, what's the eigenvector that's associated with minus one for sigma one? We put a minus in here. It doesn't change the answer. Still one half. So there's an example, and this is a characteristic example. You prepare a system, in this case the system is prepared with the spin vertically up, and it's measured in the horizontal direction. Let's do the opposite. We're going to get the same answer. Let's do, we start B now. This time we first prepare the electron horizontally, and we measure the vertical component. It's going to be the same in a product. The horizontal, this time B, just all we've done is interchange B and A. I forgot which was which now, B and A. We've prepared the system with its spin pointing, with its x component positive, and we've measured the z component. It's just exactly the same thing, one half. Let's do a more complicated case. I've told you what the eigenvectors of sigma two are. So let's ask the most complicated question we can ask. What's the probability that if we prepare the state with sigma one equal positive one? That means B is equal to one over square root of two, one over square root of two. And this time we measure sigma two, not sigma three, but sigma two. So this time B is one of the two eigenvectors of sigma two. Let's take this one, one over square root of two, I over square root of two. Sorry, this is A now, this is A. The eigenvector of sigma two. So we start with the spin pointing along the x-axis, and we measure it along the y-axis. We start with the pointing along the x-axis. That's an eigenvector of sigma one or along the one axis. And now we measure it along the two axis. Okay, can somebody compute the inner product of these two? I think I can. The inner product of these two, we first of all have to complex conjugate. So don't forget to complex conjugate. But it's one over the square root of two times one over the square root of two is one half. And then plus, but we have to complex conjugate, so that means minus I over two. One over square root of two times one over square root of two is one half with an I. This is the inner product of A with B. I don't know if it's A with B or B with A, but it doesn't matter. What's the complex conjugate of this with itself? 
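These inner products are easy to check numerically. Here is a sketch (my code, not the lecture's) of the two examples being set up here, prepare spin up and measure sigma one, then prepare along x and measure sigma two; both probabilities come out to one half, as the blackboard computation confirms.

```python
import numpy as np

def probability(eigvec, prepared):
    """P = |<A|B>|^2 for a normalized eigenvector A and prepared state B."""
    amp = np.vdot(eigvec, prepared)   # vdot complex-conjugates the first argument
    return abs(amp) ** 2

up     = np.array([1, 0], dtype=complex)               # sigma3 eigenvector, eigenvalue +1
x_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)  # sigma1 eigenvector, eigenvalue +1
y_plus = np.array([1, 1j]) / np.sqrt(2)                # sigma2 eigenvector, eigenvalue +1

print(probability(x_plus, up))       # 0.5  (prepare up, measure sigma1, get +1)
print(probability(y_plus, x_plus))   # 0.5  (prepare along x, measure sigma2, get +1)
```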
Let's work it out. We have to multiply it by one over two plus I over two. Quick answer? One. Not one. What was that? Mm-hmm. Okay, the imaginary parts cancel. We have a one half times a one half with an I, so we have a minus one half times a one half with a minus I. So the imaginary parts cancel. The real parts are one half times one half is one quarter plus one quarter, one half. Okay? So we see if we, the way to say this now is if we line up the spin in any direction by a strong magnetic field, and then we measure the spin in a perpendicular direction, we have a half problem in any, in fact, in any perpendicular direction, we have a probability of a half that it's this way and a half that it's this way. So any polar, it's called the spin polarization, any spin state, if we start it out with a strong magnetic field and freeze it into some direction, if we then remove the magnetic field and measure an orthogonal component to spin, it will have equal probability of being up and down. What about the components of spin in some arbitrary direction? Let's discuss the components of spin along some other axis. We, maybe it's a good time to stop now and, well, let's, let's do it. And I promise to do this again, yeah. Regarding orthogonal and this case, this case versus classical, if you have two vectors that are 90 degrees apart, you would say that... Are we talking about vectors or pointers? Pointers, there are 90 degrees apart, we would often say this orthogonal, and the projection of one and the other equals zero. Yeah, okay, good, good, good, good, good. So let's not, so right, so we have to, distinguish vectors from pointers. Right, so here we have a situation where we have, where we have a pointer which has been forced to point in some direction and we measure the pointiness in a, what's called perpendicular direction, all right? That's not, all right, good. Or let's just say, yeah, okay, good, good, or better yet, let's look at the eigenvector which corresponds to the pointiness plus one in the x direction and compare it with the eigenvector that corresponds with the pointiness in another direction. They're not orthogonal eigenvectors, they're not orthogonal vectors in the, in this sense. Yeah, yeah, right, comes out to one half instead of zero, right, good. What is the thing which has zero probability of pointing up like this? Yeah, the one that's pointing down. The one that's pointing down in this sense is orthogonal to the one pointing up. So we have to, so let's use the word perpendicular to mean ordinary space and orthogonal, okay, here's our vocabulary. Pointers instead of vectors, vectors mean states, pointers mean pointers. Orthogonality means states being orthogonal, being distinctly different from each other, measurably different, and perpendicular means perpendicular at 90 degrees in ordinary space, okay, good. Independent means something a little different. Independent doesn't mean a perpendicular. So let's stick with the word. No, independent simply means it's not pointing in the same direction. And the set of vectors, there's a concept of independence, linear independence with, which is not, which is different than orthogonal, mutual orthogonality. So independence and orthogonality are quite different for linear independence anyway. Okay, now let's come to classical pointers. I'm interested in something, I'm interested now in the component of the pointer along an arbitrary direction, along an arbitrary direction in real space. What direction? Let's pick a direction. 
How do we pick a direction? We pick a direction by picking a set of components of a unit vector. We pick a unit vector and pick a set of components in, let's call it a unit vector, so unit pointer, my goodness. A unit pointer, we take a unit pointer. Now this is in real three-dimensional space. One, two, three, one is the same as X. Two is the same as Y. Three is the same as Z. Same thing, X, Y, Z. And now I pick some arbitrary pointer. So some arbitrary pointer is pointing along that direction. And let's take it to be a unit vector, which means it's one unit in length. It's described by a set of three components. The sums of the squares of the components have to add up to one. But here are the three components. Let's call this vector N. And a unit vector is usually described by putting a little hat over it like that. N hat means a unit vector. This is a standard notation for a unit vector. A unit pointer, my goodness, please correct me, don't allow me to do that. A unit pointer is indicated by a little hat. And it has a set of components. The components are in X and Y and in Z, or in one and two and in three. Those are the components. And they're basically this component, this component, and this component. You add them up and you get the pointer that's pointing in some arbitrary direction. These are the components of the pointer. Now what about the component of the spin along the direction N? How do we calculate that? That the component of the spin, or the component of the spin pointer, along the arbitrary direction N is given, let's call it the spin. Now we're thinking classically. We're thinking about a classical magnetic moment, a classical spin. The dot product, sigma dot N, is the component of sigma along the direction of N. That's what a dot product is. It's the component of one vector along the axis of another vector. Sorry, pointer. What is sigma dot N? It's the first component, or the X component, called sigma one times N one plus sigma two times N two plus sigma three times N three. Well this gives us an interesting candidate in the quantum mechanics for the operator, which is the component of spin along a direction N. Namely, we take the operator sigma one, multiply it by N one, add two with the operator sigma two times N two, add two with the operator sigma three, and multiply it by N three. That's a candidate, and it is, it is the correct candidate, which we can call sigma dot N, but it's an operator. Why is it an operator? The N's are numbers. They're just numerical numbers that multiply here. Sigma's are matrices. If we multiply a matrix by a number, we just multiply all the entries by the same number, and then we add them up. We can write this down actually, given three N one N two and N three. We can add them up. Let's do it. N one times sigma one, what's sigma one? Sigma one is one one, and if I multiply it by N one, it looks like this. That's sigma one times N one. What about sigma two times N two? Well, sigma two is minus i, i, and now we multiply it by N two. Sigma three times N three. We take N three, which is one minus one, and we multiply it by N three. So that's just N three and three. Zero zero. Now we add them up. What do we get? On the diagonal, these have no diagonal elements. This has diagonal. So we get N three, N three, minus N three. We get N one minus i N two, and N one plus i N two. There's three components, N one N two and N three. The sums of the squares should be equal to one because it's a unit vector. 
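The matrix just assembled, written out as a formula (this is the standard form, with n hat a unit pointer with components n1, n2, n3):

\sigma\cdot\hat n \;=\; \sigma_1 n_1 + \sigma_2 n_2 + \sigma_3 n_3 \;=\;
\begin{pmatrix} n_3 & n_1 - i\,n_2 \\ n_1 + i\,n_2 & -\,n_3 \end{pmatrix},
\qquad n_1^2 + n_2^2 + n_3^2 = 1 .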
And here is the operator that corresponds to measuring the components of spin along the direction N. So what's the experiment? The experiment is we take an electron and we put it in the magnetic field pointing along the N direction, and we see what we get, up or down. This is the operator that corresponds to that. N three on the diagonal, minus N three, N one minus i N two. Is this her mission? Yes, it's her mission. Real, real, and this one's the complex conjugate of this. So it's her mission. Is it square equal to one? What's that? Any unit vector. Well, I don't expect you to be able to see it offhand. So what I'm going to do is I'm going to show you an important property and let you check it yourself. An important property of these sigma matrices, one that you can check, but I'll derive it's importance and then allow you to check it by yourself. Let's take this sigma one plus sigma two, sigma one N one plus sigma two N two plus sigma three N three. Leave it in this form, not in this form, leave it in this form and square it. What do we get? Well, let's write it out. Sigma one N one plus sigma two N two plus sigma three N three times sigma one N one plus sigma two N two plus sigma three N three. Let's multiply. We're going to get a whole slew of terms. The first set of terms are going to be things like sigma one squared times N one squared. When I multiply this out, I'll get sigma, what's sigma one squared? One. So from the N one squared sigma one squared terms, we'll get N one squared. What about from the sigma two times sigma two term plus N two squared? What about from the sigma three N three term plus N three squared? But what's that? That's one. All right, now let's look at the thing which multiplies N one times N two. Here's the thing which is going to multiply N one times N two, N one N two. And what's it going to contain? It's going to contain sigma one sigma two. But then there's another term like that which is sigma two sigma one. All right, you've got one term which is sigma one on the left, sigma two on the right. And the other term has sigma two on the left, sigma one on the right. Why do I bother writing sigma one, sigma two, and sigma two, and sigma one instead of just calling them both sigma one sigma two? Because when you multiply matrices, the order counts. The order counts when you multiply matrices. Matrices in general have different products when you multiply them in different orders. The technical term is that they don't commute. So it's important that these things all, when you multiply them by themselves, they give you one. Yeah, these are, each of the square of these gives the identity. I know, but you just kind of prove that for the, for the sigma dot N. We're going to prove that. All right, then we have N two and three times sigma two, sigma three plus sigma three, sigma two. And then another term similar, which is N one times N three. All right. Well, this adds up to one, which means the unit matrix. This is the unit matrix one. What's this doing here? This is bad. We don't want that there. We want the square of the component of the spin in any direction to be the same as in every other direction. There's nothing special about any of these directions. Well, try it out. Multiply, let's multiply, let's multiply sigma, this you go home and do yourself. Prove to yourself that sigma one times sigma two is the opposite, is the negative of sigma two times sigma one. They're anti-commute. They're anti-commute. This is a special property. 
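The "go home and do it yourself" check can also be done numerically; a sketch (my code), verifying that the sigma matrices anticommute in pairs and that, as a consequence, the square of sigma dot n is the unit matrix for any unit pointer n. The random direction is just for illustration.

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
identity = np.eye(2)

# sigma_i sigma_j + sigma_j sigma_i = 0 for i != j (they anticommute).
for i in range(3):
    for j in range(3):
        if i != j:
            assert np.allclose(sigma[i] @ sigma[j] + sigma[j] @ sigma[i], 0)

# Therefore (sigma . n)^2 = 1 for any unit pointer n, so its eigenvalues are +1 and -1.
n = np.random.randn(3)
n /= np.linalg.norm(n)
sigma_dot_n = sum(n[k] * sigma[k] for k in range(3))
assert np.allclose(sigma_dot_n @ sigma_dot_n, identity)
print(np.linalg.eigvalsh(sigma_dot_n))   # [-1.  1.]
```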
They anti-commute, which is to say that when you change the order, it changes their sign. This is something to check. You sit down and do it numerically. Sigma one sigma two plus sigma two sigma one is zero. Sigma two sigma three plus sigma three sigma two is zero. We have five minutes. We have three minutes. Let's do one case. One case for fun, let's say. Let's do, I've avoided imaginary numbers by doing sigma one and sigma three. And one times sigma three, what's that? That's equal to one one zero zero times one zero zero minus one. And that's equal to, let's see, this time this is zero. This time this is minus one. This time this is one. And this time this is zero. Did I get that right? I think I did. Now let's do it the other way. One zero zero minus one zero one one zero. All right? And the first element one times zero plus zero times one that's zero again. But now let's go to one times one is one. And now we go down to this corner at zero times zero minus one zero. So you notice when you multiply them in opposite order, you get exactly the same thing except for a minus sign. When you add these, sigma one sigma three and sigma three sigma one, add them together you get zero. Same is true for sigma one sigma two and sigma two sigma three. So these are zero. So what do we learn? We've learned that with this particular choice of matrices, the square of any component of sigma along any axis, any axis whatever, is one. That means the eigenvalues of such a matrix are plus and minus one. So any matrix where did I write it? I have it written down. Here it is. The eigenvalues of any matrix of this form are plus and minus one. This we'll check next time. But what it says is the possible measurable values of the spin along any axis of plus or minus one. This is a very weird thing in quantum mechanics. You can only get plus or minus one if you measure it along any axis. But then take another axis. You can also only get plus or minus one and plus or minus one. It's the probabilities which reflect whether two ends are close. We'll come to it next time. That's enough for this time. I've probably saturated you. I've certainly saturated myself. The preceding program is copyrighted by Stanford University. Please visit us at stanford.edu.
Lecture 3 of Leonard Susskind's course concentrating on Quantum Entanglements (Part 1, Fall 2006). Recorded October 9, 2006 at Stanford University. This Stanford Continuing Studies course is the first of a three-quarter sequence of classes exploring the "quantum entanglements" in modern theoretical physics. Leonard Susskind is the Felix Bloch Professor of Physics at Stanford University.
10.5446/15102 (DOI)
Stanford University. Alright, we were talking about something called T-duality. T-duality was very, very important to the history of the mathematical developments of string theory. Let's go back over it again and discuss it a little more fully. And then I want to tell you how it led to the concept of D-branes and how D-branes have become something of a mathematical tool for studying quantum field theories, the kind of quantum field theories that have nothing to do with gravity, but the kind of quantum field theories that we use on a day-to-day basis to understand hadrons, quantum electrodynamics, and even quantum field theories that are interesting to condensed matter physicists. We won't get to all of this, obviously we won't get to this today, but I'll just try to give you some picture of what it's about. Alright, we imagined that there were some compact directions; compact means these small ones which were rolled up. For simplicity I will imagine that the compactification is toroidal, on tori. In other words, not necessarily a two-dimensional torus: a one-dimensional torus is a circle, or a line interval with endpoints being identified. A two-dimensional torus is a parallelogram, doesn't have to be straight, a parallelogram with opposite sides identified. A three-dimensional torus is a cube, or not necessarily a cube, but a... Parallelepiped, thank you. A parallelepiped with opposite faces identified, this point and that point, this point and that point, and the front and the back. So in general, in any dimension the concept of a torus is a well-defined concept. And those are the easiest cases to study. Those are the easiest cases to study. In these cases, of course, supplementing this, on top of it, are the ordinary four dimensions of space-time. So this is what's present at every point of space-time, ordinary space-time: there are other directions that we can move in, or that somebody small enough can move in, and that's the setup. Now, let's focus on one particular place in space-time and ask what could be there. A particle can be there, but now let's zero in and zoom in and ask what that particle looks like on scales which are so small that these compact directions become visible. So let's start with a simple picture that we had. Just to get started, we imagine one large direction and one small direction. The small direction now would be called a circle. A circle not because it's embedded in two dimensions and looks like a circle, but just because its head eats its tail, that when you go around, you come back to the same place. All right, so that's our space-time. Where's time? I don't know, time is, add time into it. Time is not drawn on the blackboard. This is pure space. In this case, two-dimensional. So, if you like, we can generalize it to higher dimensions. The compact directions become tori, the non-compact directions, the big ones, just become our three dimensions of space, or however many dimensions of space we want. Now, in string theory, particles are strings. Let's take the case of... Not all string theories are the same. They're different from each other, but there's this fairly small classification of them. The string theories that I'm interested in right now have only closed strings. We're going to come back to what happens when you have open strings in a little while, but for the moment, only closed strings. No endpoints. That's number one. So, they're like rubber bands. They're like rubber bands. Here they are, closed.
And furthermore, they're what we call oriented. Oriented means that there's an intrinsic difference between going around the string in one direction rather than in the other direction. It's mathematically like a rubber band in which on the rubber band we drew a series of arrows to indicate direction around. And we'll keep track of that orientation. An example of what that orientation would say is when the string splits, when it splits like this, it will split into two strings with the same orientation on the blackboard. So, in that sense, orientation is preserved for these strings. They remember a sense of which... Now, of course, that doesn't mean that they're inequivalent. I mean, there may be a symmetry in the sense that a string and the opposite orientation may have all the same properties. But what it does mean is that you can compare two strings to see whether they have the same orientation or the opposite orientation. Same with electric charge. Positive electric charges behave exactly the same way as negative electric charges in that, for example, if you replaced in the real world every electron by a positron, every proton by an antiproton, and with it every neutron by an antinutron, chemistry would remain exactly the same, or at least to a very, very high precision, would remain unchanged. So, in that sense, plus charges are identical to minus charges, but you can't tell the difference between a plus charge and a minus charge. But if you have two charges, you can tell whether they are the same or opposite. How? Just see whether they attract a repel. You can't tell whether they're both plus or whether they're both minus, but you can tell... Okay, so the same is true of oriented strings, that they have a sense of orientation. If you replaced every string in string theory by a string of the opposite orientation, the theory would be the same, but if you have two strings, you can tell whether they're the same orientation of the opposite. Let's just call them oriented strings. Now you can wrap them... Well, you can do two things. You can have a string which is not wound around some compact direction. There it is. It's just drawn on the surface of the two-dimensional world here. It's free to move. It can move in this direction or it can move around. So it can have momentum. And that momentum can be along the direction of the large directions or it can be along the short directions. Or it can be a combination of both. It can... When I say it has momentum, we can think of that as motion. And it can move this way. It can move this way. It can even move in a helical pattern like that, having both components of momentum. The components of the momentum in the short directions, in the finite compact directions, are quantized. They are quantized in integer multiples of the inverse radius, or the inverse circumference, the distance around the closed... It's called a cycle, around the closed cycle. Do the strings have to go all the way around the cylinder? Can they be like the view there? This one isn't going all the way around. Why do you...? Well, they didn't. I thought they went from like zero to two pi's at close... Oh, gosh. In what space? Sigma? Ah! No, no. This is not sigma. This is some x. This could be x. We could call this direction y around here. That's space. That's real space. Sigma is just a parameter which changes along the string. It's just naming the points along the string. You have a rubber band, and the rubber band is composed of molecules. Okay? The molecules can be named. 
The first molecule is Harry, then George, then Fred, then Sarah, and so forth. That's sigma. It's naming the particles as you go around the string. Okay? And if the string is closed, yeah, then sigma comes back to itself. Yeah. The size has to be the same along... What? Does the size of the compact direction have to be the same at every place in the long direction? We'll come to that. The answer is no. No, it does not. But we'll come to that. Okay, so sigma varies along here, along the string, and it goes from zero to two pi. But that has nothing whatever to do with whether the string is wrapped around space. So, yeah, think of... I'm sorry, I didn't bring a rubber band. Think of a rubber band. We're on the surface of the rubber band we've marked off. Sigma equals zero. Sigma equals a small number, twice the small number, as we go around the rubber band. There it is. Rubber band. And we mark off points on the rubber band. That's sigma. That sigma has nothing to do with whether I wrap the rubber band around my wrist or not. It's always there. Sigma goes around. Once, as the string goes... As we follow the string around its own shape, sigma goes from zero to two pi. It's a completely separate question of whether the string is wrapped around the extra dimension. In the same sense that a rubber band could be wrapped around my wrist. You can wrap the rubber band around your wrist. If it's an oriented rubber band, you can wrap it so that the arrows point this way, or so that the arrows point this way. So, we have two kinds of wrapped strings. Here's a wrapped or a wound. The correct term is wound. Wound around a compact direction. Here is one of them. It comes around the other side, and it points that way. Okay? That's one. The other one goes in the opposite direction. Let's take two wound strings like this. Two wound strings like this. Let me just draw it a little differently. Let's put... Do it this way. This one goes. What can happen to two wound strings which are wound in opposite direction? Well, remember, the basic process of string theory is for strings to come together and join and split. Now, the rule is that when they join and split, orientation is remembered. A string like this and a string like this can... Let's put some more orientation arrows on it. Let's ask how they can join. They can join like that. I don't think I need words to describe it. I think the picture describes it well enough. The lines have to be... Or the arrows have to be continuous. Okay? What you can't have is a string splitting somehow like that. Here, the line can't be continuous. You run into a contradiction as these points come together. All right. So this is the kind of thing that can happen. The opposite, namely the time reversal of it, can also happen. Strings which are like this can come together and join and so forth. All right. That's the basic phenomena of splitting and joining. And it's the thing which is governed by a coupling constant. So what can happen here? What can happen is clear. This can happen. Let's see. I lost track of the orientation. Now it's not really wound anymore. As from the point of view of topology, you can unwind it into this. What we've done here is we've taken this, we've stretched it around, and then joined it on the far side to form the original wound pair. Well, we can undo that. And now it's unwound. It's just a single unwound string. All right. Now, one way to think about it is to keep track of winding number. 
Winding number can be defined to be positive for windings which look this way. We'll call that winding number one. And if it's wound the other way, we'll call it winding number minus. I won't try to give you a mathematical definition, but winding it on my wrist, if the arrows point this way, we'll call that positive winding. If they point the other way, we'll call it negative winding. If I have a single string which is wound like so, what's the winding number? The winding number is plus one. If I have another one, forget this one for the moment, just throw it over on the side, we have another one which is wound this way, that has winding number minus one. The two together all together have winding number zero. This has winding number zero. It's not wound around at all. So what's preserved is winding number. Let's take another case. Let's imagine now two strings which are wound the same way. There's two strings wound the same way. What can happen to them? They have winding number two. Some of the two of them have winding number two. What can happen? Well, let's draw it over here again. These are going this way and they come around here. They're both going the same way. You can't unwind this. You can do something. You can have a process of splitting and joining which does this, which interconnects them in a new way. Can you see what I've drawn? We've crossed them. We've crossed them, but still after they've crossed, the arrows are still continuous. There's no contradiction in the direction of the arrows. This also has winding number two. It's one string with winding number two. This is two strings, each one with winding number one, but the sum total of the winding numbers is two. Here's again another example. Winding number is conserved. You can't change the winding number, although you can change the number of connected components. If you have winding number 100, you could have one string wound around 100 times. You could have two strings each wound the same way 50 times. You could have 100 strings each wound once, and they can all communicate in the sense that you can morph from one configuration to the other, but you can't change the winding number. So winding number is an absolutely conserved quantity in this kind of string theory. It never can change. Let's forget winding number. We're going to come back to winding number in a moment. Well, maybe we should discuss winding number. Okay. This one over here, that can just be thought of as a tiny little particle. Let's just think of it as a little particle. It's a string, but it's a particle. What's it characterized by? It's characterized by many things, shape, and all sorts of other things, but in particular, let's characterize it by its components of momentum. In particular, the component of momentum in this direction over here. The component of momentum in that direction has to be an integer number, let's call it N, integer number of a quantum of momentum, and the quantum of momentum has to do with the circumference. Let's just call it the radius around here, R, distance around here, I'll call R. It's a circumference strictly, but I'm just going to call it R, because R is the standard term for it. The quantum of momentum in this direction, and also, incidentally, the quantum of energy in that direction, if these are massless particles, energy and momentum are proportional to each other, for example, if they were gravitons, moving this way or this way. 
The energy is also quantized in that unit, and so the momentum can have either sign, it can be plus or minus the momentum. The energy is always plus, for the same amount, N over R. It has momentum N over R, and it has an energy also N over R. That's the energy due to momentum, and in particular momentum along the small direction here. It's quantized. So that's unwound particles, and the contribution of energy due to momentum. Oh, let's think about now what, yeah. What is the momentum of a string moving in space? Basically, it's just the velocity of the center of mass. It's just the velocity of the center, just like a rubber band, just like a rubber band. Its momentum is its mass density, which for simplicity we can take to be one times, just its mass, which we can take to be one, for a rubber band times its velocity. So the velocity, the momentum is proportional to the velocity, and we can think of this as the x, the tau. Tau is the time variable that describes the motion of the string in the simple quantum mechanics that we've described previously. And it's got to be an integer multiples. Now let's think about the wound strings, the strings which are wound around. What is the typical energy of a wound string? Well, strings have a certain tension. Let's just work in units in which that tension is one, but tension is energy per unit length. So for energy per unit length, if there's a given energy per unit length, then the energy of a wound string is not n divided by r, but it's some other integer, I'm going to call the other integer w. w stands for winding number times r. The energy of a collection of strings which are wound around here are the integer winding number times r, whereas the energy of a string which is not wound around the compact direction is n divided by r. So I think we talked about this last time, but let's discuss it again a little bit. Let's suppose that r is very small. Here we have the spectrum of the strings which are not wound. The separation between the levels is large, large because one over r is large. So it looks like this. With the separation being of order one over r, here's n equals zero, here's n equals one, and so forth. Now let's look at the spectrum of the wound strings. In other words, the energy levels of the wound strings, those also come in integer multiples. So r is zero. Winding number one, winding number two, winding number minus one, winding number minus two, but now it's w times r. So that means if r is large, what did I say, r is large or r is small? I think I said r was small. What did I say? I said r is large or small? r is small. So r makes a separation large here, but it makes it very small here. Now let's go to the other extreme. I know that we talked about this before, but I want to reiterate it again. Let's go to the other extreme. This is r much less than some unit of length in string theory, the natural unit of length in string theory. Let's just call it one. Here's r much bigger than one. What do the energy levels look like? Well, for r much bigger than one, one of r is small, and these excitations here are very close together. What about these? Here r is very large, so it takes a large energy to wrap around. Let me look like this. Suppose all you knew, all you could measure about these particles, namely the things associated with the compact directions. The only thing that you can measure about the compact directions, let's suppose, was the energy of the particles. 
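In formulas, with the string tension set to one as in the lecture, the two towers of energies just described are

E_{\text{momentum}} = \frac{n}{R}, \qquad E_{\text{winding}} = w\,R, \qquad n, w \in \mathbb{Z},

and swapping n with w while swapping R with one over R maps one tower onto the other, so the energies alone cannot distinguish a compactification of size R from one of size one over R.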
And you're trying to figure out what the radius of compactification is. What is the radius of compactification? Well, your problem is how do you tell the difference between particles which have energy with respect to the compact direction because they have momentum or because they have winding number? You can't, obviously. Just from the energy levels, you can't tell. So with this particular spectrum, there's an ambiguity. Are you in a theory with a small compactification, in which there are large separations between different momenta and small separations between different winding numbers, or are you in a world with a very large radius of compactification in which the role of winding number and momentum has been interchanged? You think, well, maybe you can do a lot of other kinds of experiments. Maybe you can scatter these particles together, see what comes off. One of the very, very remarkable mathematical facts about string theory is that the inability to tell which kind of world you're in is rigorously correct for everything that you can calculate about these strings, collisions between them, everything: two different theories with very different radii of compactification, if you interchange winding with, let's call it, momentum around the compact direction, behave in exactly the same way. So, conclusion. In a certain sense, it doesn't make sense to think of the compactification as smaller than a certain size. There's a certain size where they cross over and where they interchange, and if the size of the compactification is smaller than that, it's just entirely equivalent to a larger compactification with winding and momentum interchanged. Does that mean that the winding number is a mathematical convenience rather than a physical property? I don't know what to say. It says that there's a symmetry. It says there's a symmetry, and physicists use the term duality, which means an equivalence between different descriptions of the same thing. Is there an R of interest where these two things are the same? In some units, it's R equals 1, and there you have some extra special symmetry that you've got. Not only can't you tell which kind of theory you're in, but you can't tell which are which. They're the same. Yes, there is a special point. Now, I wrote an equation over here that momentum is equal to dx by d tau. Let me make that a little more precise. Every point on the string has a motion. In particular, I'm thinking about the motion around the compact direction, but different parts of the string could in principle be moving differently than other parts of the string. It could be differentially moving in different directions, different speeds and so forth. The full momentum that we're talking about when we talk about the momentum is actually the sum of the momentum of all the little bits of string. It could be the sum of it, or you could think of it as an integral over the entire string of the velocity, dx by d tau. In which x am I talking about? I'm talking about y, actually, the one that goes around the string, dy by d tau. What about the winding number? Anybody got a mathematical expression for the winding number? Imagine now a string which is wound, the simplest possibility. It's wound around here. This is the y-coordinate. The y-coordinate is periodic. It comes back to itself. But the string also has a sigma-coordinate. The string also has a sigma-coordinate, and that's embedded in the string.
If the string is wound in this very simple way, another way of saying it is that sigma is proportional to y. Actually, y divided by the total radius of compactification, I guess. If y comes back to itself after going distance r, and sigma comes back to itself after going around once, then the right relationship is that sigma is y over r. It's just another way of saying that as you go around sigma, as you go around the rubber band, the rubber band goes around the y-direction. Well, now look at this. Let's look at d sigma by dy. That's just one over r. d sigma by dy, let's just take d sigma by dy. d sigma by dy is proportional to the little spacing between here and here, and the total winding number is the sum of the little bit of d sigma dy as you go around the string. In other words, the winding number, yeah, actually this is right, r times this, and the winding number I believe is, I had an equation, where did it go? y over r is equal to sigma, so d sigma by dy is one over r is one over r. The total winding number is one over r times the integral dy by d sigma. dy by d sigma. Does this make sense that the total winding number, what's the integral of a derivative? Usually the integral of a derivative in lots of cases is just plain zero. If a thing comes back to itself when you go all the way around. But why doesn't it come back to itself when you go all the way around? Why itself changes by 2 pi? So the integral dy d sigma is just a total number of times that the string wraps around that direction. So this is kind of curious. T-duality, this funny duality between winding number and momentum, is equivalent to interchanging N and w, interchanging r in one over r, and interchanging dy d tau with dy d sigma. Funny mathematical construction. Keep that in mind that T-duality is interchanging winding and momentum, interchanging r with one over r, and interchanging the derivative of y, that's the position of the string, dy d tau with dy by d sigma, yeah, dy by d tau with dy by d sigma, derivative, this is partial derivative. It's pretty abstract, but this is an exact symmetry of string theory. If you make these interchanges, nothing changes. Say it once again, if the symmetry is dy d tau, swap dy d tau. Swap dy d tau for dy by d sigma. No, the reason I introduced this, we'll see why. I didn't have to tell you this, but I'm telling you now because we're going to use later. Wouldn't that apply a relationship between tau and sigma? No, there's no relationship, it's just you interchange them. Wherever you saw the expression dy by d tau, you replace it by the expression by dy by d sigma. It's a weird thing to do, but in all observable quantities, scattering amplitudes, energy levels, all of the properties of the string theory don't change if you make those changes. Let's talk about something else, yeah? Why would the string want to be wound around the dimension? They don't want anything. Why would it ever get in that state? If it can be in the other state while we're in it. Somebody may hit the string over here with some energy. Hitting it with some energy, among other things, may stretch it out. In fact, it may stretch out to look like this. You blast it with a lot of energy. I've drawn a very neat configuration over here, of course, but if I blast it with a lot of energy, it might be much more complicated than that, but still have this kind of configuration. 
Now, once you've done that and you take into account that the strings split and join, then this can separate itself into two strings with opposite winding number. Once you've created two strings with opposite winding number that are no longer connected, they can separate from each other. They can separate from each other. And when they separate from each other, well, then you've got an isolated string with winding number one and another one that you can just send off to Alpha Centauri and never see it again. The answer is that in general, you cannot prevent it from happening. You may have to, in particular, if the distance around here is large, then it takes a large amount of energy to create these winding strings. Here it is. Well, let's see. Yeah, the energy itself was proportional to R for winding number. So with a small amount of energy, it's not going to happen. You don't have enough energy to stretch the string that far. But if you collide two strings with a large amount of energy, they splatter all over the place, they stretch out in wild and chaotic directions, and there's some probability that they'll reconnect. It will not change the total winding number, but it will take a single string, or perhaps two strings which are not wound, which collide with each other, and convert out of them something with two strings of opposite winding number. Once that happens, they can separate and they go off. So the answer is generally that it will happen in collisions if you don't have it that way to begin with. The same question occurs for electric chargers. If you start with a neutral world, why are there electrons? Well, even if there weren't electrons, there got to be something to start with. There's nothing, there's nothing. But maybe you just started with photons. You take some photons, you collide them together, and what comes out is electrons and positrons. If you take the positrons and you throw them away, you don't throw them away, but you push them off to some distant place, and you throw them away with electrons. That's all. So you can't have, you can't forbid these things. You can't forbid them, and eventually you will create them. Now, these winding numbers and these momenta, these momentum quantum numbers, these quantized momentum, are not only in some sense similar to electric charge, but they have all of the properties of electric charge. In particular, two strings of opposite momentum, one going around one, can I do this? Let's see. Oh yeah, I can do it. Two strings going around the opposite way, with respect to the big dimensions of space. So here they are. Two strings, these have the same, no, these have the same winding number. Okay, that's easy, that's this. Two strings with the same winding number will repel. I'll tell you why in a moment. Two strings with opposite winding number will attract. What is this attraction and repulsion? This attraction and repulsion actually corresponds to the gravitational attraction and repulsion of the higher dimensional theory. If we live in a world of, let's say, three dimensions, three space dimensions, and we add in one more tiny direction, then strings which are wound oppositely, maybe in different places in the big dimensions, different places in space, we have a string wrap, this way we have a string wrap, that way they'll attract each other if they have opposite winding number. They'll attract each other in our ordinary three-dimensional sense. Likewise, if they have the same winding number, they will repel each other. 
So they behave like electrical charges. Where does electrical repulsion and attraction come from? It comes from the electromagnetic field, and of course the electromagnetic field is deeply connected with photons and so forth. Where does this attraction come from? And the answer is it comes from gravitation, but not gravitation in the four-dimensional sense, but gravitation in the five-dimensional sense, in the extra-dimensional sense. In a theory with extra dimensions, Einstein gravity in the extra dimensions manifests itself in the phenomenon of attraction between opposite momenta in the compact direction and repulsion of things with the same momenta. That's not obviously easy to see. I've told you this before, it's not something I'm telling you for the first time, and in fact, you can relate it to Einstein's field equations. I'm not going to relate it to the Einstein field equations in any depth, I'm just going to discuss the origin of the electromagnetic field, or the analog electromagnetic field, that's associated with this electrical type behavior of these particles. There must be something like an electric field, there must be something like a magnetic field if they're behaving so much like electrically charged particles. What is it? Okay, so first, just ordinary Maxwell equations. The electric and magnetic field are describable in terms of a vector potential, a four-dimensional vector potential, A mu. You build up the magnetic field as the curl of the vector potential and the electric field as related to the time derivative of the vector potential. It's a four-vector, it has four components. Let's go to gravity in five dimensions. Gravity is described by a metric tensor. The metric tensor, let's call it G M N. Why don't I use mu and nu? I'm saving mu and nu for the four dimensions of ordinary spacetime, but we're now talking about a five-dimensional world, so let's use M and N for the five-dimensional world. But what do M and N run over? They run over the ordinary dimensions of ordinary spacetime plus one more direction. So we can write G M N as having various components: G mu nu, that's the ordinary dimensions; then G mu 5, where five is the fifth, extra dimension; and then what else? G 5 5. Those are the components of the gravitational field. What about G 5 mu? It's a symmetric tensor, yeah, that's the same thing as G mu 5. These are the independent components of the gravitational field in five dimensions. Well, this one here, G mu nu, that's just the metric of four-dimensional spacetime, so it's just the usual Einstein gravitational field, nothing new there. Here we have something that has an index. It has a five index here, but the five index is this hidden direction, we can't see it, but it also has a mu index, which means that it's a four-vector. We would mistake this object for a four-vector, it has four components, it's like A mu. Let's make that identification, because that is the correct identification. And what about G 5 5? That has no components in the usual four-dimensional sense, so it must be a scalar. That scalar is usually called phi; it's called a dilaton, and it's almost always called phi. What is G 5 5? It's the component of the metric around the fifth direction. It tells you how big the fifth direction is. If G 5 5 is big, well, it's just a metric component; the square of the distance around the fifth direction is what it is. If G 5 5 is big, the fifth direction is big.
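A compact restatement of the components just listed, with mu, nu running over ordinary four-dimensional spacetime and 5 labeling the compact direction:

\[
G_{MN} \;\longrightarrow\; G_{\mu\nu}\ (\text{the 4D metric}),\qquad G_{\mu 5} \equiv A_\mu\ (\text{the analog vector potential}),\qquad G_{55} \equiv \phi\ (\text{a scalar: the size of the circle}),
\]

with \(G_{5\mu} = G_{\mu 5}\) because the metric is symmetric. One common way of packaging the same statement is the line element

\[
ds^2 = g_{\mu\nu}\,dx^\mu dx^\nu + \phi\,\bigl(dy + A_\mu\,dx^\mu\bigr)^2 ,
\]

though the way the factors of phi are distributed is a convention that varies from book to book and is not fixed by anything in the lecture.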
If G 5 5 is small, the fifth direction is small. The metric tensor in five dimensions is a field. It can vary from place to place. All of these things can vary from place to place. This is just the usual gravitational field, varying from place to place. This becomes an analog of the electromagnetic field, which can vary from place to place. And this is a scalar, which can vary from place to place. But what does that scalar mean? That scalar is the size of the fifth direction. That can vary from place to place. Somebody asked me that before, and you can imagine waves in space where the waves are not waves of electromagnetism, they're not waves of gravity, but waves of varying size of the fifth dimension. But nevertheless, the quiescent ordinary vacuum doesn't have such waves. It's fixed. But in general, the size can vary from place to place. This is an electromagnetic field. If this is an electromagnetic field, then it's also a gravitational field, an analog electromagnetic field. Thought of as a gravitational field, what are the sources of the gravitational field? They are things like energy and momentum. In particular, the sources of mixed components of the field like this are components of momentum. So the source of this component of the gravitational field is actually the momentum in the fifth direction, the flow of momentum in the fifth direction. That means that the source of this analog vector potential is going to be the component of momentum going around the fifth direction. Lesson? The momentum quantum number over here can be thought of as electric charge in an analog of electromagnetism. This is the Kaluza-Klein theory, in which the mu-5 component of the gravitational field is the electromagnetic field. Electric charge is momentum, and the electric field is a component of the metric. Well, good. That was easy. And of course, we can go a little bit further, and we can say that this electric field is associated with the graviton itself. OK, now let's come to the winding number. So, I'm sorry, is phi then proportional to n? No. No, n is an integer; phi is related to R. It's related to the size of that dimension. It's not quantized, and it can vary from place to place continuously and smoothly. It's called a dilaton, and it's a wave field that, if string theory is right, should exist in some form or other. But in any case, it's part of the mathematics of string theory. Let's come to the winding number now. I told you that the winding number also behaves with this property that opposite winding numbers attract and like winding numbers repel. They also behave like electric charges. But not the same kind of electric charge as the momentum quantum numbers. It's as if there were two kinds of electric fields, two kinds of electromagnetism, living side by side, two kinds of photons, two kinds of charges: winding number associated with something we could call winding photons, and momentum quantum number associated with momentum photons. But momentum photons are just a piece of the gravitational field. In other words, they're gravitons which are polarized along the mu-5 direction. So what is the field, the analog of photons, which is emitted and absorbed by winding number, not by momentum but by winding number? And here we know the answer. We'd have to go back several lectures, fairly early lectures. But let me remind you, this is probably something you've forgotten.
I told you, but I don't think I stressed it very hard. Let's go back to the spectrum of closed strings. What kind of states are there of closed strings? We start with the closed string being totally unexcited. And then what do we do with it? We excite oscillations on it. We excite oscillations with the creation and annihilation operators which create and excite waves on the string. They're not creation and annihilation operators for particles. They're creation and annihilation operators for waves moving along the string. These are closed strings. Let me just remind you what kind of operators there are. There are creation and annihilation operators. I'll just think about creation operators, and I won't put a little plus because we wind up with too many indices. A creates a unit of excitation. What labels it? First of all, the a's are labeled by directions of space, directions of space perpendicular to the momentum of the string. So they're labeled by directions of space. Let's call it i. If the string is going down the z-axis, i can be x1, x2, or it can be the compact direction y. So the i's vary over all the directions of space, including the ones which are wrapped up. That's one thing there. Now what's the other index it depends on? The frequency of the oscillator. And that was an integer, and it tells you how much energy you get when you create one of these oscillations. It creates an energy proportional to n. So n is related to the frequency, and i is related to direction in space. And one more thing, remember what it was? For closed strings: whether the wave is propagating one way around the string or the other way around the string, remember the string is oriented, there's little arrows. I don't remember, does anybody remember how we labeled it? Let's just call it left-moving waves and right-moving waves. So the labels are a, i, n, and left and right, and left and right here are purely symbolic. They refer to the direction on the string, not left and right in real space. Okay, now there was a rule. I told you the rule, I called it level matching. It's a rule, I'm not going to go into the rule now. I think I explained a little bit what it had to do with, but it's a rule. And the rule is the amount of energy in left-moving waves must equal the amount of energy in right-moving waves. That's a fundamental rule of string theory. The mathematics goes to hell in a handbasket if you violate it. And so let's see what kind of states we have. We can first of all have the state with no energy in it at all. That's a tachyon. It's not there in superstring theory, but it is there in the simple versions of the string theory. And it has minus two units of energy. We worked that out. Minus two units of energy. And it's a tachyon, a bad thing, we don't want it. What's the first excitation? What's the next energy up? Well, the next energy up is to hit it with the lowest frequency oscillator and excite one unit of energy with n equals one. So for example, we could hit it with a i 1, left, or with a i 1, right, acting on the vacuum, where i can be any one of the directions of space and one labels the lowest oscillator. Neither of these states is a legitimate option. And the reason is because this one has one unit of left-moving energy and no units of right-moving energy, and this one has one unit of right-moving energy and no units of left-moving energy. And so they don't satisfy the principle that the amount of energy in left and right have to balance. OK, so these are gone. They're not there.
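Stated as formulas, the rule and its consequence for the lowest states are, in the lecture's units:

\[
N_L = N_R, \qquad M^2 \;\propto\; N_L + N_R - 2 ,
\]

so the completely unexcited state has M squared proportional to minus two (the tachyon's "minus two units of energy"), while the single-oscillator states with one left-moving or one right-moving unit are thrown out because they have N_L not equal to N_R. The overall normalization of M squared is a convention not specified here.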
What's the next thing up? The next thing up is to apply two units of the lowest oscillator, one left and one right, to make sure that you have balanced energy going left and right. And that means a i 1, left, times a j 1, right, where i and j are two directions of space. For simplicity, let's imagine the directions of space are the usual three directions of space plus one more, namely the fifth direction, the thing that I called y here. One more direction of space. And we apply this, of course, to the vacuum. These are the things we identified with the graviton and other massless particles, with polarizations that have to do with i and j. Well, there is something new when we add this fifth direction. One or both of these indices can now be in the fifth direction. Earlier, before we discussed the fifth direction, i and j could only be ordinary directions of space. And these corresponded to gravitons, which have polarizations, they're like photons, they have polarizations, and the polarization is characterized by two directions of space. But now we have something new. We can have i be an ordinary direction of space, and the other index be the fifth direction of space, the internal direction, the small direction. Now we have something which has a single vector index. The fifth direction here we don't even count as a direction of space normally, and so this object has the same kind of index structure that a photon would have. This is like a photon. It has a polarization; for a string moving down the z-axis, it could be polarized either x or y, or however many dimensions we have to worry about. And this is very similar to a photon now. But there are two ways to do it. You can have this one being left and this one being right, or you can have a i 1, right, times a 5 1, left, acting on the vacuum. There are two possibilities now, and it seems like that means that there are two kinds of particles that behave like photons, that are similar to photons. The correct thing to do is to add them and subtract them, in the sense of quantum mechanics, to form linear superpositions, but whatever there is, there are going to be two types of objects which have the index structure of a photon. There are going to be two kinds of photon fields. The one with the sum, that's identified with the ordinary graviton moving down the z-axis. The other one is identified with another field, a separate field, that also has a structure similar to a photon, and guess what it is connected with? It is the field, the electromagnetic-like field, whose sources are what? Winding number. So one linear combination is associated with momentum, and one linear combination is associated with winding number. One of them is associated with gravitons, which are the field quanta of the gravitational field, and the other one is associated with something called B mu nu, which is different from the graviton, has different properties, but whose mixed component is also like a photon. So here's what we can say now about what T-duality is. But first, this B mu nu is a field which exists in string theory. It's called the Kalb-Ramond field. Not important what it's called. It is another field which appears in string theory. Which one is the singlet and which one is the triplet? Oh, no, no, wait a minute. Both of these are polarized like photons. There's one more nobody asked me about. What about a 5 1, left, times a 5 1, right? That one's the one that behaves like the scalar.
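To organize the level-one states being described (a sketch, with i, j running over the ordinary directions and y the compact one):

\[
a^{i\dagger}_{1,L}\,a^{j\dagger}_{1,R}|0\rangle:\quad \text{symmetric combinations} \to \text{the graviton } G_{ij},\qquad \text{antisymmetric combinations} \to B_{ij}\ (\text{the Kalb-Ramond field}),
\]
\[
a^{i\dagger}_{1,L}\,a^{y\dagger}_{1,R}|0\rangle \;\pm\; a^{y\dagger}_{1,L}\,a^{i\dagger}_{1,R}|0\rangle \;\to\; \text{the two photon-like fields } G_{\mu 5}\ (\text{sourced by momentum})\ \text{and}\ B_{\mu 5}\ (\text{sourced by winding}),
\]
\[
a^{y\dagger}_{1,L}\,a^{y\dagger}_{1,R}|0\rangle \;\to\; \text{the scalar that measures the size of the circle}.
\]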
That's the one whose field quanta are associated with this one over here, namely the radius of the compact direction. They're all here. Which one corresponds to the winding number, and which one to the momentum? The one with a plus sign corresponds to momentum, and that's the one that's associated with gravitons. The one with a minus sign is associated with the winding number. Yeah. Okay, so now we have a more complete idea about T-duality. It involves momentum being interchanged with winding number. It involves R being interchanged with one over R. It involves replacing dy by d sigma with dy by d tau; the integral of dy by d sigma is the winding number, and the integral of dy by d tau is the momentum. And finally, it involves interchanging G mu 5 with B mu 5. All right, so these theories have more fields. Now of course, it's not clear that the ordinary world does have such fields. We don't know of real particle candidates for all these fields, we don't. So this is at the moment a mathematical construction, and we're exploring a mathematical construction. There's lots of ideas about how to use these constructions. But at the moment, I think we should regard this as an exploration of the mathematics of the theory. There's plenty of room in physics for these objects, incidentally. But I don't want to get into the phenomenology of them. Good, so there we are, that's what T-duality is. In about five minutes, we're going to come back and talk about T-duality for open strings, and how it leads to a new concept, a completely new concept called D-branes. So let's take a break. Yeah, okay, so we've figured out what T-duality is, this very strange interchange of big compact geometries with small ones, with various other things going on. I want to concentrate now on the aspect of T-duality that has to do with, well, let's erase some blackboard here. Now you're going to see the kind of gymnastics, the kind of deductions that people have made over the years; T-duality is one of them. They're implicit in the mathematics, and many of them are very surprising. We're going to talk about D-branes, and how one was forced to have these new objects in the theory, which are called D-branes. D stands for Dirichlet. Dirichlet had nothing to do with them; he had been dead for well over a century. They should be called P-branes, for Polchinski, but pea-brain was already a term in use. Brane is spelled b-r-a-n-e, like membrane. And one speaks of Dn-branes. D stands for Dirichlet, not for the dimension of space; n stands for the dimensionality of the brane. A string is a one-brane, a membrane is a two-brane, a solid three-dimensional chunk, which may be embedded in higher dimensions, is a three-brane, and so forth. So these Dn-branes: where do they come from, what do they do? They're not just made-up things; they were essential to the consistency of the theory. The mathematics absolutely demanded them, and the result of knowing about them has been to derive an enormous number of equivalences between different theories. And in fact, it turned out that those enormous numbers of equivalences turned out to be equivalences between different kinds of geometric structures, equivalences which the mathematicians had no idea existed. Equivalences between different Calabi-Yau manifolds, which the mathematicians were entirely surprised by, and which they turned out to be able to confirm. But this is now part of the gymnastics of string theory.
And D-branes have played an enormously powerful role, also in the applications of it. So let's talk about open strings now. D-branes have to do with open strings. Open strings, what's the winding number of an open string? Open strings don't have stable winding numbers. If you have an open string on a compact space, now this picture is sort of shorthand for all of the compact directions, of which there are presumably six in superstring theory, and all of the open, uncompact directions. All right, but let's imagine now that in addition to closed strings, there are open strings. Open strings now have endpoints. We can't classify them with winding number. It doesn't make sense to classify them with winding number. They're not wound. So we have to think about what happens to them when we do this process of T-duality. T-duality was a phenomenon discovered in closed string theory, but surprisingly it also makes sense for open string theory, and I'm going to show you what it says. This is something rather remarkable, something which is confirmed in other ways, but this is the simple way to think about it. There's a string, an open string, and what are the rules for open strings? Do you remember what the rules are for what goes on at the end of an open string? The boundary conditions on the end of an open string? Neumann. The end of an open string should satisfy, for x or y, this is y, this is x, any of the coordinates, but let's write them both: dx by d sigma equals zero and dy by d sigma equals zero. And what does that correspond to? That corresponds to the idea that the end of the string, here's a string, it's made up of a lot of tiny infinitesimal mass points, and supposing the end of the string had some net stretching, dx by d sigma not equal to zero, that would exert a force on the very last molecule. Well, as you subdivide the molecules into smaller and smaller structures to go to the continuous limit of the string, the molecules get lighter and lighter and lighter. How much force can you put on a mass, on a thing of arbitrarily small mass, without having it accelerate with an infinite acceleration? The answer is zero in the limit. That means that the string cannot afford to have any net stretching at the end of the string. So that's where these conditions come from. dx by d sigma and dy by d sigma should be equal to zero, and that's to forbid infinite accelerations either in the x direction or the y direction. Of course, there is another possibility, and that's that somebody's holding the end of the string and applying an enormous force to keep it from moving, but if nobody is standing there holding the end of the string, then if there's any net tension at the end of the string, it will accelerate infinitely, basically to remove that stretching. Okay, now, what about T-duality? Let's suppose now that this was a small compact direction, very, very much smaller, extremely small, and we want to say, wait a minute, this is supposed to be equivalent to a theory with a large compact direction. Here's the large compact direction, enormously large, could be cosmologically large. What do I do with these open strings? Well, we're supposed to replace winding number by momentum, but remember, what was momentum related to? It was related to dx by d tau, right? Velocity. What was winding number related to? dx by d sigma, namely exactly what goes on here.
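Written out, the free-end (Neumann) condition being described, with the endpoints conventionally placed at sigma = 0 and sigma = pi, is

\[
\left.\frac{\partial X}{\partial \sigma}\right|_{\sigma = 0,\ \pi} = 0
\]

for each coordinate X: no net stretching is allowed at the massless endpoint. (The choice of 0 and pi for the endpoint values of sigma is just a common parametrization, not something fixed by the lecture.)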
Okay, I should actually write, since I'm mostly interested in the compact directions for the moment, let's concentrate on the compact direction. T-duality involves, among other things, replacing, every place you see it, dy by d sigma by dy by d tau. So, what does it say then about open strings? Open strings, when they undergo this process of T-duality, what you have to do is change the boundary conditions, and the boundary conditions at the end of the string become dx by d tau equals zero, or, sorry, no, the T-duality is associated with the compact directions; we don't want to do that to x. Only the compact directions are being interchanged, small with large. We're not diddling at all with the uncompact directions. One compact direction, y: the radius in the y direction becomes big. All right? The rule was we were supposed to replace dy by d sigma by dy by d tau. If strings originally moving on this space had Neumann boundary conditions, that's dy by d sigma equals zero, then after T-duality, they have Dirichlet boundary conditions. What does it mean to say that dy by d tau is equal to zero? Oh, incidentally, this is only at the endpoints. This is at the ends of the string. It means the endpoints are forbidden from moving. So wherever they started, they are stuck. Where are they stuck? Well, wherever they're stuck, there's something holding them there. It means that this theory has objects in it which can nail down the ends of strings. If T-duality is to make sense, there must be objects in the theory which can nail down the ends of strings. You start with a theory with no such object, and the strings just move freely, and after T-duality, you discover there's some kind of object in the theory which has nailed down the location of the string. It's nailed down only the y component. We have not played around with the x's. The x's are still free to move. The compact direction here is now nailed down. So where is it nailed down? Here's the space. It's nailed down some place, some place in y. Y goes around this way. Let's put that place right over here. There's something here which is capable of holding the ends of strings, nailing them down. That object is called a D-brane, D for Dirichlet, and it wasn't put in by hand. Instead, one asked: what if there are open strings in a theory and the theory has T-duality, which it must have for reasons that are by now well understood? Then what happens to the open strings when you interchange the small compact direction for the large compact direction? And the answer is that you discover something new. You discover some new kind of object which is holding the ends of the strings back. Excuse me. Yeah. So is it necessary that the y value be the same for the two ends? No. I think you're asking whether this can bend. I drew it going straight up here, but yeah, yeah, good. That's right. But the simplest case is where it's just fixed, fixed in some position. Now, what position can you put it at? Here, here, here. So the answer is that you better be able to put it anywhere. So therefore, this has to be a movable object, because there was nothing special about one point in space and another point in space along these axes. So it's got to be a movable object which functions as an anchor point for the ends of strings. If it's movable, then you can be quite sure that it's also bendable. Relativity can't make sense with absolutely rigid objects.
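To summarize the boundary-condition swap that produced this object, for the compact coordinate y:

\[
\left.\frac{\partial y}{\partial \sigma}\right|_{\text{ends}} = 0 \ \ (\text{Neumann}) \quad\longrightarrow\quad \left.\frac{\partial y}{\partial \tau}\right|_{\text{ends}} = 0 \ \ (\text{Dirichlet}),
\]

and the Dirichlet condition just says that y at the endpoint is a constant: the endpoint is nailed to a fixed position on the dual circle, which is the defining property of the D-brane.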
And so it will also be bendable. But let's just not bend it for the moment. And strings can attach themselves to it. Or strings attached to it can exist. A string can come... oh, good. These are called D-branes, right? First of all, let's take an example. Supposing we took ten-dimensional string theory, which has nine dimensions of space. Nine dimensions of space. And we made one of them compact and then introduced a D-brane in this way. What would be the dimensionality of the D-brane? Would it be one? Nine? No. Eight. Eight. We've pinned down one direction and left the other eight directions of space free to move around. All right. Here's a tabletop. That tabletop is two-dimensional. If there's an object attached to the tabletop, how many directions is it free to move in and how many is it constrained in? It's free to move in two directions. Here's a string, a string with two ends. Or my finger is the end of a string. It's free to move in two directions and constrained in its third direction. All right. So if I constrain one dimension, it creates for me an entire surface: in three-dimensional space, a two-dimensional surface. Supposing I'm now in nine dimensions of space and I constrain the end of a string, I constrain one of the dimensions. It becomes an eight-dimensional surface that the string can move around on. Now, there was no reason to constrain only one direction. You can do the same thing with any number of compact directions. You can play this T-duality game with any subset of the compact directions; it's the same game. And you can, in the same way, construct branes which pin down any number of the coordinates that the end of a string can move in. All right, so let's take three-dimensional space again. Let's suppose now we've played this game and we've constrained the vertical position by doing this T-duality trick with respect to the vertical direction. Supposing I also do it now with respect to the front-to-back direction: I constrain the position of the end of a string so it's constrained vertically and this way. And that means that the end of the string has to move along the intersection of this plane with that plane. So if I constrain two dimensions, what kind of brane am I talking about? In this case, a D1. But if I started with nine dimensions of space, a seven. So each constraint removes a direction that it's free to move in. All right. For that reason, in nine dimensions, by playing this game over and over again with the different directions associated with the compact space, you can have the eight-branes, the seven-branes, the six-branes, all the way down to the maximum, where you just get rid of the freedom to move in any of the directions of space. Then what would you call the brane? D0. A D0 is a point. It's a point in space that the string is connected to, and it's not free to move along any of the directions. That's a D0-brane. Zero dimensions means a point. You can have everything from D0-branes to D8-branes. They all make sense. And their existence, in some sense, is all necessary to the consistency of the theory. These zero-branes are a new kind of particle, an unexpected new kind of particle. I'm not coming back to that. So you really start to track? Yes. No, no, not really. No, no.
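(A small bookkeeping rule that recaps the counting above: in nine spatial dimensions,

\[
\text{Dirichlet conditions in } k \text{ directions} \;\Longrightarrow\; \text{a D}p\text{-brane with } p = 9 - k ,
\]

so k = 1 gives a D8-brane, k = 2 a D7-brane, and k = 9 a D0-brane, which is a point.)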
You can, no, I'll show you why in a moment. I'll show you why in a moment. Okay. We've played this game with a compact directions. But now let's imagine, having done this operation, let's imagine shrinking and shrinking and shrinking this until it becomes arbitrarily small. What happens on this side? It gets bigger and bigger and bigger. Okay. So eventually on this side, it can be so big that it's completely mistaken for a non-compact, for a non-compact dimension. In other words, if a dimension of space is big enough, for all practical purposes, it becomes non-compact. What we've demonstrated by this series of arguments is that even in the non-compact directions, there must be objects which are the anchors of strings. They can be oriented along compact directions. They can be oriented along non-compact directions, any number of them. And of any dimensionality, starting with zero and going up to eight dimensions. Actually, you can go up to nine dimensions, but that means they completely fill space. And that simply means that we're talking about open string theory in which the open strings are just free to move, but we wouldn't call them brains. There are things that people call D minus one brains, but you don't want to know. They're not real concepts. Okay, so you start here with the zero brain. It's a point in space and a string in can end on it. Whereas the other end of the string, it could be on another D zero brain. Or it could be stretched out to infinity. That's not a good thing because it becomes infinitely massive. Or it could even come back to the same point. The same D brain. That's a D zero brain. What's the next one up? A D one brain. Something very interesting about a D one brain. It's a line. Now, as you already pointed out, it does not have to be a straight line. We deduce their existence in this way. But once we know that they exist and once we know we're talking about a relativistic theory, they have to be bendable. If they're bendable, they can even bend around on themselves. They're structures which are physically physical objects. And they have strings connected to them. But now let's forget the strings that are connected to them. The strings connected to them could be like this. Let's forget the strings that are connected to them and just look at them. This is a one-dimensional object. It's called a D one brain. It's a one-dimensional object. It can bend. What's another name? In fact, it can not only bend, but it can even come back on itself. What's another name for one-dimensional objects like this? Strings. But are they the original strings? No, they are not the original strings. They're called D one brains. And they're distinct from the original strings. How would you expect they might be the same from the original strings? They're not the same all the way around. You've got points on them that have particles and points that don't. I mean, you have strings that are attached someplace and are not attached. Well, I mean, they may not have strings attached to them, but let's go through a moment. I suppose they don't have strings attached to them. They're much heavier. In strings, that makes a lot of sense because how did they start it out? How did they start it out as anchors, like infinitely heavy things that could hold the strings down? But they're not really infinitely heavy. Their mass depends on some coupling constants and so forth. It depends on various things. They're heavy. They're much, much heavier than the ordinary strings. 
And so in that sense, they're anchors, but they are string-like. The T-dually holds for the brains. Yeah, it also holds for the brains, too. It does hold for the brains. What's the next one up? A D2 brain. A D2 brain is called a membrane. Membranes mean D2 brains, and those are simply sheets, like the table top here, that string ends can move around on, and the hairs or strings can move around on, like that. These are D2 brains. Next one up, D3 brains. In a three-dimensional world, a D3 brain fills all of this filled space. So it fills space. It's a space-filling brain, people. In other words, it's just space. And if we had a D3 brain, then that would simply mean that we can have open strings that can just move around. They have to be attached to the brain, but the brain is everywhere. And so it's just open strings that are free to move around everywhere. So it's often said that ordinary open string theory is string theory on a space-filling brain. But if we're living in a world with higher dimensions, then a three brain doesn't fill all of space. It's like a two brain in a three-dimensional space. It's a surface, and strings can move on it. OK, that's the idea of D brains, and as I said and as I emphasized, they are mathematically important to string theory, but they also are the origin of a lot of applications of string theory. So let me very quickly tell you what's known. Let's start with one of these D brains. I assume that the unconstrained directions that are left over can be compact directions as well. Yes, yes, yes. That's correct. That's certainly correct. OK, let's imagine a D brain. And I'm going to draw it as a D2 brain. There's the D2 brain. Now, if you're saying to yourself, this is awfully slick, though I really believe that this has to be, not from the arguments that I've made, obviously. I'm giving you some sketches of arguments. Where the string theory is the right theory of nature is not the issue here. The mathematics of this is very, very tight by now. I mean, many, many cross checks, many, many different things point in the same direction. Mathematical theorems that have come out of it, though the mathematical speculations have been confirmed in mathematics, so it's remarkable. And presumably correct. All right, here's an empty D brain. Think of it as empty. Empty means it has no strings ending on it. Think of it as empty. And think of it as an empty space. The space now is not the full space, but it's just a surface in space. So think of it as a space. It's empty. We can put things into it. We can put strings into it like this. Little open strings. And these open strings are free to move around. Now, I'll tell you what the basic process of them is. The basic thing they can do is their endpoints. They have two oriented strings, so they have arrows associated with them. When the endpoints come together in string theory, let's say the endpoints come together, if we have two endpoints, one with an arrow coming into it and the other with an arrow coming out of it, they can do the obvious thing, namely the endpoints come together and lift off the surface and form a single string with two endpoints like that. How do we think about that? If these strings like this are thought of as particles which are free to move around, and in string theory, particles are strings. These are very much like the original open strings that we started with. 
If we think of them as particles which are constrained to move on the surface, or very close to the surface, then what we discover is that these particles can split and join. Two of them can come in, join into one. One of them can go out, separate into two. We're starting to build up something that looks like particle processes. We're starting to build up something that looks very much like Feynman diagrams. Two of them come in, join to form one, maybe it hangs around there for a while, and then the reverse process happens and it becomes two again. We're building up the kind of processes that quantum field theory describes, like creation and annihilation of particles, transmutation of a number of particles. It's not completely surprising that the mathematics of the interactions between these particles looks very much like quantum field theory. In fact, at low energies where you don't have enough energy to really excite the vibrations of the strings, there's simply quantum field theory where the particles of the quantum field theory are these open strings. In fact, the open strings behave like photons moving around on the surface. They're not photons in the sense of the ten-dimensional theory, they're photons in the sense of a theory which lives and whose particles live only on the surface. That's the basic connection between brains and their application in studying quantum field theory. Brains, like this, define quantum field theories, and they define quantum field theories in exactly the way that I've just described. But, yeah, let's, incidentally, these open strings connected to the brain here do behave like photons moving around in the brain. They behave like photons in a lower-dimensional theory. But let's imagine now that we had more than one brain. There's no reason why you can't have several brains, for example, parallel to each other. Here's two brains parallel to each other. You can move them closer and closer until they touch, in which case they just form a compound brain, or you can leave them separated. Let's leave them separated for a moment. Let's put three of them in just for fun. Three parallel brains like that. And let's think of the kind of excitations that can move around on them. Well, you can have, and the excitations means ordinary strings. An ordinary string, let's give them names. Let's give the three of these names. And the names I'm going to give them are red, green, and what, yellow? Blue. Blue, sorry, blue. Red, green, and blue. These are just the names of the three brains. Of course we could have four, or we could have seven, or we could have fifteen, but I'm particularly interested in having three of them for the moment. What kind of strings do we have? We can have strings which begin on red and end on red. Oh, remember, these strings are also oriented. Keep in mind that they're oriented. And so what would you call this string? I would call it a red anti-red, or a red-red string. This is a red-red string, and therefore some kind of red-red particle. We can also have red-green particles. Red-green particles are ones which look like this. Where one end begins on red, the other end on green. So what do we have? What are the class of particles we have? We have particles that are labeled by two indices, two colors. For the case of three brains, we have three distinct colors, and we have particles which are labeled by pairs of colors. One associated with the outgoing, and the other associated with the incoming. Does this sound like anything you've seen before? 
Gluons. Gluons and quantum chromodynamics. I go from red to green. What's that? Can you go from red to green? Of course. Sure. They should. Maybe you should keep in mind that this is embedded in a higher dimensional space. So to mimic that, let's think of lines in higher dimensional space. Now there's no problem in going from here to here or from here around to here. You're worried about passing through here. Well, I'm worried if you have to merge a red-green with a green-blue to get a red-blue. Well, there will be some rules. There will be some rules. And the basic rule, the basic rule is really only one rule, but we can use it over and over again. A red-green string, what can it do if it hits a red-red string? Let's suppose this is going out here and coming in here. Let's suppose this is going, let's see what I want to have. I want to have in here and out here. Okay, what can happen? This end can join with this end. They can come together and join and form a single string which goes all the ways from here to here. In other words, the red, anti-red, the thing coming into red and the thing going out of red, one in, one out, can join and simply make a single string. This is very much like a green-red gluon coming together with a red-red gluon and forming another red-green gluon. Same rules, exactly the same rules for gluons. But this string cannot annihilate or lift itself off the surface. On the other hand, if we had this situation here, then these can come together and they can lift themselves off the surface. That's incidentally why there are eight gluons instead of nine, because one linear combination can disappear and is not stable. So the mathematical rules for splitting and joining are exactly the Yang-Mills quantum chromodynamics rules for gluons. Okay, what about quarks? Okay, so now what about quarks? What is a quark in this language? A quark in this language only has one color. It's either a quark or an anti-quark. It only has one color. It doesn't have three colors. Sorry, it doesn't have a color and an anti-color. A quark must be a thing which only has one end. A quark in this language is a string which ends on one of these brains and goes off to infinity. Now, it doesn't really have to go off to infinity. It could go off to some distant brain of a different kind, but it doesn't have another end which ends on one of these three. That's a quark and it's either a string coming in or a string going out. When a string coming in and a string going out meet each other, they can join and just disappear out into off the brain. They can join and disappear from the brain. That's an annihilation of a quark and an anti-quark. If there's two quarks, they can't annihilate. They're stuck there. And that's exactly the same rule as quarks. The mathematical structure of the field theory that describe these strings moving around is essentially with some little extra added ingredients because of supersymmetry. It's a supersymmetric version of quantum chromodynamics. And it is part of the reason that brains are interesting for exploring quantum chromodynamics. I'm not going to show you how they're used in detail. I'm just showing you what the connection between things is. Let's suppose there's only one brain. Then it's not like quantum chromodynamics. What do you think it might be like? Now we have objects. We don't have to name them red, red, green, blue. It's just a string. It's like quantum electrodynamics. It has only photons. It doesn't have this complicated gluon structure. 
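In slightly more modern language than the lecture is using, and hedging that this is the standard low-energy identification rather than something derived here: a single D-brane carries a U(1) gauge field (the photon-like open-string state whose polarization index lies along the brane), plus scalars for the transverse polarizations, which describe the brane's position. With several branes, the color bookkeeping is

\[
N \text{ parallel branes} \;\Longrightarrow\; \text{open-string states labeled by pairs } (i,\bar{j}),\ \ i, j = 1,\dots,N,\ \ \text{i.e. } N^2 \text{ gauge-boson-like states},
\]

and for N = 3 that is 3 x 3 = 9 states, one combination of which decouples, leaving the 8 gluon-like states mentioned above.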
These would be photons. And what would these things be? Charge. Electrons. They could be coming in. Or they could be going out, in which case there would be electrons and positrons. When two of them come together, they can annihilate. One last point, which is really quite fascinating. Remember I told you that there are ordinary strings and D strings. D strings were these new objects that we discovered. These new objects which are a good deal heavier, but they're also strings. An interesting question is, can a D string also end on a brain like this? Incidentally, I'm thinking about three brains, because I'm thinking about mimicking three-dimensional space, D3 brains. So these are really D3 brains. Interesting question. Can a D string, a D string is itself something that ordinary strings can end on. But let's forget that. Can a D string end on a D3 brain? The answer is yes. I'm not going to try to prove that for you. That's a much more elaborate question. But then if ordinary strings, the ones we started with, make electrically charged particles, what do D strings make at their ends when they intersect the three brain here? Can you guess? No? Well, they're in some sense neutral, but they are. They've got to be something which is similar to what happens with ordinary strings and magnetic monopoles. It's remarkable. The magnetic monopoles are much heavier than the electrons, because these strings are much heavier. Magnetic monopoles are expected to be much heavier than electrons. But they're not magnetic monopoles. The relation between magnetic monopoles and electrical monopoles in quantum electrodynamics is mimicked by the relation between D strings ending on these things and ordinary strings on them. So I've told you a lot of stuff. I don't expect that you're going to follow every detail, but I'm trying to show you how string theorists discovered, and it's a long time to make. I mean, this didn't happen all in one day. This happened over a period of 20 years, basically, or more. No more than 20 years. 25, almost 30 years, that all these pieces were put together by various people, a very wide variety of people, who saw these connections. The string theory and its description in terms of brains is the primary tool for studying quantum chromodynamics. It's very bizarre. The whole thing made a full circle. It made a full circle from a theory of Hadrons to a theory of quantum gravity to the presence of D brains, who's necessarily there, which when you put them together in the ten-dimensional space, put together three-dimensional D brains, all of a sudden becomes the theory of quantum chromodynamics that it started out as. So that's, I don't know what to describe it. I describe it as sort of mental gymnastics, but I think it's more than that. I think it's a process of discovery and unraveling of the mathematical structure of this thing. It has wide application to quantum chromodynamics, other quantum field theories, fluid dynamics, all kinds of things. So far, it has not had application to, direct application to understanding the particle spectrum, and that's probably because it's just too complicated. What determines which of the dimensions the T-duality is applied to? Oh, you can do it any dimension you please. To any compact dimension you please. You choose one that's convenient. But that eventually just winds up saying that these D brains can lie along any axis. And what chooses it? The history of the universe. Is the membrane for magnetic monopoles, I mean magnetic monopoles 3, D3? 
The universe here, from our point of view, is a three-brane here, a D3-brane. But if, from some of the other tiny dimensions, which the people who live on this world don't even know about, a string ends over here and happens to be a D-string, they'll experience that as a magnetic monopole. So, that means these D-strings are oriented as well? What's that? They must be oriented as well then? The D-strings are also oriented, absolutely. Yeah, they're also oriented. You'd use two branes for SU(2)? Yeah. How do you get the feature where it's not reflective? In other words, SU(2), if you have a reflective universe, the physics doesn't work the same way. Parity. Parity. Oh, you're talking about charge conjugation and parity violation. I don't know the term, but I'm talking about how we had left-handed and right-handed. Or particle and antiparticle. More interesting, yeah. There are answers. String theory certainly has the potential to describe that, but not at this level. At this level, you need more; when you have Calabi-Yau manifolds, they can have more complicated asymmetries that allow that. This toroidal compactification doesn't allow it. There are many, many other objects in the theory which I haven't gotten into. These are the simplest to describe. There are lots of other constructions and other kinds of objects that the theory has to have. And some of them, when they intersect other ones, break various symmetries, and we can discuss them some other time. I'm out of time, I'm out of energy, and I'm out of momentum. And there are...
(November 30, 2010) Professor Leonard Susskind continues his discussion on T-Duality; explains the theory of D-Branes; models QFT and QCD; and introduces the application of electromagnetism. String theory (with its close relative, M-theory) is the basis for the most ambitious theories of the physical world. It has profoundly influenced our understanding of gravity, cosmology, and particle physics. In this course we will develop the basic theoretical and mathematical ideas, including the string-theoretic origin of gravity, the theory of extra dimensions of space, the connection between strings and black holes, the "landscape" of string theory, and the holographic principle.
10.5446/15100 (DOI)
Okay, let's, we're going, we're entering today into the world of quantum field theory. Quantum field theory is of course the description of nature as we know it, with the exception of gravity. And so practically everything that we know, with the exception of gravity, in principle we think is explainable if only we had the computational power to be able to solve quantum field theories with the accuracy that would be appropriate. Now the problem is of course that problems tend to be much too complicated to solve, except for some very special problems. You can understand the hydrogen atom, but by the time you get to the boron atom they're too complicated and so forth, and certainly studying human beings as well beyond the capacity of any quantum field theorist that I know. But nevertheless, apart from gravity, quantum field theory seems to be all there is, and we're going to enter into the study of it today in a very, very, very elementary way. We're going to study today the idea of what is called second quantization. But before we do, and second quantization is about quantum fields, but we'll come to it, I want to review something first and maybe generalize that something a little bit. The thing that I want to study or to remind you of or go through with you is again the harmonic oscillator, but a new aspect of the harmonic oscillator, a couple of new aspects of the harmonic oscillator, just so that we have it in front of us, because the harmonic oscillator is the central ingredient, the central mathematical structure that goes into quantum field theory. So we need to understand it, make sure we understand it well. But we're not going to worry about springs and, you know, systems oscillating. When I speak about the harmonic oscillator, I'm really speaking about the algebra of those little operators A plus and A minus, and the product of A plus times A minus, which is what, do you remember? What is it? It's the number operator. It's the number operator. It's the number of the excitation. So I'll remind you now. We'll go through it a little bit and I'll tell you one or two facts that I may not have told you about. What I'm particularly, all right, so let's just write down the rules for a single harmonic oscillator for a harmonic oscillator, A pluses, they act on states to elevate them up to the next level. There are A minuses, which take you down. At the bottom level, the ground state, A minus gives you nothing. It just annihilates the state. All right? So A minus is the lowering operator and A plus is the raising operator. We're going to learn to call them creation operators and annihilation operators. What is it that they create and annihilate? As we'll see particles. But let's just use the same conventions that we've used up till now. There is also A plus times A minus. That was the thing, if you remember, that went into the Hamiltonian. If you remember, the Hamiltonian was just basically A plus A minus. What was it? A plus is x plus i omega p divided by some square root of 2 omega. A minus is the same sort of thing with a minus sign in there. And if you multiply them together, you get things with x squared plus, do I have that right? No, no, I have it wrong. That's p plus omega x. And if you multiply the two of them together, you get p squared plus omega squared x squared plus this extra little term that was the ground state energy. And so that went into the Hamiltonian. A plus times A minus. And we called it the number operator. 
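For reference, the formulas being recalled here, in one common convention (setting ħ = m = 1, with [x, p] = i; signs and normalizations differ from book to book and are not fixed by the lecture):

\[
a^{-} = \frac{\omega x + i p}{\sqrt{2\omega}},\qquad a^{+} = \frac{\omega x - i p}{\sqrt{2\omega}},\qquad [a^{-}, a^{+}] = 1,
\]
\[
a^{+}a^{-} = \frac{p^{2} + \omega^{2}x^{2}}{2\omega} - \frac{1}{2},\qquad
H = \frac{p^{2} + \omega^{2}x^{2}}{2} = \omega\Bigl(a^{+}a^{-} + \frac{1}{2}\Bigr) = \omega\Bigl(N + \frac{1}{2}\Bigr).
\]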
It was the thing whose eigenvalues, whose observable values, were the integer level spacing or the integer levels of the harmonic oscillator. So I believe we call this capital N, if I remember. That's eigenvalues we called little n, the observable values of it. One more thing, the commutation relations of A plus and A minus. That's what made, oh, A plus and A minus are her mission conjugates of each other. That's an important mathematical fact. The other mathematical fact is that the commutator of A minus with A plus is equal to one. That's what made everything work. That's what we used to find the properties of what happens when we act with A plus and A minus. We used those commutation relations. That was basically it. That's all that went into the harmonic oscillator. Now, I'm going to suppose I have many harmonic oscillators. You could just think of them as a collection of springs, collection of springs. Or you could think of them as the various oscillations of a violin string. Or whatever, for whatever reason, there happen to be many independent systems that are undergoing oscillation or that can undergo oscillations, possibly with different frequencies. Possibly with different frequencies. Where does the frequency come into all of this? The frequency, well, first of all, it comes in here. But by the time we get to that point over there, the place where the frequency came in was just the fact that the energy is equal h bar times the frequency times this n here. There's also plus a half, or I'm going to leave out the half, the ground state. It's not important for us. So that's where the frequency came in. It came in just as telling us what the energy for each increase of the number operator was. The frequency, together with h bar, determine each time we go up a level how much energy it cost to jump the oscillator up one level. Or how much energy we might get back in some other form if the oscillator jumps down. If it jumps up for some reason, that's how much energy we have to pay, h bar omega. If it jumps down, that's how much energy we get. As I said, that's the single harmonic oscillator. But now if we have many harmonic oscillators, we can label them. We could label them with an index. I'll use the index i, but i does not stand for the square root of minus one. It's just a label which goes over all the possible oscillators that I'm interested in. There may be a finite number, there may be an infinite number. In the examples that we'll be interested in, there are an infinite number. So now we get to label the oscillators just by putting in, well, okay, we have to be a little careful over here. Each oscillator, think of it as an independent system, as a system which is just an independent degree of freedom. When you have independent degrees of freedom, the operators for one subsystem commute with the operators for any other subsystem. That's a rule. We went through it two years ago or a year ago, whenever it was. The operators describing system A, and the operators for describing system B, if A and B are truly independent systems, they commute with each other. Why? Because to not commute would tell you that you cannot measure them independently. If you can't measure them independently, they're not independent systems. So presumably if you had two springs, you could take one off way over there and the other one way off over there, and there's no reason why they should influence each other. Their operators, everything about them commutes. 
And so we can write the following. Let's write out all the commutation relations, because they're important. Let's forget this for the moment. Let's first take the commutation relations of the lowering operators, i with j, the first oscillator with the 14th oscillator. Well, first of all, if i is not equal to j, they're talking about different oscillators. If we're talking about different oscillators, they must commute. If we're talking about the same oscillator, then, oh, sorry, it's zero there too. Annihilation operators all commute with each other, and for a single oscillator, the annihilation operator commutes with itself because everything commutes with itself. Same thing for the creation operators, or the raising operators. But now here's what I wanted to say a moment ago. If I take an annihilation operator and take its commutator with a creation operator, then if i is not j, we're talking about two different oscillators. They don't know about each other, they don't talk to each other, or at least they may talk to each other, but they are independent of each other, and the commutator must be zero. On the other hand, if it's the same oscillator that we're talking about, then the commutator must be one. So this is equal to delta i j; we write that by putting the Kronecker delta over here. This is the algebra of the operators of many harmonic oscillators. You need to know that. And one more thing, what's the total energy? The total energy is just the sum of the energies of all the oscillators. So let's write that. The total energy, which we can take to be the Hamiltonian, is the sum over all i, all the oscillators, the first oscillator, the second oscillator, of omega. I'll include the h bar for the moment; I'll probably forget about it before the night is over. Omega, but which omega? Each oscillator may have a different frequency. So the contribution of the i-th oscillator is proportional to the frequency of the i-th oscillator times n sub i. We're going to give a name to n sub i now. We're going to call the n sub i occupation numbers. Not occupation like what's your occupation, but occupation in the sense of you are sitting there occupying some space. These are the occupation numbers. It's just a name for the moment. What is it that's doing the occupying? Well, we'll find out. But this says that each oscillator has an energy proportional to its occupation number times its frequency, with the h bar being there. There is the extra half, the ground state energy. I want to ignore it. It's of no interest. It plays no role. It's just an additive constant, and additive constants in energy don't make any difference. OK. That's the whole theory, if you like. Well, not quite. Not quite. Now, how do we label states? We have many oscillators. If we have one oscillator, we can label a complete basis of states just by the level or the occupation number of the oscillator. Remember, we have all these states: n equals 0, which is the ground state, which we sometimes just call 0; n equals 1, which we call 1; n equals 2, and so forth and so on. We label the states by the occupation number. That's a state. What if we have many oscillators? And I now imagine that I lay them out. There are many oscillators.
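Collected in one place, the algebra, the Hamiltonian, and the kind of basis state being set up here are:

\[
[a^{-}_i, a^{-}_j] = [a^{+}_i, a^{+}_j] = 0,\qquad [a^{-}_i, a^{+}_j] = \delta_{ij},\qquad
H = \sum_i \hbar\omega_i\, a^{+}_i a^{-}_i = \sum_i \hbar\omega_i\, N_i ,
\]
\[
H\,|n_1, n_2, n_3, \dots\rangle = \Bigl(\sum_i \hbar\omega_i\, n_i\Bigr)\,|n_1, n_2, n_3, \dots\rangle ,
\]

with the additive ground-state constant dropped, as in the lecture.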
So lay them out. Here's the first one over here, the second one over here, the third one over here, the fourth one over here, i equals 1, i equals 2, and so forth. Then, in order to specify a state, I have to specify the occupation number, or one way of specifying a complete basis of states. One way of specifying a basis of states is to write down the occupation number of each oscillator. How many quanta or how many units of energy are there, so to speak, in the first oscillator? I'm going to call the first oscillator n, no, n1. n1, n2, n3. If there's a finite number of oscillators, it terminates. If there's an infinite number, it doesn't. Do you know any situations where, at least mathematically, there's an infinite number of oscillators? What's that? Well, an example would be a very idealized mathematical violin string. A violin string has any number of harmonics. Each one can oscillate with a different frequency. That's mathematically idealized, of course; the real violin string is made out of atoms, and there aren't really that many possibilities, but a mathematically idealized violin string would have an infinite number of oscillators. But of course, the very, very high n, the very high i, would correspond to extremely rapid frequencies. Really, because of the atomic structure of the violin string, it really doesn't have that many oscillators. But that's an example, mathematically, of infinitely many, and we're going to be interested in that case. Okay, let's talk about, let's come over to another blackboard over here. We're going to do a little more oscillator mathematics. I told you originally that if you act with the creation operator, or the, I'm going to call it, I'll slip into calling it the creation operator, with the creation operator, you increase n by one unit. Well, that's not quite right. Not if we normalize the states in the usual way. The usual way to normalize a family of states, they're orthogonal to each other for different n. Why are they orthogonal to each other? Because they have a different, observably different number n. So states with different values of an eigenvalue are orthogonal. So the usual normalization would be that the inner product of n with m is delta nm, or in other words, each state with itself, n with n, is equal to one, and n with m is equal to zero. For n not equal to m, they're orthogonal; for n equal to m, we normally would take them to be normalized states. That's standard quantum mechanical notation procedure. With that notation, in other words, each one of these states is of unit length. Each one of these states is of unit length, but with that notation, A plus on n does not give n plus one. It gives a numerical coefficient in front, and the numerical coefficient is the square root of n plus one. Let me prove that for you. Let's see if we can prove it. We can do it over here. Let's not put the answer in. Let's just call this some numerical coefficient, and we don't know what it is yet. We don't know what it is yet. It's some cn, and we're going to try to figure out what it is. What's our tool? Our tool is the commutation relations for the oscillators. It's a single oscillator now, not multiple oscillators. Okay. Let's write, this is a ket vector equation. Let's write the corresponding bra vector equation. When you go from a ket vector equation to a bra vector equation, what do you do with the operators? The kets go to bras. What about the operators? Hermitian conjugate. Hermitian conjugate.
Hermitian conjugate of A plus is A minus. Another way to write the same fact: the bra vector n, hit with A minus, is equal to cn (I'm assuming that cn is real; we can do the complex case, it doesn't matter; it's okay to think of cn as real, but let's let it be real; if it wasn't real, then we would want the complex conjugate; we can choose it to be real) times the bra vector n plus 1. That's the exact same equation except with bras. Kets go to bras. Operators go to their Hermitian conjugates, and numbers go to their complex conjugates, and I'm taking c to be equal to a real number. Okay, now here's what we're going to do. We're going to put this equation next to that one. We're going to take the inner product of this equation with this equation, the bottom one with the top one. Let's write it out: n, A minus A plus, n. That's what happens if you take the inner product of the left side here with the left side there. And that's going to equal cn squared times the inner product of this vector with this vector. But what's the inner product of this with this? 1. 1. Every one of these vectors is normalized to 1, so it's just 1. So now we have a tool to figure out what cn is. All we have to do is juggle the A pluses and A minuses in a clever way. How do we want to do it? I think I erased an equation over here. I should have written, yeah, let's write down the guiding equation over here. Capital N on the nth state is equal to little n times the nth state. Why is that true? Come on. Definition of being an eigenvector. Little n is the eigenvalue of big N. Big N acts on the eigenstate to give little n times that state. That's the definition of an eigenvector. Question? Yes? I don't see why the inner product of n plus 1 with n plus 1 is 1. Because I've used the rule. No, no, no, now I've assumed they're normalized. I'm assuming now that the n vectors are normalized. And having, no, no, I have put the ambiguity into the c's. I may assume that they're normalized and then seek to, got it? Right. Yeah. Okay, good. So, big N is nothing but A plus A minus. So let's put that here: A plus A minus. That's almost what I have here. I don't have A plus A minus. I have A minus A plus. But what I know is what A plus A minus does. What can I do? What's that operation? Commute. Commute. And here is our commutator. The commutator is 1. So what that says is that this same thing is equal to n, A plus A minus plus 1, n; that plus 1 is the commutator term. Okay, now I'm in good shape because I know what A plus A minus does when it acts on n. It just multiplies by little n. So all of this just gives me n plus 1 times the inner product of n with itself, which is what? 1. 1. So there we have it. cn squared is equal to n plus 1, and therefore cn is the square root of n plus 1. We now know what the creation operator does in all of its glory. It increases the occupation number by 1, and it multiplies by the number square root of n plus 1. We can get rid of the rest of this. We don't need it anymore. Next. What about A minus on n? Well, you go through essentially the same little game. You can do it yourself. It doesn't take, it's not very hard. This time you don't even have to do any commutator. They'll wind up in the right order, and what you'll find out is that this is equal to the square root of n times n minus 1. In other words, it takes you down a level but multiplies by the square root of n. Notice that this one, yeah, okay. That's what you find.
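The little argument just given, condensed into LaTeX (cn is taken to be real and the states are unit normalized, as in the lecture):

\[
a^{+}|n\rangle = c_n\,|n{+}1\rangle, \qquad \langle n|\,a^{-} = c_n\,\langle n{+}1|,
\]
\[
c_n^{2} = \langle n|\,a^{-}a^{+}\,|n\rangle = \langle n|\,(a^{+}a^{-} + 1)\,|n\rangle = n + 1,
\]
\[
\Longrightarrow \quad a^{+}|n\rangle = \sqrt{n+1}\,|n{+}1\rangle, \qquad a^{-}|n\rangle = \sqrt{n}\,|n{-}1\rangle.
\]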
Once you know this formula here, you don't have to worry about what happens when A minus hits the bottom level. When A minus hits the bottom level, the bottom level being n equals 0, you get 0. So this equation as it stands now encodes the fact that the lowering operator annihilates or kills or gives 0 when it acts on the ground state. Just the square root of n here. So this is more oscillator stuff that I assume Dirac was responsible for. Now we know, I think, everything we need to know about harmonic oscillators, at least for the moment. And that was just a mathematical interlude, if you like, so that we'd have it in front of us when we need it. Let's put it up on top. Any questions about harmonic oscillators or multiple harmonic oscillators? Oh, oh, one thing. What happens, here's a particular vector or a particular state which has a certain set of occupation numbers. Let's hit it with a creation operator for the ith oscillator. We're going to the ith oscillator and we're applying one unit of creation to it, or one unit of elevation. What does it do? Well, it goes over to the ith slot. If this were 3, if i were 3, it would go over to this slot and it increases the occupation number by one unit, but only for the particular i that we wrote down here. So if we go to the seventh oscillator and we apply the creation operator, it increases the seventh oscillator level. It does not increase the sixth one, it doesn't increase the fifth one. You know, if there's an oscillator over there, it's over there. And it's not to be confused with anybody else. And so what this does is it leaves all the other ones alone, let's say i is out here somewhere, and 1 and 2, dot, dot, dot, until we get up to the ith place and then it gives us ni plus 1 and then beyond that nothing. But this is not quite right. This is not quite right because it fails to take into account the square root of n plus 1. So what we actually have to write, what am I writing? I'm writing nonsense. A plus i on n1, dot, dot, dot, ni, dot, dot, dot gives n1, dot, dot, dot, ni plus 1, dot, dot, dot, but we have to put in a square root of ni plus 1. So in other words, it works exactly the same as a single oscillator with all the other ones just being bystanders, which play no role when you hit it with the ith oscillator. If you hit it with the j-th oscillator, some other occupation number would get increased. Likewise for the lowering operators or the annihilation operators, same kind of understanding. So as I said, that's what we need to know about oscillators. Now we're going to come to quantum fields, real, genuine quantum fields, but in a very simple context. Non-relativistic quantum mechanics, ordinary quantum mechanics. Not relativity yet. We're not doing special relativity, just ordinary quantum mechanics. In ordinary quantum mechanics, there's some kind of funny connection between particles and fields. Particles are described by wave functions. Wave functions are functions of position. They sort of look like fields. What are fields? Fields are things, degrees of freedom, which depend on position and on time in particular. But let's just say position for the moment, let's freeze time. Fields are functions of position. The wave function of a particle is a function of position. And so you might think, that's all there is to it. That's the connection between particles and fields. But it's not. It's not. Or at least it's only a very tiny part of it, and in fact not the real idea. Let's write it up here. Psi of x.
Psi of x is the wave function of one particle where x is the position of that particle. X could stand for x, y, and z, of course. It doesn't necessarily stand for just x. But psi is a function of x. But what is it? It's not an observable quantity. It's the state vector. It's the state vector in the position representation. It's not an observable. You don't do experiments to measure the wave function of a particle. You do experiments to measure the position of a particle, or the momentum of a particle, or the angular momentum of a particle, but not the state of a particle. You don't measure. That's not a measurable thing. It's a thing which tells you a great deal about the probabilities of various measurements, but it itself is not an observable. So that's the first thing: not an observable. Next fact. If we have several particles, then psi is not a function of one position. It's a function of all the positions. I want to exaggerate the notation a little bit. I want to make this a small psi, a lowercase psi. And it's a wave function. If I have two particles, let's call them x and y, again, not the x coordinate and the y coordinate of space, but just the coordinates of two particles, then the wave function is a function of two positions, all right? Function of several positions. In fact, it's a function of the position of all particles. Third fact. What was the third fact? Ah. The third fact is, given a wave function which may depend on x1 through some number, whatever, x15, 15 particles, it's the description of a fixed number of particles. We describe 15 particles with a 15 particle wave function. Fixed number of particles. That is not the idea of a quantum field. By contrast, now I'll tell you by contrast what a quantum field is, and in many respects it's everything that this is not, although we tend to use the same notation for it and it's closely related. Okay. First of all, these were wave functions, and they represent state vectors. Oops. The quantum field is different. Number one, it's an observable. It's a thing you can measure. Let's just think for a moment. What is a quantum field really going to be when we get to the real thing, the real, you know, more complicated, serious quantum field theory? It's going to be things like the electromagnetic field. It's going to be the quantum mechanical electromagnetic field. The electromagnetic field is most certainly an observable. We observe it. And so, first of all, whatever psi is, it is an observable. Number two, psi is a function of only one coordinate. Oh, incidentally, the fact that it's an observable means that it's an operator. It means that it's an operator in the space of states. It's not itself a state, it is an operator. That must be so if it's going to be an observable. Number two, it's a function of only one coordinate, of one position, a function of one position three-vector, so to speak. And finally, it describes systems of any number of particles, any number of particles. In other words, it's capable of describing systems where particle numbers can change even, where particle numbers can change. How can particle numbers change? Well, if we were thinking about the electromagnetic field and we remember that the electromagnetic field is photons, every time the electromagnetic field is radiated, it changes the number of photons. So whatever psi is, it's a description, it's an observable, it's an operator, it's a function of one position, and it represents the quantum mechanics of any number of particles, not a fixed number of particles.
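To keep the two objects straight, here is the contrast just drawn, restated compactly in LaTeX notation (lowercase psi is the wave function, capital Psi is the field; nothing here goes beyond the points above):

\[
\psi(x_1,\dots,x_N): \ \text{not an observable; depends on the positions of all } N \text{ particles; fixed number } N,
\]
\[
\Psi(x): \ \text{an operator (observable once Hermitian combinations are taken); one position } x; \ \text{any particle number.}
\]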
That's number one. I don't want to go too fast because this is tricky stuff, yeah. So when you say function of one position, you mean the position x where x can vary. X can vary. Yes, x can vary, but it's a function of only one x, oh boy. It's a function of only one x, but that doesn't mean it's a function of x when x is five. It means it's a function of x, x can be anything, but it's not a function of x1, x2, and x3. I keep telling you, why do, I mean why? It's just a function of one coordinate. One coordinate in the real world means three coordinates, means x, y, and z. No, no, that's not what that y is. That y, two particles, one is named x, the other is named y. Why? Because who's on first, that's why. You've got two particles and psi of x and y, you've got 15 particles and psi of x1 up to x15. That's right, right. Y is just x2, x is x1, right. And one and two label two different particles, right. How do these two transform into each other? Can you explain one in terms of the other? We can do that. That's the whole point. I laid out for you the big contrast, and now we want to know what's the connection between them. It can take a little while. It's not. No, same kind of particles. Same kind of particles. And in particular, bosons; we'll come to fermions later. We'll come to fermions later. Yes, this is the theory of bosons. Everything we're doing tonight will be bosons. Excuse me, I had a question on the left hand board about the... What's that? The what? A question on the left hand board about the harmonic oscillators. Is the assumption about the independence of the different operators exact or approximately correct? Exact. So for example, the i would not be used to index the electrons in an atom. Say it again. The i would not be used... The i? Yeah, the subscript. That wouldn't be used to index... Yeah, i could be labeling the states of an electron in an atom. Now you've got to be careful. If the electrons are interacting, then it's more complicated. But if you just had non-interacting electrons in an atom, i would be labeling the states of the atom. Okay, but we're going to come to that. Yeah, let's come to it right now. Let's go back to single particle quantum mechanics. This is the theory of many particles. Well, it could be the theory of no particles. It could be the theory of one particle. It could be the theory of two particles. Variable number of particles. Let's go back to the theory of just one particle. Okay. And for simplicity, I'm going to take a particular example, a particle in a box. The potential energy is such that the particle can't get out of a box. Just one direction of space, although nothing I'm saying is going to depend on that. This is a boson in a box? A boson. And for the moment, it doesn't matter. If there's only one of them, it doesn't matter. If there's one of them, there's no question of whether there's an exclusion principle or not. All right? So one of them doesn't matter. Let's think about the energy eigenstates of a particle in a box. Their wave functions. For example, the lowest energy wave function will be the smoothest wave function that satisfies the Schrodinger equation. And it'll typically look something like that. It'll be a sine function that just barely fits in the box. It fits in the box from one end to the other. And that's a wave function. It's a wave function in the sense of this one over here. Let's give it a name. Let's call it psi 1 of x. This should be little psi. Just make sure it's little psi.
It's just a wave function of a one particle system in a box, psi 1 of x. What does the 1 represent? Just that it's the first energy level, the lowest energy level. What about the next energy level? The next energy level is some other wave function. And as you go up in energy, the wave functions tend to be more variable. The next one would have a node like that. We can call that psi 2 of x. I'm using a funny notation. The index here is not the number of nodes. It's one more than the number of nodes in the box. So the next one would be psi 2 of x. Would have a higher energy, and so forth and so on. Now, the one and the two here do not represent oscillators; go down the line, psi i of x. Now, this i will be the same as that i eventually. But at the moment, they are two different things. In that case, it's labeling oscillators. In this case, it's labeling state vectors of one particle in a box, just one particle. Each one has an energy. I won't write down what the energy is other than to remind you that each of these wave functions does have an energy. OK, everybody all right with that? One particle wave functions. Now, let's imagine now that we have many particles. We have many particles of an undetermined number, but some number of them in state one, some number of particles in the box occupying state one, some number of particles occupying state two, some number of particles occupying state three, and so forth. That is a pretty good description of a wide class of states, a basis of states, in fact. It is a basis of states: saying how many particles occupy each energy level. So let's write that down now. Here's a family of states. Number of particles in state one, number of particles in state two, number of particles in state three, oh boy, deja vu. Where is, here it is over here. Absolutely identical to the way I labeled the states of a multiple harmonic oscillator system. Now, of course, there are infinitely many possible states in here of arbitrary wavelength, arbitrarily small wavelength, and that's the reason why we need to study any number of oscillators up to infinity. So here is a parallel. Between, you wouldn't want to do this, you couldn't do this for fermions. For fermions, you can either have no particle in a particular state or one particle, never two. Can't put two. So when writing down the idea that you can have any number of particles in the first state, any number in the second state, any number in the third state, we're assuming to begin with that we're talking about bosons. But notice, there's a complete parallel here. Because there's a complete parallel, it means we can invent, this is an invention out of the head, operators which do exactly the same thing as the creation and annihilation operators do. Now they literally create particles. For example, A plus 1 on n1, n2, n3, dot dot dot, gives me square root of n1 plus 1 times n1 plus 1, n2, n3, and so forth. It increases the number of particles in the first state, whatever it was; it adds one and multiplies by the standard square root. Why does it multiply by the standard square root? That's a definition. But it's a definition which is very cleverly chosen, among other things, so that we can use all the apparatus that we have about oscillators, which is very helpful, as we'll see. Likewise with the annihilation operator. Now I'll call them creation and annihilation operators; they create and annihilate or remove particles from the system.
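For reference, the particle creation and annihilation operators just defined act on the occupation-number states exactly as the oscillator operators did; in LaTeX,

\[
a^{+}_i\,|n_1, n_2, \dots, n_i, \dots\rangle = \sqrt{n_i + 1}\;|n_1, n_2, \dots, n_i + 1, \dots\rangle,
\qquad
a^{-}_i\,|n_1, n_2, \dots, n_i, \dots\rangle = \sqrt{n_i}\;|n_1, n_2, \dots, n_i - 1, \dots\rangle.
\]

And as a concrete illustration of the single-particle wave functions for the box (the box length L is a label introduced here, not one used in the lecture), the standard infinite-well states are

\[
\psi_i(x) = \sqrt{\tfrac{2}{L}}\;\sin\!\left(\tfrac{i\pi x}{L}\right), \qquad i = 1, 2, 3, \dots
\]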
Okay, that's the basic idea of creation and annihilation operators. One more question. Yeah. On your top line there, n1, n2, n3, I thought n1 and n2 were the quantum states. One, two, and three are the quantum states, and n1, n2, and n3 are the numbers of particles in those states. One, two, and three run up to i, and then past. Okay, so this is tricky. There's a lot of notation, but the notation is, it's not hard, it's just a bit to remember. So I'm going slow for that reason and repeating myself. Question please. What's the physical meaning in this environment of that square root? A plus A minus on the state n is equal to little n times the state n, right? Well, A plus is sort of like A minus, so it better be that A plus and A minus are somehow square roots of N, but we can do better than that. Let's see, how can we do better than that? Hmm, let's use the commutation relations. No, okay, let's suppose we didn't know this. Let's suppose we didn't know this, but we did know about the square roots of N. Okay, let's suppose we didn't know, we knew about the square roots of N. Okay? Let's use our commutation relations to write this as A minus A plus, minus one, on n, I think. Yes? A minus A plus is equal to A plus A minus plus one. Plus one, right? One minus? No, let's see. A minus A plus minus A plus A minus is equal to one. So, plus one. No, I was right. Take A plus A minus over to the other side, you get a plus, and then take the one over and you get a minus one. Right, I had it right. If it doesn't work out, we'll change it. Now, what does A plus on n do? According to assumption, it gives you square root of n plus one times the state n plus one. Okay, that's this, it's over here, but we still have to multiply by A minus. And then we have another term, minus the state n. That's what we have here, right? Okay, this is just a number, square root of n plus one. Let's put it over here, square root of n plus one. What does A minus do when it acts on square root of n plus one? Oh, sorry, on the state n plus one. No, I think, yeah, it gives us the state n, but what's the numerical coefficient? Square root of n plus one, right? So, we get two square roots of n plus one, which gives what? n plus one times the state n, minus one times the state n, which is just good old n times the state n. In other words, those square roots of n are there to ensure that this vector is really an eigenvector of A plus A minus. And roughly speaking, very roughly, A plus and A minus are of the same order of magnitude. One is P plus iX, the other is P minus iX. Same order of magnitude, and since altogether they multiply together to give you N, each one is roughly the square root of N. Yeah? So, okay. So, we don't have a definition of that other than kind of its behavior. We're making it up. Okay. We define, we define, we're not making it up randomly, but we're defining things. And then we're going to see how they work. My question was, I understand when you apply A plus, you create a particle. That's a very tangible thing. I didn't understand what its square root does in a tangible sense. That's my question. Nothing. That's all. Well, sometimes mathematics is just mathematics. Okay. It's like, for an ordinary particle, you have to normalize the wave function. You have one particle; it's the same thing here. Okay. Right. Why does the particle disappear? Why doesn't it become a different state? What? When you do A minus, why does it disappear? Why doesn't it become a different state? That's the definition of A minus. You can't ask why about a definition.
I define A minus so that it decreases the number of particles. Okay. Definition. Mathematics: we make definitions and then we find the consequences of them. So the question is, why is it a useful definition? Well, it's a useful definition because we want to explore systems of various numbers of particles. In fact, we want to explore systems where the number of particles can change. For example, we might radiate a photon. How would we go about mathematically indicating that we created a photon? We take the state and we hit it with a photon creation operator. Or a photon might be absorbed. An atom might absorb a photon. How do we write the fact, or how do we express the fact, that a photon has disappeared? We express it by saying an annihilation operator has acted. So the purpose of all of this is not only to be able to study systems of an arbitrary number of particles, but even to be able to study systems of a variable number of particles. Any questions? So here's what I think you're doing. What you're doing is you're defining these operators, how they act on eigenvectors. And since eigenvectors form a basis, that tells us how they act on all vectors. Okay, I got that part. Right on. Okay, now we've got this definition of A plus and A minus. We haven't shown, but one assumes that one can show, that you have the commutation relationship that you've got up there. Yeah, with these definitions, it will follow that these operators have the same commutation relations as up there. Right. In other words, the system of a variable number of particles here can be represented just as a system of harmonic oscillators: an oscillator for each state, for each single particle state, for each single particle wave function, an oscillator, and the occupation number of that oscillator is the number of particles that are occupying that particular state. The terminology is well-chosen. Occupation, now we know what it means. It's not you occupying your seat. It's a particle occupying a state, and a boson in particular. These are creation and annihilation operators for particles. How would you label it? What's the vacuum in this language? The vacuum, well, first of all, it's the state which is annihilated by all of the annihilation operators. It's the state which, when you try to lower any occupation number, there's nothing there. Nothing to lower, so the state becomes zero. But the vacuum is simply the state with zero occupation number in every state. It has the property that all the occupation numbers are zero, and it has the property that any one of the A minus i's gives zero when it acts. So the ground state is unoccupied as well? The ground state is unoccupied. Right. If it was occupied, you still couldn't lower it? If it were occupied, you could lower it. The ground state? No, the ground state is not occupied. You can't occupy it. That didn't make sense. The ground state and the vacuum are the same thing. Sorry. Right. Let's, while we're at it, let's write an expression, which we can now do, for the energy of the system. In order to write an expression for the energy of the system, we need to know the energy of each one of these wave functions. Each wave function has associated with it an energy eigenvalue. What should we call them? The energies. We could call them E, couldn't we? We could call them omega, or actually h-bar omega. But let's just call it the energy level of the i-th state. Let's kill h-bar, forget h-bar, because I've already forgotten it. And just say the energy of a particle in the first state would be omega-1.
The energy of a particle in the second state would be omega-2. And so forth. The energy of a particle in the i-th state would be omega-i. Let's write an expression in terms of creation and annihilation operators for the energy of the whole system. This is easy. The energy of a whole system is, first of all, the energy of all of the particles in the first state. So that's the number of particles in the first state, and 1, times the energy that's associated with particle in the first state, omega-1. Plus the number of particles in the second state times omega-2. Plus and so forth and so on. So it's the sum of n sub i omega sub i. Now, that's a true statement, but I want to write it as an operator statement. These n's are eigenvalues, but they're eigenvalues of the number operator. So we can really write this in operator form by saying it's the sum of all states, single particle states, of omega-i times the occupation number of the i-th state. And what is the occupation number of the i-th state? a dagger i ai. Notice that each one of these is just basically the oscillator formula. This is just the oscillator formula. There would be an h bar in here. Put an h bar if you like. This is just the oscillator formula, but now it's summed over all the possible states here, and you can think of it two ways. You can think of it somehow the system of particles is equivalent to oscillators, and those are just the oscillator energies. Or you can say it's just the sum of the energy of all the particles. The number of particles in the first mode, first state, times the energy of the first state, plus the second state, times the energy of the second state, and so forth. And it comes out to the same thing. Yeah. What is ai? Ai? A minus i, sorry. Okay. So what is a dagger? Creation. Oh, sorry. A plus? Yeah. Okay. Good. In many contexts, I'm sorry, in many contexts, once you go to quantum field theory, which is what we're doing now, A plus just becomes a dagger, and A minus just becomes A. Well, I'll try to stick with the same notation. I don't want to change notations now. Okay, so let's just think for a minute what are we doing? We're inventing some kind of object, which is a bunch of oscillators. On the other hand, the violin string is a bunch of oscillators. The violin string is a field. If you look at radiation in a cavity, radiation bouncing between mirrors, it's a bunch of oscillators. People speak, I mean this goes back from long before quantum mechanics, speak about radiation as a collection of oscillators. Radiation oscillators, long before quantum mechanics. And so we're beginning to build up, from thinking about ordinary particles, we're beginning to build up an idea of collections of particles as harmonic oscillators, and soon enough, we're going to see the connection with fields. Okay, if there are no more questions, maybe we'll take a five minute break because I need to stop for a few minutes. And organize your thoughts, and we'll field a few questions before we go on. Is what? That's the Hamiltonian. That is the Hamiltonian. That is the Hamiltonian. We're going to find other very elegant ways to write it, but that is the Hamiltonian. Thank you very much. Okay. Yeah, okay. I want to be able to remember where I left off. Okay. Good. Pat, what do you do with them? You put them on? I'm not sure what he's doing. I don't know where he's doing either. He's loading up the website. Yeah. I assume he puts them up. I don't know. I thought that was the idea, but I don't remember. Yeah, okay. Okay. Okay. 
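Restating the energy expression written down just before the break in LaTeX (with the h bar put back in):

\[
H = \sum_i \hbar\,\omega_i\, a^{+}_i a^{-}_i = \sum_i \hbar\,\omega_i\, N_i,
\]

where N sub i = a plus i times a minus i (written a dagger i times a i in the alternate notation mentioned above) is the occupation-number operator for the i-th single-particle state.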
What's that? No, that has nothing to do with it. No, the energy levels, no, it's not the energy levels of the single particles in the box which are evenly spaced. It's the energy levels of no particle, one particle, two particles, three particles which are evenly spaced for each oscillator. Right. Yes. Yeah. No, we're not talking about particles which are moving in a potential which is an oscillator. You could. You could think of a lot of particles in a potential of an oscillator. Then you would have two kinds of oscillators. The particles would be oscillators and the creation and annihilation operators would be oscillators. But now we're just talking about any collection of single particle states. Single particle states is just what you would have thought about when you're just thinking about one particle quantum mechanics. But now, since we can talk about any number of particles, we can generalize in this way. Okay. Okay, let's get back. Are there some questions? Because I don't want to go too fast. I know that there's a lot of little indices and little pieces and nuts and bolts here, which, if we go fast, will slip out of your head. They slip out of my head from time to time. Yes. For sure. Yes. The A's are the complex amplitudes, the Fourier coefficients? We'll come to that. Yes, it's deeply related to Fourier analysis. Absolutely. We're assuming that all the particles have the same energy? Same energy? No, no, no, no, no, no. If they're in the first state, they have energy omega one. All the particles which are in the first state have the same energy. All the particles in the second state have the same energy as each other, but not as the ones in the first state. No, I'm fine. But they interact with each other. Well, okay, for the moment we're talking about particles that don't interact. For the moment we're talking about particles which simply don't interact with each other. That's the starting point. Later we have to ask what kind of things we have to do to get the particles to interact. What Georgia was saying is that if particles really interacted with each other, the energies wouldn't really be additive. It wouldn't just be the energy of particle one plus the energy of particle two. There would be the energy of particle one plus the energy of particle two plus any kind of potential energy between them, for example. So, right, so that's correct. For the moment we're talking about free particles. Free particles means particles which don't interact with each other. Okay, now we come. No other questions? Yes. In non-relativistic quantum field theory, there's still a half a boson in the ground state? Yes, but it doesn't play. It doesn't do anything. It's just zero point energy. And why doesn't zero point energy matter? Why is it irrelevant for everything? Well, if you think about what energy does, energy is the Hamiltonian. What happens if you add a constant, a number, to a Hamiltonian? Well, a number commutes with everything. Remember, what the Hamiltonian does for you is it provides a method of getting equations of motion. The time dependence of something is related to the commutator of that thing with the Hamiltonian. Constants do nothing. They commute with everything. Another way of saying it is, in keeping track of energy, the only thing that really counts is energy differences. So you can throw away that ground state energy except for certain very special purposes that aren't going to concern us right now. Okay, quantum fields. A field is a function of space.
It's not a function of two points of space or three points of space. So it's not like the wave function of a many particle system. It's a thing which only depends on one coordinate. It's also an operator. Why? Because it's an observable. You observe fields, you measure them. What was the third point about, well, I don't remember, but, um, yeah, and somehow, somehow has to do with your ability to change the number of particles. Okay, so I'm, what we're going to do is I'm going to give you the rule for the quantum field of the system of particles. And then we're going to explore how it works. If you ask me prematurely why that's the definition, well, it won't work. It won't work. The best thing to do, sometimes it's just best to follow the definition. And I won't, I promise you, we won't go too far before you start to see why it works and why it's interesting and why, why that's the definition. But it's better not to try to, um, to intuit out from the beginning why this is the particular definition. Rather get used to it and familiar with it and, in a short amount of time, you'll understand why that's the thing to do. The definitions, the value of the definitions depends on their utility, all right, on what you can do with them. And I might say this to begin with. Quantum field theory, at least in the way we're thinking about it here, among other things is a bookkeeping device. It's a bookkeeping device for keeping track of many particles, for keeping track of the quantum mechanics of particles which come and go, change their state, even disappear and reappear, whatever. It's a kind of bookkeeping device. Whenever you have operators like this, you're doing bookkeeping. It's a little more than that because you can actually measure these fields. The things, when they go by you, you feel them. They have an effect on you. But for the moment, we can just think of it as bookkeeping. And whether a definition is a useful definition or not depends on what you try to do with it. And I can tell you, with quantum fields, you can do a lot. Yeah? Could you take a minute or two to say just a little bit about the history of quantum field theory? Do you want to develop it, too? Um, well, we go way, way back. We can start with a question of who invented field theory. Who invented field theory? Anybody know who invented field theory? I suppose it was Faraday with the electric and magnetic field. So I think probably the idea can best be, uh, the original history started with, I think, with Faraday. Maxwell invented field equations. Faraday didn't have field equations. He just had a feel for the fields, a feel for the fields. And he was able to visualize them. Maxwell wrote down, as far as I know, well, it's probably not true that he was the first one to write down a wave equation. No wave equations. Waves of water or waves of other things. But I think it can fairly be said that Maxwell wrote down the first modern version of field theory, but it was classical field theory. Um, the need to quantize fields or the need to do something involving h-bar goes back to Planck. Planck, well, it goes back before Planck. Uh, we have not talked, we will eventually talk about what is called the ultraviolet catastrophe. Won't talk about it now. But Planck realized that, uh, that classical field theory would not be adequate for studying thermodynamics of radiation, introduced Planck's constant, and in the process, uh, basically said some things about harmonic oscillators, which were more or less correct. 
He didn't realize that the radiation field itself was made out of oscillators, or photons, as we call them, the quanta of the field. He was thinking about the oscillators in the walls of a cavity, namely the atoms, which he envisioned as literally oscillators, little oscillating charges. And, um, his application of Planck's constant, and this, he didn't have this mathematics, but the idea that the energy levels were integer spaced, he wrongly thought that that had to do with the, um, with the oscillators in the walls of the cavity. It was Einstein who first, um, understood that the quantization of oscillations had to do with the electromagnetic field, with Maxwell's electromagnetic field. So, as with almost everything else, it was Einstein who, uh, had the first inklings of this. But then later, it was Heisenberg, Pauli, Dirac, was the, Dirac was the person who really put it together in the form we're talking about. Uh, and that was basically around 1929. Now, 1929, a lot of these ideas, uh, first really surfaced. But, um, you know, as usual, Einstein, uh, really did have all the ideas, although he hadn't articulated them in, in exactly the modern form. After that, the whole history, but, uh, not for tonight, not for tonight. Um, this notion of particles or variable number of particles as harmonic oscillators, that's a good question. Who was the first one? I'm, I'm guessing Dirac again, but I'm not sure that he was the first one to identify particle creation and annihilation as a form of, um, harmonic oscillator mathematics. I would guess it was Dirac. Probably around 1930. What was the first major problem that it could solve that hadn't been solved before then? The first major problem was the problem of the ultraviolet catastrophe of, uh, of, um, or basically why radiation was quantized. Einstein knew radiation was quantized. Okay, he knew that radiation came in photons, but he didn't have a mathematical way to think about it. Uh, the first major problem that was solved was a formal mathematical quantum mechanics that was consistent with the rest of quantum mechanics that included the idea that radiation came in photons. Radiation came in photons, and so far we have never even talked about what a photon is. We're getting close. We're getting close. A photon has something to do with the electromagnetic field and it's a unit of energy. It's like these occupation numbers. Okay, the things that the occupation numbers count are quanta. If the particles we're talking about are the quanta of the electromagnetic field, they're called photons. So that's where we're going. We're really getting close to talking about what a photon is or what a quantum of a field is. In fact, right now we're going to introduce the notion of a quantum field, the simplest version of it. I'm going to give it to you as a definition and we're going to see what it does and how it works. Okay, so psi is a quantum field and, again, because it's a quantum field, it's an operator. It's something you can measure. I'm going to call it capital psi. It's an operator, but it's an operator which depends on position. So it's not a single operator. It's an operator for each position. For each position, a separate operator, and it acts on a space of states. The space of states, what is the space of states? The space of states, where is it? We erased it? Well, the space of states, the n1 and n2 states. Incidentally, this is called a Fock space. F-O-C-K. Fock was a Russian physicist.
I doubt very much that he was the first one to use this, but whether he was or not, it got named after him for the usual reasons that things get named after people, namely no reason. Okay, the quantum field psi of x. There's just a very short definition. It's equal to the sum over all of the single particle wave functions, sum over i, a contribution for each one of these states. So it's a sum over i. It contains psi i of x, the single particle wave function. These are things like sines and cosines. Sines and cosines of different wave number, of different wavelength. That's what goes there. And what multiplies it, the coefficients, the coefficients are the creation or annihilation operators. In this case, the annihilation operators, A minus i. Now, somebody asked me if any of this has to do with Fourier analysis. Suppose for a moment that the psi's were momentum eigenstates. That means they're sines and cosines. Then these operators A minus would look, would resemble very closely the coefficients that occur in Fourier analysis. That is, they are the quantum version of the Fourier coefficients when you expand the thing in sines and cosines. Now, it might not be sines and cosines. It's whatever the orthonormal family of states, of single particle states, is. That's it. That's the definition of a quantum field. The simplest example. What's that? These psi's are just numbers. Now, in fact, there's a separate number for each position. The psi i of x are just classical functions. So if I want to know what the quantum field is at x equals zero, I plug in x equals zero here. If I want to know what it is at x equals the square root of pi, I put in the square root of pi here. So it's a whole set of functions. Sorry, it's a whole set of operators, one for each position in space. That's a lot of operators. One for each position in space. Why so rich? Why are there so many operators? The reason is because we're trying to study a system where there can be any number of particles. That's a lot of states. A lot of states, a lot of operators, just because the number of particles is totally unbounded in this idea. Okay, so there's an operator at each position. This x and this x are the same. The psi's are different. This is an operator. This is just a function, a function of position. The operator is in here. That's where the operator is. There is the complex conjugate or the Hermitian conjugate. This is an operator. It's not a number, but it has a Hermitian conjugate. It's exactly the same thing except that we put creation operators there and psi star here. These quantum fields are not Hermitian. They're not Hermitian because the creation and annihilation operators are not Hermitian. So they're not really observables. But if you add psi and psi star, psi dagger that is, or you subtract them, you can make Hermitian combinations out of them. Those really are things that you could observe in the laboratory, just because they're Hermitian operators. Now we're going to explore these objects. We're going to go slow. But the first thing I want to show you is what the action of psi is, capital psi of x, when it hits the vacuum. What does it do when you apply it to the vacuum? Let's take psi dagger. We'll take psi dagger. Incidentally, what does psi do when it hits the vacuum? It kills it. It annihilates it. Why? Because it's only got annihilation operators in it. Psi dagger, however, has creation operators. It creates a particle. Each term in here creates a particle. It doesn't create a lot of particles.
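For reference, the definition just given, written out in LaTeX (capital Psi is the field operator, lowercase psi sub i are the single-particle wave functions):

\[
\Psi(x) = \sum_i \psi_i(x)\, a^{-}_i, \qquad \Psi^{\dagger}(x) = \sum_i \psi^{*}_i(x)\, a^{+}_i.
\]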
It creates a superposition of particles, but one particle in a superposition of states. So let's see if we can figure out what it does when it hits the vacuum. Okay, so here's what we're going to do. Let's take single particle quantum mechanics, one particle quantum mechanics, to begin with. In other words, forget all this fancy stuff with oscillators, just good old quantum mechanics of a single particle. There's a complete set of states. The complete set of states are the position states. Let's see, how do I want to write this. Okay. States which are located where the particle is located, at position x. They are a complete set of states. That means they are a complete basis of states. And that means that you can, if you sum over all positions or integrate over all positions, I'll keep it simple, I'll imagine summing over positions, but we may have to do an integral. If we sum over all positions, what do we get on the right-hand side? One. One, the unit operator. General rule for any basis of states. Okay. Now, let's see, I'm just trying to remember which way I did this. That's a true statement, but what I actually wanted was a different statement. It is a true statement. But what I actually wanted was this statement. Sum over i. As usual about this time at night, I start making mistakes. And I was just about to make a mistake, but I caught myself. What are these i's? Do you remember? They're the one particle states, the ones which look like this. Psi one, psi two, psi three, psi four, psi five, and so forth. They are a complete, they're the energy eigenstates. They are a complete set of states. If I take all the sines that can fit into here, for example, psi one of x, psi two of x, psi three of x, psi four of x, that's a complete set of states. Each one of these i's here is a state whose wave function is psi i. But the psi i's are a complete set of states, and so it follows that this expression here is just the unit operator. This is true for any complete basis of states. You go back to your quantum mechanics and you remember what this means. Now, there's a little bit of trickery here, but there's nothing deep going on. The sum over i of ket i times bra i, that's an operator. That's the unit operator. It's trivial. It's a unit operator. And what happens if I apply it to a state in which the particle is known to be at x? What do I get? I get a particle at x. I've done nothing. I've just rewritten the particle at position x. On the other hand, what is this object over here? Well, it's a coefficient, all right, but it's also a wave function. Do I have it? Yes, it's a wave function. It's the wave function of the state i. The inner product of a particle at x with the state vector i is just, is it psi or is it psi star? Anybody know? I know. Happens to be psi star. Happens to be psi star. So this can also be written as the sum over i of psi star i of x times the state i. That's a particle located at point x. Now let me slow down here and take any questions about this, because I'm sure this must be a little bit confusing. This is the unit operator. We apply it on a state located at x, and it gives us a state located at x. On the other hand, this inner product of a particle at x with the state vector i is just psi star i of x. It's just a wave function. Okay? But what about this, what about the state here? That state is a one particle, is one particle with wave function psi i. I can write it another way. I can write it another way. I can write it as follows.
The state i is equal to A plus i on the vacuum. The vacuum has no particles in it. If I create a particle in the state i, then indeed I have a particle in the state i. I have a particle in the state i. So I can substitute for this, I can substitute A plus i on the vacuum. No, one particle. One particle. This is one particle in the state i. That's all, just one particle. And it gives me a particle at position x. But wait a second. The thing that I wrote down here is exactly the quantum field psi dagger. So we can now write this in another form. We can write: a particle at position x is psi dagger at position x on the vacuum. So now we know what psi dagger is. It's an operator which creates a particle at position x. If we have no particles and we want to create a particle, we want to change the number of particles by adding one at position x, then the way to do it is to apply psi dagger to the vacuum. Creates a particle at position x. What if we wanted to create another particle at position y? Well, it's not too hard to work out, but what do you expect? Just hit it with the field operator. We'll call this the field operator now, or the creation, the field operator. We just hit it with another psi dagger, but at position y. So whatever the state is, if you want to add a particle at a particular location, that's the job of the quantum field. It's a function of x, y, no, not y. It's a function of x because it has to know where to put the particle. It's an operator that puts a particle at x. It's not the operator that puts the particle at y, so it must itself be a function of x. That's what, in a simple situation, the simple situation that we're working on now, that's the meaning of the quantum field. It creates a particle at position x. The creation and annihilation operators create particles in the energy eigenstates. You apply A plus i to the vacuum, and you get a particle, not at a position i, that doesn't mean anything, but in the wave function i, sines and cosines and so forth. That's the meaning of A plus. On the other hand, when you combine them together into the quantum field, the quantum field creates a particle at position x. What do you think the annihilation operator, or what do you think the other piece, does? It removes a particle at x if there's one there. What if there's none there? Then it annihilates the state. So psi dagger creates a particle at position x, whether or not there's one there. Psi annihilates a particle at x, or gives zero if there's no particle there. Where is the information about what kind of particle is to be created buried in? It should be in the little A plus. First of all, we're talking about bosons. If you've got fermions, you don't do them this way. There's a separate quantum field for each species of boson. So what kind of bosons are there in the world? There's photons, there's gravitons, there are Higgs bosons, there are gluons. For each species of boson, there is a separate quantum field that creates that kind of particle in this manner. Right. So the particle is being created with a delta function probability distribution? By psi of x. That's right. Right. That's exactly right. So, right. Excuse me. So if there's no particle at position x, then psi of x does nothing? No, it gives zero. What does it mean, it gives zero? The same thing as when an annihilation operator hits a state with no particle in it, or when the annihilation operator hits the ground state, it gives zero. It's doing nothing. No, it's giving zero. What if it's thinking of an interstitial particle of positions?
If there's nothing at that position, what does it mean to give zero to a position like this? You started with a vacuum. The vacuum is not zero. The vacuum is a state. It's a normalized state whose length is one. It's a good state. There's nothing wrong with it. It's there, but it has properties. The properties are those of empty space. When an annihilation operator hits it, it literally gives zero, the zero vector, the vector of zero length. No, I don't understand what that is. How can you have the zero? Zero is the unique and only vector that if you add it to any other vector, you get the same one back. It's a vector in a vector space. Go back to vector spaces. Vector spaces have in them zero vectors. Zero vectors of zero length. It's not interesting. It doesn't have any properties. It's just a... It doesn't describe anything. It doesn't describe physically anything. It doesn't describe physically anything. If you add vacuum, you get something else. So if you add the zero state to the vacuum, you get the same thing. What if you add the vacuum state? Oh, you certainly don't get the... If you add the vacuum, let's say to a one-particle state, then you get a state which has... Let's say you add it with equal coefficients just to be simple. Then you get a state that has a probability of a half of having a particle and a probability of half not having a particle. You see, if you add the zero state, not the vacuum, to a one-particle state, you just get the one-particle state. If you add the vacuum to the one-particle state, you get something that has a statistical probability of, let's say, a half to have a particle and a statistical probability to not have a particle. Zero cannot be normalized. It is definitely not normalized. It's of zero length. The one thing that you can't normalize is a vector of zero length. Right. The vacuum is not a vector of zero length. It... For a little bit of time, people generally get confused by the notation. This looks like it's zero, but it's not zero. It's just a name. The zero here does not mean that it's a vector of zero length. It does not mean that if you add it to another vector, you get back the same vector. It's just a name of the vector, and it represents a perfectly good physical state. It happens to be a state with no particles in it, but it's a state. It's empty space, but it's a state. So we could have put a half in there. You could have called it... Right. You could have called it George Bush. Whatever you want to call it, it doesn't matter. Won't you annoy me? No. Won't you annoy me? I would like that. You can't do it. You can't do it again. We won't talk about annihilation in George Bush in this case. Yeah. That's right. Once you've annihilated it, you can't put anything back. Right. It's just zero. It's literally zero. Right. Right. Any operator that acts on the zero vector just gives back the zero vector. It doesn't give anything interesting. Right. You know. It's like one minus one is zero. Take any vector and subtract it from the same vector, and you get zero. If you add zero to any number, you get the same number. Add the vector zero to... Here's the zero vector right there. It has no length. The vacuum, on the other hand, has a length of one as a vector in the vector space. Normalized. Not easy. Not easy. Not easy, but you get used to it. On the left board, the lower sum. Yeah. Does the order of those operators... No, no, no. Only this is an operator. At each point x, this is just an ordinary number. 
It doesn't matter which order you put it in. Numbers times operators are operators times numbers. Right. So just think of psi of x as a collection of numbers, one for each x. Complex numbers. Complex numbers. In general, complex numbers. Yeah. Yeah, they will in general be complex numbers. That, of course, depends on the solutions to the Schrodinger equation, whether it's real or complex, but that's right. Yeah, and is it right to say that the quantum field description of a particle is that you basically apply a creation operator to the vacuum state? How would you define a second particle? By applying again another, you want to put a particle at a position y on top of this? Yeah. Yeah. If you wanted to put another particle at position y, okay, let's make a two-particle state. Good. Let's make a two-particle state and make one more observation about it. Here's a two-particle state. This is a particle at x and a particle at y. Now, x and y could be the same point, or they could be different points. Let me point one thing out, one interesting fact. The psi's are made out of annihilation operators. There are no creation operators in here at all. Every annihilation operator commutes with every other annihilation operator. Wait, the psi dagger is a creation operator. No. I'm sorry, did I make a mistake? Yeah. Yeah, yeah, yeah, yeah. Right. Psi dagger is a creation operator, of course. But still, it's true that every creation operator, this is the important thing. It wasn't whether it was creation or annihilation. Each one commutes with every other one. Okay? Every operator in here commutes with every operator in here. That means psi dagger of y and psi dagger of x commute. Let's call this the state with a particle at y and x. Well, this is also equal to psi dagger of x times psi of y on the vacuum. Dagger. Thank you, I'm losing it. And that's equal to the state with a particle at x and a particle at y. Notice they're equal to each other. They're equal as a consequence of the fact that these commute. All right? The fact that these commute, because they're built out of oscillators in a certain way, tells us that it doesn't matter which order we put the particles, which order we write the particles in here. In other words, particle one at y, particle two at x, is the same as particle one at x, particle two at y. That's the defining property of bosons: that it doesn't matter which order you express them in. A particle, Harry over here and Jeffrey over there, is the same as Jeffrey over here and Harry over there. So that's fully equivalent to symmetrization? It's fully equivalent to symmetrization, and it tells you that the particles are most definitely bosons. So I didn't really have to say from the beginning, well, we're going to study bosons; I could have started with this construction and we would have derived that the particles studied this way are bosons. So the interesting question is, well, how the hell then do you make fermions? Yeah. That's more abstract. So does this imply that x and y are the same kind of bosons? X and y? X and y are positions. Yeah, they're the same kind of bosons. That's right, they're the same kind of bosons. So a given quantum field describes one type of particle. There's a quantum field for photons. Well, actually there are two quantum fields for photons because of the two polarizations. There's a quantum field, let's say, for a photon along some axis.
There's a quantum field for the photons polarized one way, another quantum field for the photons polarized the other way. There's a quantum field for each different type of particle. There's a separate quantum field. And if you want to make that kind of particle, you apply that quantum field. So that's what a quantum field is, or at least in its simplest incarnation. And I assure you this is the simplest. But the basic concept or the connection, you see what we're doing is we're talking about the connection between particles and fields. Here is the connection between particles and fields. This is the field, when you apply it to the vacuum, you create a particle. You define the field to be this. You define, yes, that's right, we defined it. But once we've defined it, it's an operator. If we add it to its complex conjugate, it's a Hermitian operator. Hermitian operators are observables. All of a sudden it takes on a life as something that we can measure. And do all the things with observables that we do. So all the properties of fields come out of these creation and annihilation operators? Yes. Yes. None of this would be interesting if we were talking about a situation where you could not create and annihilate particles. The whole point of this, well, I take, let me say a little less strongly. None of this would be interesting if we were only working in a context where there were 37 particles, never 36, never 45, but 37 particles end of story. We wouldn't do this. If we're interested in studying the whole set of possibilities of any number of particles, then this is a good tool. But even better when we're studying the possibility of the number of particles itself being ambiguous, meaning to say a statistical variable, a quantum mechanical variable. We can study states in which there's a superposition of different numbers of particles. No particle, one particle, two particles, and these have real meaning. An electromagnetic wave, a laser wave is like that. A laser wave is not a definite number of particles. It's a superposition. One, two, three, zero, one, two, three. And good. Okay. Now we're finished for tonight.
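The closing remark about a laser not having a definite number of photons can be written out; a minimal illustration (for a single mode with number states |n>, the standard coherent state, an aside not constructed in the lecture itself):

```latex
|\alpha\rangle \;=\; e^{-|\alpha|^{2}/2} \sum_{n=0}^{\infty} \frac{\alpha^{n}}{\sqrt{n!}}\;|n\rangle ,
\qquad
P(n) \;=\; e^{-|\alpha|^{2}}\,\frac{|\alpha|^{2n}}{n!} ,
```

a superposition of zero, one, two, three, ... quanta, so the number of particles is a statistical, quantum mechanical variable rather than a definite integer.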
(October 28, 2013) Leonard Susskind introduces quantum field theory and its connection to quantum harmonic oscillators. Gravity aside, quantum field theory offers the most complete theoretical description of our universe.
10.5446/15098 (DOI)
Stanford University. Harmonic oscillator. Let's just very briefly review the harmonic oscillator because it's going to come up time and time again in what happens from here on in. As I said, harmonic oscillators are ubiquitous. They occur all over the place. Anytime you have a system which has an equilibrium state, it could be classical or it could be quantum mechanical, but classical in particular. If it has an equilibrium state, then you displace it from equilibrium. Generally speaking, it will oscillate. It may oscillate with more than one frequency. For example, a violin string can oscillate in the fundamental frequency. The first harmonic, the second harmonic, and if you pluck it in odd ways, you can make some nasty noises, as I'm sure that Art knows. He's done it many times. The nastiness because of multiple overtones clashing with each other. Well, that's probably not quite right, but you know what I mean. Nevertheless, the oscillations of a violin string are a collection of oscillators of different frequency. But more important to us are the oscillators which comprise the oscillations of the electromagnetic field and other fields. We're going to come to that and we're going to do a little bit of quantum electrodynamics, just a little bit. But basically, if you have radiation, for example, in a cavity or in a wave guide or something like that, then the radiation, the electric and magnetic fields, oscillate with definite frequencies. Again, what would be the equilibrium configuration of the radiation field? I should start with something which has an equilibrium configuration. What is the equilibrium configuration of the electromagnetic field? The answer is no electromagnetic field. The vacuum empty space with no electromagnetic oscillations in it. No electromagnetic oscillations, in other words, just empty space with no electric and magnetic fields, is the equilibrium position of the electric and magnetic fields, if you like. If you give the system a knock by, for example, sending in some microwaves or something, the system will start to oscillate. And like the violin string, it can oscillate in all of its various frequencies. And those oscillations are called photons, or that's not quite right. The oscillations are described or describe collections of photons, and we'll come to that but not tonight. OK, so harmonic oscillators are terribly important in many ways. Incidentally, sound waves are also oscillations. A sound wave moving through a crystal is, again, a collection of oscillations. You can think of it wrongly as each crystal oscillating independently of every other atom, oscillating independently of every other atom. But that's not really the way it works. Each atom exerts a force on the other atom, and so when you displace one, it gives a knock to the next one, and waves go through the crystal. The waves, again, are described by harmonic oscillators. And the quanta of the oscillations are not called photons, they're called phonons, for example. So these are things we're going to study. But to prepare ourselves for that, we studied a bit the harmonic oscillator, and I want to expand on it a bit. Remind you what the rules are. We started with the harmonic oscillator with a Hamiltonian, which I wrote as p squared over twice the mass, and I set the mass equal to one just to make my life simple. You can go back and do it with a mass not equal to one. I believe in our book art we do it with a general mass, so we do it with, yeah, OK, so it's done with a general mass. 
And then plus omega squared, that's the spring constant, but it also happens that omega is the frequency of the oscillator, x squared over two, and we constructed creation and annihilation operators, a plus or minus, which happened to have been momentum plus or minus i omega x. Now notice, not finished yet, but notice that classically, if I were to take the two operators, a plus and a minus, and multiply them together, and then divide by two, I would construct the Hamiltonian. All right, so we'll come back to that in a minute, but in order to be consistent with the notations that are ancient, we'll put a square root of two omega here, and those are the a plus or minus, and if you work out classically or quantum mechanically the Hamiltonian in terms of a plus or minus, here's what you find. Oh, one more quantity. One more quantity is called n. n is equal to a plus a minus, as simple as that. And the last time we proved that the spectrum of eigenvalues of n was just the integers starting at zero. We showed that a plus and minus were raising and lowering operators, which raised and lowered the eigenvalues of n. All right, so this is an operator whose eigenvalues start with zero. No negative ones. Zero, one, two, three, dot, dot, dot. And finally, not finally, but moreover, the Hamiltonian is equal to omega times n. Now, I have left out h bar. I'm going to leave it out, but then I'm just going to show you where it goes. If we leave out h bar, the Hamiltonian is just omega times n. Now, where does that come from? If you multiply a plus times a minus, clearly you get something which looks like this. There's two downstairs, that's this two, and there's an omega downstairs. That's why I have to compensate it with an omega here, omega n. Now, quantum mechanically, there's a little correction, and the correction comes because the p's and x's, and therefore the a plus and a minus, don't commute. And that gave us an n plus one-half. The one-half is called the ground state energy, or the vacuum, well, if it was a field theory, we would call it the vacuum energy. If it is a simple harmonic oscillator, we call it either the zero point energy or the ground state energy. And it's really a consequence of the uncertainty principle that you can't have both p and x simultaneously being zero. And since the Hamiltonian involves p squared plus x squared, there's no way they can cancel each other. So there's always going to be a residual bit of ground state energy there. You don't lose very much if you just cross it out and forget it, because an additive constant in the energy of a system really doesn't change its equations of motion. So you can ignore it, but it is there for some purposes. All right, what I wanted to do was to show you, is there anything else? Yeah, there's one other important point, or equation, that went into showing that the A pluses and A minuses were raising and lowering operators. It was the commutation relations of the A plus and A minus. That's the rule: if you see two things, commute them. Sometimes, of course, you get garbage. But the commutators are A minus with A plus, just equal to one. Had I not put the square roots of two omegas down there, the commutator would have been something a little messier. And so really the reason for dividing by square root of two omega here was to get rid of anything on the right-hand side, to divide it away.
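Collecting the definitions just described, with the h-bars put back in the places indicated (a summary in the lecture's notation, mass set to one; setting h-bar to one recovers the expressions on the board):

```latex
H = \frac{p^{2}}{2} + \frac{\omega^{2} x^{2}}{2} , \qquad
a^{\pm} = \frac{p \pm i\,\omega x}{\sqrt{2\omega\hbar}} , \qquad
N = a^{+} a^{-} ,
\qquad
[\,a^{-},\,a^{+}\,] = 1 , \qquad
H = \hbar\omega\left(N + \tfrac{1}{2}\right) , \qquad
N\,|n\rangle = n\,|n\rangle , \quad n = 0, 1, 2, \dots
```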
This commutation relation, together with the definition of N, tells you that N is quantized in integers, and the connection with the Hamiltonian is such that the Hamiltonian, or the energy levels, are these. Yeah? Last time you were going to explain why it went down to state zero, and you said to think about it. Yeah, why it stops at zero? No, why zero is included. Okay, we're going to do it. Yeah. Okay. What we're going to do is we're going to demonstrate explicitly what the ground state is, that there is a ground state, and we'll find the ground state by actually solving the Schrodinger equation. And we'll see that there definitely is a state with zero. So what is a wave function? A wave function in this context, let's just put parallel to this very tricky and clever operator structure here. Let's write down the wave function description of the same thing. State vectors are replaced by wave functions, which are functions of position. If we're talking about energy eigenstates, we could label the wave functions with a little n down here, indicating which integer they correspond to. The ground state, for example, will be psi zero. The first excited state would be psi one and so forth. So state vectors become wave functions. Operators, X, the operator X becomes just multiplication by the coordinate X. I'm trying to remember to use capital letters for operators and lower case letters for just numbers, lower case. Okay, the momentum is replaced by minus i d by just derivative with respect to X. Again, there's an H bar in there if you want to keep track of H bars. Oh, if you do want to keep track of H bars, here's where it goes, right here. And that's the only place where you have to modify what I've done here. Yeah, I guess maybe also here we want to put an H bar downstairs here, I think. And you can always figure out where the H bars go by using dimensional analysis. Using dimensional analysis and remembering the units of H bar will tell you uniquely where to put the H bars. Okay, so there's momentum. And let's check and see if there is a solution of the equation A minus on psi is equal to zero. In other words, is there a state? You know, I can answer your question faster without, let me answer your question quickly without then come back to calculating. We know that there has to be a state which is the bottom state. We can't keep going down and down and down for the simple reason that the Hamiltonian is positive. It's p squared plus x squared, p squared plus omega squared, x squared. An operator like that has no negative eigenvalues. Just as classically it's either positive or zero. Quantum mechanically it also can't be negative. And so there's got to be a bottom. There's got to be a floor to the tower of energy levels. Okay, how can we have a floor? The only way we can have a floor is if the bottom state, let's call it O, let's call it ground state. It's a ground state. It's annihilated by the annihilated means that you get zero when you hit it with A minus. If you didn't get zero, whatever you got on the right hand side would be a state of one unit lower energy. There is no state with one unit lower energy. The only way for that to happen is for this to just give zero. Now once we know that A minus on O gives zero, then we can look at N. So let's look at N now. What does N do when it hits the bottom state? Well, it is by definition A plus A minus on O, but we already know that A minus on O gives zero. So this is zero. It's an eigenvector of N with zero eigenvalue. 
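Before going on, a quick way to see these statements concretely is to truncate the oscillator to its lowest few levels and write a minus as the familiar matrix with square roots of the integers just above the diagonal. This is a numerical sketch, not something done in the lecture; the truncation only spoils the commutator in its very last diagonal entry.

```python
import numpy as np

n_max = 8                                    # keep the lowest n_max levels
a_minus = np.diag(np.sqrt(np.arange(1, n_max)), k=1)   # lowering: a-|n> = sqrt(n)|n-1>
a_plus = a_minus.conj().T                    # raising operator, the Hermitian conjugate

N = a_plus @ a_minus                         # the number operator
print(np.round(np.diag(N)))                  # 0, 1, 2, ..., n_max-1

comm = a_minus @ a_plus - a_plus @ a_minus   # [a-, a+]
print(np.round(np.diag(comm)))               # 1 everywhere except the last slot (truncation artifact)

vacuum = np.zeros(n_max)
vacuum[0] = 1.0                              # the state called |0>: unit length, not the zero vector
print(np.linalg.norm(vacuum))                # 1.0
print(np.linalg.norm(a_minus @ vacuum))      # 0.0, annihilating the vacuum gives the zero vector
```

The last two lines are the earlier point about the vacuum: the state named |0> has length one, while the result of hitting it with the annihilation operator is the genuine zero vector.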
So there must be an eigen- the bottom state has to have zero value of N. That's an abstract argument, but yeah. So if zero is eigenvector, then shouldn't there be an eigenvector of zero on the far right hand side? Sorry. This zero here, this state, and this are two different things. That's the eigenvector. That's the eigenvector and it's normalized to one, and it's not zero in any sense. It's just that its name is zero. You know, if I called my friend, Arthur, zero, that wouldn't mean art wasn't here or it didn't exist. It would just mean being unpleasant. I'm calling him zero. Yeah, you can write zero as a ket, but you have to understand it's not this zero. It's just zero. Zero times anything is zero. So the fact is that it's a vector on the left hand and right hand side, so you can put the vector in there and it's just a zero vector. Thank you. But you must not call it this vector. Whatever name you give it, you've got to give it some, and the tradition is just not to write it as a vector, but just to call it zero. In the abstract theory of vector spaces, you introduce a vector called zero and it's sort of special. It's the only one that doesn't have a ket sign. But as you say, it is just another vector. Or rather, a special one. Yeah. Okay. The bottom of the tower, the bottom of the energy spectrum has to be energy h bar omega times one-half with n equal to zero, big n equal to zero, also little n. But let's see if we can see how far we can go in calculating all of the wave functions. Can we calculate the wave functions themselves? Now, you might ask, I can write a Schrodinger equation. I just use for p in the top equation. I use for t, d by dx. For a time-independent Schrodinger equation, the right-hand side is just the energy eigenvalue times psi. Question. Is it really true that you can only solve that equation? In fact, we could write down the equation. Let's write down the equation. Just time-independent Schrodinger equation. It's minus d by dx squared, oh boy, ordinary derivative, minus one-half. The one-half is the half in front of the Hamiltonian there. P squared is d second by dx squared psi of x plus the potential energy plus omega squared x squared times psi of x. And if this is the time-independent Schrodinger equation, then what I want to do is to solve the equation that this is equal to, let's call it, E for the eigenvalue for energy times psi of x. That's the same as writing p squared plus omega squared x squared on a vector is equal to E times that vector. OK. Now, if you know anything about differential equations, you'll know this is a second-order differential equation. You can always solve it for any value of the energy here. But what will happen for most values of the energy is the wave function will not go to zero at infinity. Even worse, it will exponentially explode and get big. It won't be normalizable. It will not have a finite probability under it. There's no way to normalize it. If you try to normalize it, it'll just be zero because the integral under the square of the wave function will be infinite. To get rid of that infinity, you'll have to divide by it, and the wave function will just be zero everywhere. So the rule is, one more rule, and we've talked about this last quarter, three quarters ago, whenever it was, is that what you mean by an honest vector here is one whose square integral, whose integral, the square of, which is square integrable, integral psi star psi equals one. OK. There were some exceptions to this. 
For a particle moving on an infinite line, sometimes we introduce momentum states and position states, which are a little weird in this way. But we would like the total probability to be finite. It's certainly not exponentially exploding. So this is one additional restriction, and it's when the wave functions are restricted in this manner, when they really form an honest and good vector space, then that's when the rest of this applies here. It only applies if we're interested in the square integrable wave functions, which we are. And so the point is that when we write down the Schrodinger equation, there are certain definite values of energy which will have square integrable wave functions. Those are the true honest eigenfunctions. If the wave function blows up and goes to infinity, we just throw them away. We say they're not in the space. OK. Let's see if we can find psi naught, the ground state wave function. We could write the original Schrodinger equation for it, but we can be smarter than that. Instead of writing h on psi equals e psi, let's write a smarter equation, a simpler equation. Let's use the fact that we know that the annihilation operator, the lowering operator, excuse me, the lowering operator on the ground state, I don't need to call this psi naught, just naught, gives zero. OK. What is a minus? a minus, just want to get my signs right. a minus is proportional to p minus i omega x. Now, there's some factors in the denominator there, but we don't care about them, because we're just going to write that this times psi of x equals zero. And numerical factors like square roots of omega, they don't matter in this equation. Just multiply through by them. And now, remember what p is. p is minus i d by dx. minus i d by dx. minus i omega x times psi of x equals zero. Now, we have a much simpler equation than we had before. It doesn't even have any second derivatives in it. It only has first derivatives in it. So we can try to solve it. The trick for solving an equation of motion like this, anybody know what the trick is? There's only a finite number of tricks in solving differential equations. There's a handful of tricks. What's that? Separation of variables. No, there's only one variable. We're going to separate. That's it. Wrong hand. Right. OK. One very standard trick for linear equations like this is to write the wave function as the exponential of something else. Now, this is just a matter of experience. I can't tell you how to guess that. Yeah, you can guess it because when you multiply through by dx, you get x dx. So the equation comes out as a derivative over a, you know, right, but it's a deformable order. Yeah, exactly. All right, so this happens to be the kind of equation that you solve. You can solve it in many ways, but one way of solving it is just to write psi of x is equal to e to some f of x. And try to figure out what f of x is. All right, so let's try it. First of all, of course, we can get rid of the i's here. i and i factor out. Furthermore, we can get rid of the minus sign because both of them have minus signs. And now what's the derivative of psi of x? To calculate the derivative of psi of x, we simply differentiate and we find that let's call it psi prime, which means the derivative is equal to f prime. That's the derivative of f times e to the f of x. To differentiate an exponential, you just differentiate the thing that, in the argument of the exponential here, and multiply again by f of x, e to the f, e to the f of x. 
Okay, that's psi prime, that's this here. And then we add to that, let's, we want to add to that plus omega x psi of x, which means we're going to add to this plus omega x e to the f of x. Now you see what Michael was saying, that the e to the f of x factors out. We can get rid of it. And that's where the trick, that's why the trick was done in the first place. Let's get rid of all of this here. And our equation is just df by dx plus omega x equals zero. That's it, equals zero. The df by dx came from here, plus omega x; they both multiplied psi, so we got rid of psi, and that's our equation. This is really easy to solve. A function whose derivative is a linear thing, what kind of function is that? A quadratic thing. So the solution is quite clearly going to be proportional to x squared. In fact, it's going to be f of x is equal to one half omega x squared, plus a constant; let's put the constant in there. And that's all. Minus, minus, minus, thank you. Minus one half omega x squared. That tells us that psi of x is equal to e to the minus one half omega x squared. What about the constant? Well, we can add it in, but that just puts a numerical constant in front of the exponential. Since I haven't bothered worrying too much about normalizing it, I'm not going to bother with the constant. The numerical constant out in front is there. It's got some square roots of omega in it, but it's not terribly interesting for our purposes. This is psi naught of x. And notice that it goes to zero very, very fast with x. e to the minus x squared. It's a Gaussian. It's a bell-shaped curve that goes to zero as e to the minus one half omega x squared, and is very, very square integrable. If we square it and integrate it, it's extremely convergent. And so we've succeeded in finding it; now we can go back and check that the original Schrodinger equation is satisfied. In other words, we could plug this wave function into the Schrodinger Hamiltonian up above and calculate what the energy eigenvalue is. But we already know what it is. It's going to be omega times one half. We know that from all the algebraic tricks over here. So if we stuck this function back into the Schrodinger equation, we would discover that E is equal to omega times one half, just by putting it in, plugging it in, and doing it. So we know the ground-state wave function. We have one eigenvector of the energy. We see what its physical properties are. It's concentrated near the origin. It's very smooth because a Gaussian function is very smooth. It's concentrated near the origin. It does what you might expect a ground-state wave function of a harmonic oscillator to do. It just sits at the origin. It sits very close to the origin. Or at least by close now, I mean that it's concentrated near the origin. And yeah, it's the ground state. What about the first excited state? I'll do the first excited state for you. It's almost trivial. You can go home and have a lot of fun calculating the next 50 states. It's sort of mindless fun. But how do you do it? It's how you do it that's more interesting than what you get. Well, they're both interesting. But you do it by saying, look, I know what the first excited state is abstractly. The first excited state is equal to a plus times the ground state. I know what the ground state is, and I know what a plus is. So let's just apply a plus to the ground state. Here's the ground state. A plus is p plus i omega x. And that will tell us what the first excited state is.
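Before moving on to the first excited state, the ground-state result just obtained can be checked mechanically. A small sympy sketch (h-bar and the mass set to one as in the lecture, normalization constant dropped) that solves the first-order equation and then reads the energy off the Hamiltonian:

```python
import sympy as sp

x = sp.symbols('x', real=True)
w = sp.symbols('omega', positive=True)
f = sp.Function('psi')

# the annihilation condition:  d(psi)/dx + omega*x*psi = 0
ode = sp.Eq(f(x).diff(x) + w*x*f(x), 0)
print(sp.dsolve(ode))                        # psi(x) = C1*exp(-omega*x**2/2)

# plug the Gaussian into H = -1/2 d^2/dx^2 + omega^2 x^2 / 2 and read off the eigenvalue
psi0 = sp.exp(-w*x**2/2)
H_psi0 = -sp.Rational(1, 2)*psi0.diff(x, 2) + w**2*x**2/2*psi0
print(sp.simplify(H_psi0/psi0))              # omega/2
```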
All right, so in terms of wave functions, psi 1 of x is equal or proportional to p plus i omega x. Now, p is minus i d by dx plus i omega x upon e to the minus 1 half omega x squared. So all we have to do now is to calculate a derivative. But you know, I already did this. I already did this. We did it with the opposite sign here. We did it with a minus to get zero. If I do it with the opposite sign and get zero, that means if I do it with the sign here, I'm just going to get twice the answer that I would get. Go back. I did this operation on here and got zero. That's the way I solved the equation. All right, so if this plus this acting on this gives zero, then it must mean that this term and this term give the same thing apart from the sign. But once I know that, then I know that when you change the sign over here, instead of having them cancel, they'll just double the answer. So this is just going to be twice i omega. That's not so interesting. It's just a numerical factor. It happens to be imaginary, but who cares? But it has x times e to the minus 1 half omega x squared. So structurally, it's different. It's not just a Gaussian function. It's the Gaussian function times x. What does that look like? That looks like this. Again, it's very small far away. The fact that there's an x here doesn't make it big far away, because it's totally overwhelmed by the e to the minus x squared. But it has to be minus. It has to be minus when x is negative. Yeah, good. It's negative? Yeah. Well, it's twice an imaginary number times omega. But whatever it does on one side, it has the opposite sign on the other side. It's an odd function. It's odd, meaning it changes sign when it goes through the origin. Right. So let's make it negative on the side. It goes down, and then comes up, 0, and then goes to 0 again. It's odd, meaning to say that when you reflect it, it changes sign. It's anti-symmetric. You would say it's anti-symmetric. The probability, which is the square of this, is symmetric. Notice that the probability goes to 0 at the origin. If you're in the first excited state of the harmonic oscillator, the probability that you find the oscillator at the origin is 0. Interesting fact. That's the first excited state. It has the ground state had no nodes. A node means that the wave function is 0 somewhere. The ground state had no nodes. The first excited state has one node. Each time you act with A+, you can just spend a little bit of time with it. You'll see how it works. Each time you act with A+, you make a higher order polynomial in front of e to the minus omega. Each time you hit it with another A+, it'll give you x squared. x squared plus another term, x cubed after a while. Each time you'll get a higher and higher order polynomial, and each time you'll add a node. So the next wave function, the second excited state, looks, has two nodes. The third one has three nodes. And if you work out the 17th, it looks something like this. Each time you act with A+, you wind up pushing the wave function out more. For example, 0 of the ground state was concentrated near the center. The factor of x here tended to push it out a little bit. Why does it push it out? Because x is bigger far away than it is nearby, and so it pushes it out. After you've done 17 factors, or maybe it's 24 or whatever, you'll find that the wave function is pushed out near the wings, small near the origin, oscillating, and then plop goes to 0. Either symmetric or anti-symmetric. Either symmetric or anti-symmetric. 
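The step of hitting the ground state with a plus, which acts up to constants as p plus i omega x, can be checked the same way (again a sketch with h-bar and the mass set to one, overall constants dropped):

```python
import sympy as sp

x = sp.symbols('x', real=True)
w = sp.symbols('omega', positive=True)

psi0 = sp.exp(-w*x**2/2)

# a+ acts, up to constants, as  p + i*omega*x  with  p = -i d/dx
psi1 = -sp.I*psi0.diff(x) + sp.I*w*x*psi0
print(sp.simplify(psi1))                     # 2*I*omega*x*exp(-omega*x**2/2)

# and it is an energy eigenstate one step up the ladder
H_psi1 = -sp.Rational(1, 2)*psi1.diff(x, 2) + w**2*x**2/2*psi1
print(sp.simplify(H_psi1/psi1))              # 3*omega/2
```

The factor of 2 i omega out front is the uninteresting constant mentioned above, and the eigenvalue 3 omega over 2 is what omega times (N plus one half) gives for N equal to one.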
And as you go up, you get more and more wiggles, and the whole thing gets pushed out further and further. What do the wiggles mean? What's the implication of the wiggles? Wiggles mean momentum. What's the implication of it being far away from the origin? Potential energy. So as you keep exciting it, you create more and more potential energy by pushing the wave function out where x squared is large, but at the same time, you create more and more momentum as that oscillator swings through the origin. What's happening is the oscillator is trying to do what classical physics tells you what ought to do, that the higher the energy, the more likely it is to be found far away, but it also may be found close, and if it's found close, it ought to have a high momentum. So that's why it tends to oscillate very quickly near the origin and more slowly far away, but that's the basic physics of it. I don't know if I drew it well. It should oscillate more rapidly near the center, and less rapidly far away, just indicating the fact when it swings through the origin, of course it has more momentum. Okay, those are the first two wave functions. Yes? So it seems like as n becomes very large, the correspondence principle does not hold for the simple harmonic oscillator. What would you say the correspondence principle is? That when n becomes very large, it looks like a classical. Well, in fact, what you have to do to see the classical motion is you have to take wave packets. You have to superpose many energy levels. For example, if you take the ground state, start with the ground state. That's a nice smooth wave packet. Now, displace it. Displace it off to the side here. It's no longer the ground state. In fact, it's not even an energy eigenstate altogether. It does have an average energy, but now take the time-dependent Schrodinger equation. The time-dependent Schrodinger equation has every energy level oscillate with a different frequency. The result is that this wave function is not time-independent, and what happens, it starts to swing back and forth. It starts to swing back and forth. As it swings near the center, it gets lots and lots of wiggles, but it stays of the same general shape. It wiggles, and then it comes out this side after a while, nice and smooth again, and then it swings back and forth and back and forth. It looks very much like a classical oscillator, and in fact, the higher the energy, the more it looks like a classical oscillator. Just because the higher the energy and the bigger the swings, the smaller the ratio of the uncertainty to the size of the swing. When the swing gets very big, it sort of dominates the spatial structure of it. Okay, that's the basic physics of the harmonic oscillator. You can work out the second level, third level. By the time you get to the fourth level, you won't want to see it anymore, but it's straightforward. Good. Yeah. Are you drawing on the second level? Yeah, good. Shouldn't it go to zero at zero? Is it going to be x squared? No, I don't think so. I don't think so. I think there's a constant term also. Yeah. It's not just x squared times. Right. In the Hamiltonian you wrote originally, they're left there already implicitly contained the normalizability. Yeah. I'll tell you where it's assumed. You want to know where it's assumed. You want to know where in this over here it was assumed. It's assumed in the fact that a plus and a minus are Hermitian conjugates of each other. 
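A brief numerical aside on the ladder of states and the node counting described above: none of it depends on the operator tricks. You can put the Hamiltonian on a grid and diagonalize it directly; a rough sketch (finite differences; h-bar, the mass and omega set to one; the box size and grid spacing are arbitrary choices):

```python
import numpy as np

npts, half_width = 1200, 5.0                  # grid of npts points on [-5, 5]
x = np.linspace(-half_width, half_width, npts)
dx = x[1] - x[0]

# H = -1/2 d^2/dx^2 + x^2/2, second derivative by central differences
kinetic = (2*np.eye(npts) - np.eye(npts, k=1) - np.eye(npts, k=-1)) / (2*dx**2)
H = kinetic + np.diag(0.5*x**2)

energies, states = np.linalg.eigh(H)
print(energies[:5])                           # close to 0.5, 1.5, 2.5, 3.5, 4.5

# count the nodes (sign changes) of the lowest few eigenfunctions
for n in range(4):
    v = states[:, n]
    v = v[np.abs(v) > 1e-6*np.abs(v).max()]   # ignore the numerically tiny tails
    print(n, int(np.sum(v[:-1]*v[1:] < 0)))   # 0, 1, 2, 3 nodes
```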
If you were to take a wider class of wave functions that were not square integrable, then a plus and a minus would not be Hermitian conjugates of each other. That's where it's assumed. And I did assume that. Right. So, right. Is there a way in general of recognizing when the Hamiltonian written? Yeah. It's a Hermitian. And here and there, sort of sporadically, and this or that, sometimes you encounter a solvable example. And when they're solvable, they can usually be solved by these kind of tricks. But they're very rare in the space of possibilities. And by some fortunate accident, the harmonic oscillator is one of them, and the idealized hydrogen atom, non-relativistic hydrogen atom with a point nucleus, happens to be one of the solvable examples. Okay. Now, I have in my notes two other subjects for tonight with the idea of discussing atoms a little bit further, but more than atoms, discussing everything that there is about quantum mechanics and field theory. Two things, spin and boson fermion, the difference between what bosons and fermions are. I have, well, I guess I'll go according to my notes here. I'll follow my notes. I have spin first and then bosons and fermions. So let's talk about half spin. The spin of the electron or the spin of the proton, which happens to be half spin, which means that the spin or the angular momentum of an electron at rest, at rest it has no orbital angular momentum, it has no r cross p, all it has is spin. So spin is a kind of angular momentum that's attached to a particle. It's your choice whether you want to think of it as just an abstract concept or whether you want to think of it as a tiny little thing which is literally spinning, spinning about an axis. And when I say it has no momentum, I mean to say it has no center of mass momentum. And it's a matter of choice and taste whether you want to think of it as literally being a little spin or just an abstract property that you, a mathematical property that you attach to the electron. But whatever it is, it is angular momentum. It transforms under rotation. When you transform, when you rotate coordinates, the state transforms. Now we've already been through this. We spent most of a quarter talking about the half spin system. So I'm going to go through it quickly. You should go back and read the early parts of the previous lectures. We talked about a spin having two states, up or down. But then we said, wait a minute, if it can be up or down, it can also be left or right. But if it can be left or right, it can be in or out. And we worked out the states describing those things, the operators which described the components of the spin. And I didn't tell you, but we were talking about the angular momentum of a spinning particle, or of a half spin particle. So let's see how we can understand that now that we've talked about angular momentum. The thing that characterizes angular momentum is, first of all, that there are three components of it. But just the fact that there are three components of it is not enough. It must be that those three components are literally associated with the x, y, and z directions of space. You could have them connected with something else, some other internal dimensions or whatever. But once you have matrices, once you discover that your system is being described by matrices, three of them which have the commutation relations of angular momentum, and they are angular momentum. Actually, there's no choice about that. 
As long as the three matrices you're talking about are literally associated with an x and a y and a z direction of space. So let me write down again what the basic commutation relations of angular momentum are. L i with L j; let's just write one of them out. Let's take L z with L x. L z with L x is just equal to i times L y. And the other two are just cyclic permutations. You may have noticed that these commutation relations are closely related to the commutation relations of the Pauli matrices. So let's write the three Pauli matrices and check whether this is true of the three Pauli matrices. Here they are. Sigma z is equal to 1 and minus 1 on the diagonal, 0 and 0 off the diagonal. Now, we've made a choice. We've made an arbitrary choice. We've decided to take sigma z to be the diagonal matrix. In other words, we are working in the representation of the z component of the spin. We've chosen to take that to be diagonal. And of course, the eigenvalues, what are the eigenvalues of this matrix? The eigenvalues of this matrix are plus 1 and minus 1, representing up and down. Okay, then there was sigma x. Sigma x was 1 and 1 off the diagonal, 0 and 0 on the diagonal. And finally, sigma y is equal to minus i and i off the diagonal, 0 and 0 on the diagonal. There are ambiguities in these. You can rotate them and change them, but nothing important happens. There's no minus in sigma x? No minus in sigma x. All three are Hermitian. A real matrix, meaning to say if its entries are real numbers, if it's Hermitian, it's symmetric. That's this and this. If a matrix is Hermitian and imaginary, it must be anti-symmetric. That's this. Okay, let's check the commutation relations. Let's multiply sigma z by sigma x. And that's the product of these two matrices: this times this gives 0, this times this gives 1 up in here, and down here you get minus 1 and 0. That's not quite sigma y, but it's almost sigma y. It would be sigma y, I think, is it i times sigma y? Or minus, it's i times sigma y. Now, that's nice, but it's not the commutator. The commutator is this minus sigma x times sigma z. That's the commutator, and that second product is sigma x, with entries 0, 1, 1, 0, times sigma z, with entries 1, 0, 0, minus 1. And if you go through it, you don't get zero. What do you get? Exactly the same thing you got over here but with a minus sign, minus i sigma y. So we have not quite what we wanted. We have instead the commutator of sigma z with sigma x. It's twice i sigma y. Does this mean these are not angular momentum, or does it mean that we're not talking about rotations? No, it just means we've normalized the sigma matrices incorrectly. Here's the trick. Take s and define it to be sigma over 2 for each component. For each component, take s to be half the Pauli matrix. It seems a little bit odd, but nobody told us what the angular momentum was supposed to be. So let's just take it and see what we get. Then we get, let's put a 2 in here, a 2 in here, and that altogether divides by 4, right? I put a 2 here, a 2 here. The whole thing divides by 4, and 2 divided by 4 is 1 half. So yes, half the Pauli matrices satisfy exactly what they're supposed to satisfy. S z with S x is equal to i S y. And likewise for the other commutation relations. So we see this little system of 2 by 2 matrices acting on two-component vectors, up or down, is actually representing a very primitive and simple angular momentum system. It's a thing which you can think of as attached to the particle. Attached to the particle is a little spin, and a little spin is simply described by the system here. But now we can answer the question, what are the eigenvalues of, let's say, the z component of the angular momentum?
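Here is the same bookkeeping done numerically (a small sketch; the matrices are exactly the ones written out above, and the factor of two is the whole point):

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def comm(A, B):
    return A @ B - B @ A

# the Pauli matrices close with an extra factor of 2:  [sigma_z, sigma_x] = 2i sigma_y
print(np.allclose(comm(sigma_z, sigma_x), 2j*sigma_y))   # True

# halving them gives genuine angular momentum:  [S_z, S_x] = i S_y, and cyclically
Sx, Sy, Sz = sigma_x/2, sigma_y/2, sigma_z/2
print(np.allclose(comm(Sz, Sx), 1j*Sy))                  # True
print(np.allclose(comm(Sx, Sy), 1j*Sz))                  # True

# eigenvalues of S_z, the allowed values of the z component of the spin
print(np.linalg.eigvalsh(Sz))                            # [-0.5, 0.5]
```

The last line is the plus and minus one half that the next paragraph reads off.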
Let's go to the z component of the angular momentum. The z component of the angular momentum, let's put the 2's in now that we know what we want, is sigma z over 2, with one half and minus one half on the diagonal. Notice that the eigenvalues of the z component of angular momentum are a half and minus a half. When we were working out angular momentum, I told you, and I showed you, that there were two kinds of multiplets. There were the half spin multiplets. Let's put 0 over here. Sorry, there were the integer spin multiplets, where the magnetic quantum number m was 0, 1, 2, 3, minus 1, minus 2, minus 3, and so forth, with the central point being at 0 here being one of the possibilities. Now remember, the raising and lowering operators shift you by an integer, but that doesn't say you start at 0. It says either you start with 0 or you start at a half: a half, minus a half, and then 3 halves, minus 3 halves, depending on the overall angular momentum. If it's spin a half, then there's only a half and minus a half. And that's what we have here. We could have guessed all of this by saying, look, we seem to have the mathematical possibility of having half spin. That means there are two states. Once we know there are two states, we can say the angular momentum of this one must be plus a half, the z component must be plus a half, that one minus a half. And we can go and start building the structure up the way we did a year and a half ago or a year ago, whenever it was. Wasn't that long ago? I don't even know, but OK. So every particle has a spin. The spin may be 0. If the spin is 0, it's just called a spin 0 particle. The Higgs boson is a spin 0 particle. The alpha particle is a spin 0 particle. Yes, nuclei can be particles. They are particles. They have no angular momentum in their rest frame. On the other hand, some nuclei do have spin. But some atoms have spin. But the important thing is, or what we're going to concentrate on, is the half spin particles: the electrons, the muon, neutrinos, quarks. All the ones that really we think of as fundamental, or some of the ones we think of as fundamental, have half spin. OK, all of this means that there are two kinds of angular momentum. Orbital angular momentum, and this is true in classical physics, too. Orbital angular momentum is the angular momentum of the center of mass of a system. In the center of mass frame, if the system is spinning, then it has spin. The same is true here. And what is the total angular momentum? The total angular momentum is the sum of them. It's a vector. So it's the vector sum of the two kinds of angular momentum. And it's true here, too. So I'll write down the equation, but we're not going to do very much with it. The total angular momentum is called J. Again, it's a notation. I don't know where it comes from. It's, again, a vector. And it's equal to the orbital angular momentum, L, which is r cross p, plus the spin, plus s. I just tell you that because you'll see that all over the place when you're doing quantum mechanics, particularly in atomic physics; you'll see that the total angular momentum is called J. And it's L plus s for a particle. Any questions about spin? I don't want to do too much tonight. So we can stay with spin for a while, or we can. I'm just wondering, if you're given some system with a J, how do you identify s? Are you usually given something? Usually you're given the knowledge that the spin is a half, for example, and that the orbital angular momentum could be 1 or 0.
If the orbital angular momentum is 0, then the only angular momentum it has is the spin. If the orbital angular momentum is 1, then we need some rules for what we get if we add an angular momentum 1 to an angular momentum of a half. The question is, what do you get? And I wasn't going to go into that now. The problem of the addition of angular momentum, it's a straightforward problem, but we won't do it tonight. I'd be happy to go through it another night. But there are rules, mathematical rules, for adding angular momentum of two systems. Basically, you have two systems. You have the spin system and the orbital system. Or you could just have two spins. Or you could have two different orbital angular momentum systems. The net system has its own angular momentum, and there are rules, quantum mechanical rules for adding them. Let's not do it now. Yeah? You said we could think of this, not the orbital, but the spin has been either actually like the physical spin, or we could think of it in a more theoretical way. Yeah? What is the experiment background or whatever that made them introduce this notion of spin? Well, the first experimental background came partly from spectroscopy and partly, more importantly, from the periodic table. So that's what I want to get. I want to get to the periodic table a little bit and show you how spin comes into it. There was Pauli who realized that to understand the periodic table, the electron had to have an additional property. Okay, that brings us to the issue of fermions and bosons, which is closely connected to the Pauli exclusion principle, at least in the case of fermions. All right, let me just motivate it in terms of chemistry a little bit. It was one of the motivations. The other came from spectroscopy, but it's not as clear. It's not as obvious. If you remember, the angular momentum squared, L squared, is equal to L times L plus 1. Is that what I want? Yeah. And when we looked at the energy levels of the hydrogen atom solutions, which we didn't do, but I just told you a fact. I told you a fact that there's some extra degeneracies. Each one of these has two L plus 1 states, but there were additional degeneracies. I'll redraw them for you. The horizontal axis is L, the vertical axis is energy. These are the energy levels of the hydrogen atom. And down at the bottom is the ground state of the hydrogen atom with the lowest energy level of it all. And it's an S wave meaning to say it has no angular momentum, no orbital angular momentum, the solutions of the Schrodinger equation. The energy is, in fact, negative. You start at some negative value, minus 13 points, something electron volts. But I'm just putting it here arbitrarily at zero somewhere up here. OK, so that's the first level. Now, at the L equals zero, there are states with, this is the state with no nodes in the wave function. Then there's one with one node, two nodes, three nodes, and so forth. So there are more states here. I'm just drawing them schematically, completely schematically. OK, there are more. In fact, they get closer and closer together as they get up here near E equals zero. But that's not too important for us. Now we go to L equals one. And at L equals one, there's the lowest energy state of L equals one. And each L equals one state has three components. Two L plus one has three components. And the ground state of L equals one has more energy than the ground state of L equals zero. Why? Because it's got angular momentum. So of course it has some more energy. 
So it's going to be up a little bit higher. In fact, this whole collection of levels here is going to be raised up because of the angular momentum. And it's raised up in a rather surprising and, well, I suppose you could say elegant, I'm not sure it's elegant, it is what it is, way. All right, so the first L equals one level occurs at the same place as the second L equals zero level. It's degenerate with it. So it's three states over here. And then the pattern continues. The next three states are degenerate with this and so forth. Degenerate means it has the same energy. Then you go to L equals two. I'll use a different color. You go to L equals two, which is over here. And what you find when you solve the equations, this is supposed to be at the same level here, is you find again the first L equals two state is higher than the first L equals one state, but it happens to occur exactly where the second L equals one state is, which is exactly where the third L equals zero state is. It's a mouthful, but you know what I mean. You go up to here. And how many states are there? Five. One, two, three, four, five. One, two, three, four, five, and so forth. And that's the pattern. All right, now you count the number of states. How many states are there at each energy level? One here. Four here: zero angular momentum plus angular momentum one makes four. Nine here. Sixteen here. Evidently n squared states at each level. If you look at the way electrons fill the shells in the idealized, very idealized chemical description of the periodic table, which is up there, you see that, first of all, there was hydrogen, which has one electron. And when you look at helium, which has two electrons, you find out that it's consistent with an energy which would be what you would get if you put both electrons into the ground state. Things are a bit different because instead of having, let's go to helium. This is hydrogen. Let's go now to helium. Helium has a nucleus of charge two. Q equals two. Charge two. So it pulls on the electrons tighter. The whole structure is rescaled, but that's not important. The charge is bigger. That pulls the electrons in closer, but the basic structure is the same. And now you look at helium. You'll find out, first of all, there's a helium ion. Helium ion means one electron instead of two electrons. And you study the helium ion, and the helium ion looks exactly the same as the hydrogen, with the exception that you'll have to plug in twice as much charge. That spreads the energy levels out and rescales them, but same structure. So the helium ion looks exactly like the hydrogen atom. The ground state of the helium, the ground state of the helium ion, looks as if it's one electron in the ground state of the helium wave function. Then you put another electron in. What do you expect? Well, the natural expectation would be, if you put another electron in, the ground state of that would be to put that electron also in the lowest energy state. Why do anything else? If you want to keep the energy as low as possible, put them all into the ground state. So you try that. Second electron into the ground state wave function, and it works just fine. It gives you the right rough description of the helium. It's not quite exactly right because it's ignoring the interaction between the electrons. Just pretending the electrons are just experiencing the nucleus. Let's take that as a working approximation. Ignore the interaction between the electrons.
Put two electrons into the ground state of the helium wave function, and it works. It gives you the right energy for the helium. Well, what happens if you were to make a helium ion by putting in an extra electron, instead of the ion with only one electron? Incidentally, there's also an ion with no electrons. It's just called an alpha particle, a helium nucleus. Okay, one electron, two electrons, put three electrons in. What's the guess? Right into the ground state. Why anything else? But no, that's not what happens. What happens is it goes into the first excited state. So you put two electrons in the helium into the ground state, and the third one doesn't want to be there. It wants to get into the next excited state. Why? It could have saved energy. You could have had a lower energy if that electron went into the ground state. Okay, so along comes Pauli and says, I have an idea. My idea is that no two electrons can ever get into the same state. Something about electrons is such, I won't call it a repulsion. It's an exclusion principle. When an electron is in a state, you can't fit another one in. Somehow the mathematics won't allow it. But wait a minute, we already said we could put two electrons into the helium ground state. So is he talking nonsense? The Pauli exclusion principle doesn't work. It only works somehow when you put a third one in. Now, Pauli said that's not what's going on. He said, really, you can't put two electrons into the same state, but an electron has more properties than just its orbital motion. He said, I'm going to make up a new property, and the new property has to be such that the electron comes with two values of it. Two values of it. In that case, you can put the electron, one electron, into the ground state, and let's call it the up electron. Remember, the electron can now have two values of this property. We're going to call them up and down. We can call them left or right, or we can call them anything we want. But let's give them the new property, which Pauli eventually realized had something to do with angular momentum, and said: you can put two electrons into the same orbital angular momentum state, but only if you give them opposite spin. If one of them is put in the up state and the other one is put in the down state, then I can save my crazy exclusion principle. I can save my crazy exclusion principle if I assume that the electron has one more property in addition to its orbital motion, that it has a thing which I will call spin. Now, in fact, there were previous reasons to believe in an angular momentum like that, but they're not as transparent, and I won't try to get into them. That was his basic reasoning. The electron has another property, a two-valued property. Once he knew there was a two-valued property, he was off and going. He invented matrices to describe that two-valued property. Before you know it, he had the Pauli matrices, and he understood the whole thing. But let's go to the next atom. Let's see. We go to lithium. Lithium has three electrons. We would naturally expect for lithium, you do the same game. Now the nucleus has charge three, so it's much more attractive. Charge three, you put two electrons into the ground state. For the no electron case, the one electron case, the two electron case, lithium ions, it works perfectly well to put the electrons into the ground state. Not if there's no electrons, but if there's one or two.
And indeed, you find out that when you put the third electron in, into the lithium to make it neutral, it goes into the first excited state. It goes into the first excited state, meaning the state with angular momentum one. And that works out well. It works out; the energy levels of the lithium atom work out reasonably well that way. And let's go on beyond lithium. Well, you filled up the two lowest energy states. You have three states here; you could add another electron in any one of these states. Let's go back a step. Let's forget for a moment the spin. Let's talk about the atoms if there was no spin, but there was a Pauli exclusion principle. Let's try Pauli's old idea, except without the spin. What would you find? In helium, you would find one electron, am I saying this right, in the ground state. But then to keep the energy low, you could put the second electron in any one of four states. In any one of four states. If you didn't want to violate the exclusion principle, there would be four distinct helium atoms. Four distinct helium atoms, depending on which state you put the next electron in. That was not right. The helium ground state was unique. It did not have this peculiarity. What Pauli's idea was, no, you don't go to the second state here, and there are not four possibilities. There's just the helium atom with one electron and the helium atom with two electrons, both in the ground state. Now you come to lithium. Now you come to lithium, you put the first electron in. Forgetting spin, where can it go? First electron here. I'm getting tired as usual. What I'm trying to do is explain why there are eight different entries. Can you see this periodic table up there? The second row has eight entries, I hope. How do we understand those eight entries? One extra electron. We add an electron. We add the first two electrons and put them in the ground state, and we're finished with the ground state. Then how many more states do we have if we only occupy the first excited state? In other words, how many different ways can we make atoms? Well, we can put an electron in any one of four states in the first excited state, but if we're allowed to play with spin, then there's any one of eight possibilities. Two and six. The P orbitals are six. Two P six, two S two. There's a P orbital in each of three spatial directions, so there's two electrons in each P. There are four states at the first excited level; with spin, that's six electrons in the P's and the other two in the S. Yeah. Okay. Is it one with angular momentum zero and three with angular momentum one? I think that's it: one with angular momentum zero and three with angular momentum one. Yeah, that's the four, but if now you double it by the spin, it becomes eight. Yeah, you should have a total of eight, not two. Right, that's fine. Okay, that's how spin was discovered. That's how spin was discovered, and the fact that it really was angular momentum, that could be checked by putting the atom in a magnetic field. If the electron's additional degree of freedom was really angular momentum, that would mean that, since the electron is charged, the electron would be behaving like a little magnet. If it was really rotating, spinning, and it was charged, it would behave like a little magnet, like an electromagnet, a current going around in a loop.
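Before the magnetic-field check described next, the shell counting that was just assembled by hand can be written down once and for all; a small sketch (ignoring the electron-electron interactions, exactly as the discussion does) of how many one-electron states each hydrogen-like level holds:

```python
# states at level n: sum over l = 0..n-1 of (2l + 1) orbital states, doubled for spin
for n in range(1, 5):
    orbital = sum(2*l + 1 for l in range(n))   # 1, 4, 9, 16, the n-squared pattern above
    with_spin = 2*orbital                      # 2, 8, 18, 32
    print(n, orbital, with_spin)
```

The 2 and the 8 are the first two rows of the periodic table in this idealized picture; the 18 and 32 come up in the question about the longer rows further on, where the neglected interactions start to rearrange the filling order.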
If that were the case, and you put the electron into a magnetic field, preferentially it would lie either along the magnetic field or opposite to the magnetic field, with two different energy levels. Okay, so if you took the hydrogen atom with one electron, and now you put it in the magnetic field, the two possibilities for the electron spin, along the magnetic field and opposite to the magnetic field, would now have two different energy levels. So the two states of the hydrogen atom, which previously had both been the ground state, the ground state of the hydrogen atom with an electron in it, the electron pointing one way or the other way, had exactly the same energy in the first approximation, and now you put it in a magnetic field and you find that the energy levels are split. And they split exactly as if the electron were a little magnet because of its angular momentum. So the fact that it had angular momentum, that could be tested by putting it in a magnetic field. The fact that it had two levels, that was consistent with the Pauli exclusion principle and the fact that, or the assumed fact that, you can't put two electrons into the same state. Now this was all wild guessing. It was wild guessing that you couldn't put two electrons in the same state. It wasn't quite so wild that it was angular momentum because that could be tested experimentally. But the next step, which I think was Pauli, the one person that it wasn't was the person that the idea was named for, Fermi. Fermi-Dirac. It must have been Dirac again. It might have been Pauli; it was either Pauli or Dirac who had the idea that there were two kinds of particles. Particles which satisfied the Pauli exclusion principle, where you couldn't put two in the same state, and particles where you could put two in the same state. And not only that, it was preferential to put more than one in the same state. It was already known that photons, photons for our purposes now are just particles. It was already known that a classical wave of photons was simply a collection of a large number of photons all in the same state, all in the same quantum state. So it was known that there were particles that you could put into the same state. Einstein figured them out, and again it was named after Bose because Einstein figured it out and it wasn't Bose. Although Bose did some... Fermi was in the mix somewhere. What's that? Enrico Fermi, was he in that same group? No, he came later. And his contribution was different. The discovery of what is called the statistics of particles, the boson or fermion character, was Einstein and Pauli, I believe. Or it might have been Einstein, Dirac and Pauli. I'm not sure what the actual history was. And it was not the other people, although they made major contributions. I think it was Einstein who gave Bose the credit for the discovery. When you talk about photons, they're independent particles, but when we talk about electrons, we associate them with a hydrogen atom. What happens if you have just a stream of electrons? You can't put two of them in the same state. That's why you can't make a laser that lasers electrons. A laser creates lots of photons all in the same quantum state. You would have a pretty wild thing if you could get electrons into the same state and do the same kind of things with them, but you can't. You have a rather remarkable microscope that you could... you have electron microscopes, but this would be a rather superior electron microscope. So the angular momentum is L.
It's not related to rotation around the nucleus. It is. Of course it is. There is no nucleus, it's just L. Oh, then it's just r cross p. A particle moving in free space, just moving along a line, can have an angular momentum r cross p relative to some origin. Yeah, yeah. The angular momentum is relative to an origin. That's because r is relative to an origin. The definition of r is relative to an origin. Fixing an origin, you need to... And the whole idea of rotation is about some origin. The whole idea of rotation picks, I should have emphasized that in the beginning. Whenever you're talking about angular momentum, whenever you're talking about rotation, you're picking a special point to study it about. Now, in the case of an atom, it's natural to pick the nucleus to be the point that you... the point of symmetry. For a particle moving in free space, every point is a point of symmetry, rotational symmetry. So it's not... it becomes a less interesting thing. But for the atom, there's a natural center to talk about rotations about. I should have emphasized that. Yeah. So why are there two rows of eight elements and then two rows of 18? The 18 is twice the one. But why are there two... I don't know why. At some point, the whole picture breaks down and doesn't make any sense. And pretty quickly, in fact, the picture of shells and filling shells is more than a little bit naive. It completely overlooks the interactions between the electrons. Now, by the time you have a fairly large atom, there's basically as many electrons in the inner core as there are protons at the center. So to ignore the interaction of the outer electrons with the inner electrons makes no sense at all. So, I mean, the whole thing breaks down pretty badly. And then there's a zillion rules that I don't know. There's this impression that the more electrons you have, the bigger and the bigger the atom gets, but actually they don't get that much bigger. No, they don't get bigger practically not at all. The reason is because the more electrons, what happens is very simple. The more protons you have in the nucleus, the larger the charge. That tends to pull in the inner core of the electrons, the inner shells of the electrons. And what's left over, if you have, let's say, one valence electron, is one valence electron moving in what kind of field? The field that it's moving in is n protons and n minus one electrons. So that one extra valence electron basically thinks it's hydrogen and it's no bigger than hydrogen, really. It's essentially almost the same size as hydrogen. So, I think that's what you were saying. Atoms do not get bigger appreciably, they get very slowly bigger, but they don't get appreciably bigger as you add electrons. As you add protons and electrons at the same time. The other interesting thing that I ran into is that with hydrogen atoms, the reason that they stay as far apart from each other is because of electrical repulsion. Hydrogen atoms? Oh, there's also an important component from the Pauli exclusion principle. Well, that's what I was going to say. That's the exception. The rule is for most atoms, it's Pauli repulsion that keeps them apart. Okay. Where are we? Yeah, so we want to talk about what bosons and fermions are, I guess. This might be too much to answer, but is the exclusion principle a separate postulate? No, you have a backhand, yes. Now it is not. Now it's part of basic special relativity together with quantum mechanics. 
Quantum mechanics without special relativity, it's a postulate, but the postulate can be phrased in a more elegant mathematical way, and that's what we're going to do. We're going to phrase the postulate in a more elegant way. Once you introduce special relativity and you combine it with quantum mechanics, it's not a new postulate, it's a consequence of special relativity, which was Dirac's discovery. Okay. Instead of talking about one particle, let's talk about two particles. Well, let's go back to classical statistical mechanics for a minute. In classical statistical mechanics, or classical mechanics of a system of particles, if all the particles are the same kind of particle, the question is, do you treat them as identical particles in the following sense, in counting the number of configurations? For example, we put particles into boxes. The boxes could be little boxes in phase space, or it could just be boxes. Four boxes we can put particles into. Let's say we have two particles and they're both the same kind of particle. So here's a configuration. We can put two of them both into the same box. That's clearly a unique configuration. Or I can put one particle in one box and the other particle in the other box. Is that one configuration or is it two configurations? If these particles have names, Harry and Sally. Harry and Sally. Yeah. If they have names, then you could put Harry here, Sally here, or you could put Sally here and Harry here. And there seem to be two different configurations. In counting configurations and calculating entropy and so forth, this could be, this sounds like it's a difference. Now, whether it is an important difference or not is a separate issue. But do you count these as separate configurations? You could always imagine that these particles which are identical to you have a little, tiny little bit of paint on them that paints their name on them. But it's such a weak little bit of paint, classical paint, not quantized paint. Classical paint that's so faint, so terribly faint that no experiment that you ever could do could detect the name imprinted on the particle. But you would have to say they were different particles. Harry here and Sally there would be different than Sally here and Harry there. Or you can say, no, I'm going to do statistical mechanics as if these were the same configuration, not think of the particles as labeled. Now, in fact, you get the same answers. It doesn't make any difference for classical statistical mechanics, but it's a conceptual difference. Putting a particle here and a particle here, is it or is it not the same as switching them if they're the same kind of particle? So that's a classical version of the notion of identical particles. And in classical statistical mechanics, we usually assume that these are the same configurations. Harry, Sally, Sally, Harry, they're the same configurations and that's the way we do our counting. How many configurations are there for a large number of particles? Well, typically you get some N factorial different ways of relabeling the particles and that goes into statistical mechanics. If you've taken a statistical mechanics course, you know that in partition functions there are these one over N factorials in them. And it's simply keeping track of the fact that if you interchange two particles, it's the same configuration. That's an assumption you make. In quantum mechanics, you don't have the luxury of doing one or the other. 
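A small sketch of the counting question just posed, for two particles distributed over four boxes; the code and the numbers are illustrative rather than anything written on the board:

```python
from itertools import combinations_with_replacement, product

BOXES = 4

# Labeled particles (Harry and Sally): assigning a box to each particle gives a
# distinct configuration, so swapping them is counted twice.
labeled = list(product(range(BOXES), repeat=2))

# Identical particles: a configuration is only which boxes are occupied (with
# multiplicity), so (box 0, box 2) and (box 2, box 0) are the same configuration.
identical = list(combinations_with_replacement(range(BOXES), 2))

print(len(labeled))    # 16 = 4 * 4
print(len(identical))  # 10 = 4 "both in one box" + 6 "two different boxes"
```

For N particles spread over all-different boxes the labeled count is N! times the unlabeled one, which is where the one over N factorial in classical partition functions comes from.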
It is very definitely that particles are identical in the sense that they do not carry labels. Two electrons, one electron here and the other electron here is mathematically identical and you couldn't do it otherwise exchanging the electrons. Same with true of photons. Now, having a photon here and an electron here is not the same as having an electron here and a photon here. That's clear. That should be obvious. They have very different properties and you can tell those two configurations apart. But interchanging and swapping around which electron is which, that's a non-transformation on the state of a system and it also is true of photons. So let's think about that and what it tells us about the quantum state of a pair of identical particles. Let's take a pair of identical particles. This could be two electrons and for the moment let's forget spin or anything else. We just have two electrons in some states. Let's forget spin. How do we describe two particles? One particle is described by a wave function psi of x where x is the position of the electron. What about two electrons? And in fact, let's go back a step. What is the psi of x? What is its meaning? It's the overlap or the inner product of an electron at x with the state of the system psi. If psi is the state vector of the electron then its projection or its inner product onto a state localized at x is called psi of x. Suppose there are two electrons. How do you characterize the states of two electrons? Well, you characterize them by two positions, position of one and position of the other. Let's call them x1 and x2. That does not stand for different directions of space now. It stands for the two electrons. x1 and x2 and the wave function is a function of two coordinates instead of one, psi of x1 and x2. That's the wave function of a two particle system. It's a function of, and if you want the probability, the probability is a probability to find particle one at position x1, particle two at position x2. So we have functions of two variables when we have two particles. Now let's think of the operation of swapping the two particles. Taking particle one and replacing it by particle two and particle two by particle one. Well, that's a transformation. That's a transformation that generally will change psi of x1 and x2. It'll change it from psi of x1, x2 to what? The psi of x2 and x1, which is not the same thing in general. A function of two variables does not have to be symmetric. It does not have to be such that if you interchange the two arguments, you get the same function back. So in general, a transformation, a transformation which swaps the two particles. Let's call that transformation, let's give it a name. It's an operator. It operates on a wave function and gives a new wave function. So let's name it. Let's call it the swap operation. Particle at s. And here's what s does. s acts on a state with particle one at position x1 and particle two at position x2. How do I know which is particle one? Particle one comes first. Particle two comes second. What does the swap operation do when it acts on a particle at position one and a second particle at position two? It just swaps them. It gives you particle two, sorry, particle one at position x2 and particle two at position x1. It just swaps them. If the particles had little names attached to them, this would mean something. Here you would have Harry at position one and Sally at position two. And Harry, I'm not sure I said it right, but you know what I mean. You change them. 
You just do a little dance and switch them. So this is a possible operation that you can do on the state of two particles. Here's an interesting fact. Take the operation s squared. What does s squared do? It swaps and then it swaps again. It swaps and it swaps again. What happens if you swap twice? You get back the same thing. Obviously you get back the same thing, even if the particles have little names attached to them. If you interchange Harry and Sally and then interchange Harry and Sally again, you get back to the original naming. So what do we know about s squared? s squared must be one as an operator. s squared must be one. It's a unitary operator. Why should it be a unitary operator? Because all transformations on the space of states are unitary. So here's what we know about s. s is unitary. That means it doesn't change probabilities. It doesn't change inner products. And s squared is equal to one. What are the eigenvalues of a matrix or an operator whose square is one? Either one or minus one. The only possibility. s squared is equal to one means that the eigenvalues are plus one or minus one. Okay, let's make a new principle. The new principle is that electrons or that identical particles, they could be photons, they could be electrons, whatever, that for a specific kind of particle, the wave function when you interchange two particles always comes back to the same wave function. So if s is equal to one, or comes back to its negative, if s is equal to minus one. See what that means. Let's take the case plus one. I think I'm beginning to fade. I think it's time to quit. I think you're probably also beginning to fade. I'll go through fermions and bosons next time. But the basic idea is classifying the wave functions of particles in terms of what happens when you switch the particles. It's a basic quantum mechanical process, or basic quantum mechanical operation, the swapping of particles. And the notion of identical particles is basically telling you that something simple happens when you swap two particles. Either the wave function does this or it does that, and we'll go and do it next week. For more, please visit us at stanford.edu.
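As a compact restatement of the swap argument, in standard notation (the words symmetric and antisymmetric anticipate the boson and fermion cases promised for next time):

```latex
% The swap operator S exchanges the two identical particles:
\[
\big(S\,\psi\big)(x_1, x_2) \;=\; \psi(x_2, x_1).
\]
% Swapping twice does nothing, and S is unitary, so
\[
S^{2} = I
\quad\Longrightarrow\quad
\text{eigenvalues of } S \;=\; \pm 1,
\]
% which leaves exactly two classes of two-particle wave functions:
\[
\psi(x_2, x_1) = +\,\psi(x_1, x_2) \;\;\text{(symmetric)},
\qquad
\psi(x_2, x_1) = -\,\psi(x_1, x_2) \;\;\text{(antisymmetric)}.
\]
```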
(October 14, 2013) Building on the previous discussion of atomic energy levels, Leonard Susskind demonstrates the origin of the concept of electron spin and the exclusion principle.
10.5446/15092 (DOI)
This program is brought to you by Stanford University. Please visit us at stanford.edu. I almost always begin with the same sermon about, especially when teaching about quantum mechanics or relativity. The sermon is always the same. It's the fact that we as animals have inherited through the process of evolution certain intuitive ways of thinking about the physical world. And if you don't believe it, you think that maybe ordinary animals are not physicists. You watch a lion chasing an antelope and you notice that that lion, the minute that the antelope, that the relative velocity between the antelope and the lion changes sign, the lion just stops dead. Somehow he did some calculation, or she, it's usually a she, the lion, did some calculation, some physics calculation involving some very complicated concepts of velocity, direction, all kinds of complicated computations like that. A primitive Cro-Magnon man, not a Cro-Magnon man, a Neanderthal, who comes to a cave and sees that the cave is blocked by a boulder and tries to push the boulder and can't push the boulder, decides to aim his body that way. Why? So that he gets a bigger component of force in that direction. Has he ever heard of force? Has he ever heard of components? Where did he get this idea of components? Did he know about sines and cosines? Yes, somehow he did know about sines and cosines. These are things which were inherited, biological in origin, and they are the basis of our intuitions about physics, our intuitive picture of the world. Much of physics has to do with those things, in fact, all of modern physics, everything in modern physics, has to do with those things which are beyond the intuitions that we were able to get from the ordinary world. It has to do with ranges of parameters which are way outside the range of parameters that humans or animals ever experienced. For example, it's not too surprising that human beings didn't know how to deal with velocities approaching the speed of light. But they got the wrong ideas about how to add velocities when nobody in 1900 had probably ever moved faster than 50 or 60 or 100 miles an hour. They probably did when they were falling off cliffs, but they didn't live to talk about it. Maybe they got up to 200 miles an hour, maybe. But nobody ever had experienced anything like the velocities approaching the speed of light. So it was not surprising that their intuitions, that their way of thinking about adding velocities, and so forth, the theory of relativity, and how you synchronize clocks, all that stuff, all that good stuff that Einstein did, that it was outside the framework of their ability to think about through intuitive pictures, through intuitive mathematics. They had to invent new mathematics. The new mathematics was abstract, meaning to say you couldn't visualize it, four-dimensional space-time. I can't visualize four dimensions. I've learned tricks to visualize it. So physicists, to some extent, rewire themselves, or people who learn physics, through a process of rewiring themselves to some extent to develop intuitions to be able to deal with these new ranges of parameters. But still, they're foreign, they're alien, they're peculiar, even to me. Quantum mechanics deals with a range of phenomena which is also outside the experience of ordinary humans, for which evolution simply didn't provide you the means to visualize. Evolution did not provide you the means to visualize an electron, to visualize the motion of an electron, to visualize the uncertainty principle. 
When you think of a particle moving, what is a particle? A particle is a thing with a position. At every instant of time, it has a position. If at every instant of time it has a position, it has a trajectory. If it has a trajectory, you can calculate the velocity along that trajectory. Just by knowing the separation between points and what the time interval is, you can calculate the velocity. And that's the intuitive picture of a particle. And where does it come from? It comes from thinking about rocks, throwing rocks, shooting arrows, all kinds of things that human beings normally do. So we never developed the need. It would have been very bizarre if our brains had been wired to understand the uncertainty principle. Why would Darwin have given us the incidentally if you preferred to think of the intelligent designer go right ahead? I prefer to think about Darwin. But why would either of Darwin's ideas or the intelligent designer have provided us with the ability to understand the uncertainty principle when it's never anything that's part of our ordinary experience? The answer is it didn't. And so quantum mechanics, for that reason, appears extremely weird to us. Physicists, as I said, rewire themselves and develop ways of thinking about it which are intuitive. But still, quantum mechanics is much, much more unintuitive incidentally than the special theory of relativity. And what we're going to try to do here is expose some of the weirdness of quantum mechanics, the weirdness of the logic of quantum mechanics, the weirdness of how quantum information works. This is not a class, a conventional class in quantum mechanics. A conventional class in quantum mechanics with stress such things as the Schrodinger equation and waves and how particles sometimes behave like waves and so forth. We may or may not get to a bit of that, but that's not the important subject that we're going to concentrate on. What we're going to concentrate on is the basic logic of quantum mechanics, the basic logic of quantum information theory. Physics is information. When you say something about a physical system, you're saying something, you're giving some information about it. You give the information in various forms, usually in the form of numbers. In classical physics, you often give, I will give you some examples, but you usually give it in the form of real numbers, the position, the velocity, a set of real numbers. In quantum mechanics, sometimes you use real numbers, but very, very often you give discrete information, discrete information such as yes or no or up or down or male or female. Well, that's probably not such a good example. The difference is between, you know, that's probably not, I think I'll withdraw that. Heads or tails, heads or tails, do I have a coin? No. Like a donkey coin, a float coin. Head, tail, head, tail. When I flip the coin, head, that's a piece of information. It could be tail. It's a two-valued system, either yes or no, up or down, heads or tails, or sometimes, they're logically all the same, of course. They're logically all the same, whether we're talking about heads or tails, up or down, or whatever it is, they're logically the same. And they're simply decisions which have two possible or questions, which have two possible answers, and a bit of information which has two possible answers is called a bit. It's called a bit, and it can either be a classical bit or a quantum bit. 
All real bits in nature are quantum bits, obviously, since nature is made out of quantum mechanics, but sometimes the quantum aspects of it don't manifest themselves. In an ordinary computer, the quantum aspects of the bit don't really manifest themselves for reasons that we will come to, and it's just called a classical bit, the classical bit of information. This head, the coin flip, yes or no. The quantum bit is all, is the quantum analog of the flipped coin, the yes or no type question, but it is much, much more subtle. And the first thing we're going to want to explore is what is a quantum bit? Now, but before we do that, let's talk about classical bits. Classical bits can be described either by writing down a zero or a one. We could also use one and minus one, or we could use five and fifteen. Doesn't matter, but zero and one is a convenient notation for the two possible values. Zero could stand for heads, one could stand for tails, and so forth. So we're thinking, when we're thinking about this, we're thinking about some physical system. When we're thinking about information, we're thinking about a physical system such as a coin, and this is the information contained in that coin, either a zero or a one. There's a notation. This seems like a ridiculous and redundant notation. Its importance will only become clear when we start to think about quantum bits, but we're going to use the Dirac notation. The Dirac notation describes the state of a bit, not a state in the sense of California or Oregon, but the configuration of the bit, and it's usually labeled with the notation zero or one or whatever other information, whatever other way you decide to think about the bit. These are the two states that a bit can have, either a zero or one, and it's represented, I don't know if I drew that well. Let's draw it again. Zero or one. These are the two states of a bit. All of this extra junk here is excess. You don't need it. It doesn't tell you anything. It just says that you're putting it inside the bracket. Incidentally, this pointy, bracketed object over here, the thing that contains the information on the inside is called a ket. It's called a ket because it's the second half of something which we will later learn is a bra ket or a bracket. There's another half that we haven't exposed yet. Now, what about multi-bits? Supposing you have more than one bit, and we're talking now classical physics. So far, we're not talking about anything quantum mechanical. Supposing we have several coins, and I line them up. I label them so we know which one is which. In fact, just in order to not confuse coins, let's make sure they're different coins. Penny, nickel, dime, quarter, half dollar, silver dollar. We have a bunch of coins. We can't confuse them. We can lay out some information by saying head, tail, tail, head, head, tail. That would be some information about a collection of bits. How would you label that? Well, you would label it with a string of zeros and ones. So for example, let's take zero always to stand for head. It's easy to remember. Zero stands for head, and one stands for tail for obvious reasons. Right. So my string of coins, head, head, tail, tail, tail, head, I would label zero for head, zero for another head, one, zero, one, one, for example. That's a configuration of one, two, three, four, five, six coins. Let's say it was six coins. That's a configuration of a multi-bit system. In this case, six bits. 
Again, for reasons that add absolutely nothing to this description, we're going to stick it inside a ket. Stick it inside a ket, which is just a kind of notation. It might be a good idea to put some commas between these, but maybe not. Maybe it's just best to leave it that way. That's a specification. It could be the specification of the bits of information inside a computer. It could be just a series of heads of tails and so forth. But before we do anything else with this, let's ask. I have a very simple question. How many possible configurations, how many possible states are there of, well, let's start with one bit. If there's only one bit, then there are only two states. What if there are two bits? Well, then you can have up, up, up, down, down, up, down, down, four, two times two. So for two bits, we have two squared. What if we have a hundred bits? The answer is two to the one hundredth power, two times two times two times two, one hundred times. So if you have an in-bit system, the number of possible classical configurations is just two to the n. Let's write that down. Let's put a notation in. Let's write the number of states. The number of states, n sub s, the number of states of a system of little n bits is two to the n. Let's suppose we, oh, let's invert that first of all. Let's invert, well, little n is what? Little n is the number of bits. Big n is the number of states. Little n is the number of bits. So if the number of bits is four, then the number of states is two to the four, which is sixteen and so forth. We can invert this and we can write, if we knew the number of states of a system, then we can take the logarithm of this equation. Log to the base two is particularly convenient. If we take the log to the base two of the number of states, that's equal to the number of bits. You can generalize this. Not every system has, as its number of states, two to a power. Supposing I have a state, a system, a die, you know, the things you use in Las Vegas to throw away your money with. It's got six possibilities, one through six. That is not two to any particular power. It's just six. But we can still generalize this definition of the number of bits of information. In fact, the number of bits of information that a system can contain is, by definition, the logarithm to the base two of the number of states, which for the die would be log to the base two of six. What is log to the base two of six? Is it an integer? No, it's some stupid irrational number. I don't even know what is it, how big is it about? Two to the two is four, two to the three is eight, two to the 2.53, seven, nine, eight, six, four, team, or whatever. So the amount of information, which is always the logarithm of the number of states, does not have to be an integer. But we're going to be considering systems which are made up out of some number of bits, each of which has two states. So for the simplicity we're going to be talking about systems, the number of states is always two to a power. That's just for simplicity. There's nothing special about it. But almost every system can be represented that way or approximately represented that way. Let me give you an example. Supposing we have some question of physics which has as its answer a real number, but we're only interested in that real number to a certain approximation. The temperature, the temperature in the room. I'm interested in the temperature in the room to a certain number of significant figures. 
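A quick sketch of the counting rules just stated, with a six-coin configuration written as a plain character string; the string encoding is just an illustration, not official notation:

```python
import math

n_bits = 6
n_states = 2 ** n_bits          # distinct configurations of six two-valued coins
print(n_states)                 # 64

# Inverting the relation: number of bits = log base 2 of the number of states.
print(math.log2(n_states))      # 6.0

# A system that is not a power of two, such as a six-sided die, still has a
# well-defined information content, just not an integer number of bits.
print(math.log2(6))             # 2.584...

# One configuration of six coins, 0 for heads and 1 for tails,
# playing the role of a ket like |001011>.
ket = "001011"
assert len(ket) == n_bits
```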
I can represent the temperature or any other number for that matter by writing it as a number in base two. If what's the temperature in this room, incidentally, it's about 300 degrees from absolute zero. So it's 300. I can write 300, not as 300, which is, what does 300 mean? 300, you know what it means? It means three times 10 to the two plus nothing times 10 to the one plus zero times 10 to the zero. But we write it to things in base two. I don't know what 300 looks like in base two. Somebody can figure it out. Base two, everybody know how to arithmetic in base two? Anybody not know arithmetic in base two? Okay, so everybody knows arithmetic in base two. You write out any number that we like as a series of zeros and ones. One zero zero one zero one zero zero one. That's some number to the, some particular integer, it's an integer. So if I'm interested in the temperature and I'm not interested in being too careful to define fractions, I want to know whether it's 72 degrees or 73 degrees. I don't care about 72.4069. I can write it as an integer. And that integer can be represented as a sum of bits, not a sum of bits, but as a collection of bits. Every number, every single number, if you're willing to truncate the number of decimal places, approximate that number, and say I'm interested in that number only to 35 decimal places or whatever, or to 35 places to base two, then that number simply is represented by and represents a collection of bits. And incidentally, if you want to have a finer grain description of the temperature than integers in centigrade, you just use a more refined notion of degree. You go down to ask how many degrees is it, but not in centigrade units, but in units of 10 to the minus 100 centigrade. Again you can give it as an integer and integers can always be represented as sequences of zeros and ones. So almost any information in physics can be represented in terms of bits, in particular the measurement of quantities such as temperature, for example. Let me give you another example. This is a more complicated example of the same thing. Supposing I'm interested in a field. A field means a thing which can vary throughout space. Well, the temperature can vary throughout space. The temperature is a field. It varies throughout space. It's not one of the more interesting fields from the point of view of fundamental particle physics or anything, but that certainly is a field. It varies from place to place, and how can we represent that? Can we represent that in terms of bits? Yes, if we're willing to tolerate certain approximations, and we're always willing to tolerate some degree of approximation. What we do is we break up the room into a lot of little tiny cells. I won't try to draw a three-dimensional room. In my notes I drew a three-dimensional room. It took me about a half an hour to put in all the lines. Just a two-dimensional room. And here's what we do. We first of all order the cells. We make the cells small enough so that the temperature doesn't vary very much from cell to cell. So we might fill this room with several billion cells, label the cells. This is the first cells, two, one, two, three, four, five, six, I don't know, up to a thousand. Thousand and one thousand and two thousand and three thousand and four. And we can label all of the cells and list them. Once we've listed them, we can write the temperature of the first cell. There's the temperature of the first cell. I'm putting a little comma in just to distinguish between cells. 
Then we can write the temperature in the next cell, zero, zero, one, one, two, three, four, five, six, seven, eight, nine. I've kept nine decimal places in the basis, in arithmetic in base two. One, one, zero, one, however, till I'm finished. Then I go to the next cell, do the same thing. Temperature there is one, one, zero, one, zero, zero, and so forth. The way all I have is a list of zeros and ones. This long list of zeros and ones, if somebody knows how to use it, is equivalent to knowing the temperature at every point in the room. The same is true of the electric field, the magnetic field, anything which varies from place to place. So almost everything that I can think of in physics can be represented in terms of bits. So if you know everything about how bits work, you basically know everything about how physics works. Of course, you may not know what the rules are for manipulating these things, but this is the basic setup of physics, information in the form of a series of questions, each of which can be answered yes or no. Now, of course, you may want to refine your description. To refine your description, you may want to add more decimal places to the temperature, to the specification of temperature, and you might want to make your lattice finer. That's just making a better approximation. So the right thing to say is that most physical systems that we know about, as far as I know all physical systems, can be represented at least approximately and perhaps to always increasing approximation by a series of bits. That's why we get to use computers to do physics. If this weren't true, we couldn't use a computer. We couldn't use a digital computer in any case to do physics. We have to use analog computers or something. So let me give you another example, another example of how you might use bits to represent another. These are all so far classical systems, as I said. I don't want to redraw the lattice, but I do want to get rid of this top row here since I've already mutilated it. Here's a lattice. And what I'm interested in is the motion of particles. This lattice is just an artificial, imposed lattice that I've imposed on the room here just so that I've divided the room into mathematical cells. And what I'm interested in is the motion of particles moving around in this room. With any given instant, I can ask the question, let's take a very simple case. Let's take the case where a particle where you can't squeeze more than one particle into one of these cells. We can imagine that. The cells are about as big as a particle, in which case you can't squeeze in more than one. Then every cell either has a particle or it doesn't have a particle. We can label the cells that have particles with an X. We can label the cells that don't have a particle with nothing or better yet. We can label the cells that have a particle with a one and the ones that have no particle with a zero. In that case, this becomes a specification of where the particles are in the lattice. It's no longer the temperature, but the same long sequence of zeros and ones. Now the number of zeros and ones would just be equal to the number of cells in the lattice. What would this number mean? It would mean that in the first cell there's a particle. And the second cell is no particle. And the third cell is no particle. And the fourth cell is a particle. And the fifth cell no particle. And the sixth cell particle. And so forth and so on. 
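A minimal sketch of the two encodings just described: a temperature written cell by cell in base two, and an occupancy map recording which cells hold a particle. The cell values, the nine-bit width, and the lattice size are all made up for illustration:

```python
# Temperatures (whole degrees) for a few cells of the imagined lattice.
temperatures = [300, 297, 301, 299]

# Write each cell's value as a fixed-width binary string and concatenate them;
# the long string of zeros and ones is equivalent to knowing the field everywhere.
BITS_PER_CELL = 9
field_bits = "".join(format(t, f"0{BITS_PER_CELL}b") for t in temperatures)
print(field_bits)        # '100101100100101001...', nine bits per cell

# Occupancy version: one bit per cell, 1 = particle present, 0 = empty cell.
occupied_cells = {0, 3, 5}
n_cells = 8
occupancy_bits = "".join("1" if i in occupied_cells else "0" for i in range(n_cells))
print(occupancy_bits)    # '10010100'
```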
And so given such a string of numbers, you are given a specification of where the particles are in this room. In that way, again, motion of particles, motion of fields, temperature, just about anything in physics can be represented in terms of bits. Any questions? Right. A bit is by definition a question about a system which has only two possible answers, which you can always take to be yes or no. It used to be a game, 20 questions, where somebody would think of a category. And then you would stand there and ask yes, no questions until you tried to figure out what the category was. So that was using the idea of bits. Yes, question? Oh, I just arbitrarily said supposing we are interested in the temperature to a certain degree of accuracy. Right. So I'm interested in the temperature to accuracy. But now I'm not speaking about temperature. I'm just giving another example. These are just examples intended to show you something which is more or less clear. Otherwise we could not use computers to simulate physical problems. Physical problems. Yes. Right. But for the general real number, you need an infinite number of bits. Any rational number can be represented by a finite number of bits. And the rule, well, that's not quite true. You have to remember to repeat them. So if it's rational, it's going to repeat after some point. So but if it's an irrational number, then you need an infinite string of bits. But in general, we will allow infinite strings of bits, although not in a genuine computer. Well, so far, remember, we're doing classical physics. All right. So far, no quantum mechanics. So let's see. Yes, we were going to come to that very, very shortly. Let me tell you how very quickly. An electron? First of all, we're not talking about motion yet. We're talking about configuration. Configuration means the state of a system at a given instant of time. So the presence of an electron at a given instant of time, let's suppose the nucleus is known to be right over here, and we're not going to ask about the nucleus. The nucleus just sits there. It's a lump on the... All right. So we could say at instant number one, when we begin the experiment, the electron is over here. In that case, we would write down a string of zeros with a one someplace. Pure zeros, one electron. Pure zeros except for one place in the sequence where there's a one. Now if we wanted to describe the motion of the electron, we would say starting with this configuration, we move, and let's use this symbol here to indicate that at the next end, we've broken up space into a lot of little individual cells. We could also break up time, I thought I had my watch, but I don't. We could also break up our watch into a digital watch which digitizes time just again as either a convenience or an approximation. And we could say if at digital time number one, the electron was... Or the system was described by one electron located at this location, then what happens next? Next it moves to some new configuration. In this case, it might move over one place. One, two, three, four, five, six. It moves over to the sixth place. One, two, three, four, five, six. And so forth. So the motion of a system is described by a rule of updating, of updating information, how you update it from one instant to the next. So physics basically consists of two... A physical system consists of two things. 
It consists of a collection of possible states which can be labeled by a collection of bits, and it consists of a time evolution which is an updating which tells you how to take one collection of bits and replace it by another collection of bits at a slightly later instant of time. I don't know if that answers... To actually work out an orbital motion orbiting around here gets confusing because when you jump from one layer to the next, if this is one and this is a hundred, then a hundred and one is over here. So you don't jump from a hundred to a hundred and one. You might jump from a hundred to over here which would be 200. So it can be complicated, the updating procedure. It can look complicated, but nevertheless it's an updating procedure that just updates your state of knowledge at each instant of time. That's classical physics. Now there are some rules and we're going to come to them. But before we do, let's define the space of states. I want to emphasize we are still doing classical physics. There is nothing quantum mechanical even though we're talking about discretizing systems and making out of them systems of individual bits, so far we are dealing with what should be called classical bits, C bits I think they're called as opposed to Q bits. Q bit is a quantum bit. These are classical bits so far. So let's take all of the configurations and just abstractly in a purely abstract way, let's take all of the configurations, incidentally, what is it? It's about ten by ten, this is roughly a ten by ten lattice. Ten by ten lattice has a hundred sites. How many states does it have? I'm not talking about one particle now, I'm talking about any number of particles can be on this lattice. How many different configurations are there? Two to the hundred. Two to the hundred. A very, very, very big number. Two to the hundredth power, that's how many different ways we can arrange zeros and ones on this lattice or specify whether there's particles in various positions, a very large number of possible states. But let's just abstractly think about all these states and just draw them as points. If there are ten to the hundred, I have to draw ten to the hundred points, which I'm not about to do. These are the various states, these are not the lattice points, these are the various states. For example, for one bit, if I had only one bit, then the space of states would consist of only two points, up and down, and I would just draw two points. This would be the space of states of a simple one bit system. Now let's ask, what are the possible laws of updating? In other words, what are the laws of motion? The laws of motion are the laws for updating configurations. What are the possible laws of updating? Well here's one possible law of updating. This could stand for heads, this could stand for tails. Let's think about it in terms of coins for the moment. This could stand for heads, this could stand for tails. If we start with heads, if I had a coin we would do it, heads and tails, heads, tails. One possibility is very simple. If you start with heads, it stays heads, nothing happens. If you start with tails, it stays tails, nothing happens. That's a law of updating, it's not a very interesting law of updating. How would you draw that? Well, here's how we'll draw it. Heads goes to heads, we'll make an arrow. If we start with heads, it stays heads. If we start with tails, it stays tails. So we draw an arrow from what you start with to what you end with. What's another possible law of updating? 
Here's a law of updating. If it's tails, it becomes heads, then it becomes tails, then it becomes heads, then it becomes tails. That's a little more, not very much, but a little more interesting, a slightly more interesting system. It just flip-flops back and forth. How would we draw that? We would draw that again. Heads, tails, heads, tails. If you start with heads, you go to tails. If you start with tails, you go to heads. So the law of updating in this case is just described by such a diagram, basically. A diagram which tells you if you start at a given state what it will be in the next instant of time. Is that clear? This is one way of describing the laws of physics. Write down all the states. Keep in mind what they stand for, of course. Whether that in this case one stands for heads, one stands for tails, or whatever it happens to stand for. If it's the male-female, this could be an interesting case of, that would be an interesting, this would be a very interesting law of motion in that case. I don't think I want to explore that any further. If this were my undergraduate class, I would never have brought that up. This is the more likely law of updating for sexuality. Female, male. So you see, simple laws sometimes apply. Sometimes they're a little more complicated. Can you think of an interesting system that flip-flops like this? Off-hand, I can't think of anything. It's obvious that it applies to a lot of things, but off-hand I can't. No, no, no, right, right, right, right. But yes, but if you're just, you're not intervening. I don't want you to intervene. This is the system by itself. If you had some peculiar light switch by itself, we're back and forth and back and forth and back and forth in a regular way, but that's, what's, what, what? Stay in, you know, this is, they still, which, well, a lot of states to the pendulum in between. But yes, you've got the idea. It's hard to think of a simple example, but I bet by the time we all go home and we came back next week, every one of us would have an example of a, we could call this the flip-flop. This is the flip-flop motion. This is the, the, the unmotion. Well, we can extend this. If we know what the space of configurations is and we lay them all out, either abstractly in our mind or actually just write them on the blackboard, then the motion of the system can be represented by a series of arrows where I'm getting tired, but, and so forth and so on. Yeah, I don't know. Let's do, let's do the possible, let's think of some possible motions of a two-bit system. A two-bit system simply has four states. That's all we have to know. It has four states. Well, here's one possible motion. If we start with this configuration, we move to that configuration. If we start with this configuration, we move to that configuration and so forth. If we watched what actually happened with time, the system would move from one configuration to the next around a closed loop. Now, the closed loop is not necessarily a closed loop in space. It's a closed loop in the logical space of possibilities here, logical space of, of configurations. That's one possible thing that could, here's another one. Perfectly good. What this is, is it's a pair of systems, a pair of, it's a pair of systems which are separately undergoing flip-flops, each one undergoing flip-flops. This one is flipping and flopping. This one is simultaneously flipping and flopping. If we start over here, let's see what that stands for. That stands for example for both heads. 
It could stand for both heads. Then we go to both tails. Then we go to both heads, then we go to both tails. Or we could start with one head, one tail and do this. That's what this is. This is a pair of systems flipping and flopping. There are other possibilities. So there are different laws of motion that the system, whatever it happens to be, could have. So when you specify a system, you not only have to specify what the states of the system are, but you'll have to specify how it moves. And how it moves is a rule for jumping from one configuration to the next. Now let me give you an example of a logically perfectly sensible rule, but which is defective from the physics point of view. Never happens in physics. We can do it, I think. Well, let's do it with four states. Let's see how this, yeah, here it is. If you start here, you go here. If you go here, start here, you go here. Excuse me one moment. For some odd reason in my notes I've drawn, instead of a diamond shape I've drawn a square. Let me go back. There's my four states. Okay. If you start here, you go here. If you start here, you go here. If you're over here, you go over here. And if you're over here, you go over here. All right, so now we can say what happens wherever you start. If you start over here, you jump to here. You jump to here. You jump to here. You jump to here. You jump to here. You go here. You go here. You go here. You go here. You go here. You go here. Notice you never come back to here with this particular law. There's something different about this law than there is about the other examples. In all the other examples, well, can anybody spot what's wrong with this? Well, not what's wrong with it, but what's different about it. Well, it doesn't consist of loops. This is true. You can't figure out necessarily where you came from. You may be able to tell where you go next, but you can't always tell where you came from. For example, if you find yourself over here, you don't know if you came from here or whether you came from here. If you're over here, you have lost a piece of information. This is a motion which loses information. It loses information in the sense that you can't tell where you came from. There's no way to reconstruct the past, but you can reconstruct the future, or construct the future. Wherever you are, you're told where to go next. But wherever you are, you don't know how to get back. If you're over here, well, you know you came from here, but then you don't know whether to go back here or to go back here. This is what is called an irreversible history. It's a history or a law which loses information. And at the fundamental level of physics, the fundamental level where you're really keeping track of everything, not where your course-graining or not looking carefully, but where you're carefully looking at every degree of freedom of a system, classical physics never allows the loss of information like this. There is a unique future point wherever you are, and there is a unique past point wherever you are. That is one of the laws. It's not necessarily a law of logic. It is something which is true of all physical systems that they are reversible in that sense. Yeah. Let's say I change state twice. All right? And I'm over here. One two or one two? I don't know if I came from here or here. All right. We could give this property a name. We give it the uniqueness of the future point and the uniqueness of the past point. We could invent a name for it. We could call it unique parity. 
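Since each of these laws of motion is just a lookup table from a state to its successor, the uniqueness-of-the-past property can be checked mechanically. A small sketch, with made-up state labels and an information-losing rule in the spirit of the diagram just described:

```python
# A law of motion for a finite system: a map from each state to the next state.
flip_flop = {"H": "T", "T": "H"}                 # heads <-> tails

# A defective four-state law: two different starting states land on the same
# state, so the past can no longer be reconstructed.
lossy = {"a": "c", "b": "c", "c": "d", "d": "d"}

def is_reversible(law):
    """Deterministic into the future by construction; reversible exactly when
    no two states share the same successor."""
    return len(set(law.values())) == len(law)

def evolve(law, state, steps):
    """Run the updating rule forward for a given number of time steps."""
    for _ in range(steps):
        state = law[state]
    return state

print(is_reversible(flip_flop))   # True
print(is_reversible(lossy))       # False: 'a' and 'b' both go to 'c'
print(evolve(flip_flop, "H", 3))  # 'T'
```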
Do you know what the quantum version of unique parity is? It has a name. It's called unitarity. Unique parity is a name I just made up. Unitarity is the quantum equivalent which tells you that you can always reconstruct the past from the future. In the state of a quantum system, you can either run forward uniquely or run backward uniquely and you'll come to some unique previous state or future state. And that's called unitarity in quantum mechanics. But we haven't done quantum mechanics. Nothing's quantum mechanical yet. It's a kind of time reversal symmetry. It's actually not a time reversal. Okay, so it's not a time reversal symmetry exactly. It's a time reversibility, I would say. This diagram has a sense of orientation to it. If I start something over here, it goes around this way. It definitely does not go around this way unless I reverse it, unless I look at it backward in time. So it's not precisely what you would call time reversal symmetry. Time reversal symmetry means that you could either go, in going into the future, you could go either way. But in this case, you only go one way. But it's the reversibility of the laws that you can find the reverse law. Given a law, you can find the reverse law which will take you in the backward direction. I think that's right. Yes. Yes. So if someone's coming away from a point and says, I don't know which way to go. Oh boy, do I go that way or do I go that way? So it's clear that that's not a law of motion. Okay? No branching ratios. No branching ratios. Oh. Well, the moment we're doing quantum mechanics, classical mechanics, it gets more complicated with branching ratios. Let's see, classical mechanics is less fundamental in quantum mechanics. All real systems are quantum mechanical. The question why some of them suppress the quantum mechanics and you don't see it is a question which we'll try to answer as we go along. But we might ask it at the quantum level. And at the quantum level, I think we can give a better answer. But ultimately, at the end of the day, it seems to be a law that basically says that forward in time and backward in time are, I won't say equivalent to each other, but that there's no preferred, really preferred sense in which forward in time is different than backward in time, even though it feels like there is. This is the question that took 20 years to answer. And the answer is, in my opinion, no, it is not possible. But it was one of the great questions of physics that took a long time to answer. And I'm not going to get into it now, but it might be an interesting thing for us to explore toward the end after we've talked about quantum mechanics. We have to talk about quantum mechanics before that makes sense, that question. I think they're all Newtonian in a sense. In the sense, Newtonian to me simply means that there's a definite state for a system that evolves with time according to a definite law of deterministic. That's the right word, yeah. But I think you can think of more complicated situations. You just could start drawing some diagrams yourself and see what makes sense, what's reversible, what's not reversible. But yes, you're right, that is true. It loses the information as to where you came from. The system. I mean, if that doesn't mean after the first step, that's not a part of the system anymore. What do you mean it's not a part of the system? The system started here, it went to here, it went to here, it went to here, it went there. There is more. Could you come and observe it at some later point in time? 
You won't find that point. You won't find that point ever again, right? Right. But the main point is you've lost the distinction between the two possible starting points, whereas in all the other situations, if you know where you are and you know how many steps you made, you can say where you were. Well, I think for a long, long time, Mr. Stephen Hawking thought that this is the way black holes work, so not so clear. Not so clear. Okay. Yeah, yeah, good, good. So let's talk about that. If I take a bunch of molecules in a bathtub, what's a good example? Well, let's take the molecules in this room. And I still thought I'm all out in a certain configuration, very definite configuration. I put them all up in the left-hand corner of the room there, and I let them go. After a while, the room will be full of air just like it is. If I put them up in that corner of the room over there and let them go, after a while, same thing. Put it up in that corner. After a while, same thing. So it looks like we've lost information. But in fact, that's not true. If we followed every single molecule and we followed it in infinite detail with infinite precision, which we don't do, of course, then we could reconstruct by running everything backward. We can reconstruct the fact that the molecules may have come from that corner of the room. It's prohibitively impossible to do in practice, but in principle, following every single detail of every molecule. Now, what really happens in the real world is we lose information because we lose the ability to follow the details. Not because the information gets lost, but because we lose the ability to follow the information. That's where the second law comes, when you start losing the ability to distinguish different states. So we don't distinguish whether, in our coarse-grain picture, we don't distinguish the different detail, the level of the molecular detail. So it looks like different configurations become the same configuration, but that's only because we simply don't look carefully enough. It's because we're lazy. Do you need an infinite number of bits to... ah, ah, ah. You mean in the real room like this. No because of quantum mechanics. Because of quantum mechanics, no. But if it were not for quantum mechanics, yes, you would need an infinite number of bits. Now what does that mean? That means that you have to specify a bunch of real numbers. Precisely, with infinite, with tremendous precision, you have to precisely prescribe the locations and also the velocities, but in particular the locations of every single molecule with a tremendous amount of precision. And the longer that you want to track the system, the more precision that you need. So ultimately, to track a system for a long time, you need to specify with infinite precision the exact positions of every point, of every molecule. That means you have to give a set of real numbers. A set of real numbers involves, as you say, an infinite number of bits. So the answer is for a collection of real particles moving around that you really try to follow classically. Depending on how long you want it to follow it, you would need more and more bits to describe it. Oh, yes, yes, yes, that's right. If the room were really sealed, let's idealize this room so that nothing can get into or out of the room. All particles bounce off, reflect off the walls of the room so that it's an entirely sealed up room. Then the room can be described discretely because of quantum mechanics, at least up to some energy. 
If we know that the energy isn't arbitrarily high, then we can describe it by a discrete collection of variables. That has no exit? So let's see. Yeah, so to make such a thing, we could just reverse all the arrows. Here's an example. No, this one has no exit. Well, if I wanted it to exit to itself, I have to do this. We could do that, but as I drew it, it had no exit. But let's think about what it means. I'm not too interested in what happens here. The question is what happens when you're over here. When you're over here, you have two ways that you could go and you don't know which way to go, so it's not deterministic. It doesn't know whether to go this way or this way. It might go half the times this way, half the times this way. You might need some statistical rule. 50% of the time or 30% of the time goes this way, 70% of the time with random statistics, that would be non-deterministic. So it seems that the real laws of nature are both deterministic forward in time and backward in time. That's the implication of not having loose ends floating around like this, that they're deterministic either way so that wherever you are, you can either trace forward uniquely or backward uniquely. And that is all of classical physics in a nutshell. You're now taking a complete course in classical physics. There's nothing that does not fit that pattern, at least to an arbitrarily high degree of approximation. Let's take a 70 minute break. Well, I was going to jump to quantum mechanics, but before I do, I want to do a little bit of mathematics, elementary mathematics. Most of you know it. But nevertheless, let's lay it out. Vector spaces and vectors. At the moment, I'm not going to mathematically define a vector in any sort of sensible, even approximately rigorous way or abstract way. I'm just going to tell you, a vector is a sequence of numbers, a finite sequence of numbers. And you can represent it in a variety of ways, but I'll give you two ways to represent a sequence of numbers. The first way is to write them one after another. Let's just give them names. I don't want to call them. My numbers now, at the moment, I mean real numbers as opposed to complex numbers. I don't mean zeros and ones. I mean arbitrary sets of real numbers. They could be zeros and ones. The zeros and ones are fine, but they're just general numbers. So I'll just name them. What shall we call them? Yeah, they're called components, but I want a letter for them. A is good. So A, A1, A2, A3, A4, and just put something around them to surround them so that we know. This would be a four-dimensional vector. Why four-dimensional? Because it has four components. Forget, don't try to visualize the vectors now. There's no value at all for our present purposes in trying to visualize these as pointing in space or anything like that. They're just lists of numbers. That's one way. There's another way that we can list the same set of numbers. Put them in a column. A1, A2, A3, A4. Same information in them. I mean, I'm not talking about information in the abstract sense that I used before. Same thing. Sometimes it's useful to write it this way. Sometimes it's useful to write it that way. You'll find out as we move along. When it's written in this form, it's called a row vector. When it's written in this form, it's called a column vector. What we're actually talking about now is notations, neat notations for doing certain arithmetical operations involving collections of numbers. When we get the complex numbers, we will then use complex conjugate notation, yes. 
For the moment, let them just be real numbers. Now, there's another concept now called a matrix. Think of a matrix as the following way. A matrix is a thing which acts on a vector to give another vector. It's a kind of machine. You put the vector into the machine and out pops another vector. According to a particular rule. Oh, sorry, before we do that, before we do that, let's imagine a particular column vector and another different row vector. Different row vector has different entries. Not the same set of numerical entries, but a different set of numerical entries. So let's call them B. B1, B2, B3, B4. These could be 6.01, 5.97, 3.04, and A1 could be 7.8, A2. None of them could be the same or they might not be the same, the A's and the B's. This is some particular row vector and some particular column vector. There's a notion of multiplying a row vector by a column vector. And the notion of multiplying a row vector by a column vector is a simple, is a following, simple operation. You take the first entry, oh, incidentally, the dimensionality of the row vector and the dimensionality of the column vector should be the same. That means that they should have the same number of entries. Not necessarily four. It could be 5, 6, 7, in which case they would be five dimensional vector spaces, six dimensional vector spaces. This extends to any number of entries into the columns and rows. But the rows in the columns should have the same number of entries. All right. There's the notion of the product of a row vector and a column vector. It's called the inner product. And it's very simply constructed. You take the first entry of the row and multiply it by the first entry of the column. You add to that. You add the second entry times the second entry plus the third entry times the third entry plus the fourth entry times the fourth entry. So the product of these two, which you could just write as B next to A, that product, the inner product, is B1A1 plus B2A2 plus B3A3 plus B4A4. It's a number. It's not itself. The product of these two vectors, the inner product, is not another vector. It's not a matrix. It is just a number. The numerical value is just gotten by adding up the column, sorry, the row times the column in just this form. B1A1 plus B2A2 plus B3A3 plus B4A4. Is that clear? Don't ask me why. That's definition. Is this what you call the dot product? Yeah. Yeah. If we were talking about ordinary vectors in space, it would be the dot product. Yeah. Yeah. More abstractly, for abstract vector spaces, it's called the inner product. But yes, it is the same as the dot product for three-dimensional ordinary vectors in space. These would be the components of the vector. Yeah. Okay. Now there's the concept of a matrix. And a matrix, as I said, is an operation that you can do on a vector to give a new vector. All right? But it's not any old operation. There are particular family of operations that are characterized by matrices. A matrix is represented by a square array of numbers. Let's call the entries M. All right? So in the first place, we put M11 to indicate that it's in the first row in the first column. Then M12. Then M13. Then M14. Okay? M, what should I call this one? To one. It's in the second row for the first column. This is in the second row, second column. Second row, third column. Next one. M31. M32. M33. M34. And M41. M42. M43. M44. Now, as I said, I've chosen four dimensions just arbitrarily. Four is about as many as I want to handle on the backboard. 
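A minimal numerical sketch (not from the lecture itself) of the inner product rule just described, written in Python with NumPy; the specific entries are invented for illustration, loosely echoing the numbers mentioned on the board:

```python
import numpy as np

# A row vector B and a column vector A with the same number of entries (four here).
B = np.array([6.01, 5.97, 3.04, 1.00])   # B1, B2, B3, B4
A = np.array([7.80, 2.00, 0.50, 4.00])   # A1, A2, A3, A4

# Inner product B1*A1 + B2*A2 + B3*A3 + B4*A4: a single number, not another vector.
print(np.dot(B, A))
print(sum(b * a for b, a in zip(B, A)))  # the same thing, written out term by term
```

The result is one number, which for ordinary three-dimensional vectors would be the familiar dot product.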
And four is big enough to be a little abstract so that it's general enough to see what's going on. All right. That's what a matrix is. That's all it is. Now, you can think of it, you can think of each column as a column vector whose components are labeled by the first entry here. Each one of these can be thought of as a column vector where the first entry labels the column entry. Or you can think of it as a collection of row vectors where it's the second entry which labels the component. Either way, you can think of it both ways at the same time, a collection of column vectors or a collection of row vectors, but all together it forms a matrix. Now matrices can multiply vectors. So let's put a vector over here. A1, A2, I should line them up more carefully. A1, A2, A3, A4. And when you, I don't know, I've done a reasonable job of keeping rows and columns underneath and next to each other. But if you like, draw some imaginary lines to separate them into rows and columns. All right. This matrix acts on this vector to give a new vector. What is the new vector? And here's the rule. I've made the vector wide because each entry is going to be a fairly complicated expression, but it is just another vector. It's another column. It's a column which I've had to draw wide in order to be able to fit in everything I want to write down. Here's what you do. If you want to find the first entry of this new vector, you take the first row and you multiply it by the column, the inner product of the first row with the column here. So what is that? That's M11 times A1 plus M12 times A2 plus M13 times A3 plus M14 times A4. In other words, you take all of this and you multiply it by this according to the inner product rule and that gives you the first entry: M11 A1 plus M12 A2 plus M13 A3 plus M14 A4. Now you want the second entry of this new vector over here, done exactly the same way except you go to the second row. And you take the second row and multiply it by the column. That's going to give you, I'm only going to do two of these, the rest you can do yourself, M21 times A1 plus M22 times A2 plus M23 A3 plus M24 times A4. And the other two entries you can figure out. You get them by multiplying the next row by the column and finally the last row by the column. That gives you a new vector. It's a way of processing a vector to produce a new vector. I will give you some examples. As we go along, it's a rule of multiplication which is very useful. The reason it's defined is because it's useful and we're going to see how it's useful by using it. Let me give you an example of how a matrix, how the idea of a matrix can represent the time evolution of the configuration of a system. Supposing again we have our configuration space. Let's label them. The first configuration, the second configuration, the third configuration, the fourth and the fifth configuration. These are not points of space. These are configurations of a system which has five distinct states. Let's take a very, very simple law of evolution. The first one, if you start here, you go to here. If you start here, you go to here. If you start here, you go to here. If you start here, you go here. What do I do if I'm here? Go back? No, that's not good. That's disallowed, I think. Is that disallowed? I think that's disallowed. Yeah, that's disallowed because, yeah, that's not reversible. That's not reversible. That's not what I wanted to hear.
What I wanted to hear is that you go back to here. So this is just a one goes to two, two goes to three, three goes to four, four goes to five, five goes back to one. It's a cycle. Here's another way to represent the same thing. We can represent the state of the system by a column vector. In the column vector, we simply insert a one someplace. If I want to represent the first state over here, I put a one and then a bunch of zeros. One, two, three, four, five. One, two, three, four, five. This simply represents the first state. What about the second state? The second state I'll represent by zero, one, zero, zero, zero. The third state by zero, one, and so forth. The states of a system can be represented by a column, but a particular kind of column, a column with all zeros and a one someplace. Where is the one? Namely whichever state you're focusing on. If you're focusing on the fifth state, put the one in the fifth entry here. What is this rule of evolution? The rule of evolution says that if you have a one someplace, then in the next instant of time, the one moves down. If you start here, the one moves down to here. In the next instant, it moves down to here. Next instant, it moves down to here. Next instant moves down to here. Then where does it go? Up to the top. There's a procedure that you do on this column to tell you where the system goes in the next instant of time. That process can be represented by a matrix. Let me show you the matrix that represents that. The matrix is an operation on a vector which you can think of in this case as the updating operation, the operation which updates the vector. Here it is. Let's see. We put zero, one, zero, zero. This is five dimensional, so I need five, zero, zero, one, zero, zero, zero, zero, zero, zero, one, zero, zero, zero. I'm sorry. I'm going to make this four dimensional. I'm getting sick of it. I don't like five dimensions. Five is too many for me. One, zero, zero, zero. Right. One, zero, zero, zero. Let's try it out. Let's try it out on this vector right over here. This represents the third state. What happens if we act with this matrix on the third state? Let's just try it out. Let's see what we get. Well, the first entry up on the top is gotten by taking the top vector and multiplying by the column. Zero times zero plus one times zero plus zero times one plus zero times zero. What's the answer? Zero. Next place. Zero times zero, zero times zero, one time. One time. Whoops. Whoops. Whoops. I did it. Okay. Instead of going down, it's going to go up. It's okay. Up and down, we just turn the whole thing over. Would you prefer? Let's just get it right. Let's get it right. Zero, zero, zero. Where did I have it before? Over here? Yeah. Yeah. One, one, one, one. One. Is that up here? Yeah. One. Okay. So, let's start over again. What's up on the top? Zero times zero, zero times zero, zero times one, one times zero. We're still okay. Zero. Next one. One times zero, zero times zero, zero times one, zero times zero. Still zero. What about the third place? Please, please, please God. Zero times zero, one times zero, zero times one, zero times zero. It's still zero. But now in the last place, I have zero times zero, zero times zero, one times one, and zero times zero. So, one. The column has moved down one step. Now, you can check for yourself. Here's your homework. Check that any place that you put this one, it will move down by one step till it gets to the bottom. And then it will recycle and go up to the top. Okay? 
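A quick numerical check of that homework (this sketch is not part of the lecture; it simply encodes the four-by-four update matrix worked out on the board), in Python with NumPy:

```python
import numpy as np

# The update matrix from the board: it slides the single 1 in a state vector
# down one slot and wraps the bottom entry back up to the top.
M = np.array([[0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]])

for i in range(4):
    state = np.zeros(4, dtype=int)
    state[i] = 1                      # the system sits in the (i+1)-th configuration
    print(state, "->", M @ state)     # one tick of the evolution law

# With arbitrary entries, everybody moves down a step and the last entry
# reappears at the top: (a, b, c, d) -> (d, a, b, c).
print(M @ np.array([10, 20, 30, 40]))  # [40 10 20 30]
```

Running it shows the 1 sliding down one slot per step and recycling from the bottom back to the top, which is exactly the cyclic rule: one goes to two, two goes to three, three goes to four, four goes back to one.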
So, that's a little thing to check. In fact, you can put in any numbers here. Only zeros and ones may make sense, but we could put in any numbers A, B, C, D. And what will come out over here is everybody will move down a step, A, B, C, but then D will move up to the top. So if you put a one in any one of the places, it will slide down one unit and then reappear at the top. The point is that the evolution of systems can be represented by matrices of a particular kind. In classical physics, in this kind of classical physics, there are always just zeros and ones here. Quantum mechanics is more complicated and more difficult, but in classical physics it's just sprinklings of zeros and ones, so as to make each state shift into the next one. That's an example of the use of matrices in classical physics. So far, no quantum mechanics, just pure classical physics. There is an interesting, well, all right, this will do for the time being. We'll come back to it. So that's an example of matrix algebra, matrices multiplying vectors. What about matrices multiplying matrices? Why might we want to multiply matrices by matrices? Well, here's the idea. Supposing we wanted to update a second time. What we would do would be to apply the same matrix to the resultant that we got. In other words, let's write it this way. Let's write it abstractly. We have a matrix M, which we multiply by a vector V to get a new vector V prime. That's just abstract notation for writing a matrix and a vector and getting a new vector. That's updating the vector V to a new vector. Let's update it again. Let's go one more interval of time. How do we do that? Well, what we do is we write M times V prime equals V double prime. We would do the same updating trick, except now update V prime instead of V. And we would get V double prime, V double prime being the state of the system after two units of time. But we could also write that by realizing that V prime is M times V. We could write this as M times M times V is equal to V double prime. This just means we apply the matrix twice. We can also think of it as squaring the matrix M and then multiplying it by V. How do you square a matrix? How do you multiply one matrix by another matrix? This is what you would do if you would want to update twice, once with one matrix and then once with another matrix or the same matrix. How do you multiply matrices? The answer is basically the same kind of rule. I will do it now for two by two matrices because it's getting too complicated even for four by four matrices. For a two by two matrix, we have M11, M12, M21, M22. And let's take some other matrix N: N11, N12, N21, N22. The result of multiplying a matrix by a matrix is another matrix. It's another matrix. We do it in a very similar manner. Supposing we want the one-one entry here. We get the one-one entry by taking the first row and multiplying it by the first column. M11 times N11 plus M12 times N21. Same kind of inner product and we put it over here. Now supposing we want the next entry. For the next entry, we take the first row because after all, we're interested in the first row up here. We take the first row but multiply it by the second column over here. So what would be over here would be M11 times N12 plus M12 times N22. I'm not going to write it all out. Now we can move down to the bottom. To the bottom, if we wanted this entry, we would take the bottom row and multiply it by the first column.
If we want the last entry over here, we would take the bottom row and multiply it by the last column. So we multiply matrices by the same kind of pattern that we multiplied matrices times vectors. We can simply think of it as multiplying this matrix by this vector, putting it over here. Multiply this matrix by this vector, put it over here. So there's a notion of multiplying matrices. What multiplying matrices does is it gives you a new matrix which updates you not by one interval of time but updates you by two intervals of time. If you wanted to short circuit the problem of updating and you wanted to update the state of a system by five units of time, what you would do is multiply the matrix together five times. You do it in sequence. First the first times the next, then the result times the next one, then that result times the next one. You can work out what the matrix is which would take you from the state of the system at an instant of time to the state of the system five instants later. So matrix multiplication, multiplying matrices by matrices, is also an important concept. One last example of matrix algebra involves row vectors. Supposing you have a row vector and you want to multiply it by a matrix. The rule is you write the row vector first, B1, B2, B3, B4, and then you write the matrix, M11, M12, M13, M14, and so forth, the entire matrix. Well, what's the result going to be? The result is going to be a row vector. Here's the way you get the entries of the row vector. The first entry of the row vector you get by taking the original row vector and multiplying it by the first column vector over here. That product is the first entry. Then you take the original row and multiply it by the second column; that gives you the second entry. Then you take the original row and you multiply it by the third column; that gives you the third entry over here, and so forth. You see the pattern, it's always multiplying rows by columns and putting the result in the right place, in the right row and column. In this case, a row vector times a matrix is another row vector. A matrix times a column vector, here it is, a matrix times a column vector is another column vector. And a matrix times a matrix is another matrix. Get familiar with that. Work out some examples. Work out some examples of your own devising, just put some numbers in, multiply row vectors times matrices, matrices times column vectors, and matrices times matrices, and get the experience of working out how these things work. Do it for two by two or three by three matrices and you'll get familiar with it because we will use it over and over and over again. In fact, the primary mathematical operation of quantum mechanics is multiplying rows and columns times matrices. If you know how to do that and you're familiar with it and you can read off the answers easily, you've got all of the basic mathematics of quantum mechanics. It would help to have a little bit of calculus to go with it, but the basic new thing is matrix multiplication and column vectors and row vectors. So please, practice with it a little bit. I should have made up some examples for you to do, but you can make up your own. They're very straightforward. Okay, we're getting close to nine o'clock. Are there any questions? Next time we're going to start talking about qubits, quantum bits, and how quantum bits are very different from classical bits. Question? Yes? What's your name? My name?
Leonard Susskind. Oh, Susskind, if you like polishing the apple for the professor, you can call me Leonardo. I like that very much. Said the other way, what restriction does reversibility place on it? Yeah, yeah, that it have an inverse. Okay. Right. Yeah, yeah, but I mean, in a more abstract sense, the answer is that it should have an inverse, that the matrix should have an inverse. And the inverse, of course, is the thing that takes you back. Not all matrices have inverses. And you know what an inverse is. Yeah, okay, good. And if you don't, we'll come to it. Any other question? Yes, that is a good question. Yeah. The final exam is buying me lunch. That's a lot of lunches out there, boy. Look, nobody asks me about grading the class. I mean, a lot of you have been here before, so you know my policies. My policies are you're here to learn physics. There is nobody here who is here for a degree. And if you are, then I'd be glad to give you a numerical grade if you need one. In fact, everybody needs a numerical grade. I know that there's an enormous difference in the level of preparation of different people here. And to compare you in an exam setting wouldn't make sense because I do know that there's an enormous difference. I know that everybody here is here because you want to be here and you want to learn physics and not because you have to be here. So my policy is to either not grade the course at all or, if somebody needs a grade for some particular purpose, to give a D minus, the lowest possible grade, the lowest possible passing grade. So I didn't tell you what it's for yet. That's right. It's an example of how you can use it to implement the idea of updating a vector from one instant to another. It's one example. But I haven't told you yet why we're doing this. I often spend an hour talking about qualitative aspects of physics. In this case, it was how do you abstractly think about deterministic physics, abstractly in terms of bits and so forth, and then spend some time doing some mathematics which really I won't tell you what it's for until the next time. But I want to make sure, since I'm going to start doing some quantum mechanics the next time, I want to make sure that everybody will recognize the little bit of algebraic manipulations that we'll do and have the mathematics for the next time. So it's really for the next time that I set this up. Yeah? I think you will see. I think it will be clear. I think it will be clear. Yes, I do promise to tell you why. No. Is that quantum junkies? No, it's not.
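Since the lecture leaves the practice examples to the listener, here is one possible set (not from the lecture), worked in Python with NumPy so the answers can be checked by hand against the rows-times-columns rule; all the numbers are invented:

```python
import numpy as np

M = np.array([[1, 2],
              [3, 4]])
N = np.array([[0, 1],
              [1, 0]])
v = np.array([5, 6])   # treat as a column vector
b = np.array([7, 8])   # treat as a row vector

print(M @ v)      # matrix times column vector -> column vector: [17 39]
print(b @ M)      # row vector times matrix    -> row vector:    [31 46]
print(M @ N)      # matrix times matrix        -> matrix [[2 1], [4 3]]
print(M @ M @ v)  # updating twice, i.e. M squared acting on v:  [95 207]
```

Every printed entry comes from multiplying the appropriate row by the appropriate column, which is the only rule in play.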
Lecture 1 of Leonard Susskind's course concentrating on Quantum Entanglements (Part 1, Fall 2006). Recorded September 25, 2006 at Stanford University. This Stanford Continuing Studies course is the first of a three-quarter sequence of classes exploring the "quantum entanglements" in modern theoretical physics. Leonard Susskind is the Felix Bloch Professor of Physics at Stanford University.
10.5446/15089 (DOI)
Stanford University. Okay, so we've talked about a lot of things so far. Fields, quanta of fields, the relationship between fields and particles. That's the relationship between fields and their quanta. What are the quanta? The quanta are the discrete, indivisible units of energy that quantum mechanics implies for waves, and waves of course are fields. Fields and waves are more or less the same thing. We've discussed the properties, not the properties of individual particles very much, but we've discussed some general properties, the properties of having energy, momentum, the properties of having spin, and the property of either being a fermion or a boson. Every particle is either a fermion or a boson. Fermions are the particles that don't like to live together in the same state. Bosons are the particles that do like to congregate and do condense into the same state. We've talked about wave equations on occasion, or the field equations if you like, more generally. There are various kinds of field equations. And we talked about how field equations can be generated from Lagrangians, from action principles. The quantum mechanical version of the Lagrangian is a kind of tool for codifying, whatever the right word is, the interactions between particles. Various terms in the Lagrangian, things like, for example, products of fields, represent processes which we can call vertices, where particles come in and other particles go out. That would, for example, be a term in the Lagrangian which would have four powers of a field in it. So we've talked about these general things. We haven't talked very much about specific particles. Tonight, well, I expected to spend the first half hour beating around the bush a little bit. I'm going to spend a few more minutes beating around the bush. But then I want to come to really writing down what the particles are, who are the players in the drama, and start going through them more or less, well, not quite one by one, but in groups, and trying to give them some personality, give them some names, some properties, and also what it is they do and how they come into physics, into ordinary physics. Okay, but before I do that, there's kind of a triangle of concepts that we've discussed two legs of, but not the third leg very much. The triangle is particles, or field quanta, and fields, and in modern physics there's pretty much a field for every elementary particle. Now, we're going to have to think very hard in the future, not tonight, about what is an elementary particle and what is a composite particle. We will come back to that. And you might think, I mean, everybody here probably has some answer if I were to ask you, how do you distinguish a composite particle from an elementary particle? And you will always be frustrated. You will always find some slippery reason that your definition didn't work. But for the moment, let's nevertheless imagine that there are elementary particles and that the elementary particles are the quanta of elementary fields, fundamental fields in the theory. And so they're more or less in one-to-one correspondence. But there's a third leg to this triangle. Anybody know what it is? Of course, you don't know what I'm thinking, so even if you did know what it was, you still wouldn't know it. Forces, forces, forces. So you can sense forces, forces. Okay, basically for every field there is a particle. For every field there is a force. Electric field gives rise to electric forces. Gravitational field gives rise to gravitational forces.
It may be less obvious to you that the electron field (the electron has a field) also gives rise to forces, and we'll talk about them a little bit. So there's kind of a correspondence that goes this way, there's a correspondence which goes this way. There must also be a correspondence which goes this way, a way of thinking about forces which is not based on fields but which is based on particles, if this correspondence or this triangle really makes sense. So I want to talk about that a little bit. Many of you know the idea, but let's just discuss it for a moment. Let me first discuss it in the context of electrodynamics. How forces come out of the field way of thinking, now this is classical electrodynamics for the moment, the complete field view of things. Supposing I put two charges into space, I want to know the force between them. How do I calculate it? Well, you could know the rule that the force is equal to the product of the charges times one over r squared and so forth. Or you could try to calculate it a different way. You could try to say, what do these two charges do to space? What they do to space is they create electric fields. Essentially that's the thing they do. They create an electric field that fills space. And I don't want to write down the details of what that electric field is. But let's just say this is the electric field. Let's forget the second particle and just write down the electric field due to the first particle, whatever it is. The electric field due to the first particle, incidentally, is proportional to the electric charge of the first particle. So let's explicitly put that in: e of the first particle times the electric field with the electric charge divided out. This electric field here, I've divided out the electric charge. Maybe I shouldn't call it E. Let's call it E without the factor of the electric charge in it and put a little hat over it. That's this electric field. And what is the field energy? Anybody know what the formula for the field energy is in an electric field? Electric fields have energy. They store their energy as a distribution of energy in space. And the field energy, the density of field energy, is the square of the electric field. So we square this and we integrate it over all space, the volume, and that's the energy stored in the electric field of the particle. Now, this energy does not depend on where we put the particle, right? The field will move. If we move the particle, the field will move with it, but the field energy will always be the same. The electric field will adjust to the position of the charge. The field will always give rise to exactly the same field energy. And this field energy here, remember E equals mc squared, that field energy is part of the mass of the particle. It contributes, the field contributes to the energy of the particle, and therefore it contributes to the mass of the particle. There's another way of thinking about it, a bit of renormalization: there's some extra mass there because the field surrounds the particle in this way. Okay, I don't want to belabor that now. What about the second particle? Supposing the first particle wasn't there, but the second particle was there. This, of course, is the energy, not the electric field. This is the energy. If there was only particle two, the field energy would be the integral of e2 times the electric field of the second particle, squared. Now, if the two particles are not at the same place, well, let's not worry about the two particles.
This one would also be independent of the position of this particle here. What about the energy if I have both particles? If I have both particles, then I have to add up the field first before squaring it. So the electric field, if I have two particles, will be the sum of the two electric fields, electric field, not energy. Let's call this energy... well, let's not call it H. Energy, just En. The energy, first of all, the electric field. The electric field will be the sum of the two electric fields. The electric field will be E1 plus E2, where these are the electric fields of particle one and particle two. This will be the total electric field. Incidentally, the electric fields are vectors, and so we might put little vector signs above them. The field energy you get by squaring this whole thing. So the field energy is the integral over space. There are three terms. The first term is E1 squared, E1 squared. It's gotten by squaring this. The second term is plus E2 squared, E2 squared. And the third term is the interesting one. It's the product of E1 and E2, twice as a matter of fact: two times E1 dot E2. Actually, dot product. All right. What is this first term here? It's the self-energy. It's the self-energy of one particle by itself. It doesn't depend on the position of the particle. It's just a number. It's part of the mass of the particle. Let's forget it. It's already there as part of the mass of the particle. What about this one? Also part of the self-energy, of the second particle. What about this one here? This one is not the energy of either particle separately. It's proportional to the product of the charges. And it's proportional to the product of the fields. What if the particles were infinitely far away, so far away that their Coulomb fields hardly overlapped at all? You take one particle so far away that the Coulomb fields are hardly overlapping. Then this is going to be zero, because wherever E1 is not zero, E2 will be zero, or almost zero. Wherever E2 is non-zero, E1 will be close to zero. So this will be very, very small as you take the particles apart. But what about when you bring them together? When you bring them together, the product of E1 times E2 will not be small. Nearby, where the particles are, both fields will be appreciable, and you will get a contribution from this. What is that contribution? That contribution as a function of distance, well, first of all, it's proportional to the product of the charges. It depends on the distance. And of course, if you work it out, as you might expect, it's nothing but the Coulomb force, one over r squared. But I like to say it this way because it gives you the following picture: putting in both charges, taking them far apart, they don't affect each other, they're simply just independent objects. As you bring them together, they deform and distort the field in a way that depends on the distance between them, and the distortions of the field in between them, that field energy, that field energy, which is the contribution because both of them are there, that's the force law. So that's a purely field point of view of forces. A purely field point of view of forces. But if fields are nothing but collections of quanta, and quanta are particles, there must also be a way to think about forces in terms of particles. So let's talk about that a little bit. Before discussing electrodynamics in this way, let's talk about a slightly different setup. Or actually a completely different setup.
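Before the lecture switches to molecular forces, the blackboard computation just described can be restated compactly (an editorial summary, with overall constant factors suppressed):

```latex
\vec{E} = e_1 \hat{E}_1 + e_2 \hat{E}_2 ,
\qquad
\text{Energy} \;\propto\; \int \vec{E}\cdot\vec{E}\, dV
  = e_1^{2}\int \hat{E}_1^{2}\, dV
  + e_2^{2}\int \hat{E}_2^{2}\, dV
  + 2\, e_1 e_2 \int \hat{E}_1\cdot\hat{E}_2\, dV .
```

The first two terms are the self-energies and do not depend on where the charges sit; the cross term depends on the separation, and for Coulomb fields it works out to an interaction energy proportional to e1 e2 over r, whose derivative with respect to distance gives the one-over-r-squared force described above.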
Let's talk about molecular forces. What do molecular forces come from? I'll give you one source, there are several origins of molecular forces. But here's a setup I want to think about. Let's begin with a pair of protons and a single electron. What is this? A proton over here, a proton over here, and a single electron. If this proton were very far away, then the ground state of the electron in the presence of the proton would just be the hydrogen atom ground state. It would be governed by some sort of Schrodinger wave function. Let's draw the Schrodinger wave function. The ground state would be the ground state of the hydrogen atom. And in the ground state, if this proton weren't there at all, not there at all, the electron in its ground state would certainly be found in the position nearby this proton. Now what if there are two protons over here? One proton over here, another proton over here, pretty far away, but not infinitely far away. Then there is another possible state of the electron with exactly the same energy where the electron is not over here, but it's over here with the corresponding wave function pushed over to here. It has exactly the same energy. Why? Because the two protons are of the same kind, and everything is completely symmetric over here. So if the electron is in orbit, well, a quantum mechanical orbit around this proton, or if it's in quantum mechanical orbit around this proton, the ground state energy will be exactly the same as if there was only one proton. But it's not quite true. Why isn't it exactly true? Well, if the protons are not infinitely far away, there are processes that could happen which could not happen if they were infinitely far away. Anybody know what the process is that can happen to the electron if the two... Tunneling effect. Tunneling effect. Yeah, the tunneling effect. The electron placed over here into its ground state can, with a small probability, suddenly appear in this atom over here. That's called quantum mechanical tunneling. It has to overcome an energy barrier in between the two. It takes some energy to pull the electron out of this atom. It'll get that energy back when it drops into this atom, but it has to go over this hill. Can it go over the hill? Classically it can't, but quantum mechanically it can tunnel from one place to another. Don't stand there and try to watch the tunneling. What happens if you try to watch for the electron going back and forth? Like anything else in quantum mechanics, watching it ruins it. You can watch over here. You can say, well, you can stand over here and wait for the electron to appear over here, but if you watch it from in here, you'll ruin the phenomenon. So the electron can hop back and forth. Hop back and forth simply means that there's a tunneling transition, a tunneling rate or a tunneling probability that if you put it over here, it can appear over here. If it appears over here and you wait a while, it will reappear over here and so forth. Okay, this process sets up a kind of equilibrium, long-term equilibrium in which the electron has an equal probability of being over here and over here with a wave function which looks like the sum of the two wave functions. Well, the sum divided by the square root of two in order for probability to add up to one. All right, so it looks like the sum of the two wave functions, which I didn't draw very well, but in the middle over here, it's slightly different. 
You might draw the exact wave function of the electron orbiting particle one as if particle two were not there. Same thing for particle two. And then add them up. Well, the correct tunneling wave function, the correct equilibrium wave function, is not exactly the sum of these two wave functions. It's a little bit different in here, a little bit, not much. But the point is the energy of the system is not exactly what it would be if the two protons were infinitely far away from each other. If they were infinitely far away from each other, you could put the electron over here, it would have a certain energy, or over here and it would have exactly the same energy. When you put them closer together, the energy of the electron is not exactly the same as it would be if they were far apart, if they were infinitely far apart. So if I were to plot the energy as a function of the distance between them, when they're very, very far apart, the energy of that electron is simply the energy of the electron with only one proton, and the other one just not there at all. At large distances, the energy would just be whatever it was. But as you bring the protons together, the energy of that two protons plus electron is a little bit less, turns out to be a little bit less, and that little bit less is a function of the distance between them. So now we have a situation when the system is in equilibrium with the electron tunneling back and forth and in equilibrium between the two, the energy is a function of the distance between them. What's the relationship between force and energy? You can think of this as potential energy. You can think of this as a potential energy between the two protons. The derivative of the energy. So the derivative of the energy, the energy has a gradient in here, force always pushes you toward decreased energy, and so this effectively creates an attraction between the two protons. That attraction is a kind of covalent bond. It's a kind of force between the two of them, which is due to the sharing of an electron, or to the fact that the electron can be in a superposition of quantum states in the two wells, in the two potentials of these two things here. Now, does that mean that a proton and another proton with an electron between them, which has this attractive force, will bind together? Not quite, because there's another force around. What's the other force around? The electrostatic force. The system doesn't have zero charge altogether, and so there's a repulsion force, and the repulsion force can overwhelm the attraction force, but nevertheless there is a force between two protons if there is an electron around, which is due, in a sense, to the jumping back and forth of the electron, or to the diminishing of the energy by the wave function sort of adjusting a little bit, a little bit better than it would if you just added the two wave functions together. As I said, that's the origin of covalent bonds, sharing of electrons, and in this case you can think of it, you can draw a picture. Now, the picture may or may not do anything for you. You have a proton over here, that's the world line of a proton, another world line of a proton, and the electron might start out being bound to this proton, hop across to this proton, hop back across to this proton. 
You shouldn't take this too literally, because if you stood in the middle, as I said, you would disrupt the process, but it's a description of the quantum mechanical state of the electron between the two protons. What it does, this possibility of the electron jumping back and forth, lowers the energy relative to if you just had one proton and just one electron in a bound state there. That means that there is an effective force, and you can picture it. This picture, like any analogy, is often very, very misleading, but the presence of the electron in between the two protons creates an attractive force. It's called particle exchange, it's called particle exchange or electron exchange between the two protons. Now, yeah. So, is it that the electron hopping back and forth is reducing the effective positive charge? No, I don't think that that is quite the right picture. It doesn't really depend on the electric charges. It just depends on the fact that you can find, by adding together two wave functions, you can find a wave function with a little bit lower energy than either one separately. Is this what the magnet referred to as like likes like? That's what the magnet used to say: like likes like. Like likes like, I don't know. No, no, no, no, he was probably talking about the Bose statistics, that photons like to be in the same state. I don't know what he was talking about. He said the same thing, he said that like likes like. Electrons like each other, photons? No, no, electrons hate each other. They don't like to be in the same state. Photons like each other, they love to be in the same state. I think he was probably talking about the opposite of the exclusion principle for photons, the Bose statistics of them. Of course, I don't know. Protons have a tendency to repel each other? Protons. Yeah, that's because of their electric charge. Right, now what happens if we put the electron between them? Is that reducing the repulsion? It does a little bit, but not very much. In fact, the force due to the electron jumping back and forth is a much shorter range force than the electrostatic force. So this force would only be important at relatively small distances, whereas the Coulomb force would fall off much more slowly. So the Coulomb force would be a longer range force. But, you know, it is there. It is there. The reason why it shows up is if you have a neutral system. I mean, it shows up better if you have a neutral system. If you have two electrons so that the total charge of the system is zero, then of course you'll have the possibility of one electron orbiting around this proton, a second electron orbiting around this proton here. Now, the two objects here, they're called atoms, hydrogen atoms, and they're neutral. So if you have a neutral system, there is no electrostatic force between them, but you still have this possibility of electrons jumping across. Electrons can jump across, tunnel across, and then it does create an attractive force which can bind an atom, bind a molecule. Yeah, yeah, yeah, it is bound, but... Yeah, at small distances. At small distances, it is enough to overcome the Coulomb force. That's right. Yeah. Is there a possibility between the two protons that creates some kind of tension between them, that creates some third force that the electrons kind of orbit around? Well, it doesn't create a third force, but it creates a sort of slightly nonlinear effect due to the two forces.
But it's just that the solution of the Schrodinger equation in the presence of the two force centers does not have the energy that it would have if there were just one of those charges there. But let's leave it at that. There is this notion of the energy being lowered by the possibility of alternating between two quantum states. Is this at all related to quantum entanglement? Not so much quantum entanglement. It's related to what are called off-diagonal elements in the Hamiltonian, but it's not too closely related to entanglement, I would say. For entanglement, normally you need two electrons. You only need one electron for this. For very, very, very high temperatures, does the electron move between the protons with a much higher probability? Well, at high temperatures, because it depends on the temperature, at very, very high temperatures, all that will happen is the electron will get kicked out and go to Alpha Centauri, and it won't even remember the fact that there was a proton around. That's if it gets hit hard enough by a very hot photon. But if the temperature is not that extreme, what it can do is, even if you only had a single atom, it could cause the electron to occupy higher levels. Now, higher levels have bigger wave functions, so the overlap between the higher wave functions could be larger, and it would certainly have some effect on the force. But that's a kind of special effect. All right, we have a concept that we see in classical electrodynamics: the effect of having two charges changes the energy in between relative to what it would have been if you only had one, and it creates a force. Similar kind of thing here. If you have two force centers and the possibility of the electron being in a superposition of states, in other words, of the equilibrium being in a quantum superposition of the two states, the two states being over here or over here, that also lowers the energy and creates a force. Is there a way to think about the Coulomb force here in quantum mechanics where it is in some sense similar to this jumping back and forth of the electron? Yeah, there is. Let's come back to a single proton. A single proton is interacting with the electromagnetic field. That means it's interacting with the field whose quanta are photons. Classically, we might just describe it by solving the field equations for the electric field in the presence of a charged particle. Quantum mechanically, we think about it differently. Particularly in quantum field theory, we think about it as the emission and absorption of photons. That's what the Lagrangian of electrodynamics tells us. It tells us about the probabilities for the emission and absorption of photons. So one way of thinking about the electron is that the actual physical electron is a quantum mechanical superposition of states in which, first of all, there are no photons. A superposition of a state with no photons. A superposition of states in which a photon has been emitted, and therefore there's a photon present. So it's a charged particle together with a photon around. Two photons can be emitted. The photon can be reabsorbed. The net effect in the end is some kind of equilibrium distribution, a quantum mechanical equilibrium distribution of photons surrounding the electron. And you can measure those photons. I mean, it's a little bit different than measuring free photons. These photons are sort of trapped at the electron. They're emitted and reabsorbed. And if I have another electron over here, it is also emitting and absorbing photons.
So there's an equilibrium. But every so often, now this should be thought of in the language of Feynman diagrams, every so often a photon which is emitted from here gets absorbed over here. And I always think about it as two jugglers, each juggling balls. And every so often, if they happen to be close enough together, a mistake will be made. And Joe will grab Mohr's juggling objects. Really what you really want to do is you really want to calculate the energy due to the electromagnetic field or due to the quantum superposition of photons that are present. You want to calculate that energy as a function of the distance between the charged particles. When they're very far apart, the two energies just add. They just get the sum of the energy of one charged particle and another charged particle. But as they get closer together, the charges begin to influence each other or in the language of Feynman diagrams, the emission of one photon by one particle and the absorption of that same photon by another particle creates a force. Creates an energy which is not exactly the sum of the two energies. That energy is the Coulomb force between the charged particles. So we have two distinct languages. Well, apart from just writing down the Coulomb force law, we have two other ways of thinking about forces. One of them is through classical field theory where you calculate the fields of objects and square them. And the other way is through the exchange of particles. Feynman diagrams with particles are exchanged back and forth. Any particle can be exchanged in some context or another. So every kind of particle in one way or another produces a force. In molecular physics, it's the exchange of electrons which create forces. In electrodynamics, it's the exchange of photons back and forth which create forces. So any particle is also connected with a force when that particle can be exchanged or jump back and forth between two something else. We'll describe some more examples, but I did want, before we move on, to just discuss the relationship between particles and forces. So we more or less have a one-to-one-to-one correspondence. Particles, fields for which those particles are the quanta, and forces which are associated with exchange processes where that particular kind of particle can bounce or jump back and forth. That's an important theme. So when people say that there are four forces in nature, now there's a force for every possible kind of particle. And we will discuss some of them as we go along. Okay, I think it's time now to start naming the particles, listing them, listing their properties, and discussing what they can do, what kind of processes they can engage in. And I could write down a big long list of all the elementary particles and then give you a test next week to see whether you memorized it or not. That wouldn't be fun at all. I think it's probably better to divide them up into small groups, which are in some way simply related to each other, get familiar with them a little bit and what they can do before we try to write down the whole damn standard model of particle physics, which is a monstrosity. I mean, you know, it's not my fault, certainly not my fault, that the standard model of elementary particle physics is an ugly monstrous mess. It's not Steve Weinberg's fault. It's nobody's fault, or if it is somebody's fault, that person, well, he may be in the room, but he's a little bit diffuse. Frankly, we don't understand why the particles are what they are. 
We don't understand why there is an electron and no particle whose name slips my mind because nobody's ever named it, because it doesn't exist, but nevertheless, why some particles exist and why other kinds of particles don't exist, we largely, by and large, don't know. We do understand some relationships between particles. If this one and that one and that one exist, then there's got to be another one to match up with them in some appropriate way. But at the end of the day, there are many more parameters, many more different types of particles, than there are known relationships between them which cut down the size of the problem. Okay, the mathematics and the relationships cut down the size of the problem somewhat, factor of two, factor of three or something like that, but there are hundreds of particles. So it is, honestly, a mess. If you want to understand what people are hoping for in the next round of experiments and so forth, you'll have to understand some of that mess and what the puzzles are about it. So let's begin. Let's name the particles. We begin with the most obvious ones. Let's make a table. Let's see, we need some columns. We can put the name of the particle over here. I'll put the symbol for the particle over here. I'll put the particle type over here. Now, what do I mean by type? I simply mean whether it's a fermion or boson. I'll put the electric charge over here. There's another quantity which characterizes particles. We'll call it the baryon number. I'll tell you what it is as we go along. And there are other properties. At the moment, I'm not listing them. And we'll put the mass over here. All right. The first one. Now, I'm not listing them in any particular order. As it happens, I am writing down the lightest one first. The lightest one is, of course, the photon. The standard symbol for a photon is a gamma, like for a gamma ray. It has a field associated with it. The gamma particle is the quantum of a field. In most cases, or many cases, the field carries the same symbol as the particle itself. But for the photon, the field is called A. And it's really the vector potential of the electromagnetic field. You could use electric or magnetic fields, but you can also use the vector potential, which is another way of describing the electromagnetic field. It is a boson. I didn't write down the spin, but the spin is one unit of spin. The electric charge is zero. The baryon number, whatever that is, is zero. And the mass is zero. So that's the first particle. The next particle of interest is the electron. Now, the electron is a particle which has an antiparticle. It has an electric charge. If it has an electric charge, it must have an antiparticle of the opposite charge. And it is a convention whether we think of the particle as the electron and the antiparticle as the positron, or whether we think of the positron as the particle and the electron as the antiparticle. The relation between them is mutual. So when I write the electron, I really mean the electron and positron. And so we could write E plus or minus here, E minus standing for the electron, E plus the positron. The field of the electron is usually psi. And maybe you could put a little E downstairs to indicate that it stands for the electron. It's made up out of creation and annihilation operators for electrons. Electrons are fermions. It's charged. Okay? It's charged in what units now?
Well, the standard particle physics or quantum mechanical unit for electric charge, believe it or not, is the electric charge of an electron. It's a good unit to work in terms of. You can go look it up in coulombs. It's 10 to the minus, what, 23 coulombs? 10 to the minus 19th coulombs, or, yeah. It's some very small charge. But it's not useful to think about it in coulombs. All particles in nature have electric charges which are integer multiples of the electron charge. This is not true of quarks, but quarks are not observable particles. All right. So the charge we'll just write as minus one for the electron and plus one for the positron. So since we typically think of the electron as the particle and the positron as the antiparticle, let's just call it minus one and remember that the positron has the opposite charge. Baryon number is zero. And its mass: now we have to decide on units for mass. Okay? Units for mass are the same as units for energy, E equals mc squared. So we can either describe the mass as a mass in kilograms or an energy in joules, or we can simply invent a new unit which is more appropriate to microscopic physics, to atomic physics. In fact, the unit that we use comes originally, historically, out of atomic physics. It is the unit in which the ionization energy of an electron in a hydrogen atom is, what, 13.5 electron volts. An electron volt is how much energy you get if you pass an electron across a capacitor at one volt. It's a very, very small amount of energy, some tiny fraction of a joule. But one electron volt is the standard unit that came historically out of atomic physics, the theory of electrons and so forth. It's going to wind up being too small a unit for us. Particle physics energies are larger than atomic physics energies, but for historical reasons we have the electron volt as our unit of energy. And the mass of an electron measured in electron volts is of order a million electron volts, one MeV. In fact, it's about half an MeV. Not exactly, I think it's 0.51 MeV, and MeV stands for millions of electron volts. And MeV will be too small a unit for many other purposes. As I said, particle physics is a mess, and the masses of particles extend all over the map. And so in trying to search for a useful unit, you're always frustrated. The useful unit will always be too small for the next... No, why? No, it's exactly 0.51. No, it's not. What's that? Pi equals 3. Pi equals 3. Right. Okay, now, quantum electrodynamics is the theory of electrons and photons, and we've talked about it a little bit, more than a little bit. And we could also write down over here a table of the various... I don't want to make it a table. We'll just draw some pictures. The basic elementary process that takes place in quantum electrodynamics, photons and electrons, is the emission of a photon by an electron. Electron, electron, photon, and as I've shown you several times, you can rearrange the legs of this diagram so that this could represent not the emission of a photon, but the absorption of a photon. A photon comes in from the past, or... Oops, sorry. You can even flip the electron legs around so that it looks like this. Electron going backward in time. Well, what does that mean? That really means a positron emitting a photon. So the basic building block for building up processes and interactions is this vertex with two electrons, electron, positron, whatever coming, one going in, one going out, and a photon coming out of it, or going into it.
Out of that, you build up all of the various processes of quantum electrodynamics, but in particular, the process or the phenomenon that corresponds to the force between charges, and in the language of particle exchange, it's the photon bouncing back and forth between charged particles which creates that force. Yet another way to describe what's going on here is in terms of a Lagrangian, or a term in a Lagrangian. The Lagrangian describing this process here describes one electron in, one electron out, and one photon out, and so you build that out of the field operators. One electron in is psi of the electron; it has the creation and annihilation operators in it. An electron out is psi dagger of the electron, and a photon emitted is the field operator for the photon, and if you plug in all the creation and annihilation operators, you will find in here, buried in here, terms where a photon is emitted, an electron absorbed, another electron emitted, and so forth. And this codifies, this describes all of the various basic processes that can happen in quantum electrodynamics. Now, there's a parameter that appears in the Lagrangian, the electric charge of the electron, and that, roughly speaking, is associated with the amplitude for an electron to emit a photon. The electron, let's say you hit the electron or whatever, the electron plows into the side of an x-ray machine. What's the probability that a photon will be emitted? Well, the amplitude is the electric charge, and the probability is proportional to the square of the electric charge. So all of this here is a way of describing all of these basic processes, and when they're combined together, they produce forces, they produce processes where photons are emitted; the whole world of quantum electrodynamics flows out of this, basically, one expression here. We won't try to go through that again now, but let's start adding a little bit to our particle list. The next particles of interest, these are, of course, familiar to you, I'm sure. Now, if this were 1960, when I was, well, I was, 1962, when I went to graduate school, probably the next particles I might write down would be protons and neutrons. Protons and neutrons are now subsumed by quarks. It is widely believed that, in some sense, protons and neutrons are composites of quarks, although we will come, remind me to come back to this issue of the difference, or what it means to be composite versus elementary. It's a very interesting story, but naively, quarks are smaller than protons and neutrons, and so protons and neutrons, which are big fat globs, are made up out of quarks. So, the next thing are quarks. There's a lot of quarks. We won't even get to all of them tonight, or the whole set of distinctions between them, I don't think we will, but there's more than one type of quark. There are six types of quarks, six distinct particles, all of which are very similar to each other in some respects, very different from each other in other respects. So, let's list them. They go under, they have names, all right? Well, why are they called what they are? They're called what they are just because, historically, randomly, people assigned silly names to them, and there's no logic to what they're called, no logic at all. There's, first of all, the up quark, and that's called U. The symbol is U. So, let's see, we should write quarks here, quarks. And I suppose the symbol for quarks is Q. All quarks are fermions. We'll write down their charge later. They all have baryon number of one-third.
Now, why one-third? Well, the reason is very simple. The baryon number is basically the counting of the number of protons and neutrons. Each proton is counted as having one unit of baryon number. Each neutron is counted as having one unit of baryon. Baryon is just a word for protons and neutrons, and a somewhat generalization of protons and neutrons. Nucleon is a word for protons and neutrons. There are other particles which will come to which are very similar to protons and neutrons. They're also called baryons. The prefix barry I think means heavy. And baryons were called baryons because they're a lot heavier than electrons. But counting up the number of protons and neutrons, what do you call the number of protons plus neutrons in an atom? Atomic weight. Atomic weight, yeah. The atomic weight, yeah. Is equal, approximately equal to the total number of protons and neutrons. And the total number of protons and neutrons in nuclear physics never changes. Protons can turn into neutrons. Neutrons can turn into protons plus other things. But the number of protons plus neutrons never changes. You do have isotopes. But still, the number of protons plus neutrons doesn't change. Number of protons can change. How can the number of protons change? No, I'm just saying that different types of protons have different numbers than neutrons. Oh, yeah. Indeed they do. But are there processes with number of protons and neutrons change? Protons, not plus neutrons, but protons and or neutrons change? Beta decay. Beta decay, radioactivity, radioactivity, one form of radioactivity, a neutron decays into a proton plus an electron and an antinutrino. Okay, so in that process, the number of protons changes by one. The number of neutrons changes by one. But the number of protons plus neutrons doesn't change. There's a generalization of that in all of nuclear physics and particle physics. There's a certain quantity, which for ordinary nuclear physics is just the number of protons plus neutrons. And that number is sometimes called the Baryon number. We'll have to generalize it a little bit beyond protons and neutrons. It's a Baryon number. And it could have been defined to be two for the proton or seven for the proton or pi for the proton. But whatever it is, it has to be defined as twice as much for two protons, three times as much for three protons. So it's a convention that you say the Baryon number of the proton is one. Incidentally, what's the Baryon number of the antiproton? Minus one. Minus one. Proton has Baryon number plus one. Antiproton, Baryon number minus one. Neutron, Baryon number one. Anti-neutron, Baryon number minus one, but clearly there's an element of convention in exactly what you call M. But once it was fixed that the Baryon number of a proton was one, then the Baryon number of a quark was a third, because three quarks make a proton. So it's just a historical glitch that the Baryon number of the most fundamental thing in nuclear physics is one third and not one. So if you look out at an antiquark, there are antiquarks. Baryon number is minus a third. So I won't write that down separately. The number of quarks is conserved? Number of quarks minus the number of antiquarks is conserved. Now, whether or not that is an absolute conservation law, it is not believed to be an absolute conservation law. What do I mean by absolute conservation law? Can a proton disappear? It can't completely disappear. Its energy has to go someplace. 
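To make the bookkeeping just described explicit, here is the beta-decay example written out, using only the assignments stated above:

\[ n \;\to\; p + e^- + \bar\nu: \qquad B:\; 1 \to 1 + 0 + 0, \qquad Q:\; 0 \to (+1) + (-1) + 0 , \]

so baryon number and electric charge both balance, even though the individual numbers of protons and neutrons change.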
Are there processes in nature where a proton might disappear and become something that is neither a proton nor a neutron nor anything else that you would call a baryon? For example, without violating charge conservation, a proton could become a positron and a photon. So a proton could decay into a positron and a photon. Does it? Well, you can wait. If you have a single proton and you sit there and wait, you will wait a long, long time before that proton decays. Its half-life to decay that way is at least 10 to the 33 or 10 to the 34 years. So this is not something that happens very regularly. Does it happen at all? The thinking is yes, it does, but this is one of the very, very interesting questions of particle physics, whether the proton can decay. And when you say can it decay, you mean decay into something that does not carry baryon number. And it is widely believed that the answer is yes, but it's never been demonstrated. Incidentally, how do you do it? Do you wait around for 10 to the 34 years for a proton to decay? You get 10 to the 34 protons and you wait around to see one of them decay. Now 10 to the 34 protons is a lot of protons. What if you have a proton and an antiproton? No, that's not that. That's zero baryon number going to zero baryon number. The proton has baryon number one, the antiproton has baryon number minus one. So that's not baryon violating. We call them baryon violating processes, processes in which the baryon number is not conserved. Nobody has ever witnessed such a process in the laboratory or anyplace else. Can they happen? There does not seem to be any reason why they cannot happen built into the mathematics of what we know. So we leave this as an open question, but empirically, and at least to a very, very high precision, baryon number is conserved, and quarks by definition have one third of a unit. One third of a unit. As I said, minus a third for antiquarks. But in order to fill in the rest of this, in particular electric charge and mass, we now have to label which kind of quark we're talking about. And there are lots of different kinds of quarks. So let's put in a list of different kinds of quarks. These are not new particles which are not quarks; this is just a subheading here, quarks, and then things underneath. There are up quarks and down quarks. Why are they called up and down? There's nothing up and down about them. What directions? Up and down means in the Earth's gravitational field. There's nothing up and down here about them. Then it gets worse. There are strange quarks, which are no stranger than non-strange quarks. It gets more and more silly. There are charmed quarks, which are charming, I suppose. And then there are, now this is, if you're squeamish, bottom quarks, bottom quarks, and then top quarks. The symbols are of course exactly the same; I just didn't feel like writing the words. Up, down, strange, charm, bottom, top. For a while people got a little pretentious and called bottom and top beauty and truth. Bottom and top, strange and charm. How and under what circumstances these things were named is not interesting. I don't know... I do know, but never mind. They're all fermions. Okay, the symbols are just what I wrote here. Sometimes q, sometimes psi q, sometimes capital Q. Sometimes, depending if you're talking about the individual types of them, you'll just use U, D, S, C, B, or T. Use the same names for the fields as you use for the particles. And there's no completely coherent convention that everybody sticks to.
All right, they're all fermions. The whole lot of them are fermions, as I said. But now come the charges. So it's useful to divide them into groups of two. These groups of two are sort of repetitions of each other. This group of two, in many ways, is isomorphic, not exactly, to this group of two, which is not the same, but which follows the same pattern. The pattern of these two is similar to the pattern of these two. Oh, I got them upside down. Excuse me, wait. If I want to keep the pattern, let me see, I think I want to put down, up; strange, charm; bottom, top. Yeah. All right, so the down, up system is very similar to the strange, charm system. It's very similar to the bottom, top system. So let's write down what the electric charges are, and you can see them. You can see this similarity. The down, up system has charge minus a third and two thirds. That's the first example of particles in nature which have charges which are not integer multiples of the electric charge of the electron. However, quarks by themselves do not exist freely in nature. They're always bound into other structures, protons, neutrons, and so forth, and those particles do have integer charges. Okay, the strange and the charm, exactly the same thing, minus a third and two thirds. Bottom and top, exactly the same, minus a third and two thirds. Can that be read? They're just repetitions of the same thing. What about the antiparticles? If I were to list the antiparticles over here, they would just have exactly the same charges as the particles except opposite sign. So this is simply replication. For some reason nobody understands. Nobody even has any idea why nature replicated itself this way. Are they still all baryon number one-third? Yeah, they're all baryon number one-third. Okay, and then the masses. The masses are not similar to each other. The masses wildly vary. I'm going to put down estimates for them because the precise value, well, there are precise values of the masses, but you can go look them up on Google or wherever you want. The down quark is about 10 MeV. So it's about 20 times heavier than the electron. A lot lighter than the proton. Incidentally, what's the proton mass? About 1,800 times the electron mass. In other words, about 940 MeV. This is a lot lighter than a proton. The up quark, about 5 MeV, less than the down quark. And protons are made only out of up quarks and down quarks. So it doesn't look like there's nearly enough mass in quarks to be the mass of a proton or neutron. Okay, we'll come back to that. The point is, of course, there's more in a proton than just its quarks. Okay, strange and charm. Strange is roughly order of magnitude 100 MeV. That's strange. And charm is a little over about 1,000 MeV. Now notice that the down quark is heavier than the up quark. But the strange quark, which is analogous to the down quark, is lighter than the charm quark. So there's some pattern of inversion here of the masses. Again, nobody understands at all why that's true. Yeah, charm is about 1,000? A little more than that. Oh, sorry, 100, missed a zero. Decimal creep. Okay, 1,000 MeV. And the bottom quark is about 5,000. Oh, this is called a GeV, giga electron volt. The bottom quark, see, we keep changing units, that's about 5 GeV. And the top quark is about 170 GeV. So how much heavier is the top quark than the up quark? Can anybody do that calculation? How much? 30,000. 30,000? A lot. A lot. All right.
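Collecting the rough numbers just quoted (the lecture's own order-of-magnitude values; precise measured values differ somewhat):

\[
\begin{array}{lll}
d:\; Q=-\tfrac13,\; m\sim 10\ \text{MeV} & \quad & u:\; Q=+\tfrac23,\; m\sim 5\ \text{MeV} \\
s:\; Q=-\tfrac13,\; m\sim 100\ \text{MeV} & \quad & c:\; Q=+\tfrac23,\; m\sim 1\ \text{GeV} \\
b:\; Q=-\tfrac13,\; m\sim 5\ \text{GeV} & \quad & t:\; Q=+\tfrac23,\; m\sim 170\ \text{GeV}
\end{array}
\]

all with baryon number one third (minus one third for the antiquarks). The ratio asked about at the end works out to roughly \( 170\ \text{GeV}/5\ \text{MeV} \approx 34{,}000 \), a few times ten thousand.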
So on the one hand, these particles are extremely similar to each other. They do the same things. They have the same properties. And yet they have masses which wander all over the map. Does anybody understand why? No. Are there any ideas? Yes. Do any of them work? No. We don't even have a clue about why there's... First of all, we don't even understand, of course, why there's even one of them. One family. These are called families. Why is there one family? Well, we don't know. We're not sure if there wasn't one family who wouldn't be here because there would be no protons and neutrons. But given that there's this one family of up quarks and down quarks, why are there strange quarks and charm quarks, or top quarks and bottom quarks, nobody has a clue. And that question is a stupid question. Yeah. When these quarks were found, the story goes that they were looking in a certain place because the math said the math should be a certain amount. I'm not talking about this. No, they said that they went through the calculations and they knew that this is where it was going to be, that it was going to be somewhere around some AGEV that they were going to find. Oh, the top quark. Yeah. And I was not sure if they predicted that it would be there before they went to the top quark. Yeah, but not on the basis of abstract theory, but on the basis of how the top quark would affect the work its way into other particles, top and bottom. I was just saying because you just said that the numbers nobody knows why and they're all over the ballpark, but yet they seem to know that that's where they wanted to look for it. Yeah, but not because there was any basic underlying reason why the top quark had that mass. What you could do is you could say, supposing the top quark had a certain mass and then calculate various Feynman diagrams involving the top quark in various ways, Feynman diagrams involving ordinary particles, not the top quark, but which had top quarks running around in the interior of diagrams. Okay? You calculate those diagrams. Those diagrams have an effect on processes involving other particles that you know very well, and then you say, how and what does the mass of this top quark have to be in order that it does the right thing for these other particles? Right? So it was not in any sense a deep understanding of why the mass of the top quark was what it was. It was simply an observation that you needed that particular value of the top quark mass in order that it be consistent with... Yeah, right. So you can't observe the quark, but you can predict how it will disappear, what will happen if there is one kind of showers and directions and various particles will be included. Okay. Go ahead. You're asking me the same. Yeah, that's what that was. To say that you can't detect the quark becomes us to a certain extent a matter of definition, what it means to detect the quark. You can certainly detect evidence of quarks. What you can't do is create an isolated quark in isolation from other things and then examine it. But we'll come. We're going to come to the question. I think you're probably remarking about jets produced. Well, I don't know what you're talking about. Yeah, we can talk about jets. Let's come back to it after we talk more about quarks. Does it act that some quarks have twice the charge of the absolute value as far as... Is that from theory or is that just a bookkeeping? It's semi-theory. Yeah, to some extent that is understood. I'm not going to explain it now, but now I'm telling you the facts. 
We can talk about how much of this is explained and how much of it... How much of the pattern is... The right question is how much of this pattern is required for theoretical mathematical consistency. But we haven't gotten any yet. Some of this pattern is required. Yeah, some of this pattern is required, but not much of it. Very little of it is required by mathematical consistency. Question? No. It's the relative abundance of the six kinds of quarks. Relative abundance in where? But what do you mean by relative abundance? You mean in the world? In the universe, yeah. Neutrons are unstable. They decay. Protons are stable. They exist in intergalactic space and everything else. They're all... Yeah, the only thing that has any appreciable abundance... All right, first of all, the only quarks which have any degree of stability are ups and downs. The rest of these decay to these. So the real abundance of real particles in the universe are all ups and downs. Okay? Now, what's the relative abundance of ups and downs? That's determined by how many quarks of each type there are in a proton. There are no neutrons. Oh, sorry, of course there are neutrons. There are neutrons in atoms, in nuclei. So you have to know the relative abundance of protons and neutrons to know the relative abundance of... But that's all it is, relative abundance of protons and neutrons. Okay, so what do we know? A proton and what is a neutron in this language? And let's come over here. A proton is three quarks and a neutron is three quarks. Now, as I said, I'm now in the business of telling you facts about these particles, not at this point trying to understand why these are the facts. We can't do everything at once. A neutron is two down quarks and an up quark. Let's just check that. An up quark has charged two-thirds and a down quark has charged minus a third. So two-thirds minus a third minus another third. This is electrically neutral. Charge equals zero. That's good. Neutron has charged zero. Proton. Proton is exactly the same as a neutron except with an interchange of up quarks and down quarks. Two up quarks and a down quark. Okay? Now, up quarks and down quarks are a lot alike except for their electric charge. The electric charge has to do with the interaction with photons. Let's forget photons. Let's forget about the existence of photons for the moment and just talk about quarks and the other particles which are important inside the nucleus and so forth. Then it would almost seem that a neutron and a proton are the same thing except for an interchange of the up label and the down label. You might then surmise that their properties are exactly the same if, for some reason, the electric charge is unimportant. In other words, supposing we don't have to worry about the interaction with photons, just forget photons, then it would seem that protons and neutrons should be identical because what's the difference? You just replaced up quarks by down quarks. It's not quite true because up quarks and down quarks don't have exactly the same mass. They don't have exactly the same mass. Now, 10 MeV by comparison with the mass of a proton is small. It's almost negligible. It is very small and it's almost negligible in nuclear physics or in particle physics. To some precision, to the extent that you can ignore the masses of the quarks, which are, after all, smaller by far than the masses of protons and neutrons, neutrons and protons are a good deal alike. 
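In symbols, the counting just done reads:

\[ n = ddu: \quad Q_n = -\tfrac13 - \tfrac13 + \tfrac23 = 0, \qquad\quad p = uud: \quad Q_p = +\tfrac23 + \tfrac23 - \tfrac13 = +1 . \]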
The only difference is the slight difference in mass between the up quarks and the down quarks. Which one is going to be heavier, the neutron or the proton? Of course, it's going to be the one with more down quarks, because down quarks are a bit heavier than up quarks. Down quarks are a bit heavier than up quarks, and so it's likely that the neutron is heavier than the proton, and it is. The neutron is a little bit heavier than the proton, and it's largely attributed to the fact that the neutron has more down quarks. Now, that's not all there is to the difference between protons and neutrons. If the down quarks and the up quarks had exactly the same mass, but you accounted for electromagnetism, in other words the photon, you said, okay, the photon is not completely unimportant, then which would be heavier? The proton or the neutron? Why? Yes, because it has some electrostatic self-energy because it's charged. So the proton would be a little bit heavier because of the self-energy of the electromagnetic field, the electric field; one would expect the proton to be a little bit heavier. In fact, the neutron is a little bit heavier, and the reason is because the down quark is a little bit heavier than the up quark. Okay, but other than that... You're going to give us the masses? Of course. Neutrons and protons. Yeah, the proton is about what, 938 MeV, and the neutron is about 939 MeV or so. What, about 1.3 MeV difference between them? Something like that, about 1.3 MeV difference between them. There's enough energy between them, there's enough energy separating them, that the neutron can afford a little bit of energy when it decays into a proton plus an electron plus an antineutrino. There's a little bit of excess energy in the neutron which allows it to decay, and allows it in particular to decay with the emission of an electron. Okay, so that's the very, very rough story of protons and neutrons. We'll come back to it. But now there are also these strange quarks. Let's forget charm quarks for a moment. There's an interesting question. Can you replace... Notice that in every respect, the down quarks and the strange quarks are similar to each other except for the fact that the mass is different. So you might ask, can you construct an analog of the proton and neutron where you pull out a down quark and replace it by a strange quark? Or perhaps two strange quarks. Pull out a down quark and replace it by a strange quark? The answer is yes. Of course you can, because in every respect the strange quarks and the down quarks are the same. It's sort of almost like an isotope. You pull out a proton and put in a neutron and it sort of sticks together sort of the same way. But it's even better here. So you can construct another kind of analog. These are called strange baryons. Strange baryons, I forget their names. Lambdas and Xis, they've got all kinds of names. And it's been so long since I worked on these things that I can't even remember their names. There are symbols like this associated with them, like this and the lambdas. I don't remember which one is which. Not important. But for example, you can create, I don't know what its name is, pull out a down quark and create a strange and a down and an up. What's the mass of this one over here? Sorry, not the mass. What is the charge of this one? Well, it's exactly the same as a neutron, because you pulled out a down and put in an s. And the s and the down have the same charge. So this is also electrically neutral. But it has a strange quark in it and it's called a strange baryon.
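In the same notation, the charge check for this first strange baryon (its standard name, not insisted on in the lecture, is the Lambda, or the Sigma zero, depending on how the quark state is combined) is

\[ sdu: \quad -\tfrac13 - \tfrac13 + \tfrac23 = 0 . \]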
They're not stable. They can decay. We're going to discuss the decay processes later. But they're not completely stable. So they evaporate. And this one will turn into that one. But yes, they do exist. Likewise, you can put in two strange quarks and an up quark. What is the charge of that one? Same thing. Charge neutral again. And likewise, for the proton, you can replace it by up, up and strange. So there are all these strange baryons. When those decay, what kind of products do you have? What kind of what? Products. Oh, protons and neutrons. Only protons and neutrons? Yeah, well, no, no. You get some protons and pions and other things, like electrons and other things. Of course, you can't just say protons. That's another matter. And you also only substituted strange quarks for down quarks. Yeah. And not for ups or... Well, the rules of what you can exchange are a little bit complicated. But at the moment, you can always exchange one for a similar one of the same charge. Now, you can also... Oh, incidentally, these strange particles are a little bit heavier than the protons and neutrons because of this extra 90 MeV of... Oh, no, yeah, an extra 90 MeV. 90 MeV isn't much. So they're somewhat heavier than the ordinary protons and neutrons. But in every respect, they're similar. The main difference of importance is that because they're heavier, they can decay to the protons and neutrons. And so they're not found in the universe in any abundance. All of those that you put up there so far can be constructed with antiparticles also? Oh, yes. Yeah, you'll have to take every particle and replace it by its antiparticle. Then it becomes the anti-neutron, the antiproton, and the anti-question-mark. Excuse me, you call these strange baryons. Is the strangeness just another word for saying they have strange quarks? And what if you put in a bottom quark instead? Okay, then they become bottom. Then they're bottom baryons. Then they're bottom baryons. Strange baryons have strange quarks put in, in essence. Yeah, yeah, yeah. Are they strange? I don't know. They were called strange because people became familiar with protons and neutrons by the 1950s. And then when they discovered these other things, they said, oh, that's strange. They were unexpected things. Yeah. Where's the missing mass, though, from the proton and neutron? The missing mass, what do you mean by that? Well, if you add up the mass of the... Oh, you mean if you just added up the masses of the quarks. Right. There's a lot missing. Yeah, that's right. There's a lot missing. So there must be something else in the proton and neutron. No, no, they're gluons. Yeah. No, there's no attempt here to be dramatic. But the gluons don't carry any interesting charge, baryon number. Do they carry mass? Well, they carry energy. So... Okay, so actually what you were saying there is it's binding energy. It's a form of binding energy, yeah. That's right. It's binding energy. Energies don't add. Not relativistically. I mean, well, masses don't add. The mass of an object is not necessarily the mass of its constituents. Like the E equals mc squared thing with the bomb type thing. Mm-hmm. Mm-hmm. Right. You're using the language of particles here to describe these composite particles. It seems more complicated if you use the language of fields. How would you combine a down field and an up field? It seems like you would combine everywhere and not be a particle. I see what you're saying. Fields.
Do you want to know whether you can construct a field to describe a proton? Yeah, you're using down and up as if they're particles. Yeah. And there's fields and they're everywhere and their modes of the field are combining these composite particles. Well, in thinking, yeah, I mean, in thinking about this particular aspect of things, it's much easier to think about particles than fields. We could think about it in terms of fields, but it's much harder. Question about the binding energies. Yeah. So if this difference between the quark energies and the proton energy or binding energy, that's a big positive energy. No, not an integer. It's not an integer. It's a big positive energy. Energy. Oh, I thought you said integer. Yeah. And usually when you think of things being bound. Usually negative. Negative. Right. Not true here. Well, yeah, I mean, we'll come to it, but yeah, sure. You talked about the three Gs or three thousandths of the N. Yes. Yes. Yes, there is. Yes, there is, but not yet. Let's talk first about mesons. Mesons are simply quark and anti-quark. Why, you know, you could raise the question, why do three quarks bind together to form something? Why don't two quarks bind together to form something? In the early days of quarkology, this was a complete mystery. Why quarks? Oh, you could say, oh, it's in order to make the charge an integer multiple of the electron. Well, that's a stupid answer. And not to be that stupid, but at this stage, it's a stupid answer. But yet, that is the pattern. That is the pattern that they combine together and forms in which this is not the only rule, but this is one of the rules of nature. We do understand it and we do understand where it comes from, but a rule, a rule of thumb at the present time, is the way things bind. They always bind into combinations which have integer multiple of the electron charge. Okay, so you can take that as a working rule temporarily until we have a deeper understanding of it. All right, mesons are also particles, composite particles, which also have the property that their charge is integer multiple of the electron charge, but they have zero baryon number. They do not have baryon number, which means that they must be made, that they're quark, they're combinations of quarks. They must be made of quarks and anti-quarks. In fact, they are quarks, a quark and an anti-quark. They're a quark and an anti-quark, and so let's write down. There are, first of all, the ones that you can make only out of ups and down quarks. Ups and down quarks are special, not in any deep sense, but they're special because they're much lighter than the other particles. They don't decay, or they don't decay as rapidly if they do decay. They're abundant in nature, and more than that, it doesn't take much energy and collisions to make them, so of course they were the first particles to be discovered, things made up of only up quarks and down quarks. The up quarks and down quarks are pretty light, and the mesons are typically pretty light. They're an up quark and an anti-quark, and then you can start thinking about the different combinations. There's an up bar quark and a down quark. There's an up quark and a down bar quark. There's an up bar and an up quark, and there's a down bar and a down quark. I think I got them all. Any quarks at the opposite spin? No, you can't do that. Don't they have to be the opposite time because it's one-third or two-thirds? So it's not going to be. No. Okay, let's work out what their electric charges are. 
The electric charge of an anti-up quark is minus two-thirds, and the down quark is minus one-third, and so this has charge minus one, right? Minus a third, minus two-thirds. That's fine. Charge minus one, like an electron. What about this one? This one is just the antiparticle of this one. Oh, yes, right. These two are the antiparticles of each other, because they are related by changing particles into antiparticles, down bar into down, up bar into up, and this one has charge plus one. So these are antiparticles of each other. Up bar up, that's electrically neutral, obviously, and charge equals zero. Now in fact, neither one of these two by itself corresponds to a distinct particle. Remember we're doing quantum mechanics. Each one of these states can be thought of as a quantum state. Think of it as a quantum state. The quantum state consisting of an anti-up quark and an up quark. The quantum state consisting of an anti-down quark and a down quark. One of the things you can do in quantum mechanics is to superpose states. If you have states, you can add them. You can superpose them. Superposing them means making combinations that have a probability for being either this one or that one. There are two combinations of these two. I'll tell you what they are. Up bar up, plus down bar down, and what would I put in front of it to make it have total probability one? One over square root of two. This is a quantum state, and this is the orthogonal quantum state: up bar up, minus down bar down. You know what this is? This is an entangled pair of quarks. Not entangled through their spin, but entangled through their upness or downness. If you thought of upness and downness as being analogous to the spin direction of a particle, up or down, and that's where this terminology came from, then these would be two entangled states, entangled not through their spin, but through their isospin, through their upness and downness. We haven't used the word isospin. Maybe I'll tell you what it means, but not right now. There's just this analogy between the spin directions and the upness or downness quantum number. These two things individually are well-defined particles. This one over here goes together with these two, and they are called the pions. They go together for a good reason, for a good mathematical reason, but they go together because their masses are very close to each other. It is believed they would be exactly the same mass if the up quark and down quark had the same exact mass. If the masses were the same. All three of these are virtually indistinguishable except for a small difference in their mass, and they have different charges: 0, plus 1, and minus 1. And this is called the pion multiplet, pi plus, minus, and 0. Three independent kinds of particles called pions, and as I said, we can discuss why nature chose these particular combinations to be analogous to each other, but for now I'm telling you facts and not mathematics. And the other combination is simply this one over here, and this is called the eta. So the four states here really define the three pions, pi plus, minus, and 0, and the eta meson. The eta meson is similar to a pion in many ways, but it's heavier. Why is it heavier? That is a complicated story. It's not an elementary story. It's a complicated story. Maybe we'll come to it, maybe we won't, but as a matter of fact it is heavier. What are the masses of the pions? The masses of the pions are about 140 MeV.
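Collecting the assignments just worked out, and labeling the neutral combinations by the standard two-flavor textbook convention (the lecture leaves which combination is which to the blackboard, so the labels below are the conventional ones rather than anything said here; the physical eta also contains some strange-antistrange admixture that has not entered yet):

\[ \pi^- = \bar u\,d \;(Q=-1), \qquad \pi^+ = \bar d\,u \;(Q=+1), \qquad \pi^0 = \tfrac{1}{\sqrt2}\left(\bar u u - \bar d d\right), \qquad \eta \approx \tfrac{1}{\sqrt2}\left(\bar u u + \bar d d\right). \]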
The mass of a pion, m sub pi, is about 140 MeV. So in other words, it's not too different from... question? Yeah, it's about 140 MeV. So it's somewhere in this range in here, not this heavy, and the mass of the eta is about three times heavier or something like that. The mass of the eta is about 500 MeV. I don't remember the precise number, but that's about right. Why that is, why the eta sticks out as being different, was a puzzle for a long time. Not obvious, not obvious. Even today, it's a rather abstract mathematical fact, but it is understood. Isn't it also true that they're relatively long-lived, half-life-wise, compared to most of them? Many orders of magnitude difference. They don't decay by strong interactions. They decay by weak and electromagnetic interactions. So they're longer lived than many of these other particles. Yes, that is true. They are long lived. Okay, those are the mesons. Did you give a name for the up bar up and the down bar down? Those by themselves? No, they don't have names by themselves. The linear combinations, which are the sums and differences of them, have definite names. Oh, I suppose they do. Yes, those do. Is that similar to, is that something like the neutrino oscillations? Well, in the sense that it's related to quantum superposition of states, yes, but it's got more to do with spin combinations of two half-spin particles. There are two ways that you can entangle. Well, we'll come to one next time. The last thing I simply want to tell you is that you can make strange mesons. You can take any down quark and replace it by a strange quark, or a down bar by a strange bar quark, and so forth. That makes new mesons which are called strange mesons, or, what's the other name for them? K mesons. K mesons are mesons containing a strange or an anti-strange quark, replacing a down quark. There are four of them: up bar strange, up strange bar, down bar strange, down strange bar. These two have charge zero. Charge zero, and these have charge plus one and minus one. These two are called the charged K mesons. These two are called the neutral ones, the neutral K mesons. So there are three kinds of pions, one kind of eta, and four kinds of... how many does that make all together, let's see. Three pions, four K mesons, and an eta, yeah. There's more, there's more, there's more. Did we leave something out? Yeah, we left out one, we left out some. Yeah, we left out s bar s, huh? Yeah, there's nine. Nine, nine altogether. The reason it's nine is simple. There are three kinds of quarks, three kinds of anti-quarks, three times three is nine. All right, okay. For more, please visit us at stanford.edu.
(January 11, 2010) Leonard Susskind, discusses the origin of covalent bonds, Coulomb's Law, and the names and properties of particles.
10.5446/15088 (DOI)
Stanford University. Yeah, all right, so let's just for a moment stop and say what we'll do next time. I'm not sure whether tonight I will get to one of the key puzzles, which is called the gauge hierarchy. We may. I had planned it in my notes and we may get to it, especially since I'm getting rid of the review. But next quarter I want to study some of these paradoxes, not paradoxes, but difficulties of the theory and how things like supersymmetry relate to them, how supersymmetry may or may not solve some of these puzzling features of the theory. How unification, what unification? Right now we have SU3, we have SU2 and we have U1. I'm talking now about the forces weak, electromagnetic and strong. Do these somehow sit in some bigger structure? Is there a group, for example, which contains the subgroups, SU3, SU2, and U1? I haven't told you what a subgroup is, but you can imagine. Is there a bigger structure, a bigger mathematical structure that contains SU3, SU2, and U1? Yes. There's a subgroup, there's a group called SU5. And there's a lot of interesting evidence that this whole structure that I've described so far fits very, very neatly into an SU5 structure in which quarks and leptons are all part of the same multiplets. We'll talk about that. Supersymmetry, unification, and the various puzzles that arise. And I will tell you a little bit about what LHC is cooked up to discover. Now, if one thing that LHC was cooked up originally to discover was, of course, the Higgs boson. The Higgs boson, discovering the Higgs boson is not just discovering the Higgs boson. For example, the last, it's more than just discovering the existence of a particle. It also involves a whole bunch of properties of this particle, which are all related to the fact that it is a Higgs boson. For example, I told you, I think it was the last time, how the Higgs boson and the Higgs phenomena is related to the masses of fermions. We looked at the Dirac equation. Instead of looking at the Dirac equation, we might have looked at the Lagrangian for the Dirac equation. I don't know if we ever wrote down the Lagrangian for the Dirac equation. I don't think we did. Let me write it down for you. Yeah, we did the Klein Gord. Let me write it down for you. It's very easy. We first write down the Dirac equation. Derivative with respect to x mu of, well, derivative with respect to time of psi, what was it? Minus or plus, I don't remember, plus derivative with respect to x sub i, where x sub i means x, y, or z, times alpha i psi. We can put the alpha i out here. And we set that equal to what? Beta times the mass, right? Times psi. That was the Dirac equation. Let me move everything over to the left-hand side. Did I ever tell you my greatest mathematical discovery? You don't need the equal sign. Right. You made that discovery, too. How old were you? Oh, I told you. Okay, this is the Dirac equation. Now, supposing I want to derive this Dirac equation by variation with respect to something. In particular, it happens to be psi star that you vary it with respect to. Well, it's a very easy way of constructing a Lagrangian such that when you vary with respect to psi star, it's the Dirac equation, which falls out. All you have to do is multiply the Dirac equation by psi star, psi dagger of psi star. Now it's not, now it's not zero, it's still zero, but now it is not the Dirac equation, it's the Lagrangian for the Dirac field. What does it have in it? It has terms, psi dagger, derivatives of psi. 
Those are the kinds of things which move a particle from one point to another. Remember in the Klein-Gordon equation, the kinetic term was associated with the hopping of the particle from one point to another. And that's what these terms are. It absorbs a particle and moves it to a new point of space-time. That's what the derivatives do. They move it to a new point of space-time, at the same time they jiggle its spin in a certain way. That's what these kinetic things are. And the mass term is just a term which takes a particle, absorbs it and emits it from a point of space with no derivatives at all. It absorbs it and emits it from the same spot. That's the Lagrangian for the Dirac equation. Do you need a gamma knot in there after this study? No. No? No. Okay. So you asked me, I hadn't used that language, but I will now use that language now. Good. Just for those who are curious about the more covariant language, if you take psi dagger and you multiply it by beta, that gives you something that's called psi bar. If you want to get back from psi bar to psi, you multiply it by beta. Why? Because beta squared is equal to one. So you can rewrite this in the following way. You write it as psi bar beta, that's just psi dagger, d by dt. And then over here, what we have, then we will have plus psi bar beta alpha sub i derivative with respect to x sub i psi. And then what about this one over here? Sorry, let's put the, yeah, this one is going to be psi dagger beta m psi. What will that be? No, no, well, it's just psi bar m psi. M psi bar m, m plus m psi bar psi. Now, if I change the name of these symbols, we can call beta gamma naught. Why naught? Because it's connected with, naught usually refers to time. The four coordinates of space-time are the zero component for time and one, two, three for space. So this one's usually called gamma naught. Beta is sometimes called, in fact, most frequently in modern physics. The alpha beta notation is mostly abandoned and you use gamma. So this becomes psi bar gamma naught dt psi. And this one, now we're going to change the name. Beta times alpha, just the product beta times alpha i, we're going to call gamma i, psi bar gamma i d i psi. Plus m psi bar psi. That's the Lagrangian. And now it has a very, very neat form. You can think of gamma naught d by dt as being gamma naught d by dx naught. This is gamma i d by dx i. It looks like a nice covariant sort of four vector product of gamma matrices times derivatives. In other words, you can write the whole thing in the form psi bar gamma mu d by dx mu. Let's just call it d by d mu. Now mu goes from one to four plus m psi. Okay, that's much more elegant than the original form that Derac wrote down. These are also, the alphas and betas are called Dirac matrices. The gammas are also called Dirac matrices. You said the psi on the bar left here was psi bar and you didn't bar it. All right, but all you have to remember though is that psi bar is not psi dagger. It's psi dagger with an extra gamma naught. This is the elegant form for the equation. In fact, psi bar psi happens to be a scalar. And psi bar gamma psi is a four vector. It has an index mu. So this is a more elegant way to present the Dirac equation. The Dirac Lagrangian, the Dirac equation is just setting this equal to zero. Okay, Dirac Lagrangian looks like this. Now, from this point of view, let's just examine, well, let's examine the various terms. You know there's a fifth Dirac matrix. Anybody know what it's called? Gamma five. Gamma five. What's missing? 
Gamma four. There's no gamma four. In fact, it's just a glitch. There's one more gamma matrix which is called gamma five, which happens to be the product of gamma north, gamma one, gamma two, gamma three. It's called gamma five. It happens to be one more matrix. And in fact, the eigenvectors of gamma five, gamma five squared is also equal to one. It has two eigenvalues, plus one and minus one. And the two eigenvectors, or the two eigenvalues, correspond to the left-handedness or the right-handedness of the particles. Gamma five is called helicity. Chirality, excuse me, chirality. Handedness. Chirol for hand, you know, like a, what was the Greek word for hand? Chiron, chirol, chirol, something with a chirol, CHIR. Chirality. Yeah, gamma five is called chirality, and it can be plus or minus one. And it corresponds to the handedness of the two particles. Now, if you work it out, if you work out the various things, you'll find that this term here does not couple the left-handed and the right-handed components of psi. Mean the things with eigenvalue plus one and minus one for gamma five. But just think of them as electrons or particles with a right-handed helicity or a left-handed helicity. These two here, sorry, these four here, do not multiply left-handed times right-handed. It just happens they don't. You can work that out by yourselves. This one does. Psi bar times psi happens to be, happens to be, happens to be, psi dagger left, psi right plus psi dagger right, psi left. That's what it is. So the mass term here mixes up the left-handed and the right-handed particles. That's what a mass is, a Dirac mass is. It's a term in the Lagrangian. It seems weird that it should have anything to do with inertia. It's a term which causes a left-handed fermion to flip to a right-handed fermion and so forth, and that's what's over here. The other terms don't do that. So that means a massless fermion has a definite chirality. The mass term flips the chirality back and forth and back and forth. Yeah. Then we talked about a class of interactions, the weak interactions. Let's put in particular focus on the z, the emission and absorption of z bosons. That is purely left-handed for some odd reason. For some odd reason, the z boson only is emitted by left-handed particles, not by right-handed particles. This is not true of the photon. This is only true of the z boson. Sorry, the w boson. The z boson is a little more kinky. It has asymmetric couplings to left-handed and right-handed. But let's focus on the w's. They couple through only the left-handed degrees of freedom. So that means that in effect with respect to the charges, with respect to the transformation properties, under the SU2, the SU2 that's connected with the w boson, the emission of w bosons, the left-handed fermion is charged and the right-handed fermion is uncharged. You're not allowed to put in a Lagrangian something like this, which would take a charged particle to an uncharged particle. So we're going to process in which a charged particle would come in and get absorbed, an uncharged particle would go out, and violate charged conservation. So this would not be a legal term in a Lagrangian that had a gauge symmetry which only coupled to left-handed particles. All right, then the question is how do you get a mass for fermions? The electrons and quarks, they all have masses, there's something wrong. Ah, here's where the Higgs boson comes into play. 
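Before the Higgs enters, it may help to collect what was just said in the standard notation, with the signs and factors of i that the lecture is deliberately glossing over following the usual textbook conventions:

\[ \mathcal{L}_{\text{Dirac}} = \bar\psi\,(i\gamma^\mu\partial_\mu - m)\,\psi, \qquad \bar\psi \equiv \psi^\dagger\gamma^0, \qquad \gamma^5 \equiv i\gamma^0\gamma^1\gamma^2\gamma^3, \qquad \psi_{L,R} = \tfrac12\,(1 \mp \gamma^5)\,\psi, \]

\[ \bar\psi\,i\gamma^\mu\partial_\mu\psi = \bar\psi_L\,i\gamma^\mu\partial_\mu\psi_L + \bar\psi_R\,i\gamma^\mu\partial_\mu\psi_R, \qquad \bar\psi\,\psi = \bar\psi_L\,\psi_R + \bar\psi_R\,\psi_L, \]

so the kinetic term never mixes the two chiralities, and the mass term does nothing but mix them.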
The Higgs boson also plays the role of a charged object with respect to the SU2 of weak interactions. In fact, it's a doublet. Just like the electron and neutrino are doublets, the Higgs boson is also a doublet. I didn't emphasize that, and I'm not going to emphasize it, I've sort of skirted around it. The Higgs boson also transforms under SU2. And roughly speaking, the way to make this into a legitimate interaction is to multiply by the appropriate Higgs boson field here. The Higgs boson, if you like, carries a weak charge, which is either the same or opposite depending on which component we're talking about, as the weak charge of, let's say, the electron, if we're talking about electrons here. The right-handed particle has no charge with respect to the weak interactions. So what we have to do is put a Higgs boson here and a Higgs dagger here. This means the emission of a Higgs boson and the emission of an anti-Higgs boson, if you like, with the Higgs boson carrying off the charge of the left-handed particle when it became the right-handed particle. So we introduce into the Lagrangian something which takes a left-handed fermion, emits a right-handed fermion, but also allows a Higgs boson to be emitted in such a way that the weak charge of the electron or the quark or whatever it is escapes in the form of the Higgs boson, the Higgs doublet. And that would be the end of the story, except for the spontaneous symmetry breaking which has to do with the Higgs field getting an expectation value. The Higgs field, well, I think we called it phi last time, didn't we? I think we called it phi. Let's just continue to call it phi, and we emit a phi particle here. That would be the end of the story, except that for whatever reasons nature has chosen the energetics of the Higgs field, this phi here, to have one of these Mexican hat potential energies. Why did it do that? Why did nature do that? We don't know. Okay, but we're very lucky that it did. Otherwise, chemistry would go to hell, and things would be very, very unpleasant. But with one of these upside-down Mexican hat potentials, that means that phi is equal to a number, just the magnitude of the Higgs field here, which we called F, plus a fluctuation, a fluctuation away from that value. And that fluctuation is what we actually call the Higgs field. So let's go back now. What happens to these terms in the Lagrangian? They break up into two pieces, not these two pieces, but these two pieces. One of them is, oh, I left out one thing. What did I leave out of this expression? A numerical number. G. G. The Yuccao coupling, let's call it G sub y. And there's a different G sub y for each fermion. These two should be the same. G sub y is just a number. A pure number carries no dimensions. They're called Yuccao couplings. And where they came from, I don't know, somebody wiser than me put them into the Lagrangian. OK, so here they are. All right, now let's use the fact that phi is equal to a non-fluctuating part, which we called F for the non-fluctuating part. We called it F, don't know. And for the fluctuating part, we called it H, the Higgs boson. That's not my notation. That's a standard notation. So this becomes GYF times psi dagger left psi right, plus psi dagger right psi left, plus GY Higgs field times the same thing. Same thing. And now we have a prediction for what the number F, where did that come from? The number F we obtained, that same number F, went into the mass of the Z and W bosons. 
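A minimal sketch of the term just built, in the usual shorthand, suppressing the SU(2) doublet indices and the conventional numerical factors the lecture is not keeping track of:

\[ \mathcal{L}_Y = -\,g_Y\left(\bar\psi_L\,\varphi\,\psi_R + \bar\psi_R\,\varphi^\dagger\,\psi_L\right) \;\xrightarrow{\;\varphi \,=\, f + h\;}\; -\,g_Y f\left(\bar\psi_L\psi_R + \bar\psi_R\psi_L\right) \;-\; g_Y\,h\left(\bar\psi_L\psi_R + \bar\psi_R\psi_L\right), \]

so the first piece is an ordinary mass term with \( m = g_Y f \), and the second is a direct fermion-Higgs coupling of exactly the same strength. With the value of f quoted shortly, about 200 GeV, the electron's Yukawa coupling comes out around \( 0.5\ \text{MeV}/200\ \text{GeV} \approx 2.5\times 10^{-6} \) in this normalization, while the top quark's is of order one.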
The shift of the Higgs field also determined the mass of the Z and W bosons. Once you've measured the mass of the Z and W bosons, and this was done a long, long time ago, and we were measured, the mass is measured, about of order 100 GeV, 90 GeV, something like that. So that told us what F was. That told us what the number F was. F is a number of about, really, 200 GeV for F. So that's a number, and it has units of mass, incidentally. F has units of mass, 200 GeV. Ah, once we know that from Z and W physics, then from the masses of the fermions, we can read off what these Yukawa couplings are. Once we know, for example, the electron Yukawa coupling is some very, very small number. Why? Because the electron is much lighter than the Z and W. The Yukawa coupling for the top quark, that's large. Why? Because the top quark is so heavy. So the masses of the quarks wind up being these Yukawa couplings times F, and so far we've really gotten out no more than we put in. We've gotten out no more than we put in. We got F from the masses of, well, a little bit more, but basically the masses of the Z, W system, we got F, and then from the masses of the fermions, we got Yukawa couplings. So if that's all there was, we would have put in as much as we got out. But now we also have something else. We have GY Higgs times things like, let's just call them Psi Dagger Psi. These are terms in Lagrangian which take a fermion, this could be an electron, it could be a whatever it happens to be, which take a fermion, absorb it, and re-emit it plus a Higgs boson. So these are then processes where let's say electron, or it could be muon, or it could be tau, or it could be some quark. It emits a Higgs boson. Or it could be a Higgs boson coming along and decaying into an electron and a positron, same diagram. So once the Higgs boson materializes as a particle in the laboratory, we should know what the various decay rates are for it to decay to the different kinds of electron, positron, muon, anti-muon, or whatever. We'll know all of the interactions between the Higgs boson and the fermions and be able to predict all the decay rates, all the Higgs boson decay rates. Obviously it'll prefer to decay to heavier particles as long as there's enough energy because these Yukawa couplings are larger. Yukawa couplings are the coupling constant for the Higgs to break up into its constituents, into its decay products. So at the moment, I mean, Higgsology is a pure theory in the sense that it's predicted by the theory. There seems to be no other way to make it mathematically consistent, but neither the Higgs nor the direct couplings to these particles and so forth have been detected. That will happen. Well, I hope it will happen. And so there's a lot of predictive power. The theory also already has an enormous amount of predictive power, predictive power in the interactions of the Z's and the W's, which are experimentally detected. The interaction of Z's and the W's not only with respect to the fermions but with respect to each other. Z's, no. How would you actually predict the decay time? I don't mean going through all the nitty-gritty details, but I mean, sort of how would you set it up to actually make that prediction? Wow. You calculate a Feynman diagram. You calculate a Feynman diagram. You can just think of it as elementary quantum mechanics. This gives you a term in the Hamiltonian, which takes the Higgs boson to a pair of particles, and it just gives you the transition amplitude, which you square. 
And once you've squared it, you integrate it or sum it over all the final states as an integral to do, and that gives you the decay rate. It's a standard calculation that is basically a Feynman diagram. If you like, this gives you the amplitude. You square the amplitude to get the decay rate. Squaring the amplitude, you can just do by putting this right on top of itself like that. Here, the amplitude is squared, and that becomes a Feynman diagram. You calculate the Feynman diagram by Feynman's rules, and one of the things that comes out of it is the decay rate. But this is more than we're going to do here. That's what we have to do. So it's a very, very definite calculation within standard quantum field theory. The kind of calculation we've been doing since the 1930s. It's quite well known. The only things that you have to put in, which are new are the values of these coupling constants, masses of particles, and so forth. Okay, so that was the masses of fermions, the masses of the Z and W bosons, all of them being proportional to this constant F. Why don't we talk about the Higgs itself? Let's talk about the dynamics of the Higgs boson. The dynamics is really all contained in this Mexican hat potential. The minimum of the Mexican hat potential tells you how much the field is shifted, and it tells you what F is. You can make up different potentials that have this shape. There's a favorite one which actually has some important theoretical significance, and it goes as follows. Let's go to the origin over here. At the origin, the Higgs potential is a sort of upside-down parabola. Let's forget what it does far from the origin. Let's just concentrate on near the origin. An upside-down parabola means something like minus, let's call it mu squared phi squared. And it's traditional to put a 2 in here. It's just traditional. Minus because it's an upside-down parabola. Remember if it was plus mu squared phi squared, what would you call this term? You would call this term the mass of the Higgs boson. You would call mu the mass of the Higgs boson. Or you would really call mu squared the square of the mass of the Higgs boson. Making it negative is a little bit weird. It's as if there was an imaginary mass for the Higgs boson. But don't worry, there's nothing wrong with this. It's perfectly okay to have a potential which is upside-down. The only thing which is unusual about it is putting the field at the origin is not a stable position. It tends to roll off and roll down the hill. Whereas if the mass term was positive, it would oscillate about the top, about the central position here. So the field does not oscillate about the top because of this minus phi squared. Now, if all you had was minus phi squared, it would just go down and down and down. And the field would just run away to infinity. There would be no ground state to the system. There would be no vacuum. The vacuum is the state of lowest energy. There would be no state of lowest energy. So we've got to put in something to keep it from running away to infinity. We better put in something to turn it up. The simplest thing to turn it up is plus lambda, where lambda is just a number, pure number, times phi to the fourth. Why do I want to put in phi? Why not phi cubed, phi to the fifth? Why not odd powers? Because we want to keep it symmetric on both sides. The most important aspect of this potential was to keep it nice and symmetric. So phi to the fourth is symmetric, phi cubed is not. This is the next thing you can put in. 
You can put in all sorts of things to higher powers here. They're not so important. They're not so important. The most important one is phi to the fourth and expanding the potential in a power series. You can put in more, but this is the most important thing. In fact, it's also traditional to put a four over here. You'll see why in a moment. A two over here and a four over here. What is this number? This number is called a quartic coupling constant. Quartic because it has phi to the fourth. It is dimensionless. It's a pure number. We think we know something about that pure number. Not from direct experimental evidence, but from a lot of indirect evidence. And what we know about that number from somewhat indirect properties of weak interactions is that it's small, but not absurdly small. Probably about 1%, probably about 1.01 or something like that. Compared to the numbers we're going to be thinking about later, this is a number which is of order of magnitude one. It's not 10 to the minus 20. It's not 10 to the minus 15 or anything like that. So just think of it as a number for the moment of order one. It's also dimensionless. Mu is not dimensionless. It has units of a mass. Okay, now let's try to find out what f is. f is the position at the minimum here. All we have to do is minimize this with respect to phi. We cooked it up so that it would have a minimum. A rapidly falling quadratic term and then taken over by, incidentally, near the origin. This is smaller than this, right? Phi to the fourth is smaller than phi squared near the origin. So this has not got much effect near the origin. It picks up steam and eventually becomes much bigger than phi squared when you go to large phi. So that turns it back up. That's the logic. Okay, let's just find the minimum of phi, sorry, the minimum of v with respect to phi, and that will tell us how much the field gets shifted. All right, that's easy. We just differentiate with respect to phi. The derivative of phi squared is 2 phi. That eats up this 2 down here. That's why I put the 2 down there. Minus mu squared phi. This is the derivative of v with respect to phi plus lambda, which is a number of order of magnitude one, phi cubed. I also put the 4 down here so it would kill the 4 when I differentiated with respect to phi to the fourth. And we set that equal to zero. All right, now we can divide by phi. We divide by phi. Make this phi squared. And what do we get? We get phi squared is equal to mu squared over lambda. Again, lambda is an ordinary number. Don't think of it as very big or very small. This is the value of phi at the minimum. It's also called f. It's also the shift of the field, f. And so f squared becomes mu squared over lambda. Again, I emphasize lambda is not the interesting number here. It's the common garden variety number like one or a tenth or a hundredth or something like that. It's not very big, that's for sure. Okay. f is controlled by mu. f is also approximately the mass of the z and the w boson. It's also the thing which comes into the mass of the fermions. Where are they? All of the particles have masses proportional to f. Let's take the square root of this. So it's mu or f or mu, which is the controlling factor which controls all of these masses. All of them, all the masses of all the particles that we know about. That raises an interesting question. Why are all of the particles that we know about controlled by the same phenomenon, namely f? Why aren't there particles which have masses without spontaneous symmetry breaking? 
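The minimization just carried out, in one line, with the same potential and notation:

\[ V(\varphi) = -\tfrac{\mu^2}{2}\,\varphi^2 + \tfrac{\lambda}{4}\,\varphi^4, \qquad \frac{dV}{d\varphi} = -\mu^2\varphi + \lambda\varphi^3 = 0 \;\;\Rightarrow\;\; f^2 = \varphi_{\min}^2 = \frac{\mu^2}{\lambda}, \qquad f = \frac{\mu}{\sqrt{\lambda}} . \]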
We can certainly make up a theory of such particles. For example, just make up a particle whose left-handed and right-handed components both have weak charge. Then we have no reason to put this phi in there. We wouldn't put the phi in there. And you could put any mass that you like in front of it. And it's perfectly symmetric. So if there was a particle which was, whose left and right-handed components both interacted with the weak interactions the same way, they could have a mass which was not proportional to f. All kinds of other particles that don't have any weak interactions at all if there was such particles could have masses which are not controlled by f. The general thinking which we'll come to, which we'll come to, and I'll explain why this is the general thinking, is that the natural mass scale for particle physics is enormous. Much, much higher than the masses of any of the ordinary elementary particles. It's thought that only this one number f is the unusual number in physics, the very, very small mass scale, whose origins are at present unknown. But what is thought, and I'll explain as we, this quarter, next quarter, various points, the idea that really the only mass parameter, the only small mass parameter in nature, is this f here, and those particles which by necessity have to have masses proportional to f. Why? Because it's in the nature of their mathematics that they would have to be massless if there was no shift of f. Only those particles are light enough to have been detected at present. That's the belief, that's a very widespread belief, that there's a good reason why all the particles have masses proportional to f, because any that don't probably have masses which are much, much larger. But still we have to explain the smallness of f. Small compared to what? Well, that's an interesting question, small compared with what? What is the natural mass scale for elementary particle physics? So that's where I think we want to go next. That's one possible mass, yes? Let's see, I think it, we'll take a break in two minutes, but yes, one possible mass which we'll talk about in a few minutes after the break is the Planck mass. Another is the so-called unification mass, which I'll show you what it is. These masses are typically 16 or 17 orders of magnitude, well let me see, 15 orders of magnitude heavier than the mass than f. 15 orders of magnitude heavier than the Z-O-W boson, for example. So what we're left with then is a puzzle, why is this f quantity so small? Once we can understand why the f quantity is so small, it drags down the masses of all the other particles, it drags them down simply because they cannot have mass other than proportional to f. They can naturally be massless in a world without spontaneous symmetry breaking. So, yeah. Would you say that again about the 15 orders of magnitude? Yeah, the natural, alright, f is a number of about 100 GeV. Alright, that sounds like a lot of, it sounds like a big mass. The other mass scales which come into particle physics, which we know about, the other possibly much more fundamental mass scales, are heavier than that. 15 orders of magnitude heavier than that. And what particles correspond to that? None that we've ever discovered, obviously. You're right, any good we predicted. Yes, of course, particles we predicted, but why did we predict them 15 orders of magnitude higher? In other words, when I say that there are other mass scales that we know about, we know about them from something. 
So the question is where do we know about them, if we've never seen them, if we've never seen particles of those masses, and we'll talk about that after the break. I gathered that in these very, very low temperature condensed matter boson, kind of things, there's all sorts of large particles in the particle. But aside from that, is there any common in solid state physics or any common example that gives light to that? Yes, yes. All right, so you asked about boson condensates. So a boson condensation or more precisely superfluidity, which is similar to a boson condensation, it's a condensation of helium atoms. Helium atoms, of course, not elementary particles. But on some scale, they are elementary particles. If you're interested in scales of order molecular size, a helium nucleus is an elementary particle, and it can be described by a field. That field naturally, an ordinary vacuum, has a value zero, just corresponding to no helium atoms present. Spontaneous symmetry can break and can happen, which can shift the value of the helium field. When that happens, in the same way that f is a shift of the Higgs field, when that happens, it's, roughly speaking, called a boson condensation. When the boson condensation happens, it spontaneously breaks a certain symmetry, and it creates superfluidity. But if the helium atom were charged, if the helium, not the helium nucleus, the helium atom, if the helium atom were charged, it would behave like the Higgs boson and make a mass for the photon. Now, does that ever happen? Anything like it ever happened? Yes, it does. In a superconductor, electrons can pair up and bind together in the superconductor, very weakly bound pairs, and those pairs of electrons have charged too. They're not helium atoms, they're not neutral here in the atoms, they're called cooper pairs, and they're bound configurations of electron, not electron and positron, but electron and electron. Are they separated? The cooper pairs aren't they separated? They're separated, they're not very far separated in space. They're separated at the opposite ends of the Fermi surface. No, think of them as particles. First approximation, think of them as particles. They're charged particles, and then they condense. They condense, which means the field for this pair of particles gets shifted. You can think of that field as really just being the product of the electron field with itself, describing two electrons, or you can think of it as a new object which carries charge two. When that field gets shifted, it creates a condensate of cooper pairs. It creates a kind of superfluidity for the cooper pairs, but it's now a superfluid of charged particles. It becomes the Higgs phenomena. It's, of course, a superconductor, and the superconductor in essentially every possible way behaves like the Higgs phenomenon. Photon gets mass, now it's a tiny, tiny mass because the scales are so terribly different. But the Compton wavelength of the photon becomes finite, and it behaves very, very much like this spontaneously shifted Higgs field. In fact, the phenomena was first discovered in the context of superconductors, and was rediscovered entirely independently by Higgs and other people. Okay, let's take a little break. When you begin at the primordial description of elementary particles, you begin with a Lagrangian, and in the Lagrangian there are a collection of parameters, such as masses of particles, charges of particles, and they're sort of the input. 
On the other hand, they're very rarely, if ever, the actual measured quantities. The measured quantities that you actually measure are usually the product of all kinds of interactions and things that take place between the high frequency modes of a system, and low frequency modes of the system, which renormalize or change the values of the parameters that you associate with experimental observations. An example is electric charge. So let's talk about that a little bit, and how charge gets renormalized. I'm not going to do any calculations. Calculations, you'll have to go and get a book and try to follow it. But the phenomena, the phenomena is a fairly simple one that's well known. It occurs in condensed metaphysics. In fact, it just occurs in classical electricity and magnetism associated with electric, what's it called? Electric, thank you, with the dielectric materials and conductors. The actual observable charge, normally observed charge of an object is defined by the Coulomb law, but the Coulomb law at large separation. The asymptotic force between two charges when they're far apart and that force, as you well know, is the product of the charges. Let's call it E1, E2. Let's take them to be equal charges, and in particular, let's suppose they have charge of the electron, just to be simple, then it's just E squared divided by R squared or in the potential energy, just E squared divided by R. So you take two charges far apart and you measure the force between them. At any given distance, it will not be precisely given by this formula, but if you asymptotically separate them, then you'll find asymptotically with respect to R, the force between them or the potential energy between them, goes like E squared over R. Attractive for opposite charges, repulsive for light charges. And that defines, if you like, the charge of a particle. Coulomb law at spatially very large separations. Now, if in a material you do this, you take charges in a material, or even just the air, and you separate them and see what the force law is, you'll find that it's not exactly given by the formula in which you go to the particle physics table, the particle physics table of particle properties, read off the electric charge of the system. Typically, you won't read off the electric charge of an ordinary chunk of material, but you might happen to know how many electrons or how many excess protons or electrons are in it. So you might know it's charge in units of the charge of the electron. You take them apart and you discover, no, the force is always a little bit less than that, always a little bit less than that, unless you're an absolute vacuum, an absolute vacuum, completely empty space. By definition, the electric charge is what you measure by measuring the force at a large distance. Now, what is it that happens in a material? Let's begin with a conductor. With an electric conductor, what happens to a charge that's put into the conductor? Well, the charge creates some electric field. For example, suppose it's a plus charge, it creates an outward electric field, the electric field creates a current, the current starts charges moving, and until the charge is completely discharged, in other words, until the current, until the charge has gone out to the boundary of the conductor, that the flow of charge doesn't stop. 
So the charge flows until the charged object, let's draw the charged object over here, there it is, charge flows away from it, collects on the boundary, but let's put the boundary off in Alpha Centauri someplace, and of course, an opposite charge, so if this is plus, an opposite charge cloud is found around it, it completely discharges it, so that outside there's no electric field at all, and no more reason for the charges to flow. Now, these charges are, of course, bound to the positive charge, they're bound to it, and the whole thing forms a neutral structure. If you take another negative charge over here, exactly the opposite thing will happen to it, it will create some plus charge around it, same deal, the charges will flow until it's completely neutralized, and now you take these two charges, you thought you had two charges to begin with, you evaluate or measure the force between them at large distances, and you find out that there is none. So, the charge that you put in to begin with is not the experimental charge at the end of the day, the experimental charge at the end of the day, in this case, is just plain zero. Now, what happens as you start moving the two charges, let's suppose they're really small, and you start moving them closer and closer together? Is this screening? Supposing these charges are really small, in particular, they're smaller than the clouds around them, they form. The clouds around them are controlled by various properties of the metal and so forth, and the electrons moving in the metal, and we imagine making these charges even smaller. We bring them really close together. We bring them so close together that the screening cloud can't get between them. Typically, there's a scale for the screening cloud. It's controlled by the dynamics of the electrons in the system, and the screening cloud, the particles making up the screening cloud, typically have a certain distance between them, and that distance controls the size of the screening cloud. When you bring these two particles closer than the screening cloud, they're no longer screened. They actually feel each other's raw, basic charge, and you see that there's a Coulomb force between them, which is the usual E squared over R. So, you could say then that the charge of the electron, or the charge of the particle in this case, is really a function of distance. In the case of the conductor, E of R is zero at very large R. It's whatever the ordinary charge of the particle is at very small R, and the force law, instead of being just playing E squared over R, has E squared of R divided by R. This is just a definition of E squared of R, if you like. I don't put anything new into it by calling it E squared of R, but it does have a nice sound to it. It's called the running charge. Running is a function of distance between them. Okay, now what happens if you have something less extreme than a conductor? For example, a dielectric. A dielectric, if you like, has electrons which are bound to the atoms. Electrons are bound to the atoms, but they're free to shift a little bit. If you like, you can almost think of the electrons as being on springs, where the one end of the spring, the electron is attached to, the other end of the spring, the ion, the positively charged ion is attached. And ordinarily, the electron moves symmetrically, vibrates symmetrically. Of course, this is not really a good picture of an atom, but it's good enough for our purposes. 
The electron oscillates and vibrates back and forth symmetrically about the atom. So on the average, the dipole moment of the atom is zero. But now you put the atom into an electric field, and what happens is the electron cloud shifts a little bit, and it shifts a little bit, leaving a dipole left over. The current doesn't flow, charge just shifts a little bit because they're bound to the atoms. Okay, so now what happens if you put a charge into a dielectric? Well, a very similar phenomenon happens. Let's suppose we put a minus charge in over here. All the ions are heavy, they don't move very much. But the electrons which are bound to them by the springs move out away from the central atom. What does that leave? That leaves a little bit of plus charge here. But it doesn't completely screen the charge. It screens some fraction of the charge, so that if you look very, very far away from the charge particle, you'll find that the field, the electric field, is diminished by a certain fraction, but it doesn't go to zero very far away. The fraction is controlled by the dielectric constant. The dielectric constant will tell you what the screening is. So again, you'll find that a large separation, this plus charge and this minus charge, don't satisfy the expected Coulomb law that you might have expected from the actual, let's call it the bare charge of these particles, but it's diminished by this effect. It's diminished, the Coulomb law is correct, but with a diminished value of the charge. So in this case, you would also have an E squared of R or an E of R. Again, if you bring the charges very close together so that they're within the screening cloud, then you feel the full strength of the bare charge. So again, E squared is a function of R. In this case, it wouldn't go to zero in infinity, it would just go to some constant fraction of the original charge. Again, the concept of a running charge. All right, now, exactly this phenomenon, it really is exactly this phenomenon, happens in quantum field theory. The origin of it is the pairs, let's take quantum electrodynamics. In quantum electrodynamics, the vacuum is full of electron-positron pairs. There are Feynman diagrams where electrons, positrons are created by photons, even just electron-positron loops with no photons, and they inhabit the vacuum, if you like. The other way you can think about it is, in terms of Dirac's negative energy C. The vacuum is filled with this negative energy C. Or you can imagine that there are electron-positron pairs in the vacuum. What happens when an electric field comes along? The electric field will shift the positrons one way and shift the electrons the other way so that every little virtual pair, electron-positron pair will become polarized. Polarized in exactly the same way that atoms become polarized by an electric field. The plus charge is shifting one way, the minus charge is shifting the other way. That's what happens in the vacuum when you put an electric field on. Okay, that's step number one. Step number two, of course, you put in what you thought was an original bare charge of magnitude E. The charge is in the vacuum rearrange a little bit. The vacuum is basically a dielectric. It's basically a dielectric. The charges shift a little bit. And in such a way that negative charges are screened by a little bit of positive charge from these virtual pairs, and positive charges are screened by a little bit of negative charge. 
The upshot is the charge that's felt at a large distance is not the same as the numerical value of the charge that you put into the Lagrangian, which is usually called the bare electric charge. But again, you start moving the electrons closer and closer to each other, and at some point they get closer than the size of the screening cloud. But there's one difference in this case. In this case, there is no set distance that the electron positrons tend to be separated by. The right statement is that the separation between the electron-positron pairs depends on their energy. The higher the energy of the virtual pair, the closer they will be. So there will be electron-positron pairs which form relatively big, let's call them atoms, they're not atoms, but let's call them atomic-neutral systems. There will be ones of higher virtual energy which are smaller, ones of yet higher virtual energy which are even smaller, ones which are higher energy which are even smaller. So what happens when you put the charge in and bring another charge closer and closer and closer? At first, you will bring them close enough that they'll be within the distance of the low-energy virtual pairs. The low-energy virtual pairs will no longer be able to screen the charge, so the charge will look a little bit smaller, sorry, a little bit bigger than it would when they were far apart. Why? When they were far apart, they were completely screened, not completely screened, but they were screened by all possible virtual pairs. You bring them a little bit closer, the low-energy virtual pairs which had a large separation between them, you bring the charges closer than that, and the low-energy virtual pairs are no longer important in screening, but the higher-energy virtual pairs are still doing some screening. You bring them a little closer, the higher-energy pairs fall out of the equation because they're too far apart, and yet higher-energy ones are still there. The effect is that that e squared of r is a function which keeps changing and changing and changing with r. It doesn't stop changing at some particular small distance. The closer and closer you bring the electron together, the larger the charge will appear to be. So if you were to plot, by virtue of the fact that there are pairs of every possible energy in the vacuum. So if you were to plot the electric charge as a function of radial separation, let's plot it as a function of inverse radial separation. Inverse radial separation is like energy. The closer things are together, the higher their energy. So let's plot it as a function of inverse separation, one over r, which is also related to energy or momentum or whatever. Then the electric charge that you'll see will increase. It'll increase very slowly, in fact. It's a very slow logarithmic increase. It doesn't increase rapidly. That's why it's very hard to see in an experiment. It's a logarithmic gradual increase of the size of an electric charge as you study smaller and smaller distance phenomenon. It's also true, for example, electric charge comes into calculations of scattering of electrons. If you scatter electrons at a low energy, they only get within a certain distance of each other. The higher the energy you scatter them, the closer those electrons can get to each other by virtue of the uncertainty principle. When you do the calculation, you have to put in the running charge. If you scatter electrons at an enormously high energy, they behave as if they had a larger charge than at smaller distances. 
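The lecture describes this running only qualitatively. For reference, one standard way to write it down: define the running charge through the measured force, and quote the one-loop leading-log QED result for a single charged fermion species (this formula is not derived in the lecture).

```latex
% Running charge defined through the measured force (Gaussian units, as in the
% lecture), plus the standard one-loop leading-log QED formula for a single
% charged fermion species; alpha = e^2/(hbar c) is the dimensionless version.
F(r) = \frac{e^{2}(r)}{r^{2}},
\qquad
\alpha(Q) \;\simeq\; \frac{\alpha(\mu)}{\,1 - \dfrac{\alpha(\mu)}{3\pi}\,\ln\dfrac{Q^{2}}{\mu^{2}}\,},
\qquad
\alpha \equiv \frac{e^{2}}{\hbar c} \approx \frac{1}{137}\ \text{at large distances}.
```

Here Q is of order one over r, so the denominator shrinks logarithmically as the distance shrinks; that is the slow, logarithmic growth of the charge described above.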
So that brings us to the idea of running charge. What is the fundamental charge? What is the most fundamental value of the charge? Is it the one at long distances? No, that's the one that's controlled by all of this complicated screening taking place. As you go to smaller and smaller distances, you sort of peel away the screening cloud. As you go to smaller and smaller distances, you peel away more and more of the screening cloud. So it's really at the smallest distances where the basic parameter that you put into the Lagrangian, the bare charge, manifests itself. Yeah? How small do the distances have to be before you start to see a significant effect? Well, you start seeing a measurable effect, a high precision measurable effect, at about the Compton wavelength of the electron. So what's that? 10 to the minus 11th centimeters? Compton wavelength of the electron. The electron is half an MeV, so that's about 10 to the minus 11th centimeters. 10 to the minus 11th centimeters. You start to see the running of the charge. But this thing continues to... It doesn't look like that. Well, it would keep going on if the physics didn't change somehow at very small distances. We don't know what the physics is at very, very small distances. I think you just said something about the Planck scale, that the smallest distances are down there, and there we don't know what happens actually. At very small distances. Right, that's right. We don't know what happens at very small distances, so we can only extrapolate a certain... Yeah? So for gluons, it's kind of the opposite thing. There's some anti-screening. It does. Well, that's a more complicated phenomenon. A more complicated phenomenon that's closely connected with the fact that gluons interact with gluons. And because gluons interact with gluons in a very nonlinear kind of way, the effect is actually the opposite. This is... You go to infinitely small distances, and you measure at infinitely small distances the interaction, and that's the bare electric charge. But you'll never get to an infinitely small distance. Does it asymptotically approach something? No, it doesn't approach anything asymptotically. It just keeps growing and growing and growing. Yeah, the logarithm function is unbounded. Right. Right. So it keeps growing until something happens that was not taken into account by these calculations. What could it be? A smallest distance in the world, a Planck distance, something else happens; from experimental physics, we don't know what terminates this. We certainly believe it's terminated at the Planck length. This physics certainly doesn't make sense at the Planck length. But there has to be a particular length when you say that's the bare electric charge. So let's call it the Planck length. That's the second one. Okay. That's the third one. But you see, you can compensate. You can change that length. And if you also at the same time change what you called the bare charge, it would compensate. Right. So at any given level, you could terminate it, call that charge the bare charge at that length, or change the length scale, and if you appropriately and judiciously change the value of the bare charge, what comes out at the low energy end won't know about it. Right. So that's the phenomenon of renormalization. Changing things in such a way as to keep the physics, the experimentally known physics, fixed. Is that like zooming in and attenuating? We're zooming in and looking at closer and closer distances. Now, of course, we don't do that experimentally.
So we do that in our mind, and we ask how we have to change the physics at smaller and smaller distances to keep it fixed at experimentally accessible distances. How do we have to change the electric charge as we go to smaller and smaller distances in order to keep the physics fixed at any given wavelength? So what this is telling us is that as we go to a smaller and smaller, more and more refined description, a more and more fine-grained description, the value of the electric charge that we have to put into the bare description has to grow. Now, as I said, that will change at some point where the physics of quantum electrodynamics gives way to something else. So that's the electric charge. Yeah. Does the same phenomenon happen with gravity? There are things that are somewhat similar. We'll talk about it. It's quite a different phenomenon, but we'll come to it, perhaps today. I don't know if we'll come to it today, maybe. OK, now let's talk about the strong interactions. With the strong interactions, things are more complicated. Let me remind you of what we learned about the force law between quarks. The force law between quarks does not behave like e squared over r. What happens is some new phenomenon where the flux lines that come out of quarks, and these are not the electromagnetic flux lines, these are the flux lines of QCD, they're called chromoelectric flux lines, attract each other. They attract each other in a way that does not happen in electrodynamics. It's the self-interaction between gluons. And that has the effect of turning the Coulomb field into long tube-like structures, which are sort of squeezed by the vacuum into these long tube-like structures. The effect of this is that as you separate quarks, the energy grows linearly with the separation between them. You create this Turkish-taffy gooey stuff in between them, and the energy of it grows simply as the length of the gooey thing between them. That means that the force law, the energy, grows linearly. What is the force? All right, so let's write that down. Instead of the energy going like e squared over r, it just grows linearly with r. So this is the wrong formula. We should have put here perhaps e squared, or something like e squared, times r. Let's just call it something times r. Well, we can either say we had completely the wrong theory, or we can say that e squared is growing like r squared. We can say that in the picture from electrodynamics charges are screened, and here simply the opposite happens: e squared grows with separation. Something happens in this nonlinear effect here which is quite the opposite, and so e squared grows with separation. Incidentally, at very small distances, again, distances which are so small, smaller than the size of these flux tubes, it reverts back to e squared over r. So it reverts back to the good old Coulomb law. But another way of saying it is that instead of e squared getting larger as a function of inverse distance, the way it did in electrodynamics, it actually gets smaller: it gets smaller with inverse distance, larger with distance. So therefore, if I plot the quantum chromodynamics coupling constant, it will fall as a function of inverse r, like that. What about the one for the weak interactions? Well, the weak interactions are sort of in between. They have similarities with both. They're in between. They go like that. Well, maybe not as fast.
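The flux-tube picture just described is often summarized by a Cornell-type potential between a heavy quark and antiquark. This is a standard parametrization rather than anything derived in the lecture, and the string tension value quoted is approximate.

```latex
% Coulomb-like at short distance, linearly rising ("gooey flux tube") at
% long distance; the 4/3 is the usual color factor, sigma is the string
% tension (approximate value).
V(r) \;\approx\; -\frac{4}{3}\,\frac{\alpha_{s}\,\hbar c}{r} \;+\; \sigma\, r,
\qquad
\sigma \sim 1~\mathrm{GeV/fm} \approx (0.44~\mathrm{GeV})^{2}\ \text{in natural units}.
```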
It's an interesting and quite remarkable fact that if you actually take the standard model and you work out how the charges, first of all, you need to know what the various coupling constants are in the laboratory. Those are the ones that control physics at whatever scales you've done your experiment, 100 GeV or whatever, and you put in their values. And then you run them using just the standard model, the polarization of the vacuum and all the properties from the standard model. These three numbers more or less come together at a common point. They more or less come together at a common point. Now, in a more re-fault, well, in the supersymmetric version of the theory, which we will talk about next time a little bit, next quarter, they come together with rather high precision, about 1%. They come together about 1%. That looks like there's something happening at some energy which is unifying these forces. Their coupling constants seem to be the same at some high energy here if you just extrapolate these curves. The curves are calculable. Of course, you have to assume that no new physics comes in. That no new physics comes in, or if new physics comes in, you have to take it into account. New physics might shift these curves a little bit. In the supersymmetric version of the theory, these three numbers come together very nicely to about 1%, at an energy of about 10 to the 16th GeV. Why is that energy so high? It's because these curves vary logarithmically. They vary slowly. When things vary slowly, they vary slowly. It takes a long range of scales before they come together. They come together at about 10 to the 16th GeV, 10 to the 14th times larger than the mass of the W. It's some huge energy. It looks from these curves that some kind of unification is taking place there. Is this good enough to... It's a relatively high precision agreement in the supersymmetric theory and a non-super symmetric theory, it's not such high precision agreement, but it's tantalizing and it's interesting that this is the tendency for things. All right, if this is correct, you might guess that the fundamental length scale of particle physics, of the standard model, is up at this 10 to the 15th GeV. It's where the input seems to be simplest. Simplest in that the three coupling constants seem to be the same. We'll talk about what that means mathematically. It could have a very deep mathematical meaning, but for the moment it's just an observation about numbers, and we're going to work together at a particular energy. Now, that energy also happens to be the place if you were to plot the strength of gravitational forces. Gravitational forces are very weak at large distances, but I'm going to show you why in a moment. They get stronger and stronger and stronger at small distances, not only because of the usual 1 over r squared in the force, but above and beyond that the forces of gravitation become stronger and stronger. And if you will, to lay on top of that, let's see, the forces of gravitation would become strong at small distances. So that would mean gravitational forces, which are incredibly weak at long distances, would do something like that. They don't quite all cross at the same place, but they cross within a factor, remember these are logarithmic curves, they cross within a couple of orders of magnitude of each other. 
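As a rough illustration of the near-crossing just described, here is a minimal numerical sketch of the one-loop running, using rounded values of the couplings at the Z mass and the supersymmetric (MSSM) one-loop coefficients. The inputs and the one-loop treatment are approximations, so this only shows the trend, not a precision statement.

```python
import math

# One-loop running of the inverse couplings:
#   1/alpha_i(Q) = 1/alpha_i(MZ) - (b_i / (2*pi)) * ln(Q / MZ)
# Rounded inputs at the Z mass; GUT-normalized U(1); MSSM one-loop coefficients.
MZ = 91.2  # GeV
inv_alpha_mz = {"U(1)": 59.0, "SU(2)": 29.6, "SU(3)": 8.5}
b_mssm = {"U(1)": 33.0 / 5.0, "SU(2)": 1.0, "SU(3)": -3.0}

def inv_alpha(q_gev, group):
    """Inverse coupling of one gauge group at scale q_gev (GeV), one loop only."""
    return inv_alpha_mz[group] - b_mssm[group] / (2.0 * math.pi) * math.log(q_gev / MZ)

for exponent in (2, 6, 10, 14, 16):
    q = 10.0 ** exponent
    values = {g: round(inv_alpha(q, g), 1) for g in inv_alpha_mz}
    print(f"Q = 1e{exponent} GeV: {values}")
# Near Q ~ 1e16 GeV the three inverse couplings land within a few percent of
# one another, which is the near-unification described in the lecture.
```

Repeating the exercise with the non-supersymmetric Standard Model coefficients gives curves that come close but miss by noticeably more, which is the contrast between the two cases mentioned above.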
The gravitational, the weak, the electromagnetic, and the strong interaction all cross within a couple of decades of energy of each other, or a couple of decades of length scale, somewhere around 10 to the 16th, 17th, 18th GeV. So what all this means, especially the gravitational side of it, we don't know, but if there is a unification scale, and if this really does correspond to some unity between the three forces, it looks like it happens at a scale which is not too different from the Planck scale. The Planck scale is the scale where gravity would become comparable to the other forces. So let's talk a little bit about gravitational forces, and why I say that gravity gets stronger at small distances. Now we're going to talk about what happens to gravitational forces as you vary the distance scale. And the only point here is that when you understand how gravity depends on length scale, you'll find out that gravity falls in the same ballpark with the other forces at scales which are very small. That gravity is really no weaker than these other forces. Okay, so to understand this, we first have to erase the blackboard a little bit. Okay. We're used to thinking that the force law of gravity is one over r squared, a Coulomb-type law. Would you be surprised if I told you it's really one over r cubed at very small distances? Maybe you would, maybe you wouldn't. Okay, so let's go through why it's one over r cubed at very small distances. The reason is we write down, what do we write down? We write force. Let's work with force this time. We worked with energy before. It doesn't matter what you work with. Force is equal to m times m, the two particles' masses, divided by r squared, times Newton's constant. That's the force between two particles. All right, now this is a non-relativistic formula. This is the formula for particles at rest. When particles have a high energy, or when objects have a high energy, it's energy that gravitates, not mass, not rest mass; a moving object will gravitate more than an object at rest. If you take a box full of high energy particles, those high energy particles will gravitate more than they would if they were brought to rest within the box. So the right formula, or a more correct formula, is to use E equals mc squared, or better yet, m equals E over c squared, and put in here the energies of the particles, E times E, over c to the fourth, I believe, right? Over c to the fourth. So this is a formula which is more accurate when the particles are relativistic. Okay, now let's go to quantum mechanics, and imagine bringing two particles very close together, much closer together than their Compton wavelengths. Because of the uncertainty principle, if we've brought them close together and we know that they're close together, that means that their momentum is extremely uncertain. To say that their momentum is extremely uncertain, you can say that their momentum is very large. Roughly speaking, if you brought them to within a distance r of each other, the momentum of them must be at least of order, what is it? h bar over r, right? Okay? Now, what about the energy of these particles? The energy of these particles, you might think, well, the energy of a particle at rest is mc squared, and if it has a little bit of velocity, you add one half mv squared. But what happens when the momentum gets really large, when r gets very small? When r gets very small, these particles become highly relativistic. Why?
Their momentum gets much larger than any pre-assigned number when they get very, very close together. What's the energy of a particle, a relativistic particle, with momentum p? Relativistic means it's moving with almost the speed of light. Anybody remember? What is it? No, that's completely non-relativistic. No, no, no. The energy is p times the speed of light. Right. p times the speed of light. Okay? So we can now say p is h bar over r. And now we can plug this into the force between particles. Okay, so what is it? F is equal to h bar squared... let's see, the particles come close together. Let's assume they both have about the same energy, which is a good assumption. So that's h bar squared c squared over r squared. That's just E1 times E2, this factor here. Then there's a c to the fourth in the denominator. From here, there's a G and another r squared. So r to the fourth. Did I get it right? Right. The c squared is cancelled; well, they don't cancel completely, there's a c squared left. All right, when I said the force law was one over r cubed, I meant the energy, not the force. The force is one over r to the fourth. So when you bring particles really close together, squeeze them by whatever means, either an accelerator accelerates them to very high energy so they can get very, very close together, or you put them in a box and squeeze them. Well, there's no box that's going to hold particles that are that relativistic. But somehow or another, when the particles get that close together, the force law is quite different than Coulomb. It's one over r to the fourth. Okay, now let's ask the following question. Yeah. It's an attractive force. It's an attractive force, right. And it is much more potent at small distances than it is at large distances. One over r to the fourth is much, much bigger than one over r squared. Is this because the relativistic speedup causes the mass to increase but not the charge to increase? Right, right, right, exactly, exactly so. Exactly so. Okay, now let's ask at what length scale is the gravitational force approximately the same order of magnitude as the electromagnetic force? Let's see, the electron's electric charge is a pure number. It happens to be fairly small, but not ridiculously small. It's a fairly small number, something like a tenth or so in natural units. So, for the first round, we could just set the electric charge of an electron equal to one for simplicity. And then we could ask at what length scale is the gravitational force between a pair of particles the same as the electromagnetic force? Let's say a pair of electrons. Does that ever happen? At ordinary scales, electric forces are vastly larger than gravitational forces. As you bring them closer together, because the gravitational forces increase like one over r to the fourth, eventually the gravitational force will cross the electrostatic force, and at smaller distances it will become even more important, even larger. So let's see where that happens. To find out where that happens, we should set this equal to e squared over r squared. For simplicity for the moment, let's just set the electric charge equal to one. It's not equal to one, it's about a tenth or so; it's smaller than one. G or e? G or e? You just erased the G. Oh, sorry, I didn't mean to erase the G. I'm going to redo that. E. Let's simplify and put a one here, which is not too wrong.
E squared is really something like a tenth. But it's not too bad, it's approximately right. Now we can read off from this at what scale these two kinds of forces become equal to each other. Just multiply by r to the fourth. This becomes r squared here. Now take the square root of it. Wait, did I do something wrong? You had r to the fourth, so it's r squared now. It's r squared, yeah. I think I made a mistake. I still think I made a mistake. No, but it's not right. Wait. I don't know the answer. G h bar squared over c squared r to the fourth equals one over r squared, right? So I have h bar squared G over c squared equal to r squared. It was an r to the fourth here, right? And then there was a one over r squared here. So I got an r squared here, right? Would you try taking the square root on both sides? What am I going to get? I'm going to get r equals h bar over c times the square root of G, which is the square root of h bar squared G over c squared. Is that dimensionally correct? It must be. No, no, electric charge is dimensionless. And oh, wait a second, wait a second, wait a second. Yeah, hold on. I think I lost some dimensions. This is energy. All right, let's do it in terms of energy. In terms of energy it goes like one over r cubed. And I want to set that equal to e squared over r. But this is not dimensionally consistent, because e squared is dimensionless. So I need some h bars and c's in there. Hm? Well, I want the charge to be dimensionless, because I happen to know its value in dimensionless units. It's about a tenth. It's close to one. So we have to put in some factors to correct it. This has to have units of energy, right? So we have one over r here, and this is not correct yet. It's not correct, it's not dimensionally consistent. If I put an h bar here, then this would be momentum. If I put a c there, it is now dimensionally consistent. All right? c h bar over r has units of energy. And now this electric charge here is dimensionless. And that's the one which is about a tenth. Close to one. Okay, so I made a mistake; I thought I made a mistake. I should have... oh, I erased it. It's lost, it's gone. That's okay, you had a c squared, an h bar squared, and a G. Right. Thank you. All right. Right, it's changing. I think it's right now. Yes. Okay, now multiply by r cubed. And we get r squared up here. Now we get a c cubed down here and we remove an h bar. And I get that r is equal to the square root of G h bar over c cubed. I think I like that better. All right, what is that length? That's the Planck length. That's the Planck length. So the Planck length is the place where, because of this growth of energy with squeezing, with diminishing the wavelength, the gravitational force increases and becomes comparable to, or even bigger than, the electromagnetic force, to within factors of ten, at the Planck length. So the electromagnetic force, the weak force, the strong force and the gravitational force all become more or less comparable somewhere near the Planck scale. This is thought to be deep. This is thought to be something important. If it is as important as it sounds, then the puzzle becomes not why is the Planck length so small, or why is the Planck mass so large, or why is that ten to the nineteenth such a big number; it is why that f is so small in natural units.
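For reference, the estimate just worked through on the board, written out cleanly; order-one factors, including the actual value of e squared of about 1/137, are dropped.

```latex
% Gravitational energy of two quanta squeezed to separation r (each with
% E ~ hbar c / r), compared with the electrostatic energy with e^2 set to 1.
E_{\rm grav} \sim \frac{G\,E_{1}E_{2}}{c^{4}\,r} \sim \frac{G\hbar^{2}}{c^{2}r^{3}},
\qquad
E_{\rm em} \sim e^{2}\,\frac{\hbar c}{r},
\qquad
\frac{G\hbar^{2}}{c^{2}r^{3}} = \frac{\hbar c}{r}
\;\Longrightarrow\;
r = \sqrt{\frac{G\hbar}{c^{3}}} \approx 1.6\times10^{-33}~\mathrm{cm}.
```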
How does it happen that this f, which controls the masses and the energy scales of all the weak, electromagnetic, all the ordinary interactions, and in particular controls the mass of the Z, the W, the quarks, the electron, why is it so small? That's one number which is some fifteen orders of magnitude smaller than any fundamental mass scale, as far as we can tell, and it's just that one number which is very small. Once it's explained why that one number is so small, all the masses of all the elementary particles that we know about follow. All the others, for all we know, could be up at the Planck mass, at the natural scale. They have no reason to be light. Their masses did not have to be proportional to this symmetry breaking scale. So that's kind of where we are: puzzled about why this f is so small. Now, we'll go more deeply into this next quarter. It's not only that it's small, it's also very, very finely tuned. And we'll come back to it. So if you could somehow calculate f, then you could calculate the other masses. If you could calculate f; but any ordinary calculation of f will give a huge number. Why? Because it'll come out to be proportional to the fundamental scale unless there's some incredible conspiracy. So, yes, you can calculate it in all sorts of theories, and it'll come out huge. I'm trying to visualize the Planck length in terms of the screening distance when we get so close. In gravitational physics, it's not helpful to think about a screen. Oh, okay. Yeah, that's right. Just a few more questions in the next few minutes. What if you put in higher powers of phi? Higher powers of what? Of the Higgs field, of the phi. Phi to the sixth, et cetera. Phi to the fifth, phi to the sixth, each of those could appear. Well, there are real reasons why not. You wouldn't want to put phi to the fifth in there because that would break the symmetry between plus phi and minus phi. You could put phi to the sixth in there, but there are... Those are annihilation and creation operators, those phis. That represents four Higgs particles coming together. Absolutely. Okay, so that's another... Right. Once you know these numbers, you can predict something about the scattering of Higgs bosons. Two Higgs bosons go to two Higgs bosons. Right. So the Lagrangian is a polynomial in all the field variables then? Okay, so that's pretty restrictive. Well, it's more restrictive than you think. Beyond phi to the fourth it becomes what is called non-renormalizable, which is a bad thing to have in a theory. So... We can talk about why phi to the fourth is the natural... The Lagrangian is a polynomial in all the field variables, and there's never any exponent above four. Right. Except when there is. Have you left out any... any terms like self-interaction or...? Well, self-interactions are what go into the running of these coupling constants. For example, the self-interactions between gluons are what gave you the opposite effect from what happens in electrodynamics. Self-interaction between flux lines pulls them together into a sort of cable, and the upshot is that the running electric charge changes quite differently. The other thing I thought of was cross terms between the different kinds of interaction? Well, there are such things. They come in at higher orders, and you have to calculate them. But fortunately for us, all of the coupling constants are quite small. So the higher order cross terms are smaller than the things they perturb. What exactly did you mean when you said f is finely tuned? I need to come to it, and we will discuss it.
I'll tell you, not now, but I guess that's important. Since we've talked about renormalization, can you say something about what the renormalization group is? Not tonight, but some... applause
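A footnote on the question raised above about stopping the potential at phi to the fourth: the standard power-counting argument, stated here without derivation, is that in four spacetime dimensions the field has units of mass, so the coefficient of phi to the n has mass dimension 4 minus n; for n greater than 4 the coupling has negative mass dimension, which is what "non-renormalizable" refers to.

```latex
% Power counting in natural units (hbar = c = 1): the action is dimensionless,
% [d^4 x] = -4 and [phi] = +1, so a term g_n phi^n needs [g_n] = 4 - n.
S \supset \int d^{4}x \; g_{n}\,\phi^{n},
\qquad
[\phi] = 1, \qquad [g_{n}] = 4 - n .
```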
(March 15, 2010) Professor Leonard Susskind delivers the tenth lecture for the course New Revolutions in Particle Physics: The Standard Model.
10.5446/15087 (DOI)
Stanford University. Before we start, we're going to talk about quantum chromodynamics today. Quantum chromodynamics, and we're not going to talk about the deep mathematics of it and go into it. We have basically one lecture to do quantum chromodynamics, and it's a huge subject. It's a subject which easily takes a quarter to... But I'll stick with the highlights and tell you what the basic intuitive picture is. Quantum chromodynamics is similar to quantum electrodynamics. Quantum electrodynamics is the theory of electrons and photons. It's also the theory of the Coulomb force, the quantum mechanical theory of the Coulomb force. If applied to the problem of atoms in which you do not worry about the motion or the nature of the nucleus, the structure of the nucleus, you simply think of the nucleus as a fixed point creating an electric field, then quantum electrodynamics is also the theory of atoms. In the same sense, quantum chromodynamics is the theory of quarks and gluons and the things that quarks and gluons make. The things that quarks and gluons make are called hadrons, H, A, D, R, O, N, S, hadrons or hadrons, and they consist of things like protons and neutrons, which are called barions, which have a net quark number or a net barion number. That means an imbalance of quarks and anti-quarks, three quarks for the proton, three quarks for the neutron, and mesons, which are quark-anti-quark pairs, and all of them are full of gluons. Gluons are the electrically neutral stuff, which is the glue and the binding stuff that holds the quarks and gluons together, much the same way that the electromagnetic field or the Coulomb field is the binding agent that holds atoms together, atoms, other electrostatically bound objects. But before we go into it, I want to remind you a little bit about the mathematics of spin, because first of all, that particular mathematics will come up in another guise, but also a simply related mathematics to it will also come up in the guise of a quantity called color, or a concept called color. We're not going to do group theory in this class. We'll try to finesse it. But the basic mathematics of what we're talking about is group theory. As I said, I'm not going to use group theory, or at least not call it group theory as we use it. For spin, the theory of spin is the theory of a group. If you don't know what a group is, just ignore it. Group is the group of rotations of space. The theory of spin is also the theory of the symmetry of physics with respect to rotations of space. But we finesse that by not talking about the theory of rotations, but by talking about the theory of angular momentum. I'll just remind you very quickly that we derived everything about angular momentum from some basic commutation relations, which were of the form Lx with Ly equals I times Lz. I think there's an h bar in there, but there is an h bar in there. Those are the components of angular momentum. We worked that out last quarter, and we found some rather surprising and interesting things. Oh, of course, there are three other relations like this, which are gotten by permuting x to y, y to z, and z back to x. I won't write them down. But when we worked out the consequences of this, first of all, we broke the symmetry. When I say we broke the symmetry, I'm not talking about something, some symmetry breaking in nature. Just we broke the symmetry in our mathematical description. The symmetry being the symmetry of rotation, or the symmetry which interchanges x, y, and z, and so forth. 
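For completeness, the three commutation relations being referred to (the other two "gotten by permuting x to y, y to z, and z back to x"), with the h bar included:

```latex
% The angular momentum algebra quoted in the lecture.
[L_{x},L_{y}] = i\hbar L_{z},\qquad
[L_{y},L_{z}] = i\hbar L_{x},\qquad
[L_{z},L_{x}] = i\hbar L_{y}.
```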
We broke it in our heads, maybe not in the real physics, but just by focusing on the z component of spin, or the z component of angular momentum. We basically focused on it. We cannot measure all three components simultaneously. They don't commute with each other. Here they are. They don't commute with each other. At most, one of them commutes with itself, well, of course it commutes with itself, and no two of them commute. So at most, you can talk about one of them, or measuring one at a time, and we arbitrarily chose Lz. When we did that and used these commutation relations, what we found was, first of all, that L is quantized. And the difference between neighboring values of L was always one unit in units of h bar, number one. Number two, we focused on spin. And we described different spin possibilities. The first set of spin possibilities were the half spin objects, so the things which we call fermions. Those were the objects whose spin was centered about zero in such a way that the first level, the first z component of spin was a half, and the first one in the opposite direction was minus a half. Minus three halves, minus three halves, and so forth. Those were fermions, but right now the main point was that the mathematics of spin, or angular momentum, gave rise to some multiplets. There was first the spin half-multiplet. That was the multiplet with a spin minus a half and a spin plus a half, like the spin of the electron. That was associated with half-spin particles and fermions. Bosons, which for our purposes now just think integer spin, the first one was just spin zero. There are particles with spin zero. They have no spin. That's all. They just have no spin at all. Then there are particles with spin one. The photons have spin one particle. There are other spin one particles. There are typically ion atoms of spin one, and they have three states, plus one, minus one, in units of H bar, and zero. Then the next interesting family has spin three halves. Spin three halves again goes on the half-spin column here, and that has one half, minus one half, three halves, minus three halves. There are particles with spin three halves. There are certainly nuclei with spin three halves, and atoms with spin three halves. So these things also exist. Likewise, what comes after spin one? Spin two. Spin two, spin zero, spin one, spin minus one, or the z-component of spin one, minus one, two, and minus two. These labels here are the z-components of the spin. Now, when you speak about an object of spin L, L is the notation that denotes the highest value of z-component of spin. So this would be, or M, you can use M. Ordinarily, it's used M. M or L, it doesn't matter. Let's use L. L is a little bit different than M. When you speak of spin L, you're speaking about an object whose maximum spin along the z-axis is L. So this would be spin a half, L equals a half, this one would be L equals one, and so forth. How many states are there? How many independent states are there for a spin, a particle with spin L? Two L plus one. Two L plus one. For example, for spin a half, twice L is one, plus one is two, two states. If L is one, twice one is two, add one more, that's three, three states. And so two L plus one is the number of states... is the number of independent spin states for a given value of L. So that's something I want you to remember. Now, another thing. If you have more than one particle, such as an electron, then you have two spins. Spins can be combined. 
Let's consider what you can make in the way of spin by combining together the spins of two electrons. Let's forget the fact that electrons can orbit around each other. Let's ignore what's usually called orbital angular momentum and just concentrate on the spin angular momentum. What kind of angular momentum can you make for two half spin particles? Well, the maximum value of the z-component of spin is what? One. In units of h-bar. Let's forget h-bar now. When I say unit, the maximum spin I mean in units of h-bar. The maximum spin you can have along the z-axis is one. That's when both spins are pointing upward along the z-axis. That axis is the z-axis. So you can have angular momentum one. You can have angular momentum minus one. And you can also have angular momentum or z-component of angular momentum zero. But in fact, you can have two states where the z-component of angular momentum is zero. This one and that one. This one and that one. So there are two states, two independent states whose z-component of angular momentum is zero. How many states are there for a spin one object? Three. Two l plus one. Okay? So there must be a spin one combination because we can make total spin up equal to one. That's the maximum. So there must be somewhere there a some combination must correspond to spin one. What does a spin one have? It has spin one along the axis, spin minus one along the axis, and spin zero. But there are two states with spin zero, with z-component of spin equal to zero. They can't both be part of the spin one multiplet. What could the extra one be? It's got to be either spin zero, spin one, spin two, spin three, spin four, spin a half, spin minus, spin a half, spin three halves. So here's what we found. Let's write them down. It's both spins up, or both z-components of spin. I'm going to stop saying z-components. Both spins up, that's got z-component equal to one. Both spins down, that's got z-component equal to minus one. And then there seem to be two with z-component equal to zero. But if I look at spin one, all it has is plus, minus, and zero. There are only three of them. So this cannot be a multiplet. These four states cannot all correspond to spin one. What else could be there besides spin one? Spin zero. It can't be spin two, because spin two has to have five states. The only other thing is spin zero. So there must be quantum states here, three quantum states corresponding to spin one, and one, when you put the spins together and you ask what kind of angular momentum can you make, you can make spin one and spin zero. The only combination, the only question is what combination, what quantum mechanical combination of these two correspond to spin zero, and what combination correspond to spin one, to the missing spin one combination. And I will tell you right now, the answer is that the symmetric combination with a plus sign, incidentally, this state over here is symmetric between interchange. You can think of it if you like, as one spin over here, one spin over here. Gel labeled is the one over here and the one over here. The spin states, if they're both up, is symmetric. That means the spin state is the same if you interchange the two spins. Obviously, they're both up. This one is also symmetric. Up, down by itself is not symmetric. If you interchange spin, if you interchange the two of them, this one goes to this one, and this one goes to this one. They swap. 
But there's one linear combination, one quantum mechanical combination, which is symmetric under the interchange of the two. Plus minus, plus minus plus. This is the symmetric state. What should you do to it to make it have total probability equal to one? Divide it by the square root of two. These are the three states which together form zero, one, and minus one. If you rotate the axes, they transform into each other. This is a spin one up. This is a spin one down. Where's the ones with spin one in the other directions? It's made up out of this. So this is the spin one multiplied. And what about the other one? We're missing a state. There were four altogether. Here we have three linear combinations. What's the other linear combination? The other orthogonal linear combination. With a minus one. So there's another one here with up, down, minus, down, up. And the square root of two. This one, this is, these three correspond to L equals one. This one corresponds to L equals zero. So these are the ways that you can put two spin halves together to make the two possible combinations. This combination L equals zero. What is it like with respect to its angular momentum? It's a thing without angular momentum. As far as angular momentum goes, it's like a nothing. I mean, it may have some energy. It may have other things, charge, whatever. But as far as angular momentum goes, it's like empty space. It's got none. Okay. You might ask, can a up spin and a down spin come together and annihilate and disappear? Well, there's all kinds of reasons why electrons can't disappear. But just for pure angular momentum reasons. If we're just worried about conservation of angular momentum and nothing else, then which of these combinations can just disappear if, as I said, if we're worried about nothing but angular momentum conservation? Can this disappear? No, because it has two units of angular momentum up. How about down? How about this one? Yes or no? Let's take it forward. How many think yes? How many think no? Okay, the no's win because this one is not without angular momentum. It looks like it has no z component of angular momentum and it doesn't. But the angular momentum about the x and y axis of this is not equal to zero. It does not have L equals zero. It has L equals one. This is the missing thing that allows you to rotate the angular momentum into other directions. What about this one? Yeah, that one has angular momentum zero. And if the only consideration was angular momentum conservation, it could decay. It could just disappear. Now one way of thinking about it is you could say, well, yeah, that's good enough. That's close enough. This one here can disappear. Okay. So if you measure the angular momentum of the symmetric one to the effect of z, you get one or minus one. Yeah, wait. In which case? In the symmetric case. No, you get, along the z axis, you would get zero in either case. You know, along the z axis. But if you measure the angular momentum along some other axis, you would get one or minus one. All right, so that's the theory of angular momentum. Now, why am I bringing that up now? Or the theory of spin? In particular, spin a half. Spin a half on how you build out of it other spins. Oh, incidentally, let's try to build spin, how you build spin, three halves. Let's ask what happens if you put together three spins? Well, if they're all pointing upward in the same direction, and that would be the maximum, what would you get? Three halves. How many states are there with spin three halves? 
Now, incidentally, how many states are there all together for three spins, each with spin a half? Eight. Two times two times two. So there are eight states all together. How many states are there for a spin three halves object? Four. So how many are left over? Four. What can those other four be? Could they be another spin three halves object? No, why not? There's nothing left with three halves along the axis. Okay, so let's go through it again. If you have a spin three halves multiplet, there will be one state which has z component of spin three halves and one with minus three halves. We've already used those up. When we said there were four states which formed the spin three halves multiplet, we used them up. All that's left now is spins whose maximum value is plus or minus a half. Okay, so what can be there? What in addition can be there? Only spin one half. Anything else? No, they can't be spin zero, because making three spin a half can never make spin zero. All it can make is spin three halves and spin one half. But we seem to have four states left over. The implication is there are two distinct ways to make spin one half. So there are combinations which correspond to the spin three halves. The easiest ones are all three spins up or three spins down. And then there are more complicated ones. And then there are two distinct ways to make spin one half, which altogether adds up to four states. All right, so when you take three spins, you get spin three halves and you get spin one half twice. Two distinct ways of doing it. Let's keep that in mind. Okay, now why am I talking about spin now? What I really want to talk about is a concept called isospin, isotopic spin. Isotopic spin was the first internal symmetry group, or the first internal quantum number, the first analog of a spin which occurred in particle physics that did not have to do with the rotation of space, but had to do with the rotation of an imaginary space or an internal space. Or if you like, it just gave rise to another set of quantum mechanical variables which were basically isomorphic, completely similar to spin. But it didn't have to do with spin. It didn't have to do with the rotation of space. You could imagine if you liked that it had to do with the rotation of some internal directions. Internal directions, imaginary directions of space, mathematical directions. So let me tell you where it comes from. I've already actually described it, although we haven't said it. We talked about quarks. And we talked about there's a whole variety of different kinds of quarks. But most of the quarks were rather heavy. They were rather heavy and had an appreciable mass in units of GeV or hundreds of MeV. The mass scale for hadron physics is somewhere around hundreds of MeV. The mass of a typical meson is a couple of hundred MeV. The binding energy of these particles is MeV, sorry, hundreds of MeV. Not MeV, hundreds of MeV. What object in nature has binding energies of order a few MeV? Nuclei. Nuclei, nuclei. Things that hold protons and neutrons together. But what holds a neutron together as three quarks has a binding energy of a few hundred MeV. All right, so the natural energy scale is a few hundred MeV. And there were only two quarks which were very light, or whose masses were very light by comparison with that. And they were the up quark and the down quark. The other quarks are heavy, and to make particles out of them there's a cost in energy.
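For reference, the three-spin counting from a moment ago, in one line of standard angular momentum shorthand (not written out in the lecture):

\[
\tfrac{1}{2}\otimes\tfrac{1}{2}\otimes\tfrac{1}{2}
=\tfrac{3}{2}\oplus\tfrac{1}{2}\oplus\tfrac{1}{2},
\qquad
2\times 2\times 2 = 4 + 2 + 2 = 8 .
\]

The spin three halves multiplet uses up four of the eight states, and the remaining four appear as two distinct spin one half doublets.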
And so for the lightest objects, the most stable, typically heavier objects will decay to lighter ones, the light objects of nuclear physics, the lightest objects and the ones which are stable, are the ones made of up and down quarks. Now up and down quarks have almost the same mass. Of course the down quark is about twice as heavy as the up quark, but they both have very small mass by comparison with the hundreds of MEVs of nuclear physics. So in some first approximation you can say that the up quark and the down quark have no mass. They're more important, their masses are close to being equal. That's a symmetry. What does that mean? That means if you took all up quarks and replaced them by all down quarks, everything would be pretty similar. Now of course the up quark is slightly more massive than the down... No, the down quark is slightly more massive than the up quark. So typically things made of down quarks will be a little more heavy than the things made of the corresponding up quarks, but the difference is small. For example, two up quarks and a down quark make a proton. Two down quarks and an up quark make a neutron. The mass difference between a neutron and a proton is very small. So that's an example of the almost symmetry between up quarks and down quarks. So in first approximation you just forget the difference. Up quarks and down quarks are symmetric with respect to each other. In the same way, or in the mathematically the same way, that a spin up and a spin down are symmetric. In the case of spin it really has to do with the symmetry of space, rotating the axes. In the case of up quarks and down quarks, it's just a mathematical manufactured space that you can imagine where you take an up quark to a down quark by flipping some direction, some imaginary direction in your head. But all you're doing is interchanging up quarks and down quarks. Thinking of up quarks and down quarks as mathematically the same or mathematically isomorphic to up spins and down spins, you come to the concept of isotopic spin. Isotopic spin is the analog of spin, but not up and down in the sense of the z-axis and flipping spin, but up and down in the sense of up quarks and down quarks. Because there's nothing really up or down about it. It's just the interchange of two labels, up quarks and down quarks. And so when we invent the concept of isotopic spin, and for all mathematical purposes, the replacement of up quarks and down quarks is very analogous to the replacement of an up spin by a down spin. Okay, so for spin we have two states, up and down, and that defines spin. For quarks we have up and down, and that defines isospin. Incidentally, what does the iso come from? Isotope. We're going to see in a moment that isotopic spin interchanges, when you interchange up quarks and down quarks, you interchange proton and neutron. Of course, if you interchange proton and neutron in a nucleus, you'll wind up making an isotope of something. So it comes from the word isotope, isotopic spin. But what it is, in nuclear physics in the old days, it was just the replacement of the interchange of a proton by a neutron, and thinking of the proton and neutron as a spin multiplet in a mathematical spin space. It was traced eventually just to the fact that there were two quarks, up quarks and down quarks. That was the origin of it. Yeah. Did isospin predate quarks? Oh yeah. That's to do the Heisenberg. Yeah. Well, something has to provide the answer to Charles. Yeah. Okay. 
So isotopic spin is not a precise symmetry of nature. It's not a precise symmetry of nature in that, first of all, the up quark and down quark don't have exactly the same mass. But even if they did, there would still be a distinction, a physical distinction, which would make them different, and that is their electric charge. Electric forces within the nucleus, except for a big nucleus, when a nucleus gets big, electric forces become strong. But for small nuclei, and especially for protons and neutrons and hadrons, electromagnetic forces are negligible by comparison with the other forces holding protons and neutrons together. So, an approximation. Ignore the mass difference between protons and neutrons, or ignore the mass difference between up quarks and down quarks, and ignore the fact that they have electric charge altogether and concentrate on the other forces of nature, in particular the strong interaction forces, which we'll come to. And then there is a precise symmetry relating up quarks to down quarks, and that symmetry is very much like ordinary spin symmetry. It is analogous to spin symmetry. Okay, let's talk about what you can make. Supposing, well, one quark by itself, we will come to understand, is not a physical object that we can examine in the laboratory. The simplest object that we can examine in the laboratory is three quarks, and that's a proton or a neutron. That's like having three spins. What can you make out of three spins? What you can make out of three spins is either spin three halves or spin one half, and you can make the spin one half in two distinct ways. But let's forget the two ways. Let's just say we can make spin three halves and spin one half. In the same way, taking three quarks, we can make an object of isotopic spin one half, or an object of isotopic spin three halves. Let's first concentrate on the object of isotopic spin one half. An object of isotopic spin one half, how many states should it have? Just by pure mathematical analogy with ordinary spin. Two, a spin-a-half object has two states. An isospin, isospin is the simplified word. An isospin state of one half also would have two states. What are those two states? Proton and neutron. So let's write down what the proton and neutron are in quark language. Now, let's say that's a proton. Let's imagine that we've labeled the quarks. By label the quarks, let's imagine they're really located at three distinct spots inside the proton. This, of course, is not really true, but they move around. But let's just simplify the story and say there are three distinct quarks, and we'll label them with three distinct labels. Here's a down quark, and here's two up quarks. All right, the down quark might be the quark at position one. All right, so let's call it at position one. The up quark might be at position two, and the other quark might be at position three. Is there anything wrong with this state? Quarks are fermions, right? Quarks are fermions. Because quarks are fermions, their states should be anti-symmetric. Now, that suggests that maybe the right combination for the u-quarks is to anti-symmetrize it, and that would be correct. That's not an important issue for us right now, but I'm just being very precise, that a down quark and two up quarks in an anti-symmetric state is a proton. An anti-symmetric state of two spins makes spin zero. So this is an isospin zero object times an isospin a half object that has isospin a half. But never mind, this is a bit of mathematics. It's a little too fancy. 
We don't need to worry about it. This is the proton. It's charged plus one, two-thirds, plus two-thirds, is four-thirds, and minus a third makes charge one. So that's the proton, and it is the combination of three quarks which makes an isospin a half state. Same as three spins making a spin a half state. So the proton is also a member of an isospin a half object. And the other one is gotten by simply interchanging ups and downs. And that's up, down, down. Let me not belabor this point about the symmetry here. And that's the neutron. Proton and the neutron in this language are symmetric with respect to each other, and they form an isotopic spin a half doublet. In the same sense that you can put three spins together to make a spin a half. And there are only two such states. So isotopic spin invades the theory of protons and neutrons, and in this language just gives us another, just like the quark has isotopic spin a half, so does the neutron and proton. Okay. There's another object which is very similar to proton and neutron in many ways. It's also made of three quarks, but it's the combination. Oh, incidentally, what is the spin of the proton and neutron? Actual spin. One half. Okay. So you've taken three quarks with the same spin. Sorry, three quarks with spin and three quarks with isot spin. And you've made a spin a half and an isospin a half. Okay. Now, there's another object which has three quarks in it, which has spin three halves and isospin three halves. If you like, the three ordinary spins are lined up in it in the same direction. And the three isospins are lined up. So what kind of thing are they? Well, if the three isospins are lined up, that must mean that there are objects with three up quarks. Three up quarks, three down quarks. Also, the three spins are aligned, just the ordinary spins. The three spins are also aligned to form spin three halves, or down or whatever. Three quarks up, three quarks, three up quarks, or three down quarks. This is not a neutron. This is not a neutron or a proton. This is a new object, which is a little bit more massive than a proton and neutron. It has a name. It's called the delta. It has isospin three halves and it has spin three halves. So it's sometimes called the delta three halves. It has both isospin and spin three halves. Let's forget its spin. Its spins are just aligned. If we line them up all along the z-axis, then they're all aligned. Let's forget that. What about this? Can this be all there is to an isospin three-half state? How many states does an isospin three-halves object have? Four, right? So there must be two more. And there are two more. There are two more delta three-halves states, two more of them, and the two more are U, U, D, and U, D, D. Four states, four objects, all of which are related by symmetries, they all have very close to the same mass. The only thing which distinguishes their mass is the little small difference between masses of up and down quarks, and they're all very similar to each other. I'm telling you this for a reason. This played an important historical role that I'm going to tell you about in a moment. These four states are called the delta three-halves, and what's their charges? Let's go through their charges. What's the charge of this one? Excuse me, what's the difference between U, U, D, and D, U, U? All right, let's work out the charges. Pardon? Let's work. What's the difference between that and the first half? Ah, the spinny. Spinny. Spinny, that's spin three-halves, that's spin one-half. Oh, okay. 
The isospin is... The Z component, no, it's part of the multiplet, which is an isospin three-halves object. Okay, right. Isospin three-halves. Right, so it's both spin and isospin three-halves. But by looking at these two, you couldn't tell that. You have to know a bit more about the nature of the way they're combined. But... No, they have a component of isospin, just as the analog of the Z component of spin would be one-half, but the full isotopic spin would be three-halves. Yeah. Remember, every object which has spin three-halves comes in four states. Four states. Here they are: three-halves, one-half, minus one-half, minus three-halves. Same is true for isospin. Okay, but let's concentrate on spin for a moment. These two states, they... That's these two. And the ones of maximum and minimum spin, that's these two. So it's got to be four of them. We know that a thing with spin three-halves has four states. The Z component of isospin can be one-half and minus one-half. But by analogy with ordinary spin, there will have to be four of them. Okay, but let's come to the important points. Let's first of all label their charges. This one has three charge two-thirds objects. It has charge two. This one has charge minus one. And UUD has charge plus one, and charge, what is this one, zero. So the charges are different. Well, these two do have the same charges. Sorry, did I get this right? UUD, no. Yeah, yeah, yeah. These do happen to have the same charges, proton and neutron. But these are different. Okay? All four of these objects have, within a small discrepancy, the same mass. Just as the proton and neutron have the same mass, these have the same mass, but that mass is not the same as the proton and neutron. Proton and neutron, the mass is about 940 in units of MeV, millions of electron volts. This is the mass of the proton and neutron, with the neutron being slightly heavier, not by much. Okay? The mass of the delta is about 1200 MeV. The delta can decay. It can decay into a proton or neutron and a pion. Let's just see how that would work. Let's take the easy ones. The easy ones have all three quarks the same, up, up, up. So here we have three up quarks moving along. And how can that decay? What is it going to decay into? It's going to decay into a proton or neutron and a meson. A meson is a quark-anti-quark pair. Let me just draw for you how this happens. This would be up, up, up. Two of the up quarks go off. This up quark also goes off, but quarks can't separate like that. A quark and an anti-quark appear in between, like that. The quark and the anti-quark in between could either be a down quark or an up quark. So one possibility is that this is another up quark. And then this would be an anti-up quark and this would be an up quark. An up quark and an anti-up quark have zero charge. This would be the decay of the delta-3 halves. What do we have here? This one, no, this one can't happen. This one actually can't decay that way. Not possible. There's just not enough energy for it to happen. We have to put a down quark here. This would be an anti-down quark and an up quark. So what would this thing be? Up, up, and down. That would be a proton. The proton is lighter than the delta. So there's enough leftover energy. How much energy is left over? Let's see. 260 MeV about, right? Something like that. 300 MeV roughly. 300 MeV. What's the mass of a pion? About 140. About 140 MeV. So there's enough energy for this to happen and still leave over some kinetic energy for the pion to fly away.
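For reference, the charge and mass bookkeeping for the two multiplets, written out with quark charges of plus two thirds (up) and minus one third (down), using the round numbers quoted in the lecture:

\[
Q(p)=Q(uud)=\tfrac{2}{3}+\tfrac{2}{3}-\tfrac{1}{3}=+1,\qquad
Q(n)=Q(udd)=\tfrac{2}{3}-\tfrac{1}{3}-\tfrac{1}{3}=0,
\]
\[
\Delta^{++}=uuu\;(Q=+2),\quad
\Delta^{+}=uud\;(Q=+1),\quad
\Delta^{0}=udd\;(Q=0),\quad
\Delta^{-}=ddd\;(Q=-1),
\]
\[
m_\Delta-m_N \approx 1200\ \mathrm{MeV}-940\ \mathrm{MeV}\approx 260\ \mathrm{MeV}\;>\;m_\pi\approx 140\ \mathrm{MeV},
\]

so the decay to a nucleon plus a pion is allowed, with some energy left over as kinetic energy for the pion.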
What kind of pion is this? With an up and a down bar. What's its charge? Two thirds and one third. So this is a pi plus and a proton. Total charge two, this is the object with charge two. So these delta objects are not stable. They all can decay into a proton or neutron and a pion. And they're very short-lived. Their lifetime, oh, we could work out the numeric, I'll tell you what their lifetime is, order of magnitude. Order of magnitude, it's the time that light would take to cross the proton. So you can figure out that the proton is about 10 to the minus 13 centimeters. What's the speed of light? 10 to the 10th centimeters per second. So 10 to the minus 23rd seconds or something. They don't last long. They do last long enough to be identifiable as distinct objects when they're produced. These deltas, these deltas in collisions, particle collisions, deltas are produced and they're real objects, but they're very, very short-lived. Excuse me. As a matter of rotation, does it make any difference which order you put these? No, not for purposes here. No. One question about fermion. For fermions, you have the, the first time you have this down quark, you have the two up quarks, so they're the same. Okay, that's exactly what we have to come to now. There's something wrong here. Something's rotten in the state of Denmark. What is it? We have three up quarks whose spins are all in the same direction. We could put those spins, we could, if we like, choose those spins to be all up and make a delta-3 halves with its spin, ordinary spin up with all three in the same direction, and they'd all be three up quarks. They're not allowed to put two fermions in the same state. If, if you could fiddle around with a spin, maybe you could do something with a spin, but all the three spins are the same. Why are they the same? Because it has spin three halves. Three quarks with spin three halves have the same spin. They also have the same isospin, namely they're all up quarks. So we seem to have found an object which consists of three identical up quarks. That's a violation of the principle that you cannot put two fermions into the same state. This was the clue that led eventually to what is called quantum chromodynamics. It was realized that quarks have to have another property. They must have another property so that the three quarks in the delta-3 halves can be different from each other. The fermion statistics, the fermionic property of them requires that they be different. So there must be a label, another label that was hidden from view for some reason which, which is there, but not apparent in experiments. That label is called color. Color is again highly arbitrary term. It has nothing whatever to do with ordinary color. And it was just a label to, to, to a name, a name. Different, in different places, the three colors of quarks are different. In some places they're red, white, and blue, mostly in the southern parts of the United States. Other places are red, green, and blue, red, green, and blue being the primary colors of light that you see with your eyes. I will use red, green, and blue. I haven't heard red, white, and blue being used for a long time, not for the last nine years or so. But they're just labels. They're just labels. And in no sense are they physically different one from another in any sort of interesting measurable way. The fact that there are three distinct ones and that they are, that they are not the same is important. They have exactly the same mass. They have exactly the same properties. 
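One standard way to write what was just said, in a notation the lecture does not use: the three quarks in the delta are put into a combination that is completely antisymmetric in the three color labels,

\[
\lvert\Delta^{++}\rangle \;\propto\; \sum_{i,j,k}\epsilon_{ijk}\,\lvert u_i\,u_j\,u_k\rangle,
\qquad i,j,k\in\{r,g,b\},
\]

so even with all three spins up and all three flavors the same, the overall state changes sign when any two quarks are exchanged, which is what Fermi statistics requires.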
So let's now write down what we know about the labeling of quarks, all the various quarks we know about. There are up quarks, down quarks. That's the lightest one. Then there are charmed quarks, strange quarks. And after charmed comes top quarks and bottom quarks. They're not listed in order of mass. The up is the lightest, the down is next. The charm is much heavier than the strange by a factor of about eight or something like that. And the top is vastly heavier than the bottom, but I've just put them in. I've been perverse and put up on top of down. I don't know why. These are the six distinct types of quarks. But now for each type of quark, there is another label. So quarks are labeled by two labels. And the two labels are red, green, and blue. As I said, I don't want anybody to think they're really red, green, and blue. But that's the label. And where did that come from? How did we know it was there? Well, it's one of these things where physicists simply followed their noses. They simply said, look, there's something wrong with this quark model. You can't have three quarks all in the same state. They must differ by something else. Now they could differ by position, but it was known that for one reason or another, that it didn't have to do with position and momentum. That was known to be irrelevant for dynamical reasons, for energetic reasons. So that was not the issue. There were three quarks all in the same state. They must have had something else. In fact, the situation was actually quite similar to what happened in atomic physics. Spin was discovered by Uhlenbeck and Goudsmit because of atomic spectroscopy. Helium had two electrons in the same state. No good. Violates the Pauli principle. Well, actually the timing here, I think, may be a little bit confused, but it could have worked this way. They could have said, well, Pauli tells us that you can't put two things in the same state. Maybe it did work that way. I can't remember now. I wasn't around. I don't remember. But one of the reasons that Goudsmit and Uhlenbeck might have, since I don't know the history in detail, might have invented spin, was because in the helium atom, there were two electrons apparently in the same orbital state. So you've got to attach to the electron another quantum number, another label. And that worked. That worked just fine. It worked a second time. History repeated itself, and various people... Nambu. Nambu was the one who realized this. Yoichiro Nambu. Nambu, the Japanese American physicist Nambu, realized that this called for another quantum number, and he said every quark has to have another degree of freedom. I don't think he called it color. I suspect that was Gell-Mann, but I'm not sure. But a good deal later. And then what he said is, look, all that's going on here is that you put three quarks together, one red, one green, and one blue, and now you're in business. No more violation of the Pauli exclusion principle. The same trick incidentally works for protons and neutrons, that you can understand the proton and neutron also as a red, what do I call these again, red, green, and blue. So that's how the primary colors work. A question. So is there a physical property that can be measured or...? No, no. In every possible way they are the same, except we know there are three distinct ones. So if you had one, you could tell the other... Why? I mean, how can you say that? How can you say that maybe the, you know, the principle doesn't apply? Really? Huh? If you can't detect the difference, right? You can...
If you have one, you can tell the other one is different. What do you measure? That's a little bit complicated. In collisions, you can tell that if you have one, if you have two of them, you can tell that they can't be the same, and that you can tell. Couldn't you, in... Are they conserved? The color is conserved, so in reaction, you would have to define it directly. A different color or something? Yeah, the color is conserved, but it's also always zero. It's not only conserved, but it's always zero. So it's a little bit funny conservation law. But look, you could say the same thing about... The question doesn't quite tell you that you need all three of the colors for a UUD. Say it again? No. But you need... Do need something to distinguish the two U's. Yeah. But it gives you more than you need in that case. It gives you more than you need. That's why historically the delta-3 halves played an important role. It was looking at this guy here, which was unambiguous. Three quarks parallel spin, parallel isospin, something's wrong. It would have been possible to analyze the protons and neutrons, but it would have been less convincing. Delta-3 halves was interpreted in terms of quarks. That really hit you over the head. So the ultimate resolution is... We now know, really experimentally, extremely well that the three quarks, that three different kinds of colors are really there. You know, even with ordinary spin, what's the difference between spin up this way and spin that way? There's no difference. They have exactly the same property. All you have to do is turn your head over or spin this way. Unless you have another one to set a direction or something to set a direction, then you can't tell the difference between them. And the other... I mean, if you have some object which picks out a direction and you bring a spin up to it, you can tell whether the spin is along a particular axis or not. In the same way, if you have a proton, a proton being different than a neutron, and the proton and the neutron are identifiable objects, I mean, in the laboratory, in the proton, it picks out a direction in this isospin space. It's up in the isospin direction, not down. And if you have another quark, you can discover whether that quark has its isospin parallel to or antiparallel to the proton. So, yes, you can tell... not its isospin, you can tell whether it's... I think I said something wrong, but close enough. If you have one object, it can provide a kind of frame of reference for the other ones and test whether they're the same or not. But the mathematics of quantum chromodynamics simply requires that you have these three different things. I would say maybe it's a violation of the Pauli principle. You could. There's no known mathematical framework for discussing that, and there's no quantum field theory that has anything but fermions and bosons, so you'd have to invent something new. Yeah, well, I thought these... I think what is very physical involves observable and, you know, like momentary position, and you can make a case that isospin is a kind of observer. Absolutely. But what would be the appropriate observable for Pauli? Much, much more subtle. Much more subtle. Much more subtle. Let's come back to it, and I'll tell you when we get there, after we've talked a little bit about... Question, especially... It was known that quarks and fermions are the kin of the proton and the adjacent area of the proton. Is that how they were released? Mm-hmm. Yeah. I guess I had the same question in different form. 
If you have a down quark in a proton, one of the three quarks is green, and since there's only one down, there's only one of... I mean, the question is, how can you really associate one color with one of the quarks? No, you have to think of quantum superpositions of states. So you might write... Let's take the case of the proton: up, up, down. This could be green, red, blue, but we have to write all possibilities, all ways of combining them to make a real proton. So we have to use some quantum mechanics to symmetrize the wave functions and so forth, symmetrize and anti-symmetrize them appropriately. But the big advantage of this delta-3 halves is you didn't have to do anything fancy. Just three all in the same direction. No. Okay. Sure, you could say, well, maybe there was something wrong with our ideas about quantum field theory, but by now, the theory of quantum chromodynamics, which is exactly... Chromo has to do with color. Color is the important quantity in quantum chromodynamics, just like charge is the important quantity in electrodynamics. This theory is a highly accurate description of experimental data associated with the collision of hadrons, and its accuracy is way beyond what can be questioned now. So the ultimate answer is that it works. Okay, now let's talk about gluons. The missing ingredient now: we have the analog of electrons, the quarks. The fermions, they have some attributes, they have some electric charge, like electrons. They stick together, not quite the way electrons stick to a nucleus, perhaps a little more the way electrons stick to positrons, but they stick together somehow. What's missing is what sticks them together. What sticks together atoms is the electrostatic field. The electrostatic field is associated with photons. We can either think of it in field language, that every electron creates a field around it, or we can think of it in particle language, that electrons emit and absorb photons. The exchange of photons back and forth between charged particles creates the forces between them. What about quarks? What sticks them together? Particles very, very similar to the photon. At first it was a speculation, maybe such objects exist. Then a theory was built, a mathematical theory was built, with quarks and gluons, gluons being the analog of photons. They're very similar to photons. They have spin one just like the photon, which means they have the same kind of polarization states. They're massless like photons, very similar, but with one big difference that will come through. And they jump back and forth. On the one hand, a quark is a source of the field that's associated with gluons, the gluon field, in the same sense that the electron is the source of the electromagnetic field. On the other hand, not the other hand, but a similar hand, the quarks can emit and absorb gluons. What are gluons like? What quantities do gluons have? Photons are pretty, in a certain sense, uninteresting except for the fact that they have a polarization. They have a spin, they have a polarization. They have a momentum and they have a polarization, and that's about all. They don't have any charge. By itself, a photon, if it collides with another photon, there are no forces. Now that's not exactly true. There are forces between photons, but they're secondary effects. They're not electrostatic forces, because the photon has no charge. There are secondary effects which come from quantum electrodynamics and loops of complicated Feynman diagrams involving electrons.
But the primordial interaction between photons is there is none. They move freely past each other and that's why a beam of light moving in one direction will pass through a beam of light moving in the other direction with no interaction, unless you're in some material. Okay, so photons are not very interesting and in the sense that they don't interact with each other, they are interesting from the point of view of their interaction with electrons. And basically all of quantum electrodynamics is summarized by one diagram. And that diagram, which we've drawn several times, is the emission of a photon from an electron. Electrons are drawn as having a directionality, the direction along which the charge is moving. You can flip lines around and every time you see an arrow going downward, that indicates a positron. But it's all one basic vertex. That's it. And out of that you can build forces, you can build collisions, everything else. Just for the purposes of bookkeeping, think of a photon as having the same charge as an electron and a positron. In fact, a photon, if it's given a hit and given a little extra energy, can decay into an electron and positron. It's not that in any sense a photon is made of an electron and positron. That's not the point. But it happens to have the same properties as an electron and a positron. In particular, it's electric charge. It also has an angular momentum. It has a spin, a spin of one. And with an electron and positron, if you line up their spins, you can also make a spin of one. So in many ways a photon is similar to an electron and a positron. It's sometimes indicated by drawing, by thinking of the photon as a fictitious composite of an electron and a positron. And then this diagram, this diagram of the emission of a photon can be drawn just by saying the electron comes along, becomes an electron over here. The electron over here was really a positron moving backward, which turned around. Now there's no content in this other than to say, without losing any electric charge, you can emit a photon and you can see it directly. You certainly don't need this to see that an electron can emit a photon and that there's no violation of charge conservation. That's totally obvious that an electron can emit an electrically neutral thing. But nevertheless, let's just draw the emission of a photon by thinking of the photon as a composite of an electron and positron. It's not useful for electrodynamics. The analog is quite useful for quantum chromodynamics. Okay, now let's come to quarks. Electrons we're finished with. And the important thing in quantum chromodynamics is the color. So let's begin with quarks. Quarks can be red, green or blue. Let's make a column vector out of them and use the language of quantum mechanics. A quark can either be in the red state, the green state, or the blue state. Let's make a column out of it. An anti-quark, let's represent anti-quarks by red bar, green bar, and blue bar. A gluon, first of all, a gluon is an object that can be emitted by a quark. If this is a quark and a quark goes off, a gluon can be emitted. But the interesting thing is the gluon behaves in some respects like a quark and an anti-quark. It's not a quark and an anti-quark. I'll tell you precisely in what sense it behaves like a quark and an anti-quark. But let's think of the gluon as a quark and an anti-quark in the same way. So how do we label, if we label each quark by a color, this now becomes a quark and an anti-quark. A quark and an anti-quark. 
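For reference, the column-vector bookkeeping just set up, in symbols (a sketch; the component labels are just the three color states):

\[
q=\begin{pmatrix} q_r\\ q_g\\ q_b \end{pmatrix},
\qquad
\bar q=\begin{pmatrix} \bar q_{\bar r}\\ \bar q_{\bar g}\\ \bar q_{\bar b} \end{pmatrix},
\]

and a gluon, which for bookkeeping purposes behaves like a quark paired with an anti-quark, carries one color index and one anti-color index, so its labels fill out a three by three array. That array is what gets counted next.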
How many different kinds of quark and anti-quark combinations are there? Did I hear nine? Yeah, you're almost right. The logic was what I was looking for. I wanted you to say nine. There's a subtlety, there's a subtlety which will come through. There are really only eight. But let's say nine to begin with. What are those nine gluons? They're the nine combinations that you can make by taking a quark and an anti-quark. In other words, they make a matrix. If quarks are like a column, then, and we might think of, just for fun, we might think of anti-quarks as making a row, red bar, green bar, blue bar. Then gluons would fill out a matrix. In other words, to label a gluon, you would label it with two indices, two colors. What would that be? There would be red, red bar, red, green bar, red, blue bar, green, red bar, green, green bar, green, blue bar, and so on. Now, the diagonal elements. Well, the diagonal elements are fine by themselves, but the point is that the sum of the diagonal elements is not an independent quantum state. All right, so it would seem like there are nine gluons. Later on, we will talk about a particular subtlety which tells us the quantum mechanical superposition of red, red bar, plus green, green bar, plus blue, blue bar is a nothing. So the sum of the diagonal elements doesn't count as a gluon. So let's play with it as if there were nine. Then what kind of Feynman diagrams can exist? Well, a quark can become another kind of quark and emit a gluon. So let's draw the diagram for that. Let's take the case of a red quark becoming a green quark and emitting a gluon. What kind of gluon gets emitted? Red, green bar. Where's red, green bar here? Red, green bar. And if you like, you can draw that with a neat notation. The notation is just a bookkeeping device, really. And here it's useful. Here it really is useful. Just think of the red as going through, and the green also as going through, except when you flip the green line over, it becomes an anti-green, and that anti-green together with the red is the gluon. So the gluon that's emitted is as if the red just went through and the green came out paired with a green bar. That's the basic vertex of quantum chromodynamics. There's not just one of them. There are nine of them, or actually only eight, but the pattern is: a quark goes to another kind of quark, and a gluon is emitted. If a red quark goes to a red quark, then it's a red, red bar. If a red quark goes to a blue quark, then it's a red, blue bar, and so forth and so on. Okay? Now, that's, as I said, the basic phenomenon, or the basic primitive building block of quantum chromodynamics. You can build all sorts of, sorry, that's half, that's one of the building blocks. There is another building block that isn't there for quantum electrodynamics. As I said, photons don't interact with each other. In particular, a photon can't emit a photon and another photon. An electron can emit a photon and stay an electron. Photons don't emit photons. So photons don't interact with each other in any way except in materials. Yeah? Gluons are only emitted with a change of quark? No, they can... I mean, in this process, whether the quark changes or whether it does not change, this is the only mechanism by which one emits a gluon. That's right. This is the only way in which gluons are emitted. But there is something new that makes quantum chromodynamics, first of all, far more complicated and far more interesting than electrodynamics. Let's take a gluon moving along.
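For reference, the color counting sketched a moment ago, in standard group-theory shorthand (this is the subtlety promised for later):

\[
3\otimes\bar 3 \;=\; 8\oplus 1,
\qquad
\text{singlet}=\tfrac{1}{\sqrt{3}}\bigl(r\bar r+g\bar g+b\bar b\bigr),
\]

so of the nine naive color and anti-color combinations, the symmetric sum of the diagonal drops out, and eight independent gluons remain.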
Here's a gluon, let's see. Let's take that gluon to be a red, blue bar gluon. That's a red, blue bar gluon. Now, can this happen? I'm going to show you something that, yes, is really part of quantum chromodynamics. Maybe it's not too surprising once I draw it. This is not really a quark, but just imagine it's a fictitious quark making up the gluon. All right, the fictitious quark goes off. Well, if it's a fictitious quark, it better not really go off, but we'll see in a moment what happens to it. And the blue bar keeps going. But now, if these really were quarks, a quark and an anti-quark could form. What kind of quark and anti-quark would you like to put there? Green? Let's put green there. This would be green going this way, and this would be anti-green going this way, and this would be red. So red goes through, blue bar goes through, and green becomes anti-green going that way. Well, what do we have now? Now we have a basic vertex in which three gluons come together. Let's draw gluons as wavy lines similar to photons. We now have a vertex in which a red blue bar becomes a red green bar and a green blue bar. We don't really have to remember what combinations are possible. All we have to do is figure out which diagrams we can draw where all of the lines go through without being interrupted. You can figure out what the various couplings, or what the various possible fundamental diagrams there are connecting gluons. This is the new thing. What it means is that gluons interact with each other. Gluons exert forces on gluons in a way that would be unthinkable for photons. Photons cannot exchange photons between them. Gluons can exchange gluons between them. So now let's come to forces. The gluon behaves in a way which is similar to the photon and does something which is similar to what the photon does. It can be exchanged. Here's a diagram. It's easy to draw diagrams where quarks interact with each other. All right, let's draw a diagram in which an anti-quark interacts with a quark. Here's a blue bar anti-quark which emits a blue bar green gluon, becomes a green... and now the green, the green, sorry, let's see, this would be a green bar. This would be a blue green bar gluon. This green bar meets the blue, but let's say that blue goes right through like that. And what do we have here? We have here now an exchange of a gluon between a blue bar and a blue, making a green bar and a green. That's a kind of force between quarks. That creates a force between quarks in very much the same way that photons exchanged between electrons create forces. If you wanted a force between a blue bar and a blue bar, rather than between a blue bar and a blue, then we would make this a blue bar here. All right, so we can make all sorts of forces this way. But what about forces between gluons? All we have to do to this diagram is add an extra two lines. What did I have here originally? Blue bar. Green bar up here. Green up here. Let's put another line in here. It's not another quark now. It's really going to be representing a gluon. And this one, let's take to be red. Red goes right through. Let's put another line over here. I don't know, what did I take that one to be? I think I took that one to be blue bar. This is now a diagram which represents the exchange of a gluon between two gluons. This is really something new. This is something very, very different than electrodynamics. What does it lead to? It leads to forces between gluons.
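For reference, the color flow of the three-gluon vertex just described, written as a bookkeeping equation:

\[
(r\bar b)\;\longrightarrow\;(r\bar g)\;+\;(g\bar b),
\]

the red line and the blue bar line each run through uninterrupted, and the created green and anti-green pair is shared between the two outgoing gluons. Nothing more than the follow-the-lines rule is being used.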
Quarks can bind together because of the exchange of gluons and make hadrons. But gluons can bind together and make objects. There are objects in quantum chromodynamics which contain no quarks. There are no such bound objects, no bound states, no composite objects, in quantum electrodynamics made up just out of photons. But there are objects which are just made up out of gluons. And how do they happen? Because gluons can exchange gluons back and forth. We could just summarize this by saying there's a force due to the exchange of gluons between gluons. This would also mean, for example, that if you had two waves of gluons going past each other, they would interact with each other, they would deform each other. They wouldn't just go through each other the way two electromagnetic waves do. In fact, even if you had a single gluon wave moving along, the different parts of it would exert forces on each other and cause it to deform or do whatever it might do. So the watchword is that the dynamics of gluons is nonlinear. Yeah? Do we have to have the same number of reds, greens, and blues? You just have to make sure the lines follow through the diagram uninterrupted. Do we have to have a conservation law? Well, that's what following the lines means. Okay, well, we have to have the conservation law. Where? Did I draw this wrong? I might have drawn the... Oh, sorry, this should be green bar. This should be green bar, green... green bar. Yeah, sorry. The rule is follow the lines. When a line turns around in time like that, change it to an antiparticle. And that's basically the only rule of quantum chromodynamics: that there are interactions between quarks and gluons, and they satisfy the, let's call it the follow-the-line rule. All right? And there are interactions between gluons and gluons, and they also follow the line rule. We still have an anti-color problem with RG. Let's see where. RG bar. Oh, sorry, RG. RG bar, this should be B bar. And that should be G. No, wait, this should be B bar. Right here. Going this way. This one should be G, and this one should be G bar. Okay, let's draw it, let's do it over. Yeah, yeah. I messed it up badly enough that I should do it over. Yes, yes, yes. Okay, so I think I had blue bar over here, red if I remember. I don't remember. Red goes through, so it stays red. Blue bar goes over here, and then turns around, so this must be blue. Okay, this one was blue bar. It has to be a quark and an anti-quark. Blue bar, straight through, blue bar here, and now we have our choice what we want to put over here. So I think I put green going this way. Green, green, green bar. Green goes right through the diagram. Okay, now I think it makes sense. Every line just goes straight through. Red goes straight through that way. Blue bar goes straight through this way. Well, we can put the arrow the other way to indicate an antiparticle. So this is going to be the same arrow as the other quark and anti-quark? Yeah. Gluons always have the properties of quarks and anti-quarks. Earlier, you, by analogy with the photon, you said the gluons were massless. Yeah. In these diagrams, are all these still convincingly massless? They are massless. We're going to come to what the meaning of the mass of a quark and a gluon is. All right. Yes, they are massless in a technical and special sense. I'll tell you right now what the special sense is. We're going to quit in a minute or two. In fact, right now, I'll give you an example.
We're going to come, we're going to study this theory one more week, and we're going to talk about the confinement of quarks, and we're going to talk about the structure of hadrons and so forth. But let me just tell you in what sense a quark or a gluon does or does not have a mass, or does or does not have the mass that we ascribe to it. So let me imagine that I have an object of mass m, a small object of mass m, and the small object of mass m has attached to it a... I don't know what it is. It's whatever you want it to be, some sort of wiggly, soft piece of chewing gum or something that dangles off it. Now I want to move this object. If I move the object with a very small force, the force being so small that it doesn't deform, that the acceleration is so slow that the whole thing moves off together, what kind of mass does it have? The answer is it has the mass of whatever you put here, plus the mass of the lump, of the wad of chewing gum or whatever it happens to be. We're thinking now purely non-relativistically, just to give you an analogy. On the other hand, supposing I shake this thing with a very high frequency, and I ask, what are the properties of the motion of the core of it over here when it's been subjected to a very, very high frequency force of some sort? What kind of mass does it have over here? Just the mass of this object alone, right? The rest of it doesn't have time to adjust to the forces. It just stands still. Now maybe this sends out a wave, all right, but for the very rapid oscillation here, the response of this end here would be the response of an object of mass m. So what is the mass of it? Is it the mass of the sum of them, or is it the mass of the thing at the end? The answer is ill-defined. The answer depends on the frequency of the force that you exert on it. Masses of objects are frequency dependent. Or, well, in this sense, the mass of this object, the observed mass that it will respond with, would depend on the frequency of the motion. In the same sense, the mass of a quark is frequency dependent. If you shake a quark, it's some kind of object inside a hadron; there are three of them, and there's a bunch of mushy gluon stuff, and that's holding them together. Okay? If you were to try to move the quark by itself, but if you were trying to move the quark slowly, grab hold of that quark if you could do so and you moved it very slowly, the whole thing would drag along, and what would be the mass of it? What would be the mass you would experience? The mass of the whole thing, which could be the mass of the proton, which is 930 something or other. What about if you were to exert a very high frequency force on it? I'm not going to tell you what, well, the answer would be what we normally call the mass of the quark. That's this 5 or 10 MeV for an up or down quark. So the mass of the quark when it's subjected to a very, very high frequency, or when it's hit very hard and you try to see how it flies off, dragging this other stuff behind it, the initial impulse that it gets and the initial velocity that it goes off with will be sensitive to one value of the mass. On the other hand, if you hit it very slowly with a very gentle low frequency force, the whole thing would move off. So the concept of what is the mass of a quark is somewhat ambiguous.
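A compact way to summarize the chewing-gum analogy (a sketch; m is the small mass at the end, M is everything it drags along):

\[
m_{\mathrm{eff}}(\omega)\;\approx\;
\begin{cases}
m+M & \omega\to 0 \quad(\text{slow, gentle force: everything moves together}),\\[4pt]
m & \omega\to\infty \quad(\text{rapid shaking: the rest has no time to respond}),
\end{cases}
\]

which is the sense in which "the mass of the quark" can mean either the few MeV that a very hard, high frequency kick sees, or the full hadron mass that a slow push drags along.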
And to keep the discussion simple, let's say the masses that we usually ascribe to the quarks are these high frequency responses, analogous to the mass at the end of the wad of chewing gum here. Isn't that a restatement of delta x, delta p? Not really. In this case, this is a completely classical phenomenon, that you have a small little nut at the end of a wiggly something or other. You shake it very rapidly, you see it accelerates with one kind of acceleration. You accelerate it with a low frequency. That's a classical phenomenon. It does not have to do with the... Last lecture we described the electric charge. The charge of the electron. Yeah. With a similar... Yes. Yes. Lesson. That's exactly right. The lesson is that the parameters that we describe particles with are dependent on the frequencies and wavelengths of the interactions that they engage in. They're called running... So when we say the charge of a quark, is that a constant? Yeah. Is that... Does the same story apply as in the last lecture? It does. It does. We have to sort out exactly what we mean by the charge of a quark, but let's just put it this way. It's one-third the charge of a proton, but yes, you're right. We do have to define it carefully. And that's a whole story unto itself. Yeah. Yes. Why do you always have to have a quark and an anti-quark, or three quarks? Why not two quarks, or four quarks? That's a good question. Next? No, no, no, no. That's what we're going to talk about. That's next. Right. Yeah. This might not be the right time for this question, but this... All this stuff with the quarks here, it's so odd with the fractional charges and the quarks. So the question is, at that time, you couldn't see individual quarks. How did people figure this out? Well, it took time. Yeah. Well, there was a number of clues. There were a number of clues. By the time I came into physics, the idea of quarks was already established. I mean, by 19... I was a graduate student in 1963 when Murray Gell-Mann announced the idea of quarks. So... And I do know how he came to it, but the... There were clues. There were a lot of clues. There were a lot of clues, but there were also a lot of inconsistencies. So it was a pattern of suggestive facts together with apparent inconsistencies, such as the violation of Fermi statistics. And there was another fact, which was very peculiar. The fact that we'll come to next time had to do with the fact that quarks are never produced in the laboratory, that they're always permanently confined inside protons, neutrons, mesons. That was another fact. And it wasn't one person who put it together. It was a whole variety of people who put the whole thing together. Nambu had the right idea in the early 60s. He had the right idea. But the whole thing got put together and nailed in place, and the whole structure was put together over a period of 10 years. You're explaining why there's only eight gluons? Yeah, we will talk about that. From the start, this whole theory is ignoring electrons. This theory is ignoring electrons now, just like in quantum electrodynamics we ignore quarks. Then we have to put them together. We have to put them together into some coherent thing in which quarks and electrons and photons and all of them form one bigger structure, some of which has been done and some of which has not been. Yeah, it is a process of isolating. I mean, physics always works that way. You isolate, you divide and conquer, and then you have to put it all together. Okay, good.
For more, please visit us at stanford.edu.
(January 18, 2010) Professor Leonard Susskind discusses quantum chromodynamics, the theory of quarks, gluons, and hadrons.
10.5446/15085 (DOI)
Stanford University. I'm going to do something now I've wanted to do for years. I really wanted to do this. Okay. Ladies and gentlemen, people out in the audience viewing this from afar, we are now going to stop for a commercial break. I want you all to go out and buy my book, The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics. In that book, you will learn about the cruelty of theoretical physicists. You will not only learn about Schrodinger's cat, but you will learn about Susskind's fish and how he flushed them down the toilet. Audience, please. Yeah, yeah, come on. Right now back to our program. You may remember that when we last saw Alice and Bob, Alice was crying bitterly because the cruel Bob was forcing her to learn group theory. Alice, sad to say, is not finished. Bob is not finished with her. Tonight we will see Alice suffer more. Okay, are there any questions about group theory for the moment, now that we're back to the main storyline, any questions about group theory? We're not finished with it. We're really not finished with it. Yeah. There's two parts to your exposition on groups. There seems to be some kind of abstract mathematical object that has some kind of multiplication table. And there's also the part on how those groups act on various objects. Can we dissociate the two, or is it...? Well, they're completely related. There's the abstract notion. Let's take the case of rotation of a spin. That's a perfect case to study. There's the abstract, completely abstract notion of a rotation in space. And I don't tell you what it acts on, I just tell you that it is an operation that you can do. Physics has to respect the symmetry under the rotations of space. And so we must then say, how does the rotation of space act on the quantities which are physically relevant? If we were talking about classical physics, we might just say that the rotation of space rotates the coordinates of a particle, it might rotate the direction of a magnet, whatever it does. We know how to describe that. We describe that by... we know how to describe that. In quantum mechanics, the states of a system are always represented by a linear vector space. There's the abstract notion of a state of a system, and there's the concrete representation of it in terms of column vectors. How big are those column vectors? What's the number of entries in the column vector? Well, that depends on the system, but basically it's the number of mutually orthogonal possibilities for that system. If the system in question only consists of an electron spin, then there's only two states, two mutually orthogonal states, spin up, spin down. If it's two electrons, and again we're only interested in the spin, then there would be four states: two up, two down, one up one down, one down one up. If we're talking about a more complex object, for example the entire motion of an electron, its position as well as its momentum, then there are an infinite number of states. The number of states that it takes to describe the orbital motion, the number of mutually orthogonal states of an electron, speaking now about its position, is infinite. It could be anywhere in space, or you could choose to describe it in terms of momentum. So the column vectors describing an electron's position are infinite dimensional. An amplitude for every possible location or for every possible momentum.
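For reference, the rule being used here, stated compactly: the size of the column vector, and therefore of the matrices that act on it, is the number of mutually orthogonal states,

\[
\text{spin }\tfrac{1}{2}:\;2,\qquad
\text{two spins }\tfrac{1}{2}:\;2\times 2=4,\qquad
\text{spin }s:\;2s+1,\qquad
\text{position of a particle}:\;\infty .
\]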
So there must be representations of the rotation group, of this group of rotations, which are infinite dimensional matrices, and they are. They are. Just what are the matrices which act on the column vectors which describe the system that you're interested in? The size of those matrices will depend on the number of mutually orthogonal states. All right, so let's come back. Spin of the electron, throw away everything else out of your head. Spin of the electron: 2 by 2 matrices. The group elements are the abstract rotations. They're represented by 2 by 2 matrices. A spin one particle, a spin one particle has three mutually orthogonal states. The rotations are represented by 3 by 3 matrices. In that case, those 3 by 3 matrices are familiar to some of you. They are the three dimensional matrices which mix up the components of 3 vectors. What comes next, spin three halves? Well, first spin zero. What about spin zero? Spin zero, rotations don't act on it at all. They just leave it alone. So it's completely trivial. You can think of it in some very useless sense that rotations act on a spin zero just to leave it alone. That corresponds to one by one matrices. Just the unit matrix, that's all. Just the unit matrix, there's nothing. 2 by 2, that's spin a half. 3 by 3, that's spin one. 4 by 4, that's spin three halves, and so forth. So the same abstract group element, the same abstract symmetry operation, may have many matrix representations of different dimensionality. That's an important fact. And we're going to spend a little more time on it. Let's just review for a moment the symmetries we talked about, the symmetries of rotation of space on a spin, on a spin a half particle, in particular on a spin a half particle. And we said those rotation operations become represented by matrices which I called u. u because they're unitary. So these are 2 by 2 unitary matrices. I won't bother being more specific than just putting dots there. Those 2 by 2 unitary matrices act on the state vectors of a spin a half. Again, I won't be more specific than writing that. Let's erase it for the moment. And this is called a representation of the group. It's in one-to-one correspondence with the group elements, matrix multiplication, group multiplication, and in one-to-one correspondence. Let's just flesh it out. What does it mean for a matrix to be unitary? It means that its inverse is its Hermitian conjugate. But we know that the number of degrees of freedom, the number of parameters in a unitary 2 by 2 matrix, is how many? How many did we say they were? Eight to begin with. Four elements, each complex. That's four elements. This is four equations. That cuts you down to four. But there are only three independent parameters, the three Euler angles if you like. The three angles describing a rotation, namely the rotation angle and the two parameters which determine the direction of the rotation axis. So there are only three independent parameters of a rotation. These matrices have one too many parameters. And to get rid of that one extra parameter, one more condition, and that's that the determinant of U... I'll just represent that. We could write it as det, but I'll just write it as absolute value, but it's not really absolute value. It's a symbol for determinant. But that is equal to one. Now, a simple fact, incidentally: when you multiply matrices, the determinants multiply.
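For reference, the parameter counting from a moment ago, written out in one line:

\[
2\times 2\ \text{complex entries}=8\ \text{real parameters}
\;\xrightarrow{\;U^{\dagger}U=1\;(4\ \text{conditions})\;}\;4
\;\xrightarrow{\;\det U=1\;(1\ \text{condition})\;}\;3,
\]

matching the three parameters of a rotation: one angle of rotation plus two numbers fixing the direction of the axis.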
The determinant of A times, sorry, the determinant of the product of A and B is the determinant of A times the determinant of B. It is also true that the determinant of a Hermitian conjugate of an operator is the complex conjugate of the determinant of the original operator. So here's what we can say. Let's take the determinant of both sides of this. The determinant of both sides of this is, first of all, the determinant of U dagger U, is also equal to the determinant of U dagger times the determinant of U. And what's the determinant of one? One is equal to one. One here, of course, means the unit matrix, or one's on the diagonal. All right, now, next thing is the determinant of U dagger is the determinant of, well, is the determinant of, is the complex conjugate of the determinant of U, all right? And if the determinant of U is one, the determinant of U dagger is one, and the whole thing is consistent. The only point is that it is consistent. The consistent condition on U's, that's, when I say consistent, I mean that it's consistent with the law of matrix multiplication. If you take a bunch of unitary matrices, every one of which has determinant one, and you multiply them together, you'll again get a thing with determinant one. All right, so it's a mathematical theorem, which may not be too surprising for many of you, but the special unitary matrices, two by two matrices, which have three parameters in them, special means that U dagger, U equals one, that the group of those matrices is identical to the group of rotations in three-dimensional space. Now, it's not quite true. Go ahead. Is there, is there nothing that says that the determinant has to be real? No, the determinant of a... It just has to be radius one. Complex number of reals. Yeah, right, the determinant of a unitary matrix must be a pure phase. Okay, that's just from the fact that it's unitary. Okay, from the fact that it's unitary. But you can take any, here's the point, you can take any unitary matrix. Let's suppose its determinant is e to the i theta. You can then multiply that unitary matrix by... Well, you'd multiply it by e to the minus i theta over two, because when you calculate the determinant, you multiply two elements together. So if you took U and multiplied it by e to the minus i theta over two, you would construct a special unitary matrix. All right, so the point is the extra phase that's in there is almost trivial. You can always get rid of it by a redefinition of U. And in fact, the rotation group doesn't contain this piece here. It's just isomorphic to the matrices with determinant one. Okay, I told you a little fib when I said that SU2 is exactly the same as the rotation group. It's not exactly. There's a two-to-one correspondence instead of a one-to-one correspondence. The unitary operator's U and minus U, if U is a unitary operator, U dagger U equals one, so is minus U, whatever U is. All right? In fact, there's a two-to-one correspondence. The matrices U and minus U both correspond to the same rotation in space. But that's a fine point that I... which is not going to play any special role in what we say, at least for the time being. So a simple statement, which is almost true, is that there's a correspondence... certainly there's a correspondence, but that there's a one-to-one correspondence. Not quite true. It's two-to-one correspondence between the unitary matrices, two-by-two matrices, the special unitary matrices. Those were determinant one. And the group of rotations. 
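The determinant argument and the parameter count can be written out compactly (a minimal sketch in standard notation, using the same U as above):
\[
\det(U^\dagger U) = (\det U)^{*}\,\det U = |\det U|^{2} = 1 \;\;\Rightarrow\;\; \det U = e^{i\theta},
\qquad U \to e^{-i\theta/2}\,U \ \text{has determinant } 1,
\]
\[
\underbrace{8}_{\text{real entries of a }2\times 2\text{ complex matrix}}
\;-\;\underbrace{4}_{U^\dagger U = 1}
\;-\;\underbrace{1}_{\det U = 1}
\;=\; 3 \quad\text{parameters, matching the three rotation angles.}
\]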
So for that reason, the electron spin or the electron wave function is called a doublet. And it's acted on by two-by-two matrices. Now, let's talk a little bit about the generators of a group. I think we actually talked about it, but let's be specific about it again. When you have a symmetry, and the symmetry is represented by a set of matrices like this, then there's a concept of the generators. The generators have to do with infinitesimal elements of the group. An infinitesimal element of the group means, in the case of rotation, a rotation about some axis by a very, very small angle. In other words, it's an element of the group which is very close to unity. Unity means no transformation. Elements which are very close, which are just very small shifts of angles and so forth, are called infinitesimal elements of the group. And the generators of the group are very closely related to these infinitesimally small operations. Now, one of the reasons that the infinitesimally small elements are important and interesting is because you can build up any element out of a sequence of small ones. In other words, if I want to rotate about some axis by a finite rotation, I just think of it as a multiplication of a bunch of infinitesimal ones. So if you know the properties of the infinitesimal generators, you can put together the entire structure of the group. We're not going to do that. We're not going to study that today, but we are just going to define the generators of the group. And the reason we're going to define the generators of the group is because they really are the conserved quantities. They are the quantities that are conserved, and which play the role of conservation laws once you know what the symmetry group is. Okay, so let's define them. Incidentally, for what I've written here, there's nothing specific to SU2. This could be SUn for any n, the special unitary n by n matrices. They're, of course, not all isomorphic to rotations. It's only the SU2 group which is isomorphic to rotations of space. SU3, SU4, SU5, not so easy to visualize. Okay, so here we have the definition. And now let's take an element which is very close to the identity. It doesn't have to be SU2. Let's write it. U is equal to the identity, 1, plus something small. Let's put an i in for notational simplicity; it'll simplify things. A small parameter, epsilon, and a quantity which I'm not going to call g, even though g would stand for generator, because we've already used g for group element. And so therefore I will use t. Why t? Beats me, I don't know. So u is 1 plus i epsilon t. What is t? Let's find out what we can learn about t. We use the fact, first of all, that u dagger u is 1. Alright, so u dagger u, that's 1 plus i epsilon t. Oh, incidentally, it's easy to prove that if u dagger u is 1, then u u dagger is also 1. I won't prove that for you, that u and u dagger always commute with each other. Not true for all operators, but true for unitaries. So it doesn't matter which order you multiply. Alright, so it's 1 plus i epsilon t times 1 minus i epsilon t dagger. If this is u, then this is u dagger. i gets changed to minus i when you complex conjugate. And this is supposed to be equal to 1. Let's retain things only to the lowest order in epsilon. Epsilon squared is too small for our interest tonight. So let's just keep things to order epsilon. And that says 1 times 1 cancels the 1 there. And it says that i epsilon times t minus t dagger is equal to 0.
Or to simplify, it just says that t is equal to t dagger. What's an operator that's equal to its own Hermitian conjugate? Hermitian. And Hermitian operators in quantum mechanics represent observables. Alright, Hermitian operators represent observable quantities. For the case of rotations, what are these t's? What physical significance do they have? There are three of them. First of all, why are there three of them? There are actually an infinite number of them, but there are three linearly independent ones. So let me explain why that is. The reason is because the rotation group has only three parameters. There are only three independent ways that you can rotate. You can rotate about the x-axis. So I could have made an infinitesimal rotation about the x-axis. I could have made an infinitesimal rotation about the y-axis or the z-axis. And in fact, for an infinitesimal rotation, if I wanted to rotate about some other axis, what I could first do is rotate a little about the x-axis and then a little about the y-axis and then a little about the z-axis. Another way to say it is that the generators behave like vectors. If I want a rotation about an axis in an arbitrary direction, the way I can think of it is making it up out of the components, the component rotations about the x, y, and z axes. So there are three independent directions that you can rotate in and three linearly independent t's. Oh, we missed one point. Come back. Back up. There is this condition here, the condition that the determinant is one. All right, so let me tell you about it; we'll come back to it in a moment. The determinant of any matrix which is of the form one plus epsilon times a small matrix m is equal to one plus epsilon times the trace of small m. The determinant of a matrix which is close to the identity is one, that's just the identity, and then the small deviation is epsilon times the trace. That's something to prove. But we can see what that means immediately. It means that the trace of t is zero. The trace of t has to be zero in order for the determinant of u to be equal to one. All right, so that's the second condition. The trace of t is zero, and it should be Hermitian. How many independent Hermitian 2 by 2 matrices are there with trace equal to zero? The answer is three of them, and they correspond to the rotations about the three axes. But more abstractly and generally, the generators of the group of rotations are the angular momentum operators for a system. So that's what angular momentum means in quantum mechanics. It is the generators of the rotation group, and you act with the angular momentum, or one plus i epsilon times the angular momentum, to rotate a system a small amount, and then if you want to rotate it a large amount, you just do it over and over again. So everything is contained. All the information about the structure of the group is contained in the generators. In fact, it's contained in the commutation relations of the generators. Those are the important things. I won't go into them again. We did commutation relations of angular momentum. I just wanted to connect things up a little bit for you. But the important thing is that the conserved quantities associated with the symmetry are the generators. So there are three independent 2 by 2 matrices? Three independent traceless 2 by 2 Hermitian matrices, yes. And they have a name.
The Pauli matrices. Yeah, the three Pauli matrices. And then if you went to 3 by 3 matrices? 3 by 3 unitary matrices? Unitary, traceless. And there are 8. Like the Gelman matrices. Yeah, the other Gelman matrices. We're going to do that in a moment. That 8 is the same 8 as the 8 gluons. The 8-fold way. Actually, that's a different 8. Not because it's mathematically the same, but it's associated with a different SU3 symmetry than color. But yeah, mathematics is the same. OK, so that was a little bit about a little review. The sorrows of Alice. Let's torture Alice even more. Let's go to SU3. SU3, the pattern is exactly the same, except we're talking about 3 by 3 matrices. In fact, I'm not even sure that there's very much more to say about it, other than doing some counting and finding out how many independent generators there are. So to do the counting, it's very easy. A 3 by 3 unitary matrix has how many elements? How many? Starts with 18. 18 independent, 9 independent complex numbers. That's 18. But then we have U dagger U equals 1. Now, this really is an equation for each element. This is a product. This is a matrix product. So we can think of it as ij. And this is delta ij over here. How many equations is this? This is nine equations. So we have 18 minus 9 is 9, but then one additional equation that the determinant is equal to 1. That's 8 equations. And so we can guess that there are 8 different... If we start with the unit matrix, in how many distinct directions in the group space can we deform away from the identity? And the answer is 8, the implication being there are 8 linearly independent 3 by 3 traceless Hermitian matrices. Traceless Hermitian matrices, the number of parameters is 8. So then we're going to come back to that. But before we do, let's talk a little bit about how representations combine to form new representations. Now, what we're talking about here is combining systems together to form new systems. In the case of spin, we might be talking about combining more than one particle together to create some sort of composite, for example. And that composite would also have a spin. Let's not worry about the motion of the particles. Let's just think of them as things that we staple on top of each other, so they're nailed down to each other. But each one has a spin. So each one has a spin. And let's suppose that we take two such particles. We put them on top of each other, two half spin particles. What do we get? Well, we get four possible states, but those four possible states can be thought of as a basis in that four particle states, can be thought of as a spin zero object and the three spin one objects that a half, two half spins can make. If we take two half spins, we can certainly make a spin one. Why? We can line up the spins in the same direction. So the z component of spin can be one. It can be minus one. And it can be zero, but it can be zero in two ways. All right, but still, a spin one particle only has three states. So one of those two ways of making zero must not be part of the spin one multiplet. What can it be? It can only be spin zero. Why spin zero? Because there's nothing for it to transform into. So there are three combinations which form the spin one multiplet and one which forms the spin zero multiplet. When you combine spins together that way, there are two possible total spins you can make. 
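For reference, the three traceless Hermitian 2 by 2 matrices just mentioned are, in the usual convention, the Pauli matrices, and the two-spin-half statement at the end can be written in the same shorthand used later for SU3 (a sketch, not something written on the board):
\[
\sigma_1=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\quad
\sigma_2=\begin{pmatrix}0&-i\\ i&0\end{pmatrix},\quad
\sigma_3=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix},
\qquad
2\otimes 2 \;=\; 1 \oplus 3 ,
\]
one spin-zero singlet plus the three states of spin one.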
So if you were to take an atom made out of two half spin particles and ignore the orbital motion altogether, one possibility is that the spins are in the same direction. That would give you spin one. And the other possibility is that they are in the opposite direction, in which case it would be spin zero. It's ortho and para-hydrogen, in particular for the hydrogen atom, where the two spins would be the proton and the electron. So you combine, it's important to know how to combine spins and to know what you get, what possibilities you get when you combine them together. What happens in SU3? Now we want to jump to SU3 and discuss some very specific representations of SU3. They're the important ones which we will discuss. First of all, there is the triplet. The triplet is simply, if u is a three by three matrix, then it has to act on a thing with three components. In the present context, the three components represent the three colors of a quark. So we're talking now for the moment about one quark. I'm not three quarks, one quark. And that quark can be red, yellow, red, blue, or green. It can be red, blue, or green. This is the amplitude that it's red. This is the amplitude that it's blue. This is the amplitude that it's green. And what does the group do to them? It mixes them up. Red, blue, and green takes any particular state and sends it to some quantum superposition of states of mixed colors, shall we say. So if we took a red quark, that would be like an upspin, 001, and we rotated it by this SU3 operation, we would get something of a mixed character which would have amplitudes for being red, blue, or green. Some quantum superposition. So that's the meaning of these operations. But it's interesting and important to know what happens if you combine representations together. Wait, let me go back a step. Let me go back a... Yeah, now we want to talk about the various representations of SU3. The representations of SU2 are the spin states. It's been 0, 1, 1, 3 halves, 2, 5 halves, and so forth. Those are the representations of rotations. We want to know something about the representations of SU3. And I'm not going to tell you very much, just a very little bit. There are rather two and three important representations for our purposes. One of them is just a three-dimensional representation, and it can be thought of as the three components, the three things that go here could be thought of as the three field operators for, let's call them, quark, red, quark, blue, quark, green. We can either think of these matrices as mixing up the states of a single quark, or we can think of it as mixing up the field operators that create single quarks, red, blue, and green. All right. Now, next, oh, this representation by three-by-three matrices that acts on the quarks, that representation is called the three. In the same language, if you use the same language, you would call the half-spin particle of SU2. You would call it the two, two for two states. So the three is a representation that's identified with the quark field itself, or with quarks. Now, there are also anti-quarks. Anti-quarks, their field operators are the complex conjugates of the quark operators. So you can ask yourself, if there is a matrix which acts on Q to give some new thing, we could complex conjugate everything. We could complex conjugate everything, and we could say, let's write it. 
If, let's call it U, U is a three-by-three matrix, if it acts on the column vector Q, I'm now being schematic, U is a three-by-three matrix, Q is a column vector. If it acts to give, let's call it Q prime, the rotated Q, then what is it that acts on the complex conjugate, or the Hermitian conjugate field operator? That's the object which represents anti-quarks. Well, it's quite obvious. It's going to be the complex conjugate, not the Hermitian conjugate, of the set of elements. If U is a bunch of elements, then U star is nothing but the complex conjugate elements. Let's call it star to be consistent. So there is a second representation of SU3, which is the set of matrices which are the complex conjugate matrices. For every matrix U, there is a complex conjugate, and that defines a second representation that's called the complex conjugate representation. It is the transformation property of anti-quarks, and that representation is called the three-bar. This is very abstract, but what does this correspond to in the case of electrons? It is simply the fact that if the electron wave function gets multiplied by e to the i theta, then the positron wave function gets multiplied by e to the minus i theta. They transform with complex conjugate group elements. So you would say that the electron and the positron are complex conjugate representations of the U1 group. That's the language. The quark and the anti-quark are complex conjugate representations of the group SU3. It would be the complex conjugates of the spin-half matrices? For the case of SU2, yeah. Now SU2 is a special, degenerate case where the complex conjugate representation is equivalent to the original matrices, related by another matrix, so it's a special case. It's called a real group, but this is not important for us now. The three and the three-bar representation are quite different. I mean, they look very similar to each other, but they are distinct. They should not be thought of as the same representation of SU3. They're quite different. And in fact, the generators of the three-bar representation are the negatives of the generators of the three representation. That's not too hard to prove. That means they carry the opposite color. If a quark carries a red color, then an anti-quark carries a minus red color. It has the opposite value for the generators. All right, so these are the quark and the anti-quark. Now what happens if you take a quark and a quark, two quarks, or a quark and an anti-quark? Just like you could take two electrons and make a spin one, which was a triplet. Nothing to do with this three, but the triplet of SU2. Just like you can make a spin one or a spin zero, you can do some similar things with three and three-bar. You take two quarks and put them together. You can call that three times three. Three times three, because two quarks are each a three. This is just a notation. Three times three means two quarks. Sometimes this is written with a circle, to indicate that you're not actually multiplying numbers. You're simply building a space which is now a space of two quarks. It has nine independent states, red-red, red-blue, red-green, and so forth, all nine states. And if you work out what representations of SU3 appear when you multiply three by three, the answer is, first of all, there is an anti-quark representation. That's going to prove to be interesting: if you take two quarks, you make something whose color is the same as an anti-quark.
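In symbols (a sketch; the indices i, j, k run over the three colors, and \(\varepsilon_{ijk}\) is the totally antisymmetric symbol that is written out later in the lecture): the quark and antiquark fields transform as
\[
q_i \to U_{ij}\, q_j , \qquad \bar q_i \to U^{*}_{ij}\, \bar q_j ,
\]
and the two-quark statement is
\[
3 \otimes 3 \;=\; \bar 3 \oplus 6 ,
\]
with the \(\bar 3\) the antisymmetric combination \(\varepsilon_{ijk}\, q_j q_k\) and the 6 the symmetric combinations.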
That's a little bit odd, don't you think? But it's true. It's true anyway. And one more representation called the six-dimensional representation of SU3. We're never going to need the six-dimensional representation, but there is a six-dimensional representation of SU3. I can tell you what these things correspond to. If you think of the quarks as just indices, color indices, then you can combine the two quarks states symmetrically or anti-symmetrically. You can combine them symmetrically or anti-symmetrically. Anti-symmetrically gives you the three-bar, symmetrically gives you the six. But this is not terribly important. The point is if you make two quarks somehow in the laboratory, the transformation properties of the two-quark system under this SU3, there will be one term or one way of combining them together, three independent states of that system, which will be like an anti-quark, and the other one will be this mysterious six, which we don't give a damn about. We'll see why we're going on. Now, what happens if we take a quark and an anti-quark? If we take a quark and an anti-quark, what we get is different than what we get when we get a quark and a quark. We get, again, six and three is nine states, but it's not the same nine possibilities. First of all, we make an object which is a one. What is a one? That's a thing which is completely invariant under the color operations. Roughly speaking, it's a red, plus a red, anti-red, quantum superposition of red, anti-red, blue, anti-blue, and green, anti-green. That's one thing, and that's a single representation, and it simply does not transform under the SU3 group. It's called a singlet, an SU3 singlet. And one way of saying it is it's the analog of combining together two electrons to make a spin-zero state. It's like a spin-zero state. It does not transform under the rotations. It doesn't transform at all. It's just one state, so how could it transform? There's nothing for it to transform into. So when you take a quark and an anti-quark, there are nine states, one of which, one linear combination, red, anti-red, plus blue, anti-blue, plus green, anti-green, that just forms a singlet. And then there are eight others, and those eight others are called the eight-dimensional representation of SU3. That eight, incidentally, think about that eight for a moment. We've seen eight in another place. It was the number of generators. Just like the number of angular momenta. When you rotate space, the angular momenta rotate into each other. When you rotate in color space, the eight generators get mixed up with each other. The way the eight generators transform is called the eight-dimensional representation of SU3. This is sometimes called the adjoint representation. The words don't matter. It's just a representation that transforms the same way as the generators. And that's what you get when you multiply 3 cross 3 bar. Finally, one other operation. What do you get if you take three quarks? Three, cross three, cross three. You take three quarks, there are 27 states. Well, you can do it in pieces. You can say, if I combine together these two quarks, what I will get is either a one, sorry, is either a three bar, which is an anti-quark, or a six. So two of these quarks together will necessarily either transform as a anti-quark or as a six. Six is an un- as a peculiar representation, which I said we won't care about very much. Now, what happens if you take this product and now cross it again with three? 
In other words, we see what we get when we combine two quarks. Here's what we get. And then we combine it again with another quark. We will either get what appears if we take a quark and an anti-quark, and what that is, that's one plus eight. The plus here just means or: you either get a one, the singlet, or an eight. So it's like saying you either get spin one or spin zero. That's what this plus means, or. When you multiply six times three, I think you get an eight plus a ten. But this, again, is not very interesting to us. We don't care very much about that. The interesting thing, and the most interesting thing, is when we multiply three quarks together, we get the possibility again of a singlet, of a state which is completely colorless. It has no transformation property under the color. It's neutral. It's just like taking a plus charge and a minus charge which cancel each other. We get the singlet which has no SU3 transformation at all. And it can be thought of as a combination which simply is invariant under the SU3 group, under the SU3 that mixes up the quarks. What is it? Can we guess what it is like? I'll tell you what it is. It's just a red, a green, and a blue quark. But you have to be careful. It's a red, green, and blue quark anti-symmetrized. Not important. It's a red, green, and blue quark. A red, green, and blue quark are the singlet. Then there are other combinations, red, red, green, and so forth. They form the remaining states here. But the most interesting one for our purposes will be this singlet here. So we see that when we take a quark and an anti-quark, just with regard to the colors, you can combine the colors in a particular way to get a singlet plus other stuff. And when you take three quarks, you can combine them together to get a singlet plus other junk. So the question is, is the probability of these other things so low that we don't consider them? The question is what the energy of them is. And I will tell you, the energy of them is infinite, but we will come to that. The energy of them is infinite, and so they don't appear in the spectrum. Now, how can the energy of them be infinite? That was a great puzzle, which we now understand, and we're going to talk about it a little bit. All right. Right. But now, ah, there is one other kind of particle in nature, sorry, there are many other kinds of particles in nature, but there's one other particle in quantum chromodynamics. We're talking about quantum chromodynamics, QCD, the theory of quarks and gluons. What, of course, we've left out is the gluon. The gluon is an object which, I think I've told you before, behaves as if it were a quark and an anti-quark. With respect to the symmetry, that is; it doesn't behave like a quark and an anti-quark with respect to what happens if you hit it or anything, just with respect to the color symmetry. Its color properties are the same as a quark and an anti-quark, but only the 8. It transforms as the 8-dimensional representation. It transforms the same way as the generators of the group. All right. So again, it is this octet, which is separate from the singlet here. It's the remaining states here, and that's what a gluon is. A gluon is a thing with two indices, like a quark and an anti-quark, a quark and an anti-quark arranged into this octet of SU3. So it's not neutral. It itself is not neutral. So let's write down the postulates, if you like, of quantum chromodynamics, of QCD. It's got an SU3 symmetry, which I won't write down. The quark transforms as a 3. Sorry.
The anti-quark transforms as a 3 bar, and the gluon, or the gluon field, as an 8. Because these equations don't really mean anything. Particles can't equal numbers. Particles are particles. Numbers are numbers. This just means the quark field transforms as the triplet, the anti-quark as the anti-triplet, the complex conjugate, and the gluon as the 8, which is a piece of a quark and anti-quark. This remaining possible gluon doesn't exist in nature. Do we know why? Not completely. But the important point about it is it doesn't mix up with the other components. And so throwing it away is a consistent, a mathematically consistent thing to do, because it doesn't get mixed up with the other components when you transform under the SU3 group. That's group theory as applied to quantum chromodynamics, as applied to quarks and gluons. Another postulate of quantum chromodynamics, and it's not really a postulate, it's a dynamical output of the theory, but let's take it as a postulate and then explore what it means in a moment. And that postulate is: all of the particles of nature, the real particles of nature, not quarks and gluons, which are never seen singly but are always seen in composites, but the particles which can be separated off and examined individually, the particles of finite mass, the real things that occur in the laboratory, are all, well, real particles. I don't know, there's nothing not real about a quark, so I hate to call them the real particles. The unconfined particles, the free particles, the liberated particles, always transform under the one. In other words, they are singlets. All the real particles in nature are singlets. There are only two independent ways, well, I'll tell you what the ways of creating singlets are. This is a mathematical fact now. There are only two fundamental ways of making singlets out of quarks. One of them is to take a quark and an anti-quark. That creates a singlet. The other is to take three quarks. Now, you can do other things. You can take a quark and an anti-quark and juxtapose it with three quarks. Or you can take three quarks and another three quarks, three quarks in a singlet and three other quarks in a singlet. That will also be a singlet. But all of the objects that you can make up have the quantum numbers of combinations of things made out of three quarks or a quark and an anti-quark. What about the gluons themselves? First of all, before we do it, let's give these things names. What is an object made out of three quarks? A baryon. A baryon, right. They always have half-integer spin. When I say half spin, that can be one half, three halves, five halves, seven halves. Why? Because they're made out of three half-spin particles. What about the quark and anti-quark ones? These are the ones, of course, in the three cross three bar. In three cross three cross three, there's the triplet, sorry, the singlets are called baryons. The singlets in three cross three bar, what are they called? A quark and an anti-quark. Mesons. Now we left out one thing. Can we make things out of the gluons? Yeah, there's a thing that you can make just out of gluons. In fact, if you take an eight cross eight, and I told you roughly what happens if you combine quarks together, you could ask what happens if you combine gluons together, or gluons with quarks.
I'm not going to go through the whole list of multiplication tables that you get when you combine all sorts of things, but there's certainly at least one more interesting thing, and that's eight cross eight. This is simply two gluons. Two gluons. Now it's not obvious. There are 63 states altogether in eight cross eight, and one of them is a singlet. What did I say, 63? 64 states. Did I say 63? Yeah. There are 64 states in eight cross eight, and the rest of them form 63 things, which are a combination of eights and other things. The eight appears a number of times, I forget which; I don't remember which representations appear, but there are 63 other states. None of them are singlets. There's one way of combining together gluons to form singlets. So you might ask, is there a particle which is composed of two gluons? Just two gluons bound together into an SU3 singlet, and the answer is yes. They're called glueballs. They're not made of quarks. They don't have any quark content in them. They're fundamentally just a pair of gluons. You can also make three gluons, incidentally. It's possible to get more complicated things. But there are glueballs. So the spectrum of hadrons, the spectrum of strongly interacting particles, which come from quarks and gluons, consists of baryons, mesons, and glueballs. All of them are color singlets. Color singlets is the watchword. Now, another fact, another fact. You can check this yourself easily. I think we've actually checked it before. Remember now, what we can make is quark anti-quark, quark quark quark, or gluon gluon, and combinations of those. What kind of electric charges do we get if we combine three quarks? Well, I think we can get charge two. Let's see, we can get zero, one, and two. We can get two by taking three up quarks. What about three down quarks? Three down quarks is minus one. So you have minus one, zero, one, and two. But the important point is, you cannot make a fractional charge with three quarks. There's no way to take three quarks and make a fractional charge. What about a quark and an anti-quark? Well, again, you can take an up, an anti-up, or two-thirds and minus a third, no, two-thirds and one-third, you know what the story is, and you can again make integer charges. I don't remember you telling us what the charge of the strange was. It's the same as the charge of the down. I did tell you, because of the way that I grouped them... Yeah, it's the same as a down. All of the quarks are either up-like or down-like, and they come in three families, the up and the down, the charm and the strange, and the top and the bottom, but they're three replicas of the same thing. So all you can make is integer-charged particles. And so from the point of view of color, the fact that you only have integer-charged particles in nature is the same statement as that the particles always combine into SU3 singlets, color singlets. The question about that is that a singlet is like a pair of entangled electrons. Is there any significance here? Yeah. Other than the way you transform them? Well, that is the way you make such states, is to entangle them very definitely. They are certainly entangled. Does it have any... I'm not sure... You can't separate them and measure the correlations, like you did in the... Well, you can't separate them because they get stuck together, but you can separate them as far as you like, and it'll just cost you a lot of energy.
So in principle, yes, they're entangled objects. Not easy to manipulate. You want to try to do an experiment of that type with them. Question? I'm actually having trouble understanding those equations. I know. Not the details of how you get it, but what the operations are. All right, good. The left-hand side's got 3 cross 3, which is the group 3... The representation 3. Okay, so let me tell you more mathematically what is going on here. Yeah. Let's take SU2 first, all right, and take two spin-a-halves. We can think of the two spin-a-halves in terms of the field operators which create their particles. That's the easiest way to think about it for the moment. So let's just call the field operator psi, and there are two components to it. The psi which creates a up spin and the psi which creates a down spin. Let's call them psi... psi i, where i can take on two values, either up or down. Okay? Now, supposing we want to create two particles. If psi creates a particle, or psi dagger, it doesn't matter. If psi creates a particle, how do we create two... If psi creates a particle, we can think of the two components. We can think of it as psi 1 and psi 2, or psi up and psi down. And the SU2 operations act on this. Now, that's one particle. How do we make two particles? Well, we act twice with psi. But we can act with psi up twice. That creates two up particles. We can act with psi down twice. That creates two down particles, or we can do it one of each. So, let's write psi i, psi j. In fact, these two operators don't really have to correspond to the same particle. We could even think of one of them as being the electron, the other being the proton and the hydrogen atom, for example. Let's even give them different names. Psi and phi. They might be the same, they might not be, but let's allow them to be different for the moment. Okay? So, this now is an object with two indices. It's an object with two indices. It's kind of like a tensor. If we were thinking about vectors in three-dimensional space, tensors are objects with two vector indices. This object is like a tensor, and in that it has repeated indices. Now, SU2 can operate on it, because SU2 operates on psi and it acts on phi. This is a collection of four objects. Up, up, up, down, down, up, and down, down. So, we can lay out those four, psi up, psi up, psi phi up, psi up, psi down, and so forth and so on. We could lay them out in an array and then ask what happens when you act with SU2 on this composite object here? Well, they get mixed up with each other. They get mixed up with each other for the simple reason that the psi's get mixed up with each other and the phi's get mixed up with each other. And so, certainly, these psi times phi's are going to get mixed up with each other. And there's going to be a four by four matrix which represents the same action as just going through psi's and phi's separately and rotating them by the group action. Okay. Now, you look at these... at the way these things mix up with each other. You look at the way they mix up with each other and you come to a discovery. You discover that the combination... let's see what it is. Psi up, phi down, minus psi down, phi up doesn't mix with anything else under this operation. It just goes into itself. It just goes into itself. This is an easy thing to check. You just see how SU2 acts on psi up. It mixes it with psi down. So it acts on phi down. And you check what happens to this particular combination. And the answer is nothing. So this is a singlet. This is one. 
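The check alluded to ("an easy thing to check") can be written in one line (a sketch; \(\varepsilon_{ab}\) is the 2 by 2 antisymmetric symbol that comes up again later):
\[
\varepsilon_{ab}\,\psi_a\varphi_b \;=\; \psi_\uparrow\varphi_\downarrow - \psi_\downarrow\varphi_\uparrow
\;\;\to\;\; \varepsilon_{ab}\,U_{ac}U_{bd}\,\psi_c\varphi_d \;=\; (\det U)\,\varepsilon_{cd}\,\psi_c\varphi_d \;=\; \varepsilon_{cd}\,\psi_c\varphi_d ,
\]
since det U = 1. Equivalently, to first order in the small parameter the change of this combination is proportional to the trace of the generator t, which vanishes.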
The other three possibilities, what are they? They are psi up, phi down, plus psi down, phi up. This is a... I have not put in square roots of 2, you could put in the square roots of 2 if you wanted. This is orthogonal to this. And then there's psi up, phi up, and psi down, phi down. What happens when you do the SU2 rotations on these states? They mix into each other. In particular ways, they will mix into each other. This singlet does not get mixed with these three states at all. And these three states mix up among themselves as if they were a spin-1 object. They get mixed among each other. And this is then called the three. Spin-1 has three states. These three mix up among themselves. This doesn't mix up with anything. So when you combine two half spin states, when you combine two half spin states, you make a one or you make a three, depending on which combination you take. You make a one or you make a three. You make spin one... sorry, you make spin zero or spin one. That's the sense in which these equations are being used here. When you combine together a quark and an anti-quark, there are nine different ways to do it. One of the linear combinations behaves in this way here, and it's a singlet, and the others transform into each other eight objects, eight additional objects, which transform into each other. So that's the meaning of these equations here. How many of each kind of combination you get? How does that... is that a unique thing? This multiplication table... It couldn't be two plus seven if the... Say it again. Okay, so... You have to know some good things. You have to know some good things. Three couldn't be two plus seven or... No, you have to know what the representations of SU3 are. We did a bit of work to find out what the representations of SU2 were, and they were all the spin half and spin and integer spin particles. We could spend two more weeks studying SU3 and find out what the representations of SU3 are. What are... What matrix representations are they? There are not matrix representations of every dimensionality. There are matrix representations. For SU2, there happened to be, for every dimensionality, a matrix representation of SU2. For SU3, not so. There's the singlet, there's the triplet, there's the anti-triplet, there's the octet, the eight, there's a ten, there's a fifteen, there's a twenty. There's a whole bunch of them, but not every number appears in the possible matrices which have the same multiplication table as SU3. So that's a bit of work to prove. We didn't want to go there. I just went there by saying, all right, here's what you get. And... But the real interesting point is the singlets that appear. The ways that you can combine quarks together to get combinations which don't rotate, or which are the analog of electrically neutral. Why do I say electrically neutral? Because electrically neutral are exactly the objects which don't transform under that U1, the things which the phase cancels out of. Remember, go back again to electrodynamics. The things which are electrically neutral are the combinations where all the phases cancel out. If an electron has an e to the i theta, and a positron an e to the minus i theta, then an electron and a positron, the combination of them are neutral. Don't transform at all. Two electrons, that gets an e to the two i theta. So it's not called neutral, or it's not called or transforms under the group. 
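The U(1) analogy just described, written out in the same phase notation (a minimal sketch):
\[
\psi_{e^-} \to e^{i\theta}\,\psi_{e^-}, \qquad \psi_{e^+} \to e^{-i\theta}\,\psi_{e^+},
\qquad
\psi_{e^+}\psi_{e^-} \to \psi_{e^+}\psi_{e^-}, \qquad \psi_{e^-}\psi_{e^-} \to e^{2i\theta}\,\psi_{e^-}\psi_{e^-} .
\]
The electron-positron combination is invariant, which is what neutral means here; the two-electron combination picks up twice the phase, which is what carrying charge two means.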
So singlets are important, but you should just think of them as neutral objects with respect to the transformation properties. And... I mean, is there a possibility that you could just show with that singlet, that transformation? Just one example there, is that the... Oh, we can go through it, yeah. Maybe it's too much. It is a little too much, but... Yeah. One simple way to see it. I don't know if this will help you at all. If we write... Just write this object here. You know what epsilon ij is? It's the two-by-two matrix, which is the anti-symmetric matrix. Yeah. And contract the indices this way. That's equal to this. That's equal to this. And it's easy to prove that this is a... Oh, okay. This... I'll just spend a moment at it. In every dimensionality, for matrix dimensionality, there is always an epsilon symbol, which has a certain number of indices. For two-by-two matrices, the epsilon symbol... The epsilon symbol is the anti-symmetric... The fully anti-symmetric matrix. Compose that of zeros and ones. Okay? All right. Epsilon ij is the two-by-two matrix. What about for... What about for the three-by-three matrices in a three-dimensional? It's epsilon ijk. Requires three indices. All right, so I'll tell you right now, if you take an SU3, epsilon ijk, and let's call it... What did we call the quark fields? Q? Qi, Qj, Qk. Then you make the three-quark singlet, which has no color. ij and k have to all be different from each other. Otherwise, this is zero. So that's why I said a red, a green, and a blue make a singlet. Okay. We don't want to make Alice suffer too much tonight. For what it's worth, I just happened to bump into this in this other book, that 8 cross 8 equals 27. Oh, let's write it down. Okay, 8 cross 8. All right, so that means when you take two 8s and put them together, in other words, two gluons, you first of all make a singlet. What else? Plus 8, plus 8, plus 10. Well, I'll say it again. 8 plus 8, plus 8. Okay, I'll tell you what this means in a minute. Plus 10, plus 10. Plus 10, plus 10? Right. Okay. All right, so let's see, is that right? 10 and 10 is 20, and 8 and 8, this should make a... 8 times 8 is 64. Oh, it's plus 27. It's not equal to... Yeah, it's plus 27 at the end. You see there's a plus of 27? Yeah. Does that make 63? I can't tell. 64. Is that 64? Yeah. Okay, good. All right, so for your information, this is a fact I know. I know the representations of SU3 up to 27. I think there's a 35, but I don't remember. But... Another 10? Yeah. We're not up to 64? No, okay. So what this means is, first of all, there is one and only one combination, quantum superposition of states of two gluons, which makes a singlet, one and only one, and that's the glue ball. Then it turns out there are two orthogonal ways of making octets. Two... That's 16 states altogether, but they form two groups, and the first group mixes into themselves without mixing into the second. The second group mixes into itself without mixing into the first, and both of them form octets. Different combinations, different patterns of symmetry of the wave function of the state of the two gluon system. Then there's something called a 10. And again, there are two distinct ways of combining the gluons together to make tens, but who cares about tens because they don't appear in nature anyway. And then there's a 27-dimensional representation. How about in 3 x 3 x 3? Okay. So there's a singlet. I don't know. We had them written down, but I don't remember. 
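For reference, the standard SU3 decompositions consistent with the dimension counting just done (textbook results, not reconstructed at the board) are
\[
3\otimes\bar 3 = 1\oplus 8 \ (9),\qquad
3\otimes 3\otimes 3 = 1\oplus 8\oplus 8\oplus 10 \ (27),\qquad
8\otimes 8 = 1\oplus 8\oplus 8\oplus 10\oplus\overline{10}\oplus 27 \ (64),
\]
with the three-quark singlet being the \(\varepsilon_{ijk}\, q_i q_j q_k\) combination just written.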
Isn't it fair to say that what happens if you change the basis of the thing to the... Let's go back to the two-spin case, to the singlet and the triplet, and the up-up, up-down, down-up, down-down. Where are we going? Oh, the main... Here? Yeah. So you have that, and use those four things as a basis. Then the matrix which describes the operation factors into... it has a one, and zeros, and then a lower part that moves the three among themselves. So you can describe and interpret how the matrix looks. If you just work in terms of the basis vectors up-up, up-down, down-up, and down-down, then there's a certain set of four by four matrices which represent the group. But if you work in terms of these linear combinations, this one, this one, this one, and this one, then the matrices have exactly the property that you say. There's a one over here and then three by three matrices down here. Right, and the same thing for this other case here, there's a one... Yeah, that's right. The matrices, the big 64 by 64 dimensional matrices, break up into a bunch of blocks. It's block diagonal. It's block diagonal in a particular basis. In the basis, that's right. In fact, what you're saying is that there exists a basis in which this complicated matrix simplifies into block diagonal form. That's right. So there are 64 states. So the SU3 operation has to be represented on the two-gluon system as 64 by 64 dimensional matrices. But in some basis, those 64 by 64 dimensional matrices are a one, an eight by eight matrix over here, another eight by eight matrix over here, then a 10 by 10... I don't have enough room. Another 10 by 10 matrix and then a 27 down there. In a particular basis. That's exactly right. So each one of those doesn't mix with the other ones. That's right. Each... That's right. In the right basis, they don't get mixed. This one doesn't get mixed with this one, doesn't get mixed with this one. They only get mixed together as an eight dimensional representation. Is there a label or a name for that type of subset of the total matrix, of the 64 by 64? That basis. You mean for this, for a piece of it like this? Or in that whole set? Well, when you... I'm not sure what the right buzz word is... I think you're looking for a particular term which is escaping me right now. I don't think such a term exists, but I know what you mean. You say that... The 10 particles don't exist in nature, and the others you've declared non-existent in nature, so you don't care about them. Well, maybe you shouldn't say you shouldn't care about them, but they won't appear as real particles in the laboratory. We're going to talk about what creates this peculiar situation that you can't have a free quark by itself, but we'll come to that. So let's talk about it a little bit. Incidentally, the quantum chromodynamics idea goes back a long way. It goes back some time. I know who it's due to: Nambu, Yoichiro Nambu. And Nambu had the idea quite early, 1962, maybe a good 10 years before quantum chromodynamics became the standard theory of quarks and gluons. In fact, the whole idea of color goes back to him. He didn't call it color. I don't know what he called it. And the idea of gluons came along with the same idea. And his idea was fairly simple. He understood that if you made color singlets, then you would not get fractionally charged particles. That was his motivation. What do you have to add into quantum chromo...
into the theory of quarks in order to have some dynamics which will forbid particles of fractional charge? So he realized that if the quarks carried this kind of color quantum number and if the color always had to be a singlet, then you would never get... you would never get fractionally charged particles. He had an answer to the question of why particles should only occur in singlets and it went something like this. He said, alright, now we need another postulate. And the other postulate is the connection or the interaction between quarks and gluons. The postulate is that gluons play the same role in quantum chromodynamics that photons do in electrodynamics. But photons couple to the electric charge. The source of the photon field is electric charge. The gluons, there are eight of them. What are the sources of the gluons? What is the source of a gluon? There must be eight quantities, eight conserved quantities, which you can think of as being similar to charge, and each one of them is a source of that particular gluon. Of that particular species of gluon. The eight gluons correspond to fields which are similar in some sense to photons or to the electromagnetic field. The electromagnetic field is sourced by electric charge. What is the source of the gluon field? Well, what eight quantities do we have available? The eight generators of SU3, which are analogous to angular momentum for SU2. Angular momentum is a conserved quantity. It's additive. If you have two objects, you add their angular momentum. And color really means when you speak of the color of an object, you're speaking about how it transforms under the group, but the actual thing which radiates, the thing which radiates the gluon field, is the color itself, or the color itself, or the generators of the color group. Eight of them. And each one can emit the appropriate kind of gluon. The first meaning of this is that if you take a colored object such as a quark, it's going to be surrounded by a gluon field. So much like electrodynamics, the quantum chromodynamics replaces the electromagnetic field with a chromodynamic field. But the only thing new about it is that there are eight such fields. And the eight such fields transform the same way under that same eight associated with the eight gluons. So that's the first thing. Now, the second thing is to know something about the dynamics of electromagnetism. Let's begin with a charge and an anti-charge. A charge and an anti-charge attract each other. The meaning of that is that if you bring them close together, the energy goes down. So a charge and an anti-charge have less energy if they're close together than they are than if they're far apart. That's why they bind. That's why they bind, because the energy of a neutral system is less than the energy of the two charged, rather, the energy of a neutral system is lower than the energy of a charge system as a rule. For example, the energy of two plus charges is much more than the energy of a plus charge and a minus charge. How do I know that? Because the two plus charges repel each other. You have to do work to push the two plus charges together. You get work out of letting a plus charge fall towards a minus charge. So here's an example where a system with a net charge has more energy than a system in which the charges cancel out. Another way to think about it is if you have two plus charges, the lines of force have to go somewheres. And so there's net field out beyond the object. That net field has field energy. 
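In formulas, the field energy being referred to is the usual one (a standard fact, stated here in Gaussian units rather than taken from the lecture):
\[
U_{\text{field}} \;=\; \int \frac{E^2 + B^2}{8\pi}\, d^3x \;\ge\; 0 ,
\]
so any configuration whose lines of force spill out to large distances carries extra positive energy.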
Field energy is always positive. It's proportional to the square of the electric field. And so there's field energy stored in the fact that there are two plus charges here. If there were a plus charge and a minus charge, there might be field in between them. And let's suppose you brought them very close together. There might be a little dipole field in there, but there would be no field on the outside. There would be no large field energy on the outside. And so the field energy stored in a neutral system is less than the field energy stored in a charged system. The larger the charge of the electron, the more profitable it is, let's say, to have a neutral system energetically, or the more expensive it is in energy to have a net-charge system. So if the electric charge of the electron were a thousand times bigger than it is, the self-energy of the electron due to the electromagnetic field would be much bigger than it is. In fact, the repulsive force between two plus charges would be much, much stronger than it is, and the amount of energy it would take to assemble two plus charges would be very much bigger than the energy needed for a plus charge and a minus charge. The huge charge of the electron would then make it very, very prohibitive to have net charge and very efficient, low energy, to have all the charges cancel out. So if the coupling constant of electrodynamics were large, very large, we would be very used to the idea that the low energy particles, the low mass particles, mass is energy of course, the low mass objects in nature would be electrically neutral. If the charge were big enough, we might never have discovered individual electrically charged particles. They would be confined. It would just be too prohibitive in energy to pull them out of an atom, for example. So what Nambu said is, look, if I have this quantum chromodynamics, and if color, the color generator, or the color degree of freedom, is the source of the gluon field, then if we make the charge, the numerical coupling constant, the analog of electric charge, if we make it large enough, then it will be simply impossible to pull apart these quarks into non-singlet states. A singlet state is the analog of neutral. Pulling them apart, you would be left over with net amounts of color, which would cost you lots of energy. So he understood that when systems were in the color singlet state, they were fully attractive and pulled themselves together. And if they weren't, there was always an element of repulsion that meant it cost an enormous amount of energy to pull them together. So that was Nambu's explanation. He said, quantum chromodynamics has a natural way of making sure that the real particles in nature are integer charged, and that quarks always come in either the quark-antiquark combination or the three-quark combination. And this was essentially correct. It turned out to be correct, but the dynamics is a little more interesting than that. So let's talk about the dynamics and why the dynamics is more interesting. Any questions up till now? Okay. Why is the dynamics more interesting than that? And the reason the dynamics is more interesting is because the analog of the photon, the gluon, is charged; it carries color. The gluon itself is not neutral with respect to, remember, the neutral piece was this ninth member that we threw away. The eight gluons themselves are not neutral. They transform under the eight. It's as if they had a color charge.
That is why gluons don't appear in nature as objects. The color, they themselves are colored, and because they're colored, they have a big color field around them, and they're prohibitive in energy. But this is interesting. What it says is that the gluon itself is charged in a way that the photon is not. The fact that photons are not charged means they don't interact with each other. They're not sources. A photon is not a source of another photon. Electrodynamics, or at least classical electrodynamics, is a completely linear theory, meaning to say that electromagnetic waves just pass right through each other. That's because the electromagnetic wave itself is not charged and doesn't influence other electromagnetic waves. So electrodynamics, the photon and the electromagnetic wave are very simple and linear. But in quantum chromodynamics, it's much more complicated. The analog of the photon itself is charged. That means that the analog of an electromagnetic wave intersecting another electromagnetic wave will be a serious interacting phenomenon. In fact, even just a single wave of the gluon, one part of it will interact with the other. One part of it may have some color. The other part of it may have some color. Those colors will interact with each other. And the dynamics of gluons is far, and not just gluons, but the field, analogous to the electromagnet, the Maxwell field, is far, far more complicated and dynamically interesting, but also much more difficult to understand. It turned out that the pretty much the important thing that happens, which is new, can be summarized fairly easily. And I'll summarize it for you on the blackboard in terms of pictures. The equations that go with this, there are equations, but the pictures, I think, are better than the equations. There's a quark, and there is its quantum chromodynamic gluon field around it. Now, in electrodynamics, the different pieces of the photon field or the electromagnetic field do not interact with each other. And so there's no sense in which one field line repels or attracts other field lines. They're just completely independent. They don't interact with each other. In quantum chromodynamics, the gluon field itself is not neutral, and therefore the gluon field interacts with other pieces of gluon field. And the result is very simple. It turns out to be fairly simple. The lines of flux basically attract each other. They attract each other, and they attract each other in a way which makes a rule about lines of flux as they don't end except on another charge. The lines of flux coming out of a quark or coming out of a color charge attract each other, and in the process of attracting each other, energetically they want to form tubes. They want to form tubes of flux with the flux running right down the tube. It is more or less, you can either imagine that the lines of flux are attracting each other or that empty space is repelling the lines of flux and pushing them and squeezing them into a tube. That's what happens. That's pretty much the main effect of the nonlinear interactions between the gluon field, that it makes it energetically favorable to take those lines of flux and bundle them, and bundle them into tubes. Let's suppose that's true now. Now supposing we have a quark and an anticork. We have a quark and an anticork. Lines of flux come out of the quark and go into the anticork. If this were electrodynamics, those lines of flux would separate and form a nice dipole pattern that we're all familiar with. 
Anybody who's ever looked at a magnet knows that pattern, right. The field energy associated with this big dipole configuration is relatively small because the field gets weak far away. In this configuration, there's a uniform field, no matter how far you take these particles from each other. There is a uniform field in between, just exactly as if you took an electric or magnetic field and pushed it into a tube. There would be a uniform field, the lines of flux don't end, and because there's a uniform field, there's a uniform energy per unit length. What's the result of that? The result of that is if you try to separate a quark from an antiquark, the energy just grows linearly with the distance between them. As you pull this out, it's like the gluon field was really sticky, really Turkish taffy. Except Turkish taffy would get thinner and thinner as you stretch it out. So it's not quite like chewing gum or Turkish taffy. It's like a kind of chewing gum that as you separated it, always created more chewing gum in between so that it never got thinner and thinner. There's always a fixed number of lines of flux coming out of every charged object and they don't disappear. So this tube, it's called a flux tube, or a fluxoid, and the fluxoid has a uniform energy per unit distance. Any charged object, which means any object with nonzero color, any non-singlet, any object which is not a singlet, here it is, it puts out lines of flux. Those lines of flux bundle themselves, and unless they end on an object of opposite color, that object is just going to have an infinite energy because those lines of flux have to go someplace. They'll end at infinity, and the cost in energy will be infinite. That's the basic dynamics of quantum chromodynamics. Linear flux tubes like this are the ingredients which keep quarks from flying around freely. If you hit a quark real hard inside a meson, this is a picture of a sort of extreme picture of a meson, a quark and an anti-quark. You hit that quark hard, it goes flying out, but it simply can't get away. The field energy just increases and increases and increases linearly with distance until you've run out of kinetic energy and then it gets pulled back. Or something else happens. What's the other thing that can happen? Right. Yeah, right. The other thing that can happen is that the string can break. If this is a quark and this is an anti-quark, then spontaneously out of the vacuum, particle pair production can create an anti-quark over here and a quark over here. But you still haven't created a free quark. You've created two mesons. So you give one quark in a meson a good shot, you hit it really hard, it tries to escape, and either it can't escape because it runs out of energy and gets pulled back, or, actually more likely, a quark and an anti-quark will be created in between and you'll just create two mesons. If this quark over here is still going too fast, it may still try to escape. It may try to escape this one over here. What will happen? And so what happens when you hit a quark really hard inside a meson is that a whole bunch of mesons go flying off, forming what is called a jet. Do you ever get any other electrons? Oh yeah. Oh yes, absolutely. You can get anything as long as it's neutral. This is the picture of a meson. The other ingredient that is in quantum chromodynamics is that three quarks can exist. This doesn't explain three quarks.
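The flux-tube picture can be summarized in one line (a rough sketch; sigma, the energy per unit length, often called the string tension, is my label, not necessarily the one used in the lecture):

\[ V(r) \approx \sigma\, r \qquad \text{for large separation } r, \]

so the energy of pulling a quark and an antiquark apart grows linearly with the distance between them, and the string breaks roughly when sigma times r exceeds the energy needed to create a new quark-antiquark pair out of the vacuum.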
The three quarks are explained by the mathematical fact that the flux lines can come together in threes. Quark, quark, quark. But that's not too surprising because three quarks again make a neutral object. Three quarks again make a neutral object and that means that if you have three quarks there will be no net lines coming out of it. So somehow the lines of force that come out of each one of them have to be able to cancel each other. That's quantum chromodynamics. It's based on the SU3 group and it's called a gauge theory. A gauge theory is simply any theory where the conserved quantities are coupled to a Maxwell-like field, are coupled to a field similar to photons. It has a deeper meaning than that, but for our purposes, anytime there is a conserved quantity analogous to electric charge which functions as the source of a photon-like field, a gluon-like field, that's called a gauge theory. The gauge theories are always based on symmetries. Why are they based on symmetries? Because they're based on the idea of conserved charge and conserved charge means a symmetry of some kind. Incidentally, what's the connection between conservation and lines of flux? Conservation and lines of flux are intimately connected. If you have a rule, for example in electrodynamics, that every charged particle has to be the source of lines of flux, and that lines of flux can only end on charged particles, then charge has to be conserved. There's no way, remember these lines of force go off to infinity. Well, you could say, I could make this charge suddenly disappear if I allow the lines of flux to simultaneously all disappear. But that would violate the speed of light constraint. You would be able to send a message over an arbitrarily far distance if, by simply removing that charge, you suddenly changed the field at infinity. The fact that you cannot send a signal faster than the speed of light tells you that you can't remove this charge. Why? Because it's sort of anchored by the flux lines at infinity, and if you could remove it, either one of two things. Either you would send a sudden signal far away that the charge wasn't there by removing the field, or you would be left over with these lines of flux that don't end on charges. So if lines of flux have to end on charges, then that's tantamount to saying that the sources of the field are conserved, which in another language says there must be a symmetry. So some symmetries are connected with photon-like fields which give rise to lines of flux. Those theories are called gauge theories. And you can't even imagine without breaking the rules of a theory what it would mean for the charge not to be conserved, because the rules of a theory attach the charged particles to the lines of force. Okay, that's quantum chromodynamics. Now we've actually gone, depending on how you count, either two-thirds or three-quarters of the way through the standard model, we've gone through nine generators, if you like. The eight generators of SU3, and what's the other one? The electric charge, which is not the ninth generator of U3. That makes nine conserved quantities. They couple to the nine gauge fields, the eight gluons, and the photon. The standard model has three more generators, which we will talk about. So we've got nine out of 12. That's three-quarters. The other way of counting is to say we studied SU3. That's quantum chromodynamics. We've studied U1. That's electrodynamics. The standard model is SU3 cross SU2 cross U1.
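As a rough bookkeeping sketch of the counting being made here:

\[ \dim SU(3) = 3^2 - 1 = 8, \qquad \dim SU(2) = 2^2 - 1 = 3, \qquad \dim U(1) = 1, \]

so the full standard model group \( SU(3)\times SU(2)\times U(1) \) has \( 8 + 3 + 1 = 12 \) generators, and in the rough counting used here the eight gluons plus the photon account for nine of the twelve.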
We're missing the SU2. The SU2, it's not angular momentum. There's another ingredient in the standard model called SU2. We'll start to take up SU2 next time. We've covered a lot of ground. And I think in about two more lectures, we can probably finish off the basic structure of the standard model, SU3 cross SU2 cross U1, put it together into a package. And I think the other element that we need, of course, is the Higgs field. The Higgs field plays an important role there. But I would say within about three lectures, we will have the working parts of the standard model. And then we can start to explore what the puzzles of it are, why people think we have to go beyond the standard model, and why people think there are going to be interesting new things discovered at CERN. I think that's what we want to get. OK, good.
(February 1, 2010) Professor Leonard Susskind continues his discussion of group theory.
10.5446/15084 (DOI)
All right. We've really studied two different theories a little bit. Quantum electrodynamics and quantum chromodynamics, both of them are gauge theories. In fact, just about all of nature as we know it is in one way or another controlled by gauge theories of different kinds. And so I ought to tell you what a gauge theory is. Now, I'm not going to get into any depth about what gauge symmetry is. I'm going to tell you what a gauge theory is in more or less the minimal mathematical way. The first and simplest gauge theory is Maxwell's theory. Maxwell's theory of light or Maxwell's theory of electromagnetism, let's not worry about what the word gauge means. We may go through it later, but not now. But what it is, is a theory of fields which have all of the properties of the electromagnetic field. The electromagnetic field has six components, three components of electric field, three components of magnetic field. You can represent those six components in terms of a four vector called the vector potential. I'll just write it down for you. It's not going to play any big role today anyway. But just to tell you, there's a four vector that's generally called A, and it has an index mu. It has four components, A0, that's the time component of it, and the three space components A1, 2, and 3, or Ax, Ay, and Az. Let's not write them out in detail. So it contains a three vector and a fourth component, A0. The significance of A0 is that it's the electrostatic potential. It's the thing whose gradient is the electric force. So for example, the electric force on a charged particle is just gotten by multiplying the electric charge by the gradient, the derivative of A0 with respect to x. So it's just the electrostatic potential that's measured in volts, in particle physics of course it's not measured in volts, energy per unit charge, excuse me, in whatever way you measure electrostatic potential. That's the meaning of A0. Notice first of all, oh, and of course this is the electric field. This is the electric field. The components are derivatives with respect to x, y, or z, so it's the gradient of A0, and the gradient of A0 is just called the electric field. So the electric field is determined by the gradient of A0, and the magnetic field is determined by also derivatives, also derivatives of the vector potential, and in fact the magnetic field is the curl of the vector potential. So there are two ways you can study electrodynamics, either in terms of the electric and magnetic fields, of which there are six, or in terms of the vector potential of which there are four, but they are equivalent descriptions of electrodynamics, and we won't go into Maxwell's equations, but let me just remind you that Maxwell's equations give rise to electromagnetic waves. An electromagnetic wave is a wave of electric and magnetic field, and a typical electromagnetic wave has a direction in space that it moves, let's just represent it by an arrow, and it has a polarization. To understand what the polarization means, you'll have to think about the electric and magnetic fields, and the electric and magnetic fields oscillate as you go down the wave. Here's the electric field, for example. The whole thing moves with the speed of light, but at any given instant the electric field might look like that, and the magnetic field is always perpendicular to the electric field. Let's try to draw it, see if I can give it a drawing.
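In symbols, the relations being described are roughly as follows (a sketch in one standard convention; the full expression for the electric field also contains a time derivative of the vector potential, which the blackboard version suppresses):

\[ A^\mu = (A^0, \vec{A}), \qquad \vec{E} = -\vec{\nabla} A^0 - \frac{\partial \vec{A}}{\partial t}, \qquad \vec{B} = \vec{\nabla} \times \vec{A}, \]

which packages the six components of E and B into the four components of the vector potential.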
I'm imagining the electric field is in the horizontal plane, and it's always perpendicular to the, let's see, is this going to... it's always perpendicular, like so. In appropriate units, the electric and magnetic field are equal to each other and perpendicular. This would be a plane polarized electromagnetic wave. In a plane polarized electromagnetic wave, the direction of polarization, which of course is always perpendicular to the direction of motion, the direction of polarization is determined by the electric field. So the electric field determines the direction of polarization, and that's electromagnetism in a lot of electromagnetic waves in a nutshell. The other thing, of course, about electromagnetic waves is that there are sources. The sources are charges and currents, but let's particularly focus on the sources of the electric field. The sources of the electric field are electric charge, and to just draw a picture, to put it at the level of pictures, every electric charge creates an electric field. The electric field lines never end except out of the charges. A positive convention, the convention is a positive electric charge, puts out an electric field, which is radially outward, falls off as 1 over r squared. You can imagine that the number of electric field lines coming out of a charge is fixed, proportional to the charge, and therefore the number of electric field lines passing through any sphere is the same no matter how far the sphere is. To say that mathematically, the integral of the electric field over the sphere is the same wherever you go, however far out you go. That's Gauss's law. That's Gauss's law, and that Gauss's law has a very strong consequence. If you don't have any other charges in the system, let's say just a plus charge, then you can't get, and the rule is electric lines only end on charges, then there's no way to get rid of a charge. No way to get rid of a charge. If you try to get rid of the charge, either the electric field has to suddenly disappear everywhere. That would violate the rules of the speed of light, that you can't send a signal faster than the speed of light. So instead, if you said that a wave moved outward of missing electric field, that also wouldn't make sense, not unless there were charges at the end points of the electric field. So all this would really be, is this would not be the elimination of the charge, it would just be taking the charge and sending it out as a shell. It would be a shell of charge going out. The charge would not have disappeared. So it's a consequence, a deep consequence of the structure, the mathematical structure, Gauss's law, and other laws that electric charge is conserved. That is the essence of the gauge theory. Essence of the gauge theory, Maxwell-like fields, I'll use the word Maxwell-like because they're not all truly Maxwell's electromagnetic field. Maxwell-like fields consisting of electric-like and magnetic-like fields, if the gauge field is weakly coupled, and I will tell you what that means later, but it basically means that if the interactions between the parts of the electromagnetic field are weak enough that they don't interact with each other seriously, then the motion of a gauge field is exactly the same as a light wave. It will have a polarization. It will move down the axis with the speed of light unless something else, unless it's something in the dynamics that changes that. Naively at least, it's in every way similar to the electromagnetic field. The only difference is, it also has sources. 
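The statement about spheres can be written compactly (a rough sketch in SI-style notation):

\[ \oint_S \vec{E}\cdot d\vec{A} = \frac{Q_{\text{enclosed}}}{\epsilon_0}, \qquad |\vec{E}| = \frac{Q}{4\pi\epsilon_0 r^2}, \]

so the total flux through any sphere surrounding the charge is the same no matter how big the sphere is, which is what anchors the charge to its field lines at infinity and forbids it from simply disappearing.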
Every gauge field has sources analogous to the electric charge, but it may not be the electric charge. It might be something else. Let me give you an example which is an unphysical example. Not because it's inconsistent, but because it just happens not to be true as far as we know. There is another quantity in nature called the Baryon number. It's simply the number of quarks, but it's a conserved quantity. And you could imagine that Baryon number is the source of... Now, Baryon number is not electric charge. Why not? Well, because protons and neutrons have it. So it is not electric charge, but it could be the source of its own gauge field. If it were, that would mean that there would be forces between protons, neutrons, protons, protons, neutrons, neutrons that would be analogous to the Coulomb forces between charges, except it would have nothing to do with electric charge. They would have only to do with Baryon number. So, for example, two objects with the same Baryon number would repel each other. A proton and a neutron would repel each other because they have the same Baryon number. A proton and a proton would repel. On the other hand, proton and the antiproton having opposite Baryon number would attract. Now, the only rub in this ointment is that there is no such gauge field coupled to the Baryon number. Baryon number does not come with a field like this that surrounds it and no long-range force associated with it. So, Baryon number is not an example. Color is an example and the color forces of quantum chromodynamics also, oh, well, before I do that, one or two other points, one or two other points. The other point, of course, is this is a completely classical description of an electromagnetic wave. The corresponding quantum mechanical description is in terms of quanta, but of course, if we think of quanta as particles, at least that's about the only mental picture that we have to describe these discrete little objects, the discrete little objects which drawing them, of course, is always misleading, but let me draw a photon, there's a photon, a little point. That little photon has a position, moves with a speed of light, but also it has a little flag associated with it and that little flag is its polarization. I won't try to draw a flag on it. In other words, it has a pointer which either points this way, this way, or somewhere in between that indicates the polarization of the electromagnetic wave of which it is a quantum. That's a complicated statement, but I think you get it. So photons come with polarization and that polarization in principle can be rotated. You can rotate it by sending it through a quarter wave plate or in any number of ways. And that's photons. That's the photon theory of electromagnetism. And in the quantum theory, one can think of the Coulomb field as, roughly speaking, a field set up by the emission and absorption of photons, emission and absorption of these photons. Okay, so that's electrodynamics in a nutshell, electrons are charged, protons are charged, and so forth, and they interact in this way with the gauge field. Another aspect of gauge fields, I've already mentioned it, that there's always conservation laws associated with them, namely conservation of the sources. But conservation laws in both quantum mechanics and classical mechanics are always associated with symmetries. Symmetries of some sort or another are always behind conservation laws. 
The conservation of electric charge, we've talked about that, and I showed you how if you study the quantum mechanics of electrons in terms of charged particles in general, there's a wave field describing those particles, this could be the electron, and the symmetry associated with the conservation of charge is just the multiplication of the field by a phase. So that was the simplest example of a gauge theory, a gauge charge, meaning the electric charge, and a symmetry that goes along with the conservation of that charge. Those things go together. Okay, well, let me just very quickly remind you how it worked in quantum chromodynamics. In quantum chromodynamics, a more complicated structure. You do have Maxwell-like fields. Let's label them. The Maxwell-like fields are labeled by indices. So in quantum, let's continue to call them A. There is no universal symbol for the gauge field of quantum chromodynamics. Sometimes it's called G, sometimes it's called A, sometimes it's called B, sometimes it's called C. I'm going to call it A, but in order to distinguish it from the electromagnetic field, we give it some indices. I and J. It's a matrix. Now, it's a matrix, and that matrix transforms under a group. The group was Su3. So let me just remind you very, very quickly that we started thinking about Su3 not by thinking about gluons. This, of course, would be the gluon field, but by thinking about quarks. So a quark was an object which we called Q. Q could stand for the quantum field of the quark, and it has an index. The index, it's not the up-down index, it's not the charm-strange index. We'll come to that sooner or later. It is the color index, red, green, or blue. I takes on three values. And the symmetry operation is not multiplying by e to the i theta, but multiplying by a special unitary matrix Uij. It gives us Q prime i. That was the symmetry operation, which is a kind of rotation in a kind of three-dimensional complex space. Don't confuse it with ordinary three-dimensional space. This was the symmetry operation, and one would say that the Qs, think of them as particles, if you like, the Qs form a representation of Su3, which is called the fundamental representation. It's called the fundamental representation. It has three entries, red, green, and blue. Sometimes it's called the defining representation. It's the smallest non-trivial representation. And it just can be thought of as a three-component vector, one, two, three. And the unitary matrices can be thought of as matrices, three by three matrices. OK, we've gone through that. The other thing that I told you is that antiquarks, or antiparticles in general, are represented by the complex conjugate fields, fields which are simply the complex conjugates of the original ones. The relation, the mathematical relation between particle and antiparticle, or the wave functions of particles and antiparticles, that relationship is complex conjugation. So we could ask, how do antiparticles transform? Well, we simply realize that if you want to transform the complex conjugate field, you should use the complex conjugate matrix. If you multiply q by u to get q prime, then you must multiply q star by u star to get q prime star. The set of matrices u are called a representation of the group. The set of matrices u star are a distinct and different representation of the group. And the language we would use is that quarks and antiquarks are described by the three-dimensional representation. And we could call it three star, meaning complex conjugate. 
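A compact way to write the transformation rules just described (a rough sketch, with repeated indices summed over the three colors):

\[ q_i \rightarrow q_i' = U_{ij}\, q_j, \qquad q_i^* \rightarrow U_{ij}^*\, q_j^*, \qquad U^\dagger U = 1, \quad \det U = 1, \]

so the quarks sit in the fundamental representation, the 3, and the antiquarks in its complex conjugate, the 3-bar.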
It's usually indicated by a bar. Two different representations of SU3. So there's seven, we make this. Nine, actually, well, sorry. You asked me how many matrices there are? The number of generators is eight. Yeah, because these matrices are assumed to be special unitary matrices. Now you can ask, and it's interesting to ask, what kind of theory would you make if you didn't insist that they'd be special? And I'll tell you another time, maybe later, but not now. Okay, now what is Aij? Aij has two indices. And the way to think about it is that it has mathematically the same group theory structure, the same symmetry structure, as having a quark j and an anti-quark i. In other words, this is an object whose indices transform, one index as the three-dimensional representation and the other index as the three-bar. Another way to think about it is it has all of the properties with respect to the symmetry, not with respect to all physical properties, but with respect to the way it transforms. It transforms as if it were a quark and an anti-quark sitting on top of each other. Not its spin, just its color, as if it were a quark and an anti-quark, with one special rule, namely the trace of Aij vanishes. The singlet, the singlet that you can make, the piece which has no transformation property under the group at all, that you can kill and forget about. That's why there are eight Aijs instead of nine. So Aij is like a thing with a quark and an anti-quark index. But because it has a quark and an anti-quark index, it also transforms under the SU3 transformation group, and in that sense it's different than the photon. The photon is completely electrically neutral. Its field, the photon field, does not get a phase when you rotate the electron wave function. Electrons go to e to the i theta times electrons, positrons go to e to the minus i theta times positrons, photons do nothing. That's because they don't have electric charge. But the gluons, they do have color. Now what kind of color do they have? They have the color of a particle and an anti-particle. You might say a particle and an anti-particle shouldn't have any color. But they do because you could have a red particle and a blue anti-particle. That would make a thing which was not really neutral, which knew about color. So gluons do have color and they have color in what is called the adjoint representation, which is the representation of the eight generators. Okay, now, what about the interaction between quarks? Incidentally, there are many other gauge theories of interest. They're all pretty similar to each other. The group might not be SU3, it might be SU2, it might be SU4, it might be SU10, it might be anything. And the objects of the group might not be called quarks. Nevertheless, the fundamental objects of the group, the analogs of quarks, might be called something else. They might have a different name. But they would be objects which had a single index. The anti-objects would also have a single index but of the complex conjugate kind. And then there are the analogs of the gluons, which are called gauge bosons. Gauge bosons, the Maxwell-like fields, always have one index of the particle type and another index of the anti-particle type. So that means, let's see, if we're talking about SUN, that means how many generators altogether. Well, N times N is N squared, minus one for the trace. So it's N squared minus one distinct gluons.
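The counting at the end can be written out as a quick check (a rough sketch):

\[ N\times N \ \text{matrix entries} = N^2, \quad \text{minus one for the trace condition} \;\Rightarrow\; N^2 - 1 \ \text{gauge bosons}, \]

\[ SU(3):\ 3^2-1 = 8 \ \text{gluons}, \qquad SU(2):\ 2^2-1 = 3, \qquad SU(10):\ 10^2-1 = 99. \]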
Now, what is the connection between the quarks or more generally, the particles which transform like quarks perhaps in a more general way? What's the connection between them and the gauge bosons? And I've explained that to you before, but let me just remind you again. If you have a quark labeled by I, where I could be red, green or blue, and another quark labeled by J, that the I-th quark can become the J-th quark if a gauge boson is emitted. What kind of gauge boson? Well, as I showed you before, the way to think about it is just to think about these lines running through the diagram. I runs right through, so that's I, and J also runs right through, except when J is pointing downward, it should be thought of as the analog of an antiparticle. And so an I would become a J by emitting a quantum of the field A, is it J-I or I-J. No, J-I, doesn't matter very much. By emitting a gluon of the I-J-th type, an I-th particle becomes a J-th particle, and that is one of the basic interactions of gauge theories in general. Fundamental objects, analogous to quarks, gauge bosons, analogous to gluons, and a rule about emission and absorption that you have to follow the lines and never lose an index. But now that raises something new. If a gluon itself has the properties of a quark and an antiquark, so let's say this is the I-J-th kind of gluon. Well, the I-J-th kind of gluon can also emit a gluon. How would it do it? The I-J-th gluon could become the I-K-th gluon. I-J, let's say, red-green gluon could become the red-blue gluon, as long as the indices make sense and they're followed, anti-K here. So this would be an interaction in which a gluon of type I-J became a gluon of type I-K, and a gluon of type K-J. The order of I and K matters. A K-J is not the same as a JK. That introduces something absolutely new which is not in electrodynamics, namely that the gluons themselves are charged particles and therefore exert forces on each other. Exert forces in a way that photons do not, exert forces on each other. The dynamics of gluons is much, much more complicated than the dynamics of photons. The last time what I told you was that one of the, and I'm not going to do the mathematics of this, it's not even fully understood it even now, but the effect, one of the important effects is that if you have a source, a color, a bit of color, with field lines coming out of it, again those field lines are not allowed to end, they're not allowed to end, but the effect, let's suppose that's quark, is an anti-quark over here, field lines have to come into it, so the field lines have to be continuous and so forth. I don't want, let me not draw more than that. Now the effect is that the field lines interact with each other in a way that they don't in electrodynamics. And the effect is very simple. The field lines get pulled into bundles which are called flux tubes, the result of which is that no matter how far apart you pull two quarks, they'll never escape from each other because the energy stored in this gooey piece of glue just increases linearly with the distance between them. All right, that's a summary of quantum chromodynamics, but it's also a summary of gauge theory. It's also a summary, very, very quick summary of gauge theory. Maxwell-like fields, sources, Gauss's law, making sure you have conservation and symmetries, symmetries having to do with the conservation laws, but more than that the symmetries telling you how the particles interact with each other in terms of emission and absorption. 
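The index bookkeeping for the vertices just described can be summarized schematically (a rough sketch in the two-index A notation used above; i, j, k run over the three colors, and the exact index placement is a matter of convention):

\[ q_i \;\rightarrow\; q_j + A_{ij}, \qquad A_{ij} \;\rightarrow\; A_{ik} + A_{kj}, \]

with the rule that every color index has to be followed continuously through the diagram and none of them can simply disappear.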
There is one other thing in a gauge theory, and it's the coupling constant. The coupling constant is the analog of the charge of the electron. The charge of the electron, let's talk about what the charge of the electron means. It is, of course, the coefficient in a Feynman diagram when you emit a photon from a charged particle; the amplitude for that, the quantum mechanical amplitude for that, is the electric charge. But if you wanted to think about it operationally, you can imagine an electric charge slamming into the cathode of a cathode ray tube. When it slams into it, it gets accelerated, and the question is what is the probability that an electron which is stopped suddenly emits a photon? The answer is the square of the electric charge. The square of the electric charge, there's some pi's in it and h-bars and things, but basically the square of the electric charge in suitable units is the probability for emitting a photon. So that's what in quantum mechanics and field theory, that is what the meaning of the electric charge is. The square of the electric charge is a measure of the probability. Why? Because probabilities are always squares of amplitudes. But that's the fundamental significance of the electric charge, and the electric charge is a dimensionless number whose square is just the probability that when you stop an electron you emit a photon. So it's a dimensionless number. It's a small dimensionless number. With appropriate sets of definitions, the square of the electric charge, really what tends to come into things is not quite the square of the electric charge, it's the square of the electric charge divided by some 4 pi's, 4 pi's are always floating around in there. It's called the fine structure constant, and the fine structure constant, which is pretty much the probability of a photon emission, is a number of about 1%. It's close to being 1 over 137, a famous number, 1 over 137. The 137 has no particular significance. It's just numerically about what the fine structure constant is. And let's just call it 1%; it says there's a 1% probability of emitting a photon. So if you had a picture that when an electron hits a cathode ray tube, it sends out a spray of photons, that's not right. What is right is that if 137 electrons hit the cathode ray tube, on the average one of them will give off a photon. In that sense, electromagnetism is a weak force, or a weak process. The probability of emission is weak. All right, that's the quick summary of gauge theory. What I wanted to do is give you very, very briefly a rundown on some numbers and orders of magnitude in various parts of particle and atomic physics. And then go on to the weak interactions. Now the reason I'm giving you a rundown about numbers is to show you how different the weak interactions are. Quantum chromodynamics is the theory of the strong interactions. The strong interactions was the term used, before quantum chromodynamics was invented, for the interactions between subnuclear particles, between hadrons, between protons, neutrons, mesons, and all the things which are made of quarks and gluons. So it's the theory of the strong interactions. And there are electromagnetic interactions and there are also weak interactions. So I want to spend a little bit of time with numbers to show you where the terminology came from, why one is called weak, why one is called strong, why electromagnetism is sort of in the middle between the two of them. But let's just review a couple of numbers.
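In symbols, the number being quoted is roughly (a sketch; the factors of epsilon_0, h-bar and c depend on the unit conventions being glossed over here):

\[ \alpha = \frac{e^2}{4\pi\epsilon_0\,\hbar c} \approx \frac{1}{137} \approx 0.0073, \]

so the probability that a suddenly stopped electron radiates a photon is of order alpha, roughly one time in 137.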
First of all, in terms of units, energy, distance, and time do not have to be thought of as different units. They're all related with each other. First of all, time and distance really should not be distinguished in any sensible set of units that are useful for fundamental physics. We can set the speed of light equal to one, and if we do, if we fix the speed of light equal to one, then time and distance have the same units. And we can use the same units. As far as energy goes, we could also choose to set Planck's constant equal to one. That's a useful thing to do in quantum mechanics. An awful lot of equations get simpler if you set h bar equal to one. Well, they don't get a lot simpler. They just get rid of the h bars, but you can set h bar equal to one. Now, let me just remind you there's a connection between energy and frequency, E equals h times frequency, right, or h nu, or h bar omega, all the same thing. The units of h bar, well, we've set h bar equal to one, so in units in which h bar is equal to one, energy is equal to frequency, but what's the units of frequency? One over time, number of oscillations per second. So that means that energy and time have inverse units to each other. You don't need to have a separate unit for time and energy. Any time you have a unit of energy, it defines also a unit of time. Now, of course, the unit of energy is inverse to the unit of time, so big energies correspond to small time intervals. And, okay, so let's write down some connections. And since energy and time are connected to each other, and time and space have the same units when c is equal to one, then time and space have units which are just inverse units to the units of energy. So, for example, here's a unit of energy, one electron volt. We've used electron volts before. One electron volt cannot be thought of as a unit of distance, but the inverse, one inverse electron volt, so it's one over an electron volt. One inverse electron volt is a unit of distance, and just to have an idea of what it is, it's about 10 to the minus seventh meters. One inverse electron volt is about 10 to the minus seventh meters. So an inverse electron volt is a fairly small distance, but on the scale of fundamental physics, it's not a small distance. 10 to the minus seventh meters is what? An atom is 10 to the minus tenth meters, something like that. Yeah, okay, so it's fairly big. An atomic diameter, yeah, an atomic, and we can convert, right, and an atomic diameter, one atom, this is the atomic diameter, that's about 10 to the minus tenth meters, and so you can convert that to electron volts. Is it, I don't know, whatever it is, an inverse of about 1,000 electron volts, a kilo electron volt, an inverse kilo electron volt is an atomic diameter. Okay, another fact, just converting distance, if we know the distance or size of an atom, then we know the time that it takes light to go across it. All right, so the transit time, there's another quantity, the transit time, the time for light to cross an atomic diameter, what is that? That's about 10 to the minus 18 seconds. I've used the fact that light goes at about 3 times 10 to the eighth meters per second. Order of magnitude, 10 to the minus 18 seconds, these are just some numbers that I just wrote down, but a different time scale. All right, so this is the time scale for light to go across an atom. The different times, and by an atom, I don't mean the nucleus, I mean the whole atom, another time scale is the time scale that it takes for an electron to orbit an atom.
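A quick numerical check of the conversions being quoted (a rough sketch using h-bar c, approximately 197 eV times nanometers):

\[ 1\ \text{eV}^{-1} \;\leftrightarrow\; \frac{\hbar c}{1\ \text{eV}} \approx 197\ \text{nm} \approx 2\times 10^{-7}\ \text{m}, \qquad 10^{-10}\ \text{m} \;\leftrightarrow\; \text{about } 2\ \text{keV inverse}, \]

\[ t_{\text{transit}} \approx \frac{10^{-10}\ \text{m}}{3\times 10^{8}\ \text{m/s}} \approx 3\times 10^{-19}\ \text{s} \;\sim\; 10^{-18}\ \text{s}. \]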
Now you might think, why isn't that just about the transit time? The transit time is about 10 to the minus 18 seconds. The reason is because electromagnetic forces are fairly weak. Because they are fairly weak, the force on, and the ultimate reason that they're weak is because this fine structure constant, this alpha, which is e squared over 4 pi, that's a small number of about 1 over 137. Another way to say it is the electromagnetic force on an electron is weak. Because of that, the acceleration on the electron is not too big, and the orbit is fairly large. The weaker the force on an object, the larger the orbit is going to be. Sorry, the slower the electron will move is what I meant to say. The slower the electron will move, and because the electron moves slowly by comparison with the speed of light, how much slower? At about 1 percent of the speed of light. The electron moves around with only about 1 percent of the speed of light, and that means the orbital time, orbital time, that's about 10 to the minus, let's see, it's bigger. That time is bigger, 10 to the minus 16 seconds. Another quantity sometimes of interest is the decay time for an atom. How long does it take for an atom, an excited atom, let's say a hydrogen atom with its electron one orbital up from the ground state, one orbit up from the ground state, how long does it take to decay? Now, the rate for that is small, doubly small, with two powers of alpha. What are the two powers of alpha? First of all, the acceleration of the electron is small. Remember, charges emit radiation when they're accelerated. The acceleration is small because the force is small. So first of all, the acceleration is small. The electron moves around with a relatively small acceleration compared to what it could have been if alpha were bigger. It moves around with a small acceleration, but on top of that, just like the electron plowing into the cathode ray tube, there's another factor of alpha when the electron is accelerated, in the probability that it emits a photon. So as the electron goes around here, first of all, it's moving with a small acceleration, and second of all, even that acceleration is not very efficient in producing photons just because the fine structure constant is small. The net result is that the time scale, order of magnitude, decay time, how long it takes for an atom to decay. Decay time is about a nanosecond, 10 to the minus 9 seconds, much longer than the orbital time or the transit time. The main thing is there's a variety of different scales in the atom, and they're all related by the fine structure constant. They're related by powers of the fine structure constant, the fine structure constant being a small number. There are fairly large ratios between different time scales, transit, orbital, and decay time. All right, now let's come to hadrons. Hadrons are like atoms, they're atoms made up out of quarks. We could ask very much the same kind of questions. First of all, we could ask what is the hadronic diameter? The hadronic diameter of a typical hadron, it could be a proton, neutron, meson, they're all more or less the same. Here's hadrons. Hadrons, hadrons. The hadronic diameter is a lot smaller than the electron. Sorry, than the atom. It's a lot smaller than the atom. It's not 10 to the minus 10th meters. It's about five orders of magnitude smaller, 10 to the minus 15 meters. That's also more or less the diameter of a nucleus. A nucleus, of course, being a few protons and neutrons, could be bigger.
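A rough sketch of the atomic hierarchy in symbols (a is the atomic size; the powers of alpha are meant only as order-of-magnitude statements):

\[ v \sim \alpha c \approx \frac{c}{137}, \qquad t_{\text{orbit}} \sim \frac{a}{v} \sim \frac{t_{\text{transit}}}{\alpha} \sim 10^{-16}\ \text{s}, \qquad t_{\text{decay}} \sim 10^{-9}\ \text{s}, \]

with each step up the ladder controlled by the smallness of alpha.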
Hadron diameter is about 10 to the minus 15th meters, five orders of magnitude smaller than an atom. There are two reasons why it's smaller. One of them has to do with the fact that the constituents are heavier, and heavier things will sit closer to the center. The other has to do with something else that we'll come to in a minute. But it's about five orders of magnitude smaller. The transit time is just gotten from the size straightforwardly. Transit time, this is not an independent thing. It's just another measure of the size of the thing. Just to get the number straight, it's 10 to the minus 23rd seconds. So that's a typical time scale for, well, it's for light to cross the hadron. But now we can also talk about the orbital motion of quarks. How long does it take for a quark to swing around its wave function or swing around the interior of a proton? And there the answer is about 10 to the minus 23rd seconds. So how long does it take for it to decay? How long does it take for it to decay? And the answer is, now I'm talking about particular kinds of decays, a decay where you hit a proton, it starts oscillating, and then emits a pion. Those decay times are about the same, again about 10 to the minus 23rd seconds. There's nothing like this hierarchy of scales here. What is the conclusion? The conclusion is the analog of the fine structure constant is much bigger, close to 1. For hadrons, in other words, the emission probability, if a quark were to get stopped in a fictitious quarky cathode ray tube, the probability to emit a gluon would be about 1. That's what these things indicate here. The hierarchy here is entirely due to the fine structure constant. The lack of a hierarchy here can only mean that the corresponding quantity in quantum chromodynamics must be close to 1. By now it's been measured in many ways, and it is much closer to 1. It's much larger than the corresponding quantity for electrodynamics. It would seem that special relativity alone says the time couldn't be much less than 10 to the minus 23, so that constant pretty much has to be of order 1. Yeah, I think that's a fair statement. It's about as fast as it could be, in other words. It's about as fast as it could be. Nothing's slowing it down. This is why the strong interactions are called strong. Now, the facts about the numbers here were known before quantum chromodynamics, and it was understood that there was no hierarchy of scales, and that's why it was called strong interactions. We now trace it to the properties of the charge or the fine structure constant. Incidentally, for quantum chromodynamics, the analogous quantity is not really one. It's about a fifth, but much bigger than the 1 over 137 here. All right, so that was just some numbers and some facts about the strong force. It's called alpha QCD. So this is called alpha, and the other one is called alpha QCD, and it's about 0.23 or something like that. Or in the context of a paper, do you write alpha QED? I've never heard the electromagnetic alpha referred to as alpha QED, but I think if you were writing a paper with both of them, you might call the top one alpha QED, yes. I've never seen it referred to that way, right? Well, particularly if you're talking about shielding or something like that. No, I can't, I don't think I've ever seen it. Oh, oh, oh. It's just called alpha.
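The hadronic numbers being compared, in the same rough units (a sketch):

\[ d_{\text{hadron}} \sim 10^{-15}\ \text{m}, \qquad t \sim \frac{d}{c} \approx \frac{10^{-15}\ \text{m}}{3\times 10^{8}\ \text{m/s}} \approx 3\times 10^{-24}\ \text{s} \;\sim\; 10^{-23}\ \text{s}, \]

and the transit, orbital, and strong decay times all sit at this same scale, which is what it means numerically for the strong coupling to be of order 1 rather than of order 1/137.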
Sometimes the unshielded one is called the bare alpha. The shielded one is called the renormalized alpha, right? Okay. Well, that brings us to the weak interactions. The weak interactions are called weak primarily because the decay times associated with them are very long. They're much longer than the decay times associated with hadrons, and they even tend to be longer than the decay times associated with atoms. So there are processes in nature, decay processes in nature, in which particles, elementary or otherwise, decay by weak interactions where the time scale, in other words, the half-life of these decays, is far longer than any scale that can be accounted for by numbers like this. So as an example, here's some examples. Well, the lifetime of the neutron. The neutron is a hadron. The decay is neutron goes to electron, proton, plus anti-neutrino. And the lifetime for that is about 12 minutes. That's absurdly long on particle physics scales. Part of the reason is easy, but it hardly accounts for the extraordinary stability of the neutron. Part of the reason is the neutron is only a little bit heavier than the combined masses of the electron, the proton, and the neutrino. Let's just see. Somebody have out their pocket? The mass of the neutron is 940 MeV. About 940. So the proton is about 939. I think the actual difference between them is about one and a half MeV. But then there's the electron which has a mass of about half an MeV. All together, out of this 940, almost a thousand MeV, once you take the difference with the mass of the proton and the electron, the amount of available mass left over is tiny. It's 0.1% or less. If the neutron was slightly lighter than the sum of these masses, it could not decay at all. Just energy conservation would not allow it. Remember, mass is energy. It would not have enough energy to decay and still leave over some energy for kinetic energy of these particles. So if the neutron was exactly the same as the sum of the energy of the electron, the proton, and the neutrino, the sum of the mass of them, it wouldn't be able to decay at all. If you gave it a mass a tiny, tiny, tiny, tiny bit above that, it would be able to decay, but it would take a very long time. The decay would be very slow. So part of the slowness or the long lifetime of the neutron is attributable to the fact that there's very little energy available for the decay to take place. But given that, that's hardly the bulk of the story. Twelve minutes is so absurdly long that there's something much, much more than that going on. Another example, let's see, where do I have it? I guess I didn't write it down. Another example of a, this is called a weak decay. For obvious reasons, it's called a weak decay. Another example is a charged pion, let's say a pi minus. A charged pion decays to an electron, the electron has negative charge, and an anti-neutrino. Yes, and an anti-neutrino. That's another possible decay. A charged pion goes to an electron and an anti-neutrino. A positively charged pion could become a positron and a neutrino. The lifetime for that, I believe, is about 10 nanoseconds. Longer than these atomic decay times. And atomic decay times are very, very long by comparison with particle physics decay times. Now, there is a decay of the charged pions to these particles. There's another decay involving muons. We're going to come to muons soon enough; they're similar to electrons. But the point is not the particles that they decay into, but two things. First of all, they also decay very slowly, nanoseconds.
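The energy budget being described, using modern values (a rough check with more decimal places than the round numbers quoted in the lecture):

\[ m_n - m_p - m_e \approx 939.57 - 938.27 - 0.51 \approx 0.78\ \text{MeV}, \]

which is less than 0.1% of the neutron's roughly 940 MeV of mass, so very little energy is left over as kinetic energy for the products of the decay n goes to p plus e minus plus anti-neutrino.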
But there's plenty of energy available for these decays. These decays are not suppressed by the fact that the masses of the initial and final states are very close to each other. So there's something else going on in weak interactions. Whatever weak interactions are, there's something that makes them proceed very, very slowly and weakly. So that's what we come to next: weak interactions. The primary or oldest example of which is the decay of the neutron. The beta decay of the neutron, called beta because electrons were originally called beta rays. And this is called the beta decay of the pion. In fact, it's quite similar. Quite similar physics goes into both of them. So to understand what's going on, we need to understand about some more particles. Particles that we haven't come to yet, the leptons. Let's go back to the quarks for a moment and let me just make a table of the different kinds of quarks. Horizontally, or make a sequence here which will be red, blue and green. These are the colors of quarks. Then along here, let me list the up quark, the down quark, the charmed quark, the strange quark, the top quark, and the bottom quark. Each one of these boxes is filled. There are red, blue, and green up quarks. Let's put an x there. There's an x here. There's an x here. There's an x here. X here, x here, and so forth. The boxes represent all the possible quarks that exist. Now, quantum chromodynamics has to do with symmetries which connect red to green to blue. It has to do with these unitary transformations which mix up the colors of quarks. They do not, those symmetries do not mix horizontally. An SU3 symmetry, a color symmetry, may rotate an up red quark into an up blue quark or an up green quark. And the nature of the symmetry is to act vertically, to mix up things this way. And at the same time, it'll mix the different down quarks, the different colors of down quarks. The same symmetry will mix the different colors of charmed quarks and strange quarks and so forth. So the color symmetry acts vertically. It mixes up the different rows here. Is that the unitary matrix? Yes, the unitary matrices mix up red, green, and blue. I might have done better to put the rows where the columns are and the columns where the rows are, but no, it's okay. It's good. No, no, it's good. It's good. Right. The weak interactions are associated with symmetries which act horizontally in this picture. They mix up with down. They mix, and at the same time that they mix up with down, they mix charmed with strange. And they mix top with bottom. Let me just remind you that an up quark has the same properties apart from its mass. Same properties as a charmed quark or a top quark. The down quarks are similar to strange quarks and bottom quarks. Ups have charge two thirds. Downs have charge minus one third. Same here and here. So these are symmetries which mix, if you like, upness with downness. At the same time, charmness with strangeness and topness with bottomness. It mixes these up. The symmetries, those symmetries, what would you care to speculate on what group might be involved? SU2, because it acts on things, on doublets. We simply have three doublets here. They don't take up to charm or up to top. They simply act on up and down horizontally among pairs of things. The group is SU2, and it is a gauge symmetry. In other words, it also comes together with forces, with gauge bosons, with interactions, and with gauge fields. We're going to get to those soon enough.
But we've left out something from this table here. It's kind of as if there was a fourth color. Now the reason, the fourth row here is not usually identified as a color. And the reason is because the particles do not interact with the gluons. But nevertheless, it is another row here, another row in which there are particles which are filled in here. And in a certain sense, they are also doublets, these particles are also doublets, and also get mixed up under this SU2 symmetry which moves things horizontally. What are they? What are those particles? They're the leptons. So in a sense, the fourth color could be thought of as lepton number. But what are the leptons? The leptons are, first of all, in the first column here, there are analogous to the up, or analogous to the down, there is the electron. The electron is a lepton. But its partner, its partner is the neutrino. But there are different neutrinos. There's not only one neutrino. This is called the electron neutrino. What comes in the next column? The muon. The muon, in every respect except for mass, is the same as an electron. Just as in every respect except for mass, the charmed quark is like an up quark, or the strange quark is like a down quark. And together with it, there's its own neutrino, the muon neutrino. And finally, the last one is called the tau and nu sub tau. All of the charged leptons, incidentally, neutrinos, of course, as you might guess from their name, are electrically neutral. The electrons, muons, and tau's have charged minus one. They are put under the down column here, not because their charge is the same as the down charge, but because the difference between the charges of the two is the same as the difference up here. What's the difference between the charge of an up and a down? Two-thirds minus minus one-third. So the difference between this column and this column is plus one unit of charge. The difference between this column and this column is also one unit of charge. One, sorry, zero minus negative one. So as you go from here to here, you decrease charge by one unit as you go horizontally. Now, should we take a break for five minutes? Let's take a break for five minutes. And then I will tell you about the W bosons. The W bosons are the gauge bosons of the weak interactions. All right, so let's try now to see if we can make a microscopic gauge theory of the weak interactions. As I said, the weak interactions are based on a gauge theory which mixes up to down, neutrino to electron and so forth. And so rather than to get into mathematics for the moment, the mathematics of the group SU2, which we'll come back to, let's think a little more simply. We've seen that in various situations the gauge bosons typically have the quantum numbers or the quantum properties of a particle and an antiparticle. In the case of the quantum-chromodynamic interactions, it is a quark and an antiquark. That's why we had nine gauge bosons and so forth or eight removing the trace. So we could start by saying let's look at the possibilities of gauge bosons which have the properties of a particle and an antiparticle. Let's not distinguish between whether we're talking about quarks, red quarks, blue quarks, green quarks or leptons. Let's just say a particle from this column with an antiparticle of this column. And for simplicity, let's just focus on the leptons. You'll see in a moment that you'll get the same thing if you focused on the quarks. 
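The doublet structure being described can be laid out schematically (a rough sketch; electric charges are in units of the proton charge):

\[ \begin{pmatrix} u \\ d \end{pmatrix}, \ \begin{pmatrix} c \\ s \end{pmatrix}, \ \begin{pmatrix} t \\ b \end{pmatrix} \ \text{with charges} \ \begin{pmatrix} +2/3 \\ -1/3 \end{pmatrix}, \qquad \begin{pmatrix} \nu_e \\ e \end{pmatrix}, \ \begin{pmatrix} \nu_\mu \\ \mu \end{pmatrix}, \ \begin{pmatrix} \nu_\tau \\ \tau \end{pmatrix} \ \text{with charges} \ \begin{pmatrix} 0 \\ -1 \end{pmatrix}, \]

and in every doublet the upper member's charge minus the lower member's charge is plus one, which is what lets the same W bosons act on all of them.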
But let's focus on the leptons and talk about objects that we could make with a lepton and an antilepton. First of all, we could have some simple things, electron with a positron. Now I don't really literally mean electrons and positrons. We could have electrons with positrons, but an electron with a positron has no charge at all. So let's drop that for a minute. We'll come back to it. There are things which have properties of electrons and antielectrons, neutrinos and antineutrinos. But more interesting for the moment is the combination of electron, antineutrino. So what is the charge of an object which has the same conserved quantities, the same properties as an electron with an antineutrino stuck on top of it? Well, let's just focus on its charge, its electric charge. Its electric charge is minus one. The neutrino is neutral. The electron has negative charge. So whatever this object is, it has negative electric charge. Let's give it a name and let's call it a gauge boson. Let's assume that it is also a gauge boson. Gauge boson means that it behaves like the Maxwell field, the same sort of thing. But it has negative charge. Let's call it W. W, that's the traditional name for it. W minus. Why minus? Because it has negative electric charge. We could, that was this one with this one, we could have taken down quark with, I guess, anti-up quark. What happens if you take down quark with anti-up quark? How much charge do you get? Again, minus. Same as this. Down quark with anti-up quark would also have minus charge. So if you like, you could also imagine that the W minus has the same properties as a down quark and an anti-up quark. In other words, it's one from the second column and anti-one from the first column. That's the property that it has. Let's call it W for weak, in fact. I think W was originally the notation for weak. The negatively charged W boson. And of course, there's also a positively charged W boson. Which is like anti-down with an up or like positron E plus with a neutrino. E plus with a neutrino would be a W plus. Let's suppose the pattern is pretty similar to quantum chromodynamics, that the W bosons are like either photons on the one hand or like gluons. Which means that they would be objects which could be emitted from the leptons. They would be objects which would allow a lepton of one kind, let me not label it, to become a lepton of another kind by emitting a W boson. Let's see if we can find some examples. An electron can emit a W minus to become a neutrino. An electron becomes a neutrino by emission of a W minus. You see the action of the gauge bosons is to do the same kind of transition that the symmetry is ultimately going to be imagined to do, to mix electron into neutrino, in very much the same way that the colors got mixed by emitting gluons. You could follow the lines if you like and say the electron and the anti-neutrino over here can be imagined to be a W minus boson. Same pattern. Let's just draw it by saying a W minus is emitted. But that's not all you can do with a W minus. You can also have a down quark go to an up quark with a W minus. That also works. In fact, the W minus is an object which can be emitted in a transition from, let's see, from the right column to the left column. A red down quark can become a red up quark by emitting a W. A blue down quark can become a blue up quark by emitting a W. Likewise for green. But you can go further. A strange quark can become a charmed quark. That's another one.
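The charge bookkeeping for the W minus, in the two ways it is being built here (a rough sketch):

\[ Q(e^-) + Q(\bar\nu_e) = -1 + 0 = -1, \qquad Q(d) + Q(\bar u) = -\tfrac{1}{3} - \tfrac{2}{3} = -1, \]

so either combination has the quantum numbers of a W minus, and the W plus is just its antiparticle, with charge plus one.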
Strange quark, charmed quark, again, W minus. So this new symmetry, this idea of a new symmetry which acts horizontally and only horizontally between neighboring pairs here, introduces the possibility of gauge bosons which cause transitions between electron and neutrino, down quark and up quark, strange quark and charmed quark. Now a W plus and a W minus are antiparticles of each other. The meaning of that is that you can flip these diagrams, put the W minus down here, but when you flip it, it becomes a W plus. So a W plus can be absorbed by an electron, plus a unit of charge, minus a unit of charge, to become a neutrino. So a whole family of different processes are codified by this one vertex here. We can turn them upside down, neutrino goes to... you know what we're allowed to do. We've done these things before. And that's the basic new element of the weak interactions. Actually, that's all, yeah. Well, it's not clear to me why you have W minuses in all three of these cases. That's charge conservation. It couldn't be a W plus. Well, except that I don't know what a W means. W is the name of a particle. It's the name of another gauge boson. Yeah, but charge conservation tells me it's going to be a minus. But I haven't seen any reason why it should be the same particle. Well, okay, that's right. But what it comes down to in the end is an empirical fact. It's the empirical fact that the symmetry of the weak interactions acts simultaneously on all of these doublets here. When an up goes to a down, a charm has to go to a strange, a top has to go to a bottom, and the leptons have to mix with each other simultaneously. The implication of that, from a practical point of view, is simply that it's the same symmetry which acts, it's the same gauge bosons which are there. You could have imagined a theory, it's quite mathematically consistent to imagine a theory, where different W-type bosons would have been emitted by up quarks, charm quarks, or top quarks, or even leptons. Right? It's a fact of nature, the fact that nature is simpler than it might have been, that the same gauge bosons cause transitions between any of these pairs; you know what I mean, I'm going to get tongue-tied if I try to say it. Okay, well first of all, what processes can we explain? Good question. Can a W decay into an electron and an anti-neutrino as well? Say it again. Can you flip the arrows? Yes, yes, yes, yes. You can flip arrows and you can flip lines. Yes, yes, certainly you can flip the arrows. Yeah, absolutely. Absolutely. And you can also flip lines up and down in various ways. For example, another way you could flip the line is you could take this up quark here: instead of an up quark going out, an anti-up quark comes in. Whenever you flip a line from past to future, or from future to past, you also change from particle to antiparticle. I'm not sure, but on the right-most side, the W is a strange and an anti-charm. Yes. So that's the same as a down and an anti-up? Yes. The same thing? Wait a second. Sorry. Yes. That's a strange and an anti-charm, right? Yes. And that's the same as a down and an anti-up. It's not literally a strange and an anti-charm; it's just a rule that tells you that whatever these W bosons are, they're associated with transitions from one column to the other. You turned the strange into a charm. Did I? Sorry. Strange, right: electron to neutrino, strange to charm. Is that wrong? No. No, it's not an anti-charm. That's the point. Who said it was an anti-charm?
Well, it shouldn't have been. It's different from coming in from the past. Right. If you follow the lines like that, at the point where the line turns around, it has to be thought of as an antiparticle. Right. Okay. Or if you like, you can always think of it as strange anti-charm coming in and making a W boson. Okay. Just keep in mind, whenever you flip from the past to the future, if you want to know what a W boson will decay to, let's read it downward, it will decay to a strange and an anti-charm. And that really can happen. Yeah. So can you create W boson from like electron and a... Anti-neutrino. Anti-neutrino. Yes. And then call the W boson to change into like a strange and a charm. Exactly. That's where some of these processes are going to come from now. So let's think about the various processes. Let's start just for fun. Let's start with the decay of the pi minus. What is a pi minus? A pi minus is a quark and an anti-quark with altogether negative charge. So let's say a quark and an anti-quark with negative charge had better be... Here's the pi minus. That's equal to a down quark and an anti-up quark. Down and anti-up. Now, what can a down and an anti-up do? Let's see if we can find down and anti-up here. Here it is right here. Down and anti-up can become a W minus. So it's a possible transition that is allowed by the physics of W bosons. So here's a W minus. But now a W minus can do something else. A W minus here is... Let's say here is a W plus. Let's put W plus upstairs here. This is W minus now. And let's put the neutrino downstairs, anti-neutrino. And read this as saying that W minus goes to what? Electron. Anti-neutrino. So W minus can go up to electron, anti-neutrino. And here it is. E minus, ordinary electron. So that's the basic underlying process that governs the beta decay of the negatively charged pion into a electron and an anti-neutrino. Now, which anti-neutrino is it? There are three distinct anti-neutrinos. There's an electron anti-neutrino over here. So it's the electron anti-neutrino. But it can also... I didn't draw all the various possibilities, but any place you saw an electron, you can put a muon. Any place... And as long as you carry along the muon neutrino. So another process is muon. Anti-muon neutrino. And just for clarity, this is not strictly a Feynman diagram, right? This is just showing an interaction. It's pretty close to a Feynman diagram. Except for the time direction. Oh, yeah. Right. I don't know why. All of a sudden I change directions. Yeah. For some reason. Yeah, right. Time is now going horizontally. I've become a computer scientist and they measure... Yeah, yeah, they do. Right. Yeah, it's pretty much... Yeah, that's good. We can turn it up. I can't do that easily, but yeah. Yeah, that is essentially a Feynman diagram describing the decay of the pi minus to the muon. As it happens, the... For technical reasons having to do with some complications and Feynman diagrams that are not interesting to us. As it happens, the primary decay... Well, primary, I mean the most probable. The most probable decay of the pi minus is to the muon anti-neutrino, not to the electron anti-neutrino. It's a technical point of no special interest to us. It just happens to be that way. Wouldn't it be consistent instead of considering the Feynman diagrams with a time deal is more like a kinopaly equilibrium deal? Which way you're going? Well, both things can happen. Both things can happen. We're going to have a real equilibrium. We're in empty space. 
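The pion decay chain just described, written as one line; the relative probability of the two channels is the "technical point" the lecture sets aside:

```latex
\[
\pi^- = (d\,\bar u) \;\to\; W^- \;\to\;
\begin{cases}
\mu^- + \bar\nu_\mu & \text{most probable}\\[2pt]
e^- + \bar\nu_e & \text{much less probable}
\end{cases}
\]
```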
We're in empty space, except somebody made a pi minus. A pi minus was created how? A pi minus might be created by some nucleons coming together, creating it by strong interactions. By strong interactions, a pi minus might be created. Now that pi minus is just going along and saying, wait a minute, I can decay if I like into a neutrino and a muon. Then of course, the neutrino and the muon have enough kinetic energy to fly off, and they won't come back together again because they've departed. They've gone away. Okay, on the other hand, it is true that if you were to take a muon and an anti-neutrino and fire them toward each other, there would be a certain probability that they would make a pi minus. So yes, you can read it both ways. Let's go. A quick question. In one direction you have an SU3 and in the other an SU2. Is it possible to combine them into an SU6 or something? SU5. Well, how about SU5? Not two times three, two plus three. SU5. Tonight, we want to get to SU2. I'll be happy if we get the basic ideas of SU2. Now, keep in mind, we're doing something a little bit funny, because SU2 has how many generators? Three. So how many gauge bosons should it have? Three. So far, we only have W plus and W minus. So there had better be another one lurking around. And that other one must be associated somehow with something like electron, anti-electron and neutrino, anti-neutrino, but we'll come to it. Just like the... yeah, we'll come to it. There is a third gauge boson lurking around. And it does play a big role in the weak interactions, but it didn't play a big role historically in the weak interactions, just for reasons we'll come to. Historically, it was the W boson which was the first to be conjectured. Speaking of historically, was the weak or the strong force sort of suspected first? Of course. All right. So let's talk about the history. The history of these forces goes back to Becquerel, goes back to radioactivity. Alpha, beta and gamma decay. Alpha decay was what? Emission of two protons and two neutrons from a nucleus. That's a strong interaction. It's part of the theory of hadrons and nuclei. Beta decay, that was this one right here, the weak interactions. And gamma decay? Photon. Historically, they happened at the same time in the same experiment. Question? Energy conservation, how does it work there? You're assuming some kinetic energy in the W? Yeah, the energy, let's work it out. You never worry about energy conservation in intermediate states like this. If the energy of the intermediate state doesn't match the energies of the initial and final state, it just means that the intermediate state can only last for a very short time. There's an energy-time uncertainty principle. You can violate energy conservation in a Feynman diagram for short periods of time. You don't have to worry about conservation of energy between here and here. The only energy we have to worry about is the initial energy and the final energy. Okay, so let's worry about it. A pion has an energy of about its mass. Let's suppose the pion is at rest. The mass of the pion is roughly about 140 MeV. The energy of the muon and the neutrino, well, first of all, it's the rest energy of the muon and the neutrino. And the rest mass of the muon, I think, is about 100 MeV, I forget exactly, and the mass of a neutrino is for practical purposes zero. It's too small to be important to us.
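A minimal numerical sketch of the energy bookkeeping just described, assuming standard two-body decay kinematics and approximate published masses (about 139.6 MeV for the charged pion and 105.7 MeV for the muon); the specific numbers and function names below are my own inputs, not quoted in the lecture.

```python
import math

# Approximate masses in MeV, with c = 1 (my inputs, not from the lecture).
M_PI = 139.6   # charged pion
M_MU = 105.7   # muon
M_NU = 0.0     # neutrino mass, negligible here

def two_body_decay(M, m1, m2):
    """Energies and common momentum for M -> m1 + m2 with M at rest."""
    p = math.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2 * M)
    return math.sqrt(p**2 + m1**2), math.sqrt(p**2 + m2**2), p

E_mu, E_nu, p = two_body_decay(M_PI, M_MU, M_NU)
print(f"muon energy     = {E_mu:6.1f} MeV (kinetic part {E_mu - M_MU:.1f} MeV)")
print(f"neutrino energy = {E_nu:6.1f} MeV")
print(f"back-to-back momentum = {p:.1f} MeV")
print(f"total energy    = {E_mu + E_nu:6.1f} MeV  vs  pion mass {M_PI} MeV")
```

Running it, the muon and neutrino share roughly 34 MeV of kinetic energy and come out with about 30 MeV of momentum each, in opposite directions, which is the balance being described here.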
So there's more energy to start with than the energy in the masses of the two final particles. Does that mean it can't happen? No, it just means the two particles will go off and carry some kinetic energy. So the difference between the energy on this side and the sum of the masses on this side is just the kinetic energy of the outgoing particles. That's the energy balance. The momentum also has to be conserved, and that means if the pion was at rest to begin with, the momentum of the muon and the momentum of the neutrino have to be equal and opposite, and they'll go out back to back, so to speak. Can you analyze the neutron decay in the same way? We're going to do that now. Yeah, let's do the neutron decay right now, see how the neutron works. Oh, let's do something simpler first. Let's do the muon decay. The muon is also something that can decay. So let's take the muon. Here's the muon moving along. Now what can the muon do? The muon can become a muon neutrino and emit a W. What kind of W? W minus, right? It can become a W minus and a neutrino. But now one of the things that a W minus can do is it can become an electron and an anti-neutrino. So this W minus can now become an electron and an anti-neutrino of the electron variety. Can this happen? Yes, it can happen. Not only can it happen, this is the way the muon decays. The muon decays because it is sufficiently much heavier than the electron, about 200 times heavier than the electron. The neutrinos are practically massless. So the energy balance is you have about 100 MeV here, half an MeV here, and the rest goes off in kinetic energy. So that's an example of what's called a purely leptonic process. No quarks in it at all. Is it an anti-neutrino? Is it what? An anti-neutrino. Yeah, it is. So it's one neutrino, one anti-neutrino, and an electron. Okay, so that happens. And that's the primary decay; it's the only decay of the muon. So the muon is an unstable particle because of this. But the reason the muon is unstable and not the electron is just that the muon happens to be heavy enough to be able to decay to the electron. The electron is not heavy enough to decay to the muon. So in a sense it's an accident of the masses. The same is true of the tau, incidentally. The tau can also decay to other leptons. But all right, that's the mu decay. And notice the components are the same. The basic components are the same. A transition from one column to another, followed by a W, and then the W makes another transition from one column to the other, or if you like, a decay from one column to the other. All right, what about the decay of the neutron? Decay of the neutron. Would it be consistent to say the purpose of a linear accelerator, per se, is to effectively increase the mass of the electron so that it can now go the other direction? Absolutely. Well, you don't change its mass, you just put more energy into it. But absolutely, that's exactly right. Okay, we're going to do neutron decay. So the neutron is, maybe I'd better switch to vertical, vertical notation, back to vertical. The neutron is three quarks: two down quarks and an up quark. Right? Minus one-third, minus one-third, plus two-thirds: zero. Yes, that's a neutron, electrically neutral, neutron coming in. Okay, now what is possible? First of all, we can have the up quark become a down quark and emit some sort of W, but that'll give us three down quarks. Three down quarks is a fine thing.
There's nothing wrong with three down quarks, except that the mass of three down quarks, of a particle with three down quarks, is always too heavy for the neutron to decay. So that's not what happens. What about one of the down quarks? The down quark can emit a W, become an up quark. This is again a W minus. The down quark emits a W minus, becomes an up quark, and now we have two up quarks and a down quark, up, up, down, what is that called? That's a proton. And the W does what the W always does. It decays to an electron and an anti-neutrino. So that's the beta decay of the neutron. Could it decay into a muon and a neutrino? Not enough energy. The muon is too heavy so that there wouldn't be enough energy left over for the muon. But other than that, if it were not for that, it could happen. Another way to say it is if you could jam some more energy into the muon, you could add extra energy into the neutron. How can you do it? You hit it, you give it a shove. You give the neutron a whack, you excite it, and if you excite it enough, it can decay to a proton, a neutrino, anti-neutrino, and a muon. But without somehow providing that extra energy, the neutron will only decay to the proton, electron, and the anti-neutrino. Okay, now let's come, let's see, where are we? I think we'll finish for tonight. But remember, we have a puzzle, and the puzzle is why is it that the weak interactions give such slow decay rates? And I can think of one reason, namely, the coupling constant or the analog of the fine structure constant could be ridiculously small so that the probability of emitting these W bosons would be very, very small. That is not the reason. The fine structure constant for the weak interactions is about the same as for electromagnetism. That is not the reason. The reason is something else, and we'll come to the something else next time. When you say that there are these symmetries, you mean that the Lagrangian is invariant? Lagrangian, the Lagrangian is invariant under those operations. That's exactly what it means. I didn't want to start writing bunches of Lagrangians, so I've used the shorthand of saying there's a symmetry, right? But that's what it means, yeah. For more, please visit us at stanford.edu.
(February 8, 2010) Professor Leonard Susskind discusses gauge theories.
10.5446/15080 (DOI)
Stanford University. You think I don't know, don't you? No, as a matter of fact, I do know, and that's what we're going to talk about tonight, and it's called the phenomena. What is the? Higgs. Higgs. I think you started to cover this last week, and I didn't really... You described what you were talking about as giving mass to the photon. Yes, that's what we're going to talk about tonight. We don't actually give mass to the photon, so what's different? No, no, no. That's right. That's right. The real photon, the symmetry group associated with it, is not spontaneously broken. Tonight we're going to talk about how if the symmetry group associated with the photon were broken spontaneously, how it would give the photon a mass. Now, the reason we're doing that is because exactly that phenomena happens with the Z boson and the W bosons. The photon is a little bit simpler. Indeed, it might have had a mass. The fact that it doesn't have a mass is perhaps an accident of nature. The absence of an appropriate Higgs field. We're going to talk about what the Higgs field is to give it mass. So that's what we want to talk about tonight. How gauge bosons, like the photon, get a mass when spontaneous symmetry breaking. What is the symmetry, incidentally, when I say associated with the photon? There's a conserved quantity, which is the charge. Conserved quantities always go with symmetries. What's the symmetry that's connected with the conservation of electric charge? You won. It's the thing which multiplies the charged fields by a phase. It is a kind of rotation in the complex plane, the complex field. So I'm going to explain to you tonight how spontaneous symmetry breaking of that symmetry induces a mass for the photon. Now, there is a situation in nature where the photon does get a mass and the symmetry of you won symmetry is spontaneously broken. Does anybody know what it is? It is condensed matter, yes. Okay, the answer is in a superconductor. In a superconductor, photons propagate with a mass. Now, that's in the superconductor. When they leave the superconductor, they're photons again. When they're in the superconductor, they behave as if they had a mass. So that's an example of the Higgs phenomena. And we can talk about it. That also be the case when the photon is traveling through something other than a vacuum? Well, superconductivity only happens when the photon is traveling through something other than the vacuum. No, no, superconductivity or the photon, no, no, no, the photon does not propagate with a mass in a prism. Its velocity is slower than the speed of light, but it's still true that at zero momentum, meaning infinite wavelength, its frequency is zero. Okay, let's talk about the difference. That's an interesting difference. Might as well talk about it. The relevant issue is the shape of a certain curve. The curve is either energy versus momentum or for waves, what corresponds to momentum for waves, either inverse wavelength or wave number. K, everybody remember K? K is basically the inverse of the wavelength and vertically what do we have instead of energy? No, K is like P. Vertical, it was energy. Frequency, frequency, omega. Remember, E equals h bar omega and P equals h bar K. So K is wave number, the number of waves that fit into a unit distance. So these two planes are the same except for a factor of h bar. Alright, what does omega versus K look like for an electromagnetic wave in empty space? Well, what's the connection between omega and K? Omega equals C times K. That's it. 
Omega equals C times K. Now, the same relationship incidentally is true of sound waves. Waves on a long string and so forth, approximately the same relationship. The difference of course is they have different velocities of propagation. But notice that omega is zero when K is zero. That means on this side over here that the energy of a quantum or wave, the energy is zero when the momentum is zero. What do we call a particle when its energy is zero, when the energy is zero if it's mass is zero? We call it, well we call it photons, yes, but we call it massless. Alright, so any wave which has the property that omega goes smoothly to zero as K goes to zero, we call massless. And typically the shape of the curve looks like that. It's going to be positive and negative corresponding to a wave going to the left or the right. Let's forget going to the left. The frequency goes to zero when the wavelength becomes infinite. That's massless. And all that happens in a prism is that C changes. C changes. Now what about a particle with mass? For a particle of mass, its energy is equal to the square root of P squared plus M squared. Now there's some C's there. I think there's a C to the fourth, no. C squared, C squared, blah, blah, blah. C squared, P squared plus M squared, C to the fourth. But if we set C equal to one, it's just squared of P squared plus M squared. What does that look like? That's a curve which looks like this. It's a hyperbola. We square this, we get E squared, let's set C squared to one, E squared minus P squared equals M squared. That's a hyperbola. You see the difference. In this case, when P goes to zero, the energy is not zero. That's the rest mass, or just the mass, if you like. The rest mass is the value of the energy when the momentum goes to zero. There are two different situations. One, the velocity of light may be different in a prism than it is in empty space, but that just changes the slope of this curve and does not change the fact that the energy is zero at the origin here. The other situation is that the energy is not zero at the origin and has this hyperbolic shape. That's called having a mass. Notice that there's a frequency, same thing here, that a wave with infinite wavelength has a finite frequency. What does infinite wavelength mean? It means you disturb the field every place the same. You rigidly make the field absolutely homogeneous. Homogeneous means it's everywhere is the same. You shift the field everywhere and you let it go. What happens is everywhere as it starts to oscillate simultaneously, that's the phenomena over here. Infinite wavelength, zero wave vector, the whole field oscillates. Also corresponds to particles at rest oscillating because they have mass. That's what the important thing to keep in mind is that oscillations of a field when it's homogeneous, when it has infinite wavelength, those are the things we call mass. I was answering the question about the... Is it the wave number that changes when it's moving through a solid? You can have any wave number. If you took a ray and run it into a piece of glass, it's not the omega that changes. That's right. If you have a wave of a given frequency and it shines on a piece of glass, the frequency doesn't change when it goes into the material. But as I think you're foreseeing, two different materials might have a different slope here. So at the same value of frequency, you could jump from one wave number to another. And that's what happens. 
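The two dispersion relations being contrasted here, written out with the factors of c and ħ kept; this is just the standard relativistic relation restated:

```latex
\[
\text{massless:}\quad \omega = c\,k,
\qquad\qquad
\text{massive:}\quad \hbar\omega = \sqrt{\hbar^2 c^2 k^2 + m^2 c^4}
\;\Longleftrightarrow\;
E = \sqrt{p^2 c^2 + m^2 c^4},
\]
\[
m c^2 \;=\; E\big|_{p=0} \;=\; \hbar\,\omega(k=0) .
\]
```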
At this situation where the entire field shifts, would that be equivalent to the concept of entanglement? No, no, no. This is a perfectly classical wave field. I understand that. But what I'm saying is entanglement says that this particle over here and this particle over here respond instantaneously, and by the entire field shifting, would that account for... No, no, no, no. Entanglement is a situation where you have two particles in two different states which are entangled with each other. A over here, B over here, or A over here, B over here. There have to be two different states. Spin up and spin down, spin down and spin up. A classical field is when all of the particles are in the same state. So it's quite different. So on the curve that you drew, that looks more like the end of that one, yeah. What does that represent? This is called... The relationship between omega and K, between frequency and wave number, is called the dispersion relation. You say there's no dispersion when omega is a linear function of K. When omega is a linear... incidentally, the slope of this curve is the velocity. D omega by d K is... yeah, d omega by d K is the velocity of the wave. So if the slope of this is constant everywhere, then all waves move with the same velocity. That's light. All waves move with the same velocity. If the slope varies from place to place, for example, as it would if there was a mass, then different wavelengths propagate with different velocities. In that case, the wave will disperse. So it's called the dispersion relation. Let's name it now. Omega as a function of K, that relationship is called the dispersion relation of the wave. So the words go that a massless excitation is one with a linear dispersion relation. A massive particle is one with a dispersion relation that does not go to zero as K goes to zero, but has a curvature and shape like that. And the mass is nothing but the energy of the quantum when the momentum is zero. So that's M. The energy of a quantum as a function of the momentum, when the momentum goes to zero, is the mass. So that's the notion of mass. And it is connected with the curvature of the potential energy of the field at the... the curvature of the potential energy at the equilibrium point, wherever the equilibrium point happens to be. Why? Because if you shift the field away from the equilibrium point, let's say everywhere, homogeneously, everywhere the same, what happens is it starts to oscillate about there. It's pulled toward lower energy, it overshoots, goes up and goes back, and that corresponds to omega not being zero at K equals zero. K equals zero means you shift the field everywhere simultaneously, homogeneously. So we take the field and we shift it everywhere simultaneously, and the frequency at that point is the mass. What if the frequency is zero? The only way the frequency will be zero is if the potential energy is flat as a function of the field. If the potential energy happens to be flat and you shift the field away from the minimum here, it's not a minimum anymore, it's just a flat direction; it's called a flat direction. Of all things, it's called a flat direction. If you shift the field along a flat direction, it won't respond, it'll just sit there. So that corresponds to the homogeneous field, K equals zero, having no energy, no frequency, frequency zero. So you spot a massless object when there's a direction, a flat direction, of the potential. Here the flat direction of the potential is going around the rim.
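The statement that mass is the curvature of the potential at the equilibrium point can be written compactly. This assumes a single scalar field with a standard kinetic term and units with ħ = c = 1, an assumption the lecture has not spelled out at this stage:

```latex
\[
\omega(k)^2 = k^2 + V''(\phi_0)
\quad\Longrightarrow\quad
m^2 = \omega(0)^2 = V''(\phi_0),
\qquad
\text{flat direction: } V''(\phi_0) = 0 \;\Rightarrow\; m = 0 .
\]
```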
On the other hand, if you excite the field perpendicular to that, it oscillates. And so this kind of field has two distinct kinds of quanta associated with it, one massless, one massive. What's the massive one called? In the context of the standard model incidentally. Massive. The Higgs boson. So oscillating this way, we would call the Higgs boson and it has a mass. What about oscillating this way? That's the Goldstone boson. It's massless, but there is no massless Goldstone boson in nature. So something happened to it. In the parlance of high energy physics, the Goldstone boson got eaten by the Gage boson resulting in giving the Higgs boson a mass. So that's the phenomena that we want to do mathematically tonight. The Goldstone boson eaten by the Gage boson resulting in a mass for the Higgs boson. Remember the Goldstone boson is the one associated with a motion around here. The Higgs boson is the one associated with this over here. Okay, so let's go through it. There's mathematics to this. If I could do it without mathematics, I would, but I can't. So we're going to do it with mathematics. And we've already set up the ingredients. We've already set up the basic ingredients for it. What we have to do is combine them together. So that means in a standard model, there's only one spontaneous symmetry ranking, and that's the Higgs field. As far as it's known. It's not a simple U1 symmetry. It's this SU2 symmetry of the weak interactions that take ups to downs and neutrinos to electrons. So it's a little more intricate than just the U1 symmetry ranking. I've always, well, I hear this curve. I always thought I heard that the Higgs was giving mass to everything, including... Including itself. Including itself. Yeah? Is that right? Yes? Well, no, no, no, no, no. Yes and no. Yes and no. Okay, so let me tell you what the point is. There are a collection of particles in the standard model, which for one mathematical reason or another cannot have any mass unless the symmetry is spontaneously broken. Now we're going to go through those reasons. We're going to go through them for various particles and see what the reasons are. Why mass is disallowed by the standard model without spontaneous symmetry breaking. Then when spontaneous symmetry breaking happens, all of those particles get mass and they become the masses of the standard model particles. Now you might ask, what about particles that didn't require a spontaneous symmetry breaking to get a mass? Are there any particles that don't require a spontaneous symmetry breaking to get a mass? And the answer is yes, there can be. Those particles are all the particles that we don't see in the laboratory. Why would it be that we would see those which symmetry breaking gives a mass and we don't see those that symmetry breaking doesn't give a mass? Any speculations of why that might be? Yeah. The mass might be distributed. Yeah. Yeah. The natural mass might be way up at the Planck scale. The natural masses of particles may be very much larger, leaving over a bunch of particles which don't have masses for what you're calling natural reasons. And those particles only get mass from spontaneous symmetry breaking. Those particles are the light ones. Those are the ones we see in the laboratory. We have every reason to believe and in fact I think we practically know by now that there are additional particles in nature. We know the dark matter particles. 
We don't know with absolute certainty what the dark matter particles are, but it seems very likely that the dark matter particles are particles a thousand times heavier than a proton, for example. Why is it that those particles have large mass and the other ones don't? And the answer is in the mathematics of the theory, those particles which have large mass don't require the phenomena of spontaneous symmetry breaking to get a mass. So we're going to go through that. That's my goal for tonight and if we don't make it through it entirely next time, the Higgs phenomena and what the words mean, the Higgs particle gives such and such a mass. The first class of particles which cannot have a mass unless the spontaneous symmetry breaking are the analogs of the photons, the gauge bosons. So tonight I'm going to try to show you how the photon gets a mass from spontaneous symmetry breaking. Let's begin with a boson field phi. We did this last time but since it's a subtle and difficult concept, let's do it again. And phi is a complex field. That means there's a phi star to go along with phi and we can write phi in two ways. Phi real plus i phi imaginary. What does it correspond to? It corresponds to what I drew before, phi real, phi imaginary. Or we can write it as phi equals rho e to the i alpha, where rho is the distance along here. It's not a real distance in space, it's a distance in field space and alpha is the angle. Two different ways, polar coordinates and Cartesian coordinates for the field. Now, incidentally, phi star is of course phi real minus i phi imaginary and it's also equal, this one here, is also equal to rho e to the minus i alpha. Rho and alpha are functions of position. This is a field, so these are functions of position. And as you can see, in either case there are two fields, the real field and the imaginary field, or the radial field and the angular field. You might begin to guess about the connection between rho alpha and the two kinds of motions that can take place here. Alpha corresponds to exciting the angular modes. If you vary alpha from place to place, it's like varying around the circle. If you vary rho, it's like moving radially back and forth. Which one do you think gets a mass? Rho. Which one doesn't get a mass? Alpha. Which one is the Higgs boson? Rho. Which one is the Goldstone boson? Alpha. Good. You got it. I hardly have to go through the mathematics now. No, really, that's the way the mathematics is. Now, the trick in going through the mathematics is to write the Lagrangian for this field. It's derivative of phi star times the derivative of phi. This, of course, means time derivative squared minus space derivative squared. But I'll just write it that way. Derivative of phi with respect to space and time as four vectors dotted into each other. That's the gradient terms in the energy. And you can work them out in terms of phi and phi star. I'll tell you what it is. This becomes derivative of phi real squared plus derivative of phi imaginary squared. In other words, it just behaves as if there were two independent fields, real and imaginary. Or we can write it in terms of rho and alpha. And this would become... I'll write it down for you. You can work it out yourself. Derivative of rho squared. And then plus rho squared times the derivative of alpha squared. Rho is the magnitude of the field, alpha is the angle, but each of these are fields. And this is what Lagrangian is. You can either think of it in one way or the other. No problem. 
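The "work it out yourself" step just mentioned, done in both sets of variables, using φ = φ_r + iφ_i = ρ e^{iα} and ∂φ = e^{iα}(∂ρ + iρ ∂α):

```latex
\[
\partial_\mu\phi^*\,\partial^\mu\phi
\;=\; \partial_\mu\phi_r\,\partial^\mu\phi_r + \partial_\mu\phi_i\,\partial^\mu\phi_i
\;=\; \partial_\mu\rho\,\partial^\mu\rho + \rho^2\,\partial_\mu\alpha\,\partial^\mu\alpha .
\]
```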
Now let's add in a potential energy. The potential energy I'm going to assume is symmetric. Symmetric with respect to rotations in this plane. In other words, that it really does have a U1 symmetry. For a potential energy to have a U1 symmetry, it means it's only a function of rho. Equivalently, it's only a function of phi star phi. It does not depend on the angle. So let's write that. Plus V of rho. That also corresponds... Well, okay. V of rho. Let's take... Actually, the potential always comes in with a minus sign in the Lagrangian. So let's put it in with a minus sign. A simple case would be rho squared. An energy which increases quadratically away from the origin. That's a possibility. Let's think about that possibility. Rho squared. It's not really what a... It's a possible expression. What would that correspond to in terms of phi real and phi imaginary? Just rho squared. Phi real squared plus phi imaginary squared. That's the Pythagorean theorem. Rho squared is phi real squared plus phi imaginary squared. So if the potential was the very simple form of quadratic, we would add here phi real squared, or minus subtract, not important, phi real squared plus phi imaginary squared. Notice the Lagrangian would actually be the sum of two terms, one for phi real and one for phi imaginary, and both of them would have a mass term. There could be a coefficient here, m squared over 2, let's say. Then you would have this situation which I drew over here, where both directions of oscillation correspond to a mass. And they're quite independent of each other. They don't even couple to each other. They don't even talk to each other. Just two separate fields. One that I labeled phi real, the other one I labeled phi imaginary. Now, there could be more complicated things here which would couple them together, but this is the basic picture if the potential looks like this. But what if the potential looks like this? That's also a function of rho, but it would be inconvenient to represent it in terms of phi real and phi imaginary, especially since the minimum is stuck out here away from the origin. The coordinates phi real and phi imaginary are not obviously the best coordinates when you're stuck out here. Better coordinates are the angular coordinate and the radial coordinate. Why? Because they correspond to two different kinds of oscillations with two different frequencies. In other words, just think of this as a function of rho and alpha. Now, this potential energy doesn't depend on alpha at all. But it does depend on rho. It depends on rho in a way which is minimized when rho is equal to f. And in fact, it takes a good deal of energy to displace it away from f. So it's natural to write that rho is equal to f plus a bit of fluctuation. In other words, the minimum energy is when rho is stuck at f. Now, it's not always along here rho is stuck at f. But you can displace it a little bit and let's represent that displacement away from f by a letter of the alphabet. And I'm going to choose the next letter of the alphabet after f, which is h. h for what? h for Higgs. h for Higgs. So the Higgs field is just the displacement of the radial direction or the radial part of the field away from its equilibrium. All right, now, it cost a good deal of energy to excite this Higgs field. Why? Because it's got a potential and that potential might be rather sharp. In fact, it is rather sharp. 
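A small numerical check, my own and not from the lecture, of the claim that the radial direction is stiff while the angular direction around the rim is flat. It assumes one standard choice of sombrero potential, V = (λ/4)(φ_r² + φ_i² − f²)², with illustrative values of λ and f; the expected curvatures at the minimum are 2λf² radially and 0 along the rim.

```python
import numpy as np

LAM, F = 0.5, 2.0   # illustrative values of lambda and f (my choice)

def V(phi_r, phi_i):
    """Sombrero potential (lambda/4) * (|phi|^2 - f^2)^2 -- an assumed standard form."""
    return 0.25 * LAM * (phi_r**2 + phi_i**2 - F**2)**2

def hessian(f, x, y, eps=1e-4):
    """Numerical 2x2 Hessian of f at (x, y) by central differences."""
    fxx = (f(x + eps, y) - 2 * f(x, y) + f(x - eps, y)) / eps**2
    fyy = (f(x, y + eps) - 2 * f(x, y) + f(x, y - eps)) / eps**2
    fxy = (f(x + eps, y + eps) - f(x + eps, y - eps)
           - f(x - eps, y + eps) + f(x - eps, y - eps)) / (4 * eps**2)
    return np.array([[fxx, fxy], [fxy, fyy]])

# Expand about the minimum at (phi_r, phi_i) = (f, 0): phi_r is the radial
# (Higgs) direction there, phi_i is the angular (rim) direction.
curvatures = np.linalg.eigvalsh(hessian(V, F, 0.0))   # ascending order
print("angular curvature (Goldstone, expect 0):            ", curvatures[0])
print("radial curvature  (Higgs, expect 2*lambda*f^2 = 4.0):", curvatures[1])
```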
So to some approximation, some rather good approximation, let's say that the low energy behavior is as if rho were frozen at value f. Now, this is a very common thing in physics to say that there are low energy excitations, low energy oscillations of a system, low frequency oscillations, and high frequency oscillations, and ignore the high frequency oscillations. So this is the equivalent to ignoring h here. Just saying rho is equal to f plus something that takes too much energy for us to worry about today. We'll worry about it another day when our accelerators have more energy. What would this look like then? Well, rho is stuck at f. We might as well ignore its derivative. V of rho is just V of f. So V is stuck at the minimum. If it's stuck at the minimum, it doesn't vary from place to place. This and this are irrelevant to the dynamics of the angular field. Furthermore, since it costs so much energy to make a bit of h, we can just approximately say that rho is equal to f. And we find that for the low energies, for the small energy excitations, that the Lagrangian is just f squared, a number, times the derivative of alpha squared. There is no potential energy associated with alpha. Why is there no potential energy associated with alpha? For the obvious reason that we don't gain any potential energy when we go around the angular direction. This is a little bit less familiar because it has this f in it, but we could redefine the field. We could redefine the field by inventing a new field, call it beta. Beta is equal to f times alpha. It's just a rescaling of the field, just a redefinition. And then we would rewrite this as just d beta squared. No mass term. No mass term because it doesn't cost any energy to displace the field homogeneously everywhere. No mass term, and so alpha or beta, depending on which you want to write, is a massless field. It is the Goldstone boson. And as I said, what its energy corresponds to is slow variations of the field. As you vary the field from place to place, slowly you get a little bit of energy. If you vary it slowly enough, you get a little bit of energy from the derivative terms here, but no potential energy. So that's the hallmark of a massless particle, and it's the Goldstone boson. Goldstone bosons are the massless particles which are the consequences of the spontaneous symmetry breaking of a continuous symmetry. Okay, that's the buzzwords. Now, that's half the story. That's the spontaneous symmetry breaking half together with the Goldstone boson phenomenon. The other half of the story, which also has to do with varying the phase of a field from point to point, is the Gage invariance. So I want to spend a little bit of time, I'm going to give you a break for a few minutes. Then we're going to review again what I told you last time about Gage invariance. That should take us another 15, 20 minutes or 10 minutes, I'm not sure. And then put the two of them together and see what comes out. Okay, and what will come out will be that the Goldstone boson is eaten by the Gage boson giving the Higgs boson a mass. No, giving the Higgs boson a mass and also giving the Gage boson a mass. Yeah. I'm confused whether rho is the radius in the complex plane or is the field value at the radius in the complex plane? I don't know. The field is a point in the complex plane, the value that a field takes on is a point in the complex plane. At every point in space, the field has a value which is a complex point in the plane. 
In other words, at one point in space, the field is 3 plus i times 6. And another point in the plane that's pi plus i times e. Some place else, it's the Euler-Mcaroni number plus i times the Schmullerwitz number. We haven't specified where the complex plane that is except we've said that it's a radial symmetric. What's a radial symmetric? The value of the field at a point is not radially symmetric. The value of the field at a point is the value of the field at a point. It's not radially symmetric, it's the point on the complex plane. What's radially symmetric is the potential energy. The potential energy is only a function of this squared plus this squared. 3, 6, the field at some point has the value over there. At some other point in space, the field takes on the value over here. At some other point, it takes on the value over here. The potential energy is only a function of the sum of the squares. But 3 plus i, 6 is the value of the field and it doesn't say anything about where in the complex plane that occurs. 3, 6. 3 is a function of the whole complex plane. The value of the field at x plus i, y is... No, it's not the value of the field at x plus i, y. The value of the field is x and y are not positions of space. This is not the field at the point x and y. This is the field whose value is a point in the complex plane. It's an abstract idea. Fields can be complex, they can be real, they can be multi-component. They can assemble them together if they have two components into a complex variable. So the field itself is a complex variable. It's a complex variable where as you move around in space, the field takes on different values on this complex plane. Get used to that idea. Fields don't just have to be numbers, they can be complex numbers, they can be multi-component objects. I have a question. In the sombrero picture there, going in the mass-fold direction, if you give it a big enough displacement, it will go into the other equilibrium point. Yeah, yeah. You're going to hold it up, take it, and go over the map. Just swing around. Does that manifest itself in some kind of a particle? Well, the problem is that the amount of energy that would be involved for the Higgs field is way beyond the energy that even a big accelerator can, so it's out of the ballpark from even the LHC. But yes, I mean, in principle, you could give the field a big enough knock to knock it over the top. Not everywhere simultaneously in space, that would be an enormous amount of energy, but in some regions, you know. But if that were exactly in line with the peak of the... Very unlikely....go off and wind up all over. So it's very unlikely. Right, that's right. Yeah, we'd make a monstrous splash of some kind going off and on. Let's take a ten-minute break. Let's go through again what we mean by a field as a function of space. And what this idea of a field, which is a complex variable, is just to make it clear once more. We have space, x, y, and z, right? Now we can have fields in space. Let's say scalar fields. You can have a scalar field in space, which... Now, what characterizes it as being a scalar? It means it doesn't change when you rotate coordinates. That's all it means. It doesn't change... Not when you rotate the coordinates we're talking about here. When you rotate the coordinates of real space, it doesn't change. That's what a scalar field is. Now you can have several scalar fields. Let's say in particular you can have two scalar fields. Let's give them names. Let's call them phi i and phi r. 
They're two scalar fields. You can then assemble them if you like. This is totally a matter of convention. It's totally a choice. It's a useful choice. It's one of these things in mathematics which sometimes are really just useful conventions. And this is one of them. In fact, the whole idea of complex numbers is a useful convention. It's a useful convention for representing pairs of numbers. That's all. And the pairs of numbers are called the real and imaginary parts of the numbers. Not... Better not call them x. x stands for space. But just real and imaginary components are just ways of speaking about pairs of real numbers. Now it is a little more than pairs of real numbers. It's more structure that goes into the complex numbers. But fundamentally it's a way of talking about pairs of numbers. You can plot pairs of numbers on the Cartesian plane. The Cartesian plane is not real space. It's just the Cartesian plane in which you've plotted the real and imaginary parts of complex numbers. If I had two fields, two scalar fields, one of which I called phi real, that's its name, phi real and phi imaginary, I can assemble them together into a complex field. Yeah, a complex field. And at each point of space there would be a phi real and a phi imaginary. At each point of space I could go and plot the value of the field on the complex plane. The complex plane is not space. It's just an auxiliary construction. An auxiliary construction which is useful in replacing two real fields by one complex symbol. That's all it is. It's a trick for representing, for cutting down the number of equations that you write. Cutting it in half basically by writing them as equations for complex variables. Okay, so in that language you would say that the scalar field, the complex scalar field phi, is that just that? It's a complex scalar field and at every point in space, real space, it has a value. That value is not a single number, it's a pair of numbers. Why? Because it's a pair of fields to begin with. That pair of numbers can be plotted. So at this point of space over here, the field is over here. At that point of space, the field is over here. At that point it's over here and so forth. That's the idea of a complex scalar field. Sometimes the real and imaginary parts are called the real and imaginary components of the field, but don't get those confused with the components of a vector field or a tensor field or the components of the gravitational field. Those components have to do with projections onto real space-time coordinates. This has to do with the projections onto this fictitious two-dimensional plane that's useful for writing pairs of numbers. Now, once you realize that you can write the field as a complex variable and plot it on the two-dimensional plane like that, then it becomes a possible symmetry of nature or a possible symmetry of mathematics to rotate the field. In other words, there might be a symmetry which says that you can relabel or redefine phi i and phi r by rotation. And that happens when we have Lagrangians like this, which don't change when you do what? When you multiply phi by e to the i times a phase. These are the class of Lagrangians which don't change when you rotate the field simultaneously everywhere in space. When you rotate it in this fictitious plane, in other words, take the field at every point of space and you rotate its value. You give a new value to the field which is related to the old by changing the angle here rigidly everywhere in space. 
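The rigid rotation being described, written as a formula; θ here is a single constant, the same at every point of space:

```latex
\[
\phi(x) \;\to\; e^{i\theta}\,\phi(x), \qquad \theta = \text{constant}
\qquad\Longleftrightarrow\qquad
\rho(x) \to \rho(x), \quad \alpha(x) \to \alpha(x) + \theta ,
\]
\[
\text{and both } \partial_\mu\phi^*\,\partial^\mu\phi \text{ and any potential of the form } V(\phi^*\phi) \text{ are left unchanged.}
\]
```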
Then this Lagrangian does not change. We went through that last time and that's the basic idea of a complex field with a symmetry, with a symmetry of called u1 symmetry. It's a global symmetry. It's a global symmetry. That's called a global symmetry. A global symmetry is one where you're not allowed to change the phase angle in space. You just rotate it everywhere. So you can see that symmetry just by looking at what's a function of rho? Yeah. Well, you can look at it in a number of ways. You can see that since this has real squared plus imaginary squared things, whenever a thing is a function of real squared plus imaginary. And the other term was the Lagrangian theorem. Yeah. Right. So this is a function of rho also. You can see it here. Incidentally, what does the symmetry operation do on rho and alpha? Rho stays the same. Rho stays the same, and alpha? You shift it by a constant, namely the angle that you're rotating the field by. Well, shifting alpha by a constant doesn't change its derivatives. Yeah, that's why I'm saying that the potential is the only part where the derivative doesn't connect. That's correct. Right. So, all right. Now I think we have the basic idea. Now let's go through again. All right. Now, yes, let's go through the idea of a gauge transformation. We went through it last time, so I'm just going to remind you. We're not going to spend a lot of time on it. We're going to do it as fast as we can, but please ask questions. It is not a symmetry of this Lagrangian to multiply the field phi by an arbitrary function of position, an arbitrary phase, this is called a phase, e to the i times this times phi of x, of x. This is not a symmetry. We went through this last time. We rewrote that Lagrangian in terms of phi prime and found out that it didn't have the same form. In other words, the Lagrangian written up there, in particular, the term derivative of phi star times the derivative of phi, I don't have to put in the indices mu here, you know about that. That is not the same as the derivative of phi prime star derivative of phi prime. Not the same, and therefore it is not a symmetry. Okay? This is not a symmetry. If we do not multiply, that's because when the derivatives act, they act not only on phi, but they act on theta here. They act on theta, if theta is variable. If theta didn't vary in space, then this would be a symmetry. But if theta varies in space, it's not. We worked that out in detail last time, and I'll show you in a moment what we found. But to make it a symmetry, you have to add another field, a kind of compensating field. A compensating field that will compensate for the changes in the Lagrangian when you shift the phase of the field. Keep in mind, it's useful to remember that shifting the phase of the field is just adding a function to alpha. Okay? Shifting the phase of the field is adding a function theta of x to alpha. It changes the phase angle everywhere by amount theta. If theta is a constant, it's a symmetry. If theta is not a constant, it's not a symmetry. In order to make it a symmetry, you have to add another field, and the other field is called the vector potential. It's a four vector. It has an index mu. It's also a function of position. Out of it, you build the electromagnetic field, E and B, your electric and magnetic field. The transformation property, when you do a gauge transformation, like this is called a gauge transformation, when you do a gauge transformation, you'll have to do something to A at the same time. 
In fact, you'll have to take A prime of x and set it equal to A of x plus or minus. I always get confused by the signs here. I think it's plus the derivative. This is A mu of theta. In other words, you add to A the gradient, the spacetime gradient of theta. If you do these two operations together and you construct the right kind of Lagrangian, it will be invariant under this operation. If it's invariant, it's a symmetry. Let me remind you how it works. Incidentally, that's good enough. You have a minus there. You had the minus there last time. Last time I had minus. I think the idea is that A goes to A prime plus D mu. I'm just saying, if you write it with an arrow notation of the transformation going from starting with A mu and going to what, then you have a minus sign. If you write the transformation as A mu goes to A mu plus D mu, then when you write it with the A prime. I was reading it the other way. This is the transformation that you make. These are the prime variables. These are the unprimed. I may have written it the opposite way last time. I don't remember. But I will compensate for any mistakes, not any mistakes, but any change of notation by changing it consistently everywhere. From one week to another, I don't remember the precise notation. But both of them, you can do it either way. You wrote that D equals del plus IA and D star equals del minus IA. I may have changed, I may have inadvertently changed notation. But as long as you do it consistently everywhere, it's fine. So let's now, we've seen that the ordinary derivative of phi prime is not simply related to the ordinary derivative of phi. Let's write out the relationship. Let's take the derivative of phi prime. Derivative with respect to any axis. I'll stop writing mu's and nu's. The derivative of phi prime is, first of all, equal to e to the i theta, which is a function of x, times the derivative of phi. If that's all there was, and we did multiply the derivative of the complex conjugate of phi by the derivative of phi, the e to the i theta's would cancel out, right? So if that's all there was, we'd be in fine shape, we would say there's a symmetry. But there's another term. And the other term is phi times the derivative i times the derivative of theta, all times also e to the i theta. It's this term here, which is the nuisance. And if all there was was something which carried, and if theta only entered through e to the i theta here, it would cancel out when we multiplied this by the complex conjugate. But theta comes into its derivative over here. That's the problem. All right, to fix that problem, why should you want to fix it? Well, it's a fact of nature that there is a gauge, that there are gauge symmetries. There's also an interesting fact of mathematics that such symmetries are possible. To make it possible, you replace the ordinary derivative by what's called the covariant derivative. This is a mathematical concept that comes from the theory of fiber bundles, but I think it was, I don't know whether the theory of fiber bundles was invented before or after the notion of gauge invariance and physics. I have a feeling it was invented afterwards, but somewhat independently. I think variance was really vile to generalize the inner physics. That's right. I think so, yeah. Right, I don't know, but I don't know whether mathematicians invented it completely independently through the fiber, but I really don't know. In any case, it is both a mathematical and a physical concept. 
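The computation being described in words here, written out in one line; the sign of the shift of A is convention-dependent, as the lecture itself notes:

```latex
\[
\phi'(x) = e^{i\theta(x)}\phi(x)
\;\Longrightarrow\;
\partial_\mu\phi' = e^{i\theta(x)}\big(\partial_\mu\phi + i\,\phi\,\partial_\mu\theta\big),
\qquad
A'_\mu = A_\mu \pm \partial_\mu\theta ,
\]
```

so the ordinary kinetic term picks up unwanted pieces proportional to the derivative of θ, and the compensating shift of A is designed to cancel exactly those pieces.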
And in order to make the Lagrangian gauge invariant, you invent a new kind of derivative. It's called the covariant derivative. With respect to, again, with respect to each direction of space or time, you write that this is equal to the ordinary derivative. I think it's plus. Now I'm going to get myself confused. I'll try plus; if it doesn't work, we'll change it. Plus i times A times phi. Now there is an implicit index in here. Once more, I will write it just A; remember A is a vector. Derivative plus multiplying by A. Sometimes this is indicated by saying that the derivative is replaced by derivative plus i A. But it only makes sense if you apply it to a complex field like phi. So this is the definition of the covariant derivative. And if I were to write the covariant derivative of phi star, it is by definition just the complex conjugate. So it's equal to d mu phi star minus i A mu phi star. Okay. Now I assert, easy to check, easy to check, that the relationship between the covariant derivative of phi and the covariant derivative of phi prime is very simple. All you have to do is evaluate the covariant derivative of... what is the... this is the covariant derivative of phi. Let's write down what the covariant derivative of phi prime is. Well, it's exactly the same except you stick primes everywhere. So this is: D mu phi prime is just equal to d mu phi plus i A prime times phi... phi prime, thank you, phi prime everywhere: phi prime, phi prime. Okay, if you stick phi prime over here, you're going to wind up with this extra term over here. If I evaluate d mu phi prime, let me write it better, d mu phi prime. Here it is. Let's see, is this right? No. What did I do here? This was d mu phi prime, right? Well, you used the capital phi. Hm? Phi, phi, the real phi. Well, it has a hook. No, no. Yeah, no, no, there's no difference between capital and not capital. No, here it is. So regular phis are actually lower case, and when you use the bars, they are uppercase. Okay. When you don't use the bars, they're lower case. What bars? Those bars, the crossbars. What bars? If you don't have the little bars on the top and the bottom, phi is lower case. Sorry, where's the little bar? You know, those bars that you write on the top, yeah, like those things. It's uppercase phi. Yeah, that's an uppercase. Yeah, I'm just telling you, there's no distinction in my notation between uppercase and lower case. Sometimes I like to put the little things there; when I'm feeling in a good mood, I put these things on. But I'm not feeling in a good mood, I leave them out. I see. So that's it. In most of the books, they use lower case. Yeah. But then for special cases, they will. No, no, no. Okay, phi is phi. Phi is phi, and I did not mean to distinguish uppercase from lower case. Wait, wait, wait, wait, I've got to put it all... I want to be consistent here. What? Would it be easier to erase the other two? What? Would it be easier to erase the other two? Either way. That's up to you. Oh, okay. Now, was there a question? He was asking about the indices balancing up properly. I don't remember my special relativity that far back. Do we need an index up and an index down convention on the derivatives? Yeah. What's really meant by d mu phi star d mu phi? Is one the complex conjugate of the other? No, no, no, no, no, no, no, no. They are the, um, okay. So let's... if you have a four vector, an arbitrary four vector V mu, you can multiply it by G mu nu. That's the metric, and you get V sub mu. V sub mu.
All right, the meaning of this symbol for ordinary special relativity is that the time components of these two are the same in sign and the space components are opposite. All right. So, right. The meaning of this is it should be taken to be the upper, the lower, the covariant vector times the, right. But I don't want to write that every time over and over. So I just write an abstract symbol derivative times derivative. Yeah. I think you have to have the lowering operator there. Hmm? You have lower on the left, upper on the right. Sorry, you're right. Thank you. Right. But we've already gone through special relativity a number of times and so I don't feel the need to be so careful about relativity indices. Right. So I won't even, I won't even bother writing them for the most part. I keep saying I'm not going to write them and I keep writing them. Okay. Good. Now, here I, right. If I plug in d mu phi prime, I will get this unwanted nasty piece over here. But if I also plug in a prime times phi, where is a prime? Let me write a prime over here. A prime equals a to the plus or minus d mu theta. Then they will cancel. They will cancel. There will be a term di d mu theta times phi. You know what I mean. Right. So if I, the two d mu theters will cancel each other. One from calculating derivative here and the other from the explicit difference between a prime and a. So what will I find? Well, I'll tell you exactly what I will find. I will find that d mu phi prime, this one over here, is equal, not quite to just d mu phi. What am I missing? E to the i theta. Overall, e to the i theta. So you say, well, I didn't get anywhere. D mu of phi prime is not the same as d mu phi. It's not gauge invariant. True enough. But it only differs from being gauge invariant by an outside overall factor of e to the i theta, which will disappear when you multiply this by its conjugate. Notice that in this formula here, the derivative of theta does not appear. You have the covariant derivative of phi prime is equal to the covariant derivative of phi up to this overall factor of e to the i theta, which will cancel when you multiply this by d mu of phi star prime. That will be equal to e to the minus i theta times d mu of phi star. So when you multiply these two together, the unwanted, unpleasant things that have derivatives of theta in them disappear. What is the cost? The cost is you had to introduce a new field which transformed in this way over here. But once you allow yourself that freedom to introduce a new field, the vector potential, then this Lagrangian over here becomes invariant than the gauge transformations. Later on, we'll write it out in some more detail. But one more point before we jump to the conclusion, and that's the electromagnetic part of Lagrangian. This field is a charge carrying field. It's not the electron because the electron is not a boson, but it could be some charged boson of some sort. Some charged boson of some sort, and there are charged bosons. A helium atom is a helium nucleus is a charged boson. So phi represents some charged boson. A represents the vector potential, and the electromagnetic field is also governed by a Lagrangian. The Lagrangian is to define f mu nu to equal d mu a nu minus d nu a mu. The space time, the mixed space time component is the electric field. The space space component are the magnetic fields. 
And the square of f, now that there are some signs that go into it, f mu nu, f mu nu, that's e squared minus b squared, that's the Lagrangian of the electromagnetic field. The important thing for us is that this is gauge invariant. This does not change when you make a gauge transformation. What happens to a? A changes by the gradient of theta. So what happens to this? f prime becomes f plus d mu d nu theta minus d nu d mu theta. These two terms cancel. So f is itself gauge invariant, and when you add f squared, it's still gauge invariant. So the whole upshot is, oh, one more thing, one more thing you can add that's gauge invariant. And that's a potential which depends only on five star phi. A potential which depends only on five star phi. So the full Lagrangian of a simple gauge theory involving the interaction of electromagnetism with a charged scalar field would look like this. It would have d mu phi d mu, let's just put d phi d phi star. It would have v of phi star phi, which is just v of rho, if you remember, and then plus f squared, plus e squared minus b squared. And it would be fully gauge invariant, no change in it when you make a gauge transformation. That's the simplest gauge theory in a nutshell. And that's basically it. That's basically the structure of all gauge theories. They have a structure similar to this. Now let's ask how what happens when the symmetry of rotation of phi is spontaneously broken? When does that happen? It happens when the potential has this upside down sombrero. I guess it's not an upside down sombrero. When it has that shape so that the ground state of phi breaks the symmetry. What happens? What's the next thing that happens? What new thing happens? Oh, before we do that, let's just remember that this potential is such that the angular direction is a goldstone boson, no mass. The radial direction, which we call the Higgs boson, has a mass. And furthermore, the photon does not have a mass. How do I know the photon doesn't have a mass? The reason is simple, the F is only a function of the derivatives of the vector potential. That means if you were to shift the vector potential everywhere simultaneously, there would be no change in energy. Shift the vector potential everywhere together, no change in energy. That means a massless particle. So there are two apparent massless particles floating around in this theory. One is the goldstone boson and the other is the photon. And now, watch what happens. Let's take, oh, what would a mass term for the photon look like if you had one? Well, the mass term is always something which is quadratic in the field. For example, the mass of phi, if it existed, would be phi star phi, or phi squared, phi real squared plus phi imaginary squared. Or the photon mass B, it would be something proportional to a squared to the vector potential squared. If you allowed yourself to put the vector potential squared into the Lagrangian, you would have a mass for the photon. That would not be Gage invariant. Look at it. Here's a squared, which means, this means a mu, a mu. If you put that into the Lagrangian, what would happen when you make a Gage transformation? It would shift to a plus the gradient of theta squared. All right? But that's not the same as a squared. A squared is not the same as a plus derivative of theta squared. So a squared, just a squared. A squared would not be Gage invariant. So a mass term is not allowed by Gage invariants. Gage invariants is a symmetry which prohibits the photon from having a mass, or does it? 
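Before looking at this more closely, it may help to have the whole construction of the last few paragraphs collected in one place. The signs and normalizations below follow one common convention (for instance the customary factor of minus one quarter in front of F squared), which the lecture is not fussing over:

D_\mu\phi = \partial_\mu\phi + iA_\mu\phi, \qquad (\partial_\mu + iA'_\mu)\,\phi' = e^{i\theta}\,(\partial_\mu + iA_\mu)\,\phi \;\;\text{when}\;\; \phi' = e^{i\theta}\phi,\; A'_\mu = A_\mu - \partial_\mu\theta,

\mathcal{L} = (D_\mu\phi)^*(D^\mu\phi) - V(\phi^*\phi) - \tfrac{1}{4}F_{\mu\nu}F^{\mu\nu}, \qquad F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu.

Each term is separately gauge invariant: the phases cancel between D mu phi and its conjugate, V depends only on phi star phi, and F mu nu shifts by the symmetric combination of second derivatives of theta, which vanishes.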
Well, let's look at this Lagrangian a little more closely, and in particular, let's look at this term over here. In fact, let's concentrate on the Goldstone-Boson piece of it. And we can do that by writing that phi is f times e to the i. Ah, no, let's not. Let's not. Let's just, let's go for it. Here's the terminal Lagrangian. All right? So what's in here? d phi is derivative of phi plus, is it plus i a times phi? Is that right? It's what you wrote. That's what I wrote. d phi star is derivative of phi minus i a phi star. All right? Now let's suppose that we're looking at the ground state when the field is right over here, when phi is equal to f. Let's say just the real value. The real value of phi is equal to f. Let's, for simplicity, let's say the imaginary part of phi is zero. So the field is sitting right over here. Let's look at the various terms. First of all, ah, well, the most important term is this one here. A times phi times phi star. But phi times phi star is stuck at the minimum here. It's stuck at the minimum because it takes a lot of energy to shift phi away from the minimum. So as long as we don't excite a lot of energy, there's an effective term in the Lagrangian, a squared times phi star phi, which is just equal to f squared times a squared. Somebody said, why'd you say, hm? Because you immediately see that this is behaving like a mass term for the photon. This is behaving exactly like a mass term for the photon. What's the mass? The mass is basically just f, actually twice f, well, squared of two times f. But this factor, this numerical factor here, is playing the role, forget this, it's playing the role of a squared times f squared, which would be a mass for the photon. It would be an energy that you get when you shift the field homogeneously. So what have we learned? We've learned that when you spontaneously break the symmetry, when you study the theory in the neighborhood of the minimum over here, the effect of one of the terms in the Lagrangian is just to make a mass for the photon. Isn't that interesting? When the photon propagates in this world where the symmetry is spontaneously broken, it behaves as if it had a mass. And it does have a mass for all practical purposes. So we've gotten around the statement that says photons can't have mass. Now real photons, of course, don't have mass. But these photons in this world with a spontaneously broken symmetry do have mass. The second thing is the Goldstone boson disappears. Why does the Goldstone boson disappear? Well, let's remember what the Goldstone boson is. It's this angle alpha, the angle of the field on a complex plane. What does a gauge transformate? Well, first of all, what is the Goldstone boson? The Goldstone boson is simply a slowly varying or a varying value of alpha. Alpha is a field. It can vary in space. If it couldn't vary in space, it wouldn't be interesting. Alpha can vary in space. But you can make alpha vary another way. You can make it vary by doing a gauge transformation, adding a little bit of theta of x. So you can make alpha vary from place to place, and you might think that that makes a Goldstone boson. But it doesn't because this is a symmetry. A symmetry means that you don't change the energy. It means that the system stays in the ground state. This is not a real change. It's a gauge transformation. So the Goldstone boson completely disappears. It completely disappears. It just becomes a gauge transformation, no real energy associated with it. 
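The mass term being pointed at can be exhibited by freezing the field at the minimum. Setting phi equal to f, real and constant, the only surviving piece of the covariant-derivative term is

(D_\mu\phi)\big|_{\phi=f} = iA_\mu f, \qquad (D_\mu\phi)^*(D^\mu\phi)\big|_{\phi=f} = f^2\,A_\mu A^\mu,

which has exactly the form of a photon mass term. The precise numerical coefficient (the factor of two or square root of two mentioned in passing) depends on how the fields are normalized, but the scale of the mass is set by f.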
And at the same time, the vector potential gains a mass. So it's a sort of zero sum game. You get rid of one mass. Well, you tell me, it's not a zero sum game. You've removed the massless Goldstone boson and removed the massless photon, replacing the massless photon by a massive photon. And at the same time, the Higgs boson now has a mass because it oscillates about here. That's the Higgs phenomenon. It, by plugging in, basically just by plugging in the value of the Higgs field or the value of phi at the minimum here, we generated a mass term for the photon and where did it come from? It came from this covariant derivative. That's all. It just came from the covariant derivative. Ah. Could you rewrite the whole Lagrangian in those terms? Yes, you can. And if you can, you can rewrite the Lagrangian in those terms. You can do a gauge transformation and basically completely remove this angular degree of freedom. Completely remove the angle from the, I'll show you a simple baby version of that. If you completely forgot the radial degree of freedom, just completely froze it. Just completely froze it. You would find that the Lagrangian, let's see, the covariant derivative of phi would become f, I think, times the derivative of alpha plus the vector potential itself. Maybe times e to the r. Let's see, phi is just equal to f times e to the i alpha. And so the derivative of phi is f i alpha, sorry, f i f derivative of alpha times e to the i alpha. So there would be an e to the i alpha multiplying the whole thing. This is just setting the magnitude, the rho equal to f. I think f would come on the outside here, wouldn't it? f would come on the outside. This would be the covariant derivative of phi, and it would contain the derivative of the angular part. That's the important thing. It contains the derivative of the angular part. What about the derivative of phi star? That would be the same thing, derivative of alpha i here, minus i a times e to the minus i alpha f. I'm getting tired, so it's getting a little bit late. That's what you would get. When you multiply them together, you would get exactly what you expect. OK, now you also have f mu nu. You have this times this in Lagrangian, and you have f mu nu squared. Now let's make a gauge transformation. This is a clever gauge transformation, which is designed to get rid of this term altogether. How can you make a gauge transformation that gets rid of this term? Just set theta equal to minus alpha. Remember what happens to the vector potential when you make a gauge transformation? It just picks up a derivative of theta. If you make your gauge transformation using for the gauge function theta, alpha itself, this just goes away completely. Sorry, it doesn't go away completely. It just becomes a. And this terminal Lagrangian just becomes a squared. So by design, you can construct a gauge transformation, which just removes this. This times this gets rid of the alpha dependence altogether, and you just get f squared times a squared. All you get is the mass term. That's all that's left. And of course, you still have f squared, but this is gauge invariant, so it doesn't change when you do the gauge transformation. And that result, the angular degree of freedom just sort of disappears. Some magic happened, the angular degree of freedom disappeared. The Goldstone boson is gone, and the photon has a mass. Meanwhile, the Higgs boson, which is this radial oscillation, that's still there. 
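A compact version of the clever gauge transformation just described, in the same sign convention as above: freeze the radial mode and write phi as f times e to the i alpha of x, so that

D_\mu\phi = i f\,(\partial_\mu\alpha + A_\mu)\,e^{i\alpha}, \qquad (D_\mu\phi)^*(D^\mu\phi) = f^2\,(\partial_\mu\alpha + A_\mu)(\partial^\mu\alpha + A^\mu).

Choosing the gauge function theta equal to minus alpha makes phi real and shifts A mu to A mu plus the gradient of alpha, so this term collapses to f squared times A mu A mu: the angular, Goldstone degree of freedom is absorbed into the now-massive vector, while F mu nu F mu nu is unchanged.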
So this is the magic of the Higgs phenomena, and it's the phenomenon of giving a mass to the gauge bosons by spontaneous symmetry breaking. Is it fair to say that the oscillations in alpha are indistinguishable from just gauge oscillations? Yes, from just gauge trends. Yes, that's fair to say. It's not only fair to say it's true. Now, I will tell you right now that this is also the physics of superconductors. In a superconductor, there are charged bosons. The charged bosons are bound pairs of electrons. They're called Cooper pairs. Cooper pairs are loosely bound pairs of electrons, but they are bosons. So there's a charged boson field in the system, and the charged boson field gets shifted away from the origin by some complicated dynamics in the superconductor. It creates a condensate of those charged pairs, and it's equivalent to exactly this kind of spontaneous symmetry breaking, where phi would be replaced by the field describing the Cooper pairs. So this is also the physics of a superconductor, and in the superconductor, the photon also behaves like a massive particle. One last thing. How can you lose a degree of freedom? How on earth can a degree of freedom disappear? The angular degree of freedom, it didn't disappear. What has happened? A massless photon has two polarization states. Remember, polarization is a little arrow that is perpendicular to the motion of the photon. It's, if you like, the direction of the electric field oscillations. So every photon carries with it a polarization vector. There are two orthogonal directions for the polariz- If the photon is going that way, the polarization can be that way or it can be that way. So there are two polarization states. You can also think of them as the states of circular polarization of the photon. There is no state in which the polarization vector of the photon points along the direction of motion. Maxwell theory doesn't allow that. Why is that consistent if the photon is described as a vector? How can it be that it only has two components and not a third? And the answer is very simple. The photon moves at the speed of light. It can never be brought to rest. Suppose it could be brought to rest. Supposing the photon could be brought to rest. If it could be brought to rest and you brought it to rest, it was going down that way, and you brought it to rest, it had a polarization along the x-axis, and you brought it to rest. It would be a particle at rest with a polarization along the x-axis. Now you could accelerate it along the x-axis, and all of a sudden it would become a particle whose polarization was along its direction of motion. So the answer is if a particle like the photon were to have mass, it would have to have three directions of polarization. You could bring it to rest, and once it's at rest, there's no distinction between the three directions. It would have to have three directions of polarization, and then if you sped it up along some axis, it would have three directions of polarization. The massless particle can never be brought to rest, and it's perfectly consistent for it to only have two directions of polarization, namely circular this way or circular this way. Another way of thinking about it is if you take a particle circularly right-handed polarized, and you brought it to rest and then accelerated it off in the opposite direction, it would all of a sudden have left-handed polarizing. It would have left-hand, so it's also a way to think about it. The point is that you have not lost a degree of freedom. 
The Goldstone boson has really become the other missing polarization of the photon. Another way of saying it is you started with four degrees of freedom: the two components of the charged scalar and the two polarization states of the massless photon. In the end, you were left again with four degrees of freedom: three polarizations of the now-massive photon and the Higgs boson. So you really haven't lost any degrees of freedom. You've turned the Goldstone boson into what's called the longitudinal degree of freedom of the photon, the one in which the vector potential, or the electric field, is pointing along the direction of motion. That's what's happened, and now they all have mass. Now the next time, I will show you the physics, in a simplified context, of why the fermions, the electrons, and the quarks get their mass from the Higgs phenomenon. So far we haven't talked about the quarks and the electrons. They also have mass. What does their mass have to do with the Higgs phenomenon? That will be the next time. When we finish that, we will have a setup in which all particles, including the Higgs boson, all get their mass from the same place, namely the spontaneous symmetry breaking. Then we can start to talk about this very interesting situation that all of the particles in nature, that we know, that we've measured, are the ones which would become massless if there was no spontaneous symmetry breaking. Boy, I've had enough for tonight. For more, please visit us at stanford.edu.
(March 30, 2009) Leonard Susskind explains the Higgs phenomenon by discussing how spontaneous symmetry breaking induces a mass for the photon.
10.5446/15074 (DOI)
Stanford University. Alright, last time I started to tell you something about how quantum field theory gives rise to a theory of the particle interactions through an object called the Lagrangian. I think that was not terribly clear, so I want to go back to it a little bit before starting to discuss the details of particle physics. We're still thinking about quantum field theory. The basic technique that I was alluding to last time is called the path integral method of quantum mechanics. And it is the most direct route to Feynman diagrams, to the theory of particle interactions that are based on Feynman diagrams. So let me just try to briefly go through the basic ideas, which we started to do last time, but I don't think I was sufficiently clear. Anyway, the path integral formalism or method in quantum mechanics due to Feynman is a generalization as the quantum mechanical version of the principle of least action. So I just want to remind you what the principle of least action is. If we're talking about a particle, now I don't mean a particle from the particle physics quantum field theory point of view, just a classical Newtonian particle, the motion of the classical Newtonian particle from one point of space time to another, just very quickly, is determined by a Lagrangian. Lagrangian is a function of the coordinates of the particle, the time derivatives of the particle, and from the Lagrangian, whatever the Lagrangian is, one constructs the action. And the action is an integral along the trajectory of the particle of the Lagrangian. For every trajectory, whether it's the true trajectory or not the true trajectory from here to here, by the true trajectory, I mean the solution of Newton's equations from one space time point to another space time point, whether or not the trajectory is a solution, it has an action, the integral of the Lagrangian along the orbit. The classical principle of least action is that the trajectory followed by the particle through space time minimizes the action. From that, you can derive differential equations, and those differential equations are called Newton's equations. And I will assume that you know a little bit about this idea of action. Now, the same idea applies to classical field theory. And the way the idea works is for a particle, for an ordinary particle motion, the Lagrangian is a function along an orbit, a function of an orbit. For a field theory, and of course it's an orbit which connects some initial configuration to a final configuration. For a field theory, the equations of motion of a field theory, classical equations of motion, are also determined from a Lagrangian. The Lagrangian is a function of whatever fields, I'll call them generically just phi, and not just the time derivatives of the fields, but the space derivatives and the time derivatives, let's say derivative of phi with respect to x mu. That's just a shorthand way of saying that the Lagrangian depends on phi and derivatives of phi. One thing which is required by the theory of relativity is that the Lagrangian be a scalar. That a transform on the Lorentz transformations is a scalar, but that being said, the Lagrangian can be anything built out of phi's and derivatives of phi. What is the action? Imagine we have a region of space time, time as usual flows upward, space is horizontal, and we have a region of space time between an initial time and a final time. 
Then the action in this region, the action is the action in this region, just as the action for a particle trajectory is the action from one end point of the trajectory to another. Here it's the action in a region of spacetime between an initial configuration and a final configuration. The action is equal to the integral over that region, let's write it d4x, which means the time and the x and the y and the z, of this Lagrangian. It's the total integrated value of the Lagrangian over space and time, whereas for the particle it's just an integral over time. Now what do you do? What's the analog of a starting point for the particle? A value of the field on the initial surface here. Given not just one field, but all the fields in the problem, prescribing the values of the fields on the initial surface here and prescribing the values of the fields on the final surface, you can ask: is there a solution of the equations of, oh, let me go back a step, forget that. Yes, is there a solution of whatever the equations of the theory are which starts with a given initial value of the fields at time t equals zero, let's say, and ends with the fields being something different at some later time? The answer is the principle of least action again. The equations of the theory are formulated by saying yes, there always does exist something in between which is a correct solution of the theory, which has a given initial value of the fields here and a given value of the fields at the end point, and the correct solution for the fields in here is the one which minimizes this action, the one which minimizes the action. So the principle of least action gets extended to a kind of spacetime principle of least action where the degrees of freedom in here are fields. Okay, now let me very quickly remind you what the quantum mechanical use of action is. How exactly you derive the classical principle of least action from the quantum mechanical version, Feynman's quantum mechanical version, we don't really need to get into here; you can go look it up in anything about path integrals. But for a particle, let's come back to the particle motion for a moment. According to Feynman, or according to quantum mechanics, a thing that you might want to calculate is the amplitude. Is there anything wrong with the value of a field returning to itself? No, no, no, no more than the possibility of the position of this thing returning to itself. So how do you go from one spacetime event to another spacetime event? Why spacetime events? We're having values of fields here and values of fields here. Well, integrating d4x, so there has to be some d4 trajectory. No, the trajectory is replaced by a history, a history of the field from the beginning to the end. A history means the values of the fields all along here, everywhere in here. The idea of a trajectory becomes a trajectory in field space, which means- We have to specify actually the coordinates as well as the field values to get an initial point. To get an initial point? Well, you're specifying the initial boundary condition, or whatever, as field values. Field values all along some initial surface. You pick the initial surface. In other words, you pick a time- It's the whole field at some cross-section- Yeah. All right, which is another way of saying the whole field at a given instant of time. And the final configuration is the whole field at some later time.
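In symbols, the two versions of the principle just described are

S_{\text{particle}}[x] = \int dt\; L\big(x(t), \dot{x}(t)\big), \qquad S_{\text{field}}[\phi] = \int d^4x\; \mathcal{L}\big(\phi, \partial_\mu\phi\big),

\delta S_{\text{field}} = 0 \;\Longrightarrow\; \partial_\mu\,\frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)} - \frac{\partial\mathcal{L}}{\partial\phi} = 0,

with the trajectory held fixed at its endpoints in the first case, and the fields held fixed on the initial and final surfaces in the second.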
And the question is, is there a solution of the theory, whatever the theory is, which interpolates in between these in the same way that a trajectory interpolates from here to here? The rule is: minimize the action. If that's carried out (we did this a number of classes ago), the requirement of minimizing the action leads to partial differential equations for the fields, and those partial differential equations are things like Maxwell's equations, things like Einstein's equations, and so forth. All right, so let's come back now to the quantum mechanical idea. The quantum mechanical idea has to do with amplitudes. Amplitudes are things out of which you construct probabilities. So for example, you can ask the question: what is the amplitude that if a particle is injected into the world, however it's injected in, at a spacetime point x and t, let's say this is at x and time t, what's the amplitude that if I look at a later time, let's call it t prime, I will find it at position x prime and t prime? So it's the amplitude that if I start a particle at this point, close my eyes for a while and then turn on a detector which looks for the particle at a certain place, the amplitude for finding it there. That amplitude is a complex number that depends on the initial point and on the final point. It's a complex number whose magnitude squared is the probability. That's the rule. You square the amplitude, or multiply it by its complex conjugate, and that is the probability for the particle to go from here to here. Feynman's rule is the following. It says: take the action for any trajectory, take an arbitrary trajectory, and write down the expression e to the minus i times the action of the trajectory. We can just write it out. It's the integral of, we're talking about particles now, the integral of the Lagrangian dt from one point to another point. It's a function of, or a functional of, the trajectory. And now, actually there's a factor of h bar in here, a factor of h bar in here; that's where quantum mechanics comes into it. Sum this, or integrate it, over all possible trajectories. Well, in non-relativistic quantum mechanics, you don't let them go backward in time. Take all possible trajectories which never loop back on themselves in time, and sum over all those possible trajectories. Each trajectory gives a complex number. Add them all up. Now, how do you add up such a thing? It's some complicated kind of integral. But we needn't worry too much about the details of how you actually calculate it. That, according to Feynman, is the amplitude for going from here to here. It's the sum over all possible classical routes. A classical route does not mean a solution of the equations; it just means a possible route. It's the sum over all possible routes, whether or not they are solutions of the equations, of Newton's equations, of e to the minus i times the action, measured in Planck's constant units. Incidentally, Planck's constant has units of action. So this is simply the action in units of Planck's constant. The e arises because to go a little step involves something like one plus a small term in the action, and you have to multiply these all together. Yes, right, exactly. That's exactly right. The other thing here is the i; the i is part of quantum mechanics, what can I say? Yeah? That sum is the probability? That sum is the thing whose square is the probability. By square I mean times its complex conjugate.
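Written compactly, Feynman's rule as just stated is (in the sign convention the lecture is using, with e to the minus i S over h bar; many books write the plus sign, and only consistency matters):

K(x',t';\,x,t) \;=\; \sum_{\text{paths }x(t)} e^{-\frac{i}{\hbar}\int_t^{t'} L\,dt} \;=\; \int \mathcal{D}x(t)\; e^{-iS[x]/\hbar}, \qquad P = |K|^2.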
This sum could hardly be a probability because in general it's complex. It's got an i in it. If you multiply it by its complex conjugate, that is the probability. This is the probability amplitude, the complex thing. So this is Feynman's formulation of quantum mechanics, and now it can be extended to quantum field theory. In quantum field theory, there is a corresponding question. Now, we're not actually going to formulate it this way, but this is a correct way to formulate it. We're going to fudge it a little bit and make our way through it without any rigor. But the same idea: you start with an arbitrary configuration of the field. So that's some initial phi of x, x now being just x and not t. An initial configuration of the field. And now you look at some final configuration of the field. Let's call it phi prime of x. These are two different functions. And you can ask: what is the probability amplitude that if I start the field at some value and let the system go, I will then detect the field at a later time with some other value? Imagine you had some way of setting up an initial condition with a field being prescribed at every point of space. Of course, that's an idealization. You don't have a chance in the world of doing that, really. But say you had some sort of apparatus which allowed you to start the field in an arbitrary configuration, let it go, and then you had another apparatus which detects the values of the field at every point. Then you can ask: what is the probability that if you start the field in a given way, at a later time the field will have some given value? Again, that probability is determined in terms of a probability amplitude. And the probability amplitude is, again, the sum over all possible ways of interpolating the field between initial and final, all possible, exactly what I said, all possible ways of filling in between the initial state and the final state with a field value at every point, of e to the minus i over h bar times the field action. That's the integral, dt dx dy dz, d4x, of the Lagrangian, which is a function of phi and the derivatives of phi. So you evaluate the Lagrangian at every point and integrate it up. That gives you the action for a particular, let's call it a trajectory. It's not really a trajectory in any ordinary sense, but it's a history. Let's call it a history, better yet. For each history, each possible history that you can imagine, it doesn't have to be a true history or a real history, for every possible history you can imagine, there is an action. The action is itself an integral, but then you sum this over all possible histories, and that is the amplitude to start with a field value phi and end with a field value phi prime between these two surfaces here. That's the notion of a path integral. I'm not concerned that there is an infinite number of paths, but the action is going to be a finite number for each of these paths. Yeah. But it's a finite complex number, right? Yeah. So how can that converge to anything? Why is that not infinite? No, there are many, many integrals. Well, okay, you're asking. Let me phrase it a little more strongly. Adding up an infinite number of numbers, well, that's nothing special. I think what you're pointing out is that each one of these numbers is not only finite, but has a magnitude equal to one. Or something that is not decreasing. It has a magnitude equal to one. It's an exponential of i times something, right? So any number like this is a number which lies somewhere on the unit circle in the complex plane. You're not minimizing it. You're summing over all.
You're just summing over all of these amplitude, okay? These things do converge. They converge because of the oscillate. They all lie on the unit circle, not on the infinite circle, on the unit circle. And yet the integrals do converge. This is definitely a golf box. For example, that's one example, but there are many examples where integrals like this converge. The kind of thing that can't converge, well, if you have an infinite number of numbers, all of which are positive, and you add them up positive and bounded away from zero, all right? You have an infinite number of numbers, all of them positive and bounded away from zero. Of course, that's going to diverge, okay? In this case, you have numbers which can cancel over here as one trajectory, over here as another trajectory, over here as another trajectory, these are the amplitudes for different trajectories, and this one cancels this one. So it's not hard for an integral where the- Well, you would say if they're equally distributed around the unit circle, then you get- Zero. Yeah. In fact, most of them, except for a small fraction of them, they are- That's right. They tend to cancel a lot. The only trajectories which tend not to cancel are the ones near the classical trajectory. We don't have to discuss that now, but that is the way that you go from quantum mechanics to classical theory, at least in this formulation. If you look for the particular trajectories where you have the least cancellation, those are the trajectories of stationary action. The trajectories where the action is minimum. But that's another story. This is the quantum mechanical path integral formulation that much of modern field theory, basically all of modern field theory, quantum field theory is based on. All right. Now, I'm not going to derive the next step. I'm simply going to state the next step, but I wanted to at least explain to you what this quantity is before I show you how in practice it's used. Now when I say in practice, this is a bit of an oversimplification, but not too bad. Not too bad. The last time we talked a little bit about how the Lagrangian is used to calculate processes, particles moving from one place to another, particles interacting. When I say calculate processes, what do I really mean? I really meant calculate the probability for an initial state to go to a final state. But there are two distinct ways to think about relative, well, about quantum field theory. One is in terms of fields and the other is in terms of particles. We know that there is this duality between particles and fields. We could ask a totally different, apparently totally different question. Instead of asking suppose you had a given field value here and you want to know what the amplitude is for a final field value, we could ask supposing you had some incoming particles, quanta of the field. We express things in terms of field quanta rather than in terms of classical field configurations. Supposing I told you that the field initially was in a state which was described by a particular collection of incoming particles. Incidentally when I say the field, I may mean a collection of fields. So I tell you not the value of the field along here, but rather the particle content coming in. And I tell you what the particle content is going out. And I ask you what's the probability that you went from one particle content to another particle content? 
It's a question of a similar kind of question, but expressed in terms of the particle representation of quantum field theory rather than the classical field configuration. And the answer, not surprisingly, involves exactly this same object. We talked about it a little bit. I think what I told you last time was not incorrect, but it was a real small piece of it. I said what you do is you took one, I'll just remind you of something I said, and then I'm going to say it again, but in a more correct way. You take one plus the Lagrangian, remember what we did. We said let's divide up into space time into lots of little cells. It's hard to give meaning to objects like this, like this path integral directly. The way to give meaning to it is to divide up space into lots of little cells. And instead of thinking about continuous functions or even discontinuous functions, think about the value of the field in each one of these cells. That makes it more concrete. And then in the end, you let the size of the cells go to zero. So let's not let the size of the cells go to zero. Let's keep them finite like this. What I told you is that you take the quantity one plus the Lagrangian in each cell, in the ith cell or it l sub i, that's the value of the Lagrangian in each cell, and you multiply it for all the cells. Remember I said that? If you don't, it doesn't matter because we're going to say it again the right way this time. And then we did something with this to try to calculate, and I tried to show you that there are pieces in here which describe the propagation of particles, the collisions of particles. So let's go back over it again because it really is central. All right. First of all, I missed being tired. I was not terribly clear. It's really one minus i times the Lagrangian in each cell. Now one minus i, one plus a small, this is for the moment, imagine this is a small quantity I shouldn't, actually it's not one plus the Lagrangian. It's one plus the action in each cell. Now the action in each cell is the Lagrangian times the space time volume in each cell. Space time volume is delta x, delta y, delta z times delta t for each cell. Let's call that a small number, let's just call it the space time volume, let's call it a to the fourth, where a is a small number. So each little cell is small, so the action in each cell is itself small because the cell is small. Now one minus or one plus a small quantity, let's call it one plus epsilon, is an approximation to e to the epsilon. e to the epsilon is, this is approximately equal for small epsilon, but the exact formula is a power series in epsilon. That's one plus epsilon plus epsilon squared over two plus epsilon cubed over, what comes next? Three factorial, which is six, three times two times one, and so forth and so on. So one plus epsilon is approximately for small things equal to e to the epsilon, but in fact it's more efficient and correct to really write something different than what I wrote over here, namely this thing. Let's think about what this thing is. In fact, don't you have to put an h bar in the... Here? Yeah. All right. I'll probably wind up setting it h bar equal to one as usual, but right. Okay, what is this object? All right, this exponential here, what's in here, we're going to imagine replacing not by an integral but by a sum. The integral of a space and time here, just imagine that we've replaced it by the sum. And what is it a sum of? It's the sum of the action in every little cell, right? 
Each when I replace the sum of the integral by a sum up in here, when I replace the integral by a sum up in here, what I'm really doing is just adding up the action in all these little cells. Now, the thing about a exponential is that the exponential of a sum is the product of exponentials. e to the a plus b is e to the a times e to the b. So this can also be written as another way, and another way. You forget the summation here for a minute, that's summing over paths. Before we do that summing over histories, what this object is, it's an exponential of a sum, so it's also the product of e to the minus i over h bar a in the first cell, e to the minus i over h bar action in the second cell, a is action, e to the minus i over h bar a in the third cell. And it's exactly this kind of product, product over all the cells, not of one minus i times the action, but of e to the minus i times the action. They're close to each other. It actually wouldn't matter which we used in practice. But okay, so let's now consider question. Okay. All right, so now having this form here, let's go back to what, let's forget particles for a minute and think about fields and re-express the path integral idea. So here we have a region of space-time that's been chopped up into tiny little cells. Enough. All right, what's the idea of an initial condition? The idea of initial condition is to start on the first row of cells here and give the value of the fields at every point in there. That's an initial condition. That's the analog of an initial condition. A final condition is to specify the fields in the last row. So the question then is what is the probability amplitude that if a field is specified in a certain way on the first row and the field is specified in the last row, what's the probability to go from one to another? Or the amplitude. The amplitude that if you start at a certain way and let the system run, you will later find it in the final state. And the final state now means the field in each one of these cells here. Answer? You take e to the minus i times the sum of the action in all the cells, but we now realize that this is simply the product of e to the minus i of the action in each cell. We multiply together the action in all of these cells for a given history, for a given history, for a given history, which means a given value of the fields in each cell. And we multiply them all together. We compute the exponential of the action and then we sum it over all ways of populating the cells with fields. All possible histories means all possible values of the fields that could be in every cell with the exception of the first row and the last row. The first row and the last row are fixed by the initial and final conditions. In other words, by the initial and final conditions. We don't look at the field in the interior. We simply start the system and then detect it. And the role is sum over all possible field configurations that could exist in between. So that has reduced this idea to a discrete form in which we see that what we have here is this infinite product, or this product over actions. All right, now let's try to formulate some ideas about particles. Instead of asking the question, what if we start with a given field configuration? What if we start with a given particle configuration and end with a given particle configuration? What is the amplitude to go from one to the other? So I'll tell you how you think about it. I know we're not going to derive this. We've talked about it a good deal. 
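Before switching to the particle language, here is the field-side bookkeeping of the last two paragraphs in symbols:

1+\epsilon \approx e^{\epsilon}, \qquad S_i \approx \mathcal{L}_i\, a^4, \qquad \exp\!\Big(-\tfrac{i}{\hbar}\sum_i S_i\Big) = \prod_i e^{-iS_i/\hbar},

so the amplitude becomes a product of one small factor per spacetime cell (a to the fourth being the cell volume), summed over the field values in every interior cell, with the first and last rows held fixed by the initial and final conditions.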
We've talked about the ideas a good deal. We've talked about the fact that fields are made up of creation and annihilation operators. And what we're multiplying together here is functions of the fields. Functions of the fields and therefore functions, yeah. I read somewhere that someone thinks that field quanta are not necessarily identified with particles. All right. You would disagree with that? Well, I'm not sure who and what was the context. We've got field theory. No, I'm not sure who said it and what the context was. But you are... Well, give me a little more to go on. The question was, somebody said something. My grandmother probably said it. The field quanta in quantum field theory are not necessarily identifiable as a particle. The only thing I can think of is that he was talking about quarks. And of course, in some sense, quarks are particles. But in some other sense, they're never detectable as particles because they can never escape from one another. That's the only thing I can think of. So quarks are the quanta of the quark field, but they're never directly detected as separate quarks, well separated from other quarks. That's the only thing I can think of that he might be speaking about. Well, I guess during unitary evolution, it doesn't necessarily boil down to observables. And so you may not have particles that are identifiable at intermediate states. I don't know what they're... If you accept the idea of a quark as a particle, it's true that there's not a one-to-one correspondence, necessarily, between the fields in a theory and the particles in the theory. That's an idea which is true as long as the coupling constants in the theory are small. So there is not necessarily a one-to-one correspondence. Yeah, I mean, it is true. There are field theories that for one reason or another, you wouldn't describe their quanta as conventional particles. But at some level, it's just the difference between words. If you define particle to mean quanta, then there's no difference between them. They're indivisible. They carry energy and so forth. So for my money, I would call them particles. But yeah. Can you talk about some of the histories? Is there some limitation on what histories you can talk about? Do they have to sort of continue this in some sense? No. No, in fact, they don't have to be continuous. First of all, of course, the idea of continuity on a discrete space like this doesn't quite mean very much. The field in the neighboring cell is just going to be a different value of the field. The rule is divide the space into cells. And actual practice, this is the way quantum field theory is defined. You divide the theory into cells. You sum over all possible ways of populating the cells with values of the fields, all possible ways. No restriction to things which are approximately continuous. And in fact, quantum fields are not approximately continuous. They jiggle a great deal. And you sum over all the possible values of the fields. And that gives you your input. So in principle, for each field and each derivative, you can assign it all possible values from minus and from minus. Well, you don't assign the derivatives separately from the field. The derivatives, of course, are related to differences of the fields in neighboring boxes. So you populate with field values, and then derivatives are replaced by, yeah. And you can use, there are different rules that you might adopt. You might define the derivative here to be the difference of the field here and here. Right. 
Yeah, that's a minor detail. Yeah, but in any event, you might take the real line up and break it up into small intervals, but you'd still have a sum over a countable number of possible values of the field. For each field, and if there were two fields, then you'd be doing that over. Yep. By the field here, I mean all of the fields. Right, so you would give a value to each possible field in the theory at each. This looks like- It's a really nasty-looking computation. Absolutely. Well, so since these are complex functions, they're analytic functions, so- Well, the analytic functions of- But the relationships between the values of the field have to conform to what you learn in complex variables. No, no, complex variables is about the theory of analytic functions. Analytic functions are extremely smooth. The smoothest functions you can think of. These functions do not have to be smooth. They're complex valued, but they're not analytic functions. The field as functions of position, definitely not. They're on the average highly discontinuous. If I were to do the computation of the field here, how do I know- That would converge and go all the right way? Yeah, I mean how many different fields I have to consider? How many different assignments have to- So of course this by now is a major industry called lattice gauge theory or lattice quantum field theory lattice because it divides the world into a lattice. And this has been studied to death, how to do these integrals in practice. By now it is a very effective tool, but it took some 30 years to develop the computing technology. I was involved in it in the very, very beginning. I wasn't involved in the computer technology of it, but setting up the rules for lattice gauge theory. So there was a history. The first part of it was setting up the rules for it. And that took about three weeks and then something like 30 years to develop the technology to compute these things. And you're right. I mean, how do you know when you've sampled enough of the space? So there are- and a lot of the wisdom of it came from statistical mechanics where you do very much the same thing. You calculate partition functions or probability distributions for this checkerboard here, not a quantum field theory. It could be a real crystal lattice and you might be summing over configurations of whether there is or isn't an electron at each site. So a lot of the methodology came from the quantitative study of statistical mechanics systems. We have the same question. How do you know when you've sampled enough? This is not the subject of tonight's lecture. So the answer is the experts know. Well, they think they know. And they get good answers. They get good answers and agree with experiments. So it seems to work. Okay. So let's go back and remember that quantum fields are a shorthand for creation and annihilation operators. And if we're talking about, let's say, the product of some fields in this box, times the fields in this box, that can represent the annihilation of a particle in this box and the creation of a particle in a neighboring box. So we might just represent it by a particle moving from one box to another. What kind of things in the action here actually do correspond to a particle moving from one box to another? Let me tell you, there are things in the action here which, strictly speaking, are not associated with one box but with a pair of boxes, namely the derivative terms. 
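Concretely, on a lattice with spacing a, a derivative turns into a difference of the field in neighboring cells, schematically:

\partial_t\phi \;\to\; \frac{\phi(t+a,\mathbf{x})-\phi(t,\mathbf{x})}{a}, \qquad \tfrac{1}{2}\,(\partial\phi)^2 \;\to\; \frac{1}{2a^2}\Big(\phi(x)^2 + \phi(x')^2 - 2\,\phi(x)\,\phi(x')\Big),

and it is the cross term, phi of x times phi of a neighboring x prime, that ties pairs of boxes together: annihilate a quantum in one cell, create one next door.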
The derivative terms, I was a little bit hasty here when I said you multiply all these things together, one for each box, as a little bit of a cheat because there are terms in the Lagrangian which are associated with pairs of boxes. So really you can think of it as summing over the boxes and summing over neighboring pairs of boxes, but that's a detail. Which terms involve pairs of boxes? The terms which involve pairs of boxes are things like the derivative of phi with respect to t squared. The derivative of phi on a lattice becomes the difference of phi in neighboring boxes. So, in here, this derivative of phi with respect to t might really be written as phi at point x, well, phi at point t and x minus phi at point neighboring time and x. Do we actually have second derivatives? No second derivatives. These are first derivatives. I mean, would we? Never. Okay. Never. Bad idea. That was real damage. Now, what about derivatives with respect to spatial coordinates? Same thing. It corresponds to the difference to neighboring points in space. And then the instruction is to take one half of the square of the derivative. Well, one half of the square of the derivative will have in it. I just want to focus now on the terms which multiply phi times phi at a neighboring point. Let me just focus on those. There's also terms in here which multiply by phi by the same, by the value of the same point. But in particular, there are terms in this Lagrangian which multiply phi at one point times phi at a neighboring point. Every time you lay down a term like this, it represents the motion of a particle from here to here. You can think of it that way. Okay. Let's concentrate on these terms. Let's forget the interactions in the Lagrangian. These are simply, these are called the kinetic terms, the quadratic terms in the Lagrangian. They're things which are easy to deal with, and they do correspond to motion of the particle from one point to another. Let's look at this. And let's include only those. We don't even need to write it in this form. Let's just write it as e to the minus i times the sum over all pairs of boxes, over all neighboring pairs of boxes. I said over all boxes, but it's clear that's not quite right. We want to think of it as sums over neighboring pairs of boxes of things like phi in one box times phi in a neighboring box. I'll use the notation x and x prime to represent neighbors on the lattice. That's what goes into the action, or that's what goes into the exponential of the action. There are some coefficients in front of it, of course, but that's of secondary importance. And now we can expand out this exponential. Let's see what's there. As one, minus i times the sum of phi of x, phi of x prime. And then things like, let's see, what's the next term? i times i is minus one, so it looks like it's minus. One of phi of x, phi of x prime squared times another factor of the sum of phi of x, phi of x prime. This could be phi of x prime, x double prime, x triple prime. X double prime and x triple prime are one pair of neighbors. This is another pair of neighbors, and I think I left out a factor. There should be a two factorial downstairs. I'm simply expanding out the exponential here. And what's the next one? Well, it has three powers of the sum, four powers of the sum, and so forth. Let's look at each term here. The first term has something which involves a sum over the lattice of an annihilation of a particle at one point and a creation of another particle at another point, at a neighboring point. 
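In symbols, the expansion being read off term by term is (with the angle bracket running over nearest-neighbor pairs and the numerical coefficients from the Lagrangian suppressed):

\exp\!\Big(-i\sum_{\langle x,x'\rangle}\phi(x)\,\phi(x')\Big) = 1 \;-\; i\sum_{\langle x,x'\rangle}\phi(x)\,\phi(x') \;+\; \frac{(-i)^2}{2!}\Big(\sum_{\langle x,x'\rangle}\phi(x)\,\phi(x')\Big)^{2} \;+\;\cdots,

so the term of order n supplies n elementary hops between neighboring cells.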
The next term, oh, incidentally, the rule that I'm going to tell you again, I mean the business of telling you some rules now, I'm going to tell you some rules now, the amplitude for going from one thing to another, the rule is you've got to close off, you must not have a dangling endpoint. A dangling endpoint can only dangle like that if you've got an operation to put a particle into the system. A dangling endpoint, for example, we're going to have diagrams which have particles going from one point to another. If they end like this, they're illegal unless there's an instruction to put a particle in at this point and to take a particle out at that point. So endpoints like this, we will simply rule out as things unless they correspond to a specific instruction to put a particle in at this point and take it out. Because we're only putting particles in in the initial state and taking them out in the final state, the only endpoints would be allowed on the top and the bottom. Okay, so let's take a particular term here, phi of x and phi of x prime. These are neighboring points on the lattice and they correspond to a motion of a particle from one place to another. There's a sum of terms. There could be a term for a particle from here to here, a term for a particle from here to here, a term for a particle from here to here. But none of these will contribute unless, of course, there would be one situation in which they would contribute. That would be, of course, if this layered structure here was only two layers high and we put in a particle over here and took one out at the neighboring point. Then there would be a contribution to the amplitude of that coming from this term in the product, phi of x times phi of neighboring x. That would contribute and it would contribute to the amplitude, basically the factor minus i. It would tell you that the amplitude to go from one point to another was simply minus i, this coefficient here. Okay, but that's not very good if this, how do we get from here to here? From here to here. How do we get a particle from here to here and calculate the amplitude that if we put in a particle over here, that will detect it over here? For that, we have to find in this sum of products here, we have to have a term which will leave no dangling ends. Okay, let's just take this term here. This has a particle moving from x to x prime and then another particle moving from x double prime to x triple prime. Are you saying the second term contributes only when the initial and final state are adjacent? This term here, yeah, yeah, that's right. This notation means neighboring particles, neighboring boxes. Here we have one neighboring box and here we have two other neighboring boxes. Yeah? Well, now we have the five x prime, the first term is five x five x prime. Yeah. And the second term is five double prime. Yeah, it just means pick two neighboring and sum over all possibilities. Sum, this is a sum over all neighboring pairs. There doesn't have to be adjacent to the first. No, no, no, no, in general not, but it can be. In fact, this could even be the same pair, but it doesn't have to be. Sum over all of them. All right, so what does this one do? It moves a particle from x to x prime and this one moves a particle from x double prime to x triple prime. So one of them moves a particle from here to here. This is x prime to x double, x, from x to x prime. And then the other one moves a particle from here to here. That's x double prime and x triple prime, let's say. 
Obviously, there's going to be dangling ends unless, well, let's take this case here where we only have three layers. If we only have three layers, the only way to avoid dangling ends is to have the first particle connected to the second particle, sorry, the first box connected to the neighboring box and then this box connected to this box. There won't be any dangling ends except for the dangling ends which correspond to the initial particle and the final particles. So if this was only three layers thick, we would find a contribution here, namely where x prime is the same as x double prime, that transports the particle from the initial position to the intermediate position and then from the intermediate position to the final position. What would be the amplitude then associated with the particle going from here to here, at least corresponding to this term? It would be a one over two factorial and a minus sign. That would be it. We would go off the coefficients and the coefficients tell you the amplitude. Thus far, we don't have a way to get from here to here, not with two steps anyway. Two steps can, however, if this was only two layers thick, we would have now a way to get from here to here, to go across the diagonal. How do we go across the diagonal? We go again to this second order term here, the term which has a two factorial in it, and we find the term which takes us from x to x prime and then from x to x prime and then from x prime to from here to here and then here to here. What's that? There are other paths to get from here. Yeah, there's another path where you jump from here first to here and then up. I don't think there's any other besides that. Yeah, not with just two terms. Right. Okay, but now there was no reason to stop here with only two terms. Let's go on. We can have three terms. Three terms would correspond to going from one box to an adjacent box. Well, it would correspond to three distinct steps. And if those three distinct steps were connected together into a chain, in other words, in this sum of products, we found the term which took us, we could come right back again, that would contribute together with the direct jump from here to here, would contribute to the amplitude to go from this point to this point. So you see, built into this prescription is something like the original particle path integral idea. The amplitude to get from here to here is the sum over the amplitude to go from here to here and the amplitude to go through another root, also this one back here. And eventually, if you expand this out to arbitrary order, there will be a term in there for every possible root that you can take to go from any initial configuration to any final configuration of that particle. So first of all, just thinking about a single particle moving in spacetime, this field Lagrangian contains information about the amplitudes to go from, now let's take an arbitrary, to go from any point to any other point. What you do is you add up all of the possible ways of going there and what is the coefficient for each way of going there? You read it off the coefficient that multiplies that particular term in this sum. One over two factorial, one over three factorial, some of them have eyes, some of them don't have eyes. Remember that I squared is minus one, so some of them have eyes and some of them don't have eyes and so in general, the amplitude to go from one point will be a complex number. 
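Put schematically — this is just the bookkeeping being described, with A and C_P my own shorthand for the amplitude and for the coefficient attached to a path:

\[
A(a\to b)\;=\;\sum_{\text{paths }P:\;a\to b} C_P,\qquad
C_P\;\sim\;\frac{(-i)^{\,n(P)}}{n(P)!}\times(\text{hopping coefficients}),
\]

where \(n(P)\) is the number of hops in the path \(P\): a single hop contributes \(-i\), a two-hop path contributes \(-1/2!\), and so on, each read off the corresponding term in the expansion above.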
That's one of the things that's new here, that they can propagate up and down and that's a feature of relativity. What's that? Oh, yeah. Oh, indeed. Yeah. Yeah. All right. Let's talk about going backward in time. You could think of, let's use one that goes backward in time. You can think of this as either allowing a new rule where particles can go backward in time or you can follow time forward and say what really happened here is the particle moved from here to here and then a particle pair, particle and antiparticle in fact, were created, the particle half of it going to here and the antiparticle combining together with the original particle to annihilate. Think of it either way. But the rule now allows trajectories which go backward. Let's come back to this term here. In this form, since you have n factorials in denominator, does this mean that you sort of have convergence in the nulls? Well, the n factorials certainly help the convergence, but they're not enough to make it converge. Yeah. That's right. No, no, no, no, no, no, but the sum over convergence. Configurations doesn't have to converge. Yeah. This part converges. Yeah. And this is going to be some finite number of total configurations considered. Yeah. Okay. Let's, all right, now let's go on to a slightly different problem. We studied the particle moving from one point to another. Now let's suppose we put in two particles and we want to know what the probability for the two particles to go to two other particles is. In other words, we want to start, and I'm going to take the case where there are only two layers for a moment. There's the case where there are only two layers for a moment. And I want to know the probability that if I start a particle over here, it will get to here. No, if I start two particles. One over here and one over here, that they'll get to here. Okay. Well, come back to this term. Now remember what this term did for me before. Before it allowed me to hop three units. But that same term allows two particles each to go one unit. Look at it. This can create a particle. This can annihilate it. This can create a different particle and annihilate it. This corresponds to a graph where you start a particle in here and goes to here. That's this factor. And the other factor, instead of taking the particle that's already there and moving it over here, it just takes a totally new particle and moves it to here. So this same factor, the same term contributes both to the motion of a single particle, three boxes, and it contributes to two particles each moving one box. Did I say that right? More or less. Three boxes, one, two. Two boxes. Yeah. So there's a lot in here besides just motion of a particle from one place to another. It has information in it about any number of starting particles going to any other number of final particles. Now in fact, particle number doesn't change as long as we just take this into account. Why not? Because every starting, every dangling endpoint here, well, let's see. Yeah, it has to end somewhere. Well, if you start and end with different numbers of particles, then obviously something has to happen. Well, that would mean there has to be a dangling in somewhere. Supposing I wanted to, I mean, I can't, how would you get from one particle to two particles? What if I start with electron and positron, I could end up with no particles? Well, OK, that would be something, that would just be something like this. Two particles. Let's put another layer in here. Yeah. Yes. That's true. 
That would correspond to an electron and a positron, for example, annihilating each other. The only problem with it is it doesn't conserve energy. So in fact, when you add it up, all these amplitudes, you would get zero. But under certain circumstances, if there was an electromagnetic field to soak up the energy, yes, you're right. It would have information about an electron and positron annihilating. Right. So there's just a lot in here. There's a lot in here. But not everything. Not everything. To find out the other things that are implicit in the field theory, there are other terms in the Lagrangian. Now what you put in the Lagrangian is determined by experiments. And it's really just a way of codifying the results of experiments. But what else can you have in the Lagrangian? We had the derivative terms, let's say, phi dot squared minus the derivative of phi with respect to x squared and y squared and z squared, all that kind of stuff. That was the stuff that I wrote down as phi at one point times phi at another point, at a neighboring point. Then there are possibly other things that just involve phi and not its derivative. So for example, you could have things like phi squared. Remember what the coefficient is? It's usually a 1 half here. That's a convention. Remember what the coefficient of phi squared is? The square of the mass of the particle. We should divide by 2. That's, again, a convention. And if we included this term here, what that would do, it doesn't move a particle from one place to another. It absorbs a particle at one point and emits it from the same point. From the same point. So that would correspond to graphical constructions where we would add a new rule: if a particle enters a region here, going from one box to another, one of the terms that can act is this mass squared term. It would simply take the particle and do nothing to it. It would leave it in the same spot. Then another term could come and move it, but it would weigh that particular path with a factor of m squared. Every time this term acts, it weighs the path with another factor of m squared in the amplitude. So there would be terms where no m squares act, where only these derivative terms act. In other words, when we're expanding this product out, we only have the derivative terms. That would be the theory of massless particles. The theory of massive particles would have a new rule that one of the possible things that can happen at every step is you may or may not just weigh that path at that point with the m squared. So the mass term is another rule for how you evaluate these paths, these path integrals. Another rule. We don't have to get into detail. The main point is that the mass of a particle is codified by another term in the Lagrangian, which is m squared phi squared, which weighs paths in such a way that it knows about their mass. All right, but more interesting are the more complicated terms, things like phi squared — sorry, phi cubed; let's put a g in front of it. Phi cubed can do a number of things. It can annihilate one particle and create two particles. It can annihilate two particles and create one particle. It can create three particles. It can annihilate three particles. All right, so that's a new kind of thing that can happen on this checkerboard here in expanding out the action, expanding out the exponential. There might be additional terms here, for example, g phi cubed.
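Collecting the pieces mentioned so far, the Lagrangian being described looks roughly like this — with conventional signs, which the lecture is not being fussy about at this point:

\[
\mathcal{L}\;=\;\tfrac12\,\dot\phi^{\,2}\;-\;\tfrac12\,(\nabla\phi)^{2}\;-\;\tfrac12\,m^{2}\phi^{2}\;-\;g\,\phi^{3}.
\]

On the lattice, the quadratic derivative terms are the hopping terms, the \(\tfrac12 m^{2}\phi^{2}\) term weighs a path by \(m^{2}\) each time it acts (the particle just sits in its box for that step), and the \(g\,\phi^{3}\) term is the vertex where one particle can turn into two, two into one, and so on.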
When we expand out, we can have not only these products which take a particle from one point to another, but also processes where particles are created and annihilated in threes, either two in, one out, two out, one in, or three in, or three out. What that looks like on here is, for example, supposing we have several of these quadratic terms which move particles in one cubic term. What can that look like? Well, what it can do is it can take a particle, moving it from here to here, and suddenly in that box, we put a phi cubed. In that box, it can absorb one particle and emit two more. Now, the next thing that could happen is the two particles could move to other boxes. But basically what it does is it takes a particle, absorbs it, and creates two, and those two particles can be on their way. And this would then contribute to a process where one particle came in, two particles went out. In fact, you can make very, very complicated things. You can make things of, let's go back to the one particle going from one place to another. One particle going from one place to another. Let me stop drawing checkerboards. Let's just imagine the checkerboard there. One particle going from one place to another can go directly, just a bunch of small hops, a bunch of these hops. But also something else can happen. On the way, at some point, the cubic term could act. If the cubic term can act, it can take that one particle coming in and split it. Split it into two. And then both particles move along until another cubic term, something quadratic in the cubic term, something with two cubic terms in them, can create an extra particle and reabsorb it. So in fact, the actual amplitude to go from one point to another by one particle is not just the sum of simple trajectories going from one point to another through the space time, but is more complicated. It has infinitely more complicated. It has processes where the particle splits and rejoins. In other words, where the particle emits a second particle which is then reabsorbed. An example of this would be an electron moving along and emitting a photon and reabsorbing it. Part of the amplitude for an electron to go from one place to another is through the process of emission and absorption of a photon. It can get wildly more complicated. Of course, we have to add all of these up. The rule is add up the amplitude for every possible way to go from the initial state to the final state. You could have several of these splitting and joinings. You could have things jumping across the gap between particles. And this can get arbitrarily complicated from the point of view of electrons and photons, things like this could happen. Electron goes through, photon emitted, another photon emitted, but even worse, the photon can break up into an electron and a positron. But then the electron and positron can exchange a photon between them. All of these processes come out of expanding out the action like this and finding the individual terms which do all of these things. If you want the amplitude for any one of these processes, you just go back to what appears here, find the term that you're looking for, and find the coefficient in front of it. It may or may not have an i, depending on whether it's an even power or an odd power, and it will have some factorials and so forth. That will give you the amplitude for a specific process. But then the full amplitude to go from one thing to another is the horrendously complicated sum of all of these things. 
Right, so you may have certain things that happen in multiple of those histories. In multiple of those histories, you may have similar terms, which then the overall amplitude for that process would be actually from all the different histories. Yeah, from all the different histories going from the initial to the final. So there are two ways to think about field theory. One is to think about fields and think about histories as initial values of the fields, final values of the fields, and you sum over all the values of the fields in between. And the other is this particle way to think about it, where the field is replaced by a distribution of quanta, and again, amplitudes for initial configurations, which means an initial particular set of particles and a final set of particles. The basic processes, the basic underlying processes which happen are governed by this Lagrangian. That is the most important thing. Any time you have more than two fields interacting, you have an interesting interaction process where particles can split and join and so forth. What would happen if you had phi to the fourth here, which is a perfectly good interaction? That would be some place where a particle, for example, could come in and break up at the three other particles. One to absorb the initial state, three to emit the final state, or it could be a scattering in which two particles come along, collide, and two go off. Anything that looks like a vertex with four particles altogether, two in and two out, or two in, or three in, one out, or so forth, or just four out, or four in, they're all governed by this term. And so forth and so on. Let's see, how are we doing? So that's the spontaneous arrival of four particles at the final state? Yeah, except once again, it would violate energy conservation. So if you really worked it out and added up the amplitude from every position where this could happen, every spacetime position where it could happen, you would find out that the constant doesn't happen. It doesn't happen. In other words, they all add up to zero. So processes which violate energy typically all add up to zero in these things, energy and momentum. So the conservation of energy falls out of the new calculation? Yes. And the conservation of momentum, same thing? Yeah. Now, we did some examples. I'll show you how integrating the vertex over all time conserves energy, how integrating it over all space conserves momentum. Same thing is true here. Integrating the position of these vertices and the endpoints of these things over all possible positions in space and time will simply add up to zero unless energy and momentum is conserved. But apart from the conservation laws, everything that you can draw down will happen with some probability. Now, how could it possibly be that you add up this humongous thing and the answer doesn't come out infinite? Well, the answer is that there are coefficients here. There are coefficients here, these are called coupling constants, particularly the coefficients for the higher powers here. We could call this G3, we could call this G4. And the rule is in a diagram, in a Feynman diagram like this, every time you have a vertex, let's say with three particles coming together, the amplitude contains a factor of G, G3 in this case. So the more complicated the diagram gets, the more powers of G that it has. Each vertex gives you a power of G. 
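In other words — schematically, keeping only the coupling-constant bookkeeping and writing \(V_3\), \(V_4\) for the number of three-particle and four-particle vertices in a given diagram:

\[
A_{\rm diagram}\;\propto\;g_3^{\,V_3}\,g_4^{\,V_4}\cdots
\]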
If G is a small number, then each successive power of G gives you a smaller and smaller contribution and you have a chance that the thing might be able to converge. The technical question of whether it really converges or not is a very difficult one, but you can see that you have a chance if G is a small number. If these coupling constants are small numbers, then the simpler the process is, in other words, the fewer the number of vertices, the bigger the contribution, and the larger the number of vertices, the more complicated, the smaller the contribution, and you have a chance at any rate at some sort of convergence to an answer. If the coupling constants are large, it means these series do not converge, and if they don't converge, it doesn't mean that the theory is wrong. It just means you've chosen a naive prescription for trying to work with it. But in practice, most of the quantum field theory, most of the things we do with quantum field theory, are based on the assumption of small coupling constants. And in those circumstances, it's possible to evaluate these terms, not just evaluate them, but add them up and find out that the one with four vertices is much smaller than the one with two vertices. So that's the name of the game. Do we do color theory in the same way? Yeah. We're going to talk about color theory. Now the next step is to write down the list of all particles and to write down either in the form of Lagrangians or in the form of Feynman rules. The Feynman rules and the Lagrangians have the same information in them. Three identical particles coming together, let's say three bosons, that's phi cubed. Particles going from one point to another, those are the kinetic terms, the quadratic terms here. So the Feynman-type rules and the Lagrangian are the same thing. And you can describe the world by specifying all the vertices. Incidentally, the lines between one point and another are called propagators. The places where more than two particles come together, those are called vertices — vertices and propagators. But as I said, they're just simply another shorthand way of describing the Lagrangian. The Lagrangian is a shorthand way of describing the graphs. So we would have stated all of particle physics in a kind of dumb way by simply writing down the list of all particles, writing or constructing or even just expressing a quantum field for each particle, and then writing down a Lagrangian for all those particles, which contains all of the interactions; or we can write down all of the particles and simply specify what all the possible vertices are. Either way, they come to the same thing. They're kind of a complete list of everything that can happen. Once you know everything that can happen, you can start thinking about calculating amplitudes for particles to go from one place to another. Particles to collide. Particles to collide and scatter. Here's particles colliding and scattering, two electrons colliding and scattering just by a photon jumping from one place to another. Again, the Lagrangian tells you how to calculate the amplitude for each of these. As I said, it contains all of that information, including symmetries, including conservation laws. So I think it's probably time — shall we write down the list of all elementary particles? So when you write down a Feynman diagram, basically the bottom is the initial configuration and the top is the final configuration? That's right. The bottom is the initial configuration of particles.
The top is the final configuration of particles. As I said, it is the kind of complementary way to think about quantum field theory, complementary to the field description where you would start with a given field configuration and end with a given field configuration. Incidentally, in some sense, the complementarity between particles and fields is very much like the complementarity between momentum and position. These are two different ways to describe the same reality, and there are uncertainty relations between them. If you know with precision the number of particles, then you know that there's uncertainty in the values of fields. If you know the values of fields, then there's uncertainty in the number of particles. So these are really two complementary ways to describe the same thing. And the Lagrangian is useful in both contexts. Question. The couple of constants, are those predicted by theory or are they perfectly determined? Because under certain circumstances, there are symmetries which relate coupling constants. So this or that coupling constant for this process might, for symmetry reasons, be related to another coupling constant for a different process. So some of them are related to each other by symmetries. But their values in general are simply come from experiment. All right, much of this, if you really go and learn about quantum field theory, has its beauty. It does have elegance and beauty and so forth. Now when we start writing down the list of facts, what particles there are, what their masses are, what their coupling constants are, it's just a mess. It's just a very ugly mess with very little coherence, except the coherence that comes from symmetries. Symmetries tell you relationships between different kinds of particles and their processes. But apart from the symmetries, which are few and not so much, it is a large number of random facts about a somewhat unmotivated collection of different kinds of particles. But still you're pretty confident of the coupling constants because of your… Once you measure them, you measure them. If you change it a little bit, then you get wrong answers. Oh absolutely. No, no, no, I mean, okay, let's put it this way. You have too many particles, which means too many fields, too many for any aesthetic sense. You have a lot of them, a hundred of them or something. A lot of coupling constants, which are just more or less numbers pulled out of experiment, and a lot of masses, which range all over the map. A few relationships between them, but once you know them, once you know them, that's it, you can calculate with great precision anything about those particles, anything about those particles, anything about quarks, electrons and photons and so forth, means atomic physics, or it means particle physics, it means atomic physics, it means nuclear physics, it means chemistry, and maybe it means biology, maybe. So, the amount of input that goes in is perhaps bigger than you might like, but the amount of output is huge. Question for electrons, for example, how many of these G sub n's would be? G2, G3, G4, these be calculated? How many G's are there for electrons and photons? And electrodynamics? One, the electric charge. The electric charge? Well, in a sense, the electric charge is a kind of G3. The electric charge is a kind of G3. The electric charge is the coefficient in the amplitude representing an electron coming in, an electron going out, and a photon being emitted. 
In terms of symbols, this diagram would be represented by an electron going out, an electron coming in, and the field operator for a photon, which is A. I'm not writing down the details — there are some Dirac matrices in there — but the numerical coefficient in front of it is the electric charge in certain units, a certain dimensionless definition. The only processes in quantum electrodynamics are the emission and absorption of photons from electrons. That's all there is. Well, what about an electron collision with a photon — the simplest process would be a photon absorbed? I meant to say proton. Oh, well, protons are not usually considered a part of quantum electrodynamics. Now you can think that when you do quantum electrodynamics, you are doing atomic physics. You think of the proton as a point, infinitely massive. You don't think of it as something described by quantum field theory. You just think of it as a nailed down, heavy particle which never moves until you open up the theory of protons and neutrons. So the scattering by a photon, a photon by an electron, this is the first diagram. But the electron is moving at almost the speed of light relative to the proton? Proton, yes. Right. Yeah, that's right. Let's draw some diagrams just to get some feel for what kind of things we have to add up. This is the lowest order diagram that goes into the scattering of a photon by an electron. There are two powers of the electric charge. So there's an e squared. But this is the amplitude. The amplitude gets squared to find the probability or the cross section. Cross section is another way of speaking about the probability for the scattering. All right. So that means that for the whole process, the probability is proportional to the electric charge to the fourth. The electric charge, at least with the suitable definitions, is a small quantity. The square of the electric charge, again with the suitable definition, is about, well, that's the number which is 1 over 137, the fine structure constant. So this is proportional to the square of the fine structure constant. And so scattering in quantum electrodynamics is an unlikely process. A photon comes in and strikes an electron. What is the probability that it goes right through it versus the probability that it really scatters it? The probability that it really scatters it is governed by this e to the fourth, and it's a very small number. Now we have to add all the processes that we can find, add them up to form a real amplitude. Incidentally, there's another one here which looks like this, where the final photon is emitted before the initial photon is absorbed. That's another process. We have to add them all. But then we start adding more complicated things. Let's focus on here, and we can add another photon in many ways. This is not the only way we could have to go from here to here, from here to here, many photons. But how many powers of electric charge does this have? This has four powers of electric charge in the amplitude and eight in the probability. So this is even weaker. But then we can do even more complicated things. We can put an electron-positron pair in here. Each time we complicate it, we add two more powers of electric charge. So in quantum electrodynamics, the expansion is typically an expansion in powers of the square of the electric charge.
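For the photon–electron scattering just described, the counting goes roughly like this — with the usual caveat that the precise dimensionless definition of the charge is a matter of units:

\[
A_{\rm lowest}\;\propto\;e^{2},\qquad
\text{probability}\;\propto\;|A|^{2}\;\propto\;e^{4}\;\sim\;\alpha^{2},\qquad
\alpha\;\sim\;e^{2}\ (\text{suitably defined})\;\approx\;\frac{1}{137},
\]

and each extra internal photon, or electron–positron loop, multiplies the amplitude by roughly another factor of \(e^{2}\sim\alpha\).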
Every time you add another complicating structure to a Feynman diagram, it's two powers of electric charge, and it means it's about a couple of hundred times smaller in amplitude than the previous one. That's why when you add them up, it looks like it converges, because the more complicated they get, the weaker the probability from that particular configuration. Incidentally, you don't square this and add it to the square of this and add it to the square of this to find the probability. You add them up, and then you square. That's the rule. Add and then square. Add and then square. That means there's cross terms between them in calculating the probabilities, called interference terms. So that's the structure of quantum field theory. That's what we do with it. The next issue, as I said, we're now at the stage where we can really start to sensibly talk about particle physics. We can name all the particles. We can specify their masses. Those are the parameters in the Lagrangian here that we've spoken of. We can write down the Lagrangians governing them, partly from theory, partly from experiment, and discuss their symmetries. The symmetries can be discussed by looking at the Lagrangian. We'll do that. We can write down the standard model of particle physics and discuss some of its properties. So I think we're set now to move ahead and really discuss the world of genuine real particles and what can be discovered in the world. Instead of that m squared, you could add the Higgs field and then that would be a quadratic state. We're going to discuss that. What about the phi's in the derivative terms — are they at the same time? Oh, they're here, x and x prime could stand for two different times. That's how we manage to move vertically. All right. Now, where is the term that corresponds to the particle just staying in the same place? There's also some terms. Now, there are various terms where the particles just stand still. When we wrote down phi of x minus phi of x prime squared, that did have terms in it like phi of x squared and phi of x prime squared. Those are terms where the particles stand still. The cross terms are the ones where it moves. So there are terms where the particles stand still, both from this and from this term. Right? Yeah. Okay. For more, please visit us at stanford.edu.
(December 3, 2009) Leonard Susskind gives the tenth lecture of a three-quarter sequence of courses that will explore the new revolutions in particle physics. In this lecture he continues on the subject of quantum field theory, including the Dirac equation and Higgs particles.
10.5446/15071 (DOI)
This is Sanjay. Did we get Sanjay on the record? Yes, okay. So we just want to announce that we're trying to collect up all the old lectures, the video lectures that are on YouTube, in one kind of a collection. And the idea is for everybody who's taking classes online and offline, we can actually have an easy way to navigate to all the lectures, you should have to hunt for them, right? So we're passing out a little sheet that's got a web address on it. Right now this is a temporary housing for it and it should be good until whenever. But when we do get a final home, we'll have a link that'll directly go to it as well. So this link should be good for hopefully all eternity until we hit horizon and all that good stuff. All right. Okay. So this is the address. The H-T-T-P colon slash slash new packet tech, one word, new packet tech dot com slash resources slash, that's the address where it's sort of the website for the class. So we'll be the website for the class. And it's for anybody outside, inside, where they can easily access without all the confusion that's taken place up till now, easily access all of the lectures that are online. All right. We have been discussing a simple quantum field and we're not finished with it. I have to take a two-year course and believe me, a real course in quantum field theory is genuinely two years of work. It really cannot be done in one year sensibly. I have to take that two years of quantum field theory and condense it down to a couple of lectures. We've already taken a couple of lectures. But I think we've had some forward motion. I want to take the very, very simple version of a quantum field that we've already discussed. First of all, remind you what it is. And discuss how it is used, again, to describe particle processes. We've done a bit of this. We've talked a bit about how the quantum field can code or codify scattering processes or creation and annihilation processes of particles. But I want to go into it in just a little more depth so that you can see where some of the really interesting aspects of quantum field theory come from. And how they influence questions like energy conservation, momentum conservation. How are they related to these quantum fields that we've discussed? Okay, before I do so, we need a little bit of mathematics, a little bit of formal mathematics, not much. It's mathematics that we've done before in these series of classes. But I want to get them up on the blackboard. The first thing is the Dirac delta function. Just to remind you what it is, is a function which is sort of a limit of a genuine real function. It's a function of, let's say, it's coordinate. Let's for the moment just call this coordinate anything. It could be x. It could be k. Let's just call it y so that we don't prejudice whether it's something that I've defined previously. The Dirac delta function is a function which is a sort of lump. It's concentrated someplace. It's concentrated someplace and not other places. But it's the limit of an infinitely sharply concentrated function. So we imagine that we can go to a limit where this lump-like function is infinitely narrow. Now, I can't draw it as infinitely narrow, so I'll draw it with finite width. But imagine in your mind narrowing it, narrowing and narrowing it. Now, of course, if you narrow it without raising its height, the area under it will decrease and decrease and decrease to the point where there's no area left under it. 
So what I want to do is to keep the area under this function fixed as I decrease the width. So as I decrease the width, I'm going to raise up the height of it in such a way that the product of height times width stays constant. How constant? One. Just one. An area of one located someplace infinitely narrow. That's called a Dirac delta function. If this point is y equals, let's call it a, just to locate the particular point, then the Dirac delta function is delta of y minus a. And delta of y minus a is zero whenever y does not equal a, over here or over here, and equals something infinitely sharply peaked at y equals a. That's the Dirac delta function. And it has the property, by definition, that the area under it, the derivative, the integral with respect to y, is one. So it's so narrow and so high that the area is one, and it's concentrated at the point where the argument of the function is equal to zero. In other words, when y equals a, that's the Dirac delta function. Okay, now I want to show you how the Dirac delta function emerges from a certain integral, an integral that we will come on many, many times. Let's take the function e to the ikx, k, oops, e to the ikx. Now, again, as we discussed last time, we're discussing this on an interval, which is a periodic interval, which has total length all around it equal to l. So the distance around here is equal to l. I don't know if this, just the distance around it is equal to l. That's periodic, and so functions that live on this periodic space here should be periodic, meaning to say they should come back to themselves after one full loop. Okay. Now, what is k? First of all, k is one of the allowed values of wave number on this. In other words, one of the values of k where e to the ikx is periodic. So let's assume an allowed value there, and let's take this function and integrate it over the entire cyclic x dimension here. We could take x going from zero to l, or for symmetry, I could take it to go from minus l over two to l over two. In other words, instead of starting x at zero and going to l, I could start it at minus l over two and go to l over two, put zero at the center here. Nothing special about this, it just symmetrizes things nicely so that the negative half and the positive half are symmetric. When you leave the space, if you're marching along and you come to l over two, you pop back up at minus l over two. Okay. So we're taking a function, a periodic function, e to the ikx, and integrating it from minus l over two to l over two. What is the answer? Okay, first of all, what is the answer if k is equal to zero? l. This is l, let's write this, equal to l if k is equal to zero, and what if k is not equal to zero? Zero, because if you take a periodic function that oscillates and you integrate it, then you get zero because it's positive as much as it's negative. Only if the k equals zero, in that case, of course, this doesn't oscillate, it's just equal to one, then you get l. So, now let's think of this as a function of k. Let's draw the k-axis. Let's draw the k-axis. Here's the k-axis. Now, k is not just any number, it's one of the allowed numbers. So it's discrete. Does anybody remember what the allowable values of k are? Two pi n over l, right? In particular, as l gets bigger and bigger, the distance between neighboring values of k gets smaller and smaller. And eventually, as l gets infinitely big, these discrete intervals shrink to zero. Okay, so now what's the distance? Let's put k equals zero. 
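So far, then, the two ingredients on the blackboard are the delta function and the allowed wave numbers — in symbols, writing \(L\) for the length of the periodic interval:

\[
\delta(y-a)=0\ \ \text{for }y\neq a,\qquad \int_{-\infty}^{\infty}\delta(y-a)\,dy=1,\qquad k=\frac{2\pi n}{L},\ \ n=0,\pm1,\pm2,\ldots
\]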
k can be positive or negative, incidentally. Let's put k equals zero right over here. Here's k equals zero. All right, what's the interval between neighboring values of k? Two pi over l. That's the distance from k equals zero, or from n equals zero to n equals one, for example. Two pi over l. Now, let's take this function. This function is a function of k, of course. We've integrated, I'm sorry, I've integrated it over x. It's only a function of k. What is it equal to? Let's plot it on here. It's equal to l when k is equal to zero. So right at k equals zero here, it's high, and it's equal to l. Let's just give it a slight width just for the purpose of drawing it. Just for the purpose of drawing it, right at k equals zero here, it has a height equal to l. But at k not equal to zero, it's zero. That's what we concluded. At k, it's equal to l if k is equal to zero, and it's equal to zero if k is not equal to zero. So, how high is it? It's height l. How wide is it? Well, one interval. You can't think of anything smaller than one interval on here. It has a width 2pi over l. What's the area if I imagine giving it that much of a width? What does the area under it? 2pi. So that means that this function is 2pi times the delta function. This function here is 2pi times delta of k. Why delta of k? At k equals zero, it's not equal to zero. At k equal anything else, it is equal to zero. Now, what is the meaning of this? The meaning of this is in the limit of very, very large l. This becomes a very, very narrow, very, very high function whose area is equal to 2pi. We'll keep that in mind as we go along. This is called the Dirac delta function. And we can now say, let's now go to the limit of very large l. If we go to the limit of very large l, in other words, we're making these intervals progressively smaller and smaller, we're really approaching the situation that really does define the Dirac delta function, then this integral goes all the ways from minus infinity to plus infinity. So just think of this as a formal prescription for an integral of e to the ikx, and the rule is it gives delta of k, 2pi times delta of k. This is equal to 2pi times delta of k. That's something of importance that when you integrate over a function of e to the ikx like this, dx, you get a Dirac delta function of k. Let's see if you can guess. Supposing instead of doing the integration over x, I did an integral over k, exactly the same integral, except it's not at all the same integral, e to the ikx. But instead of integrating dx, I integrate dk. What must that be? It's a function of x. But it's exactly the same structure here except I just interchange the x and k, right? x and k appear symmetrically here, so if I interchange x and k, this doesn't change, but I've changed the integral over x to an integral over k. So this is now some function of x, but what is the function? 2pi times delta of x. So these are two little observations that we will see happening, we'll see their utility occurring in a number of places. That's the Dirac delta function. That's our first little bit of mathematics tonight. What was that? Next little bit of mathematics. I taught you what a ket is. A ket is a symbolic notation for a quantum state. I also think I told you... I didn't use the word ket. I did, I did, I did, I did. And I told you it was half of a bracket. Yeah, okay. It's the other half of the bracket now that we want to talk about, the brach, otherwise known as bra. These are just notational devices. 
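Before the bras and kets, it is worth recording those two observations about the integral in symbols, since they get used repeatedly later:

\[
\int_{-L/2}^{L/2} e^{ikx}\,dx=
\begin{cases}
L, & k=0,\\[2pt]
0, & k=\dfrac{2\pi n}{L},\ n\neq 0,
\end{cases}
\qquad\xrightarrow{\ L\to\infty\ }\qquad
\int_{-\infty}^{\infty}e^{ikx}\,dx=2\pi\,\delta(k),
\]
\[
\int_{-\infty}^{\infty}e^{ikx}\,dk=2\pi\,\delta(x).
\]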
For our purposes, these are simply notational devices. And the question is, when do you use the bra and when do you use the ket? For our purposes, we will think of kets as initial states in a process. If a process starting with some initial state is described by some quantum state, we'll describe the initial state as a ket vector. Initial, in. Final states we will describe by bra vectors. And when you put a bra vector next to a ket vector, you make a bracket. This is simply a bunch of symbolic manipulation, but at the end what comes out of the symbolic manipulations are numbers. Okay, numbers, experimental numbers. So I'm going to teach you now some symbolic manipulations. For every, let's go back to the harmonic oscillator. For the harmonic oscillator, we characterized the quantum states of the harmonic oscillator by occupation numbers n. The number of excitations of the harmonic oscillator, the number of times, the number of units of energy that's been put into the harmonic oscillator. We can also describe it, that's a ket vector. That's the ket description of a particular quantum state. We could also write it in terms of a bra vector. So far I haven't told you anything. I just told you there's two ways to write the same thing. You say, what's the difference between them? Not much. But if we are clever in our use of this notation, it will help us do some bookkeeping. That's interesting. All right, now, first of all, there's the notion of the inner product between a bra vector and a ket vector. Now all of this is in our past quantum mechanics classes, and I refer you back to it. We're going to go through it lightning-like tonight. If I have a ket vector n and a bra vector m, I can put them next to each other, and putting a ket vector next to a bra vector in this way always gives a number. The ket vectors are abstract things. The bra vectors are abstract things. But the product of two of them, back to back or from, I don't know, front to front, I'm not sure which, in that form is a number. This number — now m stands for some quantum state, the mth quantum state of the oscillator, and n stands for the nth state of the oscillator — this number, this is a definition now, is equal to zero if m is not equal to n, and it's equal to one if m equals n. So the bra vector and the ket vector for the same value of n have a product which is one. It's called the inner product, the inner product between these two. It's one if n equals m, it's zero if n is not equal to m, or to write it in a unified form, we can write it as delta nm, whose definition is that it's zero unless n equals m, and one if n is equal to m. It's kind of like the Dirac delta function. It's a discrete form of the Dirac delta function. All right, this is notational, this is just notational tricks for things. Now, let's come to creation and annihilation operators. What I want to get at before we go further is how creation and annihilation operators act on bra vectors. I've told you how they act on ket vectors, and I have not told you how they act on bra vectors. Now, of course, you can say, of course, I haven't told you, in fact, I haven't even told you the rules, which would allow you to deduce how they act. But I'm going to show you how they act and then show you why this rule is particularly nice. All right, so let's take creation operators, first of all. What does a creation operator do when it acts on the nth quantum state of an oscillator? It multiplies it by square root of n plus 1 times the n plus first state.
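In symbols, what is on the board so far is:

\[
\langle m|n\rangle=\delta_{mn},\qquad a^{+}|n\rangle=\sqrt{n+1}\,\big|n+1\big\rangle.
\]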
What about the annihilation? The annihilation gives you square root of n times n minus 1. So one of them raises the quantum state, one of them lowers the quantum state, and in each case they multiply by the appropriate square root of n. Now we can ask, now we're asking for a definition. This is not a theorem which we want to prove. This is a definition of how creation and annihilation operators operate when they act on bra vectors. So how shall we take a plus to act on the nth bra vector? And I'm going to tell you now what the rule is. The rule is that however it acts on the nth vector, it gives another vector, another bra vector, of course. Let's circle that bra vector and now take its inner product with the nth ket vector. However this acts, whatever it gives, it gives another bra vector. That's the rule. When an operator acts on a ket vector, it gives a ket vector. When it acts on a bra vector, it gives a bra vector. And the notation is such that when an operator acts on a bra vector, you put the operator to the right and hit it to the left when it acts on a ket vector. So it's a neat notation. The rule is that you get exactly the same answer as if you did what? Allow a plus to act on m and then take the inner product with n. See, there's two distinct operations. However a plus should act on n, we'd like to find out how it should act on n, so we're going to define it so that you get the same answer for this bra ket as if you allowed a plus to act on m. Well, I've given you the rule for how a plus acts on m and then take the inner product with n. So let's see if we can figure out from this, from this set of abstract principles or abstract definitions, how a plus acts on n. Let's see, let's first take the bottom line here, the bottom line, let's calculate it. It's n and now a plus acts on m, what does that give? Square root of m plus 1, that's a number, we take it outside. Square root of m plus 1 and then what? M plus 1. What does this give? This factor gives 0 unless n is equal to m plus 1. If n is the same as m plus 1, then you get the number square root of m plus 1, which also happens to be the square root of n, of course. So that's one way of calculating what, that's this over here. But notice, it only gives an answer, a non-zero answer, if this is one unit higher than this. Sorry, yeah, sorry. It only gives an answer if m is one unit lower than n. If m is one unit lower than m, then a plus comes along, increases m by one unit and then we get a non-zero answer. So the only way to get a non-zero answer is if m is one unit lower than n. Well, let's look at this over here. Supposing that a plus acted on n to raise n, then we would only get an answer if m was one unit bigger than n. But according to this rule, we only get a non-zero answer if n is one unit. I think, sorry, let me say it accurately. I get my, okay. From this form, we see that n has to be one unit bigger than m to get a non-zero answer. n must be one unit bigger than m. If on the other hand, a plus increased the value of n over here, then we would only get a non-zero answer in the opposite situation, where n was one unit, what? Less than m. So this can't be the right rule that when a plus acts to the left, that it increases the index here. What it must do is decrease the index here. And in fact, that's the definition. That's the correct definition. When a plus acts on a bra vector n, it doesn't increase n, it decreases n. n minus one. And what about the numerical factor there? 
The correct numerical factor is just square root of n, if you want to. You can work this out. All right. What about n a minus? What does that do? Well, the answer is, I've given you enough rules that you can determine yourself what it does. What it does is it increases n, n plus one, times square root of n plus one. In other words, to make the story short, when a plus acts on ket vectors, it increases n and multiplies by square root of n plus one. a minus decreases in and multiplies by square root of n. When a plus and a minus act on the bra vectors, they just interchange. a plus decreases in and multiplies by square root of n. n a minus increases in and multiplies by square root of n plus one. That's the rule. That's the rule which leads to a very lovely calculus that's useful in quantum mechanics. A calculus meaning now tricks for computing simple things. Let me give you an example. Let's calculate in two distinct ways the following quantity. Let's take a plus a minus. Remember what that was? What does that stand for? Yes, it stands for, it's a quantum mechanical operator that stands for the occupation number, the number of quanta in the state. And let's calculate this quantity here. A minus, I'm sorry. A minus. Let's calculate it in two different ways. Now for those who studied some quantum mechanics, you'll know that this expression stands for the average value of whatever this quantity is in the quantum state n. But for the moment, let's just calculate it as an abstract exercise. But let's calculate it two ways. In the first way, let's allow this operator to act to the right. First, a minus acts to the right. What does a minus give when it acts on this? It gives square root of n times n minus 1. We still have a plus, which we haven't used up yet, n. Now, numbers come on the outside. Numbers like square root of n, they come on the outside. Square root of n. Now what does a plus do when it acts on n minus 1? It pushes it back up and multiplies by what? One integer higher than what appears here. So that means again, square root of n. That gives us two square roots of n, which means n, times n what? n. A plus n minus 1 brings us back to n. And nn, what is that? That's just 1. So it just gives us n. If it acts a plus a minus when sandwiched within the quantum state n, it just gives us the numerical number n. What would happen if we did it in the opposite, not in the opposite order, but by acting to the left on the bra vector? Let's do it. What happens when a plus acts on n to the left? It gives us n minus 1 times the square root of n. But we still have to act with a minus. What does a minus do when it acts on n minus 1? It raises you back up. Raises you back up to n and gives you another square root of n. So you see, with this definition that the creation and annihilation, or the raising and lowering, get interchanged when you go to their action on bra vectors and ket vectors, then it doesn't matter which way you imagine these operators operate. We'll get the same answer. That's a useful notation. And so when you see a thing like this, you don't have to ask, should you operate with this to the right and then take the inner product with the left-hand side, or should you operate to the left and then take the inner product with the right-hand side, you get the same answer. That's the beauty of that particular definition here. All right, that's good. Now what are we going to do with all of this? 
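Collecting the rules for acting on bra vectors, and the little check that was just done, in symbols:

\[
\langle n|\,a^{+}=\sqrt{n}\,\langle n-1|,\qquad \langle n|\,a^{-}=\sqrt{n+1}\,\langle n+1|,
\]
\[
\langle n|a^{+}\bigl(a^{-}|n\rangle\bigr)=\sqrt{n}\,\langle n|a^{+}|n-1\rangle=\sqrt{n}\cdot\sqrt{n}\,\langle n|n\rangle=n,
\]
\[
\bigl(\langle n|a^{+}\bigr)a^{-}|n\rangle=\sqrt{n}\,\langle n-1|a^{-}|n\rangle=\sqrt{n}\cdot\sqrt{n}\,\langle n-1|n-1\rangle=n,
\]

the same answer whichever way the operators are taken to act — which is the point of the definition.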
Well, of course, we're going to study quantum fields, which are objects built out of these creation and annihilation operators. Yeah, let's go right there. Let's go to the quantum fields, go back to a simple quantum field that we've discussed already, and then come to the description of quantum processes, scattering processes, creation processes, annihilation processes, and so forth, how they're described by quantum fields or how they're described in terms of mathematical expressions involving the quantum fields. Why are we interested in processes like collisions and creation and annihilation processes involving particles? Because in the microscopic world, that's about all we can do. If we want to do experiments, the only real handles we have on experiments are colliding particles together and seeing what comes out, and describing those processes, how the initial state of a particle, or of two particles, for example, morphs into some other state involving five, seven, nine particles, four particles, whatever it is. The tool for that is quantum field theory, and we're setting up some simple examples. All right, so let's go back to the definition of the simplest quantum field. The simplest quantum field, we took it to be a function of only one coordinate, namely x. There's no reason why we can't think of x, y, and z here, and make position into a three-dimensional thing. If we do so, momentum also has to be three-dimensional — oh, incidentally, we'll work in units in which h bar is equal to one. I don't think tonight the speed of light will come into anything, but what's the connection between momentum and k if h bar is equal to one? They're the same. Normally, you would have an h bar over here, h bar k, but if h bar is set equal to one, k and momentum are the same thing. Okay? Right. Okay, so we concocted a thing which we call the quantum field associated with the point of space, x. As I said, x could be a three-dimensional point of space. If it is, then k has to be three-dimensional. If we're living in three dimensions, then the momentum is three-dimensional, meaning to say it has three components, and then k will also have three components. But other than that, the formulas I'm going to write down are pretty much the same in three dimensions, whatever number of dimensions we're doing. Psi of x was a sum over all the allowed values of momentum. If we're talking about the universe on a circle or the periodic universe, then these are the k's which are 2pi over l times an integer. If we're talking about an infinitely big universe, then k can be anything, any number. Sum over k, summation over k of the creation operator for a particle of momentum k times e to the minus ikx. That now has become a quantum field. There's a conjugate quantum field, and now we pretty much, if we're quantum mechanics people, we talk about the Hermitian conjugate. If we're classically oriented people, we simply talk about the complex conjugate. You can call it psi star or psi dagger of x, the complex conjugate, which is a similar thing involving annihilation operators times e to the plus i kx. Is this a definition? This is a definition. This is a definition. But with definitions, it's always appropriate when somebody gives you a definition to say, why is that definition? And then the usual answer is wait. Wait till you see how we use it, and you'll see that it's a useful definition. So I'm afraid that's the situation here. Why is this the definition? Because this is a useful definition.
I could have put something else here, and it would have been a useless definition. So it's premature to ask why this is the definition, but it is a nice simple expression. It's not very complicated. Sum over the allowable values of momentum, creation operator times e to the minus i kx, or the complex conjugate, which has annihilation operator times e to the plus i kx. Now, I'm going to do something even a little bit fancier. I'm going to give these psi of x's some time dependence. So far, they don't depend on time. The definition does not involve time. But I'm going to introduce some time into the game. And how do I introduce time? Why do I introduce time? Well, remember that there's a connection between the momentum, the wave vector k, and the frequency of an oscillation. These are fields. These are fields. Fields oscillate. As it's written now, it doesn't oscillate. It doesn't have any time dependence. If this has anything to do with a real field, waves, and so forth, we better introduce some time into it. Well, that's not so hard, because we remember that for each value of k, there is a frequency. Frequency tells us how things change with time. Supposing I have an object, a mathematical object, which has a certain frequency, omega, how do I write the function that oscillates with frequency omega? e to the i omega t. e to the i omega t, which also happens, of course, to be cosine omega t plus i sine omega t. This is a function with a definite frequency. The frequency is omega. All right, now, in past lectures, I pointed out to you that for various kinds of waves, each k, k is a wave vector. It's inversely related to the wavelength. For each k, there is a frequency. I can say that there is, for each k, there is an omega of k. And each oscillation, each time we see an e to the minus i kx, we can put in front of it an oscillation, in front of it or behind it, an e to the i omega of k times t. Now, this thing has not only space dependence, but it has time dependence. And, moreover, it's time dependent. We do the same thing here, of course, e to the minus i omega of k t. Let's erase psi dagger for the moment. We'll come back to it. I want more room on the blackboard. Here's an object which has both space dependence and time dependence. The time dependence has been arranged in such a way that for each value of the wavelength, k, it oscillates with a time dependence which is just the right time dependence for that wavelength. This is now a function of space and time. It truly is a quantum field now. It varies in space and time. I want to take an example and show you that in the example, there is an equation, a wave equation for psi of x and t, and see if we can figure out what the wave equation is. See if we can find the wave equation for psi of x and t knowing the connection between omega and k. Now, I haven't told you what the connection between omega and k is. But let's suppose that we're talking about a non-relativistic particle. In other words, I'm not talking about a photon. I'm talking about a species of particle which moves at much less than the speed of light. We're working in the approximation in which everything is non-relativistic. Okay, when h bar is equal to 1, let's put it over here, h bar equals 1, omega has another name. What's the other name for omega? Energy. Remember, energy of a photon or energy of a quantum, of a single quantum, we're talking about a single quantum now, the energy is h bar omega. If h bar is equal to 1, omega and energy are the same.
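So, once the time dependence is put in, the object on the blackboard is — schematically, with \(\omega(k)\) still to be fixed in a moment by the energy–momentum relation:

\[
\psi(x,t)=\sum_{k}a^{+}(k)\,e^{-ikx}\,e^{\,i\omega(k)t},\qquad
\psi^{\dagger}(x,t)=\sum_{k}a^{-}(k)\,e^{\,ikx}\,e^{-i\omega(k)t},\qquad k=\frac{2\pi n}{L}.
\]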
Momentum is equal to h bar times k. So when h bar is equal to 1, energy is frequency and momentum is wave number. Okay, what's the connection for a slowly moving particle? Slowly means only slowly moving compared to the speed of light. What's the connection between energy and momentum? Energy. Right. Energy of a slowly moving particle is the square of the momentum divided by twice the mass. This is, of course, the same thing as writing energy is equal to one-half mv squared and p is equal to mv. All right? If we solve for v and plug it on into here, we get this relationship here. Okay, that also tells us now the relation between omega and k. k squared over 2m. Right? So now, we really do know the precise form of psi of x and t. It's a sum over the allowed values of momentum and omega of k is not just some arbitrary omega of k, but it is k squared over 2m. With that proviso, psi of x and t solves or satisfies a differential equation, a wave equation. Let's see if we can see what the wave equation is. First of all, in wave equations, derivatives with respect to space and time enter and are equal to each other in some form. That's what a wave equation is. Equations for how space variation is related or how time variation is related to space variation. So let's consider first the time derivative of psi. What's the time derivative of psi? I'm going to call it psi dot, standard notation for time derivative. Psi dot we obtain by just differentiating the time dependence with respect to t. That's equal to sum on k, a plus of k, e to the minus i kx, times what? Times i omega inside the summation i omega of k, e to the i omega of k, t. All I've done is say every time this is still under the summation here, every time I differentiate with respect to t, it pulls down a factor of i omega. So that's psi dot. Now let's look at the space derivatives. What is the space derivative? Decide by dx. Well, we do exactly the same thing. Every time we differentiate with respect to x, it pulls down a factor of minus i k. So this is equal to the same kind of thing, summation of minus i k, a plus of k, times e to the minus i kx, e to the i omega of k, t. So differentiating with respect to x always just pulls down a factor of i k. Differentiating with respect to time pulls down a factor of i omega. Now there's no simple relationship between this and this. Why not? Because omega is related to k squared. If omega was simply related to k, for example, if omega had to be equal to k, then we would just say psi dot is equal to d psi dx. For example, imagine a world or a kind of particle where the frequency is in fact equal to k. That's a very simple situation. In that situation, i omega k is the same as minus, apart from a minus sign, is the same as i k. And we would just say that such a field satisfies a equation that psi dot is equal, I think, to minus d psi by dx. Is that clear? But that's not the case here. We have to have omega equals k squared. So how can we get k squared? Take another derivative. Each time we differentiate, it brings down a factor of i k. So let's differentiate again. That brings us minus i k squared. What's minus i k squared? I think that's minus k squared, right? Minus k squared. Minus or the minus sign. But now, k squared is 2m times omega. k squared is 2m times omega. So let's write that. 2m times omega. And let's divide the left-hand side by 2m. 2m is just a number. Excuse me. Yeah? I'm confused. Is the minus sign really supposed to be like I thought you had minus i? Let's see. Let's check. 
When I differentiate with respect to time, it gave an i omega. So that's i omega. Now, when I differentiate it with respect to k, it gave a minus i k, and I did it twice. So that gives minus i k squared. What is minus i k squared? Minus k squared. Minus k squared. This is minus k squared. Well, yes, as far as I can tell, the minus sign is there. There was a minus k squared, and then I substituted for minus k squared minus 2m omega. And that's where this came from. OK, I'm more familiar with this in the form in which we divide both sides by 2m. This doesn't matter, of course. Let's divide it by 2m, and we get 1 over 2m times the second derivative of the psi with respect to space squared is omega times all this stuff. But that's clearly proportional to psi dot. What's the relationship? You've got an i omega there. Here we have an i in the formula. We can divide by i, which is the same as multiplying by minus i. OK, to make a long story short, the right equation should be minus i psi dot. That's the time derivative of psi divided or is equal to 1 over 2m times the second derivative of psi with respect to x squared. Now, do I have the sign right or not? Of course I don't. I never get the right. OK. Now is it right? No, you had it right. I'm not sure. 1 over 2m squared? 1 over 2m, not 1 over 2m squared. 1 over 2m. 1 over 2m. OK, let's see. Is the minus sign there or not? Let's put a minus here, and then there's a plus here. So it looks like it's the plus sign. As far as I can tell, it's the plus sign. Yeah. Well, it's just that you have the i on the opposite side of the equation than I do. Well, of course. That will do the trick. OK, does anybody know the name of that equation? Schrodinger. It's the Schrodinger equation. But it's not the Schrodinger equation for the same thing as in elementary quantum mechanics. In elementary quantum mechanics, psi is just a function of position. It's not an operator. It doesn't do things. It's just a thing whose square is a probability. Here, it is an operator. When it acts on states, it creates particles. It annihilates particles. This is an instance of the relationship between particles and quantum fields. Quantum fields are operators. They happen to have the same equations as the Schrodinger equation of elementary quantum mechanics. They are very closely related. But they're quantum mechanical operators. They're observables. You can observe them. Under what circumstances do they behave like classical fields? In other words, you can measure them and the same way you measure the electromagnetic field. The answer is that they behave like classical fields when the number of quanta is very large. When the number of quanta is large, then the magnitudes of the fields are large and the quantum fluctuations are small, the quantum uncertainties are small, exactly like a harmonic oscillator. A harmonic oscillator, if it's got a big motion, behaves classically. It only got one quantum unit of excitation that behaves very quantum mechanically. So this is an example, as I said, of a quantum field, and it has creation and annihilation operators in it. Nevertheless, it's a thing which satisfies an equation, and the equation is the Schrodinger equation. This is obviously a more advanced notion of the Schrodinger wave function just saying it's a thing whose square is the probability for a given particle. It's something a little bit different. Okay, and as I said, it is a quantum field. It's the simplest version of a quantum field. How do we use it? 
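Here is a quick symbolic check of the sign business, a sketch using SymPy rather than anything done in the lecture: with the convention used here, each mode is e to the minus i kx times e to the plus i omega t with omega = k squared over 2m, and every such mode satisfies i times psi dot equals one over 2m times the second space derivative. The conjugate field picks up the opposite sign of i.

```python
import sympy as sp

x, t, k = sp.symbols('x t k', real=True)
m = sp.symbols('m', positive=True)

omega = k**2 / (2*m)                               # non-relativistic dispersion relation
mode = sp.exp(-sp.I*k*x + sp.I*omega*t)            # one term of the field psi(x, t)

lhs = sp.I * sp.diff(mode, t)                      # i * psi-dot
rhs = sp.diff(mode, x, 2) / (2*m)                  # (1/2m) * second space derivative
print(sp.simplify(lhs - rhs))                      # 0: each mode solves the equation

# The conjugate field is built from the complex-conjugate modes,
# and it satisfies the equation with the opposite sign of i:
conj_mode = sp.conjugate(mode)
print(sp.simplify(-sp.I*sp.diff(conj_mode, t) - sp.diff(conj_mode, x, 2)/(2*m)))   # also 0
```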
How do we use it to describe processes? We've already talked about this a little bit. What I didn't talk about, which I want to come to now, is energy and momentum conservation, and how energy and momentum conservation are codified, or encoded, whatever the right word is, in the various dependencies of psi and the way that we describe various processes. I want to imagine now describing the very, very simplest process in which a particle scatters off a target. The target, in this case, is a thing which is so heavy that it doesn't recoil. A particle comes and either reflects off it or scatters off it. In three dimensions, it could have its trajectory changed. In other words, it could be coming in from the left and go off straight ahead. But what can you say about a particle scattering off an ordinary target, a target that's just stuck there, it doesn't move, stays there forever and ever? Which of the conservation laws would you expect to be true for that particle? How about momentum? Just a moment. I wonder if this is so fake. Yes, of course. Look at refraction, reflection, and absorption. Well, let's forget absorption for the moment. The particle comes in and goes out, so it's not absorbed. But yes, it could be reflected, bang, bang. It could be refracted, in which case its direction of motion has changed. In any case, the momentum of that particle generally is not the same coming in as going out. Of course, secretly what happens is momentum is really conserved, but what happens is, of course, the target absorbs some of the momentum. But let's just fix the target. We're going to change that by nailing down the target, and the target is not part of our mathematical description, or the motion of the target is not part of our mathematical description. Then in the mathematical description of a particle scattering off a target, the momentum of the particle is not conserved. What about the energy? Yeah, the energy of the particle is conserved. If you take a tennis ball, well, a tennis ball is a bad example, because it heats up when you throw it against the wall. But an idealized tennis ball where you can ignore heat, you throw it against the wall and it bounces off. What can you say about the way that it bounces off? You can't say that momentum is conserved. The momentum has gone from that direction to that direction, but the energy is conserved. So how can we see, can we see, that in some simple model of the scattering process, momentum is not conserved and energy is conserved using these wave fields? So we talked about this last time a little bit, and I showed you how you could describe a process of scattering, creation, annihilation, and so forth using these quantum fields. So here is a model. Here, space is horizontal, time is vertical, and the axis over here represents a target. It's fixed in space. It neither moves nor recoils, and it goes on and on forever into the past. It just sits there. And a particle comes in. It hits the target at some point and either bounces off or goes forward or scatters into some other direction. So we describe the initial state by saying there's one particle with a momentum k. Let's call it k-in for initial, initial or incoming. And then a particle goes off having scattered off the target, and let's use a language now in which it's absorbed by the target and suddenly re-emitted by the target.
Think of it, instead of just thinking of it as bouncing off the target, let's in the back of our mind have a picture in which it is absorbed or annihilated by the target and then instantaneously recreated by the target, but possibly with a different momentum. Let's call that final momentum k-final. Let's just call it k-i and k-f, initial and final. And what are we interested in? We're interested in the probability amplitude, the quantum mechanical probability amplitude, that the particle scatters from momentum k-initial to k-final. Now, one more ingredient. I'm going to assume that the scattering can happen with equal strength, equal at any time, always at the same place. We're going to place the target at x equals 0, but we're going to assume that the scattering process can occur at any time. Whatever time that particle gets there, it will scatter or not scatter or do whatever it does, but with no special dependence on the time at which the process takes place. That's going to be our basic assumption. How do we describe the event of the particle being absorbed at position x equals 0? Well, we describe that by means... This should be psi dagger, shouldn't it? I think this should be psi dagger. It's the daggers that go with the plus signs here, I think in my notation that I used last time. I think that's right. I don't think so. I think a plus always goes with a minus here. I think so. And then this one would be plus if I remember. We describe it in the following by a kind of bookkeeping. It's all bookkeeping, but it's useful bookkeeping. You absorb the particle. This is a field which creates particles. Let's write down the other conjugate field which annihilates particles. It's the complex conjugate or psi of x and t which is made up out of annihilation operators e to the plus i kx e to the minus i omega of kt. It's just the... Everything is conjugated. e to the minus i kx becomes e to the plus i kx and so forth. And these are to be thought of as complex conjugates of each other or Hermitian conjugates of each other. All right, let's imagine first of all absorbing the particle at the origin. How do we do that? We absorb it by an operae where we start with the initial state. The initial state has no particles with any particular momentum except one particle with momentum ki. So if we describe it in terms of occupation numbers, only the occupation number associated with momentum ki would be nonzero. We can also just label that by saying there's one particle with momentum ki and be done with it. We don't need to specify all the particles that aren't there. But nevertheless, it's useful to keep in mind that when we specify the initial state, we're specifying all of the occupation numbers of all the various quanta and only one of them, according to assumption here, is present. So there's one here and all the others are zero. And we just label it by saying, well, there's one particle with momentum ki. Then that particle is either... is annihilated at the origin, if it's at the origin. And let's describe that by psi of x and t, but x is equal to zero. Zero and t. y is x equals zero. I'm assuming that if the process happens, it happens at the target and the target is at x equals zero, but it can happen at any time. So that's the process of absorbing the particle at time t. That's what this says. But then the particle is immediately recreated. You say, why does it have to be? It doesn't have to be. That's the model that I'm making. A particle absorbed and immediately re-emitted. 
That's a mathematical model. It's not necessarily a law of nature about any particular kind of particle. In fact, in the real world, a photon can be absorbed by an atom and emitted later or earlier. So what we're talking about here is a simple mathematical model in which the process happens all at an instant. Particle absorbed and re-emitted. And what's the emission described by? It's described by the creation of a particle also at point zero. So that's psi dagger of zero and the same time t. But what time? What time should I put there? The same time, but which time? It will matter. So why one or zero? How about let's say it can happen at any time, any time that the particle gets there? Okay, whenever the particle gets there. That's the same as saying, integrate this over all possible times. In other words, there is a process in which the particle is absorbed at this time, at this time, at this time, and so forth. So let's not prejudice what time it happens by just averaging or integrating over all possible times. This is a mathematical expression which we can work out. I will show you as we go along what the implications of this integral over time are, but the basic physics of it is that the process could happen at any time. No special time. No particular time is singled out as special. And we get rid of the specialness of the value of time by just integrating over all values of time. What does this give? This gives the final state. Here's an initial state. These are operators which operate on the initial state and give some final state. How do we calculate the probability that a final state consists of one and only one particle moving with some other momentum? If you remember your rules of quantum mechanics, we take the inner product of this state with the final state. So let's say one particle with momentum k final. This is a number. Remember, a ket vector, operator, bra vector, give us some kind of number. Is this number the probability for the scattering? Not quite. We have to square it. We have to take the absolute value of it and square it. Now, I've left out one important ingredient in this expression. We're going to evaluate this. We're going to go through it and we're going to see what it says. Of course, I'm purposefully, or not purposefully, I would rather be able to explain all of this in complete logical order. But given an hour to do all of quantum field theory, we have to draw on things that we've done in the past. In the past, I explained that probabilities are squares of amplitudes and amplitudes are inner products between initial states and final states. Here's the thing we want to calculate. But I've left one thing out. I've left out the strength of the scatterer. The strength of the scatterer is a measure of the strength of interaction between the scatterer and the scattery. So let me give you an example. You could have a charged particle at the origin, which is scattering a photon. A photon comes in, is absorbed by the scatterer, and re-emitted. What is, in that case, what corresponds to the strength of the scatterer? The answer is the electric charge of the charge. The bigger the electric charge, the more probable it is that the photon will get scattered. There is a measure of the strength of the interaction between scatterer and scattered particle, the strength of the coupling between them, which has to be codified somehow, and it is simply by a numerical number put here. It's called the coupling constant, G. G for what? I don't know what G is for. 
It's a measure of the strength of the coupling or the strength of the interaction between the scatterer and between the scatterer or the target and the particle itself. I'll give you an example. Some examples. The scattering of a meson from a proton is a strong process, meaning to say that if the meson is absorbed, if the meson arrives at the location of the proton, there's a high probability of scattering. That's indicated by a large coupling constant. A photon in the vicinity of a charge also could be a proton. A photon is also absorbed and re-emitted by a proton. The probability for that scattering, the probability that the photon gets redirected, is much smaller than the probability for the meson, and that's indicated by the coupling constant being much smaller for the interaction of a charged particle with a, you think of something even smaller? Neutrino. Yeah, a neutrino interacting with a proton has an even smaller constant for scattering off this. That's what may... Well, that could be bothered also for an electron. For the target being an electron or the... Well, right now we're just making a model, in fact, of an electron or something like an electron scattering off a target. So, don't think of this as being valid for any real thing. This is a simplified model for a number of different situations. What about the scattering of a graviton by a nucleus? Graviton is the analog for gravity of a photon. Well, it can happen, but it's an extremely small constant. I won't even try to tell you how a 10 to the minus some large number by comparison. All right, so the strength of the interaction and the probability that this actually happens, the particle gets redirected, is indicated or described by the coefficient g, which appears here. It's just a number. Now, it's called the coupling constant. It's called a coupling constant. There are many coupling constants. All right, so let's see if we can calculate this. What are we trying to calculate? We're trying to calculate the probability of going from k initial to k final. That's all. Yeah. Question? Question? Yeah. Is there any logic for g being a constant as opposed to being a function of omega or k? That's a good question. The answer is there is no logic to it. No, there is logic to it, but there are situations where it's a function of omega. I'll show you some situations. As I said, this is a model of a particular kind of simple scattering. There are more complicated kinds of scattering. A more complicated kind would be, and this would really be for the case of a photon and an atom, for example, a photon comes in, atom gets excited, and then de-excites by emitting the photon. Here is a little gap, a time gap, between the time the photon is absorbed and the time it's emitted. This happens to be equivalent to saying that the coupling constant is omega dependent. So as I said, we're talking about a simple model and just trying to find what the consequences are. There's going to be one important consequence, and it is very general. The rest is not so general, but the one important consequence that I want to get to. So let's just plug away. We're going to just go straight ahead, plugging in for psi, its value written in terms of creation and annihilation operators, and then use those creation and annihilation operators to annihilate the initial particle and recreate the final particle and calculate the expression that's up there. The square of it will be the probability. So let's do it. Okay, so yeah. If G is equal to 1, that's a good question. 
You could ask what happens if G is much bigger than 1. Does that mean that the probability is much bigger than 1? No, it doesn't. It just means that you have to go ahead into a more complicated calculation. This calculation that I'm doing is only correct for very small g. It's correct for very small g. What you have to do when g is larger is extremely interesting and will be very important to us in what follows. The answer is you have to do higher order perturbation theory. Okay, but we're not doing that now. Well, we're just doing the simplest thing. So let's do the simplest thing and just plug in for psi its value. Psi, the first psi, not the psi dagger, but the first psi is a summation over k, not k initial, but just k, a summation index, a minus to absorb the initial particle of k, e to the i kx, e to the minus i omega sub kt. And that acts on the initial state, which happens to have one particle of momentum k i. Yeah? Mm, thank you. X is zero at the scatterer. So let's leave it out. e to the i k zero is just one. Which terms in this sum will contribute? Only when k is equal to k i will we get anything. Right? Why? Because the annihilation operators give zero when they act on states where there are no particles. The only particle around has momentum k i. And so only when k is equal to k i do we get anything. But nevertheless, let's leave it in this form just for the moment. Now, what about the other operator, psi dagger? That's also a sum over momentum. I better use a different index because I don't want to use the same summation index twice and get confused. So I'll call it, let's call it L. And now we put a plus of L that creates the new particle with momentum L times e to the, well, x is still equal to zero. x is still equal to zero, so we don't get anything from here. We have e to the i omega sub L times t. What about the final state? The final state is the state with a particle of momentum k final. Okay, which term, first of all, the term in the sum over k which contributes is only when k equals k i, right? What about the sum here over L? What does L have to equal? k final. Remember that when a plus acts to the left, it annihilates. It has to find the particle to annihilate. A plus of L only gives you something when L is equal to k final. So really, both of these sums collapse. Only k equals k initial and L equals k final. That's all there is. What did I leave out? G. But I've also left out something else. Where is it? Integral dt. It's the integral dt, which is the thing that I'm really interested in. All right, so the only contributor here is k equals k i. What does a minus of k i do when it acts on a state with momentum k i? It just creates a state with no particles, right? It annihilates the particle with momentum k i. What about this sum? This sum is only non-zero when L is equal to k final. So we can put k final here. And when this acts on a state with k final, oh, sorry, yeah, OK, right. We're OK. This is good. What happens when a acts on a state with k initial? It just gives a state with no particles, right? So that's just no particles. What happens when this acts on a state with particle with momentum k final? It just gives no particles again. What is this number? One. So that much is one. So all this operator nonsense of creating and annihilating particles all goes away, and all we have is a number g. That's the probability for the scattering, or that's the amplitude for the scattering g. Nothing else except that we have this integral. Not in this expression. 
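A small numerical sketch of just this operator bookkeeping (two modes, truncated to at most one quantum each; the labels k_i and k_f and everything else here are my own stand-ins): the only surviving piece of psi-dagger(0,t) psi(0,t) between these states is a+(k_f) a-(k_i), and its matrix element is one.

```python
import numpy as np

nmax = 2                                        # occupation numbers 0 and 1 are enough here
a1 = np.diag(np.sqrt(np.arange(1, nmax)), k=1)  # single-mode annihilation operator
eye = np.eye(nmax)

# Mode 0 plays the role of k_initial, mode 1 the role of k_final.
a_ki,  a_kf  = np.kron(a1, eye), np.kron(eye, a1)
ad_ki, ad_kf = a_ki.conj().T,    a_kf.conj().T

vac = np.zeros(nmax**2); vac[0] = 1.0
one_ki = ad_ki @ vac                            # initial state: one particle of momentum k_i
one_kf = ad_kf @ vac                            # final state:   one particle of momentum k_f

# The term of psi-dagger psi that survives is a+(k_f) a-(k_i); its matrix element is 1:
print(one_kf.conj() @ (ad_kf @ (a_ki @ one_ki)))        # 1.0
# Any term with the "wrong" momenta kills the state, e.g. annihilating k_f from the start:
print(np.allclose(a_kf @ one_ki, 0))                    # True
```

So all the creation and annihilation machinery collapses to the number one, leaving just g times the time integral, exactly as stated above.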
I think you're thinking about a relativistic problem. No, there was only one particle here. Because there was only one particle. Yeah, there was only one particle. OK, but the important thing, the real only thing that really I'm illustrating here is the integral over time. And what is it? It's an integral dt. I omega, let's just call it final, minus omega initial. This is k initial times t. That's the whole upshot. The time integration gives us an integral over e to the i omega final minus omega initial t. All of the operator nonsense just gives us a one. Nothing very interesting. The coupling constant gives us a g, and now we have to evaluate this integral. What is this integral? Delta function. This is an example of the delta function, an integral dt of e to the i something times t. Let's see, was it 2 pi times the delta function, if I remember? I think it was 2 pi g times 2 pi. Times delta of omega final minus omega initial. Does this ring a bell? Delta of omega final minus omega initial. It's only nonzero when omega final is equal to omega initial. Omega final is the final energy. Omega initial is the initial energy. So somehow, magically. We notice that if we had not integrated over time, we would not have gotten this delta function of energy. So somehow, there's a connection between the fact that there's a conservation of energy and the fact that there's no preference for any specified time. We could have made a model in which the scattering happens if the time is within some boundaries. Then this integral would have only gone over some limited amount of time. It would not have made a delta function. So the ingredient here, which is closely connected with energy conservation, is time translation symmetry. That every time is like every other time. That whole thing is squared. Oh, the square of the delta function. Yeah, yeah, yeah. For the probability, we have to square this. This is the probability amplitude. The probability itself will have the square of this. But who cares about the square of a delta function? The delta square of a delta function is also zero if omega final is not equal to. We have to worry about squares of delta functions. But in any case, this will be zero unless the initial energy is the same as the final energy. Let's assume the initial energy is the same as the final energy. Then this is the operative important part of the scattering amplitude. Let's call this by its name. It's the scattering amplitude. And the probability for the scattering is proportional to 4 pi squared g squared. The strength of the scatterer, or the probability for it to scatter, contains g squared. They always wind up containing pi's. But the important thing is the g squared. And which final momenta can happen? Well, I didn't specify anything special about the final momentum here. This final momentum could have been anything as long as the energy was the same. So this scatterer has the property that it takes a particle and with a probability proportional to g squared changes its momentum with equal probability to any momentum with the same energy. That's it. So this is a scatterer which can scatter into any direction with equal probability. Any direction with equal probability. That, of course, is not true of all scatterers. This is this very simple model that scatters with equal probability in all directions. And the coefficient g squared, 4 pi squared g squared, in this case, is the probability. So this is, I've shown you many things so far tonight. 
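To see the delta function emerge, here is a rough numerical sketch (the coupling g, the window lengths, and the grids are arbitrary choices of mine): do the time integral over a finite window of length T and watch the amplitude concentrate at omega_final = omega_initial as T grows.

```python
import numpy as np

g = 0.1                                     # an arbitrary small coupling constant
domega = np.linspace(-2.0, 2.0, 9)          # omega_final - omega_initial

for T in (10.0, 100.0, 1000.0):
    t, dt = np.linspace(-T/2, T/2, 200001, retstep=True)
    # amplitude = g * integral dt exp(i (omega_f - omega_i) t), cut off at a finite window
    amp = [g * np.sum(np.exp(1j*dw*t)) * dt for dw in domega]
    print(T, np.round(np.abs(amp), 2))
# The middle entry (omega_f = omega_i) grows like g*T while every other entry stays small
# and oscillates: the finite-time stand-in for 2*pi*g*delta(omega_f - omega_i). Cutting the
# integration off, i.e. breaking time-translation symmetry, is exactly what would spoil
# energy conservation.
```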
In particular, the definition of a coupling constant, the fact that the integration over time is the thing which ensures energy conservation, which is just another way of saying the problem has time translation symmetry. There was nothing special picked out about one time or another time. And I've illustrated the idea of a scattering amplitude or the amplitude, the thing which becomes squared in order to calculate a probability. This is a fairly generic set of ideas, but the details can differ. Can you always simplify these things by redefining the positions to x to 0? Can you always simplify this by redefining the positions to x to 0? Good question. Yes, in this particular case, yes. The question of what would happen had you not put x equals 0 here, right? That's kind of the question. What you would have found is there would have been another factor. The other factor in here would have been in the amplitude e to the i k initial minus k final times the position of the scatterer. What should I call the position of the scatterer? Just calculate the position of the scatterer. Let's just call it x of the scatterer, x of the target. x of the target is a number. x of the target is just the position of the target. When the position of the target was 0, this didn't occur. If the target was moved over to position x target, you would get an extra factor. But what would happen to the probability? No change, because when you square, meaning to say, when you multiply by the complex conjugate, this goes away. Multiplying this by its own complex conjugate, which is what I mean by squaring, multiplying by its own complex conjugate, this goes away. So this does not appear in the probability. You get the same probability no matter where the target is. But in an intermediate stage of calculation, in calculating the amplitude, the position of the target would occur. So it's a very simple model. So you can cram two years into an hour. But the mathematics, unless I see it wrong, seems to say if we said that it didn't matter where it happened, but it happened exactly at time 0, we'd get a different conservation. Yes. Who would you guess the conservation is? Well, it's momentum. Momentum and energy conservation. Right. Right. And does that have a physical interpretation, or is this more appropriate just than that? Well, of course, in the real world, if we're studying, for example, the interaction of two particles, and we take everything into account, both particles and so forth, the scattering can occur at any time, at any place. Right. But you could imagine, you could certainly imagine an approximate situation where you might, OK, you're asking me, is there ever a situation where, here the approximation was we just ignored the recoil of the target? OK. And therefore, the momentum was not conserved. Is there ever a situation where energy is not conserved? And that sort of situation is what happens when you have time dependence in the coupling, or time dependence in the strength of interaction. Now, yes, there certainly are situations where that can be a good approximation to say something makes a sudden change in the system from the outside, and when a sudden change in the system happens, that's when the scattering takes place. Yes, we can think of situations where that happens. I can. We can try to concoct one. I can't right now. I'm a little too fuzzy. 
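A two-line symbolic version of that remark, just to make the cancellation explicit (the symbol x_T for the target position is mine, and the 2 pi delta factor is left out since it is common to both):

```python
import sympy as sp

k_i, k_f, x_T, g = sp.symbols('k_i k_f x_T g', real=True)

amp = g * sp.exp(sp.I*(k_i - k_f)*x_T)       # amplitude when the target sits at x_T, not 0
prob = sp.simplify(amp * sp.conjugate(amp))  # "squaring" = multiplying by the conjugate
print(prob)                                  # g**2: the target's position has dropped out
```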
But yes, we can certainly concoct a situation where something sudden happens in such a way that it makes the scattering possible at an instant of time, and then energy will be completely not conserved. Right. Do most of the coupling constants come from experimental data? Yes. Also, you just said introducing a time dependence in the coupling constants. Well, that usually means when there's a time dependence in the coupling constant, it usually means there's something that you've ignored in the system, which is either moving or time dependent, something whose dynamics and degrees of freedom you're sort of ignoring in the same way that you ignored the possible recoil or repositioning of the target here. So I'll try to think of a situation where there's an interesting violation of energy conservation or where the bookkeeping was such that you threw some of the energy away. But let's come back to it. Yeah, hitting a moving target. Hitting a moving target. Right. Hitting a moving target is an example. Where the target, where again, the motion of the target is thought of as completely fixed. Yes. Right. And on certain scales, you have to worry about the size of the target versus the frequency of the size of the wave. Yes. So electron versus x-ray versus gamma versus... Yeah. Here in this model, the size of the target was zero. Okay. Yeah. In this, right? And you're absolutely right. Something interesting happens when the target has a certain finite size, and we can discuss that. But let me just come back to the question about the moving target. If you think about it for a minute, of course, if you have a moving target, energy is not conserved. For example, supposing that wall was moving toward me and I had a tennis ball, and all of a sudden the wall hits the ball and the ball moves off, the energy of the tennis ball, we're not accounting for the energy of the moving wall. We're not trying to worry about that. So if the wall is moving, that corresponds to a time dependence, in this case, a time dependence, not of a coupling constant, but a time dependence of the location of the scatterer. That would be enough to make energy not conserved. And we could see it in the mathematical formalism. OK, so to reiterate, these creation and annihilation operators are the tools which allow you to discuss transitions of particles from one state to another state. We have not talked about, well, actually we did talk about creation and annihilation. Let's discuss another situation. Oh, incidentally, supposing psi was the field operator for an electron, each kind of particle has its own separate field. So electrons have a different field than photons. When you put psi here, you better specify what particle you're talking about. Let's think of the electron. Psi describes electrons now. Here was a process in which an electron came in and an electron went out. What happened to the total charge? Did the total charge change? No, one particle came in, one kept particle coming out. Now, let's imagine a different situation. Let's imagine a situation where one electron comes in and two electrons go out. OK, crazy. I mean, it can't happen, but let's for the moment, nevertheless, try to imagine it. How might we describe it by the same kind of mathematics? One electron comes in, two electrons go out. How do I modify this? My simple model is two electrons go out from exactly the same point. Yeah, we might put another, we might square this. In other words, psi of 0t, psi dagger of 0t, psi dagger of 0t, psi of 0t. 
Oops, there should be a t here, right? Yeah. What about, now this, this of course can't happen in nature because electric charge is conserved. This corresponds to the annihilation of one electron in the creation of two. Bad idea, but nevertheless, let's write it down. What about two electrons in and two electrons out? Is that OK? Well, at least it doesn't violate electric charge conservation. How would we describe that? Another psi. Psi, psi, psi dagger, psi dagger. How about two electrons in, three electrons out? Psi, psi, psi dagger, psi dagger, psi dagger. OK, which ones of these are allowed? And what's the rule? I'm not deriving a rule now, we just got to observe a rule. Same number of creation and annihilation. Right? Same number of psi's as psi daggers. This is not allowed. This is allowed. This is not allowed. Rule is, same number of psi's as psi daggers. There's no way to express that. Is the order a matter? Is the order a matter? Is the order of the operator? It's not a question of the order of the operators. It's the number of psi's versus the number of psi daggers. When the number of psi's, all right, let's, when the number of psi's is different than the number of psi daggers, versus when the number of psi's is the same as the number of psi daggers. How can you diagnose these combinations? Of course, it's very easy just to count the number of psi's and psi daggers, of course. Let me give you a mathematical diagnostic that really does have a deep meaning. Let's imagine a transformation. Now psi dagger does mean the complex conjugate of psi. It is the Hermitian conjugate in the language of quantum mechanics, but the Hermitian conjugate is the analog of a complex conjugate. Let me imagine that I take an expression like this, just a mathematical expression like this, and I transform psi by multiplying it by a phase. A phase means an e to the i times some number, let's call it alpha, times psi. I just, I'm just doing now blindly playing a game. Wherever I see psi, I multiply it by e to the i alpha times psi. What happens to psi dagger? If I multiply a complex quantity by e to the i alpha, what happens to its complex conjugate? e to the minus i alpha psi dagger. What happens to objects that have an equal number of psi and psi dagger? They stay the same. What happens to objects which don't have an equal number of psi and psi dagger? They change. So we could characterize the allowed processes by the ones which are described by operators which are invariant, unchanged by the operation of changing the phase of the operator. Now that, hm? Indeed. But for the moment, let's give it a simpler name. Invariance under changing, under redefining the field so that you change its phase. Overall, everybody, you change the phase by a constant phase factor. If the description, if the interaction expression here is invariant, then charge is conserved. If it is non-invariant, then charge is not conserved. What happens supposing I had psi plus psi dagger? What kind of thing does that correspond to, incidentally? All right, I'll leave that to you to think about. But is this unchanged by phase? No. No, no. Isn't that the real part? The real part of psi, and if you multiply by a phase, you change the real part of psi. So that's not a quantity which is invariant. Of course not. Should the target, and you have an electron impinging on a target, you may have many seconds of electron in between. Yes. Well, wait. What kind of target? A target full of electrons? Well, sure, you can have, yes, yes, yes. Right. 
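Here is a little SymPy bookkeeping sketch of that diagnostic. It deliberately ignores operator ordering, which the lecture defers to next time, and just treats psi and psi-dagger as symbols that pick up opposite phases; only the combinations with equal numbers of each come back unchanged.

```python
import sympy as sp

alpha = sp.symbols('alpha', real=True)
psi, psid = sp.symbols('psi psi_dagger')     # ordering ignored on purpose in this sketch

def rephase(expr):
    """Apply psi -> e^{i alpha} psi and psi_dagger -> e^{-i alpha} psi_dagger."""
    return expr.subs({psi: sp.exp(sp.I*alpha)*psi,
                      psid: sp.exp(-sp.I*alpha)*psid}, simultaneous=True)

for expr in (psi*psid,            # one in, one out: allowed
             psi*psid*psid,       # one in, two out: changes charge, not allowed
             psi*psi*psid*psid,   # two in, two out: allowed
             psi + psid):         # no definite charge: not allowed
    print(expr, '-> invariant:', sp.simplify(rephase(expr) - expr) == 0)
```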
You say what happens if you have an atom, and you hit the atom with an electron, and 75 electrons go off. Well, charge is not violated, of course. What has happened is the atom has changed. All right, this is a situation in which you cannot get away with ignoring the dynamics of the target. In particular, the important dynamics of the target is the change of charge of the target. You can't, in this case, treat the target as though it was a completely passive thing. You've got to remember. All right. What's that? Yes, that means the target itself has to be described by a quantum field. All right, and we can discuss that. I think we've gone far enough for tonight. All right, so what have we found? We found that time translation invariance corresponds to energy conservation. Phase invariance, we can call this phase invariance, corresponds to charge conservation. And we haven't worked it out yet. We will work it out next time: that spatial translation invariance corresponds to momentum conservation. I'll show you how that works next time. But we're making our way slowly into how particle physics and its processes are described by fields. That's our goal for the next one more lecture on it. I don't know, maybe half a lecture on it. And then we'll move into some of the symmetries and some of the more interesting laws of particle physics. Yeah. We could just as easily have had a process that was psi, psi dagger, psi, psi dagger, psi, psi dagger. Yes, but it's not hard to re... Okay, right, that's great. I've chosen always to put all the daggers to the left and the psi's to the right. Remind me next time and I'll show you how to rearrange other expressions so that they become like this. I know what you're asking. You're asking what would happen if instead of this we put psi, psi dagger, psi, psi dagger. And I will tell you next time. But they still match. This is allowed. No, not quite. Not quite. Not quite. Not quite. Remind me next time. I'm too tired now to start going into it. But remind me absolutely next time, what is the difference between this and this? And I'll show you next time. All right, I actually left time for some questions. If you want to ask some questions, I just have run out of steam in the... Yeah, go ahead. How do you see that phase invariance implies charge conservation? Well, it's not trivial. If you have as many psi's as psi daggers, that means the interaction eats the same number of electrons as it spits out. All right, each psi annihilates an electron, each psi dagger creates an electron. So how do you conclude that phase invariance implies charge conservation? We'll get around to it. All I did was point out that all of the objects with equal numbers of psi's and psi daggers will be invariant under the phase operation here. If they have different numbers of psi's and psi daggers, let's take a case. Psi, psi dagger, psi dagger. You get it, right? I'm not sure what the rest of the question is. If they have equal numbers of psi's and psi daggers, it's phase invariant and it conserves charge. If it has different numbers of psi's and psi daggers, it's not phase invariant and it doesn't conserve charge. So we're just observing that when it's phase invariant, it conserves charge, and when it's not phase invariant, it doesn't. Are there other rules you'll be talking about at the same time, other conservation rules? Energy, momentum, and charge for the moment. We haven't gone through momentum conservation, although it was mentioned by somebody here.
If you ask whether the process can happen at any place instead of at any time, then yes, then it conserves momentum. But we'll come to it. We'll do that case. Is that Noether's theorem? Yeah. In classical mechanics, it's Noether's theorem. In quantum mechanics, it's even simpler, but here I'm sort of short-circuiting all of that discussion by just showing you how it works mechanically in terms of these quantum fields. Question. Up there, the probability of that first process is 4 pi squared g squared. Does that mean, when you empirically assign a value to the coupling constant, that it must be less than 1 over something? It must be less than 1 over something, or your model is very incomplete. If you ask me in what way it's incomplete, I will tell you. The real scattering process is not described by just a particle coming in and getting scattered out. It's described by something more complicated, which is a sum of amplitudes in which the particle comes in, scatters off the target, comes in, goes back out, comes back in, scatters off the target twice, comes back in, scatters off the target three times. It's a whole infinite series. Each term has a factor of g in front of it. It's only when g is small that only the first term is important, but we'll come to that. This is a basic theme: when the coupling constant is small, you can get away with the simplest minimal process; when the coupling constant gets large, one has to, well, let's just call it what it is, one has to sum up an infinite number of Feynman graphs. Basically, these pictures that I've drawn are Feynman graphs, and the vertices of the Feynman graphs are described by these operators. The vertices of the Feynman diagrams tell you the basic elements that can happen. Particle comes in, bounces back out. That's psi times psi dagger. Two particles come in and bounce back out. That's psi dagger, psi dagger, psi, psi, or whatever. And we'll give things their proper name next time. Yeah. Are there any situations where the coupling constant has an imaginary component? Well, yes? What time is it? You want me to... The existence of a theta parameter in QCD. The existence of a theta parameter in QCD, I'm sure that means nothing to you. Yeah, it does happen that there are complex values of coupling constants. It usually means that time reversal invariance has been broken. Complex coupling constants are usually an indication of a violation of time reversal invariance. That's really what it comes down to. Now, you can't see that from what I've told you up till now. At least if you can, you're smarter than I am. I can see it, but only because I know how to see it. It's not obvious, but that is the case. Complex coupling constants mean that time reversal invariance is broken. In other words, things don't look... If you run the process backward as a movie going backward, it's not a possible process. You said at one point also that instead of having the scattering occur with the outgoing particle at the same time as the incoming particle, it could be before or after. You also mentioned before. It's not happening. Not now. No, no, no. I did say that, and I sort of regretted it the minute I said it. But we will... No, this is... It's a fair question, but it's not the time for it now. Yeah. Gene Franklin used to say it laughs before you tickle it. Gene Franklin would say it laughs before you tickle it. Okay. You're probably going to get to this at some point, but how do you turn this into something that's relativistic? Ah. Again, you're jumping way ahead.
I'm trying to go slowly, and everybody's pushing me to get ahead. Okay. You really want to know? You add this and this and put a square root of omega k in the denominator. No, no. We're jumping ahead. I mean... You talked about charge conjugation. Conservation. Conservation. With phase factor. Yeah. But the rest mass, M, that doesn't change also. In a non-relativistic process, but in a relativistic process it can. And there's nothing in basic quantum mechanics which says that rest mass can't change. It's a combination of quantum mechanics, or not even quantum mechanics, even just classical mechanics, together with an invariance principle, and the invariance principle is Galilean invariance, which is an analog of, a non-relativistic analog of, Lorentz invariance. But, yeah, it is true. In a non-relativistic situation, total mass can't change, but it can in relativistic scattering. So, what did you want to ask me exactly? Well, is it that logically we think of rest mass conservation in the same terms as charge conservation? No, no, no, no, no, no, not at all. Not at all. One is absolutely true in nature. The other is only true for very, very slow velocities. Right? There's nothing sacred in physics about conservation of rest mass. And what's more, it's not true. An electron and a positron annihilate into two photons. There is no conservation of mass. There is a conservation of electric charge. So, one of them is a sort of accidental non-relativistic fact. The other is a deep underlying symmetry of nature, which happens in this simple case just to correspond to this phase invariance. Next time we'll talk about fermions. We'll talk about what happens if you have more than one species of particle. Supposing you have electrons and muons and quarks and so forth. How do you describe all of this? What's allowed, what's not allowed, the kind of processes. And all in the language of quantum fields. Yeah. Strictly speaking with your scatterer again, you gave us, you said it was an electron, but I thought we only knew about bosons so far. Ah, good point. Good point. Good point. We have all of them. Good point. Yes, yes, good point. Yeah, this, so we have, we're talking about a bosonic version of an electron. Yeah. Sorry, I completely missed that. You're right. Right. But there are particles in nature which carry electric charge. Yeah, right. Exactly. A charged pi meson, a slowly moving charged pi meson, would be an example of a boson which you would describe in this way. Yeah. Yeah, so good. Absolutely. Yeah. All right. Thank you.
(October 26, 2009) Leonard Susskind gives the fourth lecture of a three-quarter sequence of courses that will explore the new revolutions in particle physics. In this lecture he continues on the subject of quantum field theory.
10.5446/15070 (DOI)
Group velocity and phase velocity? Okay, good. Let's talk about it a little bit. Generally, the phase velocity of a wave doesn't have anything to do with measurable properties of the wave or of the measurable things in quantum mechanics. It's the group velocity, which is really the velocity at which signals move, the velocity at which particles move, and so forth. So let me just show you what's at stake and what's going on. The first thing is, if you have a plane wave, a sine or a cosine or an e to the ikx, and that's all, just e to the ikx, we'll write it e to the i kx minus omega t, or just sine, the sine of kx minus omega t, will also do. In fact, let's just write that, sine of kx minus omega t. Looks like that. And how does that wave move? How does the top of each crest move? The top of each crest moves in such a way that this thing inside the sine is constant. In other words, for example, right over here at time t equals 0, and at x equals 0, the sine wave, or the sine wave is not a maximum at that point, at x equals 0 and t equals 0, the sine wave happens to be over here. Okay, but where is the sine wave after a certain amount of time t? It's at the place, or that point here, has moved. It's moved to the place where kx minus omega t is equal to 0. Okay, where is that? That's at the position x is equal to omega over k times t, just dividing by k. Now, that's not too surprising. If you think about it for a minute, omega is a frequency. So it's inverse to the period of oscillation, to the time of oscillation. It's one over the time of oscillation. There's a 2 pi in there, but the 2 pi's appear in both things. So omega is like one over the period of the wave, one over t, the amount of time that it takes for the wave to oscillate, and k is like one over lambda. Actually, there are 2 pi's in these equations, 2 pi's in the numerator, let's put them in. And so omega over k, what's omega over k? Omega over k is lambda over t. It's the wavelength divided by the period. The wavelength is the distance it moves in one oscillation. The period is the time that it takes. The ratio is the velocity. So this is the velocity of the peak of each wave, for a plane wave. So that's called the phase velocity. Now, let's think about Schrodinger waves for a minute. Schrodinger waves have a connection between omega and k. The connection is, to remember it, all you have to remember is that omega is really secretly energy, and k is really secretly momentum. If we set h bar equal to 1, and so using the formula that energy is equal to momentum squared divided by mass, that just becomes omega is equal to k squared divided by 2m. Did I say momentum squared divided by mass? Over twice the mass. This is, as usual, with h bar equals c equals 1. In fact, there's no c's in this formula, but there are some h bars. OK, so that would tell you, for example, let's compute the phase velocity of a Schrodinger wave. For a Schrodinger wave, omega is equal to k squared over 2m, not omega squared. Omega is k squared over 2m. So what is the phase velocity? It's omega divided by k, and omega divided by k is equal to k over 2m. That's the phase velocity, k divided by 2m. What's the velocity of a particle in terms of its momentum and its mass? Momentum divided by m. There's an extra factor of 2 here. What's that doing there? The peculiar thing is that the phase velocity is not the velocity of the motion of a classical particle. OK, let's just keep that in mind. That's true. But let's do something else now.
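A one-line symbolic version of that factor-of-two puzzle (a SymPy check of my own, not anything done in the lecture): for omega = k squared over 2m, the phase velocity omega over k is k over 2m, which is half the classical velocity p over m.

```python
import sympy as sp

k = sp.symbols('k', positive=True)
m = sp.symbols('m', positive=True)

omega = k**2 / (2*m)                  # Schrodinger dispersion relation, h-bar = 1
v_phase = sp.simplify(omega / k)      # k/(2*m)
v_classical = k / m                   # momentum divided by mass
print(v_phase, v_classical, sp.simplify(v_classical / v_phase))   # ratio is 2
```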
Supposing I add a constant to the frequency of every wave, I simply take the frequency of every wave, all of them for all k, and I add the same constant. What should I call that constant? I'm going to name for it. It's mc squared, but let's not belabor it. It's not important. Let's just add a constant, an overall constant to the frequency of all waves. What does that do to the group velocity, to the phase velocity? Well, it changes it. Omega divided by k will now have an extra term in it. It'll have an extra term in it, and it won't be the same. Whatever it does, it'll be c over k. It'll have an extra term in it. The phase velocity will have changed. But I tell you right now, there is no physics in just adding a constant to the frequency of every Schrodinger wave. What does it correspond to? It corresponds to adding a constant to the energy of a particle, and adding a numerical constant to the energy doesn't ever change anything. Only energy differences are ever important. All right, can we see what's going on in terms of the Schrodinger wave? Well, the Schrodinger field, psi, is some kind of sum or integral, either a sum or an integral, of creation operators, or rather, annihilation operators in this case, e to the i kx, e to the minus i omega t, where omega is the omega which is related to k. So let's write omega of k. Now, supposing I add to omega just this constant c, what does it do? It adds in or multiplies into each one of these waves, e to the i ct. Sorry, e to the minus i ct. All of the waves in here that make up the Schrodinger field all get multiplied by the same time-dependent phase here. So what it does is if you had a solution of the Schrodinger equation and you added a constant to the phase, it would just change the solution, let's say psi of x and t. It would just multiply it by e to the minus i ct. That's all it would do. It would multiply the wave, sorry, by, I'm sorry, of x and t. It would change the wave by multiplying it by a time-dependent phase. All of the quantities of physical interest that are made out of the Schrodinger field involve psi times its complex conjugate. The probability density, for example, is psi times psi star. All possible expectation values, quantum mechanical expectation values always involve psi times its complex conjugate. So what happens to psi times its complex conjugate when you perform this operation? It cancels out. It cancels out because psi star gets multiplied by e to the plus i ct. They multiply them, they cancel out. As a consequence, there is no physical measurability to adding a constant to all of the phases. And nevertheless, it changes the velocity of the Schrodinger waves. So there must be something about this change of velocity, which is, or this change of frequency, which is not so much, let's say, for the phase velocity. The phase velocity is not something which is ever measured when, or ever appears when you multiply psi times psi star. So it's an artifact of a particular mathematical set of conventions. On the other hand, the group velocity really does mean something. It is the motion of the wave, the motion of it. If you have a wave packet which is centered in some place, in other words, if you have a bunch of waves which add up, a bunch of waves, a bunch of plane waves, which add up to something that looks like that, maybe it has some oscillations in it, but which look like a lump, the whole lump moves with a velocity that's called a group velocity. But let's see if we can see just a little roughly why. 
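A quick symbolic check of that cancellation (the function name psi and the constant c here are placeholders, and psi is kept completely abstract): multiplying any Schrodinger wave by e to the minus i c t changes nothing in psi times psi star.

```python
import sympy as sp

x, t, c = sp.symbols('x t c', real=True)
psi = sp.Function('psi')(x, t)               # any solution, kept abstract

shifted = sp.exp(-sp.I*c*t) * psi            # effect of adding the constant c to every omega
density     = psi * sp.conjugate(psi)
density_new = shifted * sp.conjugate(shifted)
print(sp.simplify(density_new - density))    # 0: psi * psi-star is unchanged
```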
If you take a whole bunch of plane waves, every plane wave looks like this, goes on and on forever. How is it possible to add them up so that you get a concentrated lump someplace? The answer is destructive interference, right? Destructive interference so that the waves are in phase and add up over here and they're out of phase and cancel over here. So the question then comes up. If we're interested in the motion of waves and how waves of slightly different frequency reinforce each other and make high points and low points, let's try to follow how that works. Let's take two waves, not a single wave, but let's take two waves of very neighboring close momentum and try to follow where the reinforcing constructive interference takes place. That would be good enough. Let's just follow where the constructive interference takes place between the two waves. So in particular, take e to the ikx. We could just take sine. Sine is good enough. Sine kx minus omega of kt. And let's add to it, this is just a wave moving along with momentum k, energy omega. And let's add to it another wave, which is, let's put parentheses around this, sine k prime, where k prime and k are very close to each other, x minus omega of k prime times t. Where are they in phase and where are they out of phase? In phase means they reinforce each other. Out of phase, in particular, it'll mean that the two terms here have the same sine and there are other places where they'll have opposite sine. Well, one easy way to find out where they reinforce each other is to ask when is the argument, argument meaning the thing in the bracket here, when is the argument of this wave the same as the argument of this wave? Those are the places that in that case, in that particular place, where the argument of this wave and the argument of this wave are the same, it's clear the waves will reinforce each other. They'll reinforce each other and they will be constructive interference. Where is that? That's at the place where kx minus omega of kt is equal to k prime x minus omega, let's just call it omega prime, omega prime means omega of k prime t. But let's write this in the following way. k minus k prime x is equal to omega minus omega prime t. Two neighboring waves will reinforce each other along a trajectory where x and t are related by this equation. Okay, let's just divide it by k minus k prime. And now assume that k and k prime are very close to each other, just for simplicity. Let's imagine that they only differ by a very small amount, a small fractional amount. Then what is omega minus omega prime over k minus k prime? It's the derivative of omega with respect to k. That's all it is, is the derivative of omega with respect to k, if omega and omega prime are near each other. And so what it says is the place where they reinforce each other will travel along an orbit which is x is equal to the derivative of omega with respect to k. That's right. d omega by dk times t. This is also a velocity. It's the velocity of the place where the waves reinforce each other. And it's called the group velocity. It's the group velocity. Let's calculate it. If omega is k squared over 2m plus an arbitrary constant, this is the phase velocity, the phase. What about the group velocity? That's the derivative of omega with respect to k. Derivative of omega with respect to k. That's equal to 2k divided by 2m, otherwise known as k over m. And what do we get from here? Nothing, zilch. This, of course, this is the group velocity, the group. Notice two things. 
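And here is a rough numerical illustration of exactly that reinforcement argument (all the numbers, the Gaussian weighting, and the grids are arbitrary choices of mine): superpose Schrodinger plane waves centered on k0 and watch where the lump is at a few times. The peak moves at about k0/m, the group velocity, not at the phase velocity k0/2m.

```python
import numpy as np

m = 1.0                                   # mass, with h-bar = 1
k0, sigma_k = 5.0, 0.5                    # central momentum and momentum spread
kvals = np.linspace(k0 - 5*sigma_k, k0 + 5*sigma_k, 401)
weights = np.exp(-(kvals - k0)**2 / (2*sigma_k**2))   # Gaussian packet of plane waves

xvals = np.linspace(-5.0, 40.0, 2001)

def packet(t):
    """Superpose exp(i(kx - omega t)) with omega = k^2/(2m), weighted by the Gaussian."""
    omega = kvals**2 / (2*m)
    return np.exp(1j*(np.outer(xvals, kvals) - omega*t)) @ weights

for t in (0.0, 2.0, 4.0):
    peak = xvals[np.argmax(np.abs(packet(t)))]
    print(t, round(peak, 2))
# The lump sits near x = (k0/m)*t, i.e. it moves at the group velocity d(omega)/dk = k0/m = 5,
# while the crests inside it move at the phase velocity k0/(2m) = 2.5.
```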
The group velocity does not depend on this extra additive thing that I put into each frequency. Number one. And number two, it's exactly the velocity of a non-relativistic particle, momentum divided by mass. So the two went away. The two went away. Yeah, that's right. The two went away, exactly. When I differentiated k squared, I got 2k divided by 2m and the twos went away. So the group velocity is the thing which is most closely associated with the classical motion of a particle. It's also the only thing that's associated with propagation of signals and so forth, not the phase velocity. We can go ahead a little more with this. Let's see, do we want to? Okay, so let's go through that and see why that's true. This c here is not the speed of light, incidentally. This c is not the speed of light. But let's go through that. Let's take a relativistic wave and calculate both the group and phase velocity. For that, we need to know a fact or two. We need to start with the relationship between energy and momentum. Does any of this have anything to do with something that they talk about, light going faster than the speed of light? No, no, it doesn't. It doesn't. It is a good question, but it doesn't have to do with that. It's a separate issue. Could he be thinking of the classic case, the waveguide, where the phase velocity is always greater than the speed of light and the group velocity is always less, and the product is the speed of light squared? Yeah. So let me give you an example of that happening right now. Not for a waveguide; well, yeah, in a sense it's like a waveguide, but no, not a waveguide, just a wave equation for a particle with a mass. We begin, in order to figure out the relation; the wave equation is nothing but the relation between omega and k. I don't need the wave equation, just the relation between omega and k for a relativistic particle, a particle moving close to but not at the speed of light. It's got a mass. It's got a mass. If it didn't have a mass, it would move with the speed of light. Okay. What's the connection? Omega is energy. Let's write it first in a form that we might remember best. What's the relation between energy and momentum for a particle which is relativistic? You remember? The energy in terms of the momentum P? Square root. Square root. Okay. Square root of P squared plus M squared, but do we want to put the speeds of light into here? I think there's a C to the fourth and a C squared, but let's set C equal to one. Yeah, we've already set C equal to one. Later on, we can put back the C's if we want to. It's not necessary. Here's the energy. And if we now set H bar equal to one, this is exactly the same as writing omega is equal to the square root of K squared plus M squared. So for a relativistic wave, a relativistic particle, the frequency is the square root of K squared plus M squared. Now if M happens to be equal to zero, then it's omega equals the magnitude of K. Apart from a speed of light which I've set equal to one, that's the relationship for a wave moving with the speed of light, that omega is equal to K. What's the group velocity? First of all, what's the phase velocity? The phase velocity is omega over K, right? V phase is omega over K, and that's one. What does that mean? One. Speed of light. Right. What about the group velocity? V group. That's the derivative of omega with respect to K. What is that? One. The group velocity and the phase velocity are the same for a wave with mass equal to zero.
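(To keep the bookkeeping of the last few steps in one place, here is a compact restatement of the interference argument and the two dispersion relations used so far. This is only my shorthand for the blackboard work, with c and hbar set to one.)

```latex
% Two neighboring waves reinforce where their arguments agree:
k x - \omega(k)\,t \;=\; k' x - \omega(k')\,t
\quad\Longrightarrow\quad
x \;=\; \frac{\omega(k)-\omega(k')}{k-k'}\; t
\;\xrightarrow{\;k'\to k\;}\; \frac{d\omega}{dk}\, t .
% Non-relativistic Schrodinger waves (any additive constant drops out of the derivative):
\omega(k) = \frac{k^{2}}{2m} + \text{const}
\quad\Longrightarrow\quad
v_{\text{group}} = \frac{d\omega}{dk} = \frac{k}{m} = \frac{p}{m}.
% Massless relativistic waves:
\omega(k) = |k|
\quad\Longrightarrow\quad
v_{\text{phase}} = v_{\text{group}} = 1 .
```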
All right? So forget that this is mass. This is just a, yeah, it is mass. Of course it's mass. Energy equals the square root of momentum squared plus mass squared. All right, but now let's calculate the group velocity when the mass is not equal to zero. Well, both of them. Let's calculate both of them. Omega over K, that's the phase velocity. That's the square root of K squared plus M squared, over K, but let's put it all inside the square root. Let's put it all inside the square root. Then it becomes K squared plus M squared over K squared. Is this bigger than or smaller than one? Bigger than one. K squared plus M squared over K squared is bigger than one. So the phase velocity, omega over K, is bigger than one. Bigger than the speed of light. Okay? Oh, we could still be scared. Waves moving faster than the speed of light. But what about the phase velocity? Let's work that out. Ah, group velocity. Thank you. The group velocity is the derivative of omega with respect to K. All right? What's the derivative of omega with respect to K? Oops, not t; d omega d K. All right? We have to take the derivative of a square root. That gives us one over twice the square root of K squared plus M squared. And then we have to differentiate the argument here with respect to K, so that just gives us 2K. Altogether it gives us K over the square root, which is also equal to the square root of K squared over K squared plus M squared. Isn't that nice? The group velocity is the square root of K squared over K squared plus M squared. The phase velocity is the square root of K squared plus M squared over K squared. They're inverse to each other. This one is less than one. In fact, this is the velocity of a relativistic particle with momentum K. K squared. K squared. Thank you. Yeah. They're just inverse to each other. And as it happens, as was pointed out by somebody, the product of the group velocity and the phase velocity is equal to one. So they're just inverses of each other. Now that's somewhat accidental. That's not a very general fact. Yeah. Buried in there somewhere is that the wave speed must be a function of frequency. What? The wave speed must be a function of frequency. If the wave speed is constant with frequency, you get no dispersion. That's right. That's this case here. Omega is equal, or proportional, to K. That's the case where all waves have the same velocity. So it's only in the case where all the waves of different frequency have the same velocity that the group and phase velocity are the same. That's a good approximation not only for light; it also happens to be a good approximation for sound and other things too, since at least lower frequency sound waves all move with pretty much the same velocity. That's right. That's right. So the difference of these two kinds of velocities is connected with wave dispersion, with the fact that waves change shape. Absolutely. Absolutely. Okay. So that's the story about waves, about group velocity and phase velocity. And as I said, in quantum mechanics, because the effect, for example here, the effect of an additive shift in omega is no physical effect whatever, it's an example of the fact that most times the phase velocity is irrelevant to real physical propagation of any signals or any energy flow or anything like that. Probability flow. Question: a moment ago, you said that you can't measure the phase velocity because the phases cancel between psi and its complex conjugate. At least for the Schrodinger wave, yeah. Okay. Can you measure it for a photon?
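(As an aside before the question gets answered: the massive relativistic case just worked out on the board is easy to check symbolically. This is my own illustration using sympy, with hbar and c set to one; it is not part of the lecture.)

```python
import sympy as sp

k, m = sp.symbols('k m', positive=True)

# Relativistic dispersion relation with hbar = c = 1: omega = sqrt(k^2 + m^2)
omega = sp.sqrt(k**2 + m**2)

v_phase = sp.simplify(omega / k)          # phase velocity: omega / k
v_group = sp.simplify(sp.diff(omega, k))  # group velocity: d(omega)/dk

print(v_phase)                            # sqrt(k**2 + m**2)/k   -> bigger than 1
print(v_group)                            # k/sqrt(k**2 + m**2)   -> less than 1
print(sp.simplify(v_phase * v_group))     # 1, i.e. c squared in units where c = 1
```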
Well, for a photon, they're the same. They're the same. They're the same. Now, it gets a little more complicated when you have various kinds of dispersive materials that the photon can move through, but that's for these squalid states. Is all this waves that don't spread out over time? Yeah, what's true for waves that don't spread out over time are the ones for which all waves have the same velocity. If different components of the wave have different velocities, then it's going to spread out, all right? All right, so that's the case where omega is a nonlinear function of k, basically. If omega is a linear function of k, then they don't spread out. And that's the case of all waves having one universal velocity. Okay, at some point, maybe if somebody comes back and asks me this question about light cones and things, we can come back to it. It was a good question, but it's not quite related to this. All right, let's see. Was there anything else I wanted to fill in before getting to fermions? Yeah, before getting to fermions, let's discuss the issue of momentum conservation. We discussed the issue of energy conservation, and I showed you an example. It was really just an example of how the time translation and variance of the amplitudes that go into a scattering process in quantum field theory conserve energy. I'll just remind you very quickly, and then show you how the momentum conservation works, very, very similar. We will come back to these things, but then I want to get on to fermions. That's what I want to get to tonight. Okay, let's get back. Where were we? I was going to talk about momentum conservation just really quickly. Do you remember how energy conservation worked? Energy conservation we had, for example, some process, it doesn't matter what it is, a bunch of psi-creation operators, no, a bunch of psi-annihilation operators, a bunch of psi-creation operators act on a state with a bunch of particles in it with momenta, and then we calculate the overlap of the inner product or the probability amplitude for a final state with a bunch of other particles, k prime, blah, blah, blah, blah, blah. These fields act on the initial states, these fields act on the final states, and I'll just remind you a little bit how it worked. We said supposing all the processes take place at the same point of space and time, at the same point, or the example that I gave was one in which one particle came in, one particle went out, and they came in and scattered off a point in space where we integrated over all possible times. The particle was there, not this particle, but the target was there at all times, and so the particle could have scattered at any time whenever it gets there, and that's represented by integrating this over time. If you remember, each one of these size, or each one of these sizes made up out of a bunch of things involving a's, e to the i k x's, and e to the minus i omega t's. Omega's being the energies, k's being the momenta. Now, if we said all of this took place at the origin of spatial coordinates, space coordinates, then this is just equal to one. And when the operators act and we're finished calculating everything, the only time dependence in this amplitude comes from these e to the i omega t's. Let me remind you what it looked like. It looked like e to the minus i omega of each momentum, k1 t, e to the minus i omega of k2 t, dot, dot, dot, dot. 
That's for the initial states, and then for the final states there was something similar, e to the plus i omega of k prime 1, blah, blah, blah, times t. Blah, blah, blah means the sum of all of the outgoing energies, and here it's minus the sum of all the incoming energies, omega of k incoming, times t. The last step was to say, if we integrate this over all times, representing the fact that the scattering could happen at any time, the integral of this gives what? The integral over time. A delta function. A delta function of the sum of the final momentum minus the initial momentum. Sorry, final energies minus the initial energies. When we integrate this over time, what we get is a Dirac delta function of, let's call it omega final, that means the sum of all the outgoing energies, minus the sum of all the incoming energies. In other words, the answer is zero unless the incoming energies are equal to the outgoing energies, and this was just a consequence of saying that the process could happen with equal probability at any time. Now, what about momentum conservation? Momentum conservation is exactly the same sort of thing, except it has to do with space translation. If a process can happen at any point of space with equal probability, then it's momentum that's conserved. Let me give you just one example, one very simple example. This is a process in which a particle comes in and splits into two particles. Can this really happen in nature? Yes, it can really happen in nature. Particles decay. And an incoming particle of one momentum can decay into particles of other momentum. But as you know, momentum is conserved, so let's see if we can see why that is. So the incoming particle is described, well, of course, it's described by a particle coming in. The incoming particle is k, and the outgoing particles are k prime one and k prime two. The operator which eats the initial particle, let's call that psi of x. This represents the particle being absorbed at point x, and then two particles created also at point x. That's a simple example. One particle is eaten at point x, and then it spits out two particles also at the same point, for simplicity. But it can happen at any place. It can happen at any place, and that means we have to integrate this over all space. Let's see what we get. Again, these psi's are made of creation and annihilation operators. There's some annihilation operator in here which can annihilate, or just work on this particle, eat that particle. That's some a minus of k, and it has an e to the ikx in it. Let's forget the omegas. We don't need the time dependence in here. So that's the operator which eats this particle over here. And then there's these two operators which put back the particles over here, and they give us some a plus of k1 prime, a plus of k2 prime, and then there's an e to the minus i, k1 prime plus k2 prime, times x, the same x. These operators act. They annihilate the particles. We calculate the matrix element. All of that is trivial; the only thing of interest that's left over is this function of position. This is the amplitude that the process happened at point x. But then, since x is not a special place, it could happen anywhere, so we integrate it over x. What do we get when we integrate it over x? A delta function of momentum conservation. So if a process can happen with translation invariance in space, then it will conserve momentum. In fact, it didn't matter how many particles are coming in and going out in this little demonstration here. It works the same way.
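(Both conservation laws can be compressed into one line each. This is just a restatement of what was done above, in my own notation.)

```latex
% Equal amplitude at every time  ->  energy conservation:
\int_{-\infty}^{\infty}\! dt\;
e^{\,i\left(\sum \omega_{\text{out}} \,-\, \sum \omega_{\text{in}}\right) t}
\;=\; 2\pi\,\delta\!\left(\sum \omega_{\text{out}} - \sum \omega_{\text{in}}\right).
% Equal amplitude at every place (the decay example: k in, k'_1 and k'_2 out)  ->  momentum conservation:
\int_{-\infty}^{\infty}\! dx\;
e^{\,i\left(k \,-\, k'_1 \,-\, k'_2\right) x}
\;=\; 2\pi\,\delta\!\left(k - k'_1 - k'_2\right).
```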
Translation invariance, or the idea that the process could happen anywhere with equal amplitude: that's momentum conservation. The idea that it can happen with equal probability at any time, that's energy conservation. There was another symmetry we talked about a little bit. The other symmetry that we talked about a little bit was a possible symmetry. If we multiply psi by e to the i times, let's call it lambda, in other words, just change the phase of psi, what will that do to probability amplitudes? Well, the answer is it will do nothing to them as long as there's the same number of psi's and psi daggers. If there's the same number of psi's and psi daggers, that's the same thing as saying that the number of particles coming in is the same as the number of particles going out. And so symmetry with respect to changing the phase of the wave function like that, just multiplying it by an overall phase and multiplying its complex conjugate by the opposite phase, that's associated with certain conservation laws, such as conservation of electric charge. Conservation of electric charge is contained in symmetries of this type here. If these were charge carrying fields, charge carrying fields meaning fields for electrons, for example, if they were fields for electrons, then this symmetry would be correct only if there were as many electrons in the initial state as in the final state. This is, of course, a law of nature, that charge is conserved. So the law of nature that charge is conserved is the same as the symmetry under changing the phase of the wave function. These are some examples of symmetries. That was a quick review. Now I want to move on to the notion of a fermion. Yeah? Here, the integration over space, that's over all three spatial dimensions? Yes. That would mean the conservation of all three components of the momentum, exactly. Then if you have two positive k's and a negative k, the k's may have a distribution. In other words, you could have one larger and one smaller. Yes, yes, yes. Is there anything here that could point to some relative probability for different ways that the momentum is shared? Yeah, not for this case here. For this case here, it's basically neutral with respect to the ways the momentum is shared. Here, just nothing. We'll come back to these things. We'll come back to it. I simply wanted to give some very, very simple illustrations of the connection between symmetries and conservation laws and how quantum fields are used to describe physical processes of creation and annihilation of particles. Yeah. A technical question. I have in my notes, we were doing a sum over k of an annihilation operator times e to the plus ikx, times some state. And I wrote that it reduces to one term. The state has a particle, I think. That's right. There's one particle. Only one particle. There's one particle. And I'm annihilating by summing over k. And I wrote that it reduces to one term. And I wanted to ask whether that means that the terms which are annihilating places where there are no particles just give you zero. They give you zero, yeah, the zero vector. Right. The zero vector is zero. It's nothing at all. All right, let's move on now. So, back to the theory of bosons: the characteristic of bosons is, first of all, you can have any number of them in any quantum state.
And as we described a couple of lectures ago, they have a tendency to want to accumulate in the same state. An example of that is if a particle decays and produces a photon, for example. An atom decays and produces a photon. It may produce a photon in any direction, with any momentum, with some probability distribution. But if at the same time as it decays, there happens to be a large number of photons all in the same state, in other words, all moving the same way with the same energy and the same frequency, it will preferentially decay so that the photon that comes out has the same momentum as the extra photons which are already there. That's the square roots of n that occur in the boson operators. Fermions are very different. Fermions don't like each other. They don't like to congregate into the same state. In fact, they simply can't be in the same state. This, of course, was a discovery that was first made by Pauli, I guess. Was it Pauli? Yeah, sure, Pauli exclusion principle. In the context of atoms, two electrons never occupy the same quantum state within an atom. And that was generalized by Pauli himself, Pauli, Fermi, others, to a powerful principle that fermions are particles that simply can never be in the same quantum state in atoms, out of atoms, anywhere, very different than bosons. OK, let's work out some of the mathematics, or some of the bookkeeping. The mathematics of creation and annihilation operators is really a form of bookkeeping, useful bookkeeping. But let's try to work out a theory of quantum fields of fermions, particles that can never be in the same state. Well, we start with the same kind of construction as we did for bosons. Imagine a quantum field psi of x, which is, again, exactly the same formula as for bosons. Summation over all of the allowed momenta I guess I call it k, usually, all of the allowed k. If the allowed k are continuous, it becomes an integral. An annihilation operator for a particle of momentum k, e to the ikx. The states of the system are, again, labeled by how many particles there are in each quantum state. But the rule is never more than one. So the state of the system is a bunch of zeros and ones, never more than one particle in the same state. That's the character of a fermion. Now, are there other kinds of things that can have three particles in the same state, but not more, or that must have more than seven particles in the same state, or other odd varieties? The answer is no. But to prove that the mathematics of quantum mechanics together with special relativity dictate that the only possibilities of fermions and bosons would take us far beyond this class, so we're not going to do that, but we are going to describe what a fermion is. All right, so this is exactly the same. This formula is exactly the same as for bosons. Psi dagger of x, same thing. Incidentally, this is all non-relativistic particles. We have not got the relativistic particles. Eventually, we will. But what do the creation, the same thing with creation operators? A plus of k e to the ikx. But now the rule is a little different than before. Is there a minus sign? Yeah, there is a minus sign. All right, so now let's focus just on one possible momentum of the electron. This could be an electron. Now let's focus on one possible state. Not worry about all of the states, but let's just say some particular quantum state. It's either filled with an electron or it's empty. Those are the only two possibilities. Let's represent that. I guess we can represent that by 0. 
That represents no electron in that state or 1. That represents one electron in that state. This isn't multiplying this. This is just two possible states for a comma here. What is the algebra or the abstract mathematics of the creation and annihilation operators? Let's see if we can make some guesses. You'll make the guess. Creation operator on the state with no particle. You want to guess? One particle. What else could it be? What about a creation operator on the state with one particle? 0. Now 0 does not mean the state with no particles. It just means there's 0 vector. I don't write that as just 0. It doesn't mean anything. Or it's just not a vector at all. It's a non-state. Hm? What's that? It's a operator still linear. Louder. Yeah, they're all linear operators. In quantum mechanics, all operators are linear operators. Oh, when one speaks about an operator, especially in quantum mechanics, it's always a linear operator. I think mathematical terminology is that operator usually means maybe always means linear operator. Otherwise, it's operation, but I'm not sure. But yes, in quantum mechanics, operators are always linear operators. OK, let's see if we can make some more guesses. What about a minus on O? Well, you can't take away that, which is not there. But what about a minus on the state with one particle? No particles. No. No particles. All right, so these are operators which, well, they do what they do. They do what they do. Just to clarify, the 0 there, that means that doesn't happen when you can't do something. Yeah. Let me give an example. If we described harmonic oscillators by wave functions, ordinary wave functions, then there would be a wave function for the ground state. And the wave function for the ground state, let's just discuss this for a minute. The wave function for the ground state would be some Gaussian wave function like e to the minus x squared. That would be the wave function. This would be identified as the ground state, and therefore the state with no excitations. On the other hand, the wave function psi of x, which is literally equal to 0, that's the state 0. No wave function at all. Probability is just 0. It's not a state. It's just a nothing. OK. That's the starting point for the algebra of fermionic creation and annihilation operators. There's nothing in the notation that helps us distinguish between fermions and the bosons, right? Well, yes, I suppose we should. Yeah, it's a good point. We should probably use a different letter. Let's not call fermionic operators A. We could call them alpha. Sometimes C is used. Why don't I use C? OK. The problem is that there are many different not inconsistent, but many different notations in the literature. And I guess for us, we probably are wise to use a notation where bosonic and fermionic operators have different letters associated with them. So let's call it C. This is a common notation also that fermionic operators are labeled C. But they are the analog for fermions of the As for bosons. Let's see if we can figure out what C dagger C, or C plus times C minus, does. What does that operate to do when it acts on anything? So let's figure out what it does. What happens when it acts on O? Well, C minus when it acts on O just gives 0. And C plus isn't going to be helpful in getting it back to being non-zero. 0 is 0. So this is 0. You simply can't take away what's not there. Even if you try to put it back afterwards, you can't take it away to begin with. What about C minus times C plus on O? What does that give? 
C plus, when it acts on O, gives what? 1. And then C minus? O. Let's call it O. Let's call this O as opposed to 0. So this gives back the state. In other words, it gives back exactly the same state. What about C plus C minus plus C minus C plus? If it acts on O, let's see, what does it give when it acts on O? We're just adding those two equations. It just gives O. The ground state. Sometimes known as the vacuum. Let's now do the same thing, acting on 1. What is C plus C minus acting on 1? Well, C minus acts first: when it acts on 1, it takes the particle away and leaves O, the vacuum. And then C plus puts it back, right? So this gives 1 back again. What does it give when it acts on 1? Acting on 1, it gives back the 1. Yeah. That's correct, isn't it? Yeah. OK. Now, what happens when we do the other order, C minus C plus on 1? It's 0, because we can't add another particle to one that's already present if it's a fermion. So this is 0. What happens if we add these? If we add these two equations, we find again that C plus C minus plus C minus C plus on 1 gives 1. Equal to I. Yeah. Both of these operators, well, this operator, this particular combination, when it acts on any state, gives back exactly the same state. So that's written in operator notation by saying C plus C minus plus C minus C plus is equal to the unit operator. It just gives back the same state, whatever it acts on. This combination, an operator times another, plus the interchange. What would you call it if there was a minus here? You would call it the commutator. With a plus here, it's called the anti-commutator. That's what it's called. And the symbol for it is a curly bracket, sometimes written with a plus sign subscript. The curly bracket of A and B is by definition AB plus BA. So the first thing we learn is that the creation and annihilation operators anti-commute to give 1 on the right-hand side. This is incidentally the analog. Yeah, it doesn't matter which order you write them in. Well, no, sorry, they're not the same operator, but it doesn't matter which order you write them in: C plus C minus plus C minus C plus, same thing. This is the analog for bosons of the commutator of A minus with A plus being equal to 1. It's the analog. It's not the same thing, but it's the analog. In that case, for bosons, it's a commutator. In this case, for fermions, it's an anti-commutator. Now, what about the anti-commutator of C plus with C plus? The anti-commutator of C plus with C plus is just C plus C plus plus C plus C plus. It's just twice C plus squared. What does this give when it acts on anything? Well, if it acts on the vacuum, it can create a particle. But then the next one is going to try to create a particle on top of a particle which is already there. You can't do that. If it acts on a particle which is already there, you can't do that either. So whatever it acts on, C plus C plus always gives 0. So C plus C plus gives 0. What about C minus C minus? The anti-commutator of C minus with C minus, that's just twice C minus squared. If it acts on a state with a particle, the first operator removes the particle, and the second one looks for a particle, can't find one, and says 0. If it acts on a state without a particle, it also gives 0. This is the analog for harmonic oscillators of commutators. Is that the state 0 or the operator 0? On C minus C minus squared, is that a vector? No. This is an operator. It's not a vector. Right, it's the operator 0. Yeah, it's the operator 0. Right, it's the operator 0. The operator 0 is such that when it acts on any vector or any state, it gives 0.
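(A brief aside: for a single mode, everything said so far can be checked by representing C plus and C minus as 2 by 2 matrices acting on the basis made of the empty state O and the one-particle state 1. This is my own toy check with numpy, not anything from the lecture.)

```python
import numpy as np

# Basis: |O> = (1, 0)  (empty state),  |1> = (0, 1)  (one fermion in the mode)
vacuum = np.array([1.0, 0.0])
one    = np.array([0.0, 1.0])

# Single-mode fermionic annihilation and creation operators
c_minus = np.array([[0.0, 1.0],
                    [0.0, 0.0]])   # c- |1> = |O>,  c- |O> = 0 (the zero vector)
c_plus  = c_minus.T                # c+ |O> = |1>,  c+ |1> = 0 (can't double-occupy)

def anticommutator(a, b):
    return a @ b + b @ a

print(c_plus @ vacuum)                  # [0. 1.]  -> the one-particle state
print(c_plus @ one)                     # [0. 0.]  -> the zero vector
print(anticommutator(c_plus, c_minus))  # identity matrix:  {c+, c-} = 1
print(anticommutator(c_plus, c_plus))   # zero matrix:      {c+, c+} = 0
print(anticommutator(c_minus, c_minus)) # zero matrix:      {c-, c-} = 0
```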
It really is literally 0. Whenever it acts, it just doesn't give anything. It gives 0. There's an operator which is 0, and there's a vector which is 0. Not a vector, but a state vector, a state vector which is 0. Two different uses of 0, but they're both 0. And in fact, when the 0 operator acts on any state vector, it gives the state equal to 0, the 0 vector. OK, so fermions and bosons have a parallel mathematics to each other, where wherever you see commutator for bosons, you replace it by anticommutator for fermions. And it is a consequence of this that you can't put two particles in the same state. If you try, that's trying to put C plus, C plus on 0, and C plus times C plus is 0. That's what this equation says: C plus, C plus, plus C plus, C plus is equal to 0. So this algebra incorporates and encompasses the idea that you cannot put two particles into the same state described by these creation and annihilation operators. I want to prove a little observation about this. These C's here create and annihilate particles with given momentum. What we've proved, or what we've assumed, some combination of proving and assuming; incidentally, we should write this more completely. Supposing we're now talking about creation and annihilation operators for different momenta, k and k prime. Then I will just tell you this is delta k, k prime. In other words, it's 1 if k equals k prime, 0 otherwise. And these things are always just 0: C plus with C plus, and C minus with C minus, are just equal to 0 for all k and k prime. That's the algebra of creation and annihilation operators, which, if you like, by definition defines the algebra of creation and annihilation operators for fermions. The fact that it really does describe fermions in nature, of course, is a marvelous correspondence between mathematics, simple mathematics, and some physics. But here's an interesting question. As defined up till now, what this says is that you cannot put two particles into the same momentum state. You cannot make two particles carrying the same momentum, two fermions, two fermions of the same kind, incidentally, two fermions of the same species. You cannot put them into the same momentum state. Can you put them into the same position state? What this tells us is that if we find a fermion with momentum k, an electron, we cannot find another one in exactly the same quantum state, with the same momentum. What about the same position? Can we find two particles at the same place, in not momentum but position? So let's try to find out. How do we create a particle at point x? We create a particle at point x using psi of x. Psi of x on the vacuum represents a particle at point x. Oh, sorry, I think we should have psi dagger, creation operators. So the question is, what happens if you try to create two at the same point? Well, let's see what we have. That means we're going to have a sum over a k and a k prime of c plus of k, c plus of k prime. And then let's do it at the origin. Let's just do it at position 0. Then these e to the ikx's drop out; at x equals 0 they're just 1. Well, what is this? Can anybody tell me what this is? All the terms are 0 when k does not equal k prime. No, no, it's 0 in any case. You can rewrite this in the form one half c plus of k, c plus of k prime, plus one half c plus of k prime, c plus of k, because it's under the summation here. But this is just the anticommutator of two c pluses. The anticommutators of c pluses are all equal to 0. So this is 0. That's it. Why? Because the anticommutator of c pluses is 0.
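(Written out, the symmetrization step used here is just this; the notation is my shorthand.)

```latex
\psi^{\dagger}(0)\,\psi^{\dagger}(0)
 \;=\; \sum_{k,\,k'} c^{+}_{k}\, c^{+}_{k'}
 \;=\; \sum_{k,\,k'} \tfrac{1}{2}\!\left( c^{+}_{k} c^{+}_{k'} + c^{+}_{k'} c^{+}_{k} \right)
 \;=\; \tfrac{1}{2}\sum_{k,\,k'} \bigl\{\, c^{+}_{k},\, c^{+}_{k'} \,\bigr\}
 \;=\; 0 ,
```

so two fermions can no more be created at the same point than with the same momentum.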
Why is it 0? You should have asked me that before, because I wrote it on the blackboard. Well, yeah, right, right. All of this is very clear if we're talking about only one k. Right, it gets some real impact, non-trivial impact, when we generalize it in this fashion here to all k and k prime. So there's basically some physics that's going in here. And what is that physics? It's the physics that not only can't you put two particles into the same momentum state, but you can't put them into the same position state, for example. What I'm showing you is that with this algebra, if this algebra is correct, then it's not just that you can't put two particles into the same momentum state, you can't put them into the same position state. In fact, you can't put them into the same state altogether. But this was just an example. With this algebra, it says that you cannot put particles either into the same momentum state or the same position state or any other state, for that matter. So these are fermions. This is the pattern for fermions. Yeah. I've always been made to understand that you could have two electrons with the same momentum if they were far enough apart. Yes, yeah. The correct statement is you can't have two electrons in the same state. Now, what constitutes a state? What constitutes a state may not just be its momentum. It may be its momentum and its spin, which we haven't come to yet. One of the things we're going to come to shortly is the concept of spin, a very important concept. We haven't come to it yet. So at this stage, we're imagining that a particle is characterized by a momentum. And if it's a fermion, you can't have two with the same momentum. Now, the correct statement is, of course, you can't have them in the same state. And if a state requires other ingredients besides its momentum, for example, its spin, then you can't have two particles with the same momentum with the same spin. But spin is only really parallel and anti-parallel. And what about two electrons in two hydrogen atoms that are separated in some way? Could they possibly have the same momentum? Momentum or...? Momentum, I'm just going to stick with momentum. You mean the electrons in the hydrogen atoms? Just the electrons. Well, maybe it's just a bad example. Well, electrons in a hydrogen atom are not in momentum states. They're in atomic energy levels. Now, the atomic energy levels over here and over here for two different atoms are two quite different wave functions. So if you have an atom over here, an electron might occupy a wave function that looks something like this. Another atom over here might have an electron which looks like this. This wave function and that wave function are not the same. They're quite distinguishable by the fact that one of them is over here, one of them is over here. So yes, you can have two electrons in two different atoms, both, for example, in the ground state, in some particular state. Yes. But you can't have two electrons in the same atom with the same quantum numbers. Being more generic, would that be more generic than just talking about atoms, or could you just use the term systems? Is that used in the same way? States is the right statement. The right nomenclature is you can't have two electrons in the same quantum state. In the same quantum state. A system means a collection of degrees of freedom, but it doesn't imply a particular state. So an electron is a system. An electron can be in a particular state. So the nomenclature is: an electron is a system.
The state of the electron is perhaps some particular orbital state, orbital and spin state in a particular atom. Can I just pair that one further? Let's suppose you have an electron beam in a picture tube, a cathode ray tube. No. You have a stream of electrons. So that means that no two of those in that little glass jar would have the same. Yeah, they really look like this. A packet over here, another packet over here, another packet over here, another packet over here, and they're not in the same state. But not in the same momentum. They're not in the same momentum, not in the same position. Right. In big contrast to the idea of photons, photons march down the axis all, for example, in the same plane wave. Right. That's why you can't build up a coherent beam of electrons in the same sense that you can have a coherent beam of a, you just can't put more than one of them in the same state. So a fermion is in many respects a bigger abstraction than a boson. Can you have two electrons in the same place with different momenta? No. No, no, no, no, no, no, no. Remember, you either describe a thing by a position or a momentum. They're not both. The position and momentum are intertwined. Mutually incompatible. So it's only like spin and position that's built. That's because spin commutes with momentum and position. But position and momentum don't commute. So you can't speak about an electron at a position with a momentum. But you can speak about an electron at a position with a particular spin. Is there some sense in which this bound electron over here and that bound electron are, therefore, in that sense, in a position representation or something? This is not. They're not a position eigenstates because the wave function isn't that concentrated, but they approximately are. They're approximately located at the position of the atom. You can't put two of them into the same atom in the same way, but you can have two different atoms, which is like having an electron over here and an electron over here. That's fine. Yes? So what is the definition of a state? Yeah. What is the definition of a state? Hmm. I think I would have to refer you either to my own lectures or a book on quantum mechanics. But it just means it's the quantum mechanical analog of the configuration of a system. The configuration of a system. A system is a collection of degrees of freedom. It's a collection of coordinates and momenta and things like that. A state is a particular value in classical mechanics of the positions and momenta. In quantum mechanics, the state of an electron is everything you have to specify about the electron to state all the things that can be measured about the electron. It's a wave function, actually. If you have an electron here and electron in the same state, can they be in the same state? Obviously not. One of them is here and one of them is in the endromeda galaxy. That's pretty different. It's like saying, suppose you have one in Nevada and one in Connecticut. Can they be in the same state? Did they have the same state of momentum? Yeah. What's that? They have the same momentum. I mean, they can both be going east at the same time. Well, if you know where they are, it means you don't really know their momentum with infinite precision. So if you know anything about where they are, it means that you don't completely know the momentum. But yes, to within the extent that an electron in Nevada can have a pretty well-defined momentum, but still not so well-defined that it might be in Connecticut, right? 
Well-defined, but not so well-defined that it might be in Connecticut, then yes, you can have an electron in Nevada and an electron in Connecticut which have pretty much the same momentum. But if you really try to make that momentum infinitely sharp, that electron in Nevada would be equally likely to be in Connecticut, the one in Connecticut would be equally likely to be in Nevada, and you simply can't put both into the same state. You just multiply the uncertainties together and you get h bar or greater. Okay, so we now know what a fermion is. More or less. We use the algebra of these operators in a very similar way. We write down expressions involving electron operators which annihilate electrons, or which take electrons out of the initial state, and creation operators which put electrons into the final state. We can even multiply them by operators for photons, photons incidentally being bosons. And we can write mathematical expressions for the process of some number of electrons coming in, some number of electrons going out, plus a bunch of photons. And we can write similar kinds of bookkeeping as we used to describe the simple processes involving bosons. And we're not going to go into that now. I mean, it is the story of particle physics, how you describe those reactions. But before we get to that, I want to talk a little more about the differences between fermions and bosons. Questions? Yeah. Let me ask a question. Is the Pauli exclusion principle locally communicated somehow? What does that mean? Is it globally enforced instantly, somehow? Now, if you had two different processes going on in Nevada and Connecticut, and they were both about to create a particle; well, if one is known to be in Nevada and the other is known to be in Connecticut, then they're not in the same state, period. Yeah, but let's say it's not known. Let's just say that there's a slight probability difference. So there's a certain amount of fuzziness. Here's the situation. Let's imagine that Nevada is a point, and Connecticut is a point, instead of a big state. Here's a point. Here's Nevada, and here's Connecticut. An electron located right at the center of Nevada, I'll call N. N for Nevada, not N for neutron, not N for anything else. Let's call it capital N, incidentally. N is not an integer, it's just Nevada. All right? An electron in Connecticut, I will call C. Now, there's a quantum state of an electron which is a superposition of being in Nevada and Connecticut. It has some probability of being in Nevada and some probability of being in Connecticut. Nevada plus Connecticut. This is one electron with equal probability; you should put a 1 over square root of 2 here to make the probability come out to be 1. This is a state of an electron which has equal probability of being in Nevada or Connecticut. It's a quantum state, and if you look for that electron, you'll either find it at the center of Nevada or at the center of Connecticut with equal probability. Incidentally, there is another state like this, which is orthogonal to it, which has a minus sign here. What you can't have is two electrons both in this quantum state. You can have one electron in Nevada, one electron in Connecticut, but you can't have two electrons which are in this state of both being equally shared between Nevada and Connecticut. Now, that has implications.
It's probably not at all obvious to you what those implications are, but supposing we try to make two electrons in this state: you can't do it. You can't have two electrons which are equally shared in this way between Nevada and Connecticut. Can one have a plus sign and the other have a minus sign? Yes. One can have a plus sign and the other have a minus sign. Yes, yes. One electron can be in the state with a plus sign, the other one in the state with a minus sign. And we're talking about at the same time here. We're talking about at the same time. Absolutely. I was talking about the same time. So the ambiguity then is that Nevada and Connecticut are spatially separated, and so there's a question of what you mean by at the same time; that's the non-triviality. But when you think about it in momentum space, you don't have the spatial separation. Remember, we're doing non-relativistic physics, so there are no limitations of the speed of light in making experiments and so forth. At the same time, I mean the same time. Somebody can jump from here to there in no time at all and check. Does that change for relativistic physics? Not much. Not much. There is an additional fuzziness because of relativity about how particles are located. But for the moment, I wanted to deal with the non-relativistic ideas. Let's come to the ground state of a system of bosons and the ground state of a system of electrons to see how different they are. Let's imagine the three-dimensional world, and since I can never draw three dimensions, I will only draw two. But I want to plot all of the possible allowable momenta, two-dimensional momentum. This is the x component of momentum. This is the y component. I really want three dimensions, but as I said, I can't draw it. So I want to draw a system of electrons, electrons confined to be in a certain finite volume. The finite volume is very much like when we studied one dimension: we studied electrons on an interval of length L with periodic boundary conditions, but that's not so important here. If electrons are confined, or particles are confined to be within a given volume, the momenta are discrete. Remember, when the electron was moving on the periodic interval, it had momenta which had to be, what was it, n over L, and there's a 2 pi in there: 2 pi n over L is equal to k. Right. So what did we say? We said if we drew the momentum axis, the possible values of momenta were discrete. Now, that was a consequence of the space being finite, the space available to the particle being finite. The bigger L is, the closer these levels get. Let's continue to think about things confined in a finite volume. In that case, the x and y and z components of the momentum would all again be discrete. So the allowable values of the momentum vector would form a lattice. This is not space. This is momentum space. The values of the possible momentum vectors. Let's put zero right over here at the center, and then they go on and on forever. It's not bounded. It just goes on and on forever, just like this axis goes on and on forever. This n can be anything. So these are the possible allowable values of the momentum of a particle in a periodic three-dimensional box. Let's start with bosons. Let's take a collection of n bosons, all identical to each other. They're all the same kind of particle. And let's put them into our box. And let's look for the ground state, the state of lowest energy. Well, let's put the first boson in.
Where will it go if it's supposed to go into the state of lowest energy? Zero. Zero what? Zero momentum. Zero momentum. In other words, we'll put the boson in at zero. This is kx, ky, kz equal to zero, right at that point. That's the single particle state of lowest energy. Remember, the energy is k squared over 2m. The lowest possible energy for that boson is to carry zero momentum. If we want to find the ground state, we just put in the particle with zero momentum. And now supposing we want to add another particle to it, another boson of the same kind. Where do we put it to keep the energy as low as possible? Same place. Right, just the same place. We put two particles in at the same place. In fact, we put all n particles in at the same place. That'll be the state of lowest energy. Now, that is called a Bose condensate. The idea of a Bose condensate is a large number of particles all in the same state, but in this case, all in the lowest energy state available. Incidentally, the lowest energy state available, since it has the lowest momentum, is a wave function which fills up the box. And so the wave function of each one of these particles fills up the box. And we simply put a lot of particles in, all of them very uncertain about where they are in the box. Put a lot of them in. That's, first of all, called a Bose condensate, and it's also the ground state of a system of bosons. The large number of particles builds up a kind of classical field strength, so that the Schrodinger wave function is pretty classical. Okay, that's very, very different than what would happen if we had n fermions. Let's forget spin. We're not going to be interested in spin now. Where should we put the first one? We're going to put them in one at a time. We want to keep the energy as low as possible. Okay? Obviously, the first one to go in, we want to put into the lowest state. What about the next one? Whatever the next energy level is, over here, the next one would maybe go over here. There happen to be four states of the same energy. So we could put it here, here, here, or here. This is in two dimensions. In three dimensions there would be, what, six states altogether. So by the time we put in five electrons, they will fill up these states here. Now let's put in some more electrons. We can't put them in the same state; forbidden. So the next one will go in the next lowest energy level, which will be over here. Incidentally, the energy, being proportional to k squared, is proportional to the distance squared from the origin. So what we want to do to keep the energy as low as possible is crowd the electrons in momentum space into as small a sphere as we can. In other words, when we have a lot of electrons and we put them in, they will fill up, in momentum space, well, of course, they'll fill up a sphere. There might be a little confusion about the jaggedness of the spherical edge here, but that's not important. This, of course, will have a lot more energy than the corresponding boson system. The corresponding boson system had all of its particles at the lowest energy. If you look for a particle, all you will find is a particle of zero momentum. In the ground state of the many-fermion system, you will find that the more fermions you put in, the higher the maximum momentum will be. There will be lots and lots and lots of particles of large momentum and, of course, only a handful at low momentum, because there aren't that many states at low momentum.
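(As a small aside, the filling pattern described here is easy to play with numerically. The following is my own two-dimensional toy, not something from the lecture: allowed momenta on a lattice, energies k squared over 2m, one fermion per state, lowest energies first, with units chosen so that 2 pi over L = 1 and m = 1.)

```python
import numpy as np

N_FERMIONS = 50   # how many identical fermions to place (spin ignored, as in the lecture)
GRID = 15         # allowed momenta: kx, ky = -GRID .. +GRID, in units of 2*pi/L

# Every allowed lattice point in momentum space, with energy k^2 / 2m (m = 1)
points = [(kx, ky) for kx in range(-GRID, GRID + 1) for ky in range(-GRID, GRID + 1)]

def energy(p):
    return 0.5 * (p[0] ** 2 + p[1] ** 2)

# Bosons would all pile into (0, 0).  Fermions take one state each,
# lowest energies first (ties broken arbitrarily), carving out a rough Fermi disk.
occupied = sorted(points, key=energy)[:N_FERMIONS]

k_max = max(np.hypot(kx, ky) for kx, ky in occupied)
print("number of occupied states:", len(occupied))
print("largest occupied |k| (the Fermi 'radius'):", k_max)
print("total ground-state energy:", sum(energy(p) for p in occupied))
```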
There are clearly more states within a given shell at large momentum. Well, first of all, the energy of the system will be much higher; putting fermions into a box costs more energy. Number two, there's this pattern that they fill up a sphere out to some boundary, where the boundary is determined by how many electrons you have. Depending on the number of electrons, it will fill a sphere, and that sphere is called the Fermi sphere. It's called the Fermi sphere. The history of this is a little bit confused. It was Dirac who figured out fermions, and that's why they're called fermions. And it was Einstein who figured out bosons, and that's why they're called bosons. They all contributed very heavily to the subject. Anyway, this is the ground state of a collection of fermions: it fills the Fermi sphere. Let's draw over here the corresponding thing in momentum space. This is all in momentum space. I don't need to draw these lines. I just want all the particles condensed at the center of the momentum space for bosons. Bosons, fermions. Let's put a little bit of this lattice in so we can see where the next states are. What's the first excited state? For the bosons, they're all at the center. What's the first energy level above the ground state? Well, that's easy. We take one boson and move it to the next energy level. We take one boson out of the ground state and put it into the first excited state. Which one, which boson do we do? It doesn't matter, they're all the same. So we have n minus one bosons in the ground state and one boson in the first excited state. There are four first excited states; that's not so important. The second excited state, it might correspond, depending on the details, to putting one boson in the second excited state or two bosons in the first excited state; not electrons, bosons. A variety of different things. But you can see the pattern: the way to excite the system, to give it more energy, is to take particles out of the ground state and put them into higher energy states. Okay. Let's come to the fermion system. Well, it's also very similar. But let's try to figure out exactly what would correspond to the lowest, yeah, question? Okay. What would correspond, maybe not to the exact lowest energy state, but to the very, very low energy states above the ground state? We want to give it just a little bit of energy. What's the cheapest way to give it energy? Should we take an electron from the center here and move it outside? We can't move an electron to some place inside the Fermi sphere, because there's already an electron there. I think, as you were about to say, it wouldn't be too smart to take one deep inside the Fermi sphere and bring it to the outside, because that would take a lot of energy. It takes how much energy? You're going to take one out at low energy and put it back at relatively high energy. That's pretty costly. What's the cheapest thing? Take one from very near the surface of the sphere, inside the sphere, and move it to the outside. How much energy does that cost? Not much, because you've taken a pretty high energy electron and just displaced it a little bit in energy. So that creates an electron out here with a little bit of energy above the edge of the Fermi sphere, but it also leaves something. It leaves a hole in the Fermi sphere behind it. That hole is a thing. Let's suppose these are really electrons. Okay, let's consider a real-world situation. We have a box, and that box is full of positive charge.
The positive charge is protons, of course, or nuclei. But for simplicity, let's just imagine the positive charge is smeared over the box, and then we put electrons in on top of it. And we put enough electrons in to fill up a Fermi sphere like this. It's electrically neutral because there are as many electrons as there are protons in the box. So it's electrically neutral. In the ground state, the fermions fill up this Fermi sphere here. Now we take an electron out from just below the Fermi surface and put it in just above the Fermi surface. We have not changed the charge. It's still electrically neutral. But we have created one extra particle with a little bit of extra energy above the Fermi surface, and left a hole in the Fermi surface. Left a hole, a missing particle. A missing particle; just think of it literally as a hole. It's an absence of a bit of negative charge, a bit of negative charge with a certain momentum over here. It can be thought of as the presence of a positive charge, a hole. A hole can be thought of as a kind of particle in this context. A hole behaves as a particle. Is it a particle with negative energy because it's below the Fermi sphere? No, let's see how much energy it takes. How much energy does it take to move a particle from here to here? Well, the particle that we took out, let's call its energy epsilon. And we put in another particle of energy epsilon prime. When we took out the particle of energy epsilon and put in the one of energy epsilon prime, how much energy did that cost? The difference. The sum or the difference? I think the difference. You think the difference? I think the sum. Okay? Well, no, no. I'm sorry, I'm sorry. Let me be more precise. It is the difference, but I'm getting tired. The point is, it costs some energy. It costs some energy. I'll tell you how much energy later. But do you have to push energy into the system in order to pop an electron out to the next energy level? Yeah. Well, the point is how much energy do you have to put in. You have to put in the amount of energy that it would take to remove this particle and bring it exactly to the Fermi surface, and then an additional amount of energy to bring it from the Fermi surface off the Fermi surface. Here's the picture. Here's the Fermi sphere. The Fermi surface is the edge of the Fermi sphere. All right? Let's take a particle just below the Fermi surface and bring it from here to here. How much energy does it take? Well, it takes two bits of positive energy. We can count it this way. The first bit of positive energy is the energy that it would take to remove this particle and bring it right to the Fermi surface. That's a little bit of positive energy to bring it to here, and then a little more positive energy to bring it to here. So it's actually the sum of two positive terms, the two positive terms being the absolute values of the differences of the energies from the Fermi surface. You have to add two little bits of energy. One of them can be thought of as the energy of the electron over here, measured relative to the Fermi surface, and the other one can be thought of as the energy of the hole. They're both positive. This one has a bit of positive charge because it's an electron. This one has a bit of negative charge because it's a missing... no, sorry. This one has a bit of negative charge because it's a positron. No! I did that on purpose.
This one has a bit of negative charge because it's an electron, and this one has a bit of positive charge because it's a hole. Thank you. Right. Now, this electron could be moving around and then suddenly pop into that hole. The hole is available now. The hole is available for the electron to drop down into. This does an electron can drop from an excited level of an atom down to an unoccupied quantum state. If there's an unoccupied quantum state, this electron can drop down into this hole. What will happen when an electron drops its energy and drops down into the hole? Well, energy has to be conserved, so the energy of the electron has decreased. A photon is emitted. You can say it another way. You can say an electron comes together with a hole and annihilates the electron, annihilates the hole, and a photon or photon or more than one photon, some number of photons goes out. So you can speak about this in the language that a hole is a kind of particle because it's a fake particle. It's not a real particle in the real sense of the word, but it's a kind of fake particle that has the opposite charge of the true electron. It can be counted as having some positive energy, and when the electron drops down into the hole, you can think of it as the electron hole coming together, annihilating and producing some heat, radiation, whatever, the process of annihilation. Yeah? When you say you're moving these electrons, you're moving from one lattice point to another lattice point, but you're not. Lattice in momentum space. Yeah. No, no, no, you're moving them from one energy level to another. So it's the analog of having an atom. All right, let's speak about the analog of what this means in atomic physics. In atomic physics, the box could be not the momentum space box, but the box that confines the electron could just be the atom. The proton at the center is some positive charge, and we fill it up with some negative charged electrons. Of course, in this case, the proton charge is not smeared over the atom, so it's concentrated at the center. It's a little bit different, but we take an atom which has the same number of electrons as protons, and we start with all of the electrons in the lowest available states. The lowest available states, that's the ground state of the atom. That's the analog of filling up the Fermi sphere here. Now a photon comes along and hits an electron and kicks it up into an excited state. Photon comes along, hits an electron just below the Fermi surface and kicks it up just above the Fermi surface. A low energy photon, so it doesn't have much energy. It can't take an electron from the center here and bring it to the outside. That would take a lot of energy. That's a very low energy photon, and so it comes in, and all it can do is hit an electron, or be absorbed by an electron near the Fermi surface, and kick it just above the Fermi surface. Another language is the photon has annihilated and been replaced by a particle, or an electron, and a hole. So it's particle hole creation by taking the energy from a photon. In the same way that a photon hitting an atom excites the atom, of course when it excites the atom, it leaves a vacancy. The vacancy in this language is called a hole. So across the whole sphere, if a photon hits anything in that sphere, will it kick off something near the atom? 
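(Going back to the energy bookkeeping from a moment ago: with epsilon for the state the electron left, epsilon prime for the state it landed in, and epsilon sub F for the Fermi surface, the cost can be written either as one difference or as two positive pieces. This is just the arithmetic spelled out in my notation.)

```latex
\Delta E \;=\; \varepsilon' - \varepsilon
 \;=\; \underbrace{\left(\varepsilon' - \varepsilon_F\right)}_{\text{energy of the electron above the surface}}
 \;+\; \underbrace{\left(\varepsilon_F - \varepsilon\right)}_{\text{energy of the hole below the surface}}
 \;>\; 0 .
```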
Well, if it's a low energy photon, let's say, you know, some radio wave photon, very low energy, and it tries to get absorbed by an electron near the center, it just can't give that electron enough energy to get out of the Fermi sphere. It can't jump to any place inside the Fermi sphere, because every place is already occupied. So the answer is that a low energy photon can't be absorbed by an electron in the interior, and the cross-section doesn't grow with the volume as the sphere gets bigger. It goes with the surface, not the volume? Yeah, it all comes off the surface, that's right. A high energy photon, of course, can kick an electron out from deep inside. The language is that things inside the Fermi sphere are in the Fermi sea, S-E-A. This is deep in the Fermi sea; this is near the surface of the Fermi sea. Only a high energy photon, or a collection of high energy photons coming together, can kick an electron from deep inside the Fermi sea to the outside. A low energy photon can take a shallow electron, one just beneath the surface, and put it just above. Yeah? Is there anything that keeps the electrons from interacting with each other directly, without using photons? Nothing, except that we're ignoring it. Interactions between electrons make the story somewhat more complicated. But interactions between electrons are pretty weak; they're not very strong, and for the most part they can be ignored inside a metal, for example. If a photon has excess energy, enough to kick multiple electrons, then what happens? Does one electron get extra excited and then fall back? Well, anything can happen as long as it's consistent with the conservation laws. But typically it's a much lower probability process to create two electrons and two holes. If it kicked out two electrons, it would make two holes, so that would be a process in which four particles were created altogether, and that's much less likely. It's more likely to super-excite one electron. Yeah, that's right. Another question: the photoelectric effect is sort of what we're talking about here. Is it the case that two photons can be absorbed simultaneously by an electron, so that one of them alone wouldn't do it but both of them together will? Yeah, that's allowed; you need both of them to conserve the energy, and it's just that the probability is small. If you have a low energy photon that doesn't have enough energy to excite any of the electrons, does it just not get absorbed? Well, it only takes a tiny, tiny bit of energy to kick an electron from here to here. But yes, there is some energy which is so low that it can't do anything; that's true. But does it then not interact at all, or does it get absorbed? Pretty much it just passes through the system. Well, that's not entirely true either; a very low energy photon will find other ways to interact. It can set a whole collection of electrons sloshing back and forth, but that's more complicated. You mean they're actually moving back and forth, in space? In space, yes. But that's a more complicated story. Why do the electrons have only a weak interaction with each other? You said the electrons would not interact with each other very strongly, but don't they all have the same negative charge?
Yeah, but they're also in a background sea of positive charges, so the whole thing is electrically neutral. The electrons are interacting with each other, but the electron interactions are pretty weak; partly they're weak because the electron charge is screened by the proton charges. But I really didn't want to get into the electron interaction. That's a separate issue that we've quietly neglected. Will we talk about that screening? We can talk about it, but not now. Screening is an important phenomenon in particle physics, and it has its analog here, but let's come back to it another time. What I really wanted to get to was the Dirac equation. I have 15 minutes to give you the very, very simplest version of the Dirac equation, which we will come back to repeatedly; the very, very simplest version, just to complete the story here. It's obvious what story I'm starting to tell you: I'm telling you the story about antiparticles. Holes are antiparticles. But what does this have to do, not with a metal in a box, but with relativistic quantum field theory, the vacuum, particles and antiparticles? So I want to spend just a couple of minutes giving you the very, very simplest version of Dirac's logic, and to keep it really simple, let's stay with one dimension of space. Okay, one dimension of space and only one dimension: x. We'll come back to more dimensions next time. And let's write a wave equation. This wave equation is going to be for a field that I'm going to call psi. This is not an electron, it's a neutrino... ah, I don't know what it is. It's a particle. It's a fermion, but it happens to move with the speed of light. Okay, it moves with the speed of light, and that means omega equals k, right? Omega equals k is the relationship between frequency and wave number, or between energy and momentum; this is with c equal to 1. Omega equals k. There's another possibility: omega equals k is one, and what's the other possibility? Omega equals minus k. Okay, but let's take omega equals k. What kind of wave equation does that correspond to? Well, that corresponds to the wave equation d psi by dt equals minus d psi by dx; these are partial derivatives. This is even simpler than the Schrodinger equation. Why do I say that this is omega equals k? Let's imagine that psi is equal to e to the i, kx minus omega t. d psi by dt brings down a factor minus i omega; d psi by dx brings down a factor i k. If the relation between omega and k is just omega equals k, corresponding to c equals 1, then that just says d psi by dt is equal to minus d psi by dx. This is a very, very simple wave equation, and it describes waves which only move in one direction. They cannot move in both directions. They move only one way; is it to the right or to the left? Can you tell from this? Yeah, these waves move to the right. What would it take to make waves that move to the left? You'd want d psi by dt equals plus d psi by dx. With either of those two equations, you have particles which move in only one direction, let's say in this case to the right. Let's suppose these are fermions, and let's even imagine they carry electric charge, so they're electrically charged fermions. There's something a little bit crazy about this: omega equals k, energy equals momentum. Can k be negative? Sure, why not? I can write e to the i k x, and I can also write e to the minus i k x. I can write both possibilities.
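A compact restatement of that plane-wave check (same conventions as the board work, with \(c = 1\)): take

\[
\psi(x,t) = e^{i(kx - \omega t)},\qquad
\frac{\partial \psi}{\partial t} = -\,i\omega\,\psi,\qquad
\frac{\partial \psi}{\partial x} = i k\,\psi ,
\]

so the equation \(\partial\psi/\partial t = -\,\partial\psi/\partial x\) forces \(-i\omega = -ik\), that is, \(\omega = k\). Its general solution is any profile \(\psi = f(x - t)\), which slides rigidly to the right at the speed of light; flipping the sign on the right-hand side gives the left-movers, \(\psi = f(x + t)\).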
But there's something a little bit dangerous about negative values of k. They carry negative values of omega, and negative values of omega mean what? Negative energy. This equation describes electrons of positive and negative energy. Let's just draw a picture: omega equals k, omega versus k. It's just a straight line. For k positive, the energy is positive. For k negative, the energy is negative. This equation simply describes electrons which can have either positive or negative energy. That sounds like trouble. Why is that trouble? Why are electrons with negative energy trouble? Well, remember what happens in an atom to an electron which is excited: it radiates and drops down to a lower state. If there were unboundedly low energies, arbitrarily negative, then electrons, let's say positive energy electrons, could emit photons and drop down in energy. But they wouldn't have to stop at zero energy. They could keep dropping down and dropping down and make arbitrarily negative energy electrons, while at the same time pumping up the energy of the electromagnetic field. That sounds very dangerous, and it is dangerous. Something is wrong with this: you can't have arbitrarily low energies. Well, what is the lowest energy state of the multi-electron system? What is the Fermi sphere in this case? Let's suppose we have a zillion electrons, as many as we like, and we put the first electron into the system. Where does it go? It goes down into the lowest energy state. But the lowest energy state is way to hell down there somewhere, off at minus infinity. The next one, off at minus infinity. This is very dangerous; this is a bad idea. What Dirac said, and this is the simplest version of the Dirac equation, a Dirac equation in one dimension, is: let's pretend that all of the negative energy states are filled. Just like filling the Fermi sphere, fill all the negative energy states all the way up to zero energy. So all of these states are filled; there's an electron in every negative energy state. You can't make a state of lower energy than that, incidentally. That's the lowest possible energy you can have. You've filled up all the negative energy states; every time you put a negative energy particle in, that lowers the energy, and you've put in as many negative energy particles as you can. Now if I start putting in positive energy electrons, that will increase the energy. So the very lowest energy state, the thing that you could call a ground state, is to fill it with as many negative energy electrons as there are negative energy states. And that's your starting point. Instead of a starting point with no electrons, your starting point is to fill up completely the negative energy sea. That's the vacuum. That's empty space according to Dirac's theory, Dirac's simple version. That's the state of lowest energy; let's call it the vacuum. And now what can you do? The only thing you can do is to take an electron out of the negative energy sea and give it positive energy. So you can take an electron from over here and put it someplace else. What are the properties of a state with one electron over here, with positive energy and positive momentum, and one missing electron over here? It's a missing electron. Well, how much momentum did the electron we removed have? It's negative; it had negative k.
Well, we removed a negative momentum electron. So how much momentum does the hole have, positive or negative? Positive. Positive. Why is it positive? Because it's a deficit of negative momentum. So what we've done is create a positive energy electron with positive momentum, and a hole, also with positive momentum. We've created two positive momentum objects, both moving to the right: the electron with minus charge, and the missing electron, the hole, with positive charge. Negative charge and positive charge. That, in summary, and we're going to do this more completely, was Dirac's theory of the electron and the positron: the positron was simply a hole in the infinite negative energy sea. He wrote down this equation, or the generalization of it, looked at it and said, I love that equation, but it's got this terrible disease of having negative energy states. No problem: these are fermions. You can't put more than one electron in a negative energy state. So just fill them up completely and call that empty space. Having called that empty space, what can you do to it? You can remove one of the negative energy electrons and put it back as a positive energy electron. That leaves a hole. And Dirac said that hole is a particle of positive charge. Now, at first he was very excited because he thought it was the proton. It wasn't the proton. It's easy to prove that the mass of the particle here has to be the same as the mass of the hole; these are particle and antiparticle. This was the discovery of particles and antiparticles, and it was equations like this which led to it. Notice what a disaster you'd be in if you tried to describe a boson by this equation. That would be a madhouse. You would just keep putting particles into the negative energy states, as many as you like, and there would be no ground state. You could just keep putting in more and more negative energy bosons. So this wave equation only makes sense for fermions; it would not make sense for bosons. And that was Dirac's great discovery: that fermions can be described by equations like this which would not make sense for bosons. Why? Just for the reasons I explained, and the idea of antiparticles. We'll come to antiparticles for bosons next time, and we'll discuss the Dirac equation in more completeness, the real Dirac equation; this is an elementary version of it. There are a number of things wrong with this electron. For one thing, it only moves to the right. That's crazy. For another thing, it moves at the speed of light. That's crazy. So there are a number of things wrong with this electron, but... Is it a neutrino? A neutrino is a fermion, and yes, this is like a neutrino. But it's not quite a neutrino, because the neutrino also has mass.
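A compact summary of the hole bookkeeping (my tabulation, not something from the lecture; write \(-e\) for the electron's charge): remove from the filled sea an electron with momentum \(k < 0\), energy \(\omega = k < 0\), and charge \(-e\). Relative to the vacuum, the resulting state carries

\[
p_{\text{hole}} = -k > 0,\qquad
E_{\text{hole}} = -\omega > 0,\qquad
q_{\text{hole}} = +e ,
\]

so the hole is a positive-energy, positive-charge object moving to the right: exactly the antiparticle Dirac identified with the positron once the equation is generalized to three dimensions and given a mass.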
(November 2, 2009) Leonard Susskind gives the fifth lecture of a three-quarter sequence of courses that will explore the new revolutions in particle physics. In this lecture he continues on the subject of quantum field theory, more specifically, energy conservation, waves and fermions.
10.5446/15068 (DOI)
Stanford University. Well, we're going to study angular momentum tonight. What we're after is one of the most important, interesting, unintuitive, and yet very, very simple aspects of elementary particles, or anything else for that matter: spin, spin angular momentum. Let's first talk a little bit about what angular momentum is. Let me pretend for the moment that you know what angular momentum is, which you probably do; you have at least a rough idea. It's got to do with how fast the system is rotating and how massive it is and that sort of thing. Angular momentum is, first of all, a vector quantity. It points in a direction. The mathematical definition of the angular momentum is that its direction as a vector points along the axis of rotation. So that's kind of obvious. There's a right-hand rule: if the system is rotating about an axis, you don't know offhand, without a definition, whether the angular momentum is pointing this way or that way; all I told you was along the axis. So you need a rule, and the rule is the right-hand rule. If it's rotating that way, you wrap your fingers around the direction of rotation and your thumb points along the direction of the angular momentum. That's not something you prove; that's something you define, the direction of angular momentum. It's built up out of the mass and the speed of rotation and that sort of thing; the moment of inertia, to be exact, which is a combination of the mass and the size of the system, and the angular velocity. And any ordinary composite system, this cup here, made up out of lots of atoms, can actually have two kinds of angular momentum. One is called orbital angular momentum, and the orbital angular momentum is a consequence of the motion of its center of mass. Angular momentum, first of all, is relative to a particular axis. If an object is moving around an axis, it has angular momentum relative to the axis, even if the thing itself is not spinning. Spinning is a good word: spinning means rotating about some internal axis. The object has angular momentum by virtue of the fact that it's moving, let's say, in a circular orbit around my head, you see; it has angular momentum. That's not the angular momentum we're interested in; that's orbital angular momentum. The angular momentum we're interested in is the angular momentum of spin. So what is the spin angular momentum? The spin angular momentum is whatever angular momentum would be there in a frame of reference where the system is at rest, where the center of mass of the system is at rest. I don't mean that it's not rotating; I mean that its center of mass is standing still. In the frame of reference where its momentum, ordinary momentum, is zero, any leftover angular momentum is called spin, basically. Now, I wish I had a basketball or something that I could spin, just to illustrate. Ordinarily, in ordinary thinking about things, you can take a basketball and you can start it rotating, and you'll say that's the same object as the original basketball which wasn't rotating, except that it's now rotating. But if we also keep track of the fact that in quantum mechanics the amount of angular momentum is discrete, that you can't interpolate continuously between different angular momenta, then it becomes a question of definition.
If you start with an object with no angular momentum, and then you spin it up and give it some angular momentum, is it the same object rotating or is it a new object? Obviously it's a matter of definition, but in practice the real issue is how much energy it takes to set it into rotation. Now, you say: I can set it into an arbitrarily small amount of rotation, and so it takes an arbitrarily small amount of energy to start it rotating. But that's only true in classical mechanics, where you have a continuous interpolation between the thing not rotating and the thing rotating. In quantum mechanics, you don't have a continuous interpolation. And so I would say it's a matter of definition whether you want to think of a rotating nucleus as the same object or a discretely different object. But the real question, as I said, is how much energy it takes to start a system rotating. Given that the rotational states are discrete, it makes sense to ask how much energy it takes to set it into the first excited state, the lowest amount of rotation that you can get. If it's very small, if it only takes a little bit of energy, then the object is probably pretty recognizable as the same object as when it was at rest. But if it takes an enormous amount of energy, well, it could take so much energy that it would just blow the system apart. For example, an atom: if you try to set an atom into rotation with too much angular momentum, the electrons will just fly off and the atom won't even be there. So you certainly make things which are distinguishably different, and discretely different, when you set them into rotation. What about an electron? Forget for the moment that the electron has intrinsic spin. Can you set an electron into rotation so that it resembles the same object, except rotating about some axis? Or is the electron somehow so small and so point-like that it doesn't make any sense to set it into rotation? A thing which is infinitely small, a simple mathematical point, is hard to conceive of setting into rotation, at least with our usual mental pictures. But we don't know that the electron is infinitely small; maybe it's not. If it's not infinitely small, maybe we have a chance of seeing a rotating electron. So the question becomes: how much energy does it take to set an electron into rotation? Now, the answer tends to depend on the size of the object. For a given mass, it depends on the size. Surprisingly, or maybe it's not surprising, the smaller the object, the more energy it takes to set it into rotation. Big objects you can set into very, very small angular velocities; small objects take larger amounts of energy to excite to a given amount of rotation, a given amount of angular momentum. The electron may be so small, and it's probably known to be so small, that the energy it would take to give it one more unit of angular momentum would be astronomical, maybe the Planck energy or some humongously large energy. And so we don't see rotating electrons in the laboratory. In fact, a rotating electron may be so different from an ordinary electron that we wouldn't even call it an electron.
The upshot of this is that when we talk about objects in quantum mechanics, particles in particular, or nuclei, or any relatively simple systems, they have an amount of angular momentum which characterizes them and which is fixed once and for all. You don't talk about electrons with different spin angular momentum: the angular momentum of the electron is always the same, if it has any angular momentum at all. You say, why is that? Well, just because if it had more angular momentum, you'd call it a different object. Now, angular momentum, as I said, is a vector. We haven't defined it precisely yet; we will in a moment, but it's a vector. It has a length; the length of the vector is proportional to the speed of rotation and so forth. It can point in any direction, or at least classically it can point in any direction. Shall we think of pointing the angular momentum of an object in different directions as corresponding to different objects? Let's suppose we now do have an object which has some angular momentum. What shall we call it? I don't want to call it an electron. A spin-tron? Okay, a spin-tron. It's a spin-tron. We identify it as a spin-tron, and its angular momentum is pointing that way. Can the angular momentum point in another direction? Well, yes, it had better be able to, because the laws of physics are rotationally invariant. There's nothing special about one axis compared to another axis. So yes, the same object can be made to rotate in a different direction, even if it can't be made to rotate with more total angular momentum. So pointing it in different directions, we would say, corresponds to the same object. If an electron does have angular momentum, we should be able to think of that angular momentum as pointing in any direction. On the other hand, the amount of angular momentum, the magnitude of it, is quantized, and as I said, the gap to the first excited state of the electron may be so large that we wouldn't even call it an electron. Okay, so the angular momentum can point in any direction, it is quantized, and now we have to enter into the theory of angular momentum. What we're going to do tonight is the mathematics of angular momentum, and it's very magical. It's magical, and it seems totally abstract, totally unintuitive, and then, pop, out come experimental facts and predictions, and the entire properties of spin as an observational fact about particles. All right, let's... yeah, question? Do you know what the moment of inertia of an object is? Okay, let me answer the question. It's kind of intuitive; well, a little bit. The moment of inertia is a combination of the mass of the object and the radius of the object. We take a ball of some sort, all right? It's M R squared. Now, it depends on the detailed shape of the object and on how the matter is distributed, so sometimes there's a numerical factor in front, but it's of order M R squared, and it's called the moment of inertia, I. The energy of a rotating object, that's one thing. Then there is the angular momentum. The angular momentum of the object: if the outer boundary of the object, let's say, is moving with velocity V, then, roughly speaking, order of magnitude, the mass times the velocity is the ordinary momentum of a piece of it, and if you multiply that by r, that's the angular momentum. Momentum times distance is angular momentum.
So the momentum of a little piece of it times the distance from the center of it is the angular momentum. And if you write down the kinetic energy, the kinetic energy is one-half MV squared, what you'll discover is that it's the angular momentum squared divided by the moment of inertia, twice the moment of inertia, to be exact. So that's the kinetic energy of rotation is the angular momentum squared divided by twice the moment of inertia. Now, for a given mass, let's say the mass of the electron, the smaller it is, the smaller the moment of inertia, but the moment of inertia goes in the denominator, and that means the amount of energy that it takes to increase the angular momentum by one unit is inversely proportional to the square of the size of the object. So it's a classical mechanical fact. The only thing that comes new in quantum mechanics is that the angular momentum is discrete. For a given angular momentum, a smaller object costs more energy. That's a classical mechanical fact. Okay, let's do some of the mathematics. Tonight we're really going to do the mathematics of angular momentum. It's both totally unintuitive and simple. Simple enough, and this is a great example of where you see extremely abstract mathematics, which you can follow, suddenly popping out very unintuitive answers, and nevertheless, experimental answers. All right, let's begin. Well, first of all, let's begin with a single particle orbiting a center. Circular orbit for simplicity. The angular momentum is the momentum of the particle times the distance from the origin. Mass times velocity times distance, but we'll just call it momentum. That's the angular momentum in this very simple context. Is it positive or is it negative? That depends on the sense of rotation. Going this way, I think I'll call positive. Going that way, I'll call negative, but it's not very important. That's the angular momentum of a point object. Now, this object has angular momentum because it has momentum, and it has momentum because it has velocity. So this is not yet spin, but now supposing we had two particles at opposite ends of the diameter here, both going with the same sense of rotation. Now, the center of mass would be at rest. The center of mass of the system would be at rest, but the angular momentum would certainly not be zero. It would be the sum of the two angular momentum. They're going in the same direction. Here's an example where the center of mass of the system can be at rest, but there's still angular momentum. This is spin. Now let's get a little more refined in our precise definition of angular momentum. I said before, angular momentum is a vector, and that means, or that's denoted by putting a little arrow over the top of it, momentum is a vector, and also spatial location, here's the spatial location of an object. It is relative to an origin, relative to an origin. We can call it a vector. It's the radial vector. Let's talk about the components first before we get on to angular momentum. The components of the r vector are just the coordinates of the position of the particle. So the components of r, this r over here, let's write it over here, the components of r are just x, y, and z. x, y, and z being the coordinates of the position of the particle. And I'm also going to sometimes call them x1, x2, and x3. x3 being z, x2 being y, x1 being x. Okay, so we have these two vectors, and somehow the angular momentum is a product of the two vectors. The angular momentum is itself a vector. 
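Before building that product, one quick aside to make the earlier size argument quantitative (an order-of-magnitude estimate in the spirit of the discussion, not a formula from the board): with moment of inertia \(I \sim M R^2\) and angular momentum quantized in units of \(\hbar\), the energy of the first rotational excitation is roughly

\[
E_{\text{rot}} = \frac{L^2}{2I} \;\sim\; \frac{\hbar^2}{2 M R^2},
\]

which blows up as \(R \to 0\). That is why spinning a very small object up by one unit of angular momentum can cost an enormous amount of energy. Now, back to building the angular momentum vector out of r and p.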
How do you make a vector out of two vectors by multiplication? What kind of rule of multiplication takes two vectors and combines them together to make a new vector? That's right, there's only one such rule, and that's the cross product. And in fact there's an ambiguity: is it r cross p or p cross r? That's a question of whether you use the right-hand rule or the left-hand rule. The standard convention is that it's r cross p. Is that right-handed or left-handed? I think it's right-handed, but I don't remember; we could figure it out, but I don't want to. Anyway, it's r cross p. We labeled the coordinates of the r vector; there's also the p vector. The p vector also has components, and its components are px, py, and pz, or p1, p2, p3. That's the notation we'll use. Now I can ask: what are the components of the angular momentum? For that, all you have to know is how to build a cross product, and I'll assume everybody knows how to build a cross product, so let's just do it. Here's the rule. The x component of the cross product, the x component of the angular momentum, is the y component of position times the z component of momentum, minus the z component of position times the y component of momentum. This is the only one that I ever remember: Lx equals y pz minus z py. I read that off. The others I never remember directly, but I know the rule. The rule is that you just cycle from x to y to z, and back to x. Think of x, y, and z as points on a clock, and you go from one to the next: x goes to y, y goes to z, z goes to x. So the next one is Ly equals z px minus x pz, and the one after that is Lz equals x py minus y px. The only reason I emphasize the cycling is to keep track of the signs; how do I know that Ly isn't x pz minus z px? That's the rule: you cycle through that way. And those are the components of the angular momentum of a point particle moving in the vicinity of some origin of coordinates. It's the orbital angular momentum of that particle; it's not the spin angular momentum. To make a spin angular momentum for a system in ordinary classical mechanics, you've got to have a lot of particles, at least two anyway. I showed you how to make a spin angular momentum with two of them. You have to have several particles, two being several. How do you build the angular momentum of a composite system? You just add the angular momenta as vectors; you add the angular momenta of all the constituents. Once you add them up, then you can have spin, where the total momentum is equal to zero but the angular momentum is not equal to zero. But this is the basic formula for a single indivisible point particle, and as I said, you just add them up for a lot of particles. Next step: we want to do the quantum mechanics of angular momentum. This is the classical mechanics; actually, it's also the quantum mechanics. But to get to the quantum mechanics, the basic mathematics of quantum mechanics is the mathematics of operators. All quantities of physical significance, meaning to say things that you can observe and measure, are represented in quantum mechanics by what?
Operators. Yeah, Hermitian operators. And there's no special exception for angular momentum: the components of angular momentum are represented by operators. But to figure out what those operators are, all we really need to know is what the operators representing position and momentum are. In fact, we don't even really need to know very much about the detailed properties of the position and momentum operators. All we have to know is their commutation relations. With their commutation relations, that's all we need in order to carry on and work out everything. Okay, so what does it mean, what are the implications, of two operators commuting? Let's suppose I have two operators which represent two observable quantities, the A-ness and the B-ness of a system, and I know that A and B commute. What does that tell me about observation? Exactly: you can simultaneously measure them. What if they don't commute? Then you can't simultaneously measure them. The most famous example of two things that you cannot simultaneously measure is, of course, position and momentum. The implication is that in quantum mechanics the operators representing p1, p2, p3 on the one hand, and x1, x2, x3 on the other (where did I erase them?), don't all commute with each other. So what are the right commutation relations? Now, to some extent these are postulates of quantum mechanics. You find them in the first chapter of Dirac's book, and they're basically postulates, if you like; there are limitations on what you can write down, but we're going to take them as postulates. The first postulate is that you can simultaneously measure all three coordinates of position. There's no limitation on how well you can determine the x, y, and z coordinates of position; they're all simultaneously measurable. And the implication of that is that every x commutes with every other x. We can write that down: the commutator of xi with xj equals 0, for all i and j. Incidentally, anything commutes with itself. It would be very weird if something didn't commute with itself; that would say you could measure it, but you couldn't measure it simultaneously with itself. Bad idea. So everything commutes with itself. Same thing for momenta: there is no limitation on being able to simultaneously measure the different components of momentum, so the commutator of pi with pj equals 0. What about x's with p's? That's exactly what the uncertainty principle is about. It tells us that we cannot simultaneously measure coordinates and momenta. But which coordinates and which components can we not measure simultaneously? There's not a lot of freedom about this, but nevertheless I'm going to just state it as a postulate: x1 and p2 can be simultaneously measured. The x coordinate of a particle and the y component of its momentum are not limited by the uncertainty principle. It's the x component of position and the x component of momentum which are not simultaneously measurable. So we have that the commutator of x with px is not equal to 0. Anybody remember what it is equal to? i h bar. It's small. Why? Because we certainly don't expect macroscopic, big, heavy objects to behave in such bizarre ways. So it's small: one unit of angular momentum... sorry, one unit of Planck's constant. And so forth for y and z. And so we can write down a general formula: the commutator of xi with pj is i h bar if i equals j, and it's 0 if i doesn't equal j. Is that clear? So to represent that, we put the Kronecker delta here.
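Written out (a restatement of what was just said, nothing more), the postulated relations are

\[
[x_i, x_j] = 0,\qquad [p_i, p_j] = 0,\qquad [x_i, p_j] = i\hbar\,\delta_{ij},
\]

so different coordinates are compatible with each other, different momentum components are compatible with each other, and only a coordinate and its own conjugate momentum fail to commute.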
And that's a nice systematic way to describe the properties of position and momentum. That turns out to be all we really need to know in order to understand the properties of angular momentum, in order to understand the properties of spin. Let's see, there's one other point that is worth emphasizing: angular momentum has units of Planck's constant. How do I know that? Well, angular momentum has units of distance times momentum. Here's distance times momentum, and on the right-hand side is Planck's constant. It also occurs, of course, in the uncertainty principle: delta x times delta p is greater than or equal to some number times Planck's constant. So you see that Planck's constant has units of length times momentum, and angular momentum has units of length times momentum. So it's not completely surprising that angular momentum is quantized in units of Planck's constant, in units of the basic quantum in nature, which is Planck's constant. OK, but we haven't gotten there yet; I just pointed that out. What I want to work out for you is the commutation relations of the components of angular momentum. We're going to find that once we work out the commutation relations of the components of the angular momentum, we can forget this. We won't need it anymore. It's just the components of the angular momentum whose commutation relations we want. So let's do it over here. Let's first compute the commutator of Lx with Ly. It's not very hard. I will have to wing it a little bit, because I would have to remind you what all the algebraic rules of commutators are, but it's pretty simple; simple enough that you can see what happens. All right, so we want the commutator of Lx with Ly. Lx is y pz minus z py; that's the first entry in the commutator. And we're going to commute that with Ly, which is z px minus x pz. I think in the last 24 hours I've done this three times on the blackboard; the first two times were last night, when the 30 people who failed the course showed up late. You know, there's another cheat that you didn't use: you used the cycling from top to bottom. Yeah, but you can actually just cycle from left to right and forget about the other two equations: y goes to z, z goes to x. However you like. Yeah, that's fine. Right. OK, let's see if we can figure this out. Let's look at the commutator of this term of the first factor with this term of the second factor: y pz with x pz. When you think about commutators, you're thinking about pushing operators through each other. Can they freely pass through each other without changing the value of the operator? Now, y commutes with x, so it can go right past x; it also commutes with pz. And pz commutes with x, and pz commutes with pz. So you can push this one right through this one: no obstruction. These two commute with each other and contribute nothing to the commutator. There's another combination like that; is it this one, with z? Yes, these two also commute with each other. The two pairs which don't commute with each other are this with this, and this with this. So let's see if we can work it out. We have y pz with z px. The only thing which doesn't commute here is pz with z. So I maintain that the answer is just y times px, and it doesn't matter which order you write them in, y times px, and then there's the commutator of pz with z.
What's the commutator of pz with z? Minus i h bar. Why is it minus i h bar? Because commutators in the opposite order change sign. The commutator is antisymmetric, meaning that when you change the order of what you write in the bracket, it changes sign: AB minus BA, if you interchange A with B, just changes sign. All right, so this commutator of pz with z is minus i h bar. Then we have this guy over here with this one over here. What is it that doesn't commute here? py and x commute, so, minus times minus being plus, we get py times x, and then we have the commutator of z with pz, and that's i h bar. So i h bar; another i h bar. Or in other words... sorry, what did I write here? p sub y. x times p sub y, right. And again, in each of these expressions it doesn't matter which way you order the two constituent operators. OK, the whole thing then is i h bar times x py minus y px. A bunch of mumbo jumbo, but nevertheless it's very effective: x py minus y px is Lz. So we've learned that the commutator of Lx with Ly is just i h bar Lz. That's kind of fitting: when you commute two components of the angular momentum, you get back another component of angular momentum. You don't have to remember anything about p's and x's anymore. The commutation relations of angular momentum are closed among themselves, and the algebra, as it's called, the algebra of angular momentum, is something abstract which doesn't even really remember where it came from, meaning that it doesn't contain the p's and x's anymore. That's Lx with Ly. Then, cycling vertically or horizontally, it doesn't matter, you can write down the next one, Ly with Lz is i h bar Lx, and the next one, Lz with Lx is i h bar Ly. OK, those are the fundamental relationships on which the theory of angular momentum is built. Now, the next fact, which I will just tell you, is that if you have many particles and you add up the angular momenta and then commute them, you find exactly the same relationship. It doesn't matter that this was a single particle: it could be many particles, and it could be many particles in their own rest frame, in the rest frame of the center of mass. So these will also be the commutation relations for the components of spin angular momentum. So these, as I said, are our starting point.
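The three relations can be packaged in one line (an equivalent restatement using the antisymmetric epsilon symbol, which is not used in the lecture itself):

\[
[L_x, L_y] = i\hbar L_z,\qquad
[L_y, L_z] = i\hbar L_x,\qquad
[L_z, L_x] = i\hbar L_y,
\qquad\text{i.e.}\qquad
[L_i, L_j] = i\hbar\,\epsilon_{ijk} L_k .
\]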
You might ask, is there anything special about the choice of axes? I chose the x, y, and z axes, but I never told you what x, y, and z were. They could be the edges of the room, or, if you wanted, some other funny diagonal directions. As long as they're mutually perpendicular, they form a perfectly good Cartesian coordinate system. Given the components of a vector in one frame of reference, you can compute them in any other frame of reference. So in principle we could work out from this the commutation relations of the components of angular momentum along some other set of axes, and what we would find is that they have exactly the same form. They would have exactly the same form if I chose primed axes which are different; we would just put some primes in here. In other words, the commutation relations of angular momentum are rotationally invariant: they take the same form in every frame of reference. And that guarantees that whatever theory of angular momentum we're making, the observable facts will be independent of which axes we choose to do the mathematics in. Is it independent of translation as well? It is; the spin angular momentum is independent of translation as well. Okay, next step. Now we're going to be doing abstract algebra; okay, magic. Am I going too far? Well, so far we didn't get anything out of it. It only becomes magic when you get something. You can write down an infinite number of formulas; you can plug these rules into a computer and have it grind out all possible theorems (an ape could do that, right?), and mostly it will grind out uninteresting theorems. It becomes magic when all of a sudden you look back and say, wow, that's an experimental fact that I can confirm, or that's telling me something interesting about the physics and the operational process of measurement. So we'll get there. But we're still just doing abstract mathematics for the moment. Now, of course, I know where I'm going; you don't know where I'm going, so you just have to follow. But after you see it, you'll know, and then you can do it yourself. Okay, yes? What if you chose x, y, and z axes spinning along with the particle? No, that we don't want to do. We want to use inertial reference frames, and spinning reference frames are not inertial. Nevertheless, I will tell you, it's still true, but we don't want to get into that at the moment. Certainly the angular momentum is not invariant under going to rotating frames of reference. If we had a system which was spinning, which had some spin, and we went to a rotating frame of reference which spun with it, then it wouldn't look like it had any angular momentum. So it's clear that we don't want to go to rotating frames of reference, or at least the angular momentum won't be invariant. So: inertial frames of reference, reference frames which have no centrifugal forces, no Coriolis forces, and so forth. Now we're going to invent two new operators which are very simply related to these, as simple as possible. They're called L plus and L minus. Let's write them down: L plus is equal to Lx plus i Ly, and L minus is equal to Lx minus i Ly. Now, I'll tell you right away what these are. These are raising and lowering operators, analogous to the harmonic oscillator creation and annihilation operators. They take the angular momentum along a given axis, namely the z-axis, and bump you up a step or bump you down a step. We're going to show that. Another definition: the measurable values of Lz. We're going to be concentrating on Lz; there's nothing special about the z-axis, but on the other hand there's nothing to forbid me from focusing on it. I wish to choose an axis to work with, and I'm going to choose the z-axis. And I'm going to call the eigenvalues of Lz, the measurable values that it can take on, M times h bar. h bar is just a number, and if we set h bar equal to 1, which I think I will do now because I will never, on a blackboard in real time, be able to keep track of all the h bars (later on we can put them back), then Lz is just called M. That's a historical notation. M, as I said, is the angular momentum in units of h bar, and it will turn out to be a quantized variable. It's called M for historical reasons that I don't know, and that's the notation we'll use.
What I want to calculate: I'm not interested, for our purposes, in the commutator of Lx with L plus or L minus; I'll tell you, if you want to work it out, it's proportional to Lz, but that's not our interest right now. I want to work out the commutator of L plus with Lz. Why Lz? You'll see as we go along. So we want L plus with Lz, and also L minus with Lz, but let's do L plus with Lz. This is equal to the commutator of Lx plus i Ly with Lz. Well, we have all the rules; this is easy. Here they are. The commutator of Lx with Lz, can somebody read it off for me? Minus i Ly. We're setting h bar equal to one for the moment, right? So Lx with Lz, do I believe you? I do: minus i Ly. And what about Ly with Lz? There's a plus i out in front, and Ly with Lz, can somebody read it off? It's i Lx. So that piece is i squared times Lx, and i squared is just minus one. Altogether this is just equal to minus the quantity Lx plus i Ly, in other words minus L plus. So we have our first commutation relation: the commutator of L plus with Lz is just minus L plus. What about L minus with Lz? I won't drag you through it; I'll just write down the answer. The answer is plus L minus. When you commute L plus with Lz, you get back L plus; when you commute L minus with Lz, you get back L minus, with a sign that depends on whether it's L plus or L minus. These are the basic commutation relations that we'll be using.
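Collecting the definitions and the two results just computed (with \(\hbar\) set to 1, as in the lecture):

\[
L_\pm = L_x \pm i L_y,\qquad
[L_+, L_z] = -L_+,\qquad
[L_-, L_z] = +L_-,
\qquad\text{equivalently}\qquad
[L_z, L_\pm] = \pm L_\pm .
\]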
Okay, the next step is that I want to show that L plus and L minus are bumping operators which bump you up and down, like raising and lowering operators, like creation and annihilation operators for harmonic oscillators: that when L plus and L minus act, they change the z component of angular momentum. We've got to focus. Remember, we can only measure one component of the angular momentum at a time. Oh, I didn't say that, did I? We should say it right now. How many of the components of angular momentum can we simultaneously measure? Just one. Any two of them don't commute, and so at most we can measure one component of angular momentum at a time. We pick one direction, we call it the quantization axis (but there's nothing special about it), and we focus on it. And we are interested now in the possible values that Lz can take on; in other words, what are the possible values of m? What we're going to do now is show that L plus bumps you up in m, increases the eigenvalue m by one unit. So let's see if we can do that. Let's suppose we have a state, call it m, such that Lz acting on m is equal to what? m times m. In other words, little m is an eigenvalue of Lz with eigenvector, ket vector, m. That's what it means to say that m is an eigenvalue. We've been through this a number of times. What we want to prove is that if you act with L plus on m, that's a new vector; let's put a red box around it, it's a new ket vector, and what I want to prove is that it's an eigenvector of Lz with eigenvalue m plus 1. So let's do it. All we have to do is hit this with Lz and see what we get. What should we get if it really is an eigenvector? We should get m plus 1 times the same thing in the red box. This would be easy to compute if Lz commuted with L plus: then we would just push the Lz through and use the fact that Lz on m just gives us m. That would be easy. But it's not that easy; we can't push Lz through L plus. Why? Because they don't commute. But if we use the commutation relations, we can. Let's write out what the commutation relation means. It means L plus Lz minus Lz L plus equals minus L plus. I've written it out in excruciating detail. The thing that I'm interested in is Lz times L plus, so let's put that on one side of the equation and write: L plus Lz plus L plus equals Lz L plus. That's what I want, Lz L plus, with Lz on the left and L plus on the right; here I have it on the left side of the equation. Is it useful? We're going to find out. Acting on m, this is equal to L plus Lz acting on m, plus L plus acting on m. I'll stop here if there are any questions. I think I've been clear, but if not... Okay, but now it's easy. What is Lz when it acts on m? It just gets replaced by m, right? So let's take the m out and put it over here; that takes care of this Lz, and the first term is m times L plus acting on m. The second term is just L plus acting on m. Put a red box around L plus acting on m: the same red box, the same thing in the red box, appears on each side, and altogether Lz on the red box gives m plus 1 times the red box. So what's in the red box has the property that when Lz hits it, it multiplies it by m plus 1. The red box is an eigenvector of Lz with eigenvalue m plus 1. That's a little bit of magic. And what it tells us is that if we have a state with a given value of Lz, there must be another state with one additional unit of Lz. In other words, you can bump up Lz by one unit. You can bump down Lz the same way, except using the lowering operator: you can check that L minus on m gives you an eigenvector with eigenvalue m minus 1.
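The chain of steps in one line (just collecting what was done above, with \(\hbar = 1\)): starting from \(L_z|m\rangle = m|m\rangle\) and \(L_z L_+ = L_+ L_z + L_+\),

\[
L_z\,\bigl(L_+|m\rangle\bigr) \;=\; \bigl(L_+ L_z + L_+\bigr)|m\rangle \;=\; (m+1)\,\bigl(L_+|m\rangle\bigr),
\]

so \(L_+|m\rangle\) is either an eigenvector of \(L_z\) with eigenvalue \(m+1\) or the zero vector, a loophole that will matter in a moment. The same argument with \(L_-\) lowers \(m\) by one.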
So now let's think about what we've learned; we've learned something rather nontrivial. We've learned that the spectrum of the z-component of angular momentum is spaced by integers. Here's the spectrum of angular momentum, and whatever it is, it's spaced by integers. I don't know yet what the spectrum of possible values is. It could be pi, pi plus 1, pi plus 2, pi minus 1, pi minus 2, or it could be the square root of 2, the square root of 2 plus 1, the square root of 2 plus 2; whatever they are, they're gapped by integers. We still don't know what any one of them is. How do we know they're integers? Yeah, we'll come to that. Okay, now, next question: can it go on and on forever? Now remember, we're talking about a particular particle, some kind of particle, not the whole world. We're just talking about some particular elementary particle, or say a neutron, or even a nucleus. How much can we bump up the z component of angular momentum before it's not the same object anymore, before we simply run out of possible values? So we're talking about an object. Let's think classically for a minute. It's rotating about some axis, and we're not allowed to increase the rate of rotation; that would give us a new object, a different object. But we are allowed to rotate the angular momentum. We're allowed to rotate the angular momentum, but we're not allowed to increase its amount without changing the whole nature of the object. Okay. So another way of saying it is that there's a given length, a given value of L squared. The magnitude of L is fixed, and it characterizes the particle. Just as a particle is characterized by mass, it's also characterized by the length of the angular momentum vector, or better yet, the square of the length of the angular momentum vector. Okay. How do we maximize the z-component of angular momentum? We simply point the object upward, we point the spin axis upward, and that's going to be the maximal z-component of the angular momentum. So from an intuitive point of view, there ought to be a maximum value for the z component of angular momentum of a specific particle, a cutoff, and after that, nothing. Well, how can that be? We've already proved that by acting with L plus we bump up the angular momentum. How can there possibly be a ceiling? The answer is exactly the same as for the harmonic oscillator creation and annihilation operators. Remember what happens when the annihilation operator hits the bottom state, the state of lowest energy: it just gives you zero. That's the one possibility we left out here, that perhaps there are states, or a state, where L plus gives zero. That would have to be the case if there is a maximum angular momentum. So if there is such a thing as a maximum, then for every given object, L plus acting on (let's call it m max) must equal zero. So there's a cutoff; it can't go past a certain point. What about the minimum value of the angular momentum? Minimum means the one deepest down in the cellar. It had better just be the negative of the maximum. Why must it be the negative? Why couldn't it be something else? That's just rotational invariance. If you can build a system with an angular momentum along a given axis of a certain value, you must be able, by rotational invariance, by every axis being equivalent to every other axis, to rotate that angular momentum and have exactly the same value, except pointing in the opposite direction. So the bottom, m min, must of course be equal to minus m max, m min being negative. And from m max and m min we can bump up or down by integers. That now fixes for us some properties of the spectrum: it's got to be symmetrically located relative to zero, and it's got to be gapped, spaced, by integers. There are only two possibilities. Think about it for a few minutes; well, there are many possibilities, but they come in two families. In the first family, zero is a possible value of M (this is M equals zero), and then M equals one, M equals minus one, and so forth, until you get to m max at the top and m min, which is minus m max, at the bottom. How many such states are there altogether? Two m max plus one. Yeah, two m max plus one: m max of them up here, m max of them down here, and then one more at the origin. Twice m max plus one. That's the number of possible states. Now, I haven't told you what m max is, and that's because m max could be anything, depending on the specific particle or the specific object we're talking about. So specific particles have their own value of m max, and that m max is called the spin of the particle.
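In symbols, the two conditions that pin down the spectrum (a restatement of the argument just made, with \(\hbar = 1\)):

\[
L_+|m_{\max}\rangle = 0,\qquad
L_-|m_{\min}\rangle = 0,\qquad
m_{\min} = -\,m_{\max},
\]

and since neighboring allowed values of \(m\) differ by exactly 1, the spectrum is \(m = -m_{\max},\, -m_{\max}+1,\, \dots,\, +m_{\max}\), which is \(2\,m_{\max} + 1\) states in all.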
For example, there are particles that have no spin at all. Then the spectrum of angular momentum is just M equals zero and nothing else: when you hit it with a raising operator you get zero, and when you hit it with a lowering operator you get zero. That's spin zero. Then there's the possibility of spin one. Spin one is the situation where M equals one is the ceiling and M equals minus one is the floor. It can't go below the floor, it can't go above the ceiling. That's spin one, a spin one particle. You can have spin two particles, you can have spin three particles, and so forth. Some examples: the Higgs boson is a spin zero particle. The photon is a spin one particle, and the Z boson is a spin one particle. Are there spin two particles in nature? Well, the graviton is a spin two particle. How about spin three particles? No object that is ordinarily called an elementary particle has spin three, but there are certainly objects with spin three. There are nuclei with spin three, there are basketballs with spin three, there are galaxies with spin three; they're not hard to find. But among the things that we ordinarily call the elementary particles of nature: spin zero, spin one, spin two. Now, that's not the whole story, because I missed a possibility. And the possibility is that M equals zero is not in the spectrum. There is only one other kind of spectrum which is symmetrical about zero and which is gapped, spaced, by integers. Let's put zero over here. It's to start at M equals a half and M equals minus a half, and then go upward in units of one. So this one would be three halves, this one would be minus three halves, and so forth, until you come to the max. In this case, all the eigenvalues are half integers. The simplest example is an object which only has z components equal to a half and minus a half; that's the simplest object with the half-integer kind of spectrum, and we call it a spin one-half particle. Again, you describe the particle by a spin equal to the maximum value. The next one would be called a spin three-halves particle, and then a spin five-halves particle. And particles come in these two types: integer spin and half-integer spin. Is that a question? No? Half-integer spin and integer spin. Yeah? Is there a maximum spin for the half-integer particles as well? Yeah; the only difference is that the maximum is going to be a half integer rather than an integer. A spin three-halves particle has a maximum of three halves; a spin five-halves particle has a maximum of five halves. Oh, yeah: in all speculations about elementary particles, basically a combination of Lorentz invariance, quantum mechanics, and a few other things tells you that there are no elementary particles with spin higher than two. That allows zero, one, and two, and it allows a half and three halves. So all the standard particles of elementary particle physics are zero, a half, one, three halves, and two. But you can build composite objects. Yeah? Your line there was for negative spin values. Well, negative values just mean the spin rotated around so that it's pointing down, and we characterize the particle by the maximum value, spin meaning the maximum positive value. So let me go on from here. Why are there only, say, two orientations? Who said there are two? The spin zero particle has one; the spin one particle has three. I know what you're asking: you're asking what happens to the directions in between.
You're asking, yeah, okay, we're going to talk about that. That's where quantum, that's quantum mechanics in your face. Okay, we'll have to add one to that. Yeah, we'll come to that. That's not all of the topic at all. Okay, and when we add to that, what do you mean by like a corkscrew wave? A corkscrew motion is a motion which is moving and rotating. It's a wave versus a wave. An electromagnetic wave can be a corkscrew wave? Yeah. Meaning circular polarization. Circularly polarized waves. Yeah, currently all our waves, all our math, they're in classical waveforms. Yeah, yeah, yeah, but nevertheless, a circularly polarized photon is one with a spin angular momentum along the axis going exactly the direction that you can. A circularly polarized wave is composed of circularly polarized photons, and a circularly polarized photon means one whose angular momentum along that axis is pointing along that axis. But to talk about objects at rest, which is what we're talking about now, it doesn't make sense to talk about a corkscrew motion because corkscrew really has to do with a simultaneous rotation and motion along an axis. So at the moment, the object you're hunting for, the mathematical term, is helicity. Helicity has got to do with the orientation of the spin relative to the direction of motion. We'll come to that very, very important. Is there an elementary particle that we know that is 3 halves spin? It depends on whether we're... Define the term no. No electron. That we measure or that we have 100 or... 100 or 100s, I guess. It's a matter of speculation. Speculation is to... wrong word, conjecture. It's a matter of conjecture that there exists a supersymmetric partner of the graviton, which is called the gravitino, which has spin 3 halves. That has not and will never be experimentally detected. Therefore, we're saying the coin is a... If you use L-School... I have a question. The... Directly. Directly. At this point, the Ls are no longer acting like their original definition because if you just plug their... their original definition in, you wouldn't get these limits. So, there must be a difference. It looks like there's no longer acting on the original space of psi functions that we had. Somehow, there's a new... Psi functions. The original space of cats. No. We didn't know what the original space of cats was. We didn't know what the original space was. Okay, bro. If you say... What we're deducing is the properties of that space. Okay. That's what we're doing. We didn't know what the original space was. It's not the space that you usually take in elementary quantum cats. No. No. This is something new. So, I think... This is spin space. You're going to tell us what the space is if you come... Oh, it is the space of these integers. That's it. By a particle of spin M, the space is the basis vectors. You specify what a state is by saying what its basis vectors are. Its basis vectors are simply labeled with the values of M along its answers. That's it. That's... Yeah. If you took the absolute value of L, that vector, is that an observable quantity? Would you get it back? It is. So, let's... All right. So, let me tell you what's true. Okay. Instead of the absolute value, let's take the sums of the squares of components and see what we can find out about that. We're going to try to do it without my notes. Yeah. Let's take L squared. You might think that L squared is just N max squared. Yeah. That would make sense, right? You take the spin and you point it directly upward. 
The problem with that is that you can't know the z component and the x and y component simultaneously. So, if you know the z component that's vertically upward and max, there's bound to be some uncertainty in the x and y component. All right. So, that uncertainty in the x and y component will tell you that Lx squared plus Ly squared plus Lz squared will not quite be N max squared. All right. It'll have a little extra. So, let's compute it. That's a fun thing to compute. Let's see. I was going to go without my notes, but there's always one thing which I have to go through to remember. Yeah. Okay. Let's take L squared. L squared means Lx squared plus Ly squared plus Lz squared. And, of course, the name of the game here is to rewrite things in terms of L plus L minus Lz. L plus L minus Lz are... So, what is L squared? Let's start with L... All right. So, what would we do to get L squared? We would add Lz squared, and that would be it, right? Classically. But not quite true quantum mechanically because this is extra commutator term. The extra commutator term, L plus L minus is not equal exactly to Lx squared plus Ly squared. It's, in fact, Lx squared plus Ly squared plus a term coming from the commutator of Lx with L... Lx with Ly. So, let me write down what that extra term is. What's the commutator of Lx with Ly? I. I. I. It's got to do with Lz. There's an I over here, but the commutator also has an I in it. So, the I's are going to cancel. And, if you work it out, you'll find out that it's just plus L... Let's see, that L plus L minus is Lx squared plus Ly squared. I think it's minus Lz. And that means that L squared has another term in it which is just plus Lz. It's definitely plus over here. I think it's minus over here. Just an extra term coming from that commutator. The little mistake we made when we said that L... That L plus times L minus is Lx squared plus Ly squared. We made a little mistake and a little mistake is proportional to Lz. And here's how it comes in. Okay. Now, let's take L squared and operate on a state which is m max. Which has the maximum eigenvalue of m. What do we get? L squared on that is equal to L minus L plus on m max. Plus... What do I get when Lz hits m max? M max. So here we get plus n max squared plus n max times the ket vector m max. What about this? I went through all this effort but I'm still left with something I don't know what it is. But I do know what it is because L plus when it hits m max has nowhere to go. Alright, we've reached the ceiling. This is zero. So what have we found out? We found out that the eigenvalue of L squared for this state up here is just m max squared plus m max. In other words, m max times m max plus one. m max times m max plus one. Not m max squared. m max times m max squared plus one. Classically, it would just be m max squared, the maximum value. But because of the fluctuation, because of the uncertainty principle, which says that things which don't commute can't simultaneously be specified, there's got to be some fluctuation in them, and that fluctuation manifests itself by a little shift here. Okay, so let's take the case, for example, of a spin zero particle. What is L squared for a spin zero particle? Wow, that's not interesting. So if max is zero in that case, this is zero. Good. What about a spin one particle? Turns out to be two. What about a spin a half particle? A half times three halves is three quarters. So for a spin, what about a spin thousand particle? Classical. Yeah, that's right. 
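Here is a small numerical check of the computation above, again with the spin-one matrices and hbar = 1 (my illustration, not the lecture's): the identity L squared = L minus L plus + Lz squared + Lz, the fact that the top state has L squared eigenvalue m max times m max plus one rather than m max squared, and, anticipating the next point, that L squared commutes with Lz.

```python
import numpy as np

# Spin-1 matrices, hbar = 1, basis (|m=+1>, |m=0>, |m=-1>).
Lz = np.diag([1.0, 0.0, -1.0])
Lplus = np.sqrt(2.0) * np.array([[0., 1., 0.],
                                 [0., 0., 1.],
                                 [0., 0., 0.]])
Lminus = Lplus.T
Lx = (Lplus + Lminus) / 2.0
Ly = (Lplus - Lminus) / 2.0j

L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz

# The identity used above: L^2 = L- L+ + Lz^2 + Lz
print(np.allclose(L2, Lminus @ Lplus + Lz @ Lz + Lz))   # True

# The top state |m=+1> has L^2 eigenvalue m_max(m_max+1) = 2, not m_max^2 = 1.
top = np.array([1.0, 0.0, 0.0])
print(L2 @ top)                                          # -> 2 * top

# L^2 commutes with Lz, so both can be known simultaneously.
print(np.allclose(L2 @ Lz, Lz @ L2))                     # True
```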
If m max is big enough, this little shift is not important. If we take a spin of 100 billion billion billion, we're just going to get the square of it plus a tiny irrelevant little change. So classically, it's a very good approximation that L squared is just m max squared. Now, if you work it out, and you can do this, you can work out L squared on each one of these states, and you'll get the same value. All of these states have the same value of L squared. That is the quantum mechanical analog of the statement that when you rotate the particle to change the z-component of angular momentum, it doesn't change the square of it. It doesn't change the magnitude of it. So that's a little exercise. You can work it out using, again, just the commutation relations of the angular momentum. Can you know the square of the angular momentum and the z-component simultaneously? Yes. You can, and you can check that by computing the commutator of L squared with the individual components of the angular momentum. What you'll find is that the commutator of L squared with any one of the components is equal to zero. So that means, although you cannot know two components of the angular momentum simultaneously, you can know a component and the total square value of the angular momentum. Every electron has a squared spin angular momentum equal to three-quarters. Every electron has the same value, and it doesn't matter which way the electron is oriented. Did you say that for spin one the L squared is going to be zero for all of them, or just for the zero case? No. L squared is zero for the spin zero case. Okay. The spin zero particle only has one state. It's got angular momentum zero. And for spin one, the L squared... the L squared is two, I think. One times one plus one. Right. For a spin three, what is it? L times L plus one is three times four. Twelve. You said that... I'm trying to see how it's always the same no matter which level you pick. Oh, that you have to work out. So how do you work it out? You work it out by saying, let me take m max and now hit it with L minus. This gives me the next one down. And then hit it with L squared and use the tricks, the algebraic tricks. And what you find is that when you hit it with L squared, it just gives you the same eigenvalue. The same as... So the algebraic statement is that L squared on the thing in the red box gives you back the thing in the red box, the vector that you're looking at, and this is equal to m max times m max plus one times the thing in the red box. In other words, L squared has exactly the same eigenvalue as it had for m max itself. Yeah, and you get that from the formula over there. You get that from manipulating the operators. You use this formula here and manipulate it and fiddle around with it, and yeah, it takes a few minutes to work out. It also follows from the fact that L squared commutes with L plus and L minus. It commutes with all the components of the angular momentum, so it commutes with L plus and L minus, and it just follows from that. So, yeah, they all have the same squared angular momentum. Now, let's talk about rotational invariance, though not in detail. I'm not going to talk about it in detail. That'll be for another time. But we seem to have nothing but some discrete states of angular momentum along the z-axis. What about rotating that? If we rotated it, wouldn't we get angular momentum which had fractional value along the z-axis and maybe some components in the other directions?
So, for that, let's concentrate for simplicity on the spin-a-half case. In the spin-a-half case, the spin-a-half particle, the spin-angular momentum has only two states. It's not true that an electron has only two states. It has an orbital motion and it has a spin-angular momentum. Let's just concentrate on the spin and nothing else. Then, concentrating on the spin, there's only two possibilities. Up and down, half spin up, half spin down. Let's call them, I don't like using Riff or Equation. Let's call them up, plus, and down, minus. These are the two basis states of the spin-angular momentum along the z-axis. But there are certainly many more states that I can write down. These are not the only states I can write. In fact, the general quantum state of a system with two states like this is to add them with complex numbers. Alpha and beta are complex numbers. What is the probability in such a state that the spin is alpha? Alpha star alpha. Probability for alpha equals alpha star alpha and the probability for down is equal to beta star beta. We add them together. Total probability has to be one. And so one of the rules is alpha star alpha plus beta star beta equals one. One other fact that if I were to make a phase rotation of each of these complex numbers, in other words, if I was to multiply alpha and beta by the same phase, e to the i theta, e to the i theta, that does not change the physical character of the state. And the reason is because all interesting quantities are things times complex conjugates. These will cancel out of any interesting physical quantity. And so this one, well, the way to say it is there's one degree of freedom, namely the overall phase of alpha and beta, which is unphysical and which doesn't matter, which is irreverent. Okay, let's count now the number of variables, the number of parameters that it takes to specify a quantum state of the spin of an electron. Alpha is a complex number. It has two real components. Beta is a complex number. It has two real components. So far we have four real components. But we have a constraint. Alpha star alpha plus beta star beta is one, so that means only three real components, the number of independent variables. On the other hand, one combination, the overall phase, is unphysical. Let's just eliminate it some one way or another. That brings us down to only two independent variables specifying a quantum state. Two independent variables specify the quantum state. Now, how many independent variables if an electron were a little arrow of a given length? It's angular momentum vector. How many variables does it take to specify the orientation of that angular momentum? Two. The polar and azimuthal angle of the sphere describing the end of the vector. They're the same thing. You rotate around the spin of the electron by varying alpha and beta, but no matter which axis you measure the angular momentum, it's always an integer. That is to say the actual measure of value is an integer. What about the average value? The average value be, I'm sorry, it's not an integer. It's a half integer. It's half or minus a half. Can you get anything but plus a half or minus a half for the average value? The average value. So what does the average value mean? What does the average value mean? The average value means you take repeated identical experiments where in each experiment the electron was prepared with exactly the same quantum state, and in each case you measure the z-component for the angular momentum. 
Sometimes you get plus a half, sometimes you get minus a half. The average is to take the ensemble of all of them together, add up the z-components of angular momentum, and divide by the number of experiments that you did. Just the average angular momentum. That can certainly be anything; it does not have to be plus or minus a half. For example, if we just chose alpha equals one and beta equals one, what's the average z-component of the angular momentum? Zero. In fact, this happens to correspond to the spin of the electron pointing along the x-axis instead of along the z-axis. Up along the x-axis. It's been rotated by 90 degrees. Nevertheless, when you measure it along the z-axis, you'll get plus a half or minus a half. But the average is zero. Because it's the averages which behave like the classical variables. If you take a spin up and you rotate it by 90 degrees, the average spin along that vertical axis will be zero. It's the averages which behave classically. This is a spin along the x-axis. What's a spin along the y-axis? Anybody remember? What is that? No, no, I guess that was the wrong guess. Here's the z-axis, here's the x-axis. This corresponds to a spin which is pointing along the x-axis in this direction, with a plus sign. With a minus sign, it corresponds to a spin pointing still along the x-axis, but in the opposite direction. You can't find the one which points along the y-axis this way; you need imaginary numbers. i and minus i correspond to orientation along the y-axis. And with other values of alpha and beta, you can make spins which are pointing along an arbitrary axis. Pointing along an arbitrary axis means that if you measure the component along that axis, you'll always get the same answer. That's pretty much spin in a nutshell. And as I said, the electric charge, the mass, and the spin of a particle are its most important properties. Now there's a correlation which I should not only mention but emphasize very strongly, because it's one of the prime facts of elementary particle physics. It's actually a theorem of relativistic quantum mechanics, but we're not going to try to prove it. And the theorem is a correlation between the value of the spin of a particle and whether it's a fermion or a boson. Remember, fermions are the ones that you can't put two of into the same state; they obey the Pauli exclusion principle. Bosons are the ones that you can put into the same state. They behave like photons, and they make classical waves. Half-spin particles are always fermions, without exception. By half-spin, I now mean one half, three halves, five halves, anything which is measured in half units, where the spectrum is half units instead of whole units. Those are all fermions, and all particles which have an integer spectrum like this are bosons. A spin zero particle is always a boson, without exception. A spin two particle is always a boson. So what happens if you take two half-spin particles? Take two half-spin particles, one of which happens to have one value of m and the other another value of m, each one being a half unit. It could be three halves and minus seven halves or whatever. What happens if you add up their angular momentum? You get an integer, not a half-integer. That tells you that objects which are made up out of an even number of fermions are always bosons. Any object made of an even number of fermions will have an integer spin. Because it has an integer spin, it's a boson. What happens if you take a boson and add it to a fermion?
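To make the two-component bookkeeping described above concrete, here is a short sketch I am adding, assuming the standard Pauli matrices and hbar = 1, so that the spin operators are sigma over two: the state with alpha equal to beta has average z-spin zero but is a definite plus-one-half eigenstate along x, a relative phase of i points it along y, and an overall phase changes nothing observable.

```python
import numpy as np

# Pauli matrices; the spin operators are sigma/2 in units of hbar = 1.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def average(state, pauli):
    """Expectation value of the spin component pauli/2 in a (normalized) 2-component state."""
    state = state / np.linalg.norm(state)
    return (state.conj() @ (pauli / 2) @ state).real

up_x = np.array([1, 1]) / np.sqrt(2)     # alpha = beta = 1/sqrt(2)
up_y = np.array([1, 1j]) / np.sqrt(2)    # relative phase i

print(average(up_x, sz))   # 0.0 : the average z-component vanishes
print(average(up_x, sx))   # 0.5 : but it is definitely spin +1/2 along x
print(average(up_y, sy))   # 0.5 : (1, i)/sqrt(2) points along y

# An overall phase changes nothing observable:
print(average(np.exp(1j * 0.7) * up_x, sz))   # still 0.0
```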
Let me give an example of putting these together. An example would be a hydrogen atom. A proton is a half-spin particle, just like an electron. Half, not three halves, not five halves, a half. The proton has a half spin just like the electron. You take a proton and an electron, both fermions, you put the electron in orbit, and you create a hydrogen atom. The hydrogen atom becomes a boson. This is not the heavier hydrogen isotope; this is the hydrogen atom with just a single proton as its nucleus. A deuteron is a proton bound to a neutron. The neutron is also a half-spin particle. A deuteron is a boson: half spin and another half spin. What about deuterium? Deuterium is a proton and a neutron that form a nucleus, with an electron in orbit around it. What are the possible values of the z-component of spin going to be? It's basically three half-integers added up. Three half-integers will always be a half-integer again. So a boson together with a fermion is a fermion. A boson together with a boson is a boson. And a fermion together with a fermion is a boson. Deuterium is a fermion. Hydrogen with only one proton is a boson. I think I got it right. Incidentally, particles and antiparticles always have the same spin, the same mass, and opposite charge. What about positronium? Positronium is an electron and a positron orbiting each other. What would that be? A boson. Bosons have the property of being able to go through things? Bosons have the property of being able to be in the same state. I heard superconductivity is pairs of electrons together and they act like a boson, and therefore it's sort of transparent or something like that. Before you can understand the superconductor, you have to understand the superfluid. A superfluid, for example: helium forms a superfluid. Let's talk about helium. Is helium a boson or a fermion? A boson. Why is that? It's got two protons, two neutrons and two electrons. Two protons and two neutrons make a helium nucleus, an alpha particle, plus two electrons. That's a boson. So whatever bosons are, helium is a good example of them. Helium atoms can all go into the same state. Basically a superfluid of helium atoms is almost like a classical wave that you make by piling them all up in the same state. That's not quite right, but it's close enough for us. They all move together in the same state. Now, an electron in a metal is a fermion. You can't put two electrons into the same state. But how about pairs of electrons? Here's a paradox. Okay, here's a paradox, which we talked about last night, but we'll talk about it again tonight. You take two fermions and you put them together and make a boson. Okay? How can it be that you can put two of those composites into the same state when you couldn't put the constituents into the same state? Does that make sense? Does it or doesn't it? No? No? Okay, I'll tell you what doesn't make sense. I'll tell you exactly what it means. You know, let's do that next time. I think I'm running out of steam. And remind me, remind me about this. This is so interesting that it's worth ten minutes on: how, when two fermions make a boson, you can put the bosons into the same state even though the fermions can't be put into the same state. We should get back to that. I'm reaching that point in the evening where I'll get confused if I try it. Since protons and neutrons are composite, why is it that they're half spin? No, no, they're composites of odd numbers of fermions. Protons consist of quarks. A quark is a half-spin particle.
So it's going to be a fermion. Three fermions make a fermion. All right, just the last observation has to do with the Pauli exclusion principle. It all comes, it was all discovered from staring at the periodic table long enough. Pauli said that you can't put more than one electron into the same orbital, orbital in an atom. No, he didn't say that. He said you can't put more than one electron into the same quantum state. He knew very well that you can put two and no more than two electrons in the same orbital state in the atom. You know that because he knew, roughly speaking, that you take a hydrogen atom, you double the charge of a nucleus, and you put another electron in, the two electrons go into the same state, the same orbital state. So Pauli's exclusion principle did not apply to the orbital motion, it applies to the entire quantum state. The entire quantum state includes the spin state. You can put two electrons into the same orbital in an atom, as long as their spins are in the opposite direction. You can't put two electrons with spin in the same direction, as long as the axis here, you can't put them in the same direction, but you can put them into spin states in the opposite direction. As long as they are different, as long as they are different to electrons, and that's the property of the spin in the atom, two of them can't be in the same quantum state, but quantum state includes everything about the particle. So that means in every orbital in an atom, you can put no more than two electrons, and when there are two electrons, they have to be in what's called spin singlets, which means their spins cancel. You can think of that like mesh gears. They're going in the same direction. They're going in opposite directions. Yeah, they're going in the same direction, you can strip the gears. Indeed. I know I've been in my car. An orbital in an atom is more than just what Schell said. What does an orbital mean? An orbital simply means the state of an electron if you ignore the spin. Am I using orbital in a way that a chemist wouldn't use it? Yeah. What I mean by orbital motion is not what a chemist means by an orbital. What's an orbital again? SPDF. I mean a quantum state, a solution of the Schrodinger equation, a solution of the Schrodinger equation, ignoring spin. It creates the chart, the periodic chart that's up there. So that's how come you have two electrons in the air? There are two things. You have SPD and an all-act stop has to do with orbital angular momentum. But then there's principle quantum number, which is called IN. When I speak about the orbital motion, I mean both IN and the orbital angular momentum. It's a chemist notation to call orbital SPD and all-act stop. You know what this stands for? I don't know what it stands for. Yeah, it doesn't mean spherical. But it actually means a quantum state of zero angular momentum. P means angular momentum one. This is the orbital motion over here. D is angular momentum two. F is angular momentum three. And I don't know what G comes after that. I don't remember. That's the orbital angular momentum of the electron. The other quantum number is the distance, basically the distance of the electron from the proton. And that's the principle quantum number. So what I would call the orbital motion, meaning that the actual motion of the electron is a composite of the principle quantum number and the orbital quantum number. Let's call that a state of orbital motion. The state of orbiting, both principle and the quantum. 
All right, so the rule is that how we devise was that no electron can have all of its quantum numbers the same. Incidentally, the orbital, well, yeah, okay. So from what you're saying, can you deduce how many electrons can be in each shell in an atom? Yeah. How many can be? Not how many are. You can deduce the maximum number. Yeah. Yeah. Yeah. We're not going to do that now. See, we haven't intended to do it at all. See, we've got the principle quantum number and then the orbital quantum number and then the spin quantum. Let's talk about orbitals in the sense that can us use it. Orbital means SPD and that, okay? Yeah. An S particle is spin zero. That means the orbital angular momentum is zero and there's only one state of the orbital angular momentum. What about P? P is angular momentum one. We're talking about orbital angular momentum one. But the rules of angular momentum are the same for orbital angular momentum, spin angular momentum. So if the orbital angular momentum is one, then this is three possibilities. If the orbital angular momentum is two, that's a d-wave or a d-state, there are five states. So when you say one, that actually means up and down? Everything, everything being specified. So there's two electrons. All the things which can be simultaneously specified. And then when there's three, then there's two in each of those three. So you get six more electrons or a total of eight. Right, exactly. I hadn't intended to talk about atomic physics to anybody, but I don't know. It's extremely interesting. And some features of atomic physics show up again when you're thinking about quarks orbiting each other and so forth. A little different. Okay, any questions? If not, I have a date tonight, or a Friday night, for a date with my fellow. For more, please visit us at stanford.edu.
(November 13, 2009) Leonard Susskind discusses the theory and mathematics of angular momentum.
10.5446/15067 (DOI)
Stanford University A very important concept in particle physics are the field equations, of course. The equations of motion of the fields describing the particles. We've described boson fields, fermion fields, the electromagnetic field. All of these fields satisfy equations. The form of the equations is more or less wave equations, and it involves derivatives of the field with respect to space, with respect to time, and the field itself. It involves derivatives of the field and the field itself. In other words, the field undifferentiated. How do we code these equations? Just write down all of the equations for all of the fields. Yeah, we could do that. We could write down the equation of motion for the electromagnetic field, for the electron field, for all of the fields in the system, and that would just be the equations of motion. But in standard classical mechanical systems, field theory, even quantum field theory, the equations of motion always come from Lagrangian. Lagrangian is a very important concept in classical mechanics, in quantum mechanics, and in quantum field theory. Lagrangian is an object that you do certain things to, to generate all of the equations of motion of the system. Not just one by one, the equations of motion for each field, but it contains in one simple expression, in one condensed compact expression, all of the equations of motion of a system of fields, or a system of degrees of freedom in general. The Lagrangian idea in classical mechanics is very closely related to the principle of least action. We're not going to go into that here. I'm going to just very quickly remind you of the connection, what a Lagrangian is, what it depends on, how you generate the equations of motion, and do one or two examples, that's all classical physics, and then explain to you what the meaning of the Lagrangian is in quantum mechanics. The same Lagrangian, except expressed not in terms of classical fields, but in terms of quantum fields, has a totally new meaning in terms of quantum mechanics that dictates the motion of particles, the interaction between particles, cross sections for collisions, all sorts of things. It codifies it in a very simple expression. One compact idea. Okay, so the Lagrangian of a system of fields, let's call it script L, is a function, I'll write it, L of, for the simplicity, let's just imagine one field. I'll tell you what you do if you have more than one field. But supposing there's one field, phi, the Lagrangian depends on the field, and it depends on the derivatives of the field with respect to time and space. So it could depend on the derivative of the field with respect to time and the derivatives of the field with respect to x, y, and z. Or more compactly, it depends on the field and the derivatives of the field with respect to x mu. This means derivative with respect to x mu. Okay, all right. Now, what do you do with the Lagrangian to generate the field equations? I'm not going to prove this. We're not going to argue for these equations. We've done that in the past several times. I'm just going to tell you what the rules are. The rules are to generate the equations of motion, you differentiate the Lagrangian with respect to the derivatives of the field. For example, derivative of L with respect to derivative of phi with respect to t. I'll show you an example. This looks abstract at the moment. Then you differentiate with respect to t. 
You add to that the derivative with respect to x of the derivative of the Lagrangian with respect to the gradient of the field along the x-axis. Same thing for y and z. And then you set that equal to the derivative of the Lagrangian with respect to the field itself. The Lagrangian depends on the field itself. That is the equation of motion for the field phi. If your Lagrangian depends on several fields, then you do the same thing over and over for each one of the fields, and that generates the system of equations for all of the fields. So as I said, we'll do an example. As we go on, we'll do some examples. But let's just do the very simplest example just to see how this works. A simple scalar field, a very simple scalar field, the simplest field, has a Lagrangian which is equal to the derivative of phi with respect to time squared. There's a conventional one-half in front of it. One-half the derivative of phi with respect to time squared, minus the same one-half times the derivative of phi with respect to x squared. Same thing with y and z, but I won't write it out. Oh, that's a minus sign here, minus sign. And then finally, minus some function, and we'll call it V of phi. That's the nature of the Lagrangian of a simple scalar field. That's all there is to it, no more than that. Let's see if we can work out what the field equations are. You first begin by differentiating the Lagrangian with respect to phi dot. We could call this phi dot here. Let's just call it phi dot, the derivative of phi with respect to time. And let's simplify the notation, or condense the notation. Instead of d phi by dx, let's write phi sub x. Phi sub x means the derivative of phi with respect to x. Likewise for y and z. Yeah, that will be in here. Okay? Right, okay, so let's first differentiate the Lagrangian with respect to phi dot. Here's phi dot right here, phi dot squared, one-half phi dot squared. The derivative with respect to phi dot is just phi dot itself. Differentiate one-half phi dot squared with respect to phi dot: the two cancels the one-half and you just get phi dot. But then you have to differentiate again with respect to time. That makes it phi double dot. Two time derivatives, a second time derivative. Then, plus. Now, the space derivative terms come in with the opposite sign from the time derivatives. I'll explain why in a moment, but just accept it. They come in with the opposite sign. So you get exactly the same kind of thing, but with the opposite sign, with second space derivatives: the second derivative of phi with respect to x. Same thing with y and z. And then, on the right-hand side, we have the derivative of the Lagrangian with respect to phi. The only place where phi itself occurs, undifferentiated, is in V of phi. So what we get on the right-hand side is minus the derivative of V with respect to phi. That's the form of the field equation for a very, very simple scalar field. Some examples of V of phi. The simplest example of V of phi would be a constant, but then it would have no derivative with respect to phi, and it wouldn't even contribute here. It's not interesting. We could have something linear in phi. That's all right. You can have something linear in phi. But, incidentally, V of phi is field energy, and if V is unbounded from below, in other words, if V is just a linear function of phi, that means the minimum of the energy is infinitely far off, at negative infinity. That's not good.
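Since the manipulation just performed is purely mechanical, it can be reproduced symbolically. The sketch below is my addition; it uses sympy's Euler-Lagrange helper in one space dimension, with an unspecified potential V, and recovers the field equation phi double dot minus phi double prime equals minus V prime of phi.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, x = sp.symbols('t x')
phi = sp.Function('phi')(t, x)
V = sp.Function('V')          # unspecified potential energy V(phi)

# Lagrangian density in one space dimension:
#   L = (1/2) phi_t**2 - (1/2) phi_x**2 - V(phi)
L = (sp.Rational(1, 2) * phi.diff(t)**2
     - sp.Rational(1, 2) * phi.diff(x)**2
     - V(phi))

# The Euler-Lagrange machinery reproduces the field equation worked out above,
# equivalent to: phi_tt - phi_xx = -V'(phi)
print(euler_equations(L, [phi], [t, x]))
```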
The first interesting example is to put phi squared. phi squared, v of phi is equal to something proportional to phi squared. All right? A proportionality needs a constant. Let's call the constant m squared, and let's put a 2 in. There's no content in that. m squared over 2 is just some constant. I call it m squared over 2 because it has some physical meaning that we'll see in a moment. Now, what happens if I differentiate m squared over 2 phi squared with respect to phi? That gives me m squared phi. Differentiating v of phi with respect to phi would just give me m squared phi. All right? So again, simple wave equation, phi double dot minus derivative of phi with respect to x squared dot, is equal to minus, I think, m squared phi. That is a very, very basic form for a wave equation, both in classical field theory, quantum field theory, our electrodynamics, all sorts of situations. This wave equation shows up. What does it say about the relationship between the energy and momentum of a quantum of the field phi? Let's move on to quantum mechanics now. In quantum field theory, the energy is related to the frequency. Differentiating twice always gives us a factor of minus 2, sorry, minus omega squared. If we have a wave function which is of the form e to the i omega t, perhaps times e to the minus ikx, I think I want this way. If I differentiate with respect to time, that brings down a factor of minus i omega. If I do it twice, it gives me a minus omega squared times phi times phi. What happens if I differentiate with respect to x twice? It gives me two factors of k, of k sub x. So this will give me plus k sub x squared phi, plus k sub y squared, plus k sub z squared, all times phi. And on the right-hand side, we have minus m squared phi. We can just cross off the phi here, cancel it out. And what this wave equation tells us is just a relationship between omega and k squared. It tells us that omega squared minus k squared is equal to m squared. That's really the whole content of the wave equation. For a wave moving down a particular axis, it tells us that omega squared is equal, better yet, omega squared is equal to k squared plus m squared. If we remember, let's work in our favorite units, h bar equals c equals one, sorry, h bar, h bar equals c equals one, then omega is nothing but the energy. Omega is related to energy by a factor of h bar. This is energy squared equals momentum squared plus m squared. This is the usual relation. If I wanted to make it, I want to put the speeds of light back in, I would put, let's see, I think a c squared over here and a c to the fourth over here. Question? But in units in which the speed of light is equal to one, this is just e squared equals p squared plus m squared. This is the usual relationship between energy and momentum if we interpret m as the mass of a single quantum. So what this equation tells us then is really nothing but the relationship between energy, mass, and momentum with the stipulation that the parameter m here is the mass of a particle. So that's quite the elegant fact. One other point, in writing Lagrangians, it's important you want your equations in the end to be relativistically invariant, invariant on the Lorentz transformations. The only thing you'll have to do to keep them invariant on the Lorentz transformations is make sure that the Lagrangian is a scalar. If it's really a scalar and transforms as a scalar on the Lorentz transformation, your equations of motion will be invariant on the Lorentz transformation. 
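A quick symbolic check of the plane-wave substitution just described (my own illustration, in one space dimension with hbar = c = 1): inserting the plane wave into the wave equation with the m squared term forces omega squared = k squared + m squared, which is the energy-momentum relation.

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)
omega, k, m = sp.symbols('omega k m', positive=True)

phi = sp.exp(sp.I * omega * t - sp.I * k * x)   # plane wave, as in the lecture

# Klein-Gordon equation in one space dimension: phi_tt - phi_xx + m**2 * phi = 0
kg = sp.diff(phi, t, 2) - sp.diff(phi, x, 2) + m**2 * phi

# Dividing out the common factor phi leaves the dispersion relation.
disp = sp.simplify(kg / phi)
print(disp)                                   # -> -omega**2 + k**2 + m**2
print(sp.solve(sp.Eq(disp, 0), omega))        # -> [sqrt(k**2 + m**2)]
```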
That's why this minus sign is here. Remember, whenever you're doing special relativity, there are minus signs in the relationship between space and time, if you like. So that's the basic setup for Lagrangians. And it's really quite simple. There could be several fields in the Lagrangian, and they could be interacting with each other in some way or another. Let me give you another example of just one field. Supposing instead of v of phi being proportional to phi squared, we also put in something else, for example, I don't know, g times phi cubed. Then there would be another term in here. Where is it? There would be another term in here, which would be, I guess, minus 3g phi squared. The equation of motion would no longer be linear. Linear means only contains first powers of derivatives in the field itself. If an equation is linear, it means you can add solutions and get new solutions. But now we have a nonlinear term in the equations of motion. Nonlinear means quadratic in phi. Quadratic, cubic, any kind of higher power of phi. And so we see a pattern. The pattern is things in the Lagrangian, which are quadratic in either derivatives or the field itself, lead to linear equations, and things which are higher powers than quadratic lead to nonlinear equations. Nonlinear equations mean solutions don't add. First, let's take the case of linear equations. If we have linear equations, and let's suppose we have some wave moving down an axis. It's a solution. And now we have another wave superimposed. Well, they're not superimposed yet. It's another wave packet from the same field moving to the left instead of the right. And they pass each other. Well, if the equations of motion are linear, they will just go right through each other. That's the nature of linear equations. They'll just go right through each other. And the solution will remain just after. What this is is a certain solution. And the sum of the two solutions is just nothing but the two wave packets coming at each other. For all time, the solutions will just remain the sums of the original two solutions. And all that will happen is they'll pass right through each other and come out the other end without scattering, without deforming. And that's it. During the time that they pass through each other, there may be some interference, but then they'll pass through each other and the two wave packets will separate. Separate, undeformed. On the other hand, if there are nonlinearities in the equations of motion, that means the wave packets scatter. They influence each other, and they might not only scatter, they might break up, they might do all kinds of things, much more complicated. And that creates scattering in particular. Scattering of waves which will go off and ran in directions, whatever. OK, so that's what nonlinearities, higher powers do here. It causes the wave packets to be nonlinear and to scatter. Another example, just to show you what you can do, is you might have two fields, for simplicity, two scalar fields. To make two scalar fields, we would take just these terms here and write, let's call it two fields, phi, and give me another letter. Another Greek letter. Sigma, rho. OK, let's take phi and rho. So another possibility for a Lagrangian. There's nothing special about these. I'm just writing down random things. 1 half d mu phi squared. That's the same thing I've written here. I just shortened it up and made it simple. 
That stands for phi dot squared minus the derivative of phi with respect to x squared and so forth, just to give it a short hand. We do the same thing, what do we call it, rho? d mu rho squared. And then we can add something which some v, some potential energy, which depends on both phi and rho. OK, so this could be some phi and rho. Let's take an example. There could be an m squared, phi squared over 2. Some other constant, let's call it capital M squared, rho squared over 2 minus. If that's all I had, supposing that was all I had, then the field equations would be uncoupled, separate field equations for phi and rho. Doing this operation on phi, we would get phi double dot minus the second derivative of phi with respect to x squared dot dot dot is equal to m squared minus m squared phi. And we get the same kind of thing for rho. Rho double dot minus the second rho by dx squared dot dot dot equals minus big m squared rho. In other words, we would just get two separate independent wave equations with two different parameters, little m squared and big m squared, with two parameters we correspond to the masses of the two distinct fields or to the quanta, the masses of the quanta of the two distinct fields. This would be a system of two kinds of particles, phi quanta and rho quanta, with two different masses, m and big m, little m and big m. Now we could add something more complicated. We could add, for example, let's add rho times phi squared. And pick that random, nothing special about this. When I work out the equation of motion for phi, we have on the right-hand side the derivative of v with respect to phi. What is the derivative of v with respect to phi? Well, it's just equal to minus here. The derivative of v with respect to phi is just twice rho phi, differentiating phi squared, this gives me twice phi. On the other hand, the equation of motion for rho, we have to put on the right-hand side minus the derivative of v with respect to rho, and that's just equal to minus phi squared. OK, so here we see an example of an interaction between two fields, a terminal Lagrangian which involves both fields. If a terminal Lagrangian involves both fields in some non-trivial way, the result will be that the wave equation for phi will contain rho, and the wave equation for rho will contain phi. If in addition, this is not quadratic, then the equation is a non-linear, they're non-linear and coupled. That means that a wave packet of phi will scatter a wave packet of rho, and a wave packet of rho will scatter a wave packet of phi. They'll come together and do things and not just pass through each other. If this term is zero, they'll just pass through each other. So somehow, the terms in the Lagrangian like rho phi squared are telling us something about interaction between the quanta. It's telling us that the quanta of the field scatters each other. That's what a term like this is indicating. These terms here tell us about energy and momentum. This term tells us about mass. If we didn't have this non-linear term, the content of this equation would just be k squared, omega squared equals k squared plus m squared. That's all of these terms. And this term tells us about the interaction between quanta. That's classical physics. That's classical field theory in a nutshell. Let me write down another example for you. Another example is the Dirac equation. Well, let's first of all consider the Dirac equation. What is the Dirac equation? 
It has the form, I forget where the i's go, i psi dot is equal to alpha derivative with respect to x psi. I think there's an i here also. Anybody remember? I think so. Plus m beta psi. Beta was a matrix. Alpha was a collection of three matrices. This really means alpha 1 d by dx1 plus alpha 2 d by dx2 and so forth. That was the Dirac equation. Can it be written in terms of Lagrangian? Yes, it can be written in terms of Lagrangian. Lagrangian is just, I think it's i psi dagger derivative of psi with respect to t. Psi dagger is the complex conjugate of psi plus, I think it's plus. It's not very important. Psi dagger alpha derivative of psi with respect to x and then finally plus psi dagger beta psi times m. You'll sometimes see this is the Lagrangian. This Lagrangian, if you do the same operations on it, will give you the Dirac equation. That's the Lagrangian for the Dirac equation. We don't have to go through it. It's not important for us right now. I just want to indicate that there is a Lagrangian which generates the Dirac equation the same way that the scalar field equation was generated from the Lagrangian of the scalar field. It's got Dirac matrices in it, so it's a little more complicated. It's a multi-component field, but nevertheless that Dirac field is generated, or the Dirac wave equation is generated from this Lagrangian. This would just give you the ordinary Dirac equation. Now, supposing there was another field in the problem, possibly a scalar field, a scalar field, a boson field, how might you, you would then add in also the Lagrangian for the scalar field? In other words, where is it? The scalar field Lagrangian, maybe with just the m squared or maybe the m cubed, the phi cubed term. Is that parenthesis in the right place? I mean, you have the i over the holding. I don't remember. Maybe not. Is it important? Not important. Not important to us. The important thing is that we can couple the Dirac field to the scalar field, which essentially means make the scalar field scatter the Dirac field, scatter the scalar field, by putting terms in Lagrangian which couple them together. For example, something like Psi dagger, now I'm writing something that won't work, Psi dagger times Psi, it'll work, but it's not Lorentz invariant, Psi dagger times Psi times phi. Now, this is not really quite legitimate because it's not Lorentz invariant. If you wanted to make it Lorentz invariant, I'm not going to explain this tonight. We'll have to go back to some of the Dirac matrixology, but you would just put in a beta in here. That would make it Lorentz invariant. The beta is just a matrix. It doesn't change the structure in the appreciable way. It's product of two Psi's times a phi. And that would give you an interesting field theory where Psi fields scatter Psi and Psi fields scatter Psi and the whole thing is a big interacting mess. When Psi's come on size, they scatter each other. Yeah, it's the same beta. It's the same beta. Right, notice Psi dagger Psi times M and Psi dagger Psi times Psi. So it's almost as though M was being replaced by a scalar field here. This is the way... Oh, if you took this Lagrangian, it would generate both the equations of motion for Psi and for Psi. There would be a Dirac equation with an extra term proportional to Psi and a scalar field equation with an extra term that would have Psi bar Psi in it. So this is the pattern. This is the way quantum field theories are expressed. They're codified by writing down a Lagrangian. 
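The lecture leaves alpha and beta abstract. As a side note that is not in the lecture, here is one standard (Dirac) representation of these matrices and a numerical check of the algebra they have to satisfy, which is what makes the Dirac equation consistent with E squared = p squared + m squared: each alpha squares to one, beta squares to one, and distinct matrices anticommute.

```python
import numpy as np

# Pauli matrices
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]

I2 = np.eye(2)
Z2 = np.zeros((2, 2))

# Dirac representation: beta = diag(I, -I), alpha_i has sigma_i in the off-diagonal blocks
beta = np.block([[I2, Z2], [Z2, -I2]])
alpha = [np.block([[Z2, si], [si, Z2]]) for si in s]

def anticomm(a, b):
    return a @ b + b @ a

I4 = np.eye(4)
print(all(np.allclose(anticomm(a, a), 2 * I4) for a in alpha))        # alpha_i^2 = 1
print(np.allclose(beta @ beta, I4))                                   # beta^2 = 1
print(all(np.allclose(anticomm(a, beta), 0) for a in alpha))          # {alpha_i, beta} = 0
print(all(np.allclose(anticomm(alpha[i], alpha[j]), 0)
          for i in range(3) for j in range(3) if i != j))             # {alpha_i, alpha_j} = 0
```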
Lagrangian is a thing which, in a very simple way, expresses all of the content, all the dynamics, all of the interactions between these fields. That's classical field theory. Classical wave packets scattering other classical wave packets. What about quantum field theory? All right, so I'm going to tell you very briefly what this has to do with interaction between particles. We've already talked at length about the scattering of particles and how it is represented, for example, by products of fields. Fields are built up out of creation and annihilation operators. So if you take a field, for example, and I don't know, let's take a simple example, phi cubed. What does that represent when you think about it in terms of creation and annihilation operators? Phi will contain both creation operators, let's call them A pluses, plus A minuses. This is very schematic. No detail. It'll contain various creation and annihilation operators. And when it's cubed, it will contain all kinds of terms. Terms which can absorb or annihilate three particles. That's the A minus cubed term. Terms which can create three particles. Terms which can eat two particles and spit out a third and so forth. So phi cubed has a meaning in terms of particles coming in and particles going out. Particle comes in, two particles go out. At a point x, that's what phi cubed of x represents from a quantum mechanical point of view. Phi cubed of x represents one particle coming in and splitting into two at the point x. But there are many terms in adding up phi cubed of x. The various terms correspond to possible things which look like this, namely two particles coming in, one particle going out. There's a possibility of three particles going out, just produced at the origin, three particles coming in. What else? Have I missed anything? I think that's it. One particle goes to two, two particles go to one, no particles go to three, and three particles disappear. That's the kind of thing which would be codified by phi cubed of x. In fact, the Lagrangian represents, some of the terms represent interactions like this. The other terms, the quadratic terms, the higher powers here represent interactions like this. The quadratic terms, where are they? Phi squared and here, these represent just the motion of a particle, the motion of an undisturbed particle. So let's see how that works. Let's take a terminal Lagrangian, here's a terminal Lagrangian, derivative of phi with respect to x squared. Let me think of it the following way. The derivative of phi is equal to phi at one point minus phi at a neighboring point, let's call it x prime. That's what a derivative is, right? A derivative is basically a difference of the field at two neighboring points, and again, schematically, so squared. That's what phi, derivative of phi with respect to x stands for. This is, let's put the one half here. This is one half phi of x squared plus one half phi of x prime squared. But the important point is the term which is phi of x prime. These two terms are not very interesting. What do they do? They absorb a particle at a point and produce another particle at exactly the same point. They might eat two particles at the same point, or they might take an incoming particle at point x, absorb it, and then just spit it back out at the same point. So they don't move the particle around, they're sort of passive, more or less passive. This term here absorbs a particle at x prime and re-emits it at point x, or vice versa. 
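The expansion being described, with the cross term that moves a quantum from one point to the other, is a one-line algebra exercise. Here is a sketch of it (my addition), treating the derivative schematically as the difference of the field at two neighbouring points, exactly as in the lecture.

```python
import sympy as sp

phi_x, phi_xp = sp.symbols('phi_x phi_xp')   # the field at x and at the neighbouring point x'

# Half the square of the difference, schematically standing in for (1/2)(d phi/dx)^2.
kinetic = sp.Rational(1, 2) * (phi_x - phi_xp)**2

print(sp.expand(kinetic))
# -> phi_x**2/2 - phi_x*phi_xp + phi_xp**2/2  (up to term ordering)
# The two squared terms absorb and re-emit a particle at the same point;
# the cross term phi_x*phi_xp is the piece that absorbs at x' and re-emits at x.
```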
In other words, what that cross term does is move a particle between x and x prime. It absorbs the particle if it finds it at x prime, and then re-emits it at point x. So what is it then? If this is point x and this is point x prime, it just moves a particle from x prime to x. That's the nature of these quadratic terms in the Lagrangian. They move particles from one point to another. If you wanted to imagine the motion of a particle from one point to a distant point, you might simply multiply these terms in the Lagrangian together many times, so that the first one would move the particle from x to x prime, the next one would move it from x prime to x double prime, then to x triple prime, and so forth. But stated simply, the quadratic terms in the Lagrangian govern the motion of the undisturbed particle. That's what they do. They're connected with the free motion of a particle. On the other hand, the nonlinear terms, the nonlinear terms meaning the terms which are cubic, for example, and you could have higher terms, they could be quartic, they take particles and bring them together and scatter them, or annihilate them and create new particles. So that's what the Lagrangian does. It quantifies these interactions, these processes, in a very, very condensed way. Yeah. When you write the derivative that way, don't you have to worry about the distance between x and x prime? Yeah, we do. Certainly we do. There's the distance from x to x prime; let's just call it a distance L, and we put an L squared downstairs. So, right, you do have to put a numerical factor in there. But the numerical factor doesn't change anything. It doesn't change what these things do. The number is just a number. It still has the effect of removing a particle at x prime and making it reappear at x. What does the mass term do? The mass term, which is just m squared phi squared, is also sort of like these terms here. It's also something which absorbs and emits a particle at the same point. You can represent it, let me represent it, by saying a particle comes in, m squared over 2, a particle comes in and a particle goes out at the same point x. It comes in and goes out. So this mass term, m squared phi squared, doesn't move the particle around. It just absorbs it and re-emits it. Absorbs it and re-emits it, or, if you like, it counts particles, really. It just tells you there's a particle there. But strictly speaking, it absorbs a particle and re-emits it from the same point. That's the nature of the mass term. Now, we are, of course, drawing diagrams. These diagrams are really nothing but Feynman diagrams. We're not going to go into the depths of Feynman diagramology. It's not important for where we're going to go. Of course, it is important to really understanding particle physics. But let me just illustrate one more little Feynman diagram, one more diagram describing a particle process. This is an important one. This is the basic process of quantum electrodynamics. Quantum electrodynamics is a description of electrons and photons. Electrons and photons: electrons are described by the Dirac field. Now, first of all, the Dirac field for an electron, psi, let's see, is itself composed only out of annihilation operators. It's composed out of annihilation operators. So let's write them as C, I guess C minus.
But in the expression for psi, there's a sum over all the possible momenta and properties of the electron. In particular, there are annihilation operators for particles with positive energy. Let's just write it this way. And also annihilation operators for particles of negative energy. Remember, the Dirac equation has solutions with both positive energy and negative energy, and there are annihilation operators in psi for positive energy and for negative energy. Psi dagger has creation operators, C plus, also for positive energy and for negative energy. That's just what the Dirac equation is. That's what the Dirac field is. So, annihilation operators and creation operators. Now we do a little switch. This is the annihilation operator for an electron of negative energy. It removes an electron of negative energy. Removing a particle of negative energy is absolutely equivalent to adding an antiparticle of positive energy. So the annihilation operator is adding an antiparticle of positive energy. The removal of a particle of negative energy is the same as the addition of an antiparticle of positive energy. That's the filling-the-Dirac-sea sort of thing, and so forth. And so this operator over here, the annihilation operator for a particle of negative energy, can be re-labeled, it can be re-labeled as a creation operator for an antiparticle of positive energy. Let's indicate now that this is for an electron and this is for a positron. So the Dirac field contains annihilation operators for electrons and creation operators for positrons. Likewise, psi dagger contains creation operators for electrons and annihilation operators for positrons. There's just a switch of terminology, a switch of notation, to represent the fact that a positron is a hole in the negative energy electron sea. Okay, so now let's write down what the basic interaction of quantum electrodynamics is. The basic interaction of quantum electrodynamics involves psi dagger, psi, and the photon field. The photon field is the electromagnetic field, the vector potential of the photon field. If you want to put things correctly, the vector potential is a vector, so it needs a vector index. And then we have to make a vector out of psi dagger psi. We do that with the Dirac matrices. Let's write it correctly. It's psi dagger psi times the time component of the vector potential, A naught, plus psi dagger alpha psi times the space components of the electromagnetic field. What is this thing I call A? The thing I call A is just a field operator for photons. Photons, of course, are particles which have a directionality, their polarization. Their polarization is a little vector. So photons have, in addition to a position and a momentum, a little orientation. They carry a little flag with them, which points in a certain direction. It's their polarization. And so, in other words, they have a little bit of a vector character to them, and that's what this vector is. It's just the polarization of the photon. What does this do? This absorbs an electron. Let's see. It absorbs an electron. It can emit a positron. No, it can emit an electron. Let's see. This one times this one: psi dagger times psi can absorb an electron and then re-emit the electron, and at the same time emit a photon. Is that a term in the Lagrangian? Yeah, that's a term in the Lagrangian. Is that a scalar? Yeah, it is a scalar. Okay, so let's go back to the Dirac equation. You can make scalars by multiplying vectors together. Who asked me that?
Yeah, you make scalars, for example, by multiplying vectors together, right? Given two vectors, B mu and A mu, you can make a scalar like so. Yes? Now, let's, right, we have A mu, but we don't have a vector composed out of a size. But we do have a vector composed out of a size. So let me tell you what the vector composed out of a size is. It's just Psi Dagger Psi is the time component of it and Psi Dagger Alpha Psi is the space component of it. That's it. All right, so Psi Dagger Psi gets multiplied by the time component and Psi Dagger Alpha Psi gets multiplied by the space component. You add them together. Okay? So that makes a scalar. That makes a scalar and... Can we go back to that other board? The one you just covered up, I'll pull this down. Yeah. Okay, thank you. Now what does A have in it? A has creation and annihilation operators for photons. So what we have here is products of creation and... Y'all, let me think. Creation operators for electrons and annihilation operators for positrons, that's here. Here's annihilation operators for electrons and creation operators for positrons. And here's creation and annihilation operators for photons. So it describes a whole variety of different kinds of processes. Let's draw some of them. A whole variety of different kinds of processes in which electrons, positrons, and so forth are created and annihilated. For example, it can take an incoming electron, let's put a little arrow on the electron to show its direction of motion. It can scatter it and emit a photon. What is that? An annihilation operator for an electron, a creation operator for the electron, and a creation operator for the photon. That's what this... We can also have electron comes in, electron goes out, photon is absorbed by the electron. So his emission of a photon, his absorption of a photon. Okay, what else can we have? We can have... Let's label this. Electron, electron, electron, electron. But we can also have electron coming in and positron coming in. E minus. Electron is a negatively charged particle. We'll call it E minus. Here we can have an electron coming in and a positron coming in. That's usually drawn in this way. An electron coming in is drawn with an arrow upward, a positron coming in is drawn with an arrow downward. And then, if you draw it that way, it looks as though the electron just turns around. But it's not really... It's not really what's going on. What's going on is an electron is coming in and a positron is coming in and a photon going off. That's another thing that's contained in this combination when you multiply side dagger times side times A. More, what else? Electron, positron being produced by a photon. All these various processes are contained in this one single expression here, side dagger, side times A. And they all have the same coefficient in front of them, same numerical coefficient in front of them. All these processes, the probability for them, the amplitude for them, the quantum mechanical amplitude for them, are all related to each other simply by the fact that the coefficient in front of them is the same numerical value, whatever it happens to be. What does it happen to be, the numerical coefficient in front of this? I didn't write it down. We should write it down. Well, it's not the fine structure constant. It's close. It's just the electric charge of the electron. Yeah. Okay. What is the electric charge of an electron? The electric charge of a particle is basically the amplitude. 
It's the square root of the probability that an electron, when it scatters, emits a photon. An electron, at every point along its trajectory, can emit a photon. The probability per unit time along the trajectory for it to emit a photon is, the probability is the square of the electric charge. So the electric charge is the square root of the probability. It's the amplitude for the photon to be emitted. It's less intuitive. What's that? The third one that you just drew, the photon turning into the electron. Yeah. It's also the probability for that. It's the same. Yeah. The electric charge. That said, it's got a scaling factor in there, doesn't it? Scaling factors? The probability is the probability per... it's actually a dimensionless quantity. It's a dimensionless quantity. An example would be an electron bangs into a wall and, let's say, stops dead. What's the probability that it emits a photon? What's the probability that its energy or its momentum, its energy, goes off as a photon? Question? Yeah. So an electron of a given energy plows into a wall, stops dead. What's the probability that its momentum continues in the form of a photon? And that's the square of the electric charge. So that's the meaning of the electric charge in quantum mechanics. In classical field theory, it's just a coefficient in front of the interaction of the psi and electromagnetic fields. Does the electric charge have units? No. Okay. Yes. Look, the units for electric charge were defined by people who asked, well, how much current goes through a wire, this sort of thing, that sort of thing. In microscopic quantum mechanics, we usually define the electric charge in a dimensionless way. We don't measure it in terms of coulombs. We measure it just in terms of the charge of an electron, in units of the charge of the electron. So it's dimensionless. But really what it is, is the dimensionless probability that a particle which collides with a wall emits a photon. And it really is dimensionless. The only dimensions that you really need in physics are mass, length, and time. In fact, you don't need any dimensions. You can work in dimensionless units for everything. Planck units, they're called. Planck units are dimensionless. And in Planck units, everything is dimensionless, including the electric charge. Is there some use in grouping? Is there some use in grouping things that have charge involved? Grouping. Yeah. What is that called? Grouping. Well, I mean, sometimes you'll write equations that have several terms, and you can go through the whole process and then pull them out by equating like terms on the left-hand or right-hand side. So it sort of has a... I don't know, I'm just asking. I can't say anything more than that. The question is about the timing of the creation and the annihilation. Is that simultaneous, or is there any kind of delay that comes out of the equations? No, not for these basic fundamental processes. The process, okay. These operators are all evaluated at the same space time point. All at the same space time point, which means that the electron is absorbed, re-emitted, and the photon is emitted from exactly the same point. Is that what you're asking? Yeah. Yeah, okay. I guess what I was saying is you have different kinds of conserved quantities and they individually have to be conserved. Oh, yes, that's true. Yeah, the various conserved quantities here would include charge, for example. That has to be conserved.
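To collect this in one formula (the explicit coefficient e in front and the numbers below are standard conventions, not something spelled out in the lecture): the interaction term of quantum electrodynamics, with its coupling made explicit, is schematically

\[
\mathcal{L}_{\text{int}} \;\sim\; e\,\Big(\psi^{\dagger}\psi\,A_{0} \;+\; \psi^{\dagger}\vec{\alpha}\,\psi\cdot\vec{A}\Big),
\]

up to overall sign and normalization conventions. The dimensionless size of e is usually quoted through the fine-structure constant, \(\alpha = e^{2}/(4\pi\epsilon_{0}\hbar c) \approx 1/137\), so e squared, up to conventional factors of 4 pi, is the sort of small emission probability being described here.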
In fact, the conservation of charge is nothing but the sort of continuity of these arrows. If we had a process like this, what would that be? That would be an electron coming in and a positron going out. No good. Then a negative charge would turn into a positive charge. So if we represent the flow of charge by this arrow here, then an electron moving vertically upward, time is upward, of course. An electron would be represented by an arrow going up, a positron would be represented by an arrow going down. We do not allow arrows to come together. The rule is always through-going arrows. Arrows come in and go out. Come in and go out. Come in and go out. And that is tantamount to the conservation of charge. The conservation of charge is also ensured by the fact that we have one psi and one psi dagger. A psi either creates an electron or annihilates a positron. That means, did I say that wrong? A psi annihilates an electron. A psi annihilates an electron or creates a positron. In either case, it increases the charge by one unit. When it annihilates an electron, that increases the charge by one unit. When it creates a positron, it increases the charge by one unit. Psi dagger does the opposite. So psi increases the charge by one unit. Psi dagger decreases the charge by one unit. And so whatever this term here governs and the way it processes, it conserves electric charge. So the term in the box is Lagrangian in the right hand corner. Lagrangian in the infinity. In the infinity. Look, I haven't written the whole thing. There's also the terms like this, which govern the photon and the Dirac equation. These are the interaction terms. Yeah, those are the interaction terms. I haven't bothered to write the quadratic terms. I don't think it's technically accurate, but in my education, my undergraduate education, classical mechanics, you always had the feeling that the Lagrangian was something you could figure out from understanding the class of physics, and there's the right answer that you could use. And then a little bit, what I think I'm hearing you say, and then what I've done in reading on the side, is that quantum mechanics, Lagrangian isn't something you can deduce, it's something you guess at, and you find out a right by definition of the experiment. When did you ever deduce to Lagrangian and classical mechanical system? Well, I actually just looked in the back of my palm in the exam. Yeah. But it was something that was defined in my memory, it was a difference between the kinetic energy and the potential energy. Yeah, it is. In fact, here's kinetic energy of the field, here's potential energy of the field. These spatial gradients are a kind of generalization of kinetic energy. Kinetic energy means proportional to the square of time derivatives. Potential energy means not no derivatives. So that's exactly what it is. Kinetic energy minus potential energy. Is that exactly the same in quantum mechanics? Yeah, yeah. Right. It's the same in quantum mechanics. Isn't that a miracle? No, it's not a miracle. The quantum mechanical Lagrangian is much more fundamental than the classical mechanical. You start with a quantum mechanical system described by some Lagrangian, and then in the limit of large numbers of quanta, you work out its classical behavior. The remarkable property here is that if this kind of Lagrangian here governs the quantum mechanics, then when you go to the large number of quanta, it will also govern the classical wave equations. That's not obvious, but it's true. 
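Spelled out for the scalar field, with conventional factors of one half, the identification just made reads

\[
\mathcal{L} \;=\; \underbrace{\tfrac{1}{2}\Big(\tfrac{\partial\phi}{\partial t}\Big)^{2}}_{\text{kinetic}}
\;-\; \underbrace{\tfrac{1}{2}\,(\nabla\phi)^{2}}_{\text{spatial gradients}}
\;-\; \underbrace{\tfrac{1}{2}\,m^{2}\phi^{2}}_{\text{potential}},
\]

kinetic energy minus potential energy, with the gradient term playing the role of the generalized kinetic energy described above. This is a sketch in one conventional normalization, not necessarily the exact expression on the blackboard.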
As far as how physicists find out what Lagrangians are, what the Lagrangian's systems are, well, of course, sometimes they guess them on the basis of deeper theory. And sometimes they get it right, and sometimes they don't get it right. But we know the Lagrangian of quantum electrodynamics with extreme precision. We know the Lagrangian of the standard model of particle physics with extreme precision. How do we know it? We know it from experiment. We know it by colliding particles together and working out just what kinds of basic interactions it takes to describe those interactions between particles. So there's a little bit of an art there, but when you do collide particles together and see what comes out, you are looking pretty directly at these vertices here. Of course, personally, you find the diagrams, if you rotate them, they'll still be valid. Rotate from here to here. Rotating means... I think I left something out. I left out the possibility that an electron and a positron can come together, and a photon also be absorbed at the same point. So you can take any one of these legs and push them up, push them down, and it's still a valid... right. Quite remarkable, I mean, that one simple term can describe all these different processes. Any questions? That one that you just did, that sort of says that the photon is together with the positron and an electron results in nothing. Right. Well, of course, yeah. Several lectures back, I showed you how energy conservation is derivable by integrating such processes over all time. Do you remember that? When we integrated the amplitude of such a process over time, we found energy conservation. When we integrated it over space, we found momentum conservation. This is the basic process. If we took that process and calculated the amplitude for it, and then integrated it over all the places where the process could happen, that integration would ensure energy and momentum would be conserved. If energy and momentum is conserved, this kind of thing can't really happen. It can't really happen because it would violate the conservation of energy. Okay. So we have a little more work to do after we've written down this diagram, expressed an amplitude for it. We have to integrate it over everywhere with equal probability, and that makes sure the energy and momentum are conserved. All the congress happened where out of nothing you get an electron and a lot of electrons and photons. I'm not quite sure what you're saying. Are you saying that when we integrate over time, excluded processes will automatically go away? Yeah. This is so easy. Question? The left side, things like the partial of the field with respect to time and respect to energy. The left side of what? Here? Yeah. And then over on the right side, the field is expressed in an operation, an annihilation operator. Can you take it? The partial of the respect to time of the creation operator? No, you don't take the, okay, good question. Remember, you don't take the derivative of the creation operator. The field looks like sums or integrals of creation operators or annihilation operators with things like e to the ikx e to the minus i omega t, right? You're differentiating these things, not these things. These things don't depend on position. Yeah, they're like constants. They're like constants. Here's what you're differentiating, these objects here. That's why you pull down factors of k and omega by differentiating. Yeah. Let me give you some other examples of interactions. 
Other kinds of processes which exist in nature, which can be described by this kind of field theory. Protons, neutrons, and mesons. Protons, neutrons, and mesons form a system which is fairly similar to quantum electrodynamics, similar in spirit. Protons are fermions. Neutrons are fermions that are described by exactly the same kind of fields as electrons. So let's give names. Psi proton, psi neutron. And then the other objects in the theory are mesons. Pi mesons are an interesting collection of mesons. What are pi mesons? They are, we'll learn as we go on that they're quark anti-quark pairs, but we're not interested in that now. Let's just think of them as particles. They're scalar particles. These are Dirac particles, and pi-ons are scalar particles. They come in three varieties. Pi plus, pi minus, and pi zero. What is the plus minus and zero? The electric charge. The electric charge of a pi plus is plus one unit, one unit in units of electron, electron charges. This one is negatively charged, and this one is electrically neutral. The proton is positively charged. The neutron is neutral. So let me give you examples of some interactions which do occur in nature. There's a Psi dagger proton. Let's, a Psi dagger proton. Psi neutron. Psi neutron absorbs a neutron and creates a proton. Let's put, what do we have to put here? A pi on. Is it pi plus or pi minus? Let's see. Let's try pi plus. All right. This absorbs a positively charged pi meson, and, oh, I think I have it wrong. Pi minus. Absorbs a negatively charged pi meson, pi minus. Emits a neutron and absorbs a proton. Do I have that right? No, I don't. Absorbs a neutron and emits a proton, and emits a proton, and this does not look possible because it looks like a negative charge went to a positive charge, so I think I want a pi plus here. This is one interaction which occurs in nucleophysics. This is one interaction which occurs between mesons and nucleons. Nucleons are protons and neutrons, and it's described like this. This, of course, is not a really fundamental interaction. The real thing is quarks, and we're going to break this down into processes in terms of quarks. If we didn't know about quarks, and when we didn't know about quarks, we represented protons and neutrons in terms of fundamental fields, pi mesons in terms of fundamental fields, and we would write down things like this. What else can be there? Side dagger neutron, side proton, and pi minus. This absorbs, let's see this, absorbs a pi minus and a proton, and gives off a neutron, and side dagger proton, side proton pi zero, and also side dagger neutron, side neutron, and pi zero. What are these things described? They describe proton goes to neutron plus a pi plus, but of course the same vertex here describes many processes. The same one describes proton anti-neutron, anti-neutron, anti-neutron, I'll represent just by a bar, emitting a pi plus, a whole variety. You can push the legs up and down. When a leg is switched, you have to replace it by an antiparticle. If you take an outgoing neutron and take that outgoing neutron, switch it around, there's proton coming in, it becomes an anti-neutron coming in. The outgoing neutron, when switched around, upside down, becomes an incoming anti-neutron. Here's an incoming proton, incoming anti-neutron, they annihilate each other and create a positively charged pi meson. You can work out where all the different interactions that take place between protons, pions, and neutrons. 
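Gathered in one place, and with a single illustrative coupling constant g put out front (the lecture does not fix the coefficients, so g is an assumption here), the interaction terms just listed are

\[
\mathcal{L}_{\pi N} \;\sim\; g\Big(
\psi^{\dagger}_{p}\,\psi_{n}\,\pi^{+}
\;+\; \psi^{\dagger}_{n}\,\psi_{p}\,\pi^{-}
\;+\; \psi^{\dagger}_{p}\,\psi_{p}\,\pi^{0}
\;+\; \psi^{\dagger}_{n}\,\psi_{n}\,\pi^{0}\Big),
\]

each term conserving electric charge, and each one describing a whole family of processes, absorption, emission, and the crossed versions with antiparticles, just as the single vertex of quantum electrodynamics did.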
Basically, all of them can happen as long as they conserve charge. As long as they conserve charge, they all happen. They're all part of nuclear physics, all possibilities, and they're codified in a small number of terms, side-dagger, proton, sine, neutron, pi plus, and so forth. That's another example of where this sort of technology was important in the early days before we knew about quarks. The fact that the proton and neutron are made up out of quarks, well, the size of the quark distribution is sort of like an atom, like electrons going around an atom and so forth, but it's much, much smaller. Much, much smaller, the proton, and because it's smaller, it means at some level you can think of it as a point particle. In other words, for all measurements, interactions, and so forth, which don't break it up, which don't break up the proton, you can think of the proton as pretty much a point particle. In that approximation, you pretend protons and neutrons are described by fundamental fields. Of course, what would happen if you hit the proton and neutron too hard is it'll break up. It'll sort of break up into quarks. If it breaks up into quarks, then you won't get away with the description in which the proton and neutron are described by simple fields. You'll have to remember that it's made of quarks. All right. This is when a proton and electron come together. When a proton and electron come together. Yeah. There's, of course, another vertex involving protons. And the other vertex involving protons also involves the photon. Here I just told you about protons, neutrons, and pimesons. But because protons are charged, there's another vertex which is proton-photon-proton. You can't have proton goes to neutron and a photon. That doesn't conserve charge. So proton goes to proton and a photon is another allowed process. And also electron goes to electron and photon is an allowed process. So a possible process that can take place when a proton encounters an electron is the proton can emit the photon and the electron absorbs the photon. We haven't talked about building up more complicated processes yet. We've been talking about building up more complicated processes. But you can see the pattern. You can build processes out of these elementary units. The elementary units are these vertices and you build up. Two different spacetime points. How would you write those branches? This process is not described. The point units that you start with are described by the Lagrangian. This corresponds to the product of the Lagrangian of two different points. So Lagrangian is some basic unit of interaction which tells you that a proton is absorbed and emitted at a point and the photon. That's the interaction term. That's side-dagger proton, side-proton, and A. Then there's the quadratic terms which move the photon along. Remember they were the things which move the photon along from point to point. They can act many, many times moving the photon from one place to another. And then the photon can be absorbed over here. We're clearly not deriving all of this in any sense from first principles. I'm just telling you that these basic elementary processes out of which you build everything are codified in Lagrangian. And Lagrangian always describes things which take place at a point or perhaps a pair of neighboring points. Two kinds of terms in Lagrangian. Here they are. This one here describes a thing at a point. Something comes in and something goes out from the same point. 
The other terms in Lagrangian involve derivatives. And they correspond to things at very close by neighboring points. Motion of a particle from one point to another. No action at a distance in quantum field theory had worse there or not at worse. It's a good thing. But you can move a particle from one point to a very, very infinitesimally close point or you can have processes taken place at a specific individual point. There are no things in Lagrangian which take a particle from one point and put it at another point. Those processes are built up at a more elementary process. In this case, something taking place at this point, something taking place at this point and then a large number of little processes moving the electron, the photon from one point to another. So Lagrangian only has terms of one of those at one point then. One point or a pair of neighboring points as derivatives. So Lagrangian describes that photon electron exchange. It just has terms for protons, electrons and photons. There's no term in Lagrangian which describes this whole process. This process is described by a large number of basic units. Large number of basic units is a process taking place at this point. That's side dagger protons, i proton, and emission of a photon. And then basic elementary processes which move the particle from one point to a neighboring point to a neighboring point. And then the absorption of the proton over the end. But all of these are in the Lagrangian that you write for this. Each individual process is in the Lagrangian, but you have to compound them together to make up the general process. Like compounding and multiplying? Yes. Lagrangian can act any number of times at different... Remember, the Lagrangian itself is made out of fields. Fields are made out of creation and annihilation operators. And the Lagrangian can occur at each point of space, next point of space, next point of space, and so forth, and build up whole processes. So the Lagrangian is not something which is a final answer. It describes the basic units and elements that compound themselves together by product. Exactly as you said. The form... So the Lagrangian also has time terms in it. Time derivative terms. Time derivative terms. And then we try and extend that to processes that are taking place over certain distances where we have under conditions where time is affected by relativity. Like that, can't you have problems? Can't you have what? Can't you have problems or are we just not seeing the relativity? In case of relativity. Yeah. What do you have? I'm not sure what you're asking. If you were doing your compounding of the Lagrangian and very close to the event horizon of black holes, did that still spread the process well? You really want to talk about black holes? No. Black holes have not well described at one field period. How far can you... how far can those infinite testing distances be? Well, you'll have to think about it. So when I said distance and time. Distance and time. I guess we're all going to try and think what are the fields into it. What kind of things are they? The fields are symbols which operate. They're operators which make processes happen. They're processes. The basic processes or basic elementary processes. They absorb a particle or they emit a particle. That's all they do. They absorb it, take it in at one point. They take it in at one point or spit it out from the same point. Multiplying fields together can bring in one particle and spit out another one. Here, let's put it this way. 
A single field by itself can absorb a particle or spit out a particle. The product of two fields can absorb a particle and spit out a particle. You understand the difference between or and and. In one case, a single power of the field either absorbs a particle or emits a particle. Two powers of the field absorb and emit a particle. And then the Lagrangian describes that mechanism for every point in space? It's just a code. It's just a code. It's just a code for the elementary processes which you have to compound together. Well, it does have time derivatives as well as space derivatives. It does. Okay, so you think in terms of space time. You think in terms of space time, x and t. And you think of the trajectory of a particle as being built up, or motion of a particle as being built up, out of many little steps. Now, of course, we have to take limits. We take limits by replacing differences by derivatives and so forth. But let's think about little differences rather than derivatives. We break up the trajectory into a lot of little pieces, and then a particle is absorbed at this point, re-emitted at this point. So it's jumped from one point of space time to another point of space time. And it's absorbed at this point again and re-emitted, and re-emitted at this point. In the process, it hops from point to point to point to point. Now, in the limit that we subdivide this trajectory into an infinitely fine series of steps here, the number of processes goes to infinity. All right, so there are limits involved. But the simplest way to think about it is to subdivide everything into little pieces and think of basic elementary units of motion from one point to a neighboring point. It's an effective way to think about it. In fact, quantum field theory really always does have to be defined by dividing up space time into lots of little cells. That's the way you really define it. You break it up into lots of little cells. Having broken it up into a lot of little cells... Yeah. I'm just going to bring it back to the original question. So when you put all this together, you can overlay these fields on top of each other to describe the processes, so that for any given point in time and space you can say what will happen, and you can make it as complex as you want. The question that I originally asked was, if you now combine them to describe a multi-step process, as people were asking about, as soon as that multi-step process involves motion of the particle, it's the quadratic term that provides the motion. So motion simply means annihilation at one point, creation at a neighboring point. Well, I was just talking about the quadratic term being the thing that moves the particle along. As they move apart, at really high energy, can there be other problems with things like time dilation and stuff like that that take place in the process? When there's more and more distance between them. All of that is rather automatic. If the Lagrangian is a scalar, everything will be Lorentz invariant. All of that kind of thing about time dilation and so forth will be automatic. You won't have to worry about it. But I'm not sure exactly what question you're asking, so I can't answer it. What is the connection between these operators, between the creation and annihilation operators in the field and the position and momentum operators, and the indeterminacy, the Heisenberg uncertainty?
The connection between creation and annihilation and Heisenberg uncertainty. Heisenberg uncertainty is another way of saying you either describe things in terms of position or momentum, not both. In terms of creation and annihilation operators, the creation operators and annihilation operators are either functions of position or functions of momentum. And the field operator, psi of x, is the creation operator for a particle at point x. It is made up out of field operators, let's say A, which are functions of momentum, times e to the i, blah, blah, blah, blah. So creation operators are either creation operators for a particle of momentum p, or creation operators for a particle of position x, but they are not creation operators for particles of position x and momentum p. We never have operators which create particles of momentum p at position x. Either they are functions of x or they are functions of p, and the relationship between the functions of x and the functions of p is simply Fourier transform. Fourier transform is the essence of the uncertainty principle. And what it says is that to build a function which is localized in position, you have to use a lot of momenta. To build a function which is localized in momenta, you are, well, it's going to be spread over x. But this is just exactly the same thing as it is an ordinary element to quantum mechanics. You either have something which is a function of x or a function of p, and nothing which is a function of x and p. That's all. You were describing how you subdivided the xt in space. Could you complete that? Could you just finish that explanation? Yeah, I mean it's... Okay, at some level it's no different than what you do in classical physics. In classical physics, how do you define derivative? You define derivative by dividing up space into little cells and then define differences. That's what a derivative is. Now, in classical physics you can really just take the limit rather smoothly and forget the fact that it originated from a very discrete picture. In quantum field theory it's more problematic. You really do have to define the theory from the beginning by dividing up space into little cells. And having divided it into little cells, in each cell the field operator is now not a function of continuous position. It's now a function of which one of these cells you're in. Derivatives become differences, and the quadratic terms in the Hamiltonian take a particle from one cell, remove it, and put it into the other cell. So that's how you move... that's why it says derivatives from your perspective, moving from place to place. Yeah. Right. So you define the theory by first dividing up space, placing your field operators, one or however many, in each cell, and then the basic quadratic interactions will absorb a particle from one cell and put it into the neighboring cell. Then in the end you take a limit where you take the cells to be infinitely small. Okay? So in terms of a discrete picture, a process might absorb an electron over here, the electron which happened to have come from a neighboring cell, and so forth, the electron came from a neighboring cell, absorb it over here, emit it over here, and emit a photon. Then the photon will hop from cell to cell to cell by the quadratic terms. Also the electrons will hop from cell to cell. The photon will hop from cell to cell to cell, where it might meet an electron. 
So there's just basically two kinds of things: hopping from one point to another, and interactions which absorb some particles and emit them from the same point. And the mass term here, this m squared phi squared, can be thought of as a kind of interaction where a particle comes into a cell and is emitted from the same cell, with a coefficient m squared. Is there a relation between the size of the cell and the Planck constant? No, no, no. In the end, you want to shrink the size of the cell to zero to very precisely define the theory. Quantum field theory is kind of the limit of a discrete theory in which motion is replaced by hopping and in which space is discrete. So the function of taking a product of the Lagrangian is basically a probabilistic type of thing, a contingent probability, that in order for this process to go through, you have to have something going from here to there to there, as opposed to... it's not a time thing. Because it's over space time. That's when you put in the cells, right? You put your joint probabilities and your branches in each cell, right? Yes, each cell, each cell, that's right. And then the process plays out over t and x. Well, the process is best thought of as a thing that happens in space time. Yeah, I mean you can... Is it interesting to think about what the initial state of such a system might be? Sure, very interesting. But that's determined by what the experimenter decides to prepare in the laboratory. And I think, just for illustrative purposes, what would some simple initial conditions look like? Sure, an electron comes out of a hot filament. And how would you represent that in this formula? At the position of the filament, the field operator operates, which creates an electron at the filament. So the initial and final conditions: the initial conditions are simply the start of the particles coming in. And that happens at whatever the sources of the particles are, the sources. The final conditions are just described by where the detectors are. So, in each case, creation operators or field operators located at the points of the filaments which create the hot electrons describe the emission of the electrons from those points, and the absorption of the electrons at the detectors is also described by field operators at the positions of those detectors. So to do field theory, it's not enough to write down your model of space time; you also have to put in the Lagrangian and where the emitters are. That's right. You describe... That's right. You start... The easy way to think about it is to start with the vacuum. Now, the initial state is not the vacuum. The initial state has some particles present, produced by the filaments or whatever the sources of the particles are. So those you represent by a bunch of creation operators or field operators at the positions of the sources: psi, psi, psi. That's the initial state. The final state is also built on the vacuum, with annihilation operators, I don't know, psi dagger, psi dagger, psi dagger, I guess, like that. And then stuff takes place in between. These are just the initial and final states. But what goes on in between is determined by this Lagrangian. What goes on in between is determined by the Lagrangian. So you might put in many factors of the Lagrangian here. Each factor of the Lagrangian might absorb a particle and emit a particle at a neighboring point. With enough powers of the Lagrangian here, you can take all of the particles and move them simultaneously from one point to another.
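As a cartoon of the structure just described (not the precise formula, which involves time ordering, factors of i, and the propagation terms; the labels for the source and detector points are purely illustrative), the amplitude has the shape

\[
\text{Amplitude} \;\sim\; \big\langle \mathrm{vac}\,\big|\,
\underbrace{\psi(x_{\mathrm{det},1})\,\psi(x_{\mathrm{det},2})\cdots}_{\text{detectors}}\;
\underbrace{\mathcal{L}(z_{1})\,\mathcal{L}(z_{2})\cdots}_{\text{what happens in between}}\;
\underbrace{\psi^{\dagger}(x_{\mathrm{src},1})\,\psi^{\dagger}(x_{\mathrm{src},2})\cdots}_{\text{sources}}
\,\big|\,\mathrm{vac}\big\rangle.
\]

Field operators at the source points create the initial particles out of the vacuum, many factors of the Lagrangian move and rearrange them, and field operators at the detector points remove them again.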
So the starting point would be some particles which were produced over here. That would be this. The final states might be particles absorbed over here. What happens in between is governed by the dynamics, which is just this Lagrangian repeatedly acting and taking particles from one point to another, creating and annihilating particles, and then at the end the detectors absorb them. In principle, an infinite number of Lagrangians. An infinite number. So are you in some sense integrating over those Lagrangians? Integrating over the positions of them, yeah. Is that what you mean? Integrating over the positions. I'm not trying to give you the Feynman rules. I'm not trying to explain exactly how you use these things. The main point for us is that the basic interactions are coded in simple expressions involving field operators. The reason we're going there is because that makes it very easy to think about symmetries. For example, this term in the Lagrangian has a symmetry. What is that symmetry? We've talked about it before. The symmetry is the symmetry of multiplying psi by e to the i times theta, where theta can be anything. Psi dagger at the same time gets multiplied by e to the minus i theta, and psi times psi dagger will be unaffected by this change in psi. So here's a symmetry. A symmetry of this Lagrangian is multiplying psi by a phase, multiplying psi dagger by the complex conjugate phase, and doing nothing to A. That will not change this Lagrangian. That becomes a symmetry. What is that symmetry associated with? Well, it's just associated with the statement that you have equal numbers of psis and psi daggers. If you had different numbers of psis and psi daggers, this would not be invariant. What is the symmetry that has to do with equal numbers of psis and psi daggers? Charge conservation. The point is that the basic utility of this for us will be to write down expressions where we can read off the symmetries, and from the symmetries read off the conservation laws. That's really what this is all about. The symmetries, the conservation laws, and the very, very basic primitive processes which take place at each point of space time. That's what the Lagrangian contains in it. What do you do with it? Well, you multiply it together many, many times, which is just the same as saying you repeat basic processes in any order that's allowed. Anything that can happen that's allowed to happen by conservation laws. And all of that is described by taking this Lagrangian and just hitting and hitting and hitting and hitting, so that you take the initial state and eventually evolve it into the final state. What is the output of this? The output of this is an amplitude, but that amplitude is governed by Feynman's rules, which we're not going to take up now in any case. We probably will at some point. But for tonight, all I really wanted to do was to show you how basic processes are coded by a Lagrangian. And we will use that to see how symmetries and conservation laws are connected with each other. Not tonight though. Do the eigenvalues and eigenvectors of these operators play any role in the theory? Sure. Sure. I mean, let's take the case of the electromagnetic field. The eigenvalues of the electric field, or of field operators in general, are the possible measurable values of the field if you measure it. You do measure electromagnetic fields. How do you measure an electric field? You measure an electric field by putting a charged particle into it and seeing how it accelerates.
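To record the symmetry mentioned a moment ago in symbols before moving on: it is the phase transformation, for any constant angle theta,

\[
\psi \;\to\; e^{\,i\theta}\,\psi, \qquad \psi^{\dagger} \;\to\; e^{-i\theta}\,\psi^{\dagger}, \qquad A \;\to\; A,
\]

under which a term like psi dagger psi A picks up a factor of e to the minus i theta times e to the i theta, which is 1, and so is unchanged. Any term with equal numbers of psis and psi daggers is invariant, and that invariance is what goes hand in hand with charge conservation.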
You can get different values. Those values are the eigenvalues of the electric field. It's kind of interesting. The electric and magnetic field operators, field operators in general, are built up out of creation and annihilation operators. Creation and annihilation operators don't commute with each other. That means in general that the field operators don't commute with each other. What's there not to commute? Phi has to commute with Phi because it's the same thing, right? What's the commutator between a thing and itself? Zero always. So the commutator of Phi with itself is always zero. Not so, because you can take the commutator of Phi at one point with Phi at another point, a neighboring or some other point. It could be a different point. It could be a neighboring point. Each one of those Phi's is made out of creation and annihilation operators. They're not the same operators, because they're evaluated at different positions of space or time. And in general, they will not commute. What is the meaning of this? The meaning of this is that in general you cannot simultaneously measure the field at two different positions. There are uncertainty relations in measuring fields, or time derivatives of fields, at two different positions of space time. So you asked me about, I think you asked me about that. Did you ask me about that? Yeah, you asked me about the field operator as a thing which can be measured. Well, yes, the field operators can be measured. The eigenvectors and eigenvalues play the usual role as in quantum mechanics. And the commutation relations tell you the limitations on being able to measure two things simultaneously. So all the standard rules of quantum mechanics apply to this system. In the case of electromagnetism, we learned that the electric and magnetic fields are measured in volts. They're measured in volts? In volts, okay. Volts per meter. Yeah, an electric field is measured in volts per meter. Is the photon field in volts also? Both are the same thing. Yeah, but it's a poor way to think about it. I mean, you don't want to think about volts and meters when you're thinking about... You're better off measuring it in terms of numbers of photons. It's not profitable to think of the electromagnetic field in volts and meters when you're thinking about microscopic physics. It's probably also not proper to think about quantum field theory when you're trying to wire your house. Yeah. Just to summarize today, could you say that the idea behind what you're saying is that the way to study all these things is basically by taking the Lagrangian as the building block and putting the pieces together in different ways? That's the general idea. Processes can repeat, one after the next, after the next. In other words, you can have a process followed by a process, followed by a process, followed by a process. All processes are built up out of basic elements. And the basic elements have one important feature: they are associated with points of space or neighboring points of space. They're pretty close together. So that's another way of saying that they're local. There's no action at a distance. The basic indivisible processes are local, and non-local processes, such as a photon going from one place to another, are built up out of these local processes. We have a creation of a photon, right? And then it moves across space, and you've got these Lagrangians that you think about.
Is there some simplification by compacting many of them? Can you come up with a transformation that kind of represents all of them? So basically you can break it down to three processes, the generation, the movement and the creation and the destruction. Roughly speaking, you simply take... This is not exact but it's roughly speaking. You take one plus, let's call it the interaction Lagrangian, L interaction at point X. And now you multiply them together at all points of space, X1, X2, X3 and X4 and so forth. What do you get then? Well, first of all, you just get one. That's not interesting. That doesn't do anything. And then you get all the possible terms in which every factor here is one, except one, except one factor. That will give you the sum of the Lagrangian over all space. Okay, let me show you what I mean. So you take one plus the Lagrangian at point X. Do the same thing at point X prime. One plus Lagrangian at point X prime. One plus the Lagrangian at point X double prime. And you do this for every point in space. Now every point in space means every point on this grid. Let's take just two terms. Let's just take two factors. What does it give you? It gives you one plus L at point X plus L at point X prime plus L at point X times L at point X prime. Right? What does this represent? Well, this represents nothing happening. This represents a basic, for example, a basic interaction at point X. This represents the same basic interaction taking place at point X prime. So this could be nothing. This could be particles absorbed in the mid-dit at point X. This could be particles absorbed in the mid-dit at point X prime. And what about this one? Right, so this could be something else happening over here. And this one would represent particle absorbed at point X, something emitted, and then... Right, given that first one happened, then the second one happened. Well, the first term here just represents one process and only one process takes place. The last term says that something is contingent on something else about having happened. I mean, both things happen one after the other. So it represents two particles being absorbed, particle, and then two particles being... You see what it means. And eventually, by the time you multiply all of these terms together, you get an infinite number of terms in which a basic unit can happen at every point of space once. And then an infinite number of terms where two points of space are involved. And then an infinite number of terms where three points of space are involved. Let's draw one where three points of space are involved. Two particles come in, a particle goes out, another particle goes out, another particle... That's only two points of space, space-time, and then another vertex over here. So that's two particles in, three particles out. I think what he was saying is that if you have motion terms... Oh, they're also in here. I'm just saying, since you have motion terms, one motion term of movement from a photon for a finite distance takes an infinite number of applications of the Lagrangian. Well, that's true, but we... So I'm just saying that, well, you kind of have equations of motion for that part of the Lagrangian and you can just trace out the motion or something. Motion of a particle from here to here will involve a bunch of terms... This term will appear in this big product. 
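The pattern being built here can be written compactly, with the caveat that this is schematic rather than the exact technical expression:

\[
\prod_{x}\Big(1 + \mathcal{L}_{\text{int}}(x)\Big)
\;=\; 1
\;+\; \sum_{x}\mathcal{L}_{\text{int}}(x)
\;+\; \sum_{\text{pairs } x,\,x'}\mathcal{L}_{\text{int}}(x)\,\mathcal{L}_{\text{int}}(x')
\;+\;\cdots,
\]

nothing happening, plus one elementary event somewhere, plus two elementary events at two points, and so on through every combination of points.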
Somewhere in this big product, there will be a term which involves L at one point, L at the neighboring point, L at the neighboring point, L at the neighboring point, and which will transport the particle from here to here. So somewhere in this big product, you'll find exactly the right thing to take you from here to here. Actually, by many routes, there will be many... Okay, first of all, there will be many, many such terms which can take the particle from here to here, depending on the route. The actual amplitude to go from here to here is the sum of them. All the possible ways you can get from here to here by multiplying out this long string of factors will give you a possible process whereby the particle can go from this point to this point. The total amplitude, the thing that you square to get the probability, will be the sum of all the ways it can go. So this would be consistent with the double slit, diffraction, interference? Yeah, exactly. So when you multiply out this big string of terms here, you will find terms corresponding to all possible routes from here to here. The amplitude will be the sum of all those ways of getting from here to here. And when you square it, there will be interference terms, in particular if there are double slits. Where does the one in that product come from? Why is it there? It's a trick to get all the combinations. Yeah, that's right. It's a trick to get all the combinations. It really comes from Feynman's path integral methods. But you can just think of it as a trick to write down all the combinations. And that's what this is really all about. Just all possible combinations of ways of going from one place to another are described by this product here. I'm cheating a little bit when I write this down. There's really a more technical expression, but this is close to the right thing. I mean, I would say that from here we can derive something like the principle of least action. Well, okay. In quantum mechanics, it's not the principle of, yeah, the principle of stationary action. Good. Yeah, that's right. No, it's not the principle of stationary action. In classical mechanics, the principle of stationary action picks out a particular trajectory. It picks out the trajectory of, let's say, minimum action. This is the principle of summing over trajectories, where the amplitude for summing over the trajectories is determined by the action. So that means going by way of Paris? Yes, including going by way of Paris on the way from my office to my house. So you just said it's an art form to figure out how to express that? No, no, no. It's an art form to figure out what the Lagrangian is. Once you know the Lagrangian, the rules are very definite. But doesn't the math itself have some art to it? It does. It does. It does. But only like any other mathematics has its art. You have to be clever. Well, what do we call it when you don't know what you're doing and you're groping and you're guessing? Art. Art, yeah. The art form is in finding tricks to figure out what the Lagrangian is from experimental data. Once you know it, then you use it in a very, very precise formalism to calculate probabilities. As the saying goes, research is what I'm doing when I don't know what I'm doing. Yeah. But, you know, somebody tells you, here is the Lagrangian. Here is the process I'm interested in. Two particles in, four particles out. Two particles come in from here, four particles go out over here.
Calculate for me the probability for that process to happen. Then quantum field theory is an extremely precise tool. There are no ambiguities in how you use it. The ambiguities are not in how you use it; the art form is in figuring out what that Lagrangian is from experimental data. And so, as you were saying, are there Lagrangians for every particular kind of interaction? The big book of Lagrangians? Okay. I think what you're asking me is how complicated the Lagrangian of the world is. You know, at least in physicists' minds, there is one Lagrangian that governs everything. Now, that may not really be true, but let's suppose that it is. And it contains a large number of fields. It contains all of the fields describing all of the particles and all of the interactions. And it may be the sum of many, many terms like this, describing lots and lots of different interactions, but it's one Lagrangian. It may be composed out of sums of pieces which individually look like recognizable pieces, but the whole thing is one big Lagrangian that governs all of nature. Now, this may be really false, but that's the idea: that there's one big Lagrangian with many terms in it that just governs everything. You get there one term at a time? Well, maybe a little better than one term at a time, but roughly speaking, yes. Every time we discover a new particle, we discover a new field that has to go into that Lagrangian. Every time we discover a new process, that's a new interaction in the Lagrangian. So you might say, well, this whole thing is a big, giant mess, and indeed it is. There are probably at least 100 different elementary particles, things that we call elementary particles, 500 different terms in the Lagrangian. In fact, it's so bad that, with the kinds of Lagrangians that people write down to describe the standard model, if you wanted to write down all the physics which is known, which means quantum electrodynamics, the strong interaction, the strong nuclear force, quantum chromodynamics, the weak interactions, gravity and everything else, the Lagrangian would fill not a book, I don't think, but it would fill a couple of closely spaced pages. Are these fields what determine the shape of the Calabi-Yau manifolds? Now you're going beyond basic quantum field theory. You're talking now about things which are outside the framework of standard quantum field theory. So you packed it all into one number, then, this coupling constant? The coupling constants are the coefficients in the Lagrangian. And they're determined by experiment. So for example, here's a coupling constant. The mass squared is also a kind of coupling constant. It's also a coefficient in the Lagrangian. Various coefficients: you can have phi to the fourth, phi to the sixth, phi to the 500th. Each one would be a coupling constant. So it's a mess. But once you know the Lagrangian, you know a lot. You deduce a lot of different kinds of processes out of it. Now, do you have to know the whole thing in order, for example, to study quantum electrodynamics? Well, if you wanted it with infinite precision, you would have to know how the electron interacts with all the particles, all the different kinds of particles, through all of the possible interactions. But in fact, quarks are not very important to the way electrons behave. Quarks are pretty much isolated from electrons. They don't mix too much with them. And so you can pretty much ignore quarks if you're only interested in electrons and photons at atomic energies.
You just don't have enough energy to sense the tiny, tiny quark structure. And so for practical purposes, you can often break off a piece of this Lagrangian and say all the other things in it are not terribly important to what you're calculating. Break off small pieces of the Lagrangian and calculate with it. But in principle, if you wanted infinitely precise things about photons and electrons, you would just need the whole thing. This is not a pretty picture. I mean, this is really not a pretty picture. It very much makes you feel that quantum field theory is some effective description coming from, like, hydrodynamics or any other effective description, which is coming from some more fundamental physics. And just what's the right term? Effectively describing things at some coarse-grain level. Quantum field theory is probably just a coarse-grain description of something more fundamental. And as long as that's the case, it can be very, very complicated or precise. I think the word you want is phenomenological. Yeah, yeah. I think that's the right word. The standard model of these gravity are also just like the efficiency of QE. What's officially called the standard model is our gravity. And then I was going to ask you. But you can add gravity into the standard model and talk about the standard model with gravity. If the Higgs... I'm going to ask you. We don't need that in this case, right? We need general relativistic. We're going to say relativistic. Oh, these were prime genes all relativistic. Right. Right. So, if we were to look at the Lagrangian's standard model, we wouldn't see anything if they were taking into account general relativistic. General relativity, this does not take into account. General relativity, special relativity it does. Yeah, this is the combination of quantum mechanics and special relativity. If all the particles are massless, except for the terms involving the Higgs, basically you have what? Just terms which are Higgs interactions that have like mass terms for the other particles? Yeah. Okay, so I'll give you... I think we talked about this before, but I'll tell you again. So we see that what the mass term is, is it's the coefficient of phi squared. Supposing we had another field that I'll call H. And supposing in Lagrangian I had, yeah, let's write it down some terms. H phi squared, now that's not quadratic, that's cubic. So what does it describe? It describes a phi in, a phi out, and an H being emitted. Okay. That's what this term does in Lagrangian. Now imagine there's more terms, something like V of H. Now V is potential energy. V is potential energy, it's the potential energy of the field H. Supposing V of H happens to be a function which looks, oh, let's say something like this. Here's H, and V of H is such that the minimum of the potential is not at H equals 0. The minimum of the potential energy defines the stable equilibrium points. The vacuum is a stable equilibrium point. So, and for this particular potential there might be two minima, and the real vacuum would be chosen to be one of these two minima. This minima has a non-zero value of H. So this vertex then becomes just a, this vacuum value of H here, just becomes a number in the vacuum and translates into the, into the, yeah. Yeah. One last trigger question. Is the standard model capital concept a first-partial creation and annihilation? Oh, absolutely. Yeah, oh, absolutely. Yeah, definitely. For more, please visit us at stanford.edu.
(December 1, 2009) Leonard Susskind discusses the equations of motion of fields containing particles and quantum field theory, and shows how basic processes are coded by a Lagrangian.
10.5446/15066 (DOI)
Okay, the question arose about the paradox of if two particles, both fermions, make up a boson, can we have a boson, two bosons, which are, let's say, in the same state, in the same state? Perhaps a more general question. The wave function of bosons, the wave function of bosons, is supposed to be symmetric under interchange of the two bosons. That means, for example, if you had two bosons and the wave function describing them was a function of, let's say, x and y, this doesn't mean the two coordinates x and y in the x and y direction. X and y are simply now standing for particle number one and particle number two. If they're bosons, then the rule is that that must equal psi of y and x. That's another way of saying you can't distinguish between the two particles. There's no difference between saying you have a two boson system in the state psi of x and y and saying you have two bosons in the state y or psi of y and x. It's interchange symmetric between them. Okay? And in that case, you can certainly have two particles in the same state. For example, a special case of this would be that psi is a product function. Psi of x and y is a product of, I don't want to use the label psi again because I've used psi to label two particle wave functions. But let's invent another thing to describe one particle wave functions. Let's just call it phi. And psi of x and y could be phi of x times phi of y. This would correspond to a boson with two bosons with wave function phi, with the same wave function. In other words, in the same quantum state. That's what this would mean, a two particle state. And each particle is in the state phi. This, of course, is obviously symmetric. It is the same as phi of y and x, sorry, phi of y times phi of x. Wave functions are not operators. They commute. Wave functions are just ordinary numbers. They're not matrices. They're just functions of x. And they satisfy all of the algebraic properties of perfectly ordinary functions. And so, what did I write? Yeah, I wrote the right thing. So yes, two bosons can be in the same state. And the wave function be symmetric. What about two fermions? The rule for two fermions, if we put them, if we have two fermions, is that the wave function has to change sine. Now, that's a very odd thing, but nevertheless, that is the mathematical rule, the abstract mathematical rule, for fermions. When you interchange them, the sine of the wave function changes. Now, okay, so that's the difference between them. Then, can you have two fermions in the same state? No. Because phi of x and y is not equal to minus phi of y, and you see what I mean. Phi of x times phi of y is not equal to phi of y and phi of x with a minus sign. All right, now, supposing I have a wave function for two particles, can I make, and it's not symmetric and it's not anti-symmetric, can I make something out of it which is symmetric or something which is anti-symmetric? So, let me show you how you do that. Supposing I had some wave function which was neither symmetric nor anti-symmetric, let's just call it phi of x and y. When you interchange x and y, it neither changes sine nor does it stay the same. So, it's neither symmetric nor anti-symmetric. All right, if you want to make something symmetric which is an appropriate wave function for two bosons, you just add psi of y and x. Now, this is obviously symmetric. If you interchange x and y, this one becomes this one, this one becomes this one, and nothing happens to that there. Now, this is a symmetric wave function. 
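In compact form, dropping the overall normalization, the symmetric combination just written and its fermion counterpart are

\[
\Psi_{\text{boson}}(x,y) \;=\; \psi(x,y) + \psi(y,x),
\qquad
\Psi_{\text{fermion}}(x,y) \;=\; \psi(x,y) - \psi(y,x),
\]

and the fermion case makes the exclusion statement explicit: if both particles sit in the same one-particle state, so that psi(x, y) = phi(x) phi(y), the antisymmetric combination vanishes identically.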
This is a possible wave function for bosons for any psi of x and y. And, if we're talking about fermions, we can do the same thing, except that we put a minus sign there. Now, what happens if we interchange x and y? This one goes to this one, this one goes to this one, but the whole thing will change sine. The whole thing will change sine if we interchange x and y. So, this is a wave function for fermions. This is called symmetrizing and anti-symmetrizing a wave function for two particles. Okay, now let's suppose we have four particles altogether, four fermions. Okay, four fermions, what are the rules for four fermions? And, for simplicity, let's suppose there are two different kinds of fermions. Let's say there is electrons and protons, and that we're talking about hydrogen atoms. Now, the rule, okay, so let's see what is the rule. We have a four-body wave function. Let's now let electrons be called x and protons be called y. So, the wave function is a function of two x's, let's call them x1 and x2, y1 and y2. It's a function of four variables. What are the rules? The rules are, first of all, that the wave function has to change sine if you interchange x1 and x2. That's the fact that the electrons are fermions. It must also change sine when you interchange y1 and y2. Incidentally, let's say, let's do this right. It must also change sine when you interchange y1 and y2. Yeah, let's, let's, that's the rule. Those are the only rules. Sine change when you interchange x1 and x2, that's because electrons are fermions, and a sine change when you interchange y1 and y2 because protons are fermions, okay? So, let's see what kind of wave functions we can build. Let's start with a hydrogen atom, a hydrogen atom located at the origin. A hydrogen atom located at the origin has a wave function which is a function of its electron and its proton. So, one now picks out a particular, one of the two hydrogen atoms. One now stands for one of the two hydrogen atoms, and this might be some wave functioning, wave function governing one of the hydrogen atoms over here. But supposing I want to shift its location, how would I shift its location? Well, it's easy to shift its location. You just shift, you just shift the origin of both x and y. I guess I don't really have to do it. Okay. I don't need to shift its origin. And then we have another pair, and the other pair is describing a hydrogen atom at some other location. Alright? Since it's a hydrogen atom at another location, it has a different wave function. The wave function may be the same except that it's been translated, but it is different, phi of x2 and y2. Now, in general, this is neither symmetric with respect to the x's or anti-symmetric with respect to the x's or for that matter with respect to the y's. So, it's not a good wave function for fermions. Supposing I want to make it anti-symmetric under interchange of the electrons, that's very easy. We just subtract off here psi of, I just interchange x1 and x2, leaving y1 and y2 alone. Alright? So, let's do that first. Minus psi of x2, y1, phi of x1, y2. It is now anti-symmetric with respect to the x's. If I interchange the two x's, the whole thing changes sign, but it's not anti-symmetric with respect to the y's. How do I make it anti-symmetric with respect to the y's? The answer is take the whole thing that I have here into change the two y's and put a minus sign in. Alright? So, we put in minus psi of x1 and y2, phi of x2, y2. That's exactly this with, sorry, y1. 
This and this are the same except that y2 and y1 have been interchanged and I put a minus sign in. And then I want to anti-symmetrize this one with respect to the exchange of the y's. So, that becomes plus, so I have to change the sign, psi of x2, y2, phi of x1, y1. Okay? So, this is now anti-symmetric with respect to both the x's and the y's. If you interchange in this expression x1 and x2, it'll go to this one and change sign. If you interchange x1 and x2 here, this one and so forth. The whole thing is fully anti-symmetric. Yeah? My problem is if you take those two hydrogen atoms and put them closer and closer, there's nothing preventing you from doing that because they are bosons, right? There may be forces between them. The Pauli principle does create a kind of force between the atoms. It will make it essentially impossible to put two atoms right on top of each other, but you'll simply attribute that to a force. But will you be able to put two atoms into the same momentum state? Momentum states are spread out all over the place. They are spread out in space. So, when you put two atoms into the same momentum state, you're not ramming them in on top of each other and it doesn't cost a lot of energy. So, yeah, the question of whether you can stick two things on top of each other is a different question from whether they're fermions or bosons. It can even happen that billiard balls can be bosons. You can't stick them on top of each other. They'll have hard cores. Nevertheless, they can be bosons. You can't put them at the same point, it just costs too much energy, but you can put them into the same momentum state. But if your wave functions are eigenfunctions of momentum, then couldn't you have the two hydrogen atoms in the same eigenstate of momentum, and then you would have two fermions in the same state? No, no, no. Just take any wave function like this. It's anti-symmetric with respect to exchanges of the x's. It's anti-symmetric with respect to exchanges of the y's. What about if I interchange one and two? One and two is interchanging the two atoms. What happens then? It's symmetric. It's symmetric. Yeah. So it's symmetric with respect to interchange of the two atoms. And why? Because when you interchange the x's, you get a minus sign. When you interchange the y's, you get a minus sign. Minus times minus is plus. So let's see, yeah, if I interchange one and two, x1, y1 will become x2, y2 with the same sign. x2, y2 will become x1, y1 again with the same sign. So these two will transform into each other, by simultaneously exchanging everything with a one with everything with a two. And that's exchanging the two atoms. Likewise, here we have psi of x2, y1, and here we have psi of x1, y2. Both of them have minus signs. So these will go into each other. So the whole thing is symmetric with respect to the interchange of atom one and atom two. This whole story here is independent of the question of energies. It may be impossible for other reasons to put two things on top of each other in space, the reason being that there may be very strong repulsive forces. But as a good example, yeah, we just wrote down an appropriate wave function for two atoms, for two bosonic atoms. No, these may or may not be in momentum space. They may be anything. They may be anything. Well, I mean, are we saying that if we have two bosonic atoms that that's what their state will be? Yes. Okay.
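For the record, here is the blackboard construction just described, written out in symbols; this is an editorial summary of what was said, using psi and phi for the two single-atom wave functions as above.

```latex
% Two identical particles: symmetrize for bosons, antisymmetrize for fermions.
\Psi_{\text{bose}}(x,y)  = \psi(x,y) + \psi(y,x), \qquad
\Psi_{\text{fermi}}(x,y) = \psi(x,y) - \psi(y,x).

% Two hydrogen atoms: electrons x_1, x_2 and protons y_1, y_2,
% one atom in state psi, the other in state phi.
% Antisymmetrize in the x's and then in the y's:
\Psi = \psi(x_1,y_1)\,\phi(x_2,y_2) - \psi(x_2,y_1)\,\phi(x_1,y_2)
     - \psi(x_1,y_2)\,\phi(x_2,y_1) + \psi(x_2,y_2)\,\phi(x_1,y_1).

% x_1 <-> x_2 or y_1 <-> y_2 flips the sign; swapping whole atoms,
% (x_1,y_1) <-> (x_2,y_2), flips it twice, so the two atoms behave as bosons.
```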
Yes. If we have a bosonic, if we have an atom with a wave function psi of x1 and y1, and another atom, if we didn't know anything about this atom over here, we might write down a wave function for the electron and proton and the other atom. And now we want to write down a wave function which is appropriate for the two atom system. The wave function for the two atom system has to be anti-symmetric when you interchange any x's, anti-symmetric when you change any y. And so I just basically use this rule, except I use it twice. Once to exchange the x's, that gives me the top, and then take the whole thing and interchange the y's and change the sign. The result is quite clearly, if you look at it carefully, interchange symmetric under the exchange of the two atoms, one and two, atom one and atom two. And that, at the end of the day, at the end of the day, that is the rule that a bosonic atom and another bosonic atom, when you interchange them, the wave function has to have the same sign. Okay. Now what kind of states can you put two, yeah, momentum states you can put two atoms into in the same wave function that doesn't cost a great deal of energy because they're hardly ever on top of each other. But it is true that if you try to stick two atoms on top of each other, it's very hard to do because it costs very, very costly in energy. Let's see. What would happen, in fact, what would happen, okay, let's see what would happen if we made both atoms with the same wave function, not one atom with wave function phi and one atom with wave function psi, but two with the same, what happens then? Let's see. Okay, so let's, okay, so there are two atoms with the same wave function. We have to subtract, all right, so this is this, psi of x1, y2, psi, psi. Is that zero or not? I don't think so. It's not zero, is it? Is it zero? x1, y1, where else do you see? Yeah, yeah, yeah. No, no, no, it's not zero. No, this one and this one are the same, okay, so it's just twice this, two doesn't, two is irrelevant. We don't care about a factor of two and twice this, and this is not zero, okay, this is two atoms in the same state, it's not zero, okay. Some magic is going on here, you can't put two electrons in the same state, meaning to say the electron wave function has to be anti-symmetric, the proton wave function has to be anti-symmetric, both are true here, but yet the two atoms are in the same state, so ponder that a little bit, write down that wave function and look at it and, of course, the point is, among other things, that in the atom, the electron is not in a well-defined position, it has a probability of being many places in the atom, you put two atoms in the same place, it's quite true, the two electrons can't be in the same place, that's true, two electrons cannot be in the same place. 
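The step just done on the board, with both atoms given the same wave function, reads as follows (again an editorial transcription of the argument into symbols, not new material):

```latex
% Put phi = psi, i.e. both atoms in the same single-atom state:
\Psi = \psi(x_1,y_1)\psi(x_2,y_2) - \psi(x_2,y_1)\psi(x_1,y_2)
     - \psi(x_1,y_2)\psi(x_2,y_1) + \psi(x_2,y_2)\psi(x_1,y_1)
     = 2\,\big[\psi(x_1,y_1)\psi(x_2,y_2) - \psi(x_2,y_1)\psi(x_1,y_2)\big].

% The factor of 2 is irrelevant, and the bracket is not identically zero:
% two of these composite objects can sit in the same state.
```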
Let's see, what would happen, here's our wave function for the two atoms, perfectly good, anti-symmetric and so forth, now supposing we tried to stick the two electrons into the same place, that would mean setting x1 equal to x2, then it would be zero, alright, so although the two atoms are in the same state, it's still true that in that state, if you try to find the two electrons in the same place, let's just see what would that be, the two electrons in the same place would be psi of x1, y1, psi of x1, y2, I've just set the two x's equal to each other, minus psi of x1, y1, psi of x1, y2, so I've forced the two electrons to be in the same place and guess what, I think they cancel, yes they do, okay, so as I said, ponder a little bit and you'll get familiar with it and you'll see how it can possibly be that two fermions can be a boson. Does that final mean that whereas you can exchange two atoms, two hydrogen atoms in different states, you can't... Before I set x1 equal to x2 here, these two atoms were in the same state, the atoms were in the same state, but the electrons and the protons were never in the same state. When you did this operation of anti-symmetrizing, you basically projected out the pieces of the wave function where the electrons were at the same point and where the protons were at the same point, yeah, so nothing was left of that piece of the wave function. It's still zero if you go back to feet. If you go back to what? To, I think, feet, for the atoms in different states, but you put the electrons in the same state. Yeah, yeah, just any time you try to put electrons in the same place or in the same momentum, it's going to be zero, right? Okay, let's come back to spin a little bit. Spin is more fun than we've dealt with so far. The reason I'm spending time with spin is because the mathematics of spin is what you really have to know to know such things as the mathematics of isospin and the mathematics of color and the mathematics of all of the symmetries of particle physics. The mathematics is the same, the physics as was pointed out here, the analogies between these different kinds of conserved quantities, formal mathematical analogies, but nevertheless, let's study the mathematics of spin a little more, a little bit more about the Dirac equation, just a bit, and then I want to introduce the concept of isotopic spin, or isospin, which is a concept which I think dates back to the 30s. Okay, so let's start with half spin. The electron is a half spin particle, so I'll call a particle an electron, I could call it a proton, I could call it a muon, anyone of the half spin particles, but let's just call it an electron. It has two possible states, if the two possible values for the angular momentum along the z-axis. Along the z-axis we've called L sub z, we've called that M. Not quite, not quite, not quite, not quite. There's an H bar in there, right? But from now on we'll set the H bar equal to, you know what. Okay, so for the electron, this is not the full angular momentum incidentally of the electron. In fact, I'm using it for a really weird notation. Usually L is used for orbital angular momentum and S is used for spin. So let's revert to that. S for spin. S for spin, but the mathematics of it is exactly the same as the mathematics of the L's. In fact, we could write it down over here. Let's write it down over here. The mathematics of angular momentum, but now I'm calling it S for spin instead of L. I don't know what L stands for. It stands for angular momentum. 
All right, so what was it called? The commutation relations we wrote down last time were Lx with Ly equals i h bar Lz. That now becomes Sx with Sy is equal to i h bar Sz, but I'll set h bar equal to 1, so i Sz, and cycle around, you know what to do next. Sy with Sz equals i Sx and so forth. All right? Cyclic permutations. All right. Oh, one other thing, we have S plus and minus, just to remind you, S plus and minus is just Sx plus or minus i Sy. These operators raise the value of the z component or lower it, depending on whether there's a plus sign or minus sign, and so they're the raising and lowering operators for spin angular momentum. All I've done is change notation from L to S. Intrinsic spin? Intrinsic spin. It doesn't matter, it could be any spin. I mean, it could be a spin which is due to, you know, a rotating basketball, but let's say intrinsic spin. Intrinsic spin, yeah. Okay, let's focus on half spin. Half spin is the case where Sz or M can take on the value minus a half and plus a half, and that's all. That's a two-state system. Incidentally, I'm also forgetting completely everything else about the electron. We're forgetting, for example, about where the electron is, what its momentum is, and all that sort of stuff. Just forget it for the moment, and we're just concentrating on the spin. All right, so concentrating on the spin, the quantum mechanics is the quantum mechanics of a system with two orthogonal states, and only two orthogonal states. It doesn't matter where it came from in quantum mechanics. The space of states is just the space of states of a two component or a two-state system, and you can always write it as a column vector, let's call it alpha and beta, where alpha is the amplitude that the spin along the z-axis is up, and beta is the amplitude that it's down. So you can always write the state of the system, the ket vector, let's call it psi, which is alpha up plus beta down along the z-axis. You can just symbolically represent it by constructing a little column vector and putting an alpha there and a beta there. Alpha star alpha is the probability that the electron is up, beta star beta is the probability that it's down, and that's it. In the same notation, operators become matrices. Let's see what we can learn about the matrices. These components of the spin have to be two by two matrices. Why two by two? Because the space of states is two-dimensional, so they have to be two by two matrices, and they have to satisfy this algebra. That's it. If you can find two by two matrices which satisfy this algebra, oh, one other thing, we might want Sz to be diagonal. Yeah? Did you just say the dimensions of space are two? No, no, no, not the dimensions of space. The dimensions of spin space. Spin space. Spin space. Yeah, not the dimensions of space. No, what I said to do is to forget about space. Forget for the moment the position of the electron, forget its momentum. We're thinking now of an electron that somebody has nailed down to the wall so that it can't move around, and all you can do is measure its spin. It has two distinct states if you measure the z-component of its spin. That's it. Only two states. And so let's label them up and down. Any quantum state of the spin is a linear superposition of them with complex coefficients, and the probabilities are the probability for up is alpha star alpha. The probability for down is beta star beta.
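For reference, the algebra just stated, with h bar set to one (a compact editorial restatement of what was said; the raising and lowering action is the one described above):

```latex
[S_x, S_y] = i\,S_z,\qquad [S_y, S_z] = i\,S_x,\qquad [S_z, S_x] = i\,S_y,
\qquad S_\pm = S_x \pm i\,S_y .
% S_+ raises the z-component m by one unit, S_- lowers it by one unit.
```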
It's just a change of notation, a common change of notation to replace this clumsy thing here by just writing a little column with an alpha and a beta. And remembering that the lower component stands for down, the upper component stands for up. And the same way, operators become two by two matrices, and the two by two matrices just act on these two component vectors. What did you say? Did you say you wanted sz to be diagonal? No, I said I would choose sz to be diagonal. sz. It doesn't matter. You just search around and see if you can find matrices, two by two matrices, which satisfy this rule. Yes, it's an operator. It's the angular momentum. It's an observable. It is an observable. It's an operator, and it becomes a matrix. All operators become matrices that act on vectors to give other vectors. Same thing we've done over in the past. All right, so let's see if we can find. Well, of course we can. Otherwise, we wouldn't be doing this. Three matrices which satisfy these commutation relations. xyz. So two by two matrices. Two by two matrices. All right, they are not unique. They are not unique, and the choice of them really corresponds to a choice of orientation of axes. But there is a choice of them which is particularly useful for us. They're all equivalent, incidentally. As I say, they're related just by rotation of the spatial axes, real space axes. And it is not important what choice you use. You'll get the same answers, but I will choose the following choice. And you can check that these matrices satisfy sx is equal to 0, 1, 1, 0. Sy is equal to minus i i 0 0, Sz is equal to 1 0 0 minus 1. You will never find three matrices which have these rules which, no, sorry, one half of this. One half of that. One half of that. OK, check it. Just go through it. I will not do it on the blackboard. It takes two minutes to do, well, maybe three, three minutes to do. And you'll find that these matrices do satisfy these commutation relations here. They are the Pauli spin matrices. The Pauli matrices are these things without the one half in front of them. But the real angular momentum matrices which satisfy these rules here are half of them. Without the half, they're called Pauli matrices. They're just called spin matrices. I think they're properly called spin matrices with the half. So this is a representation. This is a representation of the angular momentum matrices, a particular two-component representation of the half-spin angular momentum matrices. OK, let's now see if we can find the states. That means the states means the column vectors which correspond to sigma z being up and sigma z being down, all right? Or s z being up and s z being down. Those are the two possible orthogonal states, eigenstates of s z. All right, this is very easy. We just say, what does it mean to say a state whose s z is plus or minus? It means an eigenvector of s z with eigenvalue plus one or minus one. We expect, incidentally, that the components of spin have value plus and minus a half, all right? Plus and minus a half. But that means that the s's should have value plus one and minus one. Of course, that's true. These diagonal elements here are the eigenvalues of s. So let's just be very pedantic. Being very pedantic, we're looking for vectors, let's call them alpha beta, which have the property minus one, that this is either equal to plus or minus, let's say plus first, alpha beta. This would be an eigenvalue in which sigma z equals plus one. What about sigma z equals minus one? 
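If you would rather do that two- or three-minute check numerically than by hand, here is a small sketch (an editorial addition, not part of the lecture) using numpy:

```python
import numpy as np

# Pauli matrices; the spin-1/2 matrices are half of these (h bar = 1).
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
Sx, Sy, Sz = sigma_x / 2, sigma_y / 2, sigma_z / 2

def comm(a, b):
    return a @ b - b @ a

# The angular momentum algebra [Sx, Sy] = i Sz, and cyclic.
assert np.allclose(comm(Sx, Sy), 1j * Sz)
assert np.allclose(comm(Sy, Sz), 1j * Sx)
assert np.allclose(comm(Sz, Sx), 1j * Sy)

# The raising operator S+ = Sx + i Sy takes the down state to the up state.
S_plus = Sx + 1j * Sy
down = np.array([0, 1], dtype=complex)
print(S_plus @ down)  # -> [1.+0.j 0.+0.j], i.e. the up state
```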
That would be one zero zero minus one alpha beta equals minus alpha beta. That's the meaning of an eigenvalue and an eigenvector. If we find solutions of these, the solutions will correspond to the states in which the spin is up and the spin is down. Let's just check. Let's check that out. So it's completely obvious. Let's just work it out. The top entry here by matrix multiplication is one times alpha plus zero times beta. So just multiplying out the matrices, this is going to be one times alpha plus zero times beta in the top and zero times alpha and minus beta in the bottom. Well, obviously this is not equal to this unless what? Unless beta is zero. If beta is zero, then we've found a solution to the eigenvalue or the eigenvector problem which corresponds to an eigenvalue plus one and surprise, surprise, it corresponds to a vector which only has an entry in the upper place. The upper place stood for spin up. What shall I choose alpha to be? I can choose it to be one. Why one? Because I want the sums of the probabilities to be one. So alpha star alpha plus beta star beta should be one and a convenient choice is just to choose alpha equal to one. Now what if I, let's put back what I erased. What I erased is that this thing times alpha beta was alpha minus beta. Supposing I want to get a minus sign, then what do I do? Choose alpha equal to zero. If alpha is equal to zero, this is true. And that corresponds to the vector zero one. Again no big surprise, it corresponds to the state in which the electron is definitely down and has no probability of being up. Okay, let's go a little bit farther. What about, we've done this many times in these series of classes but just spend a minute or two doing it again. What about wave functions, state vectors, whatever we want to call them, ket vectors, corresponding to the possibility that the x component of spin, we're now going to do a different experiment. We're going to measure instead of the z component of the spin, we're going to measure the x component of the spin. Again the two possibilities are plus a half and minus a half. What are the eigenvectors which correspond to the two possible measurements for that case? We're measuring sigma x and not the, or Sx and not the, in that case we want to ask zero one, one zero, alpha beta equals alpha beta. That's the eigenvector with eigenvalue plus one and the eigenvector with eigenvalue minus one will have to satisfy one one if such things exist. Alpha beta equal minus alpha beta. These are the eigenvalue equations for plus and minus. Here I've put in sigma x instead of sigma z, sigma x. Okay, so what is the top equation, let's do the top one first. What does the top equation say? Alpha equals beta, right? Let's be pedantic again. Zero times alpha, one times beta. So working out the matrix product here or the matrix times vector, it gives us beta alpha. This matrix just interchanges top and bottom. One times alpha, sorry, zero times alpha, one times beta. Same thing on the bottom row. And so is there a solution of this equation? Well, yes, all it says is beta equals alpha. In fact, the bottom equation says alpha equals beta, which is the same as beta equals alpha. So we solve the equation by setting alpha equal to beta. A convenient choice, a convenient choice, they're all equivalent to each other, but a convenient choice is that we choose one over square root of two, one over square root of two, and y one over square root of two. So again, so that the total probability adds up to one. 
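Collecting what has been found so far (an editorial summary; the one over square root of two is just the convention that makes the probabilities add to one):

```latex
\sigma_z = \begin{pmatrix}1&0\\0&-1\end{pmatrix}:\quad
\text{up} = \begin{pmatrix}1\\0\end{pmatrix},\;
\text{down} = \begin{pmatrix}0\\1\end{pmatrix};
\qquad
\sigma_x = \begin{pmatrix}0&1\\1&0\end{pmatrix}:\quad
\frac{1}{\sqrt2}\begin{pmatrix}1\\ \pm 1\end{pmatrix}
\;\text{with eigenvalue}\;\pm 1 .
```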
If we had the one-half over here, would we be looking for eigenvalue one-half? Yes, yes, yes, yes. Yes, if we had included the one-half, we would be looking for eigenvalue one-half. So to look for eigenvalues of Sx, we just look for twice the eigenvalue for sigma. Good point. Okay. All right, let's see what the other possibility is. The other possibility is 0, 1, 1, 0 alpha beta, which happens to be beta alpha. That's by multiplication here. That should now equal minus alpha beta. That's the other eigenvalue. Take this away for a minute. The thing in the middle here, this just becomes the eigenvalue equation or the eigenvector equation for an eigenvector with eigenvalue minus one. And what does that require? That says beta equals minus alpha, and it also says alpha equals minus beta. If beta equals minus alpha, then alpha equals minus beta. And so all we have to do is put into the entries here, sorry, not this, but alpha beta here is equal to one over square root of two minus one over square root of two. All right, so what does this say? It says that if we take a linear superposition of states where the electron is up along the z-axis and down along the z-axis, we superpose those states in the quantum linear superposition, we make an electron which is, let's use the word right word, which is polarized along the positive x-axis. If we make the same kind of linear combination with a minus sign, then we get one which is polarized along the minus x-axis. All right, so we've now accounted for electrons which are up along the z-axis, down along the z-axis, up along the x-axis, down along the x-axis, and finally there's the issue of the y-electrons polarized along the y-axis. I'm not going to work it out. We could just minus i, minus i, i, alpha beta. You have to solve this one and this one equals minus. The two solutions are easy, very easy to do. The two solutions are alpha beta equals one i. That's for the plus sign. I think that's for the plus sign. Let's see. I think that's for the plus sign and the other one is one minus i. One minus i. Yeah, sorry, excuse me, one over square root of two again. Notice that there's more information in these complex coefficients than just the fact that the probability of the z-components of spin is a half a half. This has probability a half a half. So did the previous case where the electron was along the x-axis. The fact that there's an i here, something new, it tells us that the electron is along the y-axis. All right, so this way. So now we've accounted for electrons which are polarized along all six directions. What about a general direction? An electron which is whose spin is pointing along another axis. Well for that, I'm not going to go through it now. You take linear combinations of these. We won't do it now. We'll do it another time. It's not something I intended to do this evening. All right, so that's quantum mechanics of a- Are you guessing that the minus sign goes on the other eigenvalue? I may have the sign wrong. Do I? Well I'm thinking the minus i over square root two goes with the first one because of the minus i in the- You may be right. I think you may be right. I think you're probably right. So let's see. So this one should be minus then and this one should be plus. I think that may be right. Do I have it in my notes? Well, I don't have it in my notes. All right, that's been a half. 
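Since the sign assignment gets second-guessed here, it takes one line to settle it (an editorial check, not in the lecturer's notes): act with sigma y on each candidate.

```latex
\sigma_y = \begin{pmatrix}0&-i\\ i&0\end{pmatrix},\qquad
\sigma_y\begin{pmatrix}1\\ i\end{pmatrix}
 = \begin{pmatrix}-i\cdot i\\ i\cdot 1\end{pmatrix}
 = +\begin{pmatrix}1\\ i\end{pmatrix},
\qquad
\sigma_y\begin{pmatrix}1\\ -i\end{pmatrix}
 = \begin{pmatrix}(-i)(-i)\\ i\end{pmatrix}
 = -\begin{pmatrix}1\\ -i\end{pmatrix}.
```

So, with the one over square root of two normalization, (1, i) is the eigenvector with eigenvalue plus one, polarized along the plus y-axis, and (1, minus i) goes with minus one; the original assignment on the board was the right one.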
So the whole theory of spin a half are all wave functions or all state vectors corresponding to all possible directions of spin are linear combinations of the two states in which the spin is up along the z-axis or down along the z-axis. No, I take it back. Do you take it back? Yeah, b equals ia. Okay. And it's plus, huh? Minus. I never remember. What about spin one? We've done this before. I mean, we've done this a lot of times before and so I've made fast work of it. But now let's do spin one. Spin one is also interesting. Again there are many, many choices of the matrices and they're all equivalent, physically equivalent, representing the angular momentum. But I'm going to show you one. All right, spin one, the z-component of angular momentum has three states. Spin one is the integer angular momentum. It corresponds to bosons and it's the case where the angular momentum can be zero, the z-component can be zero, one, or minus one. So there's three possibilities. Evidently, the space of states is three-dimensional now, not two-dimensional, three-dimensional. Again we want to find spin matrices which satisfy three by three matrices which satisfy exactly the same algebra. The algebra means the set of commutation relations, Sx with Sy equals Isz and so forth. Do there exist, I've left out the h-bar because I've said h-bar equal to one. Do there exist three by three matrices which satisfy this? The better because we sort of worked out the existence of a three-state system by assuming these commutation relations. Yes? I will just show you a representation and you can work it out and see if you can prove the commutation relations. The three by three matrices, let's say I think it's S, sorry, Sz, not z but Sz, spin along the z direction is equal to I times 010 minus 1,000, 0,000. It sort of looks sort of familiar but not familiar. It's three by three but it kind of looks like Sy over there but it doesn't have the factor of two. It's different. S, let's see what's next, Sy is equal to I, 0,010, 0,000 minus 1,000. Notice that this one has zeros in the third row and the third column. This one has zeros in the second row and the second column. And Sx has zeros in the first row and the first column and it's 0,0,0, let's see, Sx, 0,010 minus 1,0. There's a real symmetry between them. Each one has a 1 and a minus 1. They're all anti-symmetric. They're all anti-symmetric, each one has a 1 and a minus 1 and it has zeros all along the row and diagonal that match the letter here. If z stands for 3, then the third row and third column are zero. If y stands for 2, the second row and second column are zero and so forth. So they're easy to remember. The only trouble I ever have is remembering the signs. And again, you check those by checking the commutation relations. And they're not completely unique. But let's see if we can find the eigenstates of Sz. This time Sz is not diagonal. I have not chosen Sz to be diagonal. But nevertheless, we can certainly work out what the eigenstates are. They're not too hard to guess. So we want to solve Sz on, well, we want to take this matrix and now multiply it not by alpha and beta, but by alpha, beta, and gamma. How many, what should the eigenvalues be? 1,0 and minus 1. Those are the three. Those are the three states of z component. Let's try for 1 first. I hope I haven't written down. Yeah, good news I do. No. Yeah. Yeah, it's just, incidentally, this, this, this, this, this, this, this, this, this, this, this corner of it over here is very similar to the Sy over here. 
So it's going to be similar. I think it's 1, minus i, and 0. So here's an eigenvector. I think this is a true relationship. You can check it later. Here is an eigenvector with eigenvalue plus 1. It corresponds to the state where Sz, or where m is equal to plus 1. I didn't give it to you Sz in the simplest possible form. I gave it to you in a very symmetric form where x and y and z look very similar to each other, but nevertheless, the right way to find the state which corresponds to m equals plus 1 is just to find the eigenvector with eigenvalue plus 1. What about the eigenvalue minus 1? That turns out to be this one, minus. And what about the eigenvalue 0? Oh, incidentally, you should put a 1 over square root of 2 if you want them to add up to, right. OK. What about the eigenvalue 0? Anybody see what the eigenvector with eigenvalue 0 is? 0, 0, 1. This one is obvious. Get the top row, multiply it this. You get 0 because there's nothing to match this 1. Likewise here, this is just plain 0. Big fat 0. OK. What do these things mean? These mean that the three states, m equal to plus 1, m equal to minus 1, and m equal to 0, correspond to the vectors 1 minus i, 0, 1i, 0, 0, 0, 1. These you might want to put a square root of 2 if you like to make that probability add up to 1. All right. So again, we really do find, and you'll find there were no other eigenvectors, no other eigenvalues. That's it. And these are the mathematical representations of the very abstract things we did last time. One of the marvelous things is that these square roots of 2 really do turn up in experimental predictions, and we're going to do a couple of them, but not with ordinary spin, but with isotopic spin. I'm going to show you how those square roots of 2 turn up in various experimental quantities, not now. Can you read the second column? Those are both 1 over the square root of 2. No, i over the square root of 2. That's i over the square root of 2, and the first one is 1 and i over the square root of 2. 1 over the square root of 2, minus i over the square root of 2 and 0, 1 over root 2, i over root 2 and 0. This is, I think, m equals 1. This is m equals minus 1, and this is m equals 0. Now as I said, there are other representations. There are other triplets of matrices which satisfy the same algebra, but they are, in fact, physically completely equivalent. And I'm not going to go into the reasons for that now. These are a good set of matrices to work with. They're fairly elegant, and we'll use them more than once. OK, another question. Yeah. Supposing we now return to the real electron which can move around in space. It can move around in space so it has a position or a momentum and a spin. Incidentally spin and position are things which can simultaneously be measured. Spin and momentum are things which can simultaneously be measured. What are the things which can't simultaneously be measured? Well, position and momentum, but also the different components of the spin. They don't commute with each other, but spin and momentum do commute with each other. All right, how do we represent the wave function or the state vector of a electron given the fact that it also has a position? It has a position and a spin. And the answer is we turn these two component, they're called spinners. They're called spinners. This is what a spinner is. The right hand side, a two component object like this, which just represents up spin, down spin, is called a spinner. We just turn the spinner components into functions of position. 
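The three by three matrices and the eigenvectors just quoted can be checked the same way; again this is an editorial sketch, written with exactly the matrices given above:

```python
import numpy as np

# The spin-1 matrices exactly as written on the board (h bar = 1).
Sz = 1j * np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 0]])
Sy = 1j * np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]])
Sx = 1j * np.array([[0, 0, 0], [0, 0, 1], [0, -1, 0]])

def comm(a, b):
    return a @ b - b @ a

# Same algebra as for spin 1/2: [Sx, Sy] = i Sz, and cyclic.
assert np.allclose(comm(Sx, Sy), 1j * Sz)
assert np.allclose(comm(Sy, Sz), 1j * Sx)
assert np.allclose(comm(Sz, Sx), 1j * Sy)

# The quoted eigenvectors of Sz for m = +1, -1, 0.
m_plus = np.array([1, -1j, 0]) / np.sqrt(2)
m_minus = np.array([1, 1j, 0]) / np.sqrt(2)
m_zero = np.array([0, 0, 1])
assert np.allclose(Sz @ m_plus, m_plus)        # eigenvalue +1
assert np.allclose(Sz @ m_minus, -m_minus)     # eigenvalue -1
assert np.allclose(Sz @ m_zero, 0 * m_zero)    # eigenvalue  0
print("spin-1 algebra and eigenvectors check out")
```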
So let's imagine that now. The electron is free to move around. The full state of the electron is described not just by a simple function of x, but by two functions of x. Let's call it psi up and psi down of x. And if we like, we can just make a symbolic notation where we put them in a column. What's the meaning of this? The meaning of this is the following. But you don't just ask what's the probability that the electron is up or the electron is down. You ask a more refined question. The more refined question is if the electron is found at position x, what is the probability that it's up? That may depend on x. So the probability that the electron is up might depend on where you look. The way of saying it is classically, just corresponds to the classical statement, that the spin of the electron could vary from place to place. Quantum mechanically, what varies is the probability to find it up, the probability to find it down. And so at every point in space, each point in space, you have a spinner. And the spinner tells you at that point in space, what's the probability the electron is up? Well, it's psi star psi up of x. That's the probability that if you look at point x and discover an electron there and then measure the spin, the z component of the spin, this is the probability it will be up. And psi star down of x, psi down of x is the probability that it's down. But at each point x, you have all of this apparatus here in exactly this form. And that's the whole story of what an electron is. An electron is a particle described by a wave function which has two components, the two components being the amplitude that the electron is up or down. What about a spin one particle? Exactly the same. A spin one, well not exactly the same, almost exactly the same. But the spin one particle, and I'm not going to call it psi, let's just give it another name, phi. Phi is a more common name for bosons. And spin one particle are always bosons. So it becomes phi one of x, phi two of x, phi three of x. Where at each point of space, at each point of space, this phi one, phi two, and phi three replace alpha, beta, and gamma from before. Phi three of x, yeah, okay, you understand. So it's just reproducing exactly the same structure over and over in space. And that's what spin is all about. Yeah? I'm not sure. Is psi one the probability that it's spin one? No, no, no. Okay, so let's go back. Let's go back. Good. All right, let's find out, let's ask what is the probability, here's our wave function, what is the probability that the electron has spin plus along the z-axis and spin minus along the z-axis and spin zero. Okay, let's work it out. Here's what we have to do. Remember those eigenvectors, the eigenvectors of spin for the spin one, not for the spin zero. How are they? Okay, for m equals one, that's spin up along the z-axis, the eigenvector was one over square root of two minus i over square root of two and zero. This is the eigenvector for m equals one. Now I'll remind you what the rule is for calculating the probability for this condition given this wave function. The rule is take the inner product of this with the complex conjugate of this, we take the inner product of this with the bra vector corresponding to this ket vector, which is its complex conjugate, and then square it, take it, it times its complex conjugate. 
So what we do, here's the rule, standard quantum mechanical rule, we take the inner product, we'll call this state here phi and let's call this m equals one, we take that inner product and then square it, we'll take its absolute value squared, it times its complex conjugate. How about this bra vector that I've called m equals one? Bra vectors are always complex conjugates of ket vectors. So that means the bra vector can be thought of as a row vector, that's just in our convention, one over square root of two plus i over the square root of two and zero, plus i because it's the complex conjugate in the second place here, i over square root of two. We take the inner product of that with phi one, phi two, phi three, what is that? i over the square root of two, phi one, no, one over square root of two, phi one, right? Phi one over square root of two plus i phi two over square root of two and that's it. One over square root of two, phi one, phi two over square root of two times i and then we multiply this by its complex conjugate. So what do we get? What does this times its complex conjugate give? I think it's just phi one squared plus phi two squared over two. This times its complex conjugate will give phi one squared plus phi two squared over two. So the probability that the electron is up along the z-axis appears to be phi one squared plus phi two squared over two. What about the probability that it's down along the z-axis? I think it's the same. I don't think so. Is it? No, I don't think so. Can't be negative. All right, I think it's the same. For this state it's the same. What about the probability that the z-component of spin is zero? That's phi three squared. In that case, the m equals zero state is one one zero. And the inner product is just phi three. Sorry, zero zero one minus state zero zero one. And the inner product is just phi three. So the probability that the electron is in the zero angular momentum state is phi three squared. So here are the probabilities. Looks like it's phi one squared over two plus phi one squared plus phi two squared over two. Same thing, phi one squared plus phi two squared over two and phi three squared. Comma, comma. Those are the three probabilities. They add up to one phi one squared plus phi two squared plus phi three squared. Sorry, this should really be, no, no, no, no, no, no. Phi one squared means phi one times phi one star. No, yes, it does. It does. It does. I think I'll leave this here for you to work out in detail. I hate doing things on the blackboard when I don't have them in my notes. I tend to make mistakes and I probably have made a mistake. But you get the principle. The principle is take the inner product of these things with, oh, I know what I did. Yeah, the reason I got these two to be equal is because I assumed that phi one, phi two were real. They don't have to be real. If they're not real, you'll get different answers for these two. Check it out. Try it out and see what you get. But the principle is always the same. Take the eigenvector, take its inner product with the actual wave function and take it square times its complex conjugate. That's a probability. And notice, most important in this case is that it's a function of position. These phi's are functions of position. It can vary from place to place. So that's the basic idea of spin. It's an exercise to work it out for spin two or for spin three halves. And it's significantly more complicated, but not in principle. Not in principle. 
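Going back to the spin-one probabilities that got tangled up on the board, here is the calculation done once with complex components allowed, using the m = plus one and m = minus one eigenvectors found above (an editorial working; check it):

```latex
P_{m=+1} = \Big|\tfrac{1}{\sqrt2}\big(\phi_1 + i\,\phi_2\big)\Big|^2
         = \tfrac12\,\big|\phi_1 + i\,\phi_2\big|^2,
\qquad
P_{m=-1} = \tfrac12\,\big|\phi_1 - i\,\phi_2\big|^2,
\qquad
P_{m=0}  = \big|\phi_3\big|^2 .

% If phi_1 and phi_2 happen to be real, the first two coincide, which is what
% happened on the board. In general they differ, and the three probabilities
% add up to |phi_1|^2 + |phi_2|^2 + |phi_3|^2, the total probability at that
% point in space.
```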
And it's not hard to find the matrices, but a little harder. All right. I very, very quickly want to come back to the Dirac equation. Let's see. Yeah? What's the direction of the matrices? Three halves should be four. Yeah. Let's see. Three halves. Three halves is the case. Plus a half, minus a half, three halves, minus three halves. So the four states. Spin two is five states. No question. No. None of this discussion seems to be symmetrical with respect to time in x, y, and z, but no t. Oh, that's true. Yeah. So it's all this around here. Spin is something like mass best defined in the rest frame. Most easily defined in the rest frame. So in that sense, what you do is you study it in the rest frame, and then you find the right way to Lorentz transform it to some other frame. But we won't do that now. It really doesn't change very much when you go to relativity. And that's because the definition of spin is pretty much what you would get if you did all this in the rest frame of the particle. But yeah, we can simplify the story just so far. This is pretty non-relativistic. It won't change much in relativity, with one exception, which we won't do now. OK. Let me come back to the Dirac equation and see what the Dirac equation has to do with spin. Spin is best defined in the rest frame, but let's... OK. Here's the Dirac equation. It had matrices, 4 by 4 matrices, and it looked like this. I d psi by dt. Now the sys were four component objects, psi 1, psi 2, psi 3, psi 4. Where did that come from? All right, I'll tell you in a moment where it came from, but let's write the equation. The equation is that this is equal to minus i alpha sub m. m runs from 1 to 3, the three directions of space, x, y, and z, times the derivative of psi with respect to xm, sum over m. So this is alpha 1 d psi by dx, plus alpha 2 d psi by dy, plus alpha 3 d psi by dz, plus beta times the mass of the particle times psi. That was the Dirac equation that we wrote down a couple of lectures ago. Now let me remind you what the conditions on the matrices alpha and beta were in order... What was the condition? The condition was that the frequency... Let me write this in a different way. This is frequency is equal to momentum alpha dot k. We can write it this way, or just alpha m km. This is the same as alpha 1k1, or alpha xkx, plus alpha 2k2, plus alpha 3k3, which is the same as alpha xkx, alpha yky, alpha zkz, all right, plus beta m. This is shorthand. This is shorthand. Am I missing an i? Yeah, this m is an index. This is the mass of the particle. Okay, let's not use m here. i. i. Good. Thank you. Yeah, sorry. And this means sum over i. Usual summation convention. Okay. All right, when an index is repeated twice, it always means you sum over it. All right. And we're required that omega squared is equal to k squared plus m squared. Remember what we did? We took omega squared by squaring this out and required that we got k squared plus m squared. What were the conditions for that? The conditions were conditions on the matrices alpha and beta. I'll just remind you what they were. They required every alpha squared to be 2, I believe. Is it 2? I think it's 2. No, no, 1. 1. Every alpha squared i, for each i, alpha x squared equals alpha y squared equals alpha z squared. Is beta different from each of the psi 1234s? No, no, beta is one matrix. Oh, beta is a matrix. Beta is a matrix. It requires beta squared to be equal to 1. These things are just there so that you get the k squared plus m squared. Okay. 
But then, in order that you don't get cross terms, in order that you don't get things like k times m when you square it out, or k1 times k2 and so forth, we required things like alpha x, alpha y plus alpha y, alpha x equals zero. That would be, for example, the cross term between kx and ky. All right? So same thing for alpha x, alpha z. We can write this in a needle notation. Anticommutator of alpha x, alpha y equals zero. Anticommutator of alpha x, alpha z equals zero. And anticommutator of alpha z with alpha y equals zero. What about anticommutator of alpha x with alpha x? Oh, sorry. One more. Any alpha with beta should be also equal to zero. What's the anticommutator of beta with itself? Two. Because it's beta times beta plus beta times beta. Okay? Right. So we can summarize these commutation relations just by saying that everybody's anticommutator with himself is two, everybody's anticommutator with somebody else is zero. Simple kind, anticommutator. All right, it's a theorem. It's not hard to prove, but I'm not going to prove it. That the smallest dimension, the smallest number of components, the smallest matrices which can solve this are four by four. If you didn't have the betas here, then you can find two by two matrices which satisfy it. In fact, the sigma matrices, the Pauli matrices satisfy it. The Pauli matrices anticommute with each other. That's something you can check. And their square is equal to one. So if you didn't have the extra beta matrix here, the Pauli matrices would solve these equations. But in two by two matrices, there are only three matrices which anticommute among themselves that way. And if you look for a fourth one, they just won't be one. You have to go to larger dimensional matrices. You try three by three matrices, there aren't. You try four by four matrices and you find, yes, that there are representations of this anticommutation algebra here. I think I wrote down a representation. I'm going to write down another one tonight, which is a little different, but it's basically, it is, again, it's equivalent. So I'll give you an example of four matrices which satisfy exactly this algebra. Now, four by four matrices, these are four by four matrices, but you can write them in block form. Writing them in block form means that you divide this up into two by two blocks. You have two by two blocks, each one of which is a two by two matrix. That's just a simple way to write them down. And then for the alphas, for each alpha, alpha one to alpha two, alpha three, put in this two by two box over here the corresponding Pauli matrix, sigma i. There are two by two matrix, so it fills up four entries here. Down here, put the same Pauli matrix except for the minus sign. And zero's here. That's pretty easy. I know, I used a different representation last time. They are equivalent. I think I put the last time, I put the sigmas off diagonal, did I? Yeah, they are equivalent. They're mathematically equivalent, and there are similarity relations, similarity transforms which relate them. But I like this one better, I decided I like this one better. And what about beta? Beta is just in the same notation, same two by two block notation. It's just one, now what's one? One is the two by two matrix one zero zero one. It really is one, the unit matrix and the unit matrix down here. And that's it, those matrices have the desired property. Incidentally when you multiply four by four matrices in this form, you can do it by tricks where really you multiply in the blocks two by two matrices. 
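And if you would rather check that four by four algebra numerically than by block multiplication, here is a sketch of exactly the representation written down above (an editorial addition, not the lecture's):

```python
import numpy as np

# Pauli matrices.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

# Block form given above: alpha_i has sigma_i and -sigma_i on the diagonal
# blocks, beta has the 2x2 unit matrix in the off-diagonal blocks.
alphas = [np.block([[s, Z2], [Z2, -s]]) for s in sigma]
beta = np.block([[Z2, I2], [I2, Z2]])

def anticomm(a, b):
    return a @ b + b @ a

I4 = np.eye(4, dtype=complex)
for i, a_i in enumerate(alphas):
    assert np.allclose(anticomm(a_i, beta), np.zeros((4, 4)))  # {alpha_i, beta} = 0
    for j, a_j in enumerate(alphas):
        expected = 2 * I4 if i == j else np.zeros((4, 4))
        assert np.allclose(anticomm(a_i, a_j), expected)       # {alpha_i, alpha_j} = 2 delta_ij
assert np.allclose(beta @ beta, I4)                            # beta squared = 1
print("the alphas and beta satisfy the anticommutation algebra")
```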
So it's an easy way to do this, and it's no harder than working with two by two matrices, but I'll leave that to your devices to check. These are the Dirac matrices, the alphas and the betas, or one particular realization of the alphas and the betas. I think last time we had beta, the bottom would be minus i. Minus i here? That was last time. This is a different representation. The last time I had these off diagonal, and I don't remember what I had for this one, I can't remember. I thought I had beta diagonal last time. I thought I had beta diagonal. I can't remember. Yeah, yeah. Yeah, okay, this is a little bit different. And you get your choice. The physics of it will be exactly the same. They just correspond to choosing linear combinations of the four entries in the, and there's no real difference. It's as physically interesting as rotating coordinates. What you're doing is rotating coordinates in this four-dimensional space, which is not four-dimensional spacetime. These four entries here do not have to do with the four. They're not direct the same as the four dimensions of spacetime. They're just the smallest representation of this algebraic structure. Okay. That's what the Dirac equation is. Here it is. Dirac equation. It can be summarized in a form like this, where this really means just this. And here are the matrices. What I want to do, the only thing I want to point out about it tonight, is that there are two kinds of two-by-tunists to this. One is these blocks, and the other was within each block. This by, what shall I call it, this doubling of the two-by-tunists corresponds to a doubling of the two-by-tunists in physics. One of the two-by-tunists is positive and negative energy. The other is spin. So there are particles of positive energy with all possible spins. There are positive particles with negative energy with all possible spins. We can see this, and that's why there are four-by-four matrices, if you like. Roughly speaking, there's an entry, positive energy with positive spin, positive energy with negative spin, negative energy with positive spin, negative energy with negative spin. Those are the four possibilities. That's why, when I say positive spin and negative spin, I mean spin along the z-axis, half-spin along the z-axis. That's why there are four components to the Dirac equation. Positive and negative energy exactly as in the one-dimensional example that we described, and spin, up spin and down spin, if you like. That's why there are four entries in the Dirac spinner, in the Dirac equation. Of course the easiest way to see what's going on is to look at a Dirac particle with zero momentum. Just to take a particle with zero momentum, that means k is equal to zero. In that case, we can just throw away the alphas altogether, and we get a very simple equation when the momentum is zero, that means that psi is constant in space is what it means. It has no derivatives with respect to space. k is absolutely zero. In that case, the whole Dirac equation just becomes i d psi by dt is equal to im beta d psi. That's it. That's the Dirac equation. For a particle at rest, particle with momentum equals zero means a particle at rest. If it's at rest, the wave function has no space dependence. It's just completely constant with respect to space, and this is the whole thing. And it tells you what the possible frequencies are. We can write it a different way. We can write it maybe. Yeah. No i here. No i there. Let's see what it says. Here's the way we'll think about it. 
We'll take these psi. There are four entries, one, two, three, four, and we'll divide it, same kind of division into blocks. And each half of it, the upper half and the lower half, I'll put a thing with two entries, we'll call it psi plus and psi minus. The plus and minus mean positive and negative, and no, they don't. They don't mean anything right now. Just plus and minus. Nothing special. But each psi plus, what is psi plus? Psi plus is the upper two components, and the bottom one here, psi minus, is the other two components. Sorry, what is that column you've written there? We're just writing the column vector in a notation which is similar to the way I divided up matrices. I divided up matrices by just drawing some red lines and doing two by two matrices in here; we'll divide up the spinners, the column vectors, by just dividing them in half, and in each place put a two component object. So psi plus and psi minus are themselves two component objects. Two two component objects make a four component object. It's just a pretty way to organize the Dirac indices, the Dirac structures here. Then we can write this equation very nicely. This equation is pretty simple. What does it say? Beta is the matrix 0, 1, 1, 0. What does this matrix do when it hits a two component object, psi plus, psi minus? What it does is it swaps the upper and lower one. When this matrix hits a vector like this, it just interchanges psi plus and psi minus. We can also write this as a two component object, psi plus, psi minus, really a four component object, but two two component objects. And now we can see what the equation says. The equation says that i d by dt of psi plus equals m psi minus and i d by dt of psi minus equals m psi plus. That's the whole upshot of the Dirac equation for particles at rest. It's even simpler than that. You can, the next step. Anybody got a good idea what to do with this to simplify it? That's one thing you could do. Add them and subtract them. Add them and subtract them. So what happens if you add them? i d by dt of psi plus plus psi minus equals m psi plus plus psi minus. All right? That's adding them. So subtracting them, i d by dt of psi plus minus psi minus equals minus m psi plus minus psi minus. The upshot here is that this really breaks up into uncoupled equations for two things. One thing is psi plus plus psi minus does not get mixed up with psi plus minus psi minus. And psi plus minus psi minus doesn't get mixed up with psi plus plus psi minus. They satisfy equations which are completely independent of each other. How about the frequencies of this one? What can you say about the frequency of this one? i d by dt is just omega, right? If the wave function has a definite frequency, then i d by dt just gives you its frequency. So this object here has a frequency. This one has a frequency omega, which is just equal to m. This one has a frequency omega equal to minus m because of the minus sign here. What does it say about the energies of these two solutions? One of them is a positive energy solution and one of them is a negative energy solution. So just exactly the same as the one-dimensional example we gave, there are positive energy electrons and negative energy electrons. I don't expect you to remember all of this and to be able to deal with all of this from one sitting with it. It's worth going back now, finding a little book on the Dirac equation and going through it. Basically, the 2 by 2 blockiness here has to do with positive and negative energy.
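For reference, here is the rest-frame calculation just done, collected in one place (an editorial recap; the convention is that a state of definite frequency goes like e to the minus i omega t, which is how "i d by dt is just omega" was used above):

```latex
i\,\frac{\partial \psi_+}{\partial t} = m\,\psi_- ,\qquad
i\,\frac{\partial \psi_-}{\partial t} = m\,\psi_+
\quad\Longrightarrow\quad
i\,\frac{\partial}{\partial t}\,(\psi_+ \pm \psi_-) = \pm\, m\,(\psi_+ \pm \psi_-).

% With definite-frequency states going like e^{-i omega t}, the combination
% psi_+ + psi_- has omega = +m (positive energy) and psi_+ - psi_- has
% omega = -m (negative energy); each combination still carries the
% two-component spin index.
```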
Psi plus plus psi minus are positive energy things. Psi plus minus psi minus are negative energy things. But that still leaves the 2 by 2-ness of the individual entries here. Each one of these is a 2 by 2 entry. So let's forget psi plus and psi minus, we've taken care of that by adding them and subtracting them. But now let's look at the entry of just one of them. It also is a 2 by 2 thing. What is that 2 by 2 thing? It's the spin. It's the spin which is described by these Pauli matrices here. If we completely forgot about the 2 by 2-ness associated with the upper and lower pieces here and just worried about the 2 by 2-ness within here, then that's the same as the 2 by 2-ness within the 2 by 2 matrices here. That's the spin. So the Dirac equation predicts that positive energy electrons come in two different kinds; the two different kinds have to do with the upper and lower components, or the up and down components, within a psi plus here, and that's the spin. It also predicts that there's a positive and negative energy-ness to it. That comes from here. That has to do with the existence of positrons. Okay? Holes in the negative energy sea are positrons. That's what the 4 by 4 is about. You can either think about it as positive and negative energy, spin up and spin down, or you can think about it as electrons and positrons, each of which is a half spin particle. So yeah. Can you do it the opposite way around, where the big blocks are the spin? Yep. You're asking whether you can... just to make sure I've got you right. Yeah, absolutely, you can. Yep, you can put the positive and negative energy-ness into the smaller matrices and the spin into the bigger matrices. Yes, you can. Either way, they're kind of equivalent to each other and they just correspond to reshuffling of the indices. Could you go through that one more time on the positron explanation? Yeah. You can see from here that there are both positive frequency and negative frequency solutions. The things which have positive frequency are the sums of these two, and the things which have negative frequency are the difference between the two. Negative frequency means negative energy, positive frequency means positive energy. So if there are positive energy solutions, that means positive energy electrons. If there are negative energy solutions, that means negative energy electrons. What do you do with a negative energy electron? You fill it up. You fill up the Dirac sea with all states of negative energy. You always lower the energy by putting in a particle of negative energy. If you want to lower the energy down to the absolute bottom, you fill every negative energy state. Among others, you fill in negative energy states which have no momentum. I just worked that out for zero momentum. But you fill all the negative energy states and that's the vacuum. You take a negative energy electron and stick it into a positive energy state. You've made an electron and a hole, or an electron and a positron. So in exactly the same way as for the one-dimensional example, the positive and negative energy electrons become electrons and positrons. The other doubling becomes the spin. So there are half spin particles which come in particle and antiparticle pairs, and that was the prediction of the Dirac equation. One of these marvelous examples of just fiddling with abstract mathematics, little, little, little, and boy, out comes this very, very dramatic and incredible prediction: positive and negative energy particles with spin.
Remember that spin had only been discovered as a sort of empirical fact from that chart up there a couple of years before. I mean, just a few years before Dirac did this, very shortly, a short time before. So yeah, it was a very dramatic event. Are you saying that negative energy electrons are positrons? No. I'm saying the absence of a negative energy electron is a positron. We went through this before. You fill up all the negative energy particles and you call that the vacuum. You can't put more than one negative energy particle in a state. So the vacuum takes all the negative energy, E less than zero, not E greater than zero, and fills them up. You put an electron into every one of those states. That has the lowest energy because all the negative energies are filled. That's the vacuum. That's empty space. Now you remove an electron from here and put it over here, not necessarily at the same place up here. What is that? Well, now there's a positive energy electron and the absence of a negative energy electron. The absence of a negative energy electron is a positive energy positron. It's exactly the same as in the one-dimensional example that we did, where the absence of a negative energy electron, a hole in the Dirac sea, becomes a particle of opposite charge. Why don't the higher energy electrons above that fall down into there? They won't fall down to here. Right. But why don't they fall and fill that little gap? They do. That's called positron-electron annihilation. It creates photons. Well, the first thing it has to do is conserve energy. Now, how much energy does an electron and a positron have? Both the positron and the electron have positive energy. Why does the positron have positive energy? Because it's the absence of a negative energy. Okay? It's the absence of a negative energy, so it has positive energy. All it is is a positive charge and a negative charge. When the positive charge and the negative charge come together, you can either describe it as the annihilation of the positive charge with a negative charge, but energy has to be conserved, so photons have to go off. The other way to think about it is, as in an atom, you know, an excited electron dropping down to a lower state, what happens then? Photons go off. Right. Yeah. Just a question on the mathematics. So here you very clearly show how, in the zero-momentum case, with this particular representation of beta, you get the positive and negative energy solutions. To get the spin representations in the zero-momentum case, would you take the different alpha matrices... I have a hard time seeing how you get that cleanly, because there's three alpha matrices. Say again? There's three alpha matrices. There's more... Oh, and those three alpha matrices become the three spin matrices. But in the zero-momentum case, you don't see them. So to look at spin in a zero-momentum state, how would the mathematics show that simply? Well, the zero-momentum positive energy state, let's say. Zero momentum, positive energy, that would correspond to a spinor of the form, let's call it psi plus, it's the same psi plus here. Okay? We want to set psi minus equal to zero, because we don't want any negative energy part of the solution. So that means psi plus, psi plus. All right, well, alpha, for example, alpha is not quite... Alpha is not really... Are you asking me how you make the spin operators from the Dirac operators? No, not really. I'm just asking how...
If there's a way, as simply as you showed the negative energy states, to show how you discover spin from the spinor. Well, you've discovered that the positive energy particles come in a pair, one which you can call up and one which you can call down. Now you want to see that that pair really has angular momentum. Yeah, that it really is the same. Now you can see that these Pauli matrices do come into it. Okay? But to see that it really corresponds to angular momentum, you have two ways of going about it. One is through the connection between angular momentum and rotation symmetry, and the other is through the fact that angular momentum is conserved. Okay? So what you would want to show, let's use the conservation definition, what you would want to show is that in any kind of collision where an electron comes in and scatters off something else, that if you assign the electron an angular momentum, whose description is in terms of this twofoldness within here, that that angular momentum together with all the other angular momentum in the problem would be conserved. All right. This is a little too much for us to do tonight. What we can do more easily is show the connection with rotation symmetry. But I don't want to do that. I think we've all reached a certain limit by now. There happens to be another electron around. It can annihilate the positron, even if it doesn't have the same energy. Yes. Yes. And then you just get a different number of photons. Or photons coming out with, I'll tell you what happens. Okay, supposing you have two electrons with the same energy. Two electrons coming in in the center of mass frame, let's say. Well, I'm assuming that you had the particle pair, and then another electron happens to be around. Oh, it doesn't matter which electrons annihilate. I mean, electrons don't remember which one. One may have a certain energy and the other may have a different energy. Yeah, but when you kick this electron out of here, you can kick it up to any energy. It doesn't necessarily have the same energy as the positron. Yeah, when you hit that electron, you don't necessarily kick it up to the same energy. You just hit it. You hit it and it goes somewhere to some other energy. So then there's no memory of which, let's suppose there was another electron around. The Dirac sea is filled and now a positron, not a positron? Yes, a positron is formed and another electron. Then this electron can fall into the hole or this electron can fall into the hole. In either case, you get photons out. I'm still unclear. So let's say that the positron was the eighth level or whatever, negative eight. Why would negative seven and negative six drop into that? And why would it have to be an electron? Oh, no, that can happen. This is a perfectly good thing to happen. All right, so let's see. We start, let's see what that says. We start with negative seven over here, a hole. That means a positron, one positron, one e plus, a positively charged electron. Notation for a positron is e plus. Notation for an electron is e minus. So we have one e plus with an energy equal to, let's say minus, with an energy equal to seven units, right? We took the electron with minus seven units and then removed it, which means we have plus seven units of positron. And the electron went someplace. Now, let's not even worry about where the electron went. The electron, this became an electron up here someplace. Now you ask, why can't one of these fall down to here? Okay, what would that mean?
You would start with a positron and you would end with a positron, but you would end with a positron of a different energy. This hole was a positron. Now an electron from here comes and falls into this hole. There's a negative energy electron falls into this hole. That's what you're asking. Okay? So you started with a positron, here it was, or the hole, and you ended with a positron. It's just a positron of a different energy. So a positron has changed its energy. When a positron changes its energy, it can only do so by emitting photons. Right. How do you make an electron change its energy? By emitting photons, slamming it into something and having it emit the... So a positron from here, under the right conditions, sorry, an electron from here, can fall down to here. That's just another way of saying the positron scattered from one state to another and radiated some photons. Okay, so wouldn't they just always, you know, because there's always higher energy electrons, wouldn't they all fall down and the only thing would be left with a positron? The reason it doesn't happen, so you're saying, why is the positron stable? Why doesn't that... No, the answer is because you have to conserve both energy and momentum. And the conservation of energy and momentum doesn't allow it to happen. Okay, so we can work that out, but again, all right, so why don't you ask me that at the beginning of next week, and I'll show you why... Look, you could have asked exactly the same thing about the original electron. Why can't an electron always lower its energy and radiate photons? Why can't an electron just drop down in energy till it's the lowest energy electron? And the answer is because it can't simultaneously conserve energy and momentum and emit photons. Just so it just doesn't happen. So... So I will get in touch with you and give you the exact schedule. We will definitely meet 10 times this quarter. Next quarter, I have less obligations, so I won't be disappearing on you next quarter. Thank you. Thank you for your patience. For more, please visit us at stanford.edu.
(November 16, 2009) Leonard Susskind discusses the theory and mathematics of particle spin and half spin, the Dirac equation, and isotopic spin.
10.5446/15064 (DOI)
Stanford University. Let's review a little bit, and then I want to move on to generalizations of what we've talked about so far. I think we worked out the equations of an expanding universe. There were Newton's equations. Let's talk about something else first. Does Newton's equations really get it right? Yeah, Newton's equations does get it right for the most part, and let me explain why. Einstein's equations have to do with curved spacetime. Now, the universe that we're ultimately going to study has curved spacetime all right. And in fact, some versions of it even have curved space. That simply means that space itself, forget spacetime, just space itself. If you measure triangles on it, if you do various kinds of geometric exercises on it, you'll discover perhaps that space is curved. At the moment, it looks pretty flat, but it's possible that it will turn out on the average to be curved. And if it is curved, well, maybe it looks like a three-dimensional version, let's say, of a sphere. Well, we're going to study later, not tonight, maybe partly tonight. That's a portion of a sphere. That's a portion of a sphere. We're over here. We look out. We can only see so much. We can't even really see that the sphere is curved. But at a large enough distance, we may be able to see that the sphere is curved. On the other hand, supposing we just decide to look at very neighboring galaxies. Now, very neighboring galaxies can mean a billion light years from us now. Very, very neighboring galaxies, much smaller than what we think the radius of curvature of this universe is. Well, then it looks flat. And if it looks flat, it should mean that at least for that portion, if we're not interested in the whole thing, but we're just interested in the local nearby behavior, we should not have to worry about the fact that it's curved. If that's correct, then it means that the way these galaxies move relative to each other and how they move apart from each other, at least in the small here, can be studied using Newton's equations. That's what we've been doing. We've been looking at the universe in the small and studying how a small little fraction of it is expanding or not expanding whatever it's doing. And it's perfectly legitimate and in fact entirely consistent with Einstein, with relativity, except for one thing. We would run into trouble if the galaxies or whatever is present, galaxies, particles, whatever is present, if they were really moving past each other with a significant fraction of the speed of light. One of the assumptions is that the neighboring things are moving relatively slowly with respect to each other, something very, very far away, maybe moving with a large velocity relative to you. But as long as the things nearby are moving with non-relativistic velocities, you can study, relative to you, you can take a small patch of it. Now, small could mean 10 billion light years, okay? But you can take a small patch of it and study it without using any relativity, really. If we discover that there are particles moving with close to the speed of light past each other, then of course we will have to modify the equations. But there are particles moving fast by comparison with the speed of light past us. What are they? Neutrinos. Well, neutrinos for one, but photons. Not only photons from the sun, I mean photons that would be there even if there was no sun, the universe is filled in the same way that it's filled with galaxies. It's also filled with radiation, homogeneous radiation. 
That homogeneous radiation does move with the speed of light. That means that we have to modify our equations somehow to account for this very, very fastly moving, rapidly moving material, photons. We're going to do that tonight. But I want to just review what we did last time quickly. We first of all said, suppose that space is homogeneous and filled with galaxies, and I'm not going to try to draw all the galaxies. They form a gas, if you like. They kind of fill the blackboard with a certain number of particles per cubic meter. In other words, a density, a density that we called rho. And that was the content of the universe in kilograms per cubic meter, if you like. You could use some other units, but whatever the units you like. Physical units, kilograms per cubic meter. We called it rho. We laid down a grid on this universe, and laying down the grid, there was clear ambiguity. Imagine that we laid down the grid at some specific time, like today. We laid down the grid, and you could ask, what is the spacing between the grid, was a coordinate system? Let's call a coordinate x, and the distance between x equals something, and x equals something plus one. In other words, one grid separation here, one lattice separation, is a certain distance associated with it. How big is that distance? Well, we called it a, but how big is a? That depends on the grid that we laid down. If we laid down a very coarse grid, it would be one thing. If we laid down a fine grid, it would be another thing. And so it had better be that our equations, at least at the moment, do not prefer any specific value of a. We could lay down a different grid. The different grid could be twice as dense. So here's the black forms one grid, and the black and green together form another grid. If we looked at the more dense grid, we would also invent an a, let's call it a prime. A prime is the distance between neighboring points on the dense grid. That would be one half a. So if you ask me, what is the value of a, I'm going to say, I can't tell you until I know precisely what grid is laid down. And so a itself doesn't have a physical meaning, at least at this stage. Later on, we'll discuss more of what a means. But at the moment, on a flat blackboard, doesn't mean anything by itself until you specify exactly what the size of the grid is. Okay, so a is just a sort of bookkeeping device. On the other hand, ratios of a may mean something. Let me give you an example. Supposing I told you, now a is a function of time. Universe is expanding. The distance between neighboring galaxies, the galaxies are embedded in the grid. They move with the grid. The actual physical distance between galaxies is growing. Something I told you that over a period of time, a doubled. That has meaning. That means the distance between every galaxy, every pair of galaxies, doubles. So ratios of a, particularly ratios of a at different times, are recording a history of how the universe is expanding or contracting. And that's why our equations tend to only involve ratios of things with a. Let me give you an example. Let's take a dot, the time derivative of a. Well, that will depend on whether I use the black grid or the black green grid. Every a associated with the black grid is twice as big as every a associated with the black green grid, and if every a is twice as big, a dot, the time derivative, will also be twice as big. On the other hand, if I take a dot divided by a, and let's compare it with a prime dot divided by a prime, they will be the same. 
They will be the same because the ambiguity in the scale of the grid will cancel out. A prime will be a half a, a dot prime will be a half a dot, a, you know what I mean, and the factor of two will just cancel out. So things involving ratios of a, those are invariant, physically meaningful things. And if you remember, a dot over a is just the Hubble constant at a given time. That really does mean something, but a itself, an ambiguity in how big it is, but it's a once and for all ambiguity. Once you fix it, then you stick with it. Okay, another thing which is ambiguous was we introduced a constant nu. Remember what nu was? Nu was the mass contained within a single cubic cell of the grid. In other words, a grid element here, one by one by one by one. What one what? One grid unit. The amount of mass within that cube, which we called nu, is ambiguous because we haven't figured, we haven't determined exactly what the grid is. Nevertheless, once the grid is fixed, nevertheless, the amount of mass within a single cube is called nu. But it also changes when you change the grid. Now let's go to the equations that we derived last time. The equations we derived last time, I'll just remind you how we derived them. We took the universe filled with galaxies. We called them particles uniformly, and we placed ourselves right at the center arbitrarily, but still consistently. We placed ourselves at the center, looked at another galaxy which was at a specific location on the grid. Some x, some x on the grid, and we studied Newton's equations for the motion of that galaxy, which we would think would be horrendously complicated, horrendously complicated because they're interacting with loads and loads of galaxies. But we used the famous theorem Newton. Well, first of all, we imagined smoothing things out so that we could think of the distribution of these galaxies as smooth, smooth in uniform. We took a sphere centered at the center, centered at us, excuse me. And we used Newton's theorem that told us that relative to us in the coordinate system where we are at rest, the force on that galaxy only depends on the mass within that sphere, as if all of the mass were concentrated at the center. So we just said, let's concentrate all the mass within that sphere at the center and then study how under such circumstances the galaxy at x would move if it were under the influence of just that mass. You don't have to worry about the masses on the outside, you only have to worry about the masses on the inside, and we worked it out. And the simplest way we worked it out was just to say if there was a particle over here moving under the influence of a fictitious mass at the center, then the energy of that particle would be conserved. The energy consisted of two terms. Well, I think I'll, yeah, let me come back to the derivation again. I just want to review the derivation very quickly, but I'll come back to it in a moment. Let me remind you what the equation was. The equation that was derived was that a dot over a squared, that came from the kinetic energy, was equal to the density of matter, rho, measured in physical units, in other words, measured in kilograms per cubic meter, not this new object over here, but rho. And there was a little more. There was an eight, a pi, and a g, g newton. That was the equation that we derived by just looking at the conservation of energy. Rho, oops, yeah, rho, oh sorry, you didn't catch me, over three. That three ultimately came from the volume of a sphere, four-thirds pi r cubed. 
Okay, that's where the three, that's where the eight is two times four. Pi, pi is pi, three is three. This is related to the volume of some sphere. Okay, that was the equation that we worked out. That was in the case of zero energy. That was in the case where the potential energy of this point and the kinetic energy of it exactly canceled. It could also be understood as the situation where every galaxy is exactly at the escape velocity, just on the knife edge between being able to escape and not escape. That's what that formula was. We added one more thing to it. And the one more thing is we said that rho. Rho, which is the amount of mass, it's the amount of mass per unit volume, is related to nu. Is it the same as nu? No, because nu is the amount of mass per grid box. But the grid box, you don't know what its volume is. This is the mass within one cube. If you want the density, then the density is related to nu by dividing nu by the volume of one of these boxes. This is density per cubic meter. This is per cubic box. You divide that by the volume of a box, which is a cubed. And that's one more fact that we put in that rho is nu divided by a cubed. Now is rho constant with time? No, because A changes, but is nu constant with time? If nu just represents ordinary particles sitting in the universe which are never destroyed, never created, particles of protons, let's forget whether the proton decays for the moment, protons are forever and galaxies are forever for a moment. We won't believe that forever, but we will believe for the moment that galaxies are forever. Did you get that? Yeah, okay. If galaxies were forever or protons were forever, then nu would be constant. The number of protons within a box is the same at all times. All that happens is the box grows. So the protons thin out, but the number of them in the box stays fixed. And so nu stays constant. What is it? What is its numerical value? I can't tell you what the numerical value is because that has to do with the exact grid that I use. If I change grids, I change nu. Okay, so in any case, once I've established all of the conventions, I can replace rho by nu divided by a cubed. Nu is a constant. It doesn't change with time. Eight is a constant. Pi is a constant. Three is a constant. G is a constant. All of this stuff here is just a numerical constant. In fact, by judiciously changing your definition of the grid, since changing the grid changes the magnitude of nu, I can if I like. I can if I like, choose 8 pi g nu over 3 to just be 1. It doesn't make any difference. The point here is that the numerical constant that appears here really doesn't make any big difference to the way the university evolves. And the way you can see that is by changing definitions of the grid, you can change the constant. So we could, if we like, just set 8 pi g nu over 3 equal to 1, and then we would have a very nice differential equation. A dot over a squared is equal to 1 over a cubed. Now would you like to see how to solve that? We solved it last time, but we solved it by guessing. How many people want to see me solve it in real time? I know when you want to see me solve it in real time because you're hoping I'll make a mistake. Yeah, you were going to ask a question. Okay, well it's too bad. I'm going to solve it whether you want me to or not. Here's the way you solve it. Here's the way you solve all these equations. You first of all solve, you first of all take the square root of both sides. On the left side that's easy. It's just a dot over a. 
On the right hand side it's a little bit of a nuisance. It's 1 over a times square root of a. Do you see where that came from? A cubed is a squared times a. The square root of it is a times the square root of a. Okay, let's multiply both equations, both sides by a to get rid of it in the denominator. And that just tells us that a dot is 1 divided by the square root of a. That's where we would like to begin. Okay, now we're going to write this. dA by dt, which is what it is, is equal to 1 over square root of a. But now we're going to do something tricky. The very tricky thing we're going to do is instead of thinking of a as a function of time, we're going to think of time as a function of a. We're going to take a to be the independent variable. And to do that, here is dA by dt. That looks like a is a function of time and we're differentiating it. But if I turn it upside down, meaning taking 1 over it, it's dt da. And now it looks like it's the derivative of time with respect to a. And that's equal to the square root of a. So how do we solve it? We look for a function of a whose derivative is the square root of a. What function has this derivative, the square root of a? A to the three-halves, right? Well, two-thirds A to the three-halves. So we find out that t is equal to two-thirds. The two-thirds is absorbed into other constants that we've been ignoring, and that's times a to the three-halves. All right, we can solve that now. What we really wanted was a as a function of time, but that's easy. If we just neglect the two-thirds, I just don't feel like writing it, this tells us that a is equal to t to the two-thirds. A is equal to t to the two-thirds, which is a function A of t, which looks like that. And it gets flatter and flatter and flatter. The flattening of it is deceleration. The universe is decelerating, but it never comes to rest. We could see that it never comes to rest because when we look at this equation, there was no point at which a dot over a becomes zero. How do I know it doesn't become zero? Because the right-hand side is never zero. It gets smaller and smaller as a gets bigger and bigger, but it never goes to zero. And that's why the universe decelerates. Another way to see that it decelerates, it's just a particle moving away from a fixed force center. It will decelerate because of the attraction by the force center. And that was lesson number one. I redid it because, well, I always think it's worthwhile redoing the most important derivations. Yes, it's okay to throw away the negative square root. What's the logic? The logic is, from the equation, you cannot tell whether the universe is expanding or contracting. Both are possible from the equation. If you start, it's exactly the same deal as saying you have a planet over here. The coffee cup is a good planet. Well, and the almond here is a particle, is a rock. And you see the rock is right over here. The equations of Newton will not tell you whether the rock is moving away or whether it's moving toward. There's one thing you can be sure of. Let's ignore for the moment moving in orbit. Let's assume we know that the rock is moving radially. There's one thing we know for sure: it's not standing still. It may be momentarily at rest. It might have come up and stopped, but it's not going to stand still. It's accelerating, it's accelerating back toward the planet. So it could be going out and decelerating, or it could be going in. You can't tell from the equation the same way.
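The same solution, collected in one line as an equation (with the overall constant set to one by the choice of grid, as described above):

$$ \left(\frac{\dot a}{a}\right)^{\!2} = \frac{1}{a^{3}} \;\Rightarrow\; \dot a = \frac{1}{\sqrt{a}} \;\Rightarrow\; \frac{dt}{da} = \sqrt{a} \;\Rightarrow\; t = \tfrac{2}{3}\,a^{3/2} \;\Rightarrow\; a(t) \propto t^{2/3} . $$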
You can't tell from the equation whether it's expanding or contracting. And that has to do with the two possible solutions, whether the ADT is positive or negative. If it's contracting, it's negative. If it's expanding, it's positive. Okay. Excuse me. All of these objects are moving radially with respect to each other. But of course, I think of all these objects out there, they're all kind of spinning around each other. Okay, so that's right. But still, on the average, they tend to move as if they were embedded in this grid. Given that they are embedded in the grid, all the only option they have is to move radially relative to each other. Now, you're perfectly right. There are motions on top of that average motion. The motions on top of the average motion don't, of course, satisfy this. They're called by astronomers peculiar motion, meaning, I don't know, are they peculiar? Well, I don't know. The sun moves around, sorry, the earth moves around the sun, and the earth is not participating in this expansion relative to the sun. But if you go a few galaxies over, the whole thing is. So the real motion of real galaxies is a combination of this average flow. Think of it as a flow, a river going downstream. How fast is any given molecule moving? Well, I don't know how fast any given molecule is moving, but I do know on the average that clumps of molecules sufficiently averaged are moving with whatever the velocity of the river is. On top of that, there's the peculiar motion of the molecules relative to each other. So that's the way to think about it, a flow, and on top of the flow, fluctuation. And we're ignoring the fluctuation at the moment. Okay. There's two directions I want to expand it, well, not expand, there's two directions I want to generalize in today. The first, well, probably the second, the second will have to do with what happens if you replace massive particles, galaxies, or protons, and so forth by photons, by radiation. How does this whole story look different? If there was a universe, a fictitious universe which had nothing in it but photons, how different would this be? Now, why am I doing that? What a silly thing to do. Why take a fictitious universe which is only made of photons? Well, what we're going to find is that early on in the history of the universe, the most important thing in the universe was photons. Today the most, and when I say most important, I mean the largest concentrations of energy. Early on, the most concentrated, the most, the biggest form of energy in the universe was radiation. Today the most dominant form of energy in the universe is just the masses of particles, E equals MC squared kind of energy. That's what this theory is about. It's about the behavior of massive particles, non-relativistic, slowly moving particles. Yeah. That's also presumably made of particles that are pretty much standing still. Oh, oh, oh. 30 years ago we would have said it down. We're still 30 years back, or 25 years back or something like that. You're right. Good, good, good. Yes. But dark matter, by contrast, is part of this story. Yes. Thank you. That was helpful. Yeah. What I'm telling you now is an old story, but it's a story whose basics are important to us. Okay. Two directions I want to go. The first has to do with what happens if the universe is made out of other kinds of stuff other than just particles more or less rest relative to each other, radiation in particular. And the other is to move beyond the assumption that the energy is zero. 
There was no reason for that. It was arbitrary. We said arbitrarily, let's assume that every galaxy is moving away with exactly the escape velocity. Let's back off that now. And how do we back off it? We go back to the energy equations and say the energy of that particle moving in the fictitious field, not the fictitious field, but the field of the fictitious concentrated mass at the center. This is not a real galaxy now. This is the combined mass of everything within the green sphere. Particle out here, moving outward. And let's apply the conservation of energy to it. Let's go back through the derivation, apply the conservation of energy, and then see how the equations change when we go away from the limit of zero energy. OK, so what is the energy of this particle? This whole glob here has mass m. We'll come back to what m is in a moment. But it has mass m. It's not the mass of the sun. And it's not the mass of any specific galaxy. It's the mass of everything in this sphere. And the energy of this fellow over here is one-half m. m is the mass of this fellow, not the big mass in here, the mass of this galaxy, times its velocity squared. That's one term. And then the other term is minus mg over the distance between them. Let's call the distance d as we did last time. Oops, I missed something, didn't I? Another factor of little m, right? Right. Product of the masses, Newton's constant, distance between them. That's the energy of this particle moving out. And what do we know about it? We know that it doesn't change with time. It doesn't change with time because our model, that's a useful model, is just the motion of a particle in a fixed background of mass. The energy is constant. And so let's set the energy to be equal to some constant. Now, what does constant mean? Does it mean a numerical constant that's independent of everything else? No, it doesn't. It could depend on which particle we're talking about. It could. In fact, if the universe is homogeneous and isotropic, the only real thing it could depend on is the distance away. So I'll simplify the discussion for now by taking this particle to be at x equals 1. In other words, we're going to take a specific particle, a very definite one, at x equals 1 and focus on it. It has some specific energy and that energy will never change. So let's just call it E. And that's a constant. What value of the constant? I don't know. I can't tell you offhand. No more than I can tell you, if I know that there's a nut over here and there's a planet over here, there's no way I can tell you a priori what the energy of that particle is. I don't know how fast it's moving. So we have to study all possible cases. So we just put the energy and call it E and take it to be a constant, a numerical constant. Let's write out now everything. Let's divide this. First of all, divide it by M. Little m. Let's divide by little m. Divide by little m. And then over here we get E over M. Now E is a constant, M is a constant, E over M is a constant. So I really haven't made it any more complicated by dividing by M. I'll leave it here as E over M. But the right hand side is just a numerical constant. Let's even do a little more. Let's multiply by 2. It's still just a constant. And since I didn't tell you in the beginning what E was, all I know is that this is a constant. So let's leave it that way. Now what about the velocity? If this is the galaxy at x equals 1, then its velocity, well, its distance, the distance is A times x. The velocity is A dot times x. 
And so now we can just plug that in here. And x is 1. If I've chosen x to be 1, so it's very simple, the distance of the galaxy is just a scale factor A, it's called a scale factor, and its velocity is just A dot. So let's plug it in here. This just says A dot squared minus 2MG over D, and D is A, is equal to some constant. Now I'm just going to call it C, constant on the right-hand side. This has many of the elements of this equation here, but this equation, this nice equation, we divided by A twice to get A dot over A squared. Why did I do that? Because I knew in the back of my head that A itself is not a meaningful thing. It's only ratios of As. So in the back of my head, I knew that what I really want to get is an equation for ratios of As, or A dot divided by A. So I divided this equation by A squared, and that makes an A cubed appear here. Notice something's happened on the right-hand side here, it's no longer a constant. C is still a constant, but it's divided by A squared now, something new. And finally, we used the mass divided by the volume. Now this is a sphere, this is a sphere of radius A, x is equal to one, a sphere of radius A, and the volume of a sphere of radius A is four-thirds pi A cubed. So I have A cubed down here, that's not quite the volume of the sphere. Let's fix it so that it is the volume of the sphere. Let's multiply, well I don't want to botch up this equation too much, let's rewrite it. A dot over A squared minus twice MG over A cubed is equal to a constant over A squared. But now let's multiply and divide by four-thirds pi, so this is four pi over three, and we have to multiply out here by four-thirds pi, that becomes eight-thirds pi, that's where our eight-thirds pi came from. Okay, four pi A cubed over three, that is the volume of the sphere. And what is the mass divided by the volume? The density. So by fiddling around, I got the Friedman equation, but with a right-hand side now, a right-hand side that knows about the total energy. If C is positive, it meant the total energy was positive, it meant the kinetic energy outweighed the potential. Sounds like, and it does mean, that the galaxies are going to continue to recede forever. They've beaten the escape velocity. If C is negative, then the energy is negative, more potential energy, more negative potential energy than kinetic energy, and that's the situation where you expect everything to go out a ways and then come back and crash. So we now have a new equation, the Friedman equation for non-zero energy. The energy can be positive or negative. Both are allowed, total energy. And let's examine it a little bit and see if we can see a little bit what it says. It's not too hard to solve for any given case, but let's not solve it. Oh, one other step. One other step. Let's write it over here. A dot over A squared, and let's transpose this factor to the other side. Equals 8 third pi G times rho, but rho is nu divided by A cubed. Remember nu was the mass per unit cell size. Nu over A cubed, that's rho, and this G, or G is there, plus C over A squared. This is the new thing. C over A squared is the new thing that was not here in our initial study of this equation. What does it do? So let's look at it first. This is the real Friedman equation with this other term on the right-hand side. And it was derived from general relativity, not from special relativity. Sorry, not from Newton, but nevertheless, we just derived it from Newton. Good. Okay, so here we are, let's see what these things mean.
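Collecting the steps just described into one chain of equations, for the galaxy at $x = 1$ so that $D = a$ and $v = \dot a$, and with $M = \tfrac{4}{3}\pi \rho\, a^{3}$ the mass inside the sphere:

$$ \tfrac{1}{2} m \dot a^{2} - \frac{G M m}{a} = E \;\Rightarrow\; \dot a^{2} - \frac{2 G M}{a} = \frac{2E}{m} \equiv C \;\Rightarrow\; \left(\frac{\dot a}{a}\right)^{\!2} = \frac{8\pi G}{3}\,\rho + \frac{C}{a^{2}} . $$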
Let's assume for the moment that the universe continues to grow. If C is positive, and everything else on the right-hand side is positive, 8 pi G nu A cubed, they're all positive, then the right-hand side is strictly positive, and A dot over A is positive. In other words, the universe continues to grow. If A dot is positive and A is positive, if A dot, the time derivative of A is positive, the universe continues to grow forever. It may slow down and slow down, but it will continue to grow, or at least it won't contract, and it will continue to grow. So if C is positive, the universe will continue to expand, it will eventually get arbitrarily big, A, arbitrarily big, I mean that A will eventually exceed any bound and become very large. So let's look at it in the limit that A becomes much large, very, very large. Excuse me, why does A dot over A have to be positive? I mean, the square of it, obviously, has to be positive. Right. The square of it is positive. That's right. Good. The square of it has to be positive, so let's assume that initially it is positive, that the universe starts out expanding. Okay? Then the only way it can ever get to be negative is if it goes through zero, and it can't go through zero if the right-hand side is positive. Good. So that's a good point. The right-hand side being permanently positive disallows this from being zero, and therefore it can only continue to grow. Can't jump from positive to negative without going through zero, and this is never zero. Okay. Now, which term here is bigger? One over A squared or one over A cubed? Well, that depends on whether A is bigger or small. All right. But in particular, let's go and think about very large. Let's start with small A first. With A being small, which is bigger, one over A cubed or one over A squared? One over A cubed. If you take A to be small enough, this will always beat this, and so for a sufficiently small A, this is negligible compared to this. We've already studied the case without this, and we know the answer. We know the answer as long as A is small, in other words, in the very early phases of the expansion, when A is just starting out growing, it's just beginning to grow, this term is not important. And what do we find? We found out when the term was absent that A expands like T to the two-thirds. But now let's go to the other limit. This is a standard way of thinking about all kinds of things, go to limits. Often an equation, this equation is solvable incidentally, but it's nasty. We don't have to solve it. We just have to know what it's doing at the two ends. The two ends mean when A is very small and when A is very big. Okay, so what about when A is very big? Which term is big? This term is the biggest term when A is big. When other A squared is much, much bigger than one over A cubed, you may have to wait a while and how long you have to wait may depend on this constant here, but eventually this term will become much larger. So let's study that equation. Let's study the equation A dot over A squared is equal to some constant, any constant over A squared. We first take the square root of it and what's the square root of a constant? Another constant. So I won't change its name, but it's really, this new C here is the square root of the old C. And of course the reason I'm allowed to do that is because it doesn't matter what the constant is here. We'll get the answer which is qualitatively the same for every C. All right, A dot over A is C over A. Now let's multiply by A. A dot is a constant. 
A dot is a constant. A dot is the velocity of the galaxy at x equals one. In other words, in the situation where the energy is positive, when the galaxy gets far enough away, the effect of gravity becomes negligible and it just moves off with a uniform velocity. That's what this is saying. A dot is just constant and that tells us that A is just proportional to time, the constant C times time. The galaxy just moves off with a uniform velocity. And so that tells us this is for positive C, for positive constant. C is not the speed of light here incidentally, although it happens to be a velocity. So it tells us that at late time this just moves as a straight line with constant slope. A dot is just a slope, moves with constant slope and the universe just expands with linear and unaccelerated, non-accelerated motion. Nothing very deep is going on here. Why am I using nuts when I should be using apples? All this is saying is if you throw this up hard enough so that it escapes from gravity with a good margin, with a margin, then it will just move off uniformly with constant velocity. So in fact, we find that. We find exactly that. What happens in between? Where's the crossover point? Well, I'll leave that to you. You can figure out approximately where the crossover point is. It goes as T to the two-thirds and then makes a transition to just T, and it's sort of fuzzy in between. That's the matter-dominated universe. Why is it called the matter-dominated universe? Because the right-hand side of this equation contains a term which is just the density of ordinary matter, ordinary non-relativistic, slowly moving matter. That's it. Okay, so, oh, oh, well let's go to the other case. What happens if C is negative? That's the case of negative energy. Now you should be able to guess. It's a particle or the apple moving away from the earth with less than escape velocity, has negative energy, and it just falls back down. So that's actually not hard to work out. I'm just going to tell you what happens. What happens, first of all, what happens, then you have a negative sign here. You have a negative sign here, and now the right-hand side can become zero. Initially, this is the biggest term. At a late time, this is the biggest term, but they have opposite sign. That means somewhere in between, this became zero. That is nothing but the point where the up-going apple simply is momentarily at rest when a dot over a is zero. Okay, so with C negative, there's a crossover, and instead of moving off linearly like this, bang, crash, everything falls together. That's the matter-dominated universe, and it was the classic cosmology until other things were discovered. Okay, any questions? My assumption when I get no questions is either I was perfectly clear or perfectly, yeah. I understand that at some point, those two terms cancel each other. You get A dot over A squared equals zero, but that other side always has to be non-negative. Which does? The right-hand side always has to be non-negative. Now, it's going to become negative. How can it be, A dot over A squared, that's the thing. Say it again. Your left-hand side has always got to be non-negative. Hence your right-hand side always has to be non-negative. You have to take a square root. You have to take a square root. The square root can have a plus part or a minus part. Right. Since the left-hand side is always non-negative, then the right-hand side is always non-negative. Yeah.
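The two limits just discussed, written side by side, with $\nu$ the mass per grid cell as before:

$$ \text{small } a:\ \left(\frac{\dot a}{a}\right)^{\!2} \approx \frac{8\pi G \nu}{3 a^{3}} \;\Rightarrow\; a \propto t^{2/3}; \qquad \text{large } a,\ C>0:\ \left(\frac{\dot a}{a}\right)^{\!2} \approx \frac{C}{a^{2}} \;\Rightarrow\; \dot a = \sqrt{C} \;\Rightarrow\; a \propto t . $$

For $C < 0$ the right-hand side vanishes at $a_{\max} = 8\pi G\nu / (3|C|)$, which is the turnaround point where the expansion stops and reverses.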
So, I'm just wondering what happens if it becomes zero, then it might become positive again or stay zero or something. No, I made an assumption that the universe was expanding. And that breaks down at the point where it comes to rest. The mathematics is entirely identical to this, yeah. The, what would you write? You would write one-half mv squared minus GMm over the distance. Am I missing anything? The mass of the earth, right. This is equal to the energy. If the energy is negative, let's call it minus, yeah, then this is plus GMm over r. Basically, or over the distance. It's the same equation. What happens if E is negative? If E is negative, then there can be a crossover point. It's the same, exactly the same thing. Exactly the same thing. You know what happens. You know that the, that the apple just falls back down to earth. It's just a question of taking the root, the, there's a plus root and a minus root of a square root, yeah. But what you're saying is that 8 pi G rho over 3, whatever it is, has to be greater than or equal to C over A squared. Otherwise, something squared goes negative, which is mathematically impossible. You say this has to be bigger than this? Or equal to that, yeah. Hmm. I got a question. The point is a can never get big enough to turn the sign of this. A can never get big enough, let's see, if a is big, then this term is bigger and it will go negative. But it never does get negative. No, no, okay, good. The right hand side never gets negative. You're right. The right hand side can never get negative. All that tells you is there's an upper bound on how big a gets. Right. So a gets so big and then stops getting bigger and comes back down. A is always in a region where this is positive, but it goes through zero. Going through zero is just coming to rest and falling back down. Right. And on the way back down, you have to take the negative root. a dot over a is negative square root. Good. All right. Okay, so that is the matter dominated universe. Three possibilities. Positive energy, negative energy, or zero energy, and three different behaviors. Who decides whether the energy is positive or negative? Who knows. But we're going to find out that the connection of that positiveness or negativeness is connected with the geometry of the universe, not tonight. Next time we'll talk about the connection of this constant C with the spatial geometry of the universe. That's what, that's the main thing, at this stage, that general relativity brings to bear on this. The equations are exactly the same, but the significance of this constant C takes on a new dimension and has to do with geometry, but not tonight. One more thing we're going to do tonight, we're going to take up the question of what happens if instead of being made out of material points slowly moving, what happens if the universe is made out of photons, made out of radiation? All right, to understand that, of course, we really do have to think about relativity, but really there's only one important idea, and it's just E equals MC squared. If we were to have done Einstein's equations, the right-hand side of Einstein's equations here, the right-hand side, the left, basically, all right, let's, I'll tell you what the connection with Einstein's equations is. On the left we have A dot over A squared, and then I'm going to add plus C over A squared, transpose it to the left, and on the right-hand side, 8 pi over 3, G rho, nu over A cubed or just rho.
Now this is mass density here, but Einstein's equations are of the form that something on the left having to do with geometry is equal on the right side to things that have to do with the density of energy and momentum. Energy momentum tensor on the right-hand side, geometry on the left, this clearly has to do with geometry, the rate at which the universe is expanding. I'm going to tell you next time that C over here has to do with the curvature of space. So this side here is Einstein's geometric side. The right-hand side is the source, namely the energy, momentum, whatever it is in the universe. What is the energy on the right-hand side of this equation? It's just the mass points and E equals mc squared. So oh my goodness, I'm calling this C, but please don't get the idea that it's the speed of light. It's sometimes called kappa, minus kappa. It's sometimes actually written minus kappa. Unfortunately, kappa is positive when the energy is negative and kappa is negative when the energy is positive. It's often called kappa. That's the basic connection with the general theory of relativity. But the only thing that we really need to do to this equation is to remember that when you go from Newton to relativity, what was mass density becomes energy density. All forms of energy density are what go on the right-hand side here. Well, it's energy density times the speed of light squared. Is that right? No, m is E over c squared. So it's energy density divided by c squared, which goes on here. We'll just set c equal to one. We're not going to worry about the speed of light. Speed of light c is equal to one. With that idea in mind, the right-hand side, instead of being the mass density, becomes the energy density, total energy density, some of which is just the E equals mc squared energy of the particles at rest. And some of it might be kinetic energy of particles, which we haven't really included at this point, the kinetic energy of relative fast motion of them. But if the universe is filled with photons, then it's radiation energy. So let's talk about radiation energy and how it's different from the E equals mc squared energy. All we have to do is think about a box. A box now, the box just corresponds to a unit cube in the grid here. Take one unit cube of the grid. It has a volume of a cubed. Let's suppose it has a certain number of photons in it. Now, photons have a wavelength. So here's our box, it's delta x equals 1 on each side. Its volume is a cubed. The ordinary particles, which are just sitting in that box and not moving, they have an energy which is just their mass. It doesn't change. The energy doesn't change those particles. They're just their mass and nothing else. But a photon behaves differently. So I'll tell you how a photon behaves. The first thing to know about a photon or electromagnetic radiation, first thing to know about a photon is that its energy is related to its wavelength. It's related to its wavelength to Planck's constant, h, what else? Lambda, the wavelength. The speed of light, which I'll set equal to 1. Important thing is that it's 1 over the wavelength. Now, something which I am not going to prove, but I'm going to tell you, is that if you take a box with a photon in it of a given wavelength, let's say you have a photon of a given wavelength and you expand the box, you expand the box slowly, for example. Then the universe is expanding pretty slowly. It takes 10 billion years for it to double in size. It's pretty slow. You take a box and you expand the box slowly. 
Then what happens to the photons inside it? What happens to the photons inside is their wavelength just stretches in proportion with the box. Now this phenomenon, anybody who plays a guitar knows this phenomenon very well. The box is replaced by the string. Here's the string of the guitar. It's pinned down at one end, at the bridge over there. At the other end, it's pinned down by your finger at the fret. Then you pluck it, this starts to vibrate. Then if you slide your finger, what happens is you slide your finger. You're effectively changing the size of the box. This is changing the size of the box. You change the size of the box. The wavelength just changes and the note changes in correspondence with the change in the size of the box. The same thing happens to radiation in a box. Radiation in a box, the wavelengths just stretch in proportion to the size of the box. Because the wavelengths change for the photons, that means the energy of each photon changes as you change the size of the box. The world is filled up with photons. You can pretend that each of the photons, the number of photons in each box stays fixed. That's a true statement. Number of photons stays fixed, but their energy changes as you change the size of the box. In particular, the energy of each photon will be proportional to one divided by the size of the box. So we take one box, change the scale factor, then the energy of each photon decreases. Same phenomena as the frequency going down or up as you change the length, the effective length of the guitar string. So what does that mean? Yeah? That makes sense if the box is conducting and the electric field has to go to zero at the walls, but you don't have conducting boxes in space. You get the same thing with just free space expanding, you get the same result. One way I've thought about it is, like if you had mirrors, this box may have mirrors. The mirror is moving away, so you get a Doppler shift when the photon bounces off the back. That's one way to think about it. That's one way to think about it. Last week you said that the expanding universe is sort of a mathematical artifact and that you can just think of particles. I never used the word artifact. You did. But you agreed with me. It depends on what your conclusion is. Okay. Well, what I'm trying to understand is that clearly there's something real going on, or the wavelength wouldn't change. Yeah. That's right. We'd have to do a little more quantum mechanics or a little more classical electromagnetism in the presence of an expanding universe to justify what I said. Let's take it for the moment as a given and we can come back to it. The answer, it is a correct statement that, and let me see if I can think of an example. Offhand I can't think of a good practical example where something similar happens. But nevertheless, it is true that the radiation in the box, the wavelengths of the photons readjust themselves to the sizes of the box. In other words, the photons stretch along with the universe. If you like, you can just think of the space stretching and with it the photon wavelength stretches. Let's leave it at that for tonight. For tonight, let's leave it at that. Let's just examine the consequences of it. Well, let's examine the consequences of it. The consequences of it is that the energy per photon decreases like 1 over A. In contrast to the case with ordinary particles where their mass stays the same and doesn't vary.
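In symbols, the scaling just described, with $h$ Planck's constant and the speed of light set to one as before:

$$ E_\gamma = \frac{h c}{\lambda}, \qquad \lambda \propto a \;\Rightarrow\; E_\gamma \propto \frac{1}{a} . $$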
By contrast with the ordinary particle case, the energy in the box, the mass in the box, the effective energy or mass, does not stay constant, but it decreases, total energy in the box decreases like 1 over the scale factor. Every photon, its energy decreases when it gets stretched. So every single one of them, and so compared with the previous case, there's one more factor of A in the denominator. The energy in the box goes like 1 over A, and the energy density goes like 1 over A to the fourth instead of 1 over A cubed. Remember previously, the density here we said was some constant nu divided by A cubed. Now we get one more factor of A in the denominator, and it's just the fact that every particle, this energy decreases by one extra power of A. That means relative to the previous case, there's an extra A in the denominator. Nu could be taken now to be the number of photons per unit box, and the energy density would go like 1 over A to the fourth. It's the only new thing that happens. This is matter dominated, and this is radiation. A dot over A squared, again, 8 pi over 3, G, some constant nu, and then A to the fourth, downstairs. We can also put in this term here, minus C over A squared, but let's study the case that corresponded to zero energy, just to see the difference. To see the difference. Let's see what different behavior we get with this formula. This formula says, again, by appropriate choice of the size of the grid, you can make all the constants here, if you like, be one. By appropriate choice of the size of the grid, you can rearrange it so that it's just A dot over A squared is one divided by A to the fourth. What was it before? Do you remember? One over A cubed. Let's see if we can figure out what happens. Again, we're going to solve the equation now. We're going to go through the steps of solving the equation. Let's take the square root. A dot over A equals one over A squared. That means that A dot is just one over A, right? Did I do that right? That's dA by dt. All right? We'll do the same trick as before. Turn it upside down. dT by dA is equal to A. So now, if we think of A as the independent variable, we're looking for a function whose derivative is just A. What function has its derivative A? A squared, right? A squared. One half A squared, to be exact. So that says that T is like A squared. Maybe there's a one half here, but that's not important. The thing is that T varies like A squared, or that the scale factor varies like T to the one half, square root of T. Well, I'm not much of an artist, and I can't really draw the difference between T to the two thirds and T to the one half. There's T to the two thirds. T to the one half is a little bit smaller, huh? It doesn't grow as fast. So T to the, but it looks pretty similar and does pretty much the same thing. You'd have to be an astronomer who is really interested in this to care about the difference between T to the one half and T to the two thirds. By that I mean, you know, they're sufficiently similar, qualitatively similar. But if you really want to know what's going on in the universe, the difference between T to the one half and T to the two thirds can be very important. Okay, so that's the radiation dominated universe. The radiation dominated universe expands like the square root of T. It's not T to the two thirds, like matter; it's T to the one half. What about the mixed case? Supposing a universe has both ordinary particles and radiation, like the real universe really does.
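And the radiation-dominated solution, written out the same way as the matter case, with the constants again set to one by the grid choice:

$$ \rho_{\text{rad}} \propto \frac{1}{a^{4}}, \qquad \left(\frac{\dot a}{a}\right)^{\!2} = \frac{1}{a^{4}} \;\Rightarrow\; \dot a = \frac{1}{a} \;\Rightarrow\; \frac{dt}{da} = a \;\Rightarrow\; t = \tfrac{1}{2}\,a^{2} \;\Rightarrow\; a(t) \propto t^{1/2} . $$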
Let's worry about that. Let's come to that. The mixed case, neither pure radiation nor pure non-relativistic particles. In that case, the energy density is two components, one for radiation that goes like one over A to the fourth and one for ordinary matter that goes like one over A cubed. So the kind of equation that we're going to have, I'm not going to write the details, I'm going to write down the general form of it, will be A dot over A squared. And that's going to have two terms now, one of which, call it constant number one divided by A cubed. That's ordinary particles. Constant C1 is just some measure of the number in each box. And then another term, and they're both positive, the energy of particles and the energy of photons is positive. And the other is some other constant over A to the fourth. That's the equation of motion for a universe which contains, like our real universe does, ordinary non-relativistic matter plus radiation. Which one is more important, one over A cubed or one over A to the fourth? Tenzane. Right. When A is big, which one is more important? This one. Excuse me. My mother always told me not to take such big bites from the apple. For small a, this is the more important. And the smaller the a is, the more important it is. When A is big, this is the more important. And again, swamps the other term when A is really big. So what that tells us without too much work is that when A is small, in the beginning of the expansion, the only thing that's important is the radiation. This one. The radiation term is dominant compared to the material protons, neutrons, galaxies. There were no galaxies at the beginning, but the radiation was the most important thing. And so the universe started expanding like T to the one-half. But then eventually this term took over, became the more important term, and it made a switch. Let me get another color. It made a switch. And started to grow like T to the two-thirds. Yes. Yes. That's correct. That's correct. And you might say, well, how do you know that energy of matter doesn't get converted to energy of radiation? That's something we're going to have to discuss. The answer is when things get cold enough, when the universe expands enough, things cool down and there's not much exchange between radiation and ordinary matter. And they are pretty well conserved, each one separately. They don't talk to each other too much. In fact, once things cool down to a certain temperature, that's a rather high temperature, one in a thousand degrees or more than a thousand degrees, 10,000 degrees. But once things cool down to a certain temperature, pretty much the photons are just free-streaming and don't care much about the particles. And the photons are too long wavelength to even scatter the particles very much. So that's right. We're exactly right that we've assumed that the two constants here are pretty much time independent. All right. That's the nature of a universe built of two components. Now, that's only two components. That's only two components. And there is a third component. There is a third component. It's the discovered dark energy. That's what we're going to talk about next time. We have ordinary matter. We have radiation. Those are commonplace things. Dark matter belongs to the matter category. It's simply a part of the ordinary particle matter that's invisible simply because it doesn't have any charge. It doesn't radiate much. So we don't see it optically. That's part of C1. This should be C2 here. I meant to write C2. 
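In symbols, the two-component equation just written down is (the subscripts m and r on the two constants are mine):
\[ \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\left(\frac{C_m}{a^3} + \frac{C_r}{a^4}\right), \]
so for small a the radiation term dominates and a grows like t to the one half, while for large a the matter term dominates and a grows like t to the two thirds.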
Or maybe we should call it Cm for matter and C radiation for radiation. There's more than one component to the radiation. There's photons. It's presumably gravitons. And there are neutrinos. Now neutrinos have mass. But the mass is so tiny that even today the neutrinos are moving with very close to the speed of light and simply mimic the same behavior as the photons. So there's a radiation component. The radiation component consists of all particles which are so light that they're moving with close to the speed of light. The C1 or the C mass here consists of all particles which are heavy enough that they're basically at rest relative to us nearby. And that's cosmology sort of as it was known 30 years ago. This is the cosmology of 30 years ago. Yeah. Mark. So in that box that's expanding and the wavelength of the wave is stretching out, the total energy inside the box is decreasing. Where is that energy going? The energy is doing what? Decreasing. Decreasing, yeah. Doing work on the walls to expand it. Well, another way to say it is it's going into the kinetic energy, the kinetic energy of expansion. This equation is, remember what it was? It was the equation of conservation of energy. That's where it came from. So if you think about it that way, then start tracing. If this part of the energy decreases, then, or let's put it this way. Let's write the equation in a different form. Let's put a minus sign over here. Well, let's first, one step at a time, put minus, put minus, and make it equal to zero. I haven't changed anything. Now change the sign of the whole thing. This plus plus. This equation reads the following way, that there's a certain amount of energy and mass and radiation. If it changes with time, that energy gets transferred to a negative term. What's a little bit bizarre is that the kinetic energy of expansion, this term here is negative. This can be read as a conservation of energy. It can be read, energy never changes because it's always zero. Contains three terms, matter, radiation, and a term that has to do with the rate of change of expansion. Normally in the ordinary world, where the rate of expansion is very, very small. That's a tiny, tiny number. This here is a tiny, tiny number in our world at the present time, a tiny number. This never changes very much. It's not zero. I take it back. This is not a tiny number. It's a number, but it doesn't change much over a course of time. The sum of these two is zero and it stays zero. That's the conservation of energy. Am I saying that right? I'm not saying it right. Yeah? The conservation of energy is a symmetry with time. When you think about it in terms of the northward charge, would the fact that A is time dependent sort of excuse you from conservation of energy? There's two ways to think about it. You could say the universe is a time dependent background that everything moves in. And because the universe is time dependent, that means all of your equations have a time dependence. There is no longer a time translation symmetry and energy doesn't have to be conserved. The other way to say it is that there's a time translation symmetry. If you start the universe at this time or that time or that time or that time, you get exactly the same response. If you start at the universe at t equals minus seven, it would be exactly the same as starting the universe at t equals plus four. And thinking about it that way, you'll discover there's another term in the energy associated with the expansion rate. Let's not go into it now. 
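One way to write the zero-total-energy reading described here, as a sketch:
\[ \frac{8\pi G}{3}\left(\frac{C_m}{a^3} + \frac{C_r}{a^4}\right) \;-\; \left(\frac{\dot a}{a}\right)^2 \;=\; 0, \]
matter plus radiation plus a negative term built from the expansion rate, with the sum staying zero at all times.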
This is very, very subtle and I want to spend at least 15 minutes on it later. Energy conservation. But just to keep in mind that these equations, their origin was energy conservation. So there's no way that we can be violating energy conservation. Yeah. You said the Friedman equations are interesting because you're going to derive them through Newtonian mechanics or general relativity. Newtonian mechanics presupposes a flat geometry and relativity assumes a curved geometry. A geometry of space. If we get space time for a minute. All right. Go about geometry of space. Einstein's equations permit flat space. These are, OK. Would that mean that space is flat because you can derive it from either not space time but just space? Would that be a conclusion? There's our equation. I think you're raised to it. Yeah. OK. Minus some kappa over a squared, OK. This had to do with the total energy. The fixed total energy. If kappa is positive in this equation, the energy is negative. If the kappa is negative, the energy is positive. Sorry. It goes this way. Kappa is minus the energy, essentially. Do I have it right? No. I have it wrong. I have it wrong. OK. Doesn't matter. One way or the other. This term over here has to do with the curvature of space. So we're going to study three cases. There are three interesting geometries that we'll be interested in. There's a flat geometry where space itself is flat. It goes on and on endlessly, homogeneously. And triangles have the usual properties. That's the case k equals 0. This is the case where the universe is curved like a sphere. So if you go around it, you come out the other side. You go around it. Finite and compact, we say. That's the case k equals positive. Any positive number here corresponds to a radius of curvature of the expanding universe. Expands like a balloon. You know, the classic picture of the expanding balloon. That's k equals positive. k equals negative is a negatively curved space. And we're going to have to talk about what a negatively curved space is like. Less easy to visualize than the sphere. So the next time, it's exactly what we're going to do. We're going to talk about the geometry of space and the three cases. Flat, positively curved, and negatively curved. They correspond to k being 0, positive, or negative. So would that mean that in the Newtonian case, k has to be 0? Well, except that you can sort of mimic up the case with k not being equal to 0 if you want, but it's a little bit of a fake. Yeah. A little bit. Yeah, okay. Right. Question, a box full of photons with a box that's expanding and the number of photons stays the same. Yeah. The energy going down with that box. What happens? Where does that energy go? You may answer the question, but I don't understand. Where does that energy go? It does work on the box. Take a literal box. A literal box with reflecting walls. Take a box with reflecting walls and ask what happens to the energy in the box when you expand the walls. Okay. Well, what does happen? This pressure in the box. This pressure exerted by the photons bouncing off the box. The pressure when you expand the box does work on the walls of the box. For a real box, the energy could go to a number of places. They could go to stretching the distance between the molecules in the box. It could go into making kinetic energy of the walls of the box, but one way or another, it does work on the box and increases the energy of the box itself. 
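With the curvature term restored, the equation and the three homogeneous cases mentioned here can be summarized as (a sketch in standard notation):
\[ \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho \;-\; \frac{\kappa}{a^2}, \qquad \kappa = 0:\ \text{flat}, \quad \kappa > 0:\ \text{closed (sphere)}, \quad \kappa < 0:\ \text{open (negatively curved)}. \]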
Either by stretching the little hooks, lost springs that are inside the box that hold the molecules together or causing the box to have a kinetic energy, kinetic energy of expansion or whatever else. Just heating the box. Just heating the box. So that energy has to be going in the space, in the grid, right? You said the grid is actually absorbing the energy. Yeah, in a sense, that's right. Yeah. Right, we'll talk about that. We'll talk about energy conservation more clearly. Right. But it's still, it's a correct thing to analyze it. This is not obvious, but it's correct. It's correct to analyze it as if it were stuff in a box. And the stuff that gets reflected off here from a real box with reflecting walls is compensated for in the fake box case, by every particle which goes across here is approximately compensated for by a particle which comes in the other side. So on the average, the expanding space behaves like an expanding box. Particles instead of reflecting go from one box to the other, but on the average as many particles go from this box to this box as from that box to this box. And so in a sense, it can be mocked up by saying that the particles reflect off the box. Increasing the size of the box means that work is done against the walls and we'll have to talk about what that means in general relativity when we come to it. Yeah. A couple questions. How soon after the Big Bang did the universe cool down to where the matter and radiation components were not really interacting or even changing? About 100,000 years. And then the second one was what astronomical evidence do we have to see the T to the 1 half behavior? Is that it? What evidence? Yeah. What evidence do we have to observe the T to the 1 half expansion? Is that in the cosmic microwave background? That's the expansion history of the universe. And remember that looking out at the universe, you effectively see a history. Looking to different distances, you're seeing the universe at different times. So by looking at the universe at different times, assuming that it's homogeneous, you can reconstruct the history of expansion. We'll talk about that. That's something that we can do. Basically by looking at the combination of density of galaxies on the sky, distance of galaxies, and we'll spend a little bit of time going through that, the combination of careful measurements of distance, density, and so forth allows us to reconstruct the history. And we can reconstruct the history because we're seeing it at different times. It seems like the T to the 1 half behavior would be before galaxies formed. Indeed. But we can still... Yeah. Yeah. Wouldn't that be sensitive to the big bang in the synthesis? I mean, wouldn't we be off on those calculations if it didn't do the same thing? Yeah. And combining it with CMB and all of those things. It was before galaxies formed, but in the temperature history of the universe, there's plenty of evidence that this picture is right. No, you're right. The T to the 2 thirds is easy to see. The T to the 1 half is largely has to do with microwave background and things like that. Get the equations down and then we'll go and talk about how we know how much of this picture is right. Any answers? We know it's wrong. Yeah. Okay. I think that this isn't a problem that I've understood. I can't hear you. I said I think that this is not a problem that I don't understand why. Okay. The apparent speed of an object relative to an observer is a function of how far away it is. I will expand the answer. Yeah. 
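A standard way to make the work-on-the-walls argument quantitative, assuming the usual radiation pressure relation (this little derivation is mine, not spelled out in the lecture):
\[ dE = -P\,dV, \quad P = \frac{E}{3V} \;\Rightarrow\; \frac{dE}{E} = -\frac{1}{3}\frac{dV}{V} \;\Rightarrow\; E \propto V^{-1/3} \propto \frac{1}{a}, \]
which reproduces the 1/a drop in the photon energy per comoving box.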
So, conceptually, it seems like an object far enough away could appear to be exceeding the speed of light. But I don't think that doesn't violate this idea that nothing can exceed the speed of light, but I don't understand why. Well, you have to ask what it really says, what the principle that things cannot exceed the speed of light. What it literally says is you can't see anything going past you if faster than the speed of light. It doesn't say that two objects in an expanding universe can't be receding from each other at faster than the speed of light. That is quite allowed. And as you point out that if you have a formula which says velocity is equal to distance times a Hubble constant, then you make the distance big enough, the velocity will sure enough exceed the speed of light. It's connected with the idea of a horizon, but you're right. These equations say that things can move, recede from each other at faster than the speed of light, but for sure they don't allow you to send signals in time faster than the speed of light. They don't allow you to witness something moving past you faster than the speed of light. Does this mean that objects sort of at the limit right before where the expansion rate exceeds the speed of light, eventually disappear? In a sense, if you know what I mean. Yes, they'll pass through the horizon and we won't see them. But that has to do with dark energy. Oh, that has to do with dark energy. Without dark energy, nothing would really move out of our ability to see. I will show you how the geometry works and I will show you how all these things can be understood. But little steps. Good. Okay. Yes. Is the beginning of the universe that we never see more of the universe, no matter how long we wait for it today, is that related to the fact that objects far and away are receding at a speed faster than, would appear to be traveling faster than the speed of light? It's more than that. It's the accelerated expansion of the universe. That is true. That is true, but the universe will slow down. No, sorry, excuse me. If this picture were right, and the universe would slow down, then things which are currently moving away from us faster than the speed of light, would in the future be moving away from us with less than the speed of light? And at that point, they would become visible. So if this were the pattern, eventually we could see everything no matter how far away. This is not the pattern. The pattern is that. And if this pattern is correct, then there will be, then there is an ultimate limitation that you cannot see past a certain point. But let's hold back on that. We're going to spend a lot of time on those particular things. Those are the really interesting things, and we'll explore them. The way I always thought about the question of why would the, when the box expands, the radiation, the protons, there's energy, is that there's a wave phenomenon going on, the wavelength increases. Yeah. What is the reason the wave thing about it? That the wavelength increases. Yes. So the energy the photons have is decreasing because they're wavelength increases. That is correct. The question is why do you have to suppose that as the universe expands that the wavelength of a photon will match it? Why is the wavelength of the photon sort of rigidly attached to the grid, if you like? That's the question. Why is the wavelength of the photon somehow rigidly attached to the grid? 
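The point about recession speeds can be put in one line: with the Hubble law the recession speed exceeds c for anything farther than the Hubble distance,
\[ v = H_0\, d > c \quad \text{when} \quad d > \frac{c}{H_0} \ \ (\text{roughly } 14 \text{ billion light years for the measured } H_0), \]
without anything ever moving past a local observer faster than light.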
Now anybody who knows a little bit about waves and wave propagation knows that there's what's called an adiabatic invariant. The adiabatic invariant is the number of nodes, the number of times the wave passes through zero. And as long as you change things slowly, the number of nodes stays fixed. This is called an adiabatic invariant. And if we had a some sort of closed circle that waves could propagate on, this is something in principle we could build. And we have waves propagating on it. And somehow we, well I was trying to think of an example. I couldn't think of one. I still can't think of one. But imagine we have a dial that allows us to change the radius of this ring here that things are propagating on. In other words, change the circumference of it. Changing the circumference of it will not change the number of nodes. You might ask yourself how could it change the number of nodes? Suddenly, I mean we're going to change the radius gradually or continuously. At what point would you expect here how many nodes were here about 10 or something like that? You know what everybody know what a node means? Number of times that the wave goes to zero. It has to be integer. For one thing it has to be integer. The wave going around, a standing wave, let's take a standing wave for simplicity. A standing wave on here has to go around and be periodic. So the number of nodes has to be an integer. Now you start stretching the space that it's propagating on. It can't suddenly jump. At what point is it going to jump from seven nodes to eight nodes? It doesn't jump. What it does is the waves just get longer wavelength and shorter wavelength to match the thing that it's on. It's the same phenomenon except replace this ring by the universe itself. Number of nodes, number of zeros is invariant, never changes, and the only way that can be accommodated is if the wavelength of the photon increases. The wavelength of the wave increases. Now any wave, even if it's not a nice thing like this, let's suppose it's just a lump over here and then nothing, nothing over here. By Fourier analysis, it can still be expanded in waves that have this nice periodic smooth structure. Each one of them has a number of nodes which is fixed, and so that's good enough for us. The individual waves of definite wavelength can't jump the number of nodes, and so instead the wavelength has to accommodate the size of the thing that it's propagating on. Yeah. As you look out in the universe and galaxies move faster and faster, relativity says the mass of them relative to us would, that's the thing that's faster and faster, correct? The energy. The kinetic energy. If we were to weigh it, it would weigh more. Well, you can't weigh something as far enough or as far away. No, no, no, no, it's important. It's important. But go ahead. I think you may have answered my question, but assuming you can, it grows continuously, at some point it goes exactly at the speed of light, which means the mass must be zero. Internet. Mass. Oh, sorry. Yeah, the momentum. Mass is a thing that never changes. Mass never changes. Mass of the electron is always whatever it is. Yeah, I think what it means is the question itself is silly, because you can't talk about the mass, you've got to talk about the energy. You have to throw up the energy. From a naive perspective, the mass goes from really, really a lot to zero, since part of things with mass can't go at the speed of light. But as you just said, that is not the way of thinking at all. Yeah, I understand. All right. 
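The node-counting argument in symbols, for a standing wave on a ring of circumference 2 pi a (a sketch):
\[ n\,\lambda = 2\pi a, \qquad n \ \text{fixed (adiabatic invariant)} \;\Rightarrow\; \lambda \propto a \;\Rightarrow\; E_\gamma = \frac{hc}{\lambda} \propto \frac{1}{a}. \]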
Well, look, to do, yeah, okay. It takes more time and space. What's that? Time and space access gets interchanged, right? I mean, what does that thing work out? That is a rich thing to work out. I mean, if you just said, if things are far away, it starts to grow faster than light, then at that place, what we see as a split, what we refer to as space, is fabled with what time access at this place is. Why don't we wait? Why don't we wait? We're going to talk a great deal about horizons and the geometry. The point is, we cannot analyze those kinds of questions by just thinking in terms of Newtonian or even special relativistic geometry. We have to analyze them by saying there's a metric, that there's a geometry, and we can analyze that geometry, and we will do that. We can just, we can go only so far without introducing real relativity. So far, we've been okay. We didn't have to introduce relativity. And the reason why is because we stuck to a small enough region, all these equations were derived by looking at galaxies which did not have to be very far away. We followed some galaxies which are reasonably close, and we follow them, if they're close enough, they'll be moving with a very tiny fraction of the speed of light. So that's what we did. We followed things which were close enough that they never got anywhere as near the speed of light. If we want to study the whole universe and out to distances, out to distances where the Hubble constant times the distance is comparable to the speed of light, if we want to study the universe on that scale, then we can't do it without relativity, and we really can't do it without general relativity. So you're jumping ahead of the game and trying to get there thinking only about Newton and special relativity. Coming back to the energy of the photon, just by stretching the wavelength, how does it decrease the energy of a single photon? It's more the density that is decreased, right? What's that? Say it again. If you stretch the wavelength, you say that the energy of an individual photon decreased. All that happened was that the energy has spread over a longer wavelength. No. No. The energy itself decreases, and the easiest example is to think about the photon in the box or the violin string. The photon in the box exerts pressure on the walls of the box. Because it exerts pressure on the walls of the box, it does work on the box as it expands. The work is the pressure times the change in volume, and that's equal to the change in energy of the photon. So the photon is doing work on the box, and in doing work on the box, its own energy is decreasing. Look, this is also true, incidentally. Forget photons. Just take ordinary particles, a gas in a box. What happens to the temperature in this box if you increase the volume? Down. What does that mean about the kinetic energy of each particle? If you do it infinitesimally slow, it doesn't go up. If you do it infinitesimally slow, it does or does not? It does not. It does. Yeah. If you do it very, very suddenly, what will happen? If you do it suddenly, here's the molecules, OK? And you absolutely, instantaneously, suddenly increase the volume of the box. What you find is a big, empty space. The molecules are still in the original volume here, and nothing has changed their energy, their momentum, or nothing else. That's the non-adiabatic case. The anti-adiabatic case means you increase it slowly, and slowly means there's plenty of time for molecules to bounce off the walls many times. Then it will cool. 
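The slow-expansion cooling of an ordinary gas mentioned here follows from the textbook adiabatic relation (not derived in the lecture; gamma is 5/3 for a monatomic ideal gas):
\[ T\,V^{\gamma - 1} = \text{const} \;\Rightarrow\; T \propto V^{-2/3} \propto \frac{1}{a^2}. \]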
If you do things suddenly, it's likely to heat. Even if you expand or contract, if you do things really suddenly, it's likely to heat. Or at least not cool. But if you increase the volume of the box slowly, then it will cool. Is the whole rigid shift the same as this weight-length increase, the scale? It's connected. It's disconnected. But it's a phenomenon that doesn't particularly have to do with photons. It simply has to do with pressure against the wall, and increasing the box means that work is done by the pressure on the walls of the box. And to conserve energy, it has to cool. So this is the phenomena of the photons cooling. Okay. Good. For more, please visit us at stanford.edu.
(September 21, 2013) Leonard Susskind solves the expansion equation for universes with zero total energy, and then adds a non-zero total energy term, which leads to an exploration of matter versus radiation dominated universes.
10.5446/15063 (DOI)
Stanford University. We haven't talked much about geometry. We've supposed that space is flat, space. And we haven't even talked about space time geometry at all, this quarter. In fact, as a matter of observation, space is very flat. But it's not a principle. And it's important to understand, very important to understand, what cosmology would be like if space were not flat. And as I've emphasized over and over, it's not that we know that space is flat. We just know it's very big. Whether it's flat, positively curved or negatively curved, we really don't know. And so it's important to investigate the various possibilities. Now, we don't know a lot on scales of space much larger than 10 billion light years, 20 billion light years. But we will make some assumptions. The assumption will not be that space is flat, but at least for the time being, let's continue to assume that it's homogeneous, homogeneous and isotropic. Is that true? Maybe it's not, but we'll never know until we learn to find out its consequences. And the only way to find out its consequences is to assume it and see what it says. Then we should assume the opposite and see what it says. But because almost all cosmology is based on that assumption, it's a good thing to explore that space is not flat, but homogeneous. Now, what does that mean? Space not being flat means that it's curved. What kind of space is a curved? Well, a sphere is curved. A paraboloid is curved. An ellipsoid is curved. An ellipsoid with a bump on it is curved. So all kinds of curved surfaces that you can think of, now I'm thinking about two-dimensional surfaces, will come to three-dimensional spaces soon enough. All sorts of spaces you can imagine are curved, but only a very, very small number of possibilities are consistent with homogeneity. Homogeneity means the space is everywhere the same. Somebody located at a particular place in the space looks around him, sees everything around him, and sees exactly the same thing that somebody else sees and in the other position. That's called homogeneous. A ellipsoid, or let's start with a paraboloid. A paraboloid is most certainly not homogeneous. It's more curved near the tip of the paraboloid. It's less curved far away. An ellipsoid, a long pointy ellipsoid, is different near the poles of the ellipsoid than it is near the waist of the equator of the ellipsoid. And if you were walking around on it, you would notice the difference. You would notice the difference seriously. You would not be fooled unless, of course, it was huge and you couldn't sample the curvature. So ellipsoids, spaces with bumps on them. The surface of the Earth, if we take into account the mountains and the valleys on it, are curved surfaces, but they are not homogeneous. What kind of spaces can be curved and homogeneous? Well, basically there are only two kinds. Two classes. In fact, apart from the overall size of them, there are only two kinds. Three kinds, excuse me, three kinds. The first kind is flat space. Let's begin by thinking about a metric for space. Not space time, now, just space. The metric of space. Flat space is just good old flat space, right? Flat space is just good old flat space. And if we were talking about two-dimensional flat space, it would mean a plane. And a plane, the distance element, the distance between two neighboring points, we know what it is, is just dx squared plus dy squared. If we want to add a third dimension, all we have to do is add a third dimension, straightforwardly, dz squared. 
That's the way we describe spaces, by giving them metric tensor, or giving their metric the distance between any two neighboring points. If we know that, we know everything about the space. This is ordinary flat space. Okay. You could. We don't, but you could. Okay. Let's think about the two-dimensional example. And instead of working in Cartesian coordinates, let's work in polar coordinates. Polar coordinates have some nice feature when you think about cosmology. They have a center. And if you think of the center as you, they have a natural place to put you. You look around, you look around in the sky, you look up into the sky, you look around, you're looking around at angles. Your visual field is a field of angles. If the world was only two-dimensional, then literally looking around you would be looking around at angles. And what you would see in front of you would be, you know, laid out in an angular space. So it's useful to think in polar coordinates in cosmology. In the two-dimensional example, we introduce an angle, theta, and a radial variable, r. The usual thing. We've done this over and over. And I think everybody knows the metric. The metric in polar coordinates, the same metric, ds squared, is equal to dr squared, distance radially, plus not d theta squared, but what? r squared d theta squared. This is just a statement that the distance interval for a given d theta, there's a given d theta, a small angle, gets bigger and bigger and bigger as you move away, and it gets bigger linearly with the distance, but the square of the distance grows quadratically with r squared. So that's the metric of the ordinary plane. And we're going to give it another name. We're going to invent the new term for d theta squared. What is d theta squared? d theta squared itself is a metric. It's a metric for a particular space. It's a metric for a circle. If I had a circle, and incidentally, a circle is a kind of sphere. It's a one-dimensional sphere. It's one-dimensional. You move along it. Somebody living on it would just think they're living on a one-dimensional space. And sometimes a circle is called a one-sphere, meaning to say it's one-dimensional, but it has a topology that closes back on itself. And sometimes the circle is called omega-1. Omega doesn't, I don't know where omega came from, but one is just the fact that it's one-dimensional. And we sometimes refer to d theta squared as just d omega-1 squared. It just stands for the metric of a unit circle. d theta squared is the metric of a unit circle of unit radius. When I say unit circle, I mean radius equal to 1. The metric would just be d omega-1 squared. We're going to adopt that notation because it'll curve over and over. We don't always want to have to write the details of a metric, so I have a name for it, d omega-1 squared. Okay. Any questions? When you say that d theta squared is the metric for a circle, are we saying that it measures the distance along the circumference? Is it the square distance of a, yeah, yeah, yeah, yeah. It measures the distance along the circumference. Right. Now, it wouldn't make any difference if we took the circle, think of it as a piece of string going around in a circle, and formed it so that it looked like this. It would still be true that you could still label the circle by the same angular coordinate. Still label it by the same angular coordinate so that equal intervals along equal angular separations. 
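In the notation being set up here:
\[ d\Omega_1^2 \equiv d\theta^2, \qquad ds^2_{\text{flat, 2D}} = dr^2 + r^2\, d\Omega_1^2. \]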
And you would say that the metric of the formed circle would still be exactly the same thing, d theta squared. So it doesn't matter that you draw it as a circle. The circle is just a space along which if you go a certain distance you come back to the same place after a distance. And in this case, the distance would be 2 pi. If you measure in meters and you lived on a circle happened to have a circumference of 2 pi meters, you would call it a unit circle in the metric system. We'll just think about the abstract unit circle which has a length around it of 2 pi. But then, when it goes into the polar coordinate metric, each circle has its own radius. The radii increase and so we put r squared in front of the omega 1 squared. And that's the metric of the flat plane. That's flat space. In this form, it looks like there's a special place. But there isn't a special place. Every place is the same as every other place. It's just we use our place as the center. Now, let me draw it on edge sort of. There's a series of family of circles. And we can think of the flat space as a kind of nested sequence of circles of increasing size. Think of it that way. Think of the flat space as being composed of a nested series of circles. Each one having a radius, the circle, has a radius equal to its distance from the origin here. All right, next let's go to the sphere. The sphere is also a homogeneous surface. Every place on the surface of a sphere is exactly the same as every other place. So it's not the plane, but it is homogeneous. And let's discuss its metric. Again, now I'm talking about the two sphere. We talked about the one sphere, that's omega one. Now we're going to talk about the two sphere. The surface of the earth is a two sphere. It's two dimensional and it's a sphere. A sphere, of course, has a particular meaning. But for us, the main meaning that it has is that it has uniform curvature. It's everywhere is the same. And it has the property that if you walk around, that you come back to the same place in any direction. That's the main properties of it that we care about. So let's draw it. Here's the ordinary two sphere. And let's also think of it as a sequence of circles. In this case, we're going to start at this point. We're going to call that, here's where we are. I'm right at this point on this sphere. And now I look out at different distances. And what do I see? I see a series of spheres. Nested spheres, I look out. Near me, I see the first sphere. So when I say look out, I mean I'm an astronomer. I'm looking out away from my own position. I happen to be a two dimensional astronomer instead of three dimensions. And at one light year, I see everything arranged on a sphere. At two light years, I see everything arranged on another sphere. Another sphere, another sphere. But there's something different about this series of nested spheres. And it's that as I move out, instead of growing, the spheres stop growing. They grow more slowly as I move out away from the center here. And eventually I come to a point where the spheres don't grow anymore and then contract. Now this is not with time, this is with distance. The one sphere is the circles out of which the two spheres are formed. Instead of growing linearly with radius, they grow for a while and then they shrink. And in fact, we know how big each one of these spheres is. If the spheres are characterized by an angle, let's call that angle r. r is the distance from this point as measured, let's say, in angle. So r is zero over here. r is pi over here. 
That's just a way to label the sphere. That's just a set of coordinates to describe the sphere. Right where we are, that's r equals zero. The furthest we can see until the sphere closes up on itself at the back end, we'll call that r equals pi. What is the metric of the sphere in terms of r? It looks a lot like this, but slightly different. This is flat. Now, sphere ds squared equals dr squared. What's the radius of each one of these circles? Sine r. Sine r. At this end, where r is equal to zero, sine r is equal to zero. What's sine of pi? Also zero. Sine of pi over two is pi over two over here. Pi over two. Pi over two. On the equator, on the equator, the radius of each one of these spheres, instead of growing with distance like this, stops growing, comes back, and the answer is sine squared of r, sine squared means the square of the sine, becomes the metric of a circle, of a unit circle. Instead of r squared, we have sine squared r. We can write d theta squared here if we like. d theta would be the angle around here. But we can also call it d omega one squared. Same thing. So let's put the omega one squared. All right, so we built, and now, and now, what is this metric of? It's the metric of the two sphere. All right, so let's give the two sphere a new name. The new name of the two sphere is d omega two squared. Omega two is a two sphere. This is omega two. Omega one is a circle. Well, this pattern just continues. If I want to make a three-dimensional sphere, a three-dimensional sphere is a three-dimensional space. Everywhere is the same. If you go out in any direction, you come back to yourself. Think about looking out in the sky again. You look out in the sky, you see things at a certain distance. They form a two sphere now, not a circle. They form a two sphere around you. You look a little further, another two sphere, even bigger. You look a little further, and you see a very big two sphere, but then what happens if you look a little bit further? The two sphere starts to get smaller and smaller and smaller and smaller until you see around to the other side of it where it shrinks again to a point. Three spheres are just as good a space as two spheres. They're harder to visualize. Your visual cortex doesn't have the machinery to be able to visualize directly three spheres, but they're just as good. What they are is instead of a series of nested one spheres, which grow and come back and collapse as you go further and further away, there are a series of nested two spheres. One sphere, another one inside it, and a bigger one, bigger one, bigger one, and then they collapse. So you could think of it then as you move further and further away in R, at first you see a small sphere around you, a small two sphere around you. Then you see a bigger two sphere around you. I'll draw it as a two sphere so that you can see it. You see an even bigger one at further distance, and then they start to get smaller again. And where do they shrink to zero? At R equals zero, and again at R equals pi. No, the observer is on the sphere. Oh, no, the observer is at this end. This series of spheres here refers to the spheres that you see around you. The spheres that you see around you, the closest one, my head, my head is the smallest sphere around me, right? It's pretty small. That's what's went over here. I look out a distance of meter, and I see a bigger sphere. I look out a distance of ten meters, and I see a bigger sphere around me. 
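Collecting the two-sphere metric in the same notation, with r running from 0 to pi:
\[ d\Omega_2^2 = dr^2 + \sin^2\! r \, d\Omega_1^2, \qquad 0 \le r \le \pi, \]
so the nested circles have radius sin r, growing out to the equator at r equal to pi over 2 and shrinking back to zero at r equal to pi.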
If I were living on a three sphere, then I would look out to a distance which would be a biggest two sphere, and then they would start to shrink again. It's exactly the same thing as here. You see around you a small sphere, you see a bigger one, or a circle. A bigger circle, a bigger one, and then they start to get small. You're over here, so we could have drawn this by saying at this end we have a point, a little bit further away we have a circle, then we have a bigger circle, then we have a bigger circle, and then the circles shrink again. Now in the case of the two sphere, I can draw it as a sphere, a recognizable sphere, but just think of it as a series of circles which grow and then collapse. And they grow and collapse as a function of r, from r equals zero to r equals pi. If you want to go another step to three-dimensional spheres, you think of them as a nested series of concentric two spheres around you. Okay, now you should be able to guess what the metric of a three sphere is. This is the metric of a three sphere. It's the omega two squared equals, again there's a dr squared. There's always a dr squared that's distance away from you. And then there's the angular part, and the angular part now will not involve circles, but the angular part will involve two spheres, a series of two spheres around you. And that will be sine squared r, the omega two squared, not the omega one squared, but the omega two squared. Here's flat two-dimensional space. What about flat three-dimensional space? Here are the, these two spaces are spheres, two spheres and three spheres. This should be, oh, the omega three squared. The omega three squared, which is the three sphere, and we may be living in a three, on a three sphere. Actually we do live on a three sphere. Space may be a three-dimensional thing like this. Alright, this was the easier to imagine analog. Here's flat two-dimensional space. What about flat three-dimensional space? What would you guess? Do we have a guess for flat three-dimensional space in this form? We have another angle, and that other angle forms an omega two around you. It's just this. Flat three-dimensional space in polar coordinates looks like this, where this is a two sphere. It just corresponds again to the two spheres that surround you. Polar coordinates. This is standard notation. It's standard notation to call a metric of a sphere the omega squared. And you put a little index in to indicate how many dimensions it has. There's another way to view spheres. I might as well tell you what it is. It's in some ways a little more intuitive. Well, I'm not, no, I'm not sure, I'm not sure it's intuitive. It requires some extra baggage. A circle. Now a circle, if you were a little ant living on the circle, and you couldn't look off the circle, you couldn't look into the circle, all you could do is receive light from along the circle. You could communicate with your neighbors, but you would have no way of telling whether it was truly a circle or whether it was a thing like this, or even if there was no other dimension. Perhaps all there is is the space along the line with no sense of moving perpendicular to the line. 
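Written out, the pattern being described is:
\[ d\Omega_3^2 = dr^2 + \sin^2\! r\, d\Omega_2^2, \qquad ds^2_{\text{flat, 3D}} = dr^2 + r^2\, d\Omega_2^2, \]
a three-sphere is a nested family of two-spheres of radius sin r, while flat three-dimensional space in polar form is a nested family of two-spheres of radius r.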
That's what a creature living on this line who couldn't see off the line, you know, somebody who lived on an optical fiber maybe who could receive no light except from along the fiber, would have no way of telling whether that fiber in three dimensions was truly circular or if it had some other shape, and maybe wouldn't care, and even more might actually just be living on the one-dimensional space with no sense of a perpendicular direction. But still, nevertheless, we can, if we like, describe a circle by embedding it in two dimensions. It's only one-dimensional, but we can embed it in two dimensions, and how do we do that? We write that the circle is x squared plus y squared equals one. That's the circle, right? Common distance, every point, same distance from the origin, namely in this case, a distance one. That's the unit circle. The unit two-sphere, we introduce a third direction. Notice that to describe a two-sphere in this way, we have no choice but to introduce a fake third dimension. Now, the third dimension, in the case of the surface of the Earth, is real. You can move in the perpendicular direction. But again, if you thought about a world flatland, if you thought a flat land where creatures can only receive light from within the surface itself, then the extra dimension would just be a trick for describing the sphere. We would describe it as x squared plus y squared plus z squared equals one. That's a surface in three dimensions, but that surface in its own right is a space, and it does have exactly this metric. If we measure distance away from one of the poles, in the same way the metric of the two-sphere, this is the two-sphere, is exactly what I wrote there. Well, you can go another step. You can say, let me construct a three-sphere. To construct the three-sphere in this way, you have to embed it in a four-dimensional space. Again, now the four-dimensional space may really be a fake. Maybe only the three-dimensional surface makes any sense. But you would add one more letter, and this three-dimensional surface in a four-dimensional space is the three-sphere. Again, if you coordinateize it by distance from some point, this is the metric of the three-sphere. Okay. Embedding it in a higher-dimensional space may or may not make real sense, or in other words, really have physical significance. As I said, the surface of the Earth is embedded in three-dimensional space. If we live on a three-sphere, chances are it is not embedded in the same way in a four-dimensional space, but we don't have to answer that question. Okay. What's the difference in terms of what you perceive if you live on a sphere versus if you live on an infinite flat plane? How could you tell? Well, let me suppose you had telescopes, and telescopes allow you to determine the distance to distant objects, just the distance, in other words, to determine the R of the galaxies which are embedded in your space. A telescope in itself, of course, doesn't allow you to tell the distance, but let's suppose you had some various tricks to be able to tell how far things were away. The standard trick was, of course, spectroscopy and using the Hubble law. So it's a combination of things. But in this particular case, what could we use as a trick to tell how far a galaxy was away? Look at its luminosity, how bright it is. So we could look at it, just look at how bright it is. The luminosity we're getting from it, a bulb, a light bulb far away, looks less luminous than a light bulb close up. So let's assume we can tell. 
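The embedding descriptions in one place (the fourth coordinate w is just a label for the extra fake dimension):
\[ S^1:\ x^2 + y^2 = 1, \qquad S^2:\ x^2 + y^2 + z^2 = 1, \qquad S^3:\ x^2 + y^2 + z^2 + w^2 = 1. \]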
And let's look at an object, a known object. It's a galaxy. Let's assume all galaxies are the same. This is, of course, not true. But on the average, the average populations of different kinds of galaxies, we can look at a distant galaxy and we can, from a type of galaxy, if it's a galaxy like our own, with roughly the same number of stars, we can't really see the individual stars, but nevertheless, if it's a galaxy like our own, we'll assume it has about the same size, 100,000 light-years across or whatever. And so we look at these galaxies, we can tell how far they are, and for simplicity, let's just say they're all the same size. And then we can ask, how much angle do they sub-tend in the sky? We see a galaxy, and obviously, the further away the galaxy is, the smaller the angle it sub-tends in the sky. You know what sub-tends means, how big an angle it looks in the sky. So let's begin with the flat space and ask, assume galaxies, all galaxies have a diameter D. This is the diameter of the standard spiral galaxy. Our standard kind of spiral galaxy has diameter D, and we're looking at it from different distances, or it's different distances from us. How much angle does it sub-tend? All right, so we look at the metric. Here's the metric. Let's do a flat space, here's flat space. And for simplicity, just take the two-dimensional case. This doesn't matter, we just look along a circle. All right, so there's galaxies out here, galaxies here, galaxies close by, galaxies very far away. Each galaxy has diameter D, and let's think about the angle that it sub-tends in the sky. There's the angle that it sub-tends in the sky. It has size D. D is its true size. So that means that Ds squared from here to here is just D squared. On the other hand, it's equal to Dr squared. So I don't care about R, I'm just looking between here and here. Those two points are at the same radial distance. So if I look at this little line, or this little surface across here, Dr is zero. This is a little line at a fixed R. So Dr is zero. Here we are. It's equal to R squared d theta squared. Just as part, where is it? R squared d theta squared. That's for the ordinary flat space, two-dimensional space, two dimensions. D squared, the actual size of the galaxy is D squared. And it's equal to R squared d theta squared. Nothing new there. So what's d theta? d theta is the size of the galaxy divided by R. I've just solved this by writing d theta squared is d squared over R squared and taking the square root. So the angular size, this is obvious, incidentally. I'm not saying anything you don't know. The angular size in the sky is proportional to the actual size of the galaxy divided by its distance from you. So we could check this. And measure the distance to the galaxies. We can see how big they look in the sky if we lived in flat space. We would discover that the angular size of them decreased like one divided by the distance. Now let's do exactly the same thing on the two-sphere. Incidentally, this fact is true in three dimensions. It's true in any number of dimensions. But now let's do it on the sphere. And for simplicity, let's just imagine the two-sphere. So here we are. We're over here. And we're looking out at the galaxies, which are all about the same size. They fill the space pretty much homogeneously. We can tell how far they are from us in the same way that we told before. We can measure their angle. Let's see what we get. Again, the size of the galaxy is d squared. 
And now instead of being r squared d theta squared, it's sine squared r d theta squared. We look out here. We look at this galaxy over here. The angle subtended satisfies d squared is sine squared r d theta squared. Or d theta is equal to the size of a galaxy, not divided by r, but by divided by sine of r. What does that mean? Which is bigger at a given distance? Well, sine is smaller than r. r increases linearly. Sine r turns over. So sine is smaller than r. That means d theta is bigger than it would have been in the flat case. Distance to a ga- sorry, the angle subtended by a galaxy at, let's say, a thousand, a couple of million light years away from us. Let's make it more than that. A few billion light years away from us. If we lived on a sphere, that galaxy would look bigger. It would look bigger because sine r decreases. Well, it doesn't increase as fast as r. In fact, when you get around to the other side of the sphere where sine of r starts to decrease, the galaxies out here look bigger than the ones in close. Take this galaxy over here and compare it to one almost at the antipode, at the other end of the universe. At the other end of the universe, the sine is almost zero again. So the galaxy over here looks about as big as the galaxy over here. So you could tell. It would look fainter because it's far away, but it would have whatever properties a distant galaxy is supposed to have. But as I said, if you had a trick to tell how far away it was, you would find something different about the spherical geometry. Namely, up to a point, the distant galaxies would look smaller and smaller in angle, till you got halfway around, and then they would start increasing in size again. In fact, if there was a galaxy right at the antipode here, you would see it in every direction that you look. So you would see it if you look that way, you would see it if you look that way, you would see it if you look that way, it would fill the sky. That would be an extreme case of this, that the further you go, the larger things look. The analogous to the cosmic microwave? It is analogous to determining the curvature of space by looking at the cosmic micro-line, micro-over. What are the things whose size you are able to decipher in the cosmic microwave background? That is the size of certain acoustic lumps, but we will come to that. It's not galaxies. We don't look at galaxies in the microwave. We look at oscillating lumps of stuff, but basically it is the same. Okay, so there we have the three-sphere, and we discover what it means to an observer at the center of the universe, where is the center of the universe? There is no center, but being very self-centered, we put ourselves at the center, we look out at different distances, and we can also look at another thing, we can count the number of galaxies at different distances. It's obvious on a sphere that if you look out far, you see fewer galaxies than if you look the same distance on the plane. The plane sort of opens up, the sphere contracts. In fact, way out at the maximum distance, you basically have a chance of seeing one galaxy, whereas at the same distance in flat space, you would see a lot of galaxies. So counting galaxies is another way to tell. Counting galaxies, and you can figure out yourself how many galaxies you see at each radial distance on the sphere, and how many galaxies you see at each radial distance on the infinite flat space. Excuse me. I shouldn't be thinking of r as getting very large, I should think of r as getting close to pi. 
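The comparison of angular sizes worked out here, for a galaxy of intrinsic size D at coordinate distance r:
\[ d\theta_{\text{flat}} = \frac{D}{r}, \qquad d\theta_{\text{sphere}} = \frac{D}{\sin r}, \]
so on the sphere the angular size reaches a minimum at the equator r equal to pi over 2 and grows again toward the antipode at r equal to pi.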
In the circle case. Yes, what does that mean? I'm going to see what that means. That's just a unit. That's just a unit. At the moment, I'm just telling you about the geometry of spheres. At the moment, we're not asking about increasing or decreasing or what made the sphere that size. We're just thinking about unit spheres. We might as well be thinking about unit spheres. Of course, if we thought of a sphere twice as big as a unit sphere, the same things would be true, but now we're just thinking about a world which is likely Earth except it's three-dimensional, and has a fixed known size, and it doesn't change with time. That's what we're talking about now. Now, of course, that's not really the case. The world does change with time. It grows. But before we talk about the time dependence, we've talked now about two kinds of geometries which are homogeneous, and I'm just going to say they're everywhere the same. There's a third. And the third has various names. I'm just going to call it the hyperbolic space. It's not as easy to imagine as the sphere. I'll tell you what. Before we do that, let's talk about another way of representing the sphere. This will be handy when we go to think about the hyperbolic space. There's a way of thinking about spheres which is useful. We'll get some ideas from it. But more important is that it's a useful way of thinking about the hyperbolic geometry. All right, the sphere. It's stereographic projection. A lot of you know what it is. You take the sphere, an ordinary space, and you arrest it on an infinite plane. Mathematically, you're just passing a tangent plane through the south pole to a point on the sphere. This could be our point, the point where we are. Now, every point on the sphere can be labeled or mapped to a unique point on the plane. There's a little bit of trouble about the north pole, but the north pole is very far away. Let's not worry about it. Every point on the sphere can be identified with a point on the plane by so-called stereographic projection. You go to the north pole and you take the point that you're interested in. Oh, incidentally, we are at the south pole. We're at the south pole, but that's not us. We're over here. We're at the south pole. But here's what you do to stereographically project a point on the sphere. You draw a line, a straight line through the point. It comes out and hits the plane somewhere. There's another point. It's the plane over here. There's a point way up near the north pole. That will map to some very, very distant point. What about the point at the north pole itself? Where does that go? It goes to infinity and it doesn't matter what direction. Every point at the north pole is out on a circle, a huge circle at infinity. You know, it's even nice not to think about points, but to think of little circles. Little circles, all, let's say, of the same size. They could stand for galaxies. Little circles all of the same size. By this projection, the circles on the sphere map to some kind of little closed curves on the plane. It's a little bit of magic, but circles map to circles. That's something you can prove if you have the patience for it. But if you take a little circle on the sphere and map it by mapping every point on the, you will find circles on the plane. That's obvious for the one at the south pole. But what about the size of these circles? How does the size of the circles depend on how far up near the north pole you are? 
Well, the answer is that circles of a given size near the south pole look small, but not only small, they look about the actual size of the circle that they're mapping. As you move further and further, as the points move further and further, the circles up here look bigger and bigger. And in fact, the circle up near the north pole here, the Arctic circle, or the extreme Arctic circle, that looks like a giant circle very, very far away. So that's a way to think about the sphere by mapping it onto the plane. And when it's mapped onto the plane, it has the bizarre properties that the further toward the north pole you are, the bigger things look on the plane. But does that mean that the sphere is not homogeneous? No. This is just a way of drawing it. This is just a way of representing it. Every one of these circles is of the same intrinsic size on the sphere. So I'm mapping it to a plane like this so that you can draw it on a plane. You're distorting things. You can do the same thing with a three-sphere. You can also map it to an infinite flat three-dimensional space. Similar things happen. Spheres which are near you map to small spheres. They're just far away, map to bigger ones. So this is called stereographic projection. Okay. Now I'm going to tell you about the hyperbolic space. The hyperbolic space is made in exactly the same way. I'll take the case of the three-dimensional hyperbolic space, so we can do both, three and two dimensions. Yeah, let's start with two dimensions. Instead of calling it omega, we call it H, we call it dH squared. H stands for hyperbolic. dH squared, and now I'm talking about the two-dimensional hyperbolic space, exactly the same dr squared plus something squared times d omega 1 squared. dH squared. Question? dH or dH omega? dH2. The end. The omega. I'll tell you in a minute, okay. And here we put hyperbolic sine of r squared. Let me remind you what the hyperbolic sine is. Well, first let me remind you about ordinary sine. Sine of r is equal to e to the i r minus e to the minus i r divided by 2i. Everybody know that formula? That's the formula for sine in terms of complex exponentials. Hyperbolic sine is even easier. It's just e to the r minus e to the minus r over 2. In particular, hyperbolic sine for very large r is dominated by e to the r. e to the minus r is a very small number for large r. e to the plus r is a very big number. Sometimes I just call this cinch of r. The common term, hyperbolic sine cinch. Cinch of r is a function which grows very quickly. It grows exponentially. Sine, as you go far away, shrinks. As you go far away from r, sine shrinks back to zero. Hyperbolic sine just blows up and gets bigger and bigger rapidly, exponentially. So again, this is a space that when you look around you, you still see circles. We're still in lower dimensions, still in two dimensions. You see around you a family of circles, but the size of the circles grows very rapidly. Blows up very, very rapidly. To go to the omega 3, to go to the h3 squared, which is the candidate geometry that we live in, a candidate. Same thing. The r squared plus cinch squared r d omega 2 squared. Now we have the situation with surrounded by two spheres, the size of those two spheres gets really big, really fast as we move away. What would you guess happens to the angle subtended by a galaxy as you go further and further away? Well, we can work it out. We can work it out in the hyperbolic geometry. Remember, the size, here we are. We're at the center, we're looking at a certain distance. 
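The hyperbolic metrics and the definition used here, side by side:
\[ \sinh r = \frac{e^{r} - e^{-r}}{2}, \qquad dH_2^2 = dr^2 + \sinh^2\! r\, d\Omega_1^2, \qquad dH_3^2 = dr^2 + \sinh^2\! r\, d\Omega_2^2, \]
with r now running from 0 out to infinity and the nested spheres growing exponentially in size.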
We're looking at an object which has size d, intrinsic size d. It occupies an angle on the sky, d theta. Let's see what it is. We can replace d omega 1 squared, which is just d theta squared. We have the size, the actual intrinsic size is d squared, and that should equal hyperbolic sine squared, cinch of r squared, times, let's call it d theta squared. d theta squared is the same as d omega 1 squared. Or just: d theta, the size in the sky, is equal to d divided by cinch of r. Okay, now let's plug in what cinch of r is. For large r, when you look far away, we don't have to worry about this. For large r, it's just e to the r divided by 2. So if you plug that in, you find that d theta is approximately equal to d times 2 over e to the r. Now, e to the r grows very quickly as you go away. So what happens to this angle associated with a galaxy of size d as you go further and further away? It shrinks really fast. Furthermore, the number of galaxies grows especially fast. The number of galaxies grows very fast, and the size of them shrinks to match. So if you lived in the hyperbolic world and you looked out, you would notice that distant galaxies look too small. You don't have the phenomenon of the sphere, where after a point there's nothing left; here the cinch doesn't shrink back to zero, it keeps growing. But you would discover many, many more galaxies at a given distance, as long as it's far away. You would find a huge increase in the number of them, and you would find each one of them anomalously small compared to flat space. So the geometries really have meaning. Okay, this is hyperbolic space. That's what this is, but I'll tell you how you get this picture. Okay, yeah, somebody said, excuse me. You're excused. Go ahead. What do you want to ask? Oh, maybe it was him. He's excused. So all of these surfaces look kind of, well, symmetric isn't quite the right word, but similar. They all look similar. Yeah, in the case of positive curvature, in the case of a sphere, the interesting values of r are between zero and pi; in the case of hyperbolic space, the interesting values run from zero out to infinity. Right. You got it. Okay. On the other hand, when we stereographically projected the sphere, it was onto the infinite plane. Now, of course, that was at the cost of an enormous distortion where things far away just look much too big. The opposite takes place on the hyperbolic plane. I'll show you a way to stereographically draw the hyperbolic plane. For a two-sphere, let's take the two-sphere case, I can't draw three-dimensional figures, especially when they're curved, so I'll stick with the two-dimensional case. Yeah. A two-dimensional sphere, a two-dimensional sphere is x squared plus y squared plus z squared equals one. Okay. I'm now going to draw a two-dimensional hyperbolic space. The two-dimensional hyperbolic space, again, you start with three coordinates. You could call them x, y, and z, but I'm going to call them x, y, and t, capital T. This is not really time. This is just a trick for drawing the space. It's not time. Nevertheless, I'm going to call it t. I'm going to draw the x, y, t space, and construct not the sphere, but instead t squared minus x squared minus y squared equals one. Instead of t squared plus x squared plus y squared equals one, which would be a sphere, take t squared minus x squared minus y squared equals one. Do you know what kind of surface this is? No, no, with a plus sign it's a sphere. With a plus sign, did somebody say a circle? It's a hyperboloid.
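Here is a numeric check of the formula just derived, d theta = d / cinch r, together with the large-r approximation 2 d e^(-r) and the flat-space value d / r for comparison (a sketch of mine; the numbers are arbitrary):

```python
import math

d = 1.0   # intrinsic size of the object, in units of the curvature radius
for r in [1, 2, 4, 8, 16]:
    exact  = d / math.sinh(r)        # d_theta = d / sinh(r)
    approx = 2 * d * math.exp(-r)    # the large-r approximation from the text
    flat   = d / r                   # what the same object would subtend in flat space
    print(f"r = {r:2d}   hyperbolic: {exact:.3e}   approx: {approx:.3e}   flat: {flat:.3e}")
```

The angular size falls off exponentially rather than like 1/r, which is why distant galaxies would look anomalously small in a hyperbolic world.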
If you only had one of these, it would be a hyperbola. With two of them, it becomes a hyperboloid, and here's what the hyperboloid looks like. Plot t upward, x and y horizontally, oops, not so good over here, and now draw a cone. We're not really doing relativity, but it looks an awful lot like relativity, but this is just a trick for drawing a surface. Draw a cone, whoops, that's not a very good cone. It's a right angle cone, and it corresponds to the surface t squared minus x squared minus y squared equals what? Draw a cone. Zero. t squared equals x squared plus y squared, that's a cone. It's not a hyperboloid. But now put a one there. Well, let's see, first of all, set x and y equals zero. Let's go on to the t-axis. Here's the vertical t-axis that corresponds to x and y equals zero. Where is the point t squared equals one? Well, that's just a point t equals one. There are two of them, one on the top and one on the bottom, but let's just concentrate on the top one. That's over here, t equals one. What happens is you move away from that point, you form a hyperboloid. A hyperboloid of revolution, a round hyperboloid of revolution. Like so. Okay? Now, it does not look like every point on this hyperboloid is the same as every other point. It really looks like this one is special, but it's not. Not if you use the metric, not d t squared plus dx squared plus dy squared, but you measure distances on here as dt squared minus dx squared minus dy squared. Sorry, dx squared plus dy squared minus dt squared. You measure distances on here using the relativistic metric pretending that this was time. It's not time, it's just an embedding space to draw a hyperboloid. If you do things that way, then this hyperboloid is absolutely uniform. How do I know it? Because to go from one point to another point on the hyperboloid is equivalent to a Lorentz transformation, which moves the time axis in various ways. But if you're not comfortable with that, we'll just take it as a true fact that the hyperboloid, when distances are measured with the metric, the x squared plus dy squared minus dt squared, then this hyperboloid really is completely uniform. And now, supposing I want to stereographically project it. Here's a way to stereographically project it. That's useful. Again, you draw a tangent plane. A tangent plane now is tangent to the bottom point. Think of us as living at the bottom point here. We live at the bottom point and we look out around us. We look out the given distances, distance out here, distance out here. But now we're going to take every point on the green hyperboloid and map it to a point on the blue plane. How do we do it? We just draw a line from the center over here to the particular point on the hyperboloid and it intersects the plane somewhere. The point on the hyperboloid at the center, that just stays at the center. The point on the hyperboloid over here, I don't know, intersects somewhere over here. Oops, over here. Whatever. How about, how far out on this plane do you go? As you move out on the hyperboloid, you get closer and closer to the asymptotes of the hyperboloid here, but the asymptotes of the hyperboloid intersect the plane on a circle over here. So no point is ever mapped out beyond this circle. Even the asymptotically far away points, very, very far away, simply map the points near the boundary, near the circle here. So in doing this mapping, again, every point winds up in a particular place. That's good. 
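A small symbolic check of the two claims just made, using sympy (my sketch; the parametrization below is the standard one and is not spelled out in the lecture): the hyperboloid t^2 - x^2 - y^2 = 1, with distances measured by dx^2 + dy^2 - dt^2, carries exactly the hyperbolic metric written earlier, and projecting it from the origin onto the plane t = 1 lands every point inside the unit circle.

```python
import sympy as sp

rho, phi = sp.symbols('rho phi', positive=True)
drho, dphi = sp.symbols('drho dphi')

# Standard parametrization of the upper sheet of t^2 - x^2 - y^2 = 1.
t = sp.cosh(rho)
x = sp.sinh(rho) * sp.cos(phi)
y = sp.sinh(rho) * sp.sin(phi)

def d(f):                       # total differential of f(rho, phi)
    return sp.diff(f, rho)*drho + sp.diff(f, phi)*dphi

# Induced line element with the "relativistic" signs dx^2 + dy^2 - dt^2.
print(sp.simplify(sp.expand(d(x)**2 + d(y)**2 - d(t)**2)))
# -> drho**2 + dphi**2*sinh(rho)**2, the hyperbolic metric from before

# Projecting from the origin onto the plane t = 1 sends (x, y, t) to (x/t, y/t),
# whose distance from the center is sinh(rho)/cosh(rho) = tanh(rho) < 1.
print(sp.simplify(sp.sqrt(x**2 + y**2) / t))   # tanh(rho)
```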
But there's a lot of distortion, a lot of distortion, and in particular things very, very far out there are squashed to very, very close to the edge of the circle. Things near the center, those are described pretty faithfully. Things out near the boundary are much too small. Quite the opposite, exactly the opposite of what happened when you took the sphere and spread it out on the plane. The sphere you spread out on the plane, everything far away was much too big. In the hyperbolic geometry, things far away look much too small. But when you look at this picture, you're supposed to think in your mind that every angel and devil is exactly the same size as every other one. In fact, there are motions that you can do, coordinate transformations, that allow you to move the devils and angels around, analogous to the rotation of a sphere. If you take the sphere and you rotate the sphere, the galaxies all move around. If you project them onto the plane, they'll appear to move all over the place. Really all you're doing is rotating the sphere. Same kinds of things here; they can all be thought of as the same size. Everyone sees exactly the same thing. If you look carefully, you'll see that every devil or angel essentially sees exactly the same features around him. And if you take into account that size scales have been deformed by this mapping, so that when you look at things you don't pay a lot of attention to the size, but do pay attention, for example, to how many angels and devils each one sees neighboring them, you'll see that this really is homogeneous. Every point is exactly the same as every other point. This is sort of like the Penrose diagram in the sense that you're capturing everything in a finite way. It is like that; however, this is not a space-time, this is a pure space. This is pure space. Notice that as you look very, very far away, things look very small. The angle subtended by a very distant angel looks anomalously small compared to what would happen if they were spread out on a plane in a uniform way. Notice also that the number of them increases very dramatically as you move far away, increases exponentially. So that's the, I don't know, this geometry has various names. It's the Poincare disc, the two-dimensional version of it. It's the Lobachevsky plane. It's the hyperbolic geometry. I'm sure it has other names. It's also the uniformly negatively curved space. The sphere is the uniformly positively curved space. That actually means something technical. There are certain components of the curvature which are really positive, and this geometry has the opposite sign for its curvature. So mathematically it is the space of uniform negative curvature and, as I said, it's the same everywhere. Okay, so we have three kinds of spaces then. Oh, there's a three-dimensional version of this, and in the three-dimensional version, instead of having a disc, you have a ball. You know what a ball is? A ball is the three-dimensional solid stuff contained within a sphere. A sphere always means the surface. Okay, that's a terminology. A sphere means the bounding surface. So when you say the earth is a sphere, you're talking about the surface of the earth. When you say the earth is a ball, you're talking about everything contained within the bounding surface. Okay, so the, I forgot why I mentioned that. I don't remember. So we have these three kinds of spaces. Let's write them down now. Oh, incidentally, just as the sphere has a natural radius, the hyperbolic geometry also has a radius.
The radius is the thing which appears on the right-hand side, or the square, that's the square of the radius. This is the unit hyperboloid. It's the unit hyperboloid in that the center, the point over here is one unit away from the origin of coordinates. The, just as you could also draw a sphere of radius two, here's a sphere of radius one, here's a sphere of radius two. You can also draw hyperboloid of radius two. What you would do is just go up to two units here, and it would look like this. It would be somewhat flatter at the center over here. Draw a very big hyperboloid, a very big hyperboloid now would be much flatter, but it would be the same shape except expand it out uniformly. You would indicate that by putting a radius of curvature in over here, some squared radius of curvature. The point is that each one of these geometries, in particular spheres and hyperbolic planes, hyperbolic discs or hyperbolic balls, have a radius associated with them. If you include the radius in their metric, if you want to think not about a unit geometry, then you just multiply the metric by the square of the radius. So let's do that now. Let's think about a sphere, an ordinary sphere, the metric of an ordinary sphere of radius a, little a. A sphere of radius a, a unit sphere has ds squared is equal to the r squared plus sine squared r, the theta squared, or the omega squared. Incidentally, in case I didn't mention it, there are four spheres, five spheres, six spheres, seven spheres. In each case, if you want the hundred-dimensional sphere, you write exactly the same formula except you put the 99-dimensional sphere here. Always one less dimension around you than there are dimensions altogether. Okay, now supposing I want the sphere whose radius is not one, but whose radius is little a, what do I do? I just multiply all sizes by a, and sizes squared get multiplied by a squared, a squared, little a squared. If the radius of my sphere, fixed radius of the sphere is a, then the metric is exactly the same except we multiply it by a squared. Okay, here's the unit sphere, and here's the sphere of radius a, and we just multiply it up. Why a squared, why not a? Well, because this is the square of a distance. What about the hyperbolic geometry? Same thing, ds squared is equal to a squared times dr squared plus hyperbolic sine squared r, d omega squared. And that's the hyperbolic geometry of radius a. So we can not only accommodate unit spheres and unit hyperboloids, but arbitrary hyperboloids of arbitrary dimension. The bigger a, the flatter the hyperboloid is, but they're all similar to each other, similar in the technical sense that they're just expansions and contractions of the same thing. It is assumed in most of cosmology that the space that we live in is one of these three spaces. Let's take a five minute break. If you double the size of a sphere, what happens to its curvature? It remains the same. Not the way curvature is usually defined. Well, compare a basketball with the surface of the earth. Which one would you say, if you explore around on it, looks more curved? Basketball. So the smaller the geometry, the higher its curvature. You mentioned that it's suspected that cosmology is one of those flat space. You said that our cosmology is either spherical or hyperbolic. Or flat. Or something else. On scales out as far as can be detected, it looks flat. Is it harder because the scale is so large? Yeah. Tell it to curve one way or the other. Right. 
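To back up the answer in the exchange above, that the smaller the geometry the higher its curvature, here is a short sympy sketch of mine. It just quotes the standard two-dimensional result that a metric of the form a^2 (dr^2 + f(r)^2 d theta^2) has Gaussian curvature K = -f''(r) / (a^2 f(r)):

```python
import sympy as sp

r, a = sp.symbols('r a', positive=True)

# K = -f''(r) / (a**2 * f(r)) for ds^2 = a^2 (dr^2 + f(r)^2 dtheta^2)
# (standard formula, quoted here without proof).
def K(f):
    return sp.simplify(-sp.diff(f, r, 2) / (a**2 * f))

print(K(sp.sin(r)))     #  1/a**2   sphere of radius a
print(K(r))             #  0        flat plane
print(K(sp.sinh(r)))    # -1/a**2   hyperbolic space of radius a
```

Doubling a cuts the curvature by a factor of four, which is the sense in which a basketball is more curved than the surface of the Earth.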
And the test is just those D measurements that you're talking about. No. Okay. Well, if you only explore nearby, nearby means distances small by comparison with the radius of curvature, with the radius of the sphere, then you can't distinguish the three of them. They all look flat. Nearby. Yeah. Can you say that if it's not flat, based on what we've seen, the universe has to be at least a certain minimum size? Exactly. Right. The answer, we can see. Literally, microwave infrared to what, 20 billion light years or something like that. Some number like that. Because it looks so flat out to that distance, we know that it is at least at absolute minimum 10 times larger in radius. Okay. That means a thousand times larger in volume. Is it really just 10? Very, very unlikely for reasons we will come to. Almost certainly much more than that. Unless it's flat. Well, flat corresponds to the limit of infinite, being infinitely big. So there's no possibility of being finite, but flat, in this. There is. There is. It could be toroidal, and maybe we'll talk about that, but no reason to believe it. Yeah. Yeah, there are other kinds of geometries. It's not that this is the only kind of geometry. In fact, there are even other, you know, I said something not quite true. I said these were the only homogeneous geometries, but it was just reminded just now that that's not quite true. There are other geometries, namely the surface of a torus, the surface of a donut. Now, the surface of a donut does not look flat. Okay. But that's only because you took the surface of the donut and tried to put it into three-dimensional space. I will tell you what a torus is, the surface of a donut, mathematically what a mathematician means by a torus. You start with a rectangle. Now, let's draw, let's first draw a torus. You know, the bagel. The bagel has a property that its surface is two-dimensional and you can move in two directions and there are two independent ways that you can go around the torus and come back to the same point. You can go around it by starting out horizontally and you'll come back to the same place. Or you can go vertically and come back to the same place. There are also other ways where you sort of spiral around as you go, you spiral one way as you go the other. But these are the two primary independent cycles on the torus. Okay. Now, you can represent the torus, at least the topology of the torus. When a mathematician speaks of a torus, they usually speaking topologically, take a rectangle and start working on the rectangle horizontally. Use the same color coding. And you just come to the edge. But when you get to the edge, make an identification so that every point along the edge is identified as being identical with a point on the opposite edge over here. So that when you go out over here, you come here. You can go around it and come back to the same place. That's the analog, or not the analog, but it's mathematically the same as going on the horizontal cycle here. You can also go on the vertical cycle. If you remember to identify points on the upper boundary with points on the lower boundary, so you come around here. This point where the two cycles cross, that's just this point over here. This is a torus, a rectangle with opposite sides identified as topologically a torus. It does have the property, first of all, it's flat. Why is it flat? Well, I just drew it flat on the blackboard. It's obviously flat. It's just a thing which I have drawn on the blackboard, on the flat blackboard. 
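The identification just described can be put into code (a sketch of mine; the rectangle is taken to be a square of side L for simplicity): on a flat 2-torus the distance between two points is found by also checking the periodic images of one of them in the neighbouring copies of the square.

```python
import numpy as np

L = 1.0   # the torus: the square [0, L) x [0, L) with opposite edges identified

def torus_distance(p, q, n_images=1):
    """Shortest flat distance from p to q, allowing q to be replaced by any of
    its periodic images in the neighbouring copies of the fundamental square."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    shifts = np.arange(-n_images, n_images + 1) * L
    return min(np.linalg.norm(p - (q + np.array([sx, sy])))
               for sx in shifts for sy in shifts)

# Walking off the right edge brings you back in on the left edge:
print(torus_distance([0.95, 0.5], [0.05, 0.5]))   # 0.1, not 0.9
```

Nothing here is curved; the only new ingredient is the identification, which is why the torus still counts as flat.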
But it's periodic. It's periodic in two dimensions, it's doubly periodic a mathematician would say. So it's topologically a torus, it is flat, and every point is the same as every other point. It doesn't really look like that. It looks like this point being closer to the boundary is different than the point in the center. But there is no boundary. There really is no boundary. You walk out here and you come over here. Every point is exactly the same as every other point. So it is also homogeneous. So when I said that there was only three possible geometries, I was being a little bit loose. There are others, tori, plural of torus is a particular case. But in many ways you would just consider this flat. Consider it flat with periodic boundary conditions. Okay, so just to clarify this. Now, could we live on a torus? Yes, yes we could live on a torus. As long as the torus was big enough, if we lived on a small torus we would notice it. We would look out and see our fannies from the other direction. In two different ways we would see periodically. Okay, but as long as the torus was big enough, we couldn't tell. We couldn't tell the difference between it and flat space. So it's possible. It's not a popular idea. Oh, sorry, torus. Torus is a two-dimensional space. This torus. You can do the same exact thing in any number of dimensions. You make instead of a rectangle, you make a rectangular parallel of pipette, I believe it's called. And you do the same thing. When you come out this edge over here, you reappear here. In and out, you come out this end, you reappear at the other end and so forth, up and down. So that's a three-dimensional torus. And you can have a torus of any number of dimensions. That's a one-dimensional torus. We start over the two-dimensional torus. I drew you a three-dimensional torus. What about a circle? Circle. Right. Or an omega-1. Okay. Does anything new happen if a space is not simply connected? Like torus. Yeah, they could. Yes, it is possible that there would be, yes, as I said, you could look out in that direction and see your tail. But you could do that on a sphere. Different. It would be different. It would be different. The distortions would be different. Here, there would be essentially no, no, yeah, it would be different, though. You would have two directions in which you would see yourself and other directions who would see something much more complicated. Yeah, you would, you would, I'll tell you what you would think if you lived on the torus. It would be equivalent. It's completely equivalent to saying you live in flat space, but with everything in the universe being repeated periodically. Here is you. Here is you. Same you. Here is you. Here is you. It's entirely equivalent because if you look out the side, you see, you know, the torus, you would look out and see yourself from the other side. Here, you look out and you see somebody who just looks exactly like you. So it's completely equivalent to saying that space is periodic and that you would just see replicas of the same thing in each rectangular cell. Would that be visible? Well, it depends on how big these cells are, but you know, you'd see yourself here. You'd see yourself here. You'd see yourself here. You'd see yourself up here. You'd see a crystal array of yourselves. You ask if there's anything different. That's pretty different. So why is the sphere defined to end when you get to the other side but the torus isn't? Well, what else would you do with it? What else could you do with it? 
You could add another sphere on, like that, so going around you'd get the same period? Well, then you have two spheres. How are you going to glue them together? A sphere is different than a torus. It's not really possible in a nice way to continue the sphere past pi. And the reason is because the circles shrink to zero at that point. The torus, when you go out to here, doesn't shrink to zero. It doesn't shrink away. On the sphere, everything shrinks away to zero size as you move over to there. So it's not a nice prospect to pass through that point of zero size. All right, let's go on now to space and time. Let's just review for a minute the metric of space-time. The metric of space-time in special relativity has two pieces, a time piece and a space piece. I'll just remind you what it looks like. Setting the speed of light equal to one, ordinary Minkowski space, that's the flat space-time of special relativity, has ds squared equal to minus dt squared plus, whatever, dx squared plus dy squared plus dz squared. Notice that dx squared plus dy squared plus dz squared is just the metric of flat space. So you've taken time and space and put them together. And as always in special relativity, the time component of the metric is negative, the space components of the metric are positive. If you wanted to ask how a light ray moves, how does a light ray move? What's the trajectory of a light ray? It's a trajectory in which ds is equal to zero. It's called a null ray. And what does that mean? Let's just have it move along the x-axis. Let's forget y and z. You would say that minus dt squared plus dx squared along the trajectory is equal to zero, or just that dx is equal to plus or minus dt. dx equals plus or minus dt is just a light ray moving to the left with unit velocity or a light ray moving to the right with unit velocity. Null rays, light rays are null rays, null here standing for ds equals zero. All right. A more general kind of space-time: there's always time, and there's always space. But we're going to change the kind of space-time we're talking about, keeping time just as it is, but substituting for the three-dimensional flat space here one of the three kinds of geometries that we've studied. One possibility is the plane. One possibility is the sphere. The other possibility is the hyperbolic geometry. But we're going to include one other thing, a scale factor. Let's take the case of the two-sphere. Two-sphere just for visualization. Minus dt squared plus d omega two squared. What is this? This is a world with time, but in which space is just a two-sphere. Here's space, the usual thing. And then there's time. That's all, space and time. But we're going to allow for the possibility that the radius of the two-sphere changes with time, and that's easily done. We just remember that to change the radius of the sphere, we include an a squared, where a is the radius of the sphere, but now we allow a squared to depend on time, a of t squared. That's the cosmology of a world in which space is two-dimensional, and in which the size of the universe, the radius of the universe, is time-dependent. Again, we can write down, if we like, how light rays move. Light rays again move along null directions. We'll work out some examples. Let's not do that right now. This is the space-time geometry of a two-dimensional world plus time; one speaks of a two-plus-one-dimensional world.
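Setting the two line elements just described in LaTeX (nothing new here, just the text's own formulas collected, with c = 1):

```latex
% Flat Minkowski space and its null rays:
ds^2 = -\,dt^2 + dx^2 + dy^2 + dz^2 ,
\qquad
ds^2 = 0 \;\Rightarrow\; \frac{dx}{dt} = \pm 1 .

% A two-sphere universe whose radius changes with time:
ds^2 = -\,dt^2 + a(t)^2\, d\Omega_2^2 .
```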
Two-plus one means two dimensions of space, one dimension of time, with a time-dependent radius where space is a two-sphere. We can do the same thing for three dimensions. And what does it look like? I mean, what it looks like is it's a basketball whose size changes with time. That's all. All right? You want a three-dimensional world? There it is. Same world. What about, take a pair of points, take any pair of points on the sphere, separated by a given angular distance. Let's call that angular distance theta. What's the actual distance between the two points whose angular distance is theta? Mm? A theta. A is the radius of the sphere. A theta. Right. You've got the right idea. You just got the wrong letter. All right? The distance between those two points is A, that's the radius of the sphere, times the angular distance. What's the velocity of the relative velocity of the two points? V equals A dot times theta. We're keeping the thetas of the two points fixed. We're keeping the two points fixed on the sphere, but just letting its radius change with time. What about the velocity as a function of distance? Velocity as a function of distance is A dot over A times distance. We've seen this formula before. This is the Hubble law with A dot over A being the Hubble constant. Any two points on this expanding or contracting sphere, it doesn't matter, any two points, the velocity and distance between them, actual distance between them, satisfy the Hubble law with, as in the previous examples, the Hubble constant H is equal to A dot over A. So we're talking about something quite similar to what we talked about before. Notice that this fact doesn't depend very much on whether this was a two sphere. It would also be true for a hyperboloid if we took two points fixed on the hyperboloid and allowed the hyperboloid radius to grow. Exactly the same thing would be true. In fact, if we took the flat plane and took two points at fixed coordinate distance but allowed the coefficient here, A, to vary with time, we would still have the Hubble law. So let's write down the metric now of the three cases, the spacetime metric of the three cases of interest. First of all, flat. ds squared is equal to minus dt squared. But this is not ordinary flat spacetime. Flat space, ordinary time, but with a growing geometry or a shrinking geometry. For that, we put here plus A of t squared, let's say dx squared plus dy squared plus dz squared or whatever. Flat space. What this corresponds to is a flat world with a grid on it, the x, y, z grid. But exactly as we studied in Newtonian physics, we can imagine that the distance between neighboring points in the grid is changing as a function of time, like A of t. So this is the geometry, the spacetime geometry. This is the metric. This is the metric of the flat spatial universe with a scale factor which depends on time. Same thing is true. If we took two galaxies fixed in the grid, the distance between them would increase with A, the velocity would depend on A dot, and the Hubble law would still be true. That's the flat geometry. And when I say flat, I mean flat space. How about the spherical geometry? That's ds squared, same thing, minus dt squared plus A of t squared. And now instead of writing a big thing, I'm just going to write d omega 3 squared. This d omega 3 squared stands for the metric of a three-sphere, a unit three-sphere, but the actual radius of the universe changes A of t squared. Excuse me. And what about the hyperbolic case? 
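The little Hubble-law calculation above, D = a theta, V = a-dot theta, so V / D = a-dot / a, can be checked symbolically. The scale factor a(t) = t^(2/3) below is just an arbitrary example of mine; nothing of the sort has been derived at this point in the lecture.

```python
import sympy as sp

t, theta = sp.symbols('t theta', positive=True)
a = t**sp.Rational(2, 3)        # an example scale factor, chosen arbitrarily here

D = a * theta                   # proper distance between two comoving points
V = sp.diff(D, t)               # their relative velocity
print(sp.simplify(V / D))               # 2/(3*t): independent of theta
print(sp.simplify(sp.diff(a, t) / a))   # the same thing, a_dot / a
```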
The hyperbolic case, same thing, ds squared equals minus dt squared plus A of t squared dH3 squared. All this means is that at any fixed time, the geometry is either a sphere, a hyperboloid, or a flat space, but the scale of the space and the distance between galaxies embedded in the space, think of the galaxies as embedded in the space, will change according to A and A dot. Okay, that's our cosmology. What do we need in order to make this cosmology have some dynamics to it? We need equations for how A changes with time. We're going to want to know, how does the universe, does it expand? Will it continue to expand? At what rate does it expand? How does it expand? Does it go like t to the two-thirds, or t to the one-half, or exponentially in t? What does it do? To do that, we have to have equations for how A changes with time. Now, we can't really use Newton's equations, although we will find the equations are Newton's equations, but we can't start with Newton's equations. We're talking about curved space-time now. We're really not talking about anything Newton would have written down. We're talking about geometries which are curved, sometimes in space, but definitely in space-time, since A is changing with time. What are the rules for such geometries? The way the physics works? General relativity. So what we have to do is write down the equations of general relativity for the special case of geometries which have this form, and they will translate. They will become equations of motion for A of t. We'll find there are three cases. Three cases, depending on whether the world is flat, positively curved, or negatively curved, and those three cases we've already seen: they have the same equations as the Newtonian equations for zero energy, negative energy, and positive energy, the universe being exactly at the escape velocity, below the escape velocity, or above the escape velocity. So this is where we're going to go next. We will not spend a lot of time going through calculating all of the symbols in Einstein's general relativity. I'll just sketch out for you the equations and what they lead to in terms of equations for A. The equations are simple. We've seen them before. They're the Friedmann equations. Once we have them, we know what they mean. We can explore the cosmology of them. Okay. Good. Yeah. Could the time part be more complicated than the other two? What's that? The dt squared could be, in a sense, something more complicated. Yes, of course. The flat space, the time, and space kind of... Absolutely, yeah. You can certainly have more complicated cosmologies. And you might ask, why don't you include them? A lot of them will simply evolve into these. This is a natural sort of place that you would evolve to. But there's no question you can have more complicated cosmologies. These are the easy ones. These are the easy ones and the ones that are, let's call them, popular. Yeah. We're still talking about three-dimensional space? No, we're talking about a three-sphere. A three-sphere is a three-dimensional space. It's not the boundary of anything, it is space. You've got to learn to stop visualizing the sphere as a thing which is embedded in higher dimensions. You've got to think only of the spherical surface, or, in three dimensions, the same thing with one more dimension. This is hard to do. Your mind does not want to visualize the three-sphere. So the only way to visualize it is through the equations for it.
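For reference, the Friedmann equation mentioned above has the standard form below (quoted here with c = 1; the lecture derives it later rather than at this point):

```latex
\left(\frac{\dot a}{a}\right)^{2} \;=\; \frac{8\pi G}{3}\,\rho \;-\; \frac{k}{a^{2}},
\qquad k = +1,\ 0,\ -1 .
```

Multiplying through by a^2/2 gives (1/2) a-dot^2 - (4 pi G / 3) rho a^2 = -k/2, which is the Newtonian energy equation for a galaxy riding on the edge of a ball of radius a: k = 0 is motion exactly at escape velocity, k = +1 is below it (bound, negative energy), and k = -1 is above it (unbound, positive energy).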
But it's not embedded in a higher-dimensional space such that you can move off the surface and back onto the surface. It just is, that's all. Okay. One question? Yeah. Is luminosity our only observation for distant galaxies? No. For the distance of galaxies? No, no, no, there's a whole range of different ways. One way is to use the Hubble law to relate distance to velocity and then measure the velocity by the redshift, by the Doppler shift of spectral lines. Other ways which are useful involve knowledge of the, well, this is luminosity again, the luminosity of particular kinds of objects. Supernova. Supernovae. Right. Supernovae have definite absolute luminosities, and from the absolute luminosity, but that is using luminosity. What I'm thinking is that for luminosity, assuming the luminosity is inverse square, we're implicitly assuming that the space is flat. Well, that's the easiest case. You do your combination of measuring luminosity, measuring redshifts, measuring spectral lines, taking into account the different possibilities for the expansion of the universe, whether it's positively curved, negatively curved. Take them all into account, measure all of them, and you get a sort of composite. But, you know, to say it simply: as long as space is approximately flat over a big enough region, using luminosity to determine the distance is the easiest way to think about it. But there are other ways. I would think that the luminosity, if the space were curved, the actual luminosity observed would not be inverse square, and that would cancel out the other effects of the curvature. So even if we're in a hyperbolic space, it may look flat, but we're just basing that on those observations and the sizes. Yeah. All of this gets taken into account. Experts do it. They keep track of everything. Big computer codes, and what you say is right. Today, essentially all of the data that exists is consistent with the flat-space model. All the different measurements agree with flat-space models to about 1%; 1% means that the curvature is zero to within about 1%. That translates into the statement that the universe is at least 10 times bigger than the visible region.
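One rough way to see the closing statement (an order-of-magnitude argument of mine, assuming curvature corrections to observations enter at order (R/a)^2): flatness to about one percent out to the visible radius R means

```latex
\left(\frac{R_{\text{visible}}}{a}\right)^{2} \;\lesssim\; 0.01
\quad\Longrightarrow\quad
a \;\gtrsim\; 10\, R_{\text{visible}},
\qquad
\text{volume} \;\gtrsim\; 10^{3} \times \text{visible volume}.
```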
(January 28, 2013) Leonard Susskind presents three possible geometries of homogeneous space: flat, spherical, and hyperbolic, and develops the metric for these spatial geometries in spherical coordinates.
10.5446/15062 (DOI)
Stanford University. Okay. For some reason I have a set of notes here from last week. Did you ever get notes from last week? Here they are if you want them. And then I made a mistake. In this week's notes I wrote them on pads which are probably too long to scan in one piece. Is that a good idea? It's a figured out. Okay. I inherited a whole bunch of yellow pads from the mathematics department. It seems mathematicians don't like writing on long pads. So I got them. And about 25 of them from the mathematics department. A math friend of mine used to use butcher paper. He had a roll on the side of the table. It's hard to scan, isn't it? Yeah. But your paper. Okay. We've got a couple of minutes. Let's begin with some questions just for the next couple of minutes. Yeah. Okay. I don't think so. I mean one idea was that it was smaller. And that it was a kind of torus that was just periodically repeated itself. So that the large part of it that we were looking at was just repetitions of the same thing or another way of saying that we were looking at ourselves through the back door. That has consequences. That has observable consequences. You might think it'd be real easy. You just look out and you see, no, it doesn't work that way. It's much harder than that. And people have looked for periodicity in astronomical and cosmological observations. And there is no evidence and some counter evidence that is not periodic like that. And if it's not, I don't see how it could be smaller than the observed part. So it's an interesting question. I don't think so. Okay. You're talking about the universe space itself expanding under the influence of dark energy? So far I haven't, I'm not sure that I've mentioned dark energy, but so far we have not talked about dark energy. We will. The universe is expanding. The universe is definitely expanding. But the expansion due, the consequence of dark energy is not that the universe is expanding. It is, but I mean, it's not the, the universe was expanding before anybody knew about dark energy. Dark energy just makes it expand faster. Okay. I suppose that the space in our solar system is garden-variety space. Yeah. And we're expanding like everything else. Well, no. The orbits of the planet seem to be stable. Yeah. So where are they getting the energy from? They're staying in their stable orbits. In the case of this expansion. If you were holding your, your friend's hand out in space, your distance between you and your friend would not expand with the general expansion. The general expansion produces a kind of very, very mild repulsive force between everything and everything else. Think of it that way. It's, it's a way to think about it. That everything has a little bit of repulsion relative to everything else. Basically proportional to the Hubble constant. But that, with you holding your friend's hand, that very, very tiny repulsion between the two of you is more than made up for, vastly more than made up for, just to attract the force of you holding onto your friend's hand. So as long as you didn't let go, now I'll, I'll tell you a little more in a minute. As long as you didn't let go, you would not participate in the general expansion. You as a couple, so to speak. In fact, there's enough, probably enough of other kinds of forces between you besides holding your hand that they would overwhelm this tiny, tiny, tiny tendency to separate. That's because you're closer together than the average distance between your atoms. 
Your atoms and your friend's atoms are closer together. They have more attraction, in particular holding your hands will, will keep you from separating. And so you're not part of this general expansion. The same thing is true of the solar system. The solar system is largely held together. Well, first let's talk about an atom. What about an atom? Do atom, why did you come to the solar system? Why not an atom? An atom also is embedded in space. If that space is expanding, why isn't the atom also expanding? And the answer is the electrostatic forces more than overwhelm by enormous amount, more than overwhelm the general tendency to expand. Now, what you can say is that in an expanding universe, let's even take the case of a accelerated expansion, that this tiny little bit of outward force will tend to modify the atom a little bit. It will tend to make the atom just a tiny, tiny bit bigger. The atom will have to be slightly, slightly expanded. But other than that, it, and that's a tiny, tiny effect, but it will not cause the atom to fly apart. It's not strong enough to cause the atom to fly apart. Same is true of the solar system. Solar system is held together by gravity. The gravitational pull of the sun is simply much, much larger than this tendency to expand. So, for that reason, and, yeah. I understand that gravity is much stronger than the tendency to expand. But I'm asking an energy carrier to overcome these small perturbations. You need to do work. Where do you get the energy from to do? In order to do what? The orbit of the earth is determined ever so slightly, but it expands in this space. And gravity is going to restore it back to where it was before. Where does the energy come from to overcome the tendency to expand? Even though it's small, you have to have some energy load. Energy conservation in an expanding universe is different than energy conservation in the static universe. Energy conservation. There's several ways to think about it, but let me give you the simplest way to think about it. Energy conservation is a consequence of time translation invariance. In other words, if everything is time-independent, meaning to say space and time, and all experiments would reproduce exactly the same effect if they were later or earlier, the consequence of that is energy conservation. On the other hand, if the basic setup, called the background, space-time itself, if it's not static, if it's expanding, or if it's changing with time, then energy conservation doesn't apply. There is no energy conservation in a world where the parameters of the world are time-dependent. In this case, the radius of the universe is time-dependent. Energy conservation is not quite what you thought it is. And we're going to come to it. We're going to use energy conservation. We're going to use it in a certain form, but it doesn't say that the net amount of energy in the universe is fixed. That is not what it says. It basically says that changes of energy in the universe translate into kinetic energy of expansion. So there's a back and forth between changes of energy and expansion, and you don't have to ask where the energy came from. It came from the expansion. Yeah? What does that mean for the homogeneity and isotropic nature of the universe? It's similar to the surges scales if it falls apart. On small scales, it falls apart. On large scales, as far as we can tell, the universe is homogeneous and isotropic. The air in this room is largely homogeneous and isotropic. 
Every time somebody opens a door, of course, a draft blows in, but if we kept the room closed and you kept your mouth shut, and the air in the room would become highly homogeneous, but not on every scale. On tiny, tiny scales, what you see is atoms, molecules, and so forth, one over here, one over here. It's not homogeneous. In fact, on even bigger scales, there are fluctuations that take place. Density fluctuates. Sometimes it's a little more over here, a little less over here. So whether something is homogeneous or not is a scale-dependent question. The universe is not homogeneous on small scales, and small scales means hundreds of millions of light years. On scales bigger than hundreds of millions of light years, everything seems to be distributed uniformly. I'm not sure I'm answering your question, but I'm answering somebody's question. If the universe is expanding, how come I can't find a parking spot on campus? Oh, yes. That. Yeah. Yeah, no, that's one we're working on. When you say that something's homogeneous and isotropic, and that there was only three types of closed, flat, multiple solutions, what does that mean mathematically in terms of the quantities and, say, Einstein's field? Okay, what? It's a statement about geometry. Einstein's equations are the dynamics of how things change with time and so forth. This is a pure statement of geometry. If you classify the geometries, which are in some sense the same everywhere, that every point is the same as every other point, and what that means, I'll tell you what it means exactly, given a space, and how do you describe a space? You describe a space by a metric. You write a metric for the space, some g mu nu of x, or g, it's only space, it's not space and time, so let's call it gmn of x, and x now stands for all of the coordinates. That represents some geometry. Now, you can make a coordinate transformation. Imagine you make a coordinate transformation. The point x equals zero, which was the origin of the coordinates, this was x equals zero, that now has a nu value, the nu coordinates could be called y. What was originally x equals zero is no longer the origin, y equals zero might be over here, and so this corresponds to some kind of transformation which replaces the origin with some new origin. Now, when you do that, the metric, when you do a coordinate transformation, the metric transforms. It transforms into a new metric, we can call it g prime mn of y, and this is the metric in the y coordinates. Typically, the metric in the y coordinates will have a different form than the metric in the x coordinates. That could be for two reasons, two reasons. First of all, it could be because the space itself is different at this point than this point, and so if we transform our coordinates to this point, we might discover that the curvature is larger over here or something different over here, in which case the metric will have a different look to it when you express it in terms of y than it did in terms of x, or it might be not because the points are different, but simply because you use screwy coordinates over here and some other screwy coordinates over here. A homogeneous space means one that you can find a coordinate transformation which will replace any given point, as origin let's say, any given point by any other point in such a way that the form of the metric is identical both before and after the transformation. For example, let's just take flat space. Flat space has a metric which is ds squared equals dx squared. 
Let's just take two dimensions, dx squared, the x1 squared plus the x2 squared. x1 squared plus the x2 squared, where x1 and x2 are the coordinates in the blackboard. Now supposing I make a translation of coordinates, that means let y, y doesn't stand for x and y, it stands for a new set of coordinates which is just equal to x at y1 is x1 plus a shift. Let's call it a1, a does not stand for any other a that we've used so far in this course, and y2 is equal to x2 plus a2. This is a shift, this is a shift which, if this are the x coordinates, then the y coordinates are shifted by a vector a. What is the metric in terms of y coordinates? Well, we can do it by simply re-expressing dx1 in terms of dy1, but dy1 is the same as dx1. If I differentiate, if I make a little change in y, it will be equal to the little change in x, but since this is a constant, it doesn't contribute anything. Likewise for dy2. So for this kind of transformation, ds squared is equal also to dy1 squared plus dy2 squared. It has exactly the same form as the metric had in terms of x's. The implication of that is that the neighborhood of the origin, x equals zero, has exactly the same properties as the neighborhood of the origin of y equals zero, the y coordinates. In other words, these two points have exactly the same property. Now, since a could be anything, there's a coordinate transformation which takes the origin over here to any other point, to any other point, whatever, that preserves the form of the metric, and that's what's called a homogeneous space, whose properties are everywhere the same. So if there exists a coordinate transformation between x and y that takes the origin to any other point, such that the form of the metric in terms of y is the same as in x, then the space is called homogeneous, and it means it's everywhere the same. A sphere, take a sphere. We could put the origin, the south pole. The south pole could be the origin. You know what the form of the metric is. We wrote it down last time. I'll write it down again for you. r squared plus sine squared r times, let's say, d phi squared, or d theta squared, I've forgotten what I called it. d theta squared, that's the metric of the sphere. Now let's make a coordinate transformation by rotating the sphere. Rotating the sphere means choosing some other point, the point over here, for example. Instead of measuring r from the south pole, we measure r from the east pole. Here's the east pole over here. We measure r from that point, and instead of measuring angles about the south pole, we measure angles about the east pole. This is a coordinate transformation. It's a little nasty and complicated to write down. How does the distance from this point depend on, you know, the coordinates as measured about here, it's a little bit complicated, but when you figure it out, you'll find that the metric, when written in terms of the coordinates relative to this point, has exactly the same form as relative to this point. I won't bother writing it down. It'll be exactly the same form, except that the meaning of r and theta will be distances measured from here instead of here, angles measured about that point instead of about that point. Form of the metric is the same, and since you can choose your coordinate transformation to take this point to any point, it tells you that every point on the sphere is basically the same as every other point on the sphere. 
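A quick symbolic check (mine, using the standard embedding of the unit sphere, with r the polar angle measured from the south pole) that the sphere metric really has the form just quoted:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
dr, dth = sp.symbols('dr dtheta')

# Unit sphere embedded in three dimensions, r measured from the south pole.
x = sp.sin(r) * sp.cos(th)
y = sp.sin(r) * sp.sin(th)
z = -sp.cos(r)

def d(f):                       # total differential of f(r, theta)
    return sp.diff(f, r)*dr + sp.diff(f, th)*dth

print(sp.simplify(sp.expand(d(x)**2 + d(y)**2 + d(z)**2)))
# -> dr**2 + dtheta**2*sin(r)**2
```

The same form comes out whichever point you pick as the pole, which is the homogeneity statement in the passage above.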
One test, which works very well in two dimensions for a uniform geometry, is that the curvature should be the same everywhere. That's not good enough in other dimensions, but the basic idea that the geometry is the same everywhere is called a homogeneous geometry. This is also true, it's true of the flat plane, it's true of the sphere, that it's homogeneous, and it is also true of the hyperbolic plane, of the negative curved sort of analog of the sphere, where you replace signs by hyperbolic signs. There are no others. Yeah, yeah, yeah, good, right, right. There, no, okay, this, yes, it is true that the torus is translation invariant, is the right word. Translation invariant means you can transform any point to any other, but as a matter of fact, it happens not to be isotropic. Why is it not isotropic? It's not isotropic because there are definite, remember I told you what a torus is, it's a rectangle with periodic boundary conditions, and the axes are preferred. If you go out here, you come out here, so this axis is preferred, that axis is preferred, and an axis like that is basically different. Observations would be different along 45 degrees than along, so yes, it's true, it's a homogeneous space, but it's not an isotropic space. Good, so, good. Now, of course, the idea that space is homogeneous and isotropic kind of has a status somewhere as between being a postulate and therefore highly questionable on the one hand and somewhere as in between that and an observational fact. So on certain scales, it really does look homogeneous, but on scales so big that we can't see them, we don't know. Okay, that, one other remark before we go on, since this comes up over and over and over again, and you'll be tired of hearing this, but there's often an enormous amount of confusion when you say such things as, for example, space has a spherical shape. People get into their heads the idea that it really is like a balloon, and namely that it has an inside and an outside, and they ask questions, what happens if you move away from the balloon or into the balloon? No, you'll have to learn to think of geometries as having their own intrinsic shape that has to do with what happens if you move around in the geometry and not having to do with some imaginary, possibly real or maybe imaginary, additional directions that you can move away from the space. For example, one of the most confusing kinds of spaces is a one-dimensional space. A one-dimensional space, let's say a closed one-dimensional space, a closed and finite one-dimensional space. What is a closed, here's a closed and finite one-dimensional space. Here's another one. Here's another one. It happens to be a square. From the point of view of the intrinsic geometry, all that counts is measuring distance along the geometry. No concept, as far as the intrinsic geometry goes, of moving away from the surface. So in fact, how many different kinds of one-dimensional spaces are there from the intrinsic point of view? The answer is one. All of them are identical, well, not quite, they're not identical to each other. There's a one-parameter family of them and the parameter is how far around you have to go to come back to the same place. So if this circle here happens to have the same circumference as this peanut shape over here, then intrinsically they are identical. They're identical and in fact, are they curved? They are not curved and to see that they're not curved, think of them as strings, not super strings, not string theory strings, just as strings. 
Without stretching the strings, you can always deform them to make any piece of them straight. Just pull and straighten it out. One intrinsic fact about them is that they are closed. They come back to themselves. But since any piece of it could be straightened out without stretching, without deforming the metric, the distance between points, one-dimensional spaces are all flat. They have no curvature. The fact that they're drawn, curved, that has to do with the way you drew them in two dimensions. So you have to learn to think about the intrinsic geometry. It's hard and there's a reason that it's hard, yeah. A line segment is also a one-dimensional space. It has a different property than the closed one-dimensional space. It's topologically different. It has endpoints. But the only thing that characterizes it is the distance between the endpoints. This line segment is the same as that line segment. And good. Is it true that it's only meaningful to think of different shapes if the one-dimensional space is embedded in it? That's right. Only if it's embedded. Now, I emphasize this over and over again when explaining some of these things to a general audience. I think I probably said it to you before. But what is it after all that is special about three dimensions? And so I typically ask the question, do you think that you can visualize a five-dimensional space? Visualize. Visualize means close your eyes and see it. And everybody says, no, I can't do that. Well, I can't do it either. For, not really. No, I can play. I can use a trick to help me visualize it. But direct visualization, I cannot. Now, I say close your eyes and see if you can see a sphere. Let's not take a sphere. Let's take a cube. Can you visualize a cube? Yeah, I can visualize the cube. I can see its cubical nature. I can see its three-dimensionality. And then we go down with the dimension. Let's go down to two dimensions. What I want you to do is to visualize an abstract two-dimensional space. Can you do that? And everybody says, sure. And I say, what do you see? I say, oh, I see a curved surface. And my response to that is, yes, you see a curved surface, but the only way you can visualize it is by visualizing it as embedded in three dimensions, unless you have some brain different than mine. Can you even visualize a one-dimensional space? Sure, I can see a line. No, what you see is that line possibly embedded on a piece of paper, on two dimensions, or possibly embedded in three dimensions. And even a point, an abstract point, you cannot visualize without seeing it suspended in three dimensions. What is it that's special about three dimensions? Is there something really mathematically special? No. It's your architecture. Your brain architecture evolved for the purpose of navigating around in three dimensions. And so it's not surprising that your ability at visualizing is hung up at three dimensions. That doesn't mean that three dimensions are in any way special. Of course, every dimensionality has its own special features, but three dimensions is not special. And, you know, as mathematically sophisticated people, we just say if you want to discuss two dimensions, you make an X and a Y. If you want to discuss three dimensions, you make an X and a Y and a Z. If you want to discuss four dimensions, you make an X and a Y and a Z and a W. And you can get, I like to joke, that you can get all the way to 26 dimensions. And so forth. Okay, so I just want to remind you over and over again that when you, I won't do it again. 
I will not do it again. But that when we talk about the geometry of a space, we're talking about the intrinsic geometry and not the way it's embedded for purposes of visualization in some higher dimensional geometry. There is, yeah. There is only three spatial. Right. Except, except unless there are more. The more would be very small, too hard to see. Yeah, yeah. Yeah, if I drew this, if I, all right, let's... When you say intrinsic, you're talking about what kind of person living in that sub-unit space to discover about the... That's right. But assume that light rays only propagate along the surface. If it's a one-dimensional world that we're living in, which we're not. But if it's a one-dimensional world, doesn't matter how we draw it, as long as it's one-dimensional. But assume that it has the properties sort of as an optical fiber that light only propagates internally along it and no such thing as messages getting off the axis and back onto it, that everything that takes place takes place on the line, or on the line segment. With that stipulation, there would be no difference between this and the straight line segment. They would be identical, yeah. If a circle, a line, a closed line has no curvature in an inverted intrinsic space, then the same thing for two-dimensional, right? No, two-dimensional surfaces cannot be, in general, cannot be deformed without stretching them to make them flat. It cannot. Okay? The curvy line segment here, the right term is that the intrinsic curvature is zero, the extrinsic curvature is not zero. Extrinsic curvature means what you naively think about when you say this is a curved line. And it has to do with the way the line is placed into higher dimensions. But, as you said, from the point of view of little creatures that live on this line and can't see anything off the line, they have no experience off the line, no experience of anything, and so forth. As far as they're concerned, there's only the line. So that's the way you should think about it, and not ask whether the line is embedded in the... Now, you know, if these creatures could be fooled, they might literally live on a optical fiber and think everything is moving along that optical fiber. But no doubt there would be some things that they could do. For example, they could invent a laser within their own fiber which emitted high enough frequencies, gamma rays, and I assure you that gamma rays do not stay in an optical fiber. They will jump across. So it's conceivable that these creatures could discover that they really are living in a bigger space with more dimensions. I don't think that's going to happen with us. Nothing like that. But the experimental facts at the moment tell us that we're living in a three-dimensional world, and that three-dimensional world has an intrinsic geometry, the intrinsic geometry being one of three kinds. Yes? I think another good example is just a flat plane. You're pretty good at living in a flat plane, then people walk around and you can see that it's flat. Right. If you take a portion of that and bend it into a cylinder, that's still as flat as it was before. Right. That is a good, that's right. That is a good. But it's curved in space. Right. That's exactly right. And it's a useful example. A piece of paper that you take and curve it like this is no less flat from the intrinsic point of view than the flat piece of paper that you started with. All the relationships within the surface are the same as they were before you bent it like this. 
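The rolled-up piece of paper in the exchange above can be checked the same way (again a sketch of mine): roll the plane into a cylinder of radius R and compute the induced metric.

```python
import sympy as sp

phi, z, R = sp.symbols('phi z R', positive=True)
dphi, dz = sp.symbols('dphi dz')

# A cylinder of radius R embedded in three dimensions.
x, y = R*sp.cos(phi), R*sp.sin(phi)
dx, dy = sp.diff(x, phi)*dphi, sp.diff(y, phi)*dphi

print(sp.simplify(dx**2 + dy**2 + dz**2))   # R**2*dphi**2 + dz**2
```

With u = R*phi this is just du^2 + dz^2, the flat-plane metric with constant coefficients: the bending is purely extrinsic, and the intrinsic curvature is zero.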
So that's as flat as, but this is not true of the hem, of a, let's say, a sphere or a hemisphere, or take a hemisphere. You'll notice that this can easily be flattened without stretching it. This cannot be flattened without stretching it. So two-dimensional surfaces can be curved. One-dimensional surfaces or one-dimensional lines, curves cannot be curved. The gravitational field implies an intrinsic curvature. I always have difficulty with converses. Let's see, wait. A gravitational field implies curvature. Yes. Yes. Yes. Yes. Yes. As I mentioned to you before, curvature is a measure of tidal forces. And tidal forces have to be due to something. In general relativity, they have to be due to masses, and therefore they are the tidal gravitational fields due to some masses. Yes. Right. Okay, that was a long but very, very, I thought, hopefully helpful to you. Long question period. Yeah, okay. Okay, let's move on now. I thought that space is isotropic and homogeneous, and therefore is one of the three candidate type spaces. Give names, very colorful names. The colorful names are k equals one, k equals zero, and k equals minus one. The k stands for curvature. Curvature equals zero, that's flat space. k equals one, that's positively curved space, and it's the analog of a sphere, but of course we are not talking about a two-dimensional sphere. Space is not a two-dimensional sphere, it could be a three-dimensional sphere. Again, keep in mind when I say that space is either flat or a sphere or a hyperbolic plane and then it's completely homogeneous, that of course is not completely true. It's definitely not completely true. It could be true for the average properties of the space over distant scales of big enough to average things out. There's a statement comparable to saying the Earth is a sphere, or the surface of the Earth is a sphere, but of course the surface of the Earth has bumps on it, it has mountains, it has valleys, it has this and that, so it's certainly not a perfect sphere. However, how big a distance do you have to think about before mountains, valleys, all that sort of stuff average out? Fifty miles? Let's see, Mount Everest is what, seven miles high? Okay, so on a scale of a hundred miles by a hundred miles, the Earth looks pretty flat. On a scale of a thousand miles by a thousand miles, it looks very flat. So this idea of whether something is flat or not, there's a scale dependent idea. Now of course the Earth did not have to be flat even on scales of a thousand miles, it could have been shaped like a cigar. Then on no scale would it have been thought to be flat. So there's some content in saying that the Earth on big enough scales is homogeneous and isotropic, the surface of it, and that it looks spherical. Same is true of cosmology. Okay, so now let's, I think we discussed last time a little bit. I think we started to discuss space-time geometry. We're now moving from Newton. Once we start to talk about curved geometry, of course we've moved away from Newton. And we're doing relativity. General relativity, general relativity starts with a metric. It starts with a metric and we are going to make an assumption about the space-time metric. We're going to make the assumption that space and time aren't mixed with each other. Aren't mixed with each other in the metric. In other words, the metric has a form that looks like this. 
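For reference, the distinction he keeps drawing can be stated compactly in the standard language of Gaussian curvature; the symbols below are editorial, not from the lecture:

```latex
% Intrinsic (Gaussian) curvature of a surface: product of the two principal curvatures
K = \kappa_1 \kappa_2
% Cylinder of radius R: curved one way, straight the other way
\kappa_1 = 1/R,\ \ \kappa_2 = 0 \;\Rightarrow\; K = 0 \quad \text{(intrinsically flat, extrinsically curved)}
% Sphere of radius R: curved equally both ways
\kappa_1 = \kappa_2 = 1/R \;\Rightarrow\; K = 1/R^2 \quad \text{(intrinsically curved)}
% A one-dimensional curve has no intrinsic curvature at all; only its embedding is curved.
```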
Minus dt squared, remember in relativity the time component of the metric always comes in with a minus sign and the space component of the metric comes in with a plus sign. Some scale factor, which depends on time in general, has to do with the expansion of the universe and so forth, times a metric of one of three types. One of three types, k equals, I erased k equals one, k equals zero, and k equals minus one, but the three metrics which can be here are, first of all, k equals zero. That's flat space, just plain old the x squared plus dy squared plus dz squared. That's k equals zero. And as far as we're concerned now, that's just its name, k equals zero. There is k equals plus one, that means the positively curved space. And we can either write that it is d omega three squared or we can write it out in detail. We can write it out that it's equal to dr squared plus sine of r squared d theta squared. Sorry, wait a minute. No, no, no, I made a mistake, didn't I? Yeah, d omega two squared. All right, so that's k equals one. We could have also written this one, the flat space, in a similar form, still doing k equals zero. We could have written it as dr squared plus r squared d omega two squared. This is three dimensional polar coordinates. Same space, but in three dimensional polar coordinates, where r equals zero is your location, and omega two is just the angular world around you. This is also just k equals zero, same thing, no change. And finally, k equals minus one, which is the hyperbolic, Escher, angels and devils world except in three dimensions, and that's dr squared, plus hyperbolic sine squared of r times the same d omega two squared. They all look similar, but qualitatively they are, well, especially quantitatively, but qualitatively they're fairly different. This a of t squared is called the scale factor. The distance between fixed points, by fixed points, I mean points with fixed coordinates in space, the distance between two points will in general be proportional to a, times some characteristic difference, for example, on the three dimensional sphere. There might be some angular distance here, some, I don't know, let's just call it delta, delta of some angle. This delta of angle is just some fixed value, which is the distance on the unit sphere. Likewise, the velocity between those same two points is the same thing except time differentiated. The ratio of the velocity to the distance, Hubble's law, equals a dot over a times the distance, just dividing these two. And the thing about a dot over a, that's not a constant in general, it could be a constant, by constant I mean independent of time, could be a constant, but in general it's not a constant, but it doesn't depend on position. That's the sense in which the Hubble constant, a dot over a is a constant, it doesn't depend on where you are, although it may depend on when you are. Okay, so there we are with the basic geometry, the basic setup. Now what we want to do with it, now we're all set up, last quarter we learned about Einstein's field equations, we learned that they're complicated as hell, that actually writing them down in detail is a real pain in the, but, and we're not going to write them down, but I'll write down the general form of them, but how do you do it? You write down the metric, in this case the space-time metric, A of t, the full space-time metric, incidentally in case you were wondering, this multiplies this, was that clear? Yeah, okay, good. 
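Written out, the metric he is describing on the board is the standard Robertson-Walker form; the grouping below is an editorial transcription in his notation, with c = 1:

```latex
ds^2 = -\,dt^2 + a(t)^2\, d\Sigma_k^2, \qquad d\Omega_2^2 = d\theta^2 + \sin^2\theta\, d\phi^2
d\Sigma_{k=0}^2  = dx^2 + dy^2 + dz^2 = dr^2 + r^2\, d\Omega_2^2
d\Sigma_{k=+1}^2 = dr^2 + \sin^2 r\, d\Omega_2^2
d\Sigma_{k=-1}^2 = dr^2 + \sinh^2 r\, d\Omega_2^2
% Hubble's law as stated: v = (\dot a / a)\, d, with H \equiv \dot a / a independent of position but not of time.
```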
You write down the space-time metric, you calculate the Einstein tensor, you set it equal to whatever it's supposed to be set equal to, and out of that operation comes equations for how A varies with time. That's the goal. Now, how do you calculate the Einstein field tensor on the left-hand side, the energy momentum tensor on the right-hand side, Einstein's field equations, you have to calculate some Christoffel symbols, that's a nuisance, they have derivatives of all kinds and so forth, and there's a lot of Christoffel symbols, in this case there aren't too many actually, but it's a nuisance, and I'm not going to do it on the blackboard. We're just going to outline what the basic idea is, and then I'll mentally plug in the Einstein field equations and spell out the answer. Not the answer for what the geometry of what A is, but what equations A satisfies. Okay, so let's just remind ourselves, the Einstein field equations have a left-hand side and a right-hand side like any equation. The left-hand side has to do with geometry. If you remember, and if you don't remember, it's not going to be terribly important, but if you remember, the left-hand side is called the Einstein tensor, it's built out of the curvature tensor. I'm not going to go into detail about what the curvature tensor is, it's got the Ricci curvature, it's got two indices, it's a tensor, and I chose the indices to be upstairs indices, that's some curvature tensor and minus one-half the metric g mu nu times the scalar curvature r. It will not be terribly important what the details of this is. Just notice the left-hand side has the curvature tensor, and the right-hand side has, anybody remember what's on the right-hand side? Anybody remember? The energy momentum tensor. But it also has 8 pi g divided by 7, right? No, you know it's not 7, it's 3. I'll just remind you this was the same factor that appeared in Newton's equations, and the energy momentum tensor T mu nu. Both sides are a tensor, therefore if it's true in any frame, it's true in every frame, and this is a good tensor law of physics. First question, let's start with the right-hand side. T mu nu contains a complex of things which include the density of energy, the flow of energy, the flux of energy, the density of momentum, and the flux of momentum. Different components of it. In particular, the time-time component is the one that we're going to fix on, the time-time component 8 pi g over 3 times T, time-time, T naught, naught. The time-time component is the energy density. Okay, so let's call the energy density rho, we've been calling it rho previously. Let's just call it rho. It stands for the ordinary energy in matter, whatever kind of material this energy momentum is describing. Incidentally, the right-hand side is completely sensitive to the kind of material that's in the universe. Is it particles? Is it electromagnetic radiation? Is it something else? It knows about the material nature of the ingredients that are making up the universe. The left-hand side has nothing to do with it, the left-hand side is geometry. So on the right-hand side of the time-time equation is the energy density, and on the left side is something that involves curvature. Now, if you go back to the definition of the curvature, work it out. You'll discover that there are two contributions to it, one of which has second derivatives with respect to the coordinates, and the other has first derivatives squared, or quadratic things in the first derivatives. 
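For reference, the field equations he is sketching, with the time-time component he singles out; this is the standard form, written out editorially:

```latex
G^{\mu\nu} = R^{\mu\nu} - \tfrac{1}{2} g^{\mu\nu} R = 8\pi G\, T^{\mu\nu}, \qquad T^{00} = \rho \ \ \text{(energy density)}
% The 8\pi G / 3 quoted on the board is what appears once the 00 equation is divided through by 3,
% which is how the Friedmann form of the equation comes out below.
```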
Fortunately, for us, this particular combination, when you take the time-time components, only has one of the two. The things with second derivatives that you have to differentiate the metric twice, these things are made up out of derivatives of the metric. The things which involve differentiating the metric twice actually cancel between these two for the time-time component, not for the space components, but for the time-time component. So on the left-hand side, things are strictly proportional to squares of first derivatives, quadratic things of first derivatives. That's one thing. The second fact is that the Einstein tensor here has two contributions. One comes from derivatives with respect to space. The other comes from derivatives with respect to time. The things which have to do with only derivatives with respect to space couldn't care less at this time in the problem. What do they have to do with it? They have to do with the curvature of space, just the curvature of space itself. Here it is. This has positive curvature, this has negative curvature, this has zero curvature, and this has negative curvature. So the curvature of space is one contribution in here. The other has to do with the way space is changing with time, but the only way in which space changes with time is through the factor of A of t here. So there's going to be one factor which we'll have to do with time derivatives of A squared. One term in this equation here will have to do, we'll have a factor of A dot, some kind of factor with A dot squared in it. And if you work it out, guess what you find? It's A dot squared over A squared. It's a bit of a nuisance, but you can calculate it, A dot squared over A squared. The other term which has to do with the curvature of space itself comes in with a plus sign. And think about for a moment how curved a space is as a function of its radius. If the Earth were a thousand times bigger than it is, I think we would all agree that it would be less curved, at least locally, that it would look flatter to us. If the Earth was a marble, it would be small, but its curvature would be large. The curvature of a surface scales in a certain way as its radius, and in fact it's one over the radius squared. The curvature of a sphere, for example, is one over the radius squared, and it either comes in, well, it's proportional to one over A squared, and it either is positive if space is a sphere, zero if space is flat, minus one if space is a hyperboloid. So it means that there's a k here, and k is plus, minus one, or zero. So it's just a placeholder here that tells you which of the three kinds of spaces you're talking about. This is what Einstein's equations boil down to. In fact, let's just switch this to the other side. Let's just switch it to the other side so it becomes minus k over A squared. But this equation is absolutely identical to the Newtonian version if we think of rho as the mass density for the equations that we've already explored. The only thing new is that we now have an interpretation of this term. Do you remember what this term stood for in the Newtonian example? It stood for the energy, whether the energy was positive, negative, or zero. It had to do with whether you were above or below the escape velocity. That's the same exact term here. But what it has to do with something entirely different, or something on the face of it, very, very different, namely the curvature of space. So it's some equation, somewhat different physical interpretation. 
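Collecting the pieces he just listed, the a-dot squared over a squared term, the k over a squared curvature term, and the energy density, the time-time equation is the Friedmann equation; editorial write-up, c = 1:

```latex
\left( \frac{\dot a}{a} \right)^{2} = \frac{8\pi G}{3}\,\rho - \frac{k}{a^{2}}, \qquad k = +1,\ 0,\ -1 .
```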
You might ask, why does this general theory of relativity, which contains among other things the ingredients of special relativity and so forth, why is it the same as Newton's equations? And the reason is basically that if you look, let's suppose the universe is curved, let's just look on a small little piece of it. A small little piece of it, we can't tell that it's curved, but look at all the galaxies in there. The way these galaxies move for a small enough piece here, must be the curvature can't be important for a small enough piece. So if we realize that the curvature can't be smaller and that the equations are exactly the same for a small piece or a big piece, we realize that we must somehow reproduce Newton's equations. But in any case, we do. The Newtonian version is correct, but we do have to keep in and remember that in Newton's physics, this stood for the mass density. I'm setting the speed of light equal to one, so mass equals mc squared is just equals m, energy and momentum, same units. This was the density of ordinary mass, ordinary rest mass. It was assumed in Newton's equations that everything is moving much slower than the speed of light, and if I have a collection of particles all moving much slower than the speed of light, the energy density is just the density of mass. On the other hand, these equations are more general. They follow from Einstein's equations. And for example, they do apply to situations where particles may be moving fast relative to each other. In fact, they even apply if the energy density here is due to photons, massless radiation moving with the speed of light. So the Newtonian equations wouldn't know what to do with photons. The Einstein equations know what to do with photons or radiation, but otherwise the equations look very similar. Okay. Yes. Two questions. In that metric where we assume that space and time are separated. Yes. Are we giving any, does that correspond to some physical restriction? Or is that just the way things are? Yeah, if you, it's a consequence of isochropy and homogeneity. If it's isotropic and it stays isotropic and homogeneous and it stays homogeneous, there really is no alternative. We could prove that, but I won't prove it tonight, but there's a consequence of that. Also, why are we focusing on the time, time component? You could actually choose any of them. For example, you could choose the space-space component. What you would get would again be like Newton, except instead of being the, this is the energy equation from Newton. This is the F equals MA equation. The F equals MA equation does have second time derivatives. It has a double dot. There are a linear combination of the space-space and the space-time and the time-time equations, which instead of looking like the energy equation really do look like the Newton, F equals MA equations. But they're all equivalent. The point is they're all equivalent. They better be equivalent. Why is it that you don't need more than just one of them? Well, for the simple example, there's only one function to calculate. It's A of t. More general context where geometry may be wavy and fluctuate and do other things, you may need all of the equations because they're just a lot of functions to compute. Here, for this case, the only unknown is A of t. It's only one function, and so really you only need one equation. You could have picked any one of them. You could not have picked the mixed space-time equations. Then you would have just got zero equals zero. 
But if you take the space-space components, you'll get the same equations back. Okay. Let me just remind you one more set of facts, and then I want to discuss, so we'll take a little break, and we'll discuss the equation of state and how it determines information about this rho. What do I mean by information about rho? I mean how rho itself varies as a function of the scale factor. There's not much you can do with this equation unless you know something more about rho. Well, we do know more things about rho. For example, if rho is just made of ordinary particles just sitting there and the universe expands, it's quite clear that the density of energy decreases, and we even know how. I'll write it down in a minute. If it's radiation, it decreases in another way. So how rho depends on A depends on the nature of the material that's making up the energy. But it's clear that in order to solve this equation, to even think about it as having any content, we have to know something about how rho depends on A. If we know how rho depends on A, then this just becomes an equation, an ordinary differential equation for A as a function of time. So I'm just going to write down now, we'll take a little break, but before I want to write down two examples. The two examples are the ones I just mentioned. Here they are. They have names. One of them is called matter dominated. And it simply corresponds to ordinary particles moving slowly relative to the mesh. Particles which are not moving so fast that we have to worry either about relativity or even kinetic energy very much. Their energy, if you're standing next to one of these particles, its energy relative to you is simply its mass. It's called matter dominated. And it's the case where the energy density, rho, is equal to some constant, rho naught, let's call it, divided by A cubed. Incidentally, what is A? What is the meaning of A? Let's take the case, let's take the spherical case. In the spherical case, the meaning of A is extremely clear. It's the radius of the universe at any given instant, the radius of the sphere. Let's take that case. A has a definite meaning. Of course, we have to provide some units. We could measure it in meters, let's say. Then in meters, A is simply the radius of the universe. And what is rho naught? Rho naught is just the density at the time when A was equal to one. At the time when the universe had a radius of one meter, now we don't want to go that far really. Maybe a megaparsec would be a better idea. But just conceptually, conceptually the meaning of rho naught is that it is the density, whatever that density was, when the universe was one meter large. That's a tunable thing. You can change it. It's sort of initial conditions. Rho naught is a constant, but every time you double the radius of the universe, or change the scale of the universe by rescaling A, the density changes by the cube power of A. So that's straightforward. Question? Over here. Yeah. So at one point a couple of lectures ago you said that it's possible that the universe was infinite, even at the Big Bang, right at the Big Bang. So how does that fit with this, because you can have an infinity that's three times as large as the initial one? Yeah. Okay, so that's... Yeah. In the flat case, which is infinite, A itself has no invariant meaning. Nothing to compare it with. In the round case, you can compare it with the radius of the whole universe. Also in the negatively curved space, there's a natural definition of the radius of a hyperbolic geometry. Okay.
The other case that we talked about was radiation dominated. Rho is rho naught divided by A to the fourth. And we talked at length about the difference between these two, the fact that if you had a bit of both, that very early on this would be dominant because one over A to the fourth is likely to be bigger than one over A cubed. Late times this is dominant. We talked about that. No change. Everything is exactly the same as before. But one new piece of information. If you remember, the sign of K here determined whether the universe is going to continue to expand, or whether it's going to re-collapse. So what we find now is a correlation. Incidentally, this will change a little bit when we come to talk about dark energy. But up till now, with these forms of energy, if K is positive, that's the sphere case, that corresponds to the situation where the universe re-collapses. If it's flat, then it's as if every galaxy was exactly at the escape velocity, so it continues to expand, ever slowing down and slowing down, asymptotically coming to rest, but it doesn't re-collapse. But it's sort of a knife edge. And the last case is K negative, in which case that corresponds to being above the escape velocity. And in that case, at late times, A just continues to increase linearly as a function of time. All right, so as I said, we didn't waste our time by doing the Newtonian case. The Newtonian case and the Einstein case are very, very correlated. Let's take a break for a few minutes. All right. We want to understand these kinds of equations better. If the only possibilities were matter dominated and radiation dominated, we might not care very much to give a more general understanding of these equations. What's the point of a general understanding when there's only two cases? But of course, there are many cases. There are many things in between matter dominated and radiation dominated, not only in between, but more extreme. There's a whole range of possible behaviors like this, and they are important. They are important to understand. And so we want a deeper understanding of the connection, for different kinds of material, of how the energy density would change as a function of scale factor. What is it that you need to know? Now we're going to go through this in two steps. We'll go through it in two steps. I don't know that we'll finish tonight. But the important ingredient in determining how rho depends as a function of A is the equation of state. That's what it's called, the equation of state. We'll have two steps. The first will be to assume an equation of state. And I will tell you exactly what I mean by an equation of state. We assume an equation of state, and from that derive equations like that. The second... Right, so the first step will be to assume an equation of state, and derive the appropriate formula of this type. The second part will be to derive for different kinds of materials the equation of state. I don't think we'll get to both of them tonight. Tonight I'm going to assume the equation of state. For what? For different... we're just going to write down equations of state, and I'll tell you what they correspond to. But the next time we will derive the relationship between the equation of state and the kind of material we're talking about. What do I mean by an equation of state? Well, an equation of state is basically a thermodynamic idea, and it's the relationship between... basically the relationship between thermodynamic variables describing a system.
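As a quick numerical illustration of the flat, matter-dominated case he keeps referring to, here is a minimal sketch of integrating the Friedmann equation; the unit choice 8 pi G / 3 = 1, the initial values, and the variable names are editorial conveniences, not anything fixed by the lecture:

```python
# A minimal numerical check of the Friedmann equation for a flat (k = 0),
# matter-dominated universe, verifying the a(t) ~ t^(2/3) behavior quoted
# in the lecture.  Units with 8*pi*G/3 = 1 and the starting values are
# arbitrary editorial choices.

import numpy as np

def a_dot(a, k=0.0, rho0=1.0, n=3):
    # Friedmann equation: (a_dot/a)^2 = rho0/a**n - k/a**2, with 8*pi*G/3 = 1.
    # n = 3 is matter dominated, n = 4 is radiation dominated.
    return a * np.sqrt(rho0 / a**n - k / a**2)

# Simple forward-Euler integration starting from a small initial scale factor.
dt, t, a = 1e-4, 0.0, 1e-3
ts, As = [t], [a]
while t < 10.0:
    a += a_dot(a) * dt
    t += dt
    ts.append(t)
    As.append(a)

ts, As = np.array(ts), np.array(As)
# Fit log a against log t; the slope should come out close to 2/3.
slope = np.polyfit(np.log(ts[1:]), np.log(As[1:]), 1)[0]
print("fitted exponent:", slope)   # ~0.67 for matter domination
```

Switching n to 4 gives the radiation-dominated case, and the fitted exponent then comes out near one half instead.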
For all purposes, temperature will not play any big role. Temperature is not going to be the important thing. The important thing is going to be pressure and energy density. And in particular, the equation of state is a relationship between energy density and pressure. Now, even for an ordinary gas, it is quite clear that there's a connection between... ordinary gas now, of moving particles. It's quite clear that there's a connection in this room, for example, between the energy density and the pressure. The higher the energy density, and what energy density we're talking about, basically the kinetic energy of molecules. There's the mc squared, which is dominant by far, much bigger than anything else, but let's forget the mc squared for a moment just to get the idea. There's the kinetic energy of motion of the molecules, and the kinetic energy is proportional to the square of the velocity and so forth. Those same molecules bounce off the walls of the room and exert pressure. It's pretty clear that the faster the molecules move, the bigger the pressure on the wall is going to be, so it's clear. There's a connection between energy density and pressure. And usually the way it goes, although it sums some exceptions to it, is the higher the energy density, the higher the pressure. The examples that are usually studied most frequently, and in fact these cover pretty much the ground of interest to cosmologists, can be described by very simple equations of state. The equation of state is the equation which tells you what the pressure is, p, as a function of the energy density. And the usual equation of state that cosmologists study over and over is the equation of state that says that the pressure is a constant called w. w is a constant times the energy density rho. Now, in many cases, of course, it's not really true that the pressure is a strictly linear function of the energy density, but as I said, for just more or less by accident, the interesting cases have this form. Pressure is some number times rho. I'll tell you right now what w is for the two cases of interest here. Later we will come back and derive it. For the matter-dominated case, the matter-dominated case is basically the case where the molecules in the room, if we wanted to think about the room, are at rest. They're moving very slowly compared to anything else, or they're moving very slowly compared to the speed of light. The energy is mostly just the mc squared energy. So in first approximation, you can just say the molecules are at rest in the room, and when the molecules are at rest in the room, the pressure on the walls is zero. The only thing that creates pressure is collisions with the wall, and so for the matter-dominated world, the pressure is equal to zero, or very, very, very small compared with the energy density. The big, huge e equals mc squared energy density and essentially a negligible pressure because things are moving slowly. So w is equal to zero is matter-dominated. I don't need to derive that. That was obvious that w equals zero is matter-dominated. The harder case, which as I said, I don't think we'll get to tonight, but next time, then it's an important thing to understand, is radiation-dominated. And that case is w equals one-third. Where did the third come from? Three dimensions. Three dimensions. Why does the number of dimensions come in? Well, okay, we'll work that out. Yes, it is three dimensions. Pressure equals one-third rho for radiation, or w equals one-third, and w equals zero for matter-dominated. 
Now, what does the equation of state have to do with anything? What I'm going to do is use the equation of state to describe how the energy density changes as a function of scale factor. And what do we do? We use the simplest kind of thermodynamic identities. Supposing we have a box of gas, a box of material of some sort. It could be gas, it could be liquid, it could be whatever it happens to be. It could be molecules, it could be radiation, it doesn't matter. And it has some energy in it. The energy, of course, the total energy is equal to the energy density times the volume of the box. So the box has volume v. The energy density is rho. And now let's imagine changing the volume of the box a little bit. It doesn't really matter whether you change the volume of the box sort of isotropically in all directions or any other directions. But let's imagine changing it equally in all directions. Then how much does the energy change? What's the change in energy if you change the volume by amount dv? Anybody know the answer? Not from this equation. From the work done by the pressure on the walls of the box. P dv. Pressure dv, but is it positive or negative? If the volume expands, what happens to the energy in the box? It's down. It does work on the box. And therefore, the work on the box means a diminution of the energy in the box. So it's minus p dv. That's all we need to use. That's enough. That's the basic identity. There's another term. Anybody know what the other term is? For thermodynamics? Tds. Temperature times the change in entropy. When variations are slow and the universe is expanding slowly by comparison with any other kind of time scale in the gas, when changes take place slowly, entropy doesn't change. That's a rule called adiabatic change. For our purposes, the change in entropy is zero, and this is the formula. De equals minus p dv. Okay, now the energy is rho times v. Let's calculate de. We just take the differential of energy, and that's equal to rho times the change in volume plus the volume times the change in rho. Both are changing. We'll change the volume. Changing the volume will change the energy density in some way, in some as yet unspecified way. Changing the volume will change rho, but we don't know how yet. However, whatever the rule is, de equals rho dv plus vd rho. That's the left-hand side, de. On the right-hand side, we have minus pdv. Well, we have two terms here with dv. Let's group them together. Let's put them over on the left-hand side and keep them together because they both have the same differential dv. On the left-hand side, we have vd rho, and on the right-hand side, we have two terms. First of all, a minus sign. Let's put a parentheses around it and a dv. What goes inside the parentheses? p and rho. p plus rho because this has to go over to the other side. p plus rho. But now, if we didn't know any connection between p and rho, we'd be kind of stuck. We wouldn't know what to do about it with this. But let's assume that p is known in terms of rho. And in particular, let's take the very simple case where p is w times rho. Now, we can write this as minus one plus w rho dv. One is just the rho dv. The w is the pdv. So now, we have something we can work with. We have an equation involving two variables. It's going to be a differential equation, volume, and energy density. Let's rewrite it. We'll just rewrite it on this board for a minute. vd rho is equal to minus one plus w. Now, that's just a number. One plus w is just a number, rho dv. 
And now, let's regroup the equation so that on one side, I have everything involving rho. And on the other side, I have everything involving volume. To do that, I just divide by rho to get all the rho dependence on the left. Let's do it. d rho over rho. Dividing by rho got rid of the rho on the right. Now, I divide by the volume. And here's our equation. d rho over rho. What is that? Indeed, it's the differential of the logarithm of rho. d rho over rho is the differential of log rho. What about dv over v? Differential of log v. So we can integrate this equation, and it just says that rho is proportional... Sorry, it says that the logarithm of rho is equal to minus one plus w times the logarithm of v. You can add a constant, a numerical constant to it. Let's just put the constant over here. Now, how do you solve this for rho in terms of v? You just exponentiate it. What does it say? This says that rho is equal to, the minus sign here, it gives you a one over volume to the power one plus w. And again, a constant up here, different constant. Just take the logarithm of both sides, logarithm of rho on the left and on the right minus one plus w times the log of v. But volume, the volume of this box, if we think of it as a box which is expanding with the general expansion of the universe, the volume of the box is just basically a cubed. It's proportional to a cubed. Apart from a constant, it's proportional to a cubed. So what this is telling us now is how rho varies with the scale factor. Let's write down the equation. It says that rho is equal to some constant, which we'll worry about another day, divided by a, the scale factor, to the three times one plus w. That's what thermodynamics, or that's what the thermodynamics of a nice homogeneous material would tell us. That's how rho varies with scale factor, the constant being some constant. And again, you can say that the constant is just the value of the energy density when a is equal to one. Okay, let's see what it says. Supposing w is equal to zero, that's the case of matter dominance. If w is equal to zero, that's the top case up there. This just says rho is equal to a constant over a cubed. That's this. What if w is equal to one-third? One plus one-third is four-thirds, four-thirds times three is just four. So that's this equation over here. So you see there is a general framework in which these things emerge, and really what you need to know is w. To describe a cosmology based on some sort of energy, what you need to know is the equation of state, and very little more, very little other than that, the equation of state in the form pressure is w times energy density. Any questions? I think this is a natural place to stop, because I think if we go past this, I'll overload you. Next, yeah. Go ahead. In the radiation case, what is the radiation mostly? It's mostly the microwave background. It's mostly microwave background. Yes, yes, almost all of it. But by an overwhelming factor, it's the microwave background. Other photons, even sunlight and starlight, are only a tiny, tiny fraction of it. Yeah. So it's almost all the CMB, the cosmic microwave background. And today, at the present time, it's a very, very small fraction of the energy in ordinary atoms, which in turn is a somewhat small fraction of the dark matter. So most of it is dark matter.
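The chain of steps he just went through, collected in one place as an editorial summary in his notation:

```latex
dE = -p\,dV, \quad E = \rho V \;\Rightarrow\; V\,d\rho = -(\rho + p)\,dV
p = w\rho \;\Rightarrow\; \frac{d\rho}{\rho} = -(1+w)\,\frac{dV}{V}
\;\Rightarrow\; \rho \propto V^{-(1+w)} \propto a^{-3(1+w)}
% w = 0 (matter):      \rho \propto 1/a^3
% w = 1/3 (radiation): \rho \propto 1/a^4
```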
Some fraction of that, 20, I forget, I don't remember exactly something, 10, 20%, is protons, neutrons, atoms, and so forth, and then a very, very tiny fraction of that, roughly a thousand, a thousand cubed, a billionth of it, is radiation today. But if you run it backward into past times, at some point, this becomes bigger than this, and it becomes radiation dominated. So the early universe was radiation dominated. Late universe, today, if it were not for dark energy, if it were not for cosmological constant, it would be matter dominated. Right. And again, if we didn't worry about the cosmological constant, we would say that this parameter K is correlated with the future history of the universe, whether it collapses or continues to expand. Now, I emphasize that that is wrong because of one other ingredient, and that's dark energy. Well, I can tell you, all right, we've gone far enough that I can tell you a little about dark energy. If we don't care where the equations came from, then cosmological constant, Cc, or dark energy, or vacuum energy, all the same thing, is W, let's see what we get. W equals minus one. W equals minus one. A little bit odd, isn't it, that the pressure and the energy density have the opposite sign? How odd is that, incidentally? Can you think of any situation in which pressure might have the opposite sign of energy density? Yeah, I can give you one. Just where the box, let's replace the box just by a line interval now in one dimension, where the box, there's a box, it's just a line interval, well, there's a line interval, and the physics is that the two ends of the line interval are held together by a spring. Crazy, that's not a reasonable description of energy density, but it does have the property that the pressure is negative. Why is the pressure negative? Because it's pulling together, it's not pushing apart. So the pressure is negative, we call it tension. When pressure is negative, it's called tension. It's a tendency to pull together, but the bigger you stretch it, the more the potential energy. So it has the property that increasing the potential energy makes the pressure more and more negative. Is that something close to absolute zero, or is it half with real? No, usually pressure, it can, it can, it can. Yeah, it can, under certain circumstances, it can go in that direction. Yeah, as things get closer and closer to absolute zero, you might think, well, it just corresponds to springs, but there's also some quantum uncertainty energy, so it can go either way. But yes, it is possible for pressure to be negative, and it is possible for pressure to increase in the negative sense as the energy density goes up. That's what w equals minus one means. Why, why, why such a tension should exist? That's another question, but let's just examine its consequences. If w is equal to minus one, let's see, here we are over here, w equals minus one, what do we get? Rho is constant, rho is constant. That's the nature of dark energy. It does not change when you expand the size of the box. That's the character of it. The energy density in a box doesn't depend on how big the box is. Just changing the size of the box doesn't do anything to the energy density, it doesn't change the energy in the box, but it doesn't change the density. Why? Because that energy density is a property of empty space, and empty space doesn't dilute when you stretch it. Okay, but let's just, well, I'll tell you what, I think I won't want to go through this tonight. 
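The w equals minus one case he is describing drops straight out of the same scaling law; editorial note:

```latex
p = -\rho \ \ (w = -1) \;\Rightarrow\; \rho \propto a^{-3(1+w)} = a^{0} = \text{constant}
% Negative pressure is tension, as in his stretched-spring example; the energy density of
% such stuff does not dilute as the box, or the universe, expands.
```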
Next time we will examine more critically the reason why the w's have these various values, and then study the behavior of the universe under the various kinds of conditions. In particular, study what happens if we have dark energy. That raises new things that we haven't seen before. I think for tonight I'll take a couple of questions, but then you're getting tired. Yeah. When you talk about matter, do you assume that dark matter scales the same way as regular matter? Yes. Black holes and everything else? Yeah. Anything which can be regarded as non-relativistic particles, but a particle can be a black hole, it can be a planet, it can be a star, it can be anything that's just a chunk of material that is moving slowly relative to the ambient background, and whose energy is mostly in the form of its mass. Yeah. Yeah. Did you say that we did not actually solve for A here, but we did in the case of Newton's equation? Absolutely. Same equation. Yeah, exactly the same equation. The only thing was that the interpretation of this had to do with being above or below the escape velocity. Other than that, the equation is exactly the same. I'll remind you, for the case where k equals zero, I remember the answer very well. In that case, for the matter-dominated world, A went as t to the two-thirds, is that right? I think t to the two-thirds. For the radiation-dominated world, it went as the square root of t. That's still true here if this is zero. That's the flat-space case. If it's not flat, then you have to correct it. Go back to the previous lectures, and these were the equations we studied. Yeah. In order to have both a repulsive force between all objects and an attractive force like gravity, I assume that the law regarding the distance between them has to be different. But the law is different than what? I understand that gravity is an inverse square law. So what is the nature of the repulsive force that dark energy produces? Okay. You can think of it just as an expansion of the universe, which we will work out. But at least at the Newtonian level, you can mock it up by just saying that the gravitational force between every pair of particles has an additional term in it, which is linear with the distance. Force proportional to distance, with a very small coefficient. And we'll go through that. But you can mock it up. You can mock up the effect of vacuum energy, or a cosmological constant, by saying that every particle has a force with every other particle proportional to the distance. If the cosmological constant is positive, then it's repulsive. Yep. And if the cosmological constant is negative, then it's attractive. Needless to say, if it's attractive, then it will cause the universe to collapse even faster than it would have without the cosmological constant. If it's repulsive, it gives a chance for continued expansion even though the universe might be closed. So we'll work that out. Okay. Good. If there are any more questions, let's go home.
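One standard way to write down the Newtonian mock-up he mentions, gravity plus a small term linear in distance, is the following; the identification of the coefficient with the cosmological constant is the usual convention and is supplied editorially, with c = 1:

```latex
F(r) = -\,\frac{G M m}{r^{2}} + \frac{\Lambda}{3}\, m\, r, \qquad \Lambda = 8\pi G\, \rho_{\mathrm{vac}}
% \Lambda > 0 gives the extra repulsive term; \Lambda < 0 gives an extra attraction,
% which hastens collapse, just as he says.
```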
(February 4, 2013) Leonard Susskind introduces the Einstein field equations of general relativity and thermodynamic equations of state to the analysis of the expanding universe.
10.5446/15061 (DOI)
Stanford University. Okay, let's, I want to review a little bit and then discuss the equation of state, two equations of state, well, three equations of state, and where we get our information about them, where our knowledge, not the cosmological knowledge but our knowledge based on basic theory, basic physics, where that knowledge about the equation of state comes from. But before I do, I just want to very, very quickly remind you where we are, where we're going. The basic question that we want to answer in cosmology is what is the history of the universe, and to the extent that the universe can be thought of as homogeneous and isotropic, it really boils down to what is the time history of the scale factor. If we know the time history of the scale factor, we know an awful lot about the history of the universe. We can test it and we can observe it in various ways. And so that's the question. So one, it's not the only question but it is one overriding question that if you want to do cosmology, you better have under your control. What is A of t as a function of time? How does it evolve? I'll just remind you quickly, we studied some models. There was the matter dominated model, and in the matter dominated model, A of t expanded like t to the two-thirds. In the radiation dominated universe, A of t expands like t to the one-half. Both of these are models, neither one of them is exactly correct. Today, at late times, this is almost exactly correct. In very early times, we believe that this was more correct, and there was a transition between them. We talked about it at length. When we get to observational cosmology, we're going to talk a great deal about how we know anything about this, how we know anything about this, and what the various meanings of them are. But not yet tonight. We talked about also the importance of the equation of state, that the radiation and the matter dominated universe are two examples of universes which evolve under different conditions which can be characterized by an equation of state. The equation of state, incidentally, is what tells us how the energy density, which is on the right-hand side of the Friedmann equations, of the cosmological equations, it tells us how the energy density changes with changes in the scale factor. For example, it tells us in the matter dominated case, matter dominated, it tells us that rho is equal to some constant, let's call it rho naught, divided by a cubed. The a cubed is just the volume of a piece of space as it expands. The density is the amount of energy in it divided by the volume. That's just something over a cubed. In the radiation dominated case, which we're going to talk about extensively today, where the equation of state comes from, rho goes like rho naught divided by a to the fourth. This is radiation dominated. Now, the difference between these two originates in the difference in the relationship between pressure and energy density. I'm going to review these things quickly now. The relation between pressure and energy density is called the equation of state in cosmology. It's also part of the equation of state of a statistical mechanical system, which usually involves other variables like temperature. But in cosmology, we make a simplifying assumption that the energy density and the pressure are simply related. And for simplicity, and because it covers a lot of interesting cases, we take the equation of state to be pressure is equal to some number called w.
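As a quick check of the two power laws he quotes, one can put the corresponding densities into the flat, k = 0, Friedmann equation; editorial:

```latex
\left(\frac{\dot a}{a}\right)^{2} \propto \rho
\quad\Rightarrow\quad
\rho \propto a^{-3}:\ \dot a \propto a^{-1/2} \Rightarrow a \propto t^{2/3} \ \ \text{(matter)}
\qquad
\rho \propto a^{-4}:\ \dot a \propto a^{-1} \Rightarrow a \propto t^{1/2} \ \ \text{(radiation)}
```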
And w is a number that characterizes the fluid, whatever it happens to be, times the energy density rho. Let me just go back again and show you how the equation of state tells you these two different types of things. The equation of state for matter dominated, let's start with that one first. The equation of state for matter dominated. Matter dominated just means particles or galaxies, non-relativistic matter, and stuff which is in its own local frame moving slowly. Okay. What is the energy density? The energy density basically comes from just, let's call it the E equals MC squared energy of a particle at rest, the rest energy of the particle. Since we tend to set C equal to one in this class, it's just energy is equal to the mass of a particle. That's the energy of a particle at rest. But if we're thinking about particles which are all moving slowly, let's put back a C squared. Let me put back the C squared for a minute. The energy density is the number of particles per unit volume times the energy of a particle, and it has this big fat C squared in front of it. The C squared is the speed of light, and it gives a huge magnitude to the energy of even a very light particle. A dust grain, because of its mass, a tiny dust grain has enough energy in it to cause a big explosion, okay, if it were annihilated. So the energy density due to the particles is large. On the other hand, the particles are moving slowly. Where does pressure come from? Now, first of all, the particles are moving very slowly by comparison with the speed of light. Their motion is non-relativistic. What does pressure come from? Pressure comes from particles hitting the walls of a system. If we were just to think about a simple ordinary gas in a volume of space, pressure comes just by particles hitting the wall. It's proportional or related to the velocity of the particles that hit the wall. It contains the mass. Mass is important. If a bowling ball hits the wall, it creates a bigger force on the wall than if a ping-pong ball does, but whatever is hitting the wall, it's hitting the wall slowly, because all of these particles are moving slowly, which means the pressure does not contain the speed of light in the formula for it. Or as it contains, it contains the velocities of particles instead. And because it doesn't contain the speed of light, typically the pressure is much, much smaller for ordinary non-relativistic particles, much, much smaller than the energy density. That's the approximation that the pressure is approximately zero compared to the energy density, and that corresponds to W equals zero. So for non-relativistic matter density, W is equal to zero. For radiation, W is equal to a third, and we'll prove that in a little while, but let me just again go back and quickly remind you how we use that. One of the things that you need to know in order to work out the equations of cosmology is how the energy density depends on A. This is the equation that tells us, so let me just go back very briefly and remind you how that worked. We began with a box of gas. Now the equation of state can be analyzed by laboratory methods. Of course, if the gas that makes up the universe is made up out of galaxies, it's not so easy to put a bunch of galaxies in a box. But galaxies are just particles. They're just particles from our point of view. You can put particles in a box, and in the box you can investigate the relationship between the energy and the pressure. That's what we are interested in. 
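To put a rough number on "the pressure is much smaller than the energy density", here is the standard estimate for a dilute gas of slowly moving particles; the symbols n, m, and v are editorial:

```latex
\rho \simeq n\, m\, c^{2}, \qquad p = n\, m\, \langle v_x^{2} \rangle = \tfrac{1}{3}\, n\, m\, \langle v^{2} \rangle
\;\Rightarrow\; \frac{p}{\rho} \simeq \frac{\langle v^{2} \rangle}{3 c^{2}} \ll 1, \qquad \text{so } w \approx 0 .
```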
So let's take a box with a certain pressure. The pressure on the walls of the box, let's take the pressure on this wall of the box here, it's equal to the force on that wall divided by the area. Force per unit area is the thing that's called pressure, or force is equal to pressure times area. Force is equal to pressure times area. Now let's suppose we expand the box a little bit. We expand the box a little bit, increase its volume a little bit. Let's say we increase its volume by increasing this side by amount to dx, keeping everything else the same. What happens to the energy inside the box? Well, if the box is exerting pressure on the walls and we move the walls, then the gas inside the box does some work on the walls. Work is equal to force times distance. So there's a little bit of work that's done, and what's the work that's done? The work is equal to the force on the wall, this is the work of the force, times the distance that it's displaced, and that's equal to the pressure times the area times little dx. The area times dx, this is the area of let's say this side of the box over here, this is the area of this side of the box over here. Box expands. The area times dx, that's the change in volume of the box. So the work done is the pressure times the change in the volume of the box. A times dx is the change in the volume of the box. That's of course a famous equation, I just want to write it down because I like it. What happens to the energy in the box? The gas has done some work. If the gas does work, then the energy in the box must decrease. If something does work, then its energy must decrease. So that means that the change in energy in the box, de, must be minus the pressure times dv. That's our equation. So let's go through it very quickly to remind ourselves how this tells you anything about how energy density scales with scale factor. We start with de equals minus pdv, and then we remember that the energy inside the box, e, is equal to the energy density times the volume. Energy density times the volume is the energy inside the box. And so let's consider the left-hand side. The change in the energy of the box is equal to two terms, just ordinary calculus, the energy density times the change in volume plus the volume times the change in the energy density. Both things change in general. Both things change when you expand the box a little bit. The energy density changes, certainly, and the volume changes. The net change in the energy is the sum of two of them, and that has to be equal to minus the pressure minus the pressure times dv. Okay, but now let's plug in the equation, the hypothetical equation of state, something we haven't really justified yet, but let's plug in our guess for an equation of state, the pressure is equal to w times energy density, and that just changes p to minus the number w times the energy density. Now take all the terms with dv and put them on one side of the equation and all the terms with d rho and put them on the other side. There's only one term with d rho. It's v d rho, that's this term over here, and that's equal to, on the right hand side, minus. We have, in both cases, we have rho dv. From here we get a one, and from here we get a w. So this is a famous equation, well, not yet. It's the preliminary to a famous equation. Let's get all the stuff with rho on one side and all the, one plus w, that's just a number. Remember that one plus w is just a number, whatever it is, it's a number. 
Let's get all the stuff with rho on one side and all the stuff with v on the other side. So that means divide by rho, divide by rho to remove the rho from here and to put it in the denominator over here, and do the same with v. Divide by v to get rid of the v over here and put it over here. The equation is getting more famous, but it's not quite famous yet. Okay. d rho over rho, that's the differential of the logarithm of rho. dv over v, that's the differential of the logarithm of v. So this equation says that the logarithm of the density, of the energy density, is equal to minus one plus w times the logarithm of the volume, or that the energy density is one divided by the volume to the one plus w power. We're allowed to put a constant here. Now it's a famous equation. The energy density, you may not recognize it, but it is famous. The energy density is proportional, with a constant of proportionality, to one over the volume of the box to the power one plus w. But the volume of the box is proportional to the cube. Now, I did this problem by expanding the box along one axis, but you could expand the box uniformly along all the axes and you get exactly the same thing. It was not important that the volume increased by only increasing one dimension here. We could have increased it isotropically. Same equation here. And if we increase the box isotropically, we can think of it that the volume of the box is proportional to the cube of the scale factor. The volume of a box of space is proportional to the cube of the scale factor. And so that is equal to some constant, which we can call rho naught, but I'll just call it constant, divided by the scale factor cubed to the one plus w. Why cubed? Because the volume is the scale factor cubed. If you're one of these crazy people who likes to do cosmology in different numbers of dimensions, then this cubed could become the fourth power, it could become the second power and so forth, but otherwise it would be the same. But if you're a sensible three-dimensional person, this is the formula. And this formula now is famous. Okay. Let's just remind ourselves, again, for matter dominated, where the pressure is almost zero, because things are moving slowly, where the pressure is almost zero, that corresponds to w equals zero. Pressure is equal to zero times the energy density. In that case, we just get rho goes like one over a cubed, and that's this formula over here. For radiation, which I simply told you the answer for, but we'll work it out tonight, for radiation, w is equal to a third. If w is equal to a third, then one plus w is four thirds, and this becomes a constant over a to the fourth. One plus w is four thirds, times three is just four, four thirds times three is four, and we get one over a to the fourth. So okay, that was review. Now let's come to the question of why w is equal to a third for radiation. Radiation is massless particles. Radiation means photons. We could think of it also as electromagnetic waves. We would get the same answer incidentally, but let's think of it as photons. The characteristic feature of photons that makes them different from non-relativistic matter is that the photons are moving fast, and in fact, they're moving with the speed of light. So let's work out the equation of state. Let's work it out in detail, the equation of state for a box filled with photons. Here's our box. It's three dimensional, but I'm not good at drawing three dimensional boxes, so we'll just draw a two dimensional box.
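As a small numerical cross-check of the scaling law he has just re-derived, one can integrate d rho / rho = -3 (1 + w) da / a directly; this is only an editorial sketch, and the variable names and the use of scipy are my own choices:

```python
# A small numerical cross-check of the result just derived:
# integrating d(rho)/rho = -3*(1+w) da/a from a = 1 should reproduce
# rho(a) = rho0 / a**(3*(1+w)).  Variable names are editorial.

import numpy as np
from scipy.integrate import solve_ivp

def drho_da(a, rho, w):
    # Right-hand side of d(rho)/da = -3*(1+w)*rho/a
    return -3.0 * (1.0 + w) * rho / a

rho0 = 1.0
for w, label in [(0.0, "matter"), (1.0 / 3.0, "radiation"), (-1.0, "vacuum")]:
    sol = solve_ivp(drho_da, (1.0, 10.0), [rho0], args=(w,), rtol=1e-8)
    a_end, rho_end = sol.t[-1], sol.y[0, -1]
    exact = rho0 / a_end ** (3.0 * (1.0 + w))
    print(f"{label:9s}  numeric={rho_end:.6f}  exact={exact:.6f}")
```

The vacuum case, w = -1, is included only to anticipate the dark-energy discussion from the previous lecture; the density comes out constant, as expected.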
And it's filled uniformly with lots of photons, and of course this is an instantaneous picture of it, but the photons are whizzing around with the speed of light. What's more, they're bouncing off the walls. They're bouncing off the walls, and we'll assume that when they bounce off the walls, they bounce off and lose no energy, and exert pressure on the walls. We need to know a couple of things, all of which I think we've talked about in the past. Most of all, photons have energy. Let's call, what I'm going to do is pretend, but then I'll tell you why it doesn't matter, I'm going to pretend all of the photons have the same energy. Now for a box of photons in thermal equilibrium, it's approximately true as a matter of fact that they all have roughly the same energy, but nothing I'm doing really depends on that, and we'll see why. But for the moment, let's just pretend they all have the same energy, and let's call it energy per photon or per particle, let's call it epsilon. I'm not calling it epsilon with any deep motive; often epsilon is used for a small number. Well, the energy of a photon is a small number, but that's not why I used it. I used it because it looks like E, but I want to save E for the total energy. So epsilon is the energy per particle or energy per photon, energy per photon on the average. What about the momentum of a photon? We're going to need the momentum. Why do we need the momentum? Because forces, what forces are, is the rate of change of momentum. If I throw a tennis ball at the wall, the tennis ball has some momentum, when it reflects back it has the opposite momentum, there's been a change of momentum of the tennis ball, there's also been a transfer of momentum to the wall, and that transfer of momentum per unit time, transfer of momentum per unit time, is the force on the wall. So we need to know something about the momenta of photons. The momentum of a photon I normally would call P. The problem with P is I'm already using P for pressure. So we're running into this problem that the number of letters of the alphabet is bounded by twenty-six. Therefore I use Greek letters. The momentum of a photon I'm going to call Pi. It is not 3.14159, it's just the momentum of a particle and it's a little vector. It's a vector, it has three components. That's the momentum of a characteristic particle in there. Of course the momentum could be in any direction. And one of the assumptions is if I look in any little volume of the box, on the average the momentum could be in any direction. That is, if I look at the velocity or momentum distribution, anywhere in the box it's isotropic, as many particles going in every direction as in every other direction. And that's a good assumption. That's a fair assumption that can be justified using statistical mechanics. All right, so Pi, the vector Pi, is the momentum; the magnitude of the momentum we can just call Pi or put some bars around it. The magnitude of it or the absolute value of it is just called Pi. It's the magnitude of the vector Pi. And the relationship between the energy of a particle and its momentum, if I keep around the speed of light, then the energy of a massless particle, a photon, is equal to the speed of light times the magnitude of the momentum. Oops, not P. Not P. Pi. Instead of writing the bars, I'm not going to write the bars, I'm just going to write Pi, but when I mean the vector Pi I'll put a little vector symbol on top. So Pi is the magnitude of the momentum, and the energy equals the momentum of the particle times C.
Energy is Pi times C. That's the relationship between the energy of a massless particle and its momentum. And since we set C equal to one, the energy is just the magnitude of the momentum. Next, what about the number of particles, the number of photons in the box? And better yet, the number density. Let's let nu, nu for number. Let nu be the number of particles, number of photons per unit volume, the density of photons. It's not the density of energy. What is the density of energy in this language? The energy per particle times the number of particles. So epsilon times nu would be rho. We'll come back to that. Epsilon times nu would be rho. Let's calculate the pressure now. To calculate the pressure we have to have a proper theory of what pressure is. So here's the walls of a hypothetical box. That's a wall, the boundary of a box. The gas is on the left side of the box, the gas of photons. And let's take a little volume here. I'll tell you what this is. Let's consider a little time interval, delta t. Take a little time interval, delta t. And what I'm going to be interested in is how many particles hit the boundary of the box and transfer momentum to it in the time interval, delta t. Now the answer is a particle will hit the boundary in time delta t if it's close enough. Oh, incidentally, what's the velocity of these particles? One. One. I'll see. Yeah, see. But we'll take it to be one. All right. If the particle is moving horizontally to the left, where does it have to be in here in order that it will hit the boundary within time delta t? The answer is quite clear. If delta x is less than or if delta x is equal to delta t, and I make this little interval here delta x, then any particle moving to the right with horizontal velocity will hit the wall in time delta t. But what if it's not quite moving to the right? What if it's moving at an angle theta? So let's take a particle moving at an angle theta. Then it will hit the wall of the box. Let me get the equation straight. If delta x is equal to delta t times cosine of the angle theta. If the cosine of the angle of theta is one, that means it's horizontal, then the particle will hit the corner of the wall of the box if it's within delta x equals delta t. On the other hand, supposing cosine theta, supposing theta is perpendicular, is vertical. Supposing theta is 90 degrees, what's the cosine of 90 degrees? Zero. And that's of course correct. If the particles are moving almost vertically, they will only hit the box if delta x is very small. They will have to be very close to hit the box in time delta t. So this is the condition. All particles within a distance delta x will hit the wall of the box if delta x is equal to delta t times cosine theta. Now let's take particles moving at angle delta theta. Supposing one particle hits the wall of the box. How much momentum does it transfer? How much horizontal momentum does it transfer to the wall of the box? Well, the magnitude of the momentum is epsilon. Let's call this, yeah, let's say delta pi. The change in the x component, what we're thinking about now is a particle which hits the wall and bounces off. And it transfers some x momentum to the wall. How much? Well, the magnitude of the momentum that it started with was epsilon. That's the magnitude of the momentum. Its component along the x axis is epsilon cosine theta. So that's the component of the momentum. And how much momentum is transferred? Twice that much. Why is it twice that much? 
Because it starts moving with a certain momentum, it bounces back, and the amount the most change of momentum is twice its momentum. So the change in the momentum of that particle along the x axis is twice epsilon cosine of theta. Now let's divide that by delta t. I'll tell you in a moment why we're dividing it by delta t. Oh, yeah, that's right. Twice epsilon cosine theta. Let's divide it by delta t. Why am I dividing it by delta t? Because the force on the wall is the change of momentum per unit time. Is the transfer of momentum per unit time. That's Newton's equations. The change, the force on an object is the time rate of change of its momentum. And so this is the force exerted for each particle that hits the wall. Twice epsilon cosine theta over delta t. Good. Now, how do we find the full force? We have to calculate how many particles hit the wall. This is what we get per particle hitting the wall. How many particles hit the wall? How many particles moving at angle cosine theta hit the wall in a time delta t? Well, a particle will hit the wall if it's within delta x. How many of them are there within delta x? The answer is the number of particles, let's put the number of particles that will hit the wall in that time, is going to be delta x times the area of the wall, times the area. That's the volume of this little region, times the number of particles per unit volume. And delta x, n equals delta x area times the number of particles. And let's see if I left anything out of this. Nothing. So we should multiply this by the number of particles that hit the wall in time delta t. And that's pressure will equal twice epsilon cosine theta divided by delta t times the number which is delta x area number of particles per unit volume. But delta x is delta t cosine theta. So delta x over delta t is cosine theta. So we get a formula. The pressure due to particles moving at angle theta is twice epsilon cosine theta delta x over delta t is another factor of cosine theta, cosine squared theta times nu. There's one mistake in this formula, the factor of two. Why is there a mistake in the factor of two? And the answer is simple. A particle, if it's in here and moving toward the right will hit the wall, but one moving toward the left won't. So half the particles per unit volume are unavailable to hit the wall. Really we should only count those particles whose x component of velocity is toward the wall. The other particles moving in the opposite direction are not going to hit the wall. So we've really overestimated by a factor of two, and that's correct, that is correct. We've overestimated by a factor of two because in putting in here the full number of particles per unit volume, I put too many in. So we just wipe out the two here and now we have a correct formula. One times nu, what is that? The energy per unit volume or rho. We're getting there. The pressure is equal to the energy per unit volume, that's rho, times this cosine squared theta. Now wait a minute, what the hell do we do? We're getting an answer that depends on the angle. But of course this is the pressure due to particles moving at a particular angle. What we need to do is integrate up the effect of all the different angles that the particle could be moving at, or better yet, we can ask, it's equivalent, what is the average of the square of the cosine of theta for the particles? If we average over all particles, what is the average value of the value of cosine squared of theta? 
There are particles moving at theta near zero, there are particles moving near theta equals pi, there are other ones here, there are other ones here. What is the average value of cosine squared? If I asked you what the average value of cosine would, you'd say zero, but it's cosine squared. So we have to ask, what is the average value, and here's the problem. The x-axis here is the one perpendicular to the wall. There are particles flying about at every angle in the room, all possible directions. We want to know on the average, what is the square of the cosine of the angle? It's an easy problem. It's an easy mathematical problem. It has a very simple solution. Here, let's leave that there. It's less than one, isn't it? Why is it less than one? Because the cosine never gets bigger than one. Okay, so it's surely less than one, but we can calculate it rigorously. We suppose every particle, the direction of every particle is characterized by a little unit vector in three dimensions. The unit vector, let's call it n, and it has three components, nx, ny, and nz. And it represents the little unit vector along the direction of motion of the particle. Three components. Here's the x-axis, and I maintain that nx is just cosine theta. I think that's obvious that the x component of the unit vector is just cosine theta. Now here's something which is true. nx squared plus ny squared plus nz squared equals one. That's just the fact that this is a unit vector. Now let's average. Let's average this equation over all possible directions. What'll that give us? That will give us the average of nx squared plus the average of ny squared plus the average of nz squared. But nx squared, ny squared, and nz squared, they're all equivalent. They're just related by rotation. If the gas is isotropic, locally isotropic, so that the velocity distribution is the same in every direction, then the average of nx squared, let's average it. It just means average of nx squared plus ny squared plus nz squared is just one. If they're all equal, that tells me that the average of nx squared is just one-third. If there were four directions of space, it would be one-fourth. If there were two directions of space, it would be one-half. If there was only one direction of space, it would be one. So what have we found? We found that the average of the cosine squared of theta is equal to one-third. Pressure equals one-third rho. That's the derivation of the equation of state for radiation. Oh, did I really make any mistake when I said all the particles have the same energy? No. This could be thought of as the contribution from particles of a given energy, but for every energy, each contribution is such that the pressure from that contribution is equal to the energy density from that contribution. If you add them all up, it doesn't matter. You get the same answer. So yeah? This sounds like it's for a box that has perfect mirror balls. Yeah. If it has a black wall that absorbs and then remits the photon, it's the same as it used to. If it has a black, yes, the answer would be the same for a black wall which remits the photons, but you might ask, what wall are we talking about? What wall are we talking about? There's no wall out there in space, so there's no... These photons are not reflecting off a wall, nor are they being absorbed by the wall. What are they really doing? They're going right through. They go right through the wall. But on the other hand, for everyone that goes through, on the average is one coming from the opposite direction. 
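Before going on, here is the angular average and the resulting equation of state in symbols; this simply restates the isotropy argument and the per-angle pressure from the steps above:

\[ \langle \cos^2\theta \rangle = \langle n_x^2 \rangle = \tfrac{1}{3}\,\langle n_x^2 + n_y^2 + n_z^2 \rangle = \tfrac{1}{3} \]
\[ P = \rho\,\langle \cos^2\theta \rangle = \tfrac{1}{3}\,\rho \qquad (\text{i.e. } w = \tfrac{1}{3} \text{ for radiation}). \]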
So on the average, the wall of the box really does behave as though the particles that go out, you lose their momentum. The particles that come in, you get their momentum, and it really does behave the same way. So it really doesn't matter what your model for the origin of pressure is. It's always the same. Radiation pressure is a third energy density. This is just a little example in a box with reflecting walls, in a box with absorbing walls that re-radiates. If you're in thermal equilibrium, then it's the same. The radiation in the universe is mostly, almost all, the microwave background, and it is in thermal equilibrium. So okay, that then ties up a bunch of little ends. It ties all of this together. We now understand this, we understand that, we understand that, we understand that. And we're ready to move on to new kinds of equations of state. Yes? Question? Student 1, did the results, any of the results change if we use quantum statistics? Professor David L. No, no, no, no, none at all. Right. Yeah, go ahead. The cross-section of the protons are hitting each other, it's really, really small. It does make a tiny, tiny change in the equation of state. It does. But it's really, really tiny. It actually depends on the temperature and density. If the temperature and density are high enough so that when the particles, when the photons collide, they can produce electron-positron pairs, then the equation of state will change. And that's the main effect. That's the main effect, but that's exceedingly high temperatures, exceedingly high, way, way, way beyond. And the temperatures that we're talking about are very low. Well, by comparison with that, they're very, very low. So the cross-section for two photons to interact and to scatter off each other is negligible. But it, in principle, would affect things, yeah. Yeah? Student 2, did you see the area? Oh. What happened to the area is when I calculated the pressure, it's the force per unit area. Did I leave it out? Did I throw it away somewhere? Oh, yes. Oh, over here. Pressure, pressure, pressure. This should have been force. Good. This should have been force. Total force was the force due to one particle times the number of particles that hit it. And then force divided by area is pressure. So let's divide out the area. Does that answer your question? Yeah. Did that answer the question? Yeah. Okay. My, my own mission. Good. Any questions? You find that hard or easy? Not too bad. Okay. Now, can energy density ever be negative? Yeah, under certain circumstances it can, but not under any circumstances we will ever be interested in, or at least not for the moment, are energy density, well, I take that back. Yes, energy density can be negative. No, I take it back. It can be negative. And we'll even talk about negative energy densities. But more familiar, pressure can be negative. Okay. So let's discuss under what circumstances pressure can be negative. Are there any pressure on it? Any situations where pressure is negative? Yes, yes. If pressure has another term, another name, it's called tension, or in particular in one dimension, think about a one dimensional world, is the one dimensional world, we make a box in the one dimensional world, a box is just a little line interval, and we can imagine particles flying around in it. Particles flying in around in it will exert, flying around and bouncing off it will exert forces on the wall, and when they bounce off they will obviously push the wall out and correspond to positive pressure. 
Does it make any sense to think about negative pressure? Sure it does. Here's an example of negative pressure. Instead of particles flying around, imagine that the two ends here were connected by a string or a spring, just imagine there was a spring in space that connected the two ends of the box, when you pull them apart the spring pulls back. It doesn't push the walls of the box out, it pulls walls of the box in. That's tension. The tension of the spring is effectively a negative pressure. When you, for positive pressure, when you increase the size of the box you do some work on the wall of the box and the energy decreases on the inside. For tension, when you pull against it you increase the energy on the inside. So if for some reason you had negative pressure, but let's say positive energy, then W would be negative. That's a possibility. We should think about it. If it wasn't absolutely central to the cosmology I wouldn't even be telling you about it, so it is central that pressure can be negative, even if energy density is positive. Pressure tends to be, pressure will be positive if you have a bunch of particles moving around which don't interact with each other very much and mainly just bounce off the wall. If the particles are attracting each other, they're pulling themselves together and if they also attract the wall, they will pull the wall in. So there are certainly circumstances where pressure can be negative. And as I said, it corresponds to tension. We're going to talk about an example called vacuum energy where pressure can be negative, where pressure typically is negative. Okay, so let's talk about a special kind of energy density that's called vacuum energy. It is a consequence of quantum field theory, but we don't need to know where it comes from to describe it. Vacuum energy is just an energy that we assign to empty space. We don't need to know where it comes from just to say, I can, in my bookkeeping, not in my bookkeeping, I can conjecture that just empty space with nothing in it has energy. Now we know where that energy comes from in quantum mechanics. It comes from zero point energy of fluctuation. It comes from zero point energy of harmonic oscillators which represent the quanta of the field. We know where it comes from. But whatever it is, it's energy that's simply there in empty space. It's as if this blackboard had a uniform energy density on it and nothing I would do, well I could put some extra particles in, but nothing I could do short of putting in more material and so forth will change the energy of that blackboard. It's just a fixed thing that's there. Now I take a box. It could be a box with fictitious walls or it could be a box with real walls. How much energy, vacuum energy in this case now, is there in the box? The answer is whatever the vacuum energy density is times the volume of the box. Vacuum energy has the special property that the vacuum energy density is universal constant. It does not change when you change the size of the box, the density. It's just a characteristic of empty space and as long as the box only has empty space in it, the vacuum energy density is fixed. Yes? So when you make the box you limit the number of modes that can be inside, can be outside. So it would seem like a density would be less inside than outside. That's why the energy is less than the energy in all of space. 
If I didn't limit the number of modes, I would just be talking about the energy in all of space, and surely the energy in the box is less than the energy in all of space. No, the energy density doesn't change. It's the total energy which changes because of what you said. There is a Casimir force, but that is only important when the walls of the box are really, really close together. Other than the Casimir force, and that's not important unless the distance between the walls of the box is comparable to or smaller than the wavelength of the radiation, really, really close, it's not important for our purposes. For our purposes now we're just talking about an energy density which is there. It's always there no matter what we do, and the box doesn't affect the energy density. Okay, let's give it a name. Let's see. Let's call it rho naught. Rho naught stands for the vacuum energy density, and I'm going to also relate it to lambda. Rho naught, let me get it straight. Rho naught is equal to another constant. It's just another name for it, lambda. But I'm going to put a factor in. The factor is 3 over 8 pi g. Energy density is energy density. We know what we mean by it. Three and 8 pi g are constants. This defines lambda. It is the definition of lambda. There's a name for lambda. It's called the cosmological constant. The relation between the left side and the right side is trivial. It's just the definition. You'll see why it's useful to define lambda. You know what? It's useful. Why is it useful? I'll remind you why it might be useful. Let me just remind you about the Friedman equation. The Friedman equation says that a dot over a squared is equal to 8 pi g over 3 times rho, plus maybe 1 over a squared or something. But it always comes in as 8 pi g over 3. Well, 8 pi g over 3 times rho. 8 pi g over 3 times rho naught is equal to lambda. That's why the 8 pi g over 3 is there. Lambda appears nicely in the equation. Rho appears less nicely. But nevertheless, let's just think of rho as energy density. Okay, incidentally, for vacuum energy, we know immediately what the relation between the vacuum energy density is and the scale factor. There is no relation. No matter how big or small you make the box, the vacuum energy density is always the same. It's a universal energy density in the vacuum, and it doesn't change when you change the size of the universe, the density of it. So we already know the answer to how it varies. But let's just ask for fun about its equation of state. What kind of equation of state does it correspond to? And it does correspond to an equation of state. Let's go, let's work this backward. Work this backward now. And let's work it backward for the special case of a vacuum energy density. So the energy is equal to rho naught times v, and rho naught does not change. So d rho is zero, and dE is just rho naught dv, and that's equal to minus w, rho naught, dv. Can you read off what w is? Not very hard. Rho naught dv is minus w rho naught dv. With a little bit of calculation, maybe a half an hour's thought, w is equal to minus one. W is equal to minus one for vacuum energy. You can read in various places that astronomers are measuring w, and they're discovering that w is close to minus one. This is what they're talking about. They're talking about vacuum energy. The closer the experimental evidence gets to w equals minus one, the more it says that the energy density of the universe is like vacuum energy. It doesn't dilute. It doesn't dilute when you expand space.
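The w equals minus one argument, written out in symbols; again this only restates what was just said, using the definition of lambda given above:

\[ E = \rho_0 V, \quad d\rho_0 = 0 \;\Rightarrow\; dE = \rho_0\,dV \]
\[ dE = -P\,dV = -w\,\rho_0\,dV \;\Rightarrow\; \rho_0 = -w\,\rho_0 \;\Rightarrow\; w = -1, \]
\[ \text{with } \lambda \equiv \frac{8\pi G}{3}\,\rho_0 . \]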
It doesn't dilute because it's a property of empty space to begin with. All right, that's vacuum energy. It can be positive or negative. In either case, the pressure and the energy density have opposite sign. That's the meaning of w equals minus one. If the energy density of the vacuum is positive, the pressure is negative. If the energy density is negative, the pressure is positive. That's a characteristic of vacuum energy. After a while, when you think about it, it becomes familiar, and it's something that's not all that crazy. But when you try to think about it in terms of particles and in terms of the usual things that you're used to thinking about causing pressure, first of all, negative pressure may seem a little odd. But especially odd is this fact that the energy density and the pressure have opposite sign, but what it comes down to is just this little derivation here. Negative pressure, positive energy density, or the opposite. That's the equation of state for an empty universe if there is a energy density. What the value of rho-nought, yeah, this is the equation of state for an empty universe, assuming that it's governed by vacuum energy. What the value of rho-nought is, that's something we don't know how to compute. There are too many contributions to it. They come from all sorts of quantum fields that we may not have discovered yet. They come from high energies, they come from low energies. One would have to have a pretty exact theory of all of the quantum fields in nature to be able to compute what rho-nought is. We haven't got the vaguest idea of why it is what it is, the numerical value of it. We'll talk about the numerical value of it. What I will tell you, it's extremely small, but what are the implications of it? What are the implications of it? It's a form of energy density in the vacuum and it competes with the other energy densities. But let's study the special case where the only energy density in the universe is vacuum energy. Just like we studied the pure matter-dominated case, and then we studied the pure radiation-dominated case, and then we mixed the two of them, and we said radiation dominates early, matter dominates late. Let's isolate out just what pure vacuum energy density would do. Let's go back to the equations governing the expansion of the universe and see how vacuum energy would influence things. There are two cases, well there are actually six cases. The six cases are lambda, which is proportional to the energy density, is equal to positive or negative, plus or minus. Of course there's an infinite number of cases. When I say plus or minus, it could be any number, any value, but let's distinguish positive and negative energy density and the three possible values of k. Remember what k is? k is the curvature of space. It's either plus one, minus one, or zero. Plus one for spherical space, zero for flat space, minus one for hyperbolic space. We have k equals minus one, plus one, or zero. That makes six cases all together. What are the equations? The equations are the good old Friedman equations. Let's write them down. A dot over A squared, which also I'll remind you of the square of the Hubble constant, instantaneously. A dot over A squared is equal, first of all, 8 pi over 3 g times the energy density, but now the energy density is just the constant energy density of the empty vacuum. Then minus k, plus or minus one, or zero, divided by A squared. That's our equation, and that's the equation we'd like to solve. 
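For reference, here is the equation just written down and the case list, exactly as set up above, with no new assumptions:

\[ \left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho_0 \;-\; \frac{k}{a^{2}}, \qquad \rho_0 = \text{vacuum energy density}, \]
\[ \text{six cases: } \rho_0 \ (\text{equivalently } \lambda) \text{ positive or negative} \;\times\; k \in \{+1,\ 0,\ -1\}. \]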
Before I do so, let's just take advantage now of our definition that 8 pi g over 3 times rho naught is called lambda. That's why lambda was introduced. It was introduced to get rid of this nasty 8 pi g over 3 and just call the whole thing lambda. It's called the cosmological constant. It was introduced by Einstein, who later rejected it. Then famously, oh, it's also called dark energy. It's also the thing which the newspapers call dark energy. Dark because it doesn't glow. As I said, we can have lambda equals plus, minus, and lambda can also equal zero, incidentally, plus, minus, or zero, but we can take the various cases. If it's zero, we've already done the various things. Let's start with lambda equals positive. Let's take the case lambda equals positive and also k equals positive. There are fewer cases that are relevant. I'll show you some cases which don't make any sense, first of all. Supposing lambda is negative and supposing k is positive. Both terms on the right side of this equation are negative, but the left side is positive, and so it can't make sense. The case lambda negative and k positive, that doesn't make any sense. There's a number of other cases that don't make any sense, or at least that don't have any solutions. Let's take one which does have a solution. The simplest case, this is by far the simplest case, let's take k equals zero. We'll come back to the other cases. Let's take k equals zero. Just a flat universe. Space is flat and the scale factor satisfies this equation here. Let's solve it. To solve it, we take the square root, a dot is equal to the square root of lambda times a. All right, so what's the solution? a dot means dA dt. Let's write it out. dA dt, and this is the equation such that the time rate of change of something is proportional to that something. What's the solution of such an equation? Exponential growth. Notice that if lambda were negative, we would be having a problem here immediately. Would make no sense. Lambda being negative and k equals zero, no good. It doesn't make sense, no solution. But lambda positive and k equals zero, there is a solution, and what is the solution? The solution is that a grows exponentially with time. a is equal to some constant. It doesn't matter what constant you choose. It actually doesn't matter. They all give the same answer for the geometry, times e to the t, but what's the coefficient in front of t? Square root of lambda, right? That's an interesting case. The universe exponentially expands. So that's a consequence of vacuum energy, positive vacuum energy, and we're doing the case with no curvature, k equals zero. In that case, the universe exponentially expands. Let's calculate the Hubble constant. Remember what the Hubble constant is? The Hubble constant, oh, I don't have to calculate the Hubble constant. a dot over a squared. a dot over a is the Hubble constant, and it must be the square root of lambda. So the Hubble constant in this case, not generally, but in this case, the Hubble constant is just the square root of lambda. And we can also write that the scale factor exponentially increases, some constant, doesn't matter what constant, c, e to the Hubble constant times time. This is a space time which exponentially expands and is called de Sitter space. De Sitter was a Dutch physicist and astronomer, and he discovered this solution of Einstein's equations with a cosmological constant. It's named after him. We still call it that. He discovered it sometime, I think it was about 1917. I'm not sure. Very, very shortly after Einstein.
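A quick check of the flat, positive-lambda case just described; this only verifies in symbols the exponential solution stated above:

\[ k = 0: \quad \dot a = \sqrt{\lambda}\;a \;\Rightarrow\; a(t) = C\,e^{\sqrt{\lambda}\,t}, \qquad H = \frac{\dot a}{a} = \sqrt{\lambda}. \]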
And this is one version of the sitter space, exponential expansion. Yeah? Student question. The Hubble constant is actually a constant. Right. That's right. This is the unique geometry with a Hubble constant which is constant. That's a little bit, this is a little bit ambiguous and I will try to explain to you why it's ambiguous. Let's hold off on that. This geometry, if you want a technical set of words, this geometry is not geodesically complete. There are trajectories back into the past which go to the infinite past and a finite proper time. That means some of the geometry is missing. But we'll take that up separately. We'll take it up when we get to it. And it's a problematic question of whether there's a big bang in this kind of space or not. However, this kind of space doesn't exist by itself. There's no reason why we shouldn't put back other kinds of matter into it. And when we do put other kinds of matter into it, things change. In particular, they change at early time. Let's see. Let's imagine, let's see what goes on here. Let's put in rho nought, which is just lambda. But let's also put in some other kinds of matter. Other kinds of matter might be some radiation. That would be some constant over A to the fourth. So we would be adding to the cosmological constant some matter. Now the very early universe is the time when the universe was small. The late universe is the time when it's big. When A is very large, when it's big enough, this will become smaller than lambda. And eventually when the universe gets big enough, lambda will dominate and the universe will exponentially expand. On the other hand, very early times is when A is small. When A is small, it will be more important than lambda. So very early times, the vacuum energy is not important. Early late times, it dominates everything. That's why when we make our observations, we're in the process, meaning to say the universe is in the process now, of making a transition from being matter dominated. Let's put a cube here. And in the transition region, we're these two are more or less competing with each other. So we're not yet seeing genuine exponential expansion. It's too early. There's still competition from this term over here, even though this one is bigger. Well, they're more, this one is a little bigger. This one's bigger. But they're competing. But we're beginning to see a transition from this behavior here to this behavior over here. That's what these curves that I've drawn repeatedly look like. They show something which looks very much at early times like matter dominated. But over the last one or two or three billion years, we begin to see a deviation from it, and the deviation is pointing in this direction. Why is it called accelerating? It's called accelerating for the simple reason that if A increases exponentially and we calculate the acceleration, that just means the second time derivative, it's also increasing exponentially. The derivative of an exponential is just another exponential. The second derivative is just another exponential. And so the universe is not only expanding, but it's expanding in an exponential way, but in an accelerated way. It could be accelerated without being exponential, incidentally. But okay, so what's the truth? The truth is that observation at the present time confirms acceleration. More precision we get, the more it looks like it's beginning to exponentially accelerate. Okay, any questions? Yes? 
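A sketch of the competition just described, with the 8 pi G over 3 factors absorbed into the constants; the name c_M for the matter constant is not used in this lecture and is introduced here only to make the statement concrete:

\[ H^{2} = \left(\frac{\dot a}{a}\right)^{2} \simeq \lambda + \frac{c_M}{a^{3}} \qquad (\text{radiation and curvature ignored}), \]
\[ \frac{c_M}{a^{3}} \gg \lambda \ \text{at small } a \ (\text{matter dominated}), \qquad \lambda \gg \frac{c_M}{a^{3}} \ \text{once } a^{3} \gg \frac{c_M}{\lambda}, \]
\[ \text{and for } a \propto e^{Ht}: \ \ddot a = H^{2} a > 0, \ \text{i.e. accelerated expansion.} \]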
So you seem to say that the positive vacuum energy could be associated with the ground state of quantum fields. So then how do you explain negative vacuum energy? If you calculate the vacuum energy of a quantum field, it will be positive for bosons and negative for fermions. So it's just a fact of mathematics that the vacuum energy for bosons is a half h bar omega for each fluctuating mode and it's minus a half h bar omega for each fermionic mode. But we're not going to try to answer the question where the vacuum energy comes from and what it's due to. What is much weirder than having vacuum energy is having no vacuum energy. There is no known theory, no known theory that's in any way consistent with the world as we know it, which would predict zero vacuum energy. So when people talk about the mysterious dark energy, what they should be saying is it's very mysterious that there's so little of it. We can discuss in what sense it's numerically very small, but it must be numerically very small in some sense if it took so long to discover it. Really? No, no, I mean that. I mean that. If it were big enough to cause an exponential expansion that we could see in this room, of course it would blow everything apart, it would be a disaster for us, but if it had any appreciable size then we would have discovered it. We did discover it, but it was very hard to discover and it took enormous, enormously big telescopes seeing to the end of the universe in effect and that's an indication that in some sense it's a very small number and it is. Yeah? So you think it's problematic to extrapolate that back in time to the Big Bang, what about extrapolating forward to a big rip or something like that? Well this big rip, I never followed very much about the big rip, it seemed to me one of these ideas which the press liked more than any physicist I knew, but the big rip as I understand it is what would happen if W was even more negative than minus one. And there's no known sensible theory where W was more negative than minus one. Nevertheless we could talk about it, but I'm not sure what you had in mind when you asked me the question. Just if it was consistent with the big rip with W equal to negative one as it is. I think the rip is not W less than minus one or is minus two or something like that, at least if I'm getting my terminology right, I never paid too much attention to it, maybe it's worth paying attention to. Incidentally, experiment focuses, is focusing in more and more on W equals minus one. It's within, you know, it's somewhere between minus 1.1 and minus 0.9 and with diminishing error bars, but I think it will never be a high precision number. I think they can narrow it down more. Yeah? If you had exact supersymmetry, then an omega equals 0. It could. It could. It could. That's correct. That's the reason, because every fermion comes along with a boson, so they cancel each other exactly. When I said, when I chose my words I was thinking about exactly that. I said no theory that agrees with everything we know about nature. And what we know about nature is that fermions and bosons are not exactly matched. So, you know. What's the story with this factor of 10 to the 21 discrepancy between the calculated 0.9 and the measured expansion rate? 10 to the 121. Sorry, I didn't saw. A lot. Yeah. Well, there's only one, most of the constants of nature that we usually call constants of nature are not terribly fundamental. The mass of the electrons thought to be just a sort of consequence of more complicated crap. 
There are, the really fundamental constants of nature are C, speed of light, Planck's constant, and the gravitational constant. Why do I say they're fundamental? Because there's a sense in which they're about universal things. Yeah, I mean, let me slow down and discuss this a little bit. What is universal about C? Nothing. Nothing in the world can move faster than speed of C. No signal can go faster than the speed of light. So, it really does have a universality to it. It's not conditional on saying, well, we're going to be using oatmeal to send messages. It's fundamental. You can't get past it. What about Planck's constant? Planck's constant is also universal. It has to do with the uncertainty principle. No matter what object you're talking about, it doesn't matter if it's a bowling ball or an electron, uncertainty in position times uncertainty in momentum is always greater than or equal to Planck's constant, period. So it has a certain universal aspect to it. The Newton constant is also very universal. Again, think of the law of gravity. All objects, all exert forces between them gravitationally, which are equal to the product of their masses, the distance between them squared, and Newton's constant. So it's the use of the word all in all of those three cases, which says that there's something deep and fundamental there. There are other constants that we sometimes talk about. Let's say the ratio of the electron mass to the proton mass. It is probably true that all protons and electrons have the same ratio at the mass, but there's zillions of particles, lots of different particles. The ratios of masses are not in any special way universal. So we tend to think of G, H bar, and C as very fundamental. Now out of G, H bar, and C, out of those constants, you can make an energy density. First of all, you can make a unit of energy. It's called the Planck energy. The Planck energy corresponds to the energy of a mass of about 10 to the minus 5 grams. In other words, a microscopic mass, well, a microscopic, macroscopic mass, not like an elementary particle, but like a little bit of dust. A little bit of dust if it were to annihilate. The energy that would be released is the Planck energy or the Planck mass, and it's about a tank of gasoline or something like that. That's a lot of energy. There's a Planck length. The Planck length is a very small length, and there's a Planck time. They are the units of length, time, and mass that you can make out of G, H bar, and C. Or another way of saying it is that the units of length, mass, and time that correspond to setting G, H bar, and C equal to 1. Now if you can make a unit of length, Planck length, and it's 10 to the minus 33 centimeters, you can also make a unit of volume. The unit of volume is 10 to the minus 99th centimeters. If you have a unit of mass, you have a unit of energy. It's one Planck energy. And then you have a unit of energy density, one Planck mass per cubic Planck length. That's the natural unit, the universal unit, or the unit that, and incidentally how big is that? Let's see. The Planck volume is tiny. The Planck mass is pretty big. That's a huge energy density. Vastly, vastly bigger than anything that we ever experience in the ordinary world. But on the other hand, when I say it's huge, I mean it's huge by comparison with us ordinary creatures. It's the unit of energy density. It's the only unit of energy density that occurs in very, very basic physics. How big is the vacuum energy density? 
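For orientation, here are the standard numbers behind these statements; the lecture only quotes the rough values (about 10 to the minus 5 grams, a tank of gasoline, 10 to the minus 33 centimeters), and the rest are standard values added here:

\[ \ell_P = \sqrt{\hbar G / c^{3}} \approx 1.6\times10^{-33}\ \text{cm}, \qquad t_P = \ell_P / c \approx 5.4\times10^{-44}\ \text{s}, \]
\[ m_P = \sqrt{\hbar c / G} \approx 2.2\times10^{-5}\ \text{g}, \qquad E_P = m_P c^{2} \approx 2\times10^{9}\ \text{J} \ (\text{roughly a tank of gasoline}), \]
\[ \ell_P^{\,3} \approx 4\times10^{-99}\ \text{cm}^{3}, \qquad \frac{E_P}{\ell_P^{\,3}} \approx 5\times10^{113}\ \text{J/m}^{3}. \]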
The vacuum energy density is nowhere as near as big as that, and it's about 123 orders of magnitude smaller. So from somewheres unknown to us, there is a tiny, tiny energy density of the vacuum, which is 123 orders of magnitude smaller than what we might guess. Nobody knows how to calculate it. But what we might guess offhand is that it would be approximately one in natural units. Well, we would never have guessed that because we never would have been here to guess it if it were true. But if you were to take your random guess about what the set of laws of nature would produce, it would be 123 orders of magnitude bigger than what we see. So for this course we will not, well maybe we will ask the question, but at least at the present time we won't ask the question where this vacuum energy comes from. But we should take note of what is mysterious about it. What is mysterious about it is not that it's there, it's that it's almost not there. It's the lack of it, which is the mysterious fact. Okay, yeah. I want to do another example, but another case. You were saying what's surprising about it is that how little there is of it or whatever. But when they discovered it, they were sort of surprised it was even there at all. That was more psychological than anything else. At some time in the history of astronomy it was possible to try to detect it, let's say, at the level of 10 to the minus 100. 10 to the minus 100 is rather big incidentally, I think it's probably much too big. But at some time in the history of astronomy, 10 to the minus 100 could be discovered, but not 10 to the minus 101, it was too small. Astronomers did not discover it at the level of 10 to the minus 100. It seemed to be zero at that level. So they pushed ahead and they looked for it at the level of 10 to the minus 101, still zero. 10 to the minus 102, still zero. 10 to the minus 103, 10 to the minus 120, still zero. 10 to the minus 121, still zero. You got the feeling that maybe this thing really is zero for reasons that we don't know. This was an attitude that had affected almost everybody in both the physics, astronomy, astrophysics, community. Einstein himself didn't, well he didn't think he thought of it in different ways. But it was just the fact that it was so small and each time another decimal was added to the knowledge of it, it was still zero, it just got people convinced that it might be, that it was zero. There must be some reason. You know, the logic was a crazy logic. The cosmological constant seems to be exactly zero. It must be a consequence of the right theory of nature. If we had the right theory of nature, and of course everybody knows the right theory of nature is string theory, and therefore it must be a consequence of string theory, ha! We just explained it. That mental thing was really there. It really was there. That since we now have a theory of gravity and quantum mechanics, and we know that the cosmological constant is zero, then it must predict it, and if it predicts it, we win. It's really successful. The best-laid plans of mice and men, you know, and it didn't turn out that way. Right. Student question. If our cosmology is trying to determine if that vacuum energy value could change over time. That's measuring W. Now, you can't, obviously you can't discover whether it will make some sudden or semi-sudden change after a trillion years. No way to do that. 
But you can try to discover whether over the last billion or two billion years it might have changed by a small amount, and that's equivalent to measuring W more precisely. So W is measured to about 10% now, and it's minus one to within 10%. That's evidence that at least over the relatively short term, a few billion years, it hasn't changed much. It hasn't changed by more than a few percent. I think we'll never be able to nail it completely. Does the dimension that the observer is in make a difference in the value of W? And since we're inherently in the third dimension at the present, does that explain why we have that disparity? Like if we were in the fourth or say 11th dimension, would we see these things differently? No, not really. They're pretty similar. It doesn't depend much on dimension. So the shift that we see in the expansion universe wouldn't be any different if we were in the 11th dimension and not in the third. It'd be pretty similar. Details would be different. General pattern would be the same. All right, let's do another case just for fun. Let's do lambda positive. I'm not sure what my equation is. Empty board, okay. Let's do the case lambda positive. But instead of k equals zero, let's do k equal to plus one. So this is the spherical universe, the positively curved universe, but with a positive cosmological constant. Let's try that one. That's an interesting case. All right, here's how we work it out. Here's how we can intuit the rough properties of it, and then I'll give you the exact solution. A dot over A squared is equal to the energy density rho, and then if k is equal to plus one, we get minus k over A squared. That's our equation with k equal to plus one. Oops, this is not right. I don't mean rho here, do I? I mean 8 pi g over 3 times rho. So that's the same as lambda. So here's our equation, and let's see if we can make some sense out of it. One way to make sense out of it is to try to see if it has the same structure as an equation where we may already have a great deal of familiarity with. Yes, this does, and I'm going to show you what the first multiply it by A squared. Multiply everything by A squared, and we get A dot squared is equal to, and let's write minus, minus lambda A squared. I've transposed this to the left side equals minus one. You got that? Did I do it right? Yeah. Yeah. Yeah. Okay, now think of A as the coordinate of a particle. All right, it has to be positive. A is always positive, but let's ignore that for a moment and just think of it as the coordinate of a particle on a line. We usually call it x, but let's just call it A. This would then be proportional to the kinetic energy of the particle, A dot squared. On the other hand, this minus lambda A squared could be like a potential energy. Potential energy, what would the potential energy be? It would be minus the constant lambda times A squared. That would be a potential energy of a slightly unusual kind. It would be negative because of the minus sign, and it would increase with A squared. So it would be an upside down parabola. A dot squared minus lambda A squared equals minus one. What does that say? That says the total energy kinetic of this fictitious particle, the total energy kinetic plus potential is just equal to minus one. So the vertical axis here is energy. Why is the vertical axis energy? Because it's potential energy. Imagine that you had a particle now moving in this kind of potential which had a total energy equal to minus one. That's over here. How would that particle move? 
Well, there's pretty much only one kind of motion it can have. You can throw it in from far away. It'll hit the wall over here and bounce back. Think of this as a high hill. You can throw something up the hill. It will get so far and then it will go back down the hill. Or you can start it. Another way to think about it is start it with total energy equal to minus one. That's over here. And let it go. It'll just fall back down. But then you can say let's take the falling down, reverse it, and think of a kind of bounce where you go up, come to a rest, and go back down. A gets bigger and bigger or A starts big, shrinks, comes to rest, and then goes back down. That's what this equation is describing. If we solve this equation, basically it is the equation for a motion of a particle that comes up a hill and back down a hill. What does A look like as a function of time? If I plot time this way and A this way, then it starts out big in the past at negative time, it shrinks, and goes back up. In other words, it's some kind of new kind of cosmology where the universe shrinks, reaches some size, bounces, and comes back out. Can we solve it exactly? Yes, it's not hard to solve exactly. I'm going to tell you what the solution is. You can work it out yourself and check that this is the solution. One over square root of lambda times the hyperbolic cosine of square root of lambda t. That's the exact solution. But let me just remind you what hyperbolic cosine, this figure here is the graph for hyperbolic cosine. Hyperbolic cosine is the same as one over two square root of lambda times e to the square root of lambda t plus e to the minus square root of lambda t. That's what the hyperbolic cosine is. It's a symmetric function of time, the same for negative time and for positive time, symmetric. It at late times, this piece of it is not important. e to the minus square root of lambda t, that goes to zero, but this piece exponentially increases. It exponentially increases on this side and on the opposite side, it exponentially increases into the past. If you wanted to draw this universe, if you wanted to make a picture of this universe, most people would be inclined to draw it the following way. Let's put time upward now. Here's time going upward. At time equals zero, the scale factor is as small as it will ever get. So at time equals zero, the universe is a small sphere. Remember k being positive says a spherical geometry. Let's draw that as a small circle over here. As time goes forward, the scale factor increases and it increases exponentially. So the universe increases, increases like that. But as time goes into the past, this solution is simply reflected and looks like this. So this is a strange kind of universe which exponentially increases in the future. Remember that the flat case also exponentially increased in the future. The flat case had only e to the square root of lambda t. It did not have this term here. So at very, very late times, basically at late times, they both just look exponentially expanding and they look very similar to each other. But the whole geometry from negative infinite past to positive infinite past is usually described as a bounce. Now we don't believe that the lower half of this means anything, but nevertheless, this is the mathematical structure that if we only had vacuum energy, positive vacuum energy, and we had k equals plus one, the universe would be a bounce. Wouldn't just, wouldn't expand, wouldn't contract, or would contract, bounce and expand. 
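A check that the stated solution really does satisfy the bounce equation written above:

\[ a(t) = \frac{1}{\sqrt{\lambda}}\cosh(\sqrt{\lambda}\,t) \;\Rightarrow\; \dot a = \sinh(\sqrt{\lambda}\,t), \]
\[ \dot a^{2} = \cosh^{2}(\sqrt{\lambda}\,t) - 1 = \lambda a^{2} - 1, \quad\text{i.e.}\quad \dot a^{2} - \lambda a^{2} = -1, \]
\[ a_{\min} = a(0) = \frac{1}{\sqrt{\lambda}}, \qquad a(t) \to \frac{e^{\sqrt{\lambda}\,|t|}}{2\sqrt{\lambda}} \ \text{at large } |t|. \]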
This is also called de Sitter space. Now strangely, the flat case and this case are really secretly the same geometry. I'll try to explain that to you when we get to it. They're really not different, but we'll take some time to explore it. They look very different, but they're not. Is there any other case? Yeah, let's do one other case, let me think. Negative cosmological constant. Let's try a case with negative cosmological constant and see what we can learn. All right. Lambda is now negative. Let's describe that by putting a minus sign in front of it over here. In fact, for simplicity, let's be really simple and just set the absolute value of lambda equal to one. Just to be really simple, just too many symbols here, let's call it minus one. A cosmological constant that happens to be minus one in some units or another. Then minus one over a squared, but it's really minus k over a squared, minus k over a squared. Now if k is one in this case, in other words, if space is positively curved, this equation is nonsense. On the left-hand side, a positive thing. On the right-hand side, a negative thing if k is positive. So the answer is there is no solution with a negative cosmological constant for the positively curved case. But for the negatively curved case, there is. Let's see if we can figure out what it looks like. Let's take k to be minus one, in which case this becomes plus one over a squared. And let's do the same job on it that we did before. Here's our equation. Let's convert it to a simple mechanical system. Let's see what mechanical system this corresponds to. With positive cosmological constant, it corresponded to a potential energy which just went off into the basement. Let's see what happens now. Let's multiply by a squared so that we get a dot squared, no a squared in the denominator, equals minus a squared plus one. Got that? Multiply by a squared. Let's transpose the a squared to the other side. It becomes a dot squared plus a squared. If I had kept lambda around, it would multiply this. But let's not. Let's just keep it plus a squared, equals one. Now what kind of system are we talking about? If this is kinetic energy and this is potential energy, the potential energy is plus a squared. What kind of system has a potential energy which increases quadratically with displacement? Harmonic oscillator. This equation here is actually the energy conservation equation for energy equal to plus one for a harmonic oscillator, with in this case a unit spring constant. The energy is the kinetic energy plus the potential energy, and the total energy is equal to plus one. What kind of motion do you get? In particular, what kind of motion do you get if you start at a equals zero? A has to always be positive, so half of this really doesn't mean anything. A is by definition positive. But you start at a equals zero at the big bang, and you shoot the marble up the hill. And what does it do? It comes back down. The universe expands and crashes. This is with k equals minus one. This is with the open universe. This is the opposite of what we might have expected. The open universe with a negative cosmological constant, instead of expanding exponentially, just expands and comes back and collapses. A big crunch. Even though it's open and infinite, it still comes back and crunches. So be thankful that we don't live in a universe with too large a negative cosmological constant. In fact, be thankful that we don't live in a universe with too large a positive cosmological constant.
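The same kind of check for the negative-lambda, open case; restoring the absolute value of lambda (set to one in the lecture) is the only thing added here:

\[ \dot a^{2} + |\lambda|\,a^{2} = 1 \;\Rightarrow\; a(t) = \frac{1}{\sqrt{|\lambda|}}\sin(\sqrt{|\lambda|}\,t), \]
\[ \dot a^{2} = \cos^{2}(\sqrt{|\lambda|}\,t) = 1 - |\lambda|\,a^{2}, \]
\[ a = 0 \ \text{at} \ t = 0 \ (\text{bang}) \ \text{and again at} \ t = \pi / \sqrt{|\lambda|} \ (\text{crunch}). \]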
Either case would be deadly. In one case, the flow would be so large outward that it would just grab everything and you know, you hold on to your wife's hand or your husband's hand, you wouldn't be able to overcome the outward flow if lambda was too big and positive. If it was negative and too big, you would just have a crunch. Okay, that's the theory of vacuum energy in a nutshell. There are more cases. If you're free to look through them, some of them make sense, some of them don't make sense, but they all have their own characteristic behavior and they're interesting. And you can always analyze them by translating them into some sort of mechanical system and thinking about the conservation of energy for that system. Good. Okay. These are Einstein's equations. They are Einstein's equations. These Einstein's equations applied to, yeah. I'm not sure what that would mean. A sphere with complex radius. I mean, not that I know of. I don't know of any application of these ideas to complex A. Right. Is there any reason the proof that pressure is a linear function of the density other than the easiest equation to work with? No. More or less accidentally, all of the interesting cases are like that. But keep in mind, if you add two different fluids together, it won't be true anymore. And we do add two different fluids. And we just say one range behaves this way, other range behaves this way. But no, a general fluid will not have that property. I'll tell you what it means. It means that the speed of sound in the material is constant and doesn't depend on density. But why that should be true? It's certainly not true for all possible things. Okay. For more, please visit us at stanford.edu.
(February 11, 2013) After reviewing the cosmological equations of state, Leonard Susskind introduces the concept of vacuum energy. Vacuum energy is represented by the cosmological constant, and is also known as dark energy.
10.5446/15060 (DOI)
Stanford University. Okay. Let's come to some observational facts or connections between observational facts and the equations. It's impossible to really give any kind of justice to the observational facts without putting them into a context of the fundamental cosmological equations. So let's begin with them. We've already begun with them, but let's begin with them again and define some observational quantities, quantities which astronomers can and do observe. Alright. The basic equation, as always, at least for a homogeneous universe is the Friedman equation, A dot over A squared. And that we can also give another name. We can call it the Hubble constant squared or the Hubble. Somebody suggested that we call it the Hubble function since it depends on time, and that's fine. You can call it the Hubble function. And I suppose you could call its value today at the present time. I suppose you could call it the Hubble constant, keeping in mind that it's not a constant. But people quote, when they quote, the Hubble constant is so many kilometers per second per megaparsec or whatever it is, they're quoting the value of the Hubble function today. I'll just call it, if I call it the Hubble constant, then just forgive me, I simply mean A dot over A. This is the Hubble constant squared. And as we know, that is equal to, get all 8 pi g over 3 times the energy density minus a term for, and the usual energy density, minus a term coming from curvature. And that curvature term has a k in front of it. k is either plus or minus one or zero, never anything but plus or minus one or zero, positive for a spherical geometry, negative for the hyperbolic geometry, and zero for the flat geometry, divided by a squared. The role of a is to go into the metric. Over here I'll write the metric. Well, I think we need a little more room for it. We'll write the metric. The connection between A and the metric is that the metric ds squared is equal to minus dt squared plus the radius at time t squared. And now I'm going to use, oh, sorry. What do we have here? The r squared and then what goes in addition? An angular piece. And the angular piece is one of three possibilities, depending on k. The first one will be flat space. Let's take the case of flat space. That's k equals zero. Then this would be r squared d omega 2 squared, the two sphere of the sky. What about k equals plus one? What do we put there? Anybody remember? Sine of r squared. So this could be sine of r squared, or if it's negatively curved, hyperbolic sine of squared. All right, so let's give it, instead of writing three separate equations, let's just call it c squared of r. Where c squared, where c has the following meaning. c is equal to sine of r when k is equal to plus one. c is equal to hyperbolic sine of r when k is equal to minus one. And c is equal to r when k is equal to zero. That makes one simple formula, and we just have to remember which one we're using. Three different spatial geometries, and an equation of motion for a. That's our basic, okay. All right, now the energy density can come from various sources. The natural only sources that we really know in the current universe are of three kinds. There are three contributions to rho, and rho appears, as far as we can tell, to have only three main important contributions. The first is radiation. Radiation, energy density is some constant representing the amount, the number of photons per unit volume, or the number of units of energy per unit volume today, divided by a to the fourth. 
Then there's another contribution associated with ordinary particles, non-relativistic particles. We usually call that matter, and that's divided by a cubed. Then there's the possibility of vacuum energy. Let's actually put here 8 pi g over 3, and continue to use the right-hand side the way I've written it. Then the vacuum energy would be times lambda, the cosmological constant. That does not dilute with increasing radius. That's it. But let's add in to rho, let's add in minus k over a squared, minus k over a squared, plus this, in other words, let's include this over here as part of rho, and then we will put in here minus k over a squared. I'm not sure why I did that. Why did I? Well, yeah, it's okay, we can do it. Yeah. Okay. That's this, and what is this equal to? It's equal to h squared. It's equal to h squared. Now think about this equation today. At the present time, this says that a certain set of four quantities has to add up to h squared. That's a constraint, that a certain set of four quantities has to add up to h squared. And we can ask how much today, present, in the present universe, how much of adding up to the right-hand side comes from radiation, how much comes from matter, how much comes from dark energy, lambda, and how much is left over in this curvature term? That's a reasonable question. The scale factor today is some number. We're going to talk about in a little while how big it is, but it's some number. And each one of these terms has some value, and h has some value. How do we measure these terms? Let's think about how we measure them for the moment, or how we might measure them if we could. H is the Hubble constant. Hubble measured the Hubble constant. What he did was he measured the relation between velocities as determined from red shifts. I'll go back to red shifts in a little while. He determined velocities as a function of distance. How do we get distance? You have velocities from red shift. How do you get distance? That's the experiment. Luminosity, brightness. A distant bulb looks dimmer than a closed bulb. So it was basically red shift versus luminosity that he plotted. And he measured the Hubble constant. The fact that he mismeasured it by a factor of 10, that was not the important thing. And that's true. He did mismeasure it by a factor of 10. But in principle, with better measurements, and these were not cosmological measurements incidentally. By cosmological, I mean measurements taking place on a scale of tens of billions of light years. These were measurements of our local properties of a few clusters of galaxies. And so they did not penetrate back deep into the past. For his purposes, since they weren't penetrating back and deep into the past, the Hubble constant was a constant. He wasn't looking at it at different times. He was looking at it now. Well, slightly not quite now, but not too bad. So, yeah. This is a strong question. I guess, but how do you know these things all had the same brightness? It seems like there's a kind of catch-tree too or something there. Yeah. There's the whole story of a cosmic distance ladder. And it's a story of many different little pieces overlapping. How do you measure the distance to the nearest stars? You do it by parallax. And then you find certain stars that always behave in a particular kind of way called cepheid variables. They behave in a certain, very definite way. And they have a very, very definite relation between their period and their luminosity. 
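As a compact summary of what Hubble's measurement amounts to, the relation being described can be written as below. The small-redshift approximation v ≈ cz and the inverse-square flux formula are standard shorthands added here for concreteness, not formulas written on the board.

```latex
v \;\approx\; c\,z \quad (z \ll 1), \qquad v \;=\; H_0\, d,
\qquad d \;\approx\; \sqrt{\frac{L}{4\pi F}}
```

Here L is the intrinsic luminosity of the standard candle and F is the measured brightness, which is how the distance side of the plot is obtained.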
And that you determine from stars which are close enough that you can get the distance to them from parallax. So, were those in the Milky Way? Those were in the Milky Way. For sure they were in the Milky Way. And that gave you a candle, a standard candle to work with, the Cepheid variables. You could then apply it to more distant things. And you worked your way up a ladder. And then you find the supernova, the best standard candles. And we're going to talk about that. We're going to talk about that. And of course velocity, he measured by redshift. Velocity by redshift and distance by a sequence of overlapping standards that were around. So, just to double check, are C sub R and C sub M absolute constants, independent of time? Yeah, they're constants independent of time. Radiation scales like one over a to the fourth. Matter scales or dilutes like one over a cubed. Vacuum energy doesn't dilute. And the curvature term, which you can think of as a kind of energy, if you like, an energy of curvature, scales like one over a squared. Okay, so Hubble measured H. We do a better job of measuring H. But basically, you measure H by measuring the relationship between distance and velocity or redshift and luminosity. But you want to do it, you don't need to go to very huge, huge distances to do so. And therefore, you're not looking back deep, deep, deep into the past. And you don't have to worry too much about H varying. So, Hubble measured this. Now, what about C radiation? Well, we can actually look at the amount of radiation in the universe. Most of it is in the form of the cosmic microwave background, which we will come to. But it's very feeble. There isn't much energy there in the form of photons. It's a tiny, tiny fraction of the other kinds of energies. And that we know just by measuring the photons that exist around us, the black body photons, the ordinary photons. And again, there's nothing terribly cosmological about the measurement. You don't have to go very deep into the distant past, nor do you have to go very far away. You have to have a big enough volume to average over, but we pretty much know that radiation is today inessential. Not very important. Okay, so that's not there. Matter? Well, what do we know about matter? We look at the galaxies. We see how much hydrogen is in them. We see how much luminosity is coming out of them. We make studies of stars. We know a good deal about average stars. And from the average properties of stars and interstellar gas and things which are technical but not conceptually terribly difficult, we measure all the matter that we can see, the luminous matter, the matter that can be seen. We add it all up. And we have a pretty good idea as the years go on, we measure better and better and better how much luminous matter is out there. The luminous matter is presumably all in the form of atoms and protons and nuclei and electrons. And we get an estimate of this number here. That number today, the density of energy in the universe today on the average, is about one proton per cubic meter. So it's very low, but much higher than the radiation energy density. So one proton per cubic meter is this term. For the moment, let's forget about vacuum energy. Well, we'll come back to it. And let's ignore it for a moment. Yes, yes, yes, yes. We're going to come to that. I'm being slightly historical here. I want to do this with a bit of history in mind. This is as it was when I was a young physicist, let's say.
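To put "one proton per cubic meter" in context, here is a small back-of-the-envelope sketch. The value of H0 (about 70 km/s/Mpc) and the physical constants are assumed modern numbers, not quoted in the lecture, so treat the output as order-of-magnitude only.

```python
import math

# Assumed constants in SI units (not taken from the lecture).
G        = 6.674e-11     # m^3 kg^-1 s^-2
m_proton = 1.673e-27     # kg
Mpc      = 3.086e22      # m
H0       = 70e3 / Mpc    # an assumed ~70 km/s/Mpc, converted to 1/s

# The density that would make H^2 = (8 pi G / 3) rho with no curvature term at all.
rho_needed = 3 * H0**2 / (8 * math.pi * G)      # kg / m^3

print("density needed to account for H^2 :", rho_needed, "kg/m^3")
print("  ... in proton masses per m^3    :", rho_needed / m_proton)
print("one proton per m^3 as a fraction  :", m_proton / rho_needed)
```

With these round inputs the answer comes out to roughly five or six proton masses per cubic meter, so the measured matter term falls well short of H squared; the smaller fractions quoted just below, a twentieth or a thirtieth, refer to the luminous component alone.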
Sea radiation, negligible, sea matter, substantial, a big deal, one proton per cubic meter. Nobody believed there was a lambda, so we forgot lambda, and then H had been measured. Incidentally, the relative ratios of these, the relative ratios of these today, the ratios depend on time because they scale differently, but the ratios today are called omega. So this is not the values of these, but the ratios. The ratios are omega r, omega m, omega lambda, plus omega curvature. Omega curvature may be negative, and the rule is these must add up to one for the simple reason that their definition is just the magnitude of them today divided by H squared today. In other words, the omegas are the fraction of H squared today contained in each one of these terms. So as I said, as a young physicist, everybody thought that omega r was equal to essentially zero, which it is. Omega matter was what you measure when you look out and look at the light coming in. The stars, the interstellar matter, and so forth, roughly one proton per cubic meter. Omega lambda, nobody ever heard of it, didn't believe it existed. Einstein had told us it didn't exist, and so we then had a situation where we could compute and know what omega k was. Or in other words, going back to this equation over here, this term was negligible. This term was believed not to be negligible. This was the unknown, and H had been measured. H had been measured at about a factor of roughly 20 or maybe 30 times this over here. H had been measured pretty well, and it was too big to correspond to Cm over A cubed. In other words, in other words, of this one here, this had not been discovered. This was negligible. The omega matter was something like one over 20, maybe one over 30. I'm not sure exactly how big, roughly one over 30. But it was negligible, and the only conclusion was that omega k was almost equal to one. Almost equal to one meant that k over A squared or minus k over A squared had to be close to H squared. So let's write that down. K, well, we can say what the sign of k is immediately, right? The sign of k has to be negative. The first conclusion, the sign of k is negative. Open infinite universe, it looked like. k was negative, and one over A squared equal to Hubble's measurement of H squared. Now, what and why? Because this just appeared to be too small, too small to make up H squared. The only thing around that was big enough was this, and that in turn told us, or we thought it told us, how big A is in terms of H. Now, H has another meaning at least within the context of matter dominated universe. Let's go back to the matter dominated universe, which after all, this is what we have here, and write down the equation for A. If you remember, A in the matter dominated universe goes like t to the two-thirds. For most of the history of the universe, in fact, this term is not as important as the matter term. One over A squared for small A, when the universe was small, was smaller than this term here. Eight times, it's true, this one got bigger, but in early times for most of the history of the universe, even though the matter was pretty small, it was still true that this term for most of time competed favorably with this term. So this was pretty good. This was a pretty good approximation. It fails at late time, but it's pretty good at early time. And so let's calculate A dot over A, A dot over A. What's A dot? A dot is the time derivative of t to the two-thirds, which is two-thirds, t to the minus one-third, and then we divide it by A. 
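Written out, the definition of the omegas being given here is the following (a transcription, with a_0 denoting the scale factor today):

```latex
\Omega_r = \frac{8\pi G}{3H_0^2}\frac{c_r}{a_0^4},\qquad
\Omega_m = \frac{8\pi G}{3H_0^2}\frac{c_m}{a_0^3},\qquad
\Omega_\Lambda = \frac{8\pi G}{3H_0^2}\,\lambda,\qquad
\Omega_k = -\frac{k}{a_0^2 H_0^2},

\Omega_r + \Omega_m + \Omega_\Lambda + \Omega_k \;=\; 1 .
```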
So we divide it by another t to the minus two-thirds. Altogether, A dot over A is just two over three t. So the Hubble constant, in fact, we know how the Hubble constant varies with time in this model, in the model of a matter-dominated universe. The Hubble constant, we can write this now, the Hubble constant, another way to write the Hubble constant, is it's just two divided by three t, and of course it's not a constant. But we can ask, what is it today? This equation was being applied today, so what is it today? The Hubble constant today is just equal, let's call it h today, is just equal to two-thirds the time today. What does the time today mean? It means the age of the universe. Let's call it capital T. Capital T stands for the age of the universe. In other words, how long it was since the universe was very small. We don't know exactly how small, but how long since the scale factor was many orders of magnitude smaller than it is today. That's what h is, it's essentially one over the age of the universe, or to be more exact it would be two-thirds over the age of the universe. Time, seconds. Sure. No, but it's just two-thirds, what over t, I mean it must be the speed of light. No, the units of the Hubble constant are one over time. But I mean time can be years, hours, seconds, that's what I'm asking. Which time units? Which units would you like to use? Which units would you like to use? Seconds? Years. Years. Answers about one over ten to the tenth. Doesn't it just depend on what age it is? I mean you can define age in whatever units you want. That's right. Whatever units you define the age in, those are the units of age. You can define it in terms of, hmm? At this real, I think that's the final. Yeah, look, what is the Hubble constant? It's a relation between velocity and distance. What is the connection between velocity and distance? Time. So the Hubble constant, even though people, if you go and you look in the Wikipedia or something for the Hubble constant, they will quote it as something like 80 kilometers per kiloparsec, something, something, something. I don't remember. But that's an astronomers unit. The astronomers use the unit because it's good for telescopes. But from our theoretical point of view, the Hubble constant is just one over the age of the universe, at least for the matter dominated. Incidentally, let's suppose it was radiation dominated. If there was no matter only radiation, this would be t to the one half, and h today would be one over 2t. So whatever kind of nice equation of state you have, for most equations of state, the Hubble constant today is related to the inverse age of the universe. That's good. Okay, so we know something else from the Hubble constant. Having measured it, Hubble knew, as I said, he was wrong by a factor of 10, the Hubble constant too large, if I remember, and therefore the universe was too young by a factor of 10, I think he got roughly a billion years, and he answers more like 10 billion years. But that's just from measuring Hubble constant. Okay, so, right. I believe so. It seems odd, doesn't it? As far as I know, yes. And as far as I know, he got it from the measurements. As far as I know. I don't know that he had any deep insight into it. All right, so now, looking at this equation here, and saying that this is negligible compared to this, 1 over a squared, then is h squared, and h squared is 1 over the age of the universe squared, it's telling us something about the size of the universe. 
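The little calculation just done, collected in one place, with the radiation-dominated case stated in words above added at the end:

```latex
a(t) \propto t^{2/3}
\;\Longrightarrow\;
H = \frac{\dot a}{a} = \frac{\tfrac{2}{3}\,t^{-1/3}}{t^{2/3}} = \frac{2}{3t},
\qquad H_0 = \frac{2}{3T} \quad (T = \text{age of the universe});

a(t) \propto t^{1/2} \;\text{(radiation dominated)}
\;\Longrightarrow\; H = \frac{1}{2t}.
```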
It's telling us the size of the universe is basically the age of the universe. There's a speed of light in here. A is measured in length, T is measured in time, and so what this is telling you is that the size of the universe is essentially, or the, yeah, the size of the universe is about equal to the distance light will travel in the age of the universe. No, no, no, this is not true. None of this is true. This is what was expected to be true on the basis of cosmology 40 years ago. These equations were dropping the various terms that were thought to be negligible. And as I said, one ingredient in this, several ingredients, first of all, radiation negligible, this is true. Matter being substantial, but still small, only one proton per cubic meter, and only about a twentieth of this side here. And lambda being as far as anybody knew was zero, was thought to be zero. So that's what went into that. And that's why for all these years people thought that the universe, that the radius of curvature of the universe was about equal to its age, something that we no longer think is true. Okay, the first change had to do with dark matter. So let's just talk about dark matter a little bit, what it has to do with. I think probably most of you know the dark matter story, but let's spell it out anyway. Ordinary matter made out of electrically charged particles radiates. It radiates when it's accelerated. It radiates when you collide with it. And it's called luminous because it radiates. It's a kind of matter that we know about from direct telescopic observation, and it's what went into here. On the other hand, there's another kind of definition of matter, which is how much it gravitates. What's the mass of matter which gravitates? And of course, at first it was thought that the galaxies were made of luminous matter, and the mass of a galaxy was just the mass of its luminous matter. It goes way back, I didn't know how far back it went. It went back to 1932, I didn't know this. I thought it was a little bit later. 1932, the Dutch astronomer Oort, or Oort, I'm not sure how you say his name, noticed that there were some things wrong with the way galaxies were behaving. A year later, Fritz Zwicky, who I didn't know about, noticed that clusters of galaxies were misbehaving. And by misbehaving, it meant that the gravitational behavior of them was inappropriate to Newton's laws, also Einstein's laws, inappropriate to Newton's laws, and suggested that there simply was more matter out there than had been accounted for by this. So I'm going to take you through that. Most of you know the story of dark matter. Yeah. When you say luminous matter, do you include things like diffuse gas that might not be radiating? Yeah. But it could, if it were under the right circumstances. Yeah, yeah, and how do you separate out diffuse gas? Almost everything radiates a little bit. Almost everything. Consider all the galaxies. What's that? The supermassive black holes, do they amount to very much? Would we count those as luminous? Yeah, we count them as luminous matter. Okay. Now, and of course, they're not luminous matter, they're black holes. But the point is it doesn't matter whether we count them as luminous matter or not, because they're a small fraction. The galaxy at the center of the black hole, no, take that back. The black hole at the center of the galaxy is roughly about, anyway, it's between a million and a billion solar masses. The galaxy is 10 billion, no, 100 billion solar masses. And so the black hole content is small.
It's not a major component, and besides, it certainly could not be the dark matter, because it's concentrated at the center. Whatever the dark matter is, and I presume it was Oort who first understood this, it is not concentrated at the center of galaxies. So let's just go through the argument. The argument has to do with the so-called rotation curves of galaxies. You have a galaxy, and you look at stars at different radii, and you look at their tangential velocity v. Let's begin by assuming something which would follow if all of the matter was luminous. The fact is that most of the luminous matter in the galaxy is in the center of the galaxy. By far, the overwhelming majority of the material is at the center of the galaxy, at the galactic center. And so for gravitational purposes, the things in the outer parts of the galaxy are moving under the influence of a central force, where the central force is just a mass near the center. Now, this turns out to be wrong, but let's follow the logic of it for a minute. Let's use Newton's equations, f equals ma. The gravitational acceleration due to a mass at the center would be M times G divided by the radius squared out to the star, and that has to equal the acceleration. And the acceleration is the velocity squared divided by the radius. Acceleration is v squared over r. So this would then say that the velocity squared falls off like one over r. This is somebody's law. Do you know whose law this is? This is Kepler's law. This is Kepler's planetary law. It's usually expressed in terms of the period rather than the velocity, but this is essentially Kepler's planetary law. One of the laws, which one is it? The second law? Whatever it is. Right. Telling you how the velocity of planets varies as a function of the radius from the sun. Same law here. And in particular, it says that the velocity falls off like one over the square root of r. Velocity falls like one over the square root of r as you move away. That's not what was seen by Oort. It's not what was seen by anybody since. What was seen is that the velocity is pretty much constant. Not the angular velocity, incidentally. It's not like this thing is rotating like a big giant pinwheel. Not the angular velocity, but the linear velocity. The linear velocity appears to be constant. All right. So how do you account for that? Well, the way Newton would have accounted for it, I think, is he would have said, look, there's just more mass out there. So instead of saying all the mass is at the center, let's say the mass is distributed. And let's say, let's introduce a function m of r. m of r is the amount of mass contained within a sphere out to radius r. A planet, not a planet, a star at distance r moves under the influence not of a constant mass at the center, but a varying mass which varies like m of r. And in that case, let's put back the equation. m G over r squared equals v squared over r. Let's multiply it by r. And now say that v is constant. If v is constant, oh, we put here now m of r, not just m, but m of r. v is constant, doesn't change with distance, and therefore we find that m of r must grow like r. Just multiply by r, think of v as a constant. m of r increases as you move away from the center. And so as you move out, this is not the mass density, incidentally. The total mass within a radius r apparently increases linearly with r out to the outer boundaries of a galaxy, and even beyond that. We see things, you know, at the very, very far edges of galaxies, we do see a stray star here and there. Yeah.
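The rotation-curve argument just given, in symbols, summarizing the two cases described above:

```latex
\text{All mass at the center:}\qquad
\frac{G M}{r^2} = \frac{v^2}{r}
\;\Rightarrow\;
v(r) = \sqrt{\frac{GM}{r}} \;\propto\; \frac{1}{\sqrt{r}} \quad\text{(Kepler fall-off)};

\text{Observed flat curve, } v \approx \text{const:}\qquad
\frac{G\,M(r)}{r^2} = \frac{v^2}{r}
\;\Rightarrow\;
M(r) = \frac{v^2\, r}{G} \;\propto\; r .
```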
So the luminous matter, you know, is much more dense in the center. That's right. But the non-luminous matter seems to sort of magically adjust for that so that the density, not the density, but the mass is a linear function of the radius. Yeah. We know, is there any theory as to why that would be? Because it seems very neat. If there is a theory, I don't know it. But my guess is that by now people who simulate galaxy formation could probably tell you that this is not an unnatural thing. I definitely do not know a simple explanation for it. So at the moment it looks like a piece of luck that the velocity curve is just flat. And I don't know a good substantial simple argument about why that's the case. So I'll tell you what, I'll ask some of my colleagues if anybody knows. I know some people who work on galaxy formation. But my guess is by now it's a combination of observational fact and simulation fact. Okay. So that's the status of galaxies. And the second thing, incidentally, is we know that from the motion, even from the motion of stars perpendicular to the plane of the galaxies, that it looks as if this distribution of mass is spherical in character rather than conforming to the odd spiral or flattened shape of galaxies. And I'll tell you for sure I do not understand why galaxies have spiral arms or why they look like flattened disks. I looked it up more than once. I've looked it up in various places and you get some very complicated explanations. It's apparently not simple. And it's certainly not that the galaxy is going around like a pinwheel. It is not doing that at all. So what do we know about this dark matter? Just from the fact that it seems to be more or less spherically distributed. And it seems to, that the M of R grows as R out to distances that are almost midway between neighboring galaxies. One can estimate by now I think rather accurately how much dark matter there is. Dark matter is a non-luminous variety and it is roughly a factor of somewhere between five and ten. And when I say it's a factor of five and ten, I mean to say that somebody knows whether it's a five or a ten, but I don't. I always thought it was ten. I looked it up today and it was more like five. So let's say seven times, but it's whatever it is, it's of order of magnitude ten times larger than the luminous mass. Now why doesn't it collapse and it's bigger? It's bigger in size. If the galaxy looks like this, you know, it's a little pinwheel about that big, the halo around it is quite a bit bigger. Why didn't it also fall together the same way that the luminous mass did? Well, the luminous mass fell together, meaning to say it collapsed in this form, by losing energy. It lost energy to radiation among other things. It lost energy to collisions. What kind of collisions? Collisions having to do with electromagnetic forces and things. And in the process of colliding, losing energy, radiating, it sort of fell into the center to some extent. So what we need to expect then about the dark matter is that whatever it is, it's much more weakly interacting. In particular, it's not electrically charged. Now of course, if it were electrically charged, we would see it. It would radiate. So it's not electrically charged. But the fact that it's not electrically charged and that it is very likely rather weakly interacting, maybe like the Trinos, maybe a little stronger than that, means that these particles that make up this halo here, they circulate around. They may go back and forth this way. 
Some of them are going this way. Some of them are in elliptical orbits, but they don't interact with each other very much. And because they don't interact with each other very much over the period of 10 billion years, this halo matter has not collapsed and followed the luminous matter. So the luminous matter, there's no question that to some degree, they do follow each other. There are no, as far as I know, no large lumps of dark matter that don't have some kind of luminous matter at the center. And I don't believe there are any galaxies out there that are visible that don't have dark matter around them. But they don't follow each other in the detail that the dark matter has a shape which looks like a galaxy. It just looks like a great big sphere. A halo. What about Zwicky looking at galactic clusters? I think he was looking at clusters. Yeah. Yeah. That's right. He was looking at the motion of galaxies in clusters. That's correct. Oort was the one who looked at galaxies. And I understand he was first, but the scale was really good. So that's right. So there is evidence for this kind of thing on multiple scales. It's not all coming from galaxies. It's coming from galaxies. It's coming from interaction between galaxies. And it's coming from whole galactic clusters. So what would you say this dark matter is, will it be interacting? It's made out of particles. Which everybody believes, it's made out of particles. Does it interact? Are the gravitational laws the same for dark matter? We sure think so. So in that sense, like two pieces of dark matter are going to be attracted by the same forces. So what do you mean by the fact that they're weakly interacting? No electrical forces? Yeah. No electrical forces. The emission of gravitational radiation by stars and other things moving in galactic orbits and so forth is completely negligible in the energy balance. So radiation of gravitational waves is not an efficient way for things to lose their energy. Wouldn't it be better to congregate around black holes because they do interact with gravitational radiation? And it does. And it does. It does. But it's very likely that the dark matter, this was controversial for a while. I don't think it's controversial anymore. That the galaxies formed by first the dark matter collecting in these great big halos and then the baryons, the protons, neutrons, electrons and so forth falling into the dark matter halos. The dark matter, the big things were there first and then the smaller galaxies formed. They formed stars and probably the, I don't know how early the black holes formed and I don't know if anybody does, but the black holes may have been consequences of simply a lot of stars falling together. So it wasn't that the dark matter formed around the black holes. The black holes are a negligible amount of the mass at the center of galaxies. It's that the galaxies formed as a consequence of the halos of the dark matter. That's probably the way that it worked. But would the density of dark matter be higher near a black hole because of gravitational radiation? Yeah, it would be somewhat. Yes, it would certainly be somewhat. Would the radiation have been smaller? I mean, we think that the density, for example, we think that the density of dark matter near the center of the galaxy is significantly higher than the density of dark matter away from the galactic center.
And therefore, people who are looking for, for example, radiation coming from the annihilation of dark matter to dark matter particles colliding, annihilating, forming radiation, where would you look? You would look toward the center of the galaxy. For sure. So yes, it's more dense near the center of the galaxy. Is there any way that we can know at what level dark matter is aggregated? Is it gas or is it dust or quads? It's probably just lonesome particles. We don't know. We don't know. We don't know. But it's probably just lonesome particles. What does dust mean? Dust means assemblages of large numbers of particles that hold themselves together by what? By electrostatic forces, by all the usual things that hold things together. If those kind of forces existed between these particles, they would not have survived as these kind of dark matter particles. So the expectation is that they are simply lonely particles that are lonely. What's that? If the sun is moving through space through these particles, wouldn't they tend to be trapped in the solar well? Yes, there's probably some abundance of them that is some more trapped than the sun. If I could buy these two, it says velocity is independent of r. Is that correct? It's just a constant. Yeah, velocity is independent of r. So that says that the angular frequency goes like 1 over r. So doesn't that explain that kind of spiral? Go look up the spiral arms of galaxies and try to see if you can decipher the explanation. Some people think that they're shock waves of star formation, shock waves propagating around in circles and not moving with the galaxy incident. Not moving with the dark matter. There's other pictures, none of which I understand. Why would you think it's one of the standard particles, like a neutrino? Neutrinos are very light. And because they would be so light, they would have to be pretty relativistic to have that much mass. And relativistic particles, okay, so one fact about these particles, which is most important, is that they cluster. That they tend to cluster, the gravitation makes them cluster. Things cluster less when they have high velocity. If these particles had high velocity, they would tend to cluster less. It's kind of obvious that if you have a gas of stuff and it has gravitational attractions, if it's hot, it will tend to resist the clustering tendency due to attractive forces. If it's cold, it will tend to cluster. Neutrinos being so light would tend to be pretty relativistic. They would be what is called hot dark matter. Hot dark matter is no longer an option. It's thought that the dark matter, basically because of this tendency to cluster, is cold. Cold means to say that the particles are moving relative to the local reference frame with much less than the speed of light. So, neutrinos are not a good option for dark matter. One thing I'm confused about in your explanation. You were saying, well, the matter in that equation up there with the four pieces, the curvature piece dominates over the matter part. That's what they thought. At the same time, you're saying in a matter dominated world, A goes like T to the two-thirds. Up until the time that this gets bigger than that. You'd have to correct. The equation that H is like 1 over T, that would get somewhat corrected, but not a lot because most of it was happening earlier before this crossover point. Again, at the 30 or 40 years ago, what they were doing. And the point is you can work it out. You can work out the detail. 
Thirty or forty years ago, what was considered to dominate at the time which is now? Is it matter dominating over the curvature, or is it curvature dominating over the matter, now? Wait, now? Today? Today. Today according to yesterday's theory or today according to today's theory? The way they were thinking about it 30 or 40 years ago. Okay, so there was massive confusion. There was a lot of confusion about it. I think if you would have put things together without worrying about dark matter, I think you would have had to say that this term was big, that it was of the sign that K was negative, and that A was roughly the same as the age of the universe. And people resisted it to some extent. The only reason I, you know, I don't know what people were thinking. Each person was thinking something different. But I think one of the prejudices was that the universe was closed and bounded. So they would tend to write equations that were appropriate for K positive, and that conflicted with the observations. So there was a lot of confusion about it. Later on, 20 years ago, I'm not sure exactly when, it sort of settled down that K was probably negative. Now, what happened, let's come now. What did dark matter do to this equation? Well, if we just ignore this, the right side and the left side, the matter dominated term over here and the Hubble constant, differed by a factor of 25 or something like that. So it was pretty serious, a big gap between these two. At best, the dark matter is five or six or seven times the luminous matter, and it did not fix this. It fixed it somewhat, but it didn't fix it, it didn't make the matter term equal to the Hubble constant term. What it did was it still not only left room for a K over A squared with negative curvature, it required it. It required this term to still be there. So it didn't, well, when I say it didn't fix things, I'm not sure what fix means. There was nothing particularly wrong with saying that H squared largely came from here, but the matter term was not sufficient to fill the gap. It still had this present. Are there any sort of restrictions on the mass of the particles of dark matter? No, certainly none. If it's light, would you have a problem with that? Yeah. It depends, all right, there's... If the particles are light and fermions, you're in trouble. They can't all occupy the same state and be moving together slowly relative to each other. If they're bosons, they can be extremely light and be in a phase called a Bose condensate, and the Bose condensate just has very, very slow particles all in the same quantum state, moving like a Bose-Einstein condensate, and that is one of the theories of dark matter. It's called the axion theory, that the axions are incredibly light, very, very light. I don't remember the exact numbers, but extremely light, much, much lighter than neutrinos, and that they form a Bose condensate. That's one theory of dark matter, and it's a good theory. It's not a crazy theory, but mostly people think about theories in which the dark matter are particles that could be detected in big accelerators, particles whose masses are in the range of masses that could be discovered at the LHC. Is that because, is that wishful thinking? Yeah. They can't get enough power yet, right? They don't have enough power yet in the accelerator to try and reach the theoretical maximum.
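The arithmetic behind "it did not fix this," using only the round numbers quoted in the lecture (a rough sketch, not a careful error budget):

```latex
\Omega_{\rm luminous} \;\sim\; \tfrac{1}{30}\text{--}\tfrac{1}{20},
\qquad
\Omega_m \;\sim\; (5\text{--}10)\times \Omega_{\rm luminous} \;\sim\; 0.2\text{--}0.3 \;\ll\; 1,
```

so even with dark matter included, most of H squared still had to come from somewhere else, and at the time that "somewhere else" was taken to be the curvature term.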
No, there is no theoretical maximum unless you assume that the particles interact with roughly standard model weak interaction cross-sections. Then the maximum is probably up in a few TeV, maybe at most 10 TeV. And yeah, I mean, is there enough energy to have detected them? Apparently not. Well, then in two years there will be more, right? In two years we will have more information, and it is possible the particles will be discovered and they may have the right properties. So would it be fair to say that the dark matter, could we blame string theory for it and say... No. The reason that we can't see it is because it's in a dimension that we can't see from our own. No, no, no, no, no. I mean, they are particles. They can be seen. They can be measured. We measure them gravitationally. I guess you're asking, is it conceivable that these are particles which are so unusual that they have almost no interaction with ordinary material and therefore we don't see them? Yes, it's possible. But that's right. But it is possible that they are just so weakly interacting with ordinary matter that there's no chance of detecting them. It's possible. The neutrinos are so fast because they were formed during the Big Bang. So if this dark matter was formed at the Big Bang, would it have been formed later when the universe was cooler or something? No, no, no, they're just heavier. If they were very, very light, okay. If they're bosons, it's okay. They can Bose condense and they don't have to be moving fast. If they're fermions, they have to be moving fast. But okay, the axion theory is not a theory in which the axions were ever in thermal equilibrium. If they were in thermal equilibrium at one time, then they would be moving fast today. But this is not where I was going with tonight's lecture. I'm not going to get into axions. We know nothing about dark matter with any confidence. The most likely thing is it's some form of elementary particle that hasn't been discovered yet. A good chance that they're heavy enough to have escaped detection in the LHC so far, but light enough to be able to be detected in future experiments. But that's it. That's all we can say at the moment, and that they're there. The other alternative to dark matter was to modify gravitational theory so that Kepler's laws don't work for galaxies. But that doesn't seem to have worked very well. There don't seem to be any nice modifications of standard gravity theory which explain this. Most of us think it's particles, but you're free to make other theories. Has it required any modifications to the Newtonian mechanics by which we understand our own solar system? It has not. So by that we can conclude that there's a negligible amount of dark matter around us? That's right. Yeah, keep in mind, the density of matter in the solar system is a lot higher than it is out in... The dark matter in the solar system does not compete with the gravity of the sun at the center, not by a long shot, so that's right. It doesn't affect the solar system. Okay. All right, so it's natural then to ask, can we make a... How do we test the candidate theory? Here's the candidate theory now. The candidate theory is matter dominated. We forget this. We forget lambda because that's something we hadn't heard about. K here is negative, negatively curved space. How do we test that theory? Okay. How do we test any particular model? A model incidentally consists of two things.
It consists of a specification of K, whether it's zero, plus one, or minus one, and some kind of equation of state for which you could substitute a history, an A of T. And a K is a model that you can try to test. So what do we know and what can astronomers measure and how do they compare it with a particular model? Well, pretty much the same thing. Pretty much the same thing Hubble did, except the sophisticated version of it. You can measure two things to a large extent. You can measure the redshift of things, which tells you something about the velocity, and you can measure distance by luminosity. How do those two things come together to tell you anything about K and A of T? So let's talk about that a little bit. First, let's ignore the fact that H, the Hubble constant, changes with time. And let's just suppose we didn't have to worry about the fact that the universe evolves. Let's just suppose for a minute that it was all at an instant and that it didn't change. What could you do to measure K, whether the universe is open or closed or what? And the way to do that is to measure the number of galaxies out there of a given brightness. What does the brightness have to do with K? So let's talk about that. The number of galaxies as a function of distance, really, that's what you want to do. You want to measure the number of galaxies as a function of distance. If you happen to think that distance is related to redshift, which of course it is, then you could be talking about the number of galaxies as a function of redshift. But let's just talk about the number of galaxies as a function of distance, where the distance has been measured in some way that we need to come back to. So if we're living in a flat space, then it's clear and space is homogeneous, and everything is homogeneous, then it's pretty clear that the number of galaxies in a region, dr, let's say within a shell surrounding us, the distance dr, goes like r squared. That's just the volume of a r squared dr. It's also true that if we lived in negative curved space, it would go like hyperbolic sine squared of r, and that means something which grows exponentially with r, e to the 2r dr. If we were on a sphere, if we were living, not this sphere, but if space was a sphere, this would become sine of r squared dr. But let's forget that. The evidence seems to say negative curvature. This is the case for negative curvature. And so the first thing you can see is the number of galaxies as a function of distance would grow much, much more rapidly as a function of distance. What about as a function of luminosity? What can we say as a function of luminosity? Another question you could ask is how does luminosity depend on distance? Well, how does luminosity depend on distance in flat space? What's that? It goes like 1 over r squared. It's just the same fact here. If you thought of a light bulb at the center and us out here, then the luminosity of the light bulb, the amount of radiation coming out, would simply be spread over the area of the sphere here, and the area grows like r squared. So the luminosity itself goes like r squared. But supposing you were living in this negatively curved space where the area of the sphere grows like e to the 2r, then the luminosity would be much, much smaller. So what you would find would be a very rapid fall off of luminosity with distance. You would find many more galaxies out there at a given distance, but you would find the luminosity as a function of distance would fall off much faster. 
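In symbols, the two static-geometry diagnostics just described, ignoring the evolution of the universe for the moment exactly as the lecture does at this step:

```latex
dN \;\propto\; C(r)^2\, dr
\qquad\text{(galaxies in a shell of thickness } dr\text{)},

\text{apparent brightness} \;\propto\; \frac{L}{4\pi\, C(r)^2},
\qquad
C(r) = r \;\;(k=0), \qquad C(r)=\sinh r \;\;(k=-1),
```

so for k = −1 the counts grow roughly like e to the 2r at large r, while the apparent brightness falls off like e to the minus 2r.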
So in particular, you could ask, what about luminosity as a function of redshift? And you would find a characteristic behavior of the luminosity as a function of the redshift, which would say the luminosity really gets small fast because of this e to the 2r here. So that would be, that's pretty much the way of determining whether k is plus 1, minus 1, or 0. That in particular, if k is minus 1, meaning to say negative curvature, you should find that the number of galaxies at a given distance, or let's say at a given redshift, anomalously grows rapidly with redshift. As the redshift gets deeper and deeper into the red, number of galaxies should grow much faster than it would in flat space. So that's a test. You can also think about calculating the number of galaxies that you see as a function of redshift. How many galaxies do you see as a function of redshift? Forget the dimness and the brightness for a minute now. Just how many galaxies do you see at each redshift? That's something you can also calculate. Let's talk about it a little bit, how you do that. Well, in order to talk about it, we have to talk about redshift. What is redshift? If light is emitted by a source that has a wavelength lambda, we can call that the wavelength of the light when it's emitted. It's just the usual. It's a light bulb that has a certain wavelength. When it's detected, the light may or may not have the same wavelength. There could be various reasons for a shift of the wavelength. Doppler shift is one. Gravitational redshift from gravity due to a black hole is another one. And the expansion of the universe, which happens to be equivalent to the Doppler shift, is not a third. It's equivalent to the Doppler shift. But the expansion of the universe is the one that we're interested in. So here is what is going on. Let's draw a map of the universe. This is time, horizontal is space, and here's today. We're right at the center. This is r equals zero. And different r correspond to different vertical lines here. This is r equals one, r equals two, r equals three, and so forth. We look back. And we look back along light rays. So how do light rays move? Well, in flat space, in the Minkowski metric, they move on 45-degree lines. But this is general relativity. There's a complicated metric here. We'll discuss a little later how light rays move. But however they move, they move somehow. Either straight lines or curved trajectories will somehow light rays. We'll figure that out later. But we look back. And we see a galaxy over here. We see a galaxy over here. We see a galaxy over here. And we see a galaxy over here. We see them at different times. And because we see them at different times, we see them, first of all, with different scale factors. The scale factor varies vertically here. There's an A of t. Up here, there's A today. Let's call that A today. The emission point over here, let's just call A of t. The light emitted over here has to stretch to get to here. The universe increases by a factor of A today over A at the emission time. A today can just be thought of as a number. A at the emission time depends on the emission time. That's the factor by which the wavelength of a electromagnetic wave will stretch and going from here to here. And that's, in fact, the redshift factor. Oh, we should define redshift carefully. Let's define redshift carefully. It is the ratio of lambda when it's detected, divided by lambda emitted. And that's not quite right. Is it minus one or plus one times that? Minus one. 
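The definition being written down here, collected in one line:

```latex
1 + z \;\equiv\; \frac{\lambda_{\rm detected}}{\lambda_{\rm emitted}}
\;=\; \frac{a_{\rm today}}{a(t_{\rm emission})} .
```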
Why is the minus one there? For historical reasons. And it's called z. z is the redshift. And lambda detected over lambda emitted is just the ratio of A today over A of t. So we look back along a light ray. We see light emitted over here. We see light emitted over here. We see light emitted over here. And each emission has a different redshift factor. The redshift factor, this ratio minus one, is equal to z. So if we know the scale factor at the time of emission, then we know what z is. OK, so that's good. But so far we don't know anything because we don't know how the light ray moves. Until we know how the light ray moves, we don't know how to compare these points. So let's talk about how the light ray moves. Let's work out all the things we need. And then I'll write down some equations for you which an astronomer would actually work with. OK, so what's the next thing? How does the light ray move? We go back to the metric. Here's the metric over here. And light rays are null rays. Light rays, in particular, light rays that are moving radially relative to us. Here we are over here. Light rays moving radially to us are null rays, which means that dt squared is equal to a of t squared times dr squared. All right, so we can write then, for a null ray, for a light ray moving backward like that, we can say for a backward-moving light ray that dt is equal to a of t dr. Now, I don't quite have it right here. I have a sign wrong. Of course, what's true is dt squared is equal to a of t squared times dr squared. Now, there are two solutions. One for going back into the past and one is going forward in time. Obviously, the one we're interested in is the one which goes back into the past. And for that, dt is equal to minus a of t dr. All this says is that r gets bigger as t gets more negative. As t gets more negative, r gets bigger. All right, so this funny minus sign there. It's a peculiar minus sign. Or, well, dr is equal to dt over a of t with a minus sign. All right, so this is one equation. And that tells us how light moves. That tells us how light moves along here. And let's now see if we can invent an equation, invent a question that an astronomer would be interested in. Here's the question. What is the number of visible galaxies at a given redshift? Just that. The number of, let's call it dn, the number of galaxies between redshift z and z plus dz, the number of visible galaxies per unit redshift. That's something that surely can be measured. Look out. You see the galaxies, or whatever the standard candles are, and you count the number at each redshift. That'll give you dn dz. Let's see if we can figure out an equation for it. What about dn? What about dn? dn, let's just take that to be the number of galaxies within a shell at radius r of differential thickness dr. What's that going to be? dn is going to be equal to the area of the sphere times dr. So dn is going to be proportional to dr. And what's the area of the sphere? Nope, not in general. Not in general. That would be correct in flat space. In general, it's proportional to C of r squared. In the case of interest where k is negative, this would be hyperbolic sine squared of r, and at large distances it would be some exponential. But of course we can study all of the cases. We can study all of the cases simultaneously by putting C of r squared. Times what? Times a number which is the number of galaxies per unit volume. But let's leave that out. Let's just write proportional to.
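The null-ray condition for a radially incoming light ray, written compactly. The integral form at the end is an added remark — it just accumulates the differential relation along the ray — not something written on the board:

```latex
0 = -\,dt^2 + a(t)^2\, dr^2
\;\Longrightarrow\;
dr = -\,\frac{dt}{a(t)} \quad\text{(past-directed ray)},
\qquad
r_{\rm emission} = \int_{t_{\rm emission}}^{t_{\rm today}} \frac{dt}{a(t)} .
```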
The number of galaxies within a shell is proportional to dr times C of r squared. Now what about dz? Let's look at dz. Here's dz, and we're interested in the change in z from going from one radial point to another radial point at distance dr. How much does z change? Well, let's just take this equation here and differentiate it. dz is equal, the derivative of the minus one is zero. A today, what's varying incidentally? Is A today varying? No, A today isn't varying. A today is just A today. What's varying as you look back along here is A at the emission point, A of t. That's equal to A today, and I want to differentiate one over A. The derivative of one over A is minus one over A of t squared times dA, right? dA of t. Everybody recognize that equation? We're going to divide by dz. That's minus A today over A squared dA. What is dA? dA is dA by dt times dt. I haven't done anything. I've just manipulated the equation a little bit. What is dt? dt is dr. It's not quite, no, it's not quite dr. That's what we're, that's, no, we have dr by dt. What's dr by dt? Here it is. dr by dt is minus one over A of t, right? Along the light ray. That's why I had to do this. That's why I had to figure out what that is. So dA, let's see, so let's put in dr by dt. dr by dt is minus one over A, so that gets rid of the minus sign and gives us another A downstairs. Good. The A's cancel, this A cancels this A. I don't like it. I don't like what I have in my notes. Yes, it is. What's A today over A? Z plus one. A today over A is Z plus one. OK, so here I have the number of galaxies seen at redshift Z contains a factor one over Z plus one. OK, that's just, that's known if we know Z. And then C of r squared divided by A dot. What can we do with that? Let's see what we can do with it. There's nothing we can do with it now. There's nothing we can do with it until we make a model. So let's make a model. Let's make a model and see what we get. Do we really want to do this now? Yeah, let's do it. All right, let's make a model. Let's take the matter dominated universe. Let's take the matter dominated universe with K equals minus one. And for the matter dominated universe for most of the evolution, again, A looks like T to the two thirds. Here's what we want to do. I'm not going to go through the whole story. We can calculate A dot. It's equal to two thirds divided by T to the one third. We calculate A dot. What about r? Can we calculate r? Yes, we can calculate r. Once we know A of T, we can calculate r. I won't do the calculation, but we also know r of T. It's known. We can compute it. Moreover, once we know A, we can calculate Z. The point is that everything, A, A dot, r, can all be expressed in terms of Z. We know that, and A today. We know the relationship between A and Z. We know the relation between A and T. In other words, we can write everything in terms of Z. Everything here is known in terms of Z. We get a thing here which depends on Z, but it's known only because we substituted a particular model. Had I used a different equation of state, I would get a different answer. C of r, that's an interesting quantity, that exponentially increases with r. That's easy to detect. The C of r, this explosive thing which just increases with r, that means, since r increases with redshift, that makes an anomalously large number of galaxies at large redshift if k is negative. And A dot can be computed directly in terms of Z.
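Here is the chain of steps just done on the board, reconstructed in one place. The intermediate algebra is filled in here, and the C(r) squared comes from the shell area written earlier; the end result is the stated factor of one over z plus one, the C(r) piece, and the one over A dot:

```latex
dN \;\propto\; C(r)^2\, dr,
\qquad
z+1 = \frac{a_{\rm today}}{a(t)}
\;\Rightarrow\;
dz = -\frac{a_{\rm today}}{a^2}\,da
   = -\frac{a_{\rm today}}{a^2}\,\dot a\,dt
   = \frac{a_{\rm today}}{a}\,\dot a\,dr
   = (z+1)\,\dot a\,dr,

\frac{dN}{dz} \;\propto\; \frac{C(r)^2}{(z+1)\,\dot a},
```

with everything on the right expressible in terms of z once a model, that is an a(t) and a k, has been chosen.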
In other words, the point is, with enough little bit of labor, you can take a model which means an A and a k and convert it into a statement of how many galaxies you see per unit redshift, together with the other piece of information which was the relationship between redshift and luminosity. Those two bits of information together can give you a complete history of the expansion of the universe. That's enough to give you a complete description of the expansion of the universe and to tell you what C of r is. I've assumed something about C of r, but I didn't have to. I could have done this calculation in flat space. I could have done it in a closed universe. The two pieces of information are redshift or luminosity as a function of redshift and the number of galaxies as a function of redshift. Those two things together are enough to determine A of t and k, whether it's plus one or minus one or zero. All right, so you do that. Astronomers do that. They've been doing it for 50 years or more. And the final upshot is, first of all, inconsistent with this matter-dominated universe with k being negative. What it's consistent with, the best fit from, incidentally, when I say galaxies, I mean standard candles of some kind. And the standard candles which are best at the redshift we're interested in are supernova. Supernova are things which are very controllable. We can't control them, but nature controls them. And the result is not this formula at all, but omega radiation, again, essentially equal to zero. Omega matter, and this includes both dark matter and luminous matter. Dark matter and luminous matter about 0.3. Omega lambda about 0.7. And omega k about zero. This is observational fact. Very little theory has gone into this. Well, theory has gone into it, but no particular biases about what to expect. This is pretty much what the unbiased raw data says, and zero he means it could be 0.1, or it could be minus 0.1. But no, no, no, no, no, no, I take that back. Whoa, 0.01. Could be 0.01. Plus or minus. It could be that small, or it could be that big. Omega lambda is about 0.7, omega matter about 0.3. That's the best fit to these various pieces of information here, using models which depend on the equation of state. So, just from counting, just from counting galaxies, redshift and luminosity, that's the picture that emerges. I thought lambda wasn't even going yet. No, I'm talking about today. Now I'm talking about today. This model, this theory failed. It failed. It failed this test. It just could not reproduce the observed luminosity, z relation and dn dz. Just failed. The way we know omega lambda is from the equation. We're not measuring omega lambda. No, you put in, yeah. You put in numbers for these things. The model consists of numbers for these things. That's what goes into the model. A value of omega k, a value of omega lambda, and a value of omega matter. Or equivalently, c matter, radiation is not important, lambda and k. My question is, from what you've talked about, taking these measurements, just taking measurements. You said we could figure out what k had to be, what omega k and omega m had to be. You have to process it through this. This is what you begin with. No, you don't begin with this. You begin with a value of omega k, a value of omega lambda, a value of omega m, and a value of omega r. No, no, you start with a number. You start with a set of numbers. That defines a model. Now you process it. Now you run it through the equations. 
You run it through the mill, and what you calculate is L of z and dn by dz. You compare them with experiment. If it fails, you throw the model away and put in some new numbers. And you keep trying numbers. What are you saying? That one works. That one works. That one works. With some uncertainties, but the uncertainties are not big. This one is rather small, 0.01. They're all small. They're all in, I think this is 0.28, if I remember, 0.28 and whatever. Let's call it 0.3. So they're known to about 1%. These numbers are known to about 1%. And if you vary them very much, much more than 1%, these tests fail. So a model then, again, a model consists of a specification of these values, which means the values of the various components here today. But you don't measure them completely. Some of them you measure, some you don't measure, but you just feed them in, grind the wheels, calculate A of t from the Friedman equation, calculate A of t. Once you have A of t, you can feed it into here, calculate dn by dz and luminosity as a function of z, and test it. Yeah? A today, where does that come from, the Hubble constant? Yes. Well, once you have a model, you're juggling various things, but once you have a model that fits these two pieces of data, OK? Then you have an A as a function of time, and you can read off it what A today is. Yeah? And it's roughly speaking from the Hubble constant today. Yes, it's from the Hubble constant today. So that is one of the outputs. One of the outputs is A today. And... Earlier you said that... I think you said something along the lines of, are you looking that far back in time, or are you not? If you were to gather this data by looking very far back in time, wouldn't that distort it? Because maybe it's the same point, maybe you'd be looking back at it. You know, if you only took your data from very, very far in the past, the only term that's important here is this one. If you go to very small A, which means long in the past, this is simply much bigger than this and this. I think it's more like you're not going that far. I mean, let's say you look back a billion years to... Oh, a billion years is not very far. That's not enough to distort the answers that you got from this. What do you mean, distort it? That's what you do. You go back a billion years, two billion years, three billion years, four billion years, and that's what goes into these equations. A of t, as a function of t, all the way back up to the present. All right, let's go back through it again. Let's begin with a set of values of the matter density, the cosmological constant density, all of these. They go into the Friedman equation. I know, but your answer was, before, we would have known these numbers. No. You just invented them. No, but really, I mean, it's the... Did the ratios come from that, or from the Hubble studies? Where did you get those values? We start with some trial values. We feed them into here. We calculate dn by dz. That's an observable thing, the number of galaxies. How the number of galaxies varies with z. We observe it. We calculate L of z. That's also an observable thing. If the model doesn't work, we throw it away and try some new numbers. No, I understood all that. Where did you get those? I didn't get them. Trial and error. How do you know it worked? Trial and error. It fits the data the best. That's right. You have a machine that starts with a set of four numbers. This one is not important. It knows that this is not important because you measure it directly.
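A minimal sketch in Python of this "grind the wheels" step: pick trial omegas, integrate to get the observables, and compare with data. The specific value of H0, the simple trapezoid integration, and the overall normalizations are assumptions of this sketch, not numbers or methods given in the lecture.

```python
import numpy as np

# Trial parameters; roughly the best-fit values quoted in the lecture.
# H0 in km/s/Mpc is an assumed round number, not taken from the lecture.
Om_r, Om_m, Om_L = 0.0, 0.3, 0.7
Om_k = 1.0 - Om_r - Om_m - Om_L
H0, c = 70.0, 2.998e5              # km/s/Mpc and km/s

def E(z):
    """Dimensionless Hubble rate H(z)/H0 from the Friedman equation."""
    return np.sqrt(Om_r*(1+z)**4 + Om_m*(1+z)**3 + Om_k*(1+z)**2 + Om_L)

def chi(z, n=2000):
    """Comoving coordinate in units of c/H0: integral of dz'/E(z') (trapezoid rule)."""
    zs = np.linspace(0.0, z, n)
    f = 1.0 / E(zs)
    return (zs[1] - zs[0]) * (f.sum() - 0.5*(f[0] + f[-1]))

def C(x):
    """The C(r) of the lecture: sin, sinh, or linear depending on the sign of Om_k."""
    if Om_k > 1e-8:
        return np.sinh(np.sqrt(Om_k)*x) / np.sqrt(Om_k)
    if Om_k < -1e-8:
        return np.sin(np.sqrt(-Om_k)*x) / np.sqrt(-Om_k)
    return x

for z in (0.1, 0.5, 1.0, 2.0):
    x = chi(z)
    D_L = (1+z) * C(x) * c / H0     # luminosity distance in Mpc
    dN_dz = C(x)**2 / E(z)          # galaxy counts per unit redshift, arbitrary units
    print(f"z={z:4.1f}  D_L={D_L:9.1f} Mpc   dN/dz (arb.) = {dN_dz:7.3f}")
```

In the spirit of the lecture, one would compare the computed luminosity-redshift relation and dN/dz against the supernova data, throw away parameter sets that fail, and keep adjusting until the fit works.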
It's just too small to matter. So you feed it through the Friedman equation; that allows you to compute dn by dz and L of z. These numbers give you a good fit? Yeah, these numbers give you the best fit. Well, this is what you do with spreadsheets all the time. Yeah. All right. Some of them... Look, this one here is more or less measurable directly. Why? You can measure h. h is measurable. And you can measure the dark matter and the luminous matter together. And basically, omega m is the ratio of this factor over here, this one to this one. You measure the energy density in matter. You measure the Hubble constant. And the ratio of them is omega m. So that actually is measurable. It's really this one here, which was the unknown. So, all right, so what can you say? This one is pretty much directly measurable, omega matter. And it's 0.3. What's the assumption about dark matter? Hmm? With an assumption about dark matter. Yeah. Including the dark matter, omega m is about 0.3. These two are the ones which are not known. But what is known, at least in theory, is that this plus this plus this has to equal one. This plus this plus this has to equal one. This one is known. Therefore, this one follows. But you want to do better than that. Really, what you want to do is you really want to do better than that. You just don't want to say, well, this was 0.3. This one's not really known. Nobody's ever measured the curvature. And therefore, this one must be 0.7. That's not what you want to do. You want to take this as given, this as unknown, and omega k, whatever it is, to fill up the difference. Okay? You with me? You take this one as known, this one as unknown, but we'll put it in as a trial. And omega k is just what's left over. We put that into these equations. We run it and we calculate L of z and dn by dz. Having done that, we just look for the best fit. I think I probably didn't say this clearly. So let me just try it once more. These two, this one and this one are measurable. This is negligible. This is 0.3. These two are unknown, but what's known is that the sum of all of them should add up to 1. So we take a test value for omega lambda. That tells us what lambda is. We run it through the Friedman equations, find out what a of t is. Having computed a of t, we go to this blackboard over here and compute the observable signature, the number of galaxies per unit z, and the relationship between z and luminosity. The relation between z and luminosity is a generalization of Hubble's original calculation. Luminosity is distance, z is velocity. The only reason we have to do better is because we're looking at it at different times. So the additional ingredient is to have these curves knowing how light moves and so forth. That involves the model. That involves the details of the model. But once you put the details of the model in, you look for your best fit. And the best fit is with these numbers here. The last two you treated as adjustable omegas? The last two you treated, well, only one of them because they have to sum to, yeah. That's sort of a question. Why just one of them? Because both of them appear in the equation. So why couldn't you guess lambda equals 0.4 and k equals 0.3 or something like that? No, no. They have to add up to one. These are the ones. No, they would add up. All that would add up to one. Instead of 0.7, lambda equals 0.7 and k equals essentially zero. Yes. Why not lambda equal 0.4 and then k equals 0.3? Good. Good.
You take this, you take your trial, you solve the Friedman equations with it. That tells you what A of t is for your model. You then feed it into here. This is not what you'll get. You'll get something different. Feed it into here and calculate the number of galaxies per redshift or the number of supernova per redshift. And the luminosity as a function of z. I want to point out that it's different than what you said. You said just take lambda, forget k, and let it fall out. I'm not even sure what that means. You were saying you only have one parameter, lambda, that we have to deal with. But I don't understand why you're doing that. Because this plus this plus this has to be one. Right, but as I said, if you just fix, okay, so you can take it off. Right, I don't want to waste time. The point is that if you set lambda, you're forced to set a certain k. k, right? Okay, yeah. That's right. Right. I have a question about k. You can say plus one minus one zero. Is it really k is greater than zero? No, the way we formulated it, it's plus one, minus one, or zero. But omega, yeah. I'm having trouble seeing how the dimensions work there. I mean k over a squared has to have the same dimensions as h squared. That's right. Speed of light is equal to one. Apart from the speed of light, that's correct. There are some c's in there. There are some speeds of light. But get used to working with speed of light equal to one, and then this is dimensionally correct. Does this method rely on the assumption that the density of galaxies is isotropic over space and time? No, what does it mean for it to be isotropic over space and time? What does it mean? Meaning that earlier in the universe there might not have been more galaxies, but smaller ones per unit of space. It assumes that at every time space is homogeneous and isotropic. It does assume that. But that's to a large extent tested, not completely. But if you want to know whether it's isotropic at this time over here, you just look at galaxies that you can see at that time, and you see that it's isotropic. You look in different directions at different redshift, at the redshift you're interested in, and it's isotropic. These omegas are the way it is today. In the past it would be different. The omegas are defined to be the ratios. You have these four numbers, cr over a to the fourth, plus cm over a cubed, plus lambda, minus k over a squared equals h squared. That's true at all times, okay? Now we plug in today. Now we just take the ratio, divide it. I mean, like a zillion years ago, the omegas will be different. What do they look like? Obviously they vary. Here they are. Here you can see that long in the past, radiation was much more important than matter, much, much more important than lambda. So long in the past, a long, long time ago, omega radiation was large. Large means essentially equal to one. Long in the past, omega radiation was about all there was, and everything else was small. Right, so therefore d omega r is negative. The derivative of omega r, you know, is negative. Omega r is decreasing. Right, and omega matter and the other one, what are they doing? They're increasing; omega lambda was probably zero. Yeah, omega, look, okay, so let's, the most, when a gets large, what is going to be biggest here? Lambda is going to be biggest, right? So long in the future, omega lambda will be one. Omega lambda will be one and everything else will be small. Omega curvature will be less small than omega matter, which will be less small than omega radiation.
But long in the future, it will be dominated by lambda. Long in the past, it was dominated by radiation. So the real question is what is the time derivative of omega matter? It obviously starts small, then it gets larger, then it gets small again. Yes, yes, that's correct, that's correct. The omega matter starts small, why? Because it's almost all omega radiation, right? Then it gets bigger, it starts to dominate, it gets up near, pretty big, up near close to one. That's before lambda became important. And then lambda becomes more important than this. That's today, so omega matter is 0.3 and omega lambda is 0.7. And omega k is, well, we don't know that. Let's see. No, that's correct. When I said we don't know that, I meant to say that we know that it's zero to within about one percent. The next round of experiments might discover that it's not zero, that it might be one percent, either plus or minus. No, that won't change much. Well, let's see. Yeah, it will change. It will get smaller. It will get smaller because this will eventually defeat this by any arbitrary amount. So yeah, this one will get smaller and eventually all it'll be will be lambda. That's as far as we can tell. I mean, there's some theory that goes into that, of course. But if you put the values in, fit the data, is there any kind of systematic deviation that's not understood? I don't think so. No. So it's a good problem. Yeah, no. This has achieved the level of precision cosmology where everything does fit. The only thing is it's a one percent model. It's not a tenth of a percent and it's not a hundredth of a percent. It's a one percent fit and everything fits to one percent. In the end of this, have there been attempts to pin down which particular case it is, positive, negative, or zero, by trying to do huge triangulations and looking at angles to see if they sum up to 180 and something like that? That's correct. That's correct. And we're going to come to that when we study next week the microwave background. And the microwave background, or at least the lumpiness of the microwave background, allows triangulation and allows you to study triangles. And again, probably if all you use is the supernova data, which is, this is the supernova data, I don't think that gets you to one percent. Maybe it gets you somewhere between ten percent and one percent. Combined and supplemented with the cosmic microwave background, it's one percent. So if omega k goes to zero, then it says the universe is becoming flatter. Becoming flatter? Yes. Oh yes, for sure. For sure. The bigger it gets, the flatter it gets. That's just: the bigger, the flatter. In any case, if it continues to grow, it will become flatter. Okay, this is tricky stuff, but the way to think about it is what I said. You start with a model which means a set of parameters here. You feed them into the Friedman equations. You go to this side, once you have the Friedman equations, and you can read off from the Friedman equations the observable quantities L of z and dn by dz. And you compare. There's incidentally no reason why any particular set of numbers has to agree with the data, other than theory. Without a theory, there would be no particular reason why any set of numbers would agree with the data. So it's not just testing a particular model, it's testing a particular model and it's testing the theory. No.
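A minimal sketch, in Python, of the trial-and-error procedure just described: pick trial omegas, let omega k fill up the difference, run them through the Friedman equation, compute the two observables, and keep whichever trial fits best. The function names, the assumed Hubble constant of 70 km/s/Mpc, the flat-space distance formula, the choice to set radiation to zero, and the use of SciPy for the integral are assumptions of this sketch, not anything from the lecture.

```python
# Sketch: trial cosmological parameters -> Friedman equation -> observables
# L(z) (via luminosity distance) and dN/dz, to be compared against data.
# Assumptions: flat-space distance formula (fine for |omega_k| << 1), H0 = 70 km/s/Mpc.
import numpy as np
from scipy.integrate import quad

H0 = 70.0 / 3.0e5                      # Hubble constant in 1/Mpc, with c = 1

def E(z, om, ol, orad=0.0):
    """Dimensionless expansion rate H(z)/H0 from the Friedman equation."""
    ok = 1.0 - om - ol - orad          # omega_k is whatever is left over
    a = 1.0 / (1.0 + z)
    return np.sqrt(orad / a**4 + om / a**3 + ok / a**2 + ol)

def comoving_distance(z, om, ol):
    """Comoving distance in Mpc."""
    integral, _ = quad(lambda zp: 1.0 / E(zp, om, ol), 0.0, z)
    return integral / H0

def luminosity_distance(z, om, ol):
    """Sets the apparent brightness of a standard candle at redshift z."""
    return (1.0 + z) * comoving_distance(z, om, ol)

def dN_dz(z, om, ol):
    """Relative number of objects per unit redshift for a constant comoving density."""
    r = comoving_distance(z, om, ol)
    return r**2 / E(z, om, ol)         # shell volume per unit redshift, up to constants

# Trial and error: each model is a set of numbers; compute the observables and compare.
for om, ol in [(1.0, 0.0), (0.3, 0.0), (0.3, 0.7)]:
    print(f"Omega_m={om}, Omega_L={ol}: "
          f"d_L(z=1) = {luminosity_distance(1.0, om, ol):.0f} Mpc, "
          f"dN/dz(z=1) ~ {dN_dz(1.0, om, ol):.2e}")
```

In an actual fit one would minimize something like a chi-squared of these predictions against the measured supernova brightnesses and number counts, which is how values near 0.3 and 0.7 emerge as the best fit.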
(February 18, 2013) Leonard Susskind develops the energy density allocation equation, and describes the historical progress of the solution to this equation. He then describes the observations of luminosity and red-shift that have led to the correct solution for today's universe - which is dominated by dark energy.
10.5446/15058 (DOI)
We're going to begin inflation tonight too. Hopefully we'll get through baryogenesis. We will get through it because so little is known. No, I'm serious. The basic theoretical framework, modern physics, baryogenesis means the creation of the – I'll explain the words in a moment, but it does mean the excess of matter over antimatter. The modern theory, I will tell you what I know about the history of it. I was engaged in the history of it. In 1980, Savas Dimopoulos and I were asked by Bob Wagoner, why is there so much entropy in the world? Bob Wagoner is a cosmologist at Stanford. Why is the number of photons, which is counting the entropy in the world, why is it so much larger than the number of protons and electrons? Remember, it's about 10 to the 80th times larger. No, 10 to the 10th times larger. 10 to the 8th times larger. 10 to the 8th times larger, approximately. 10 to the 8th, 10 to the 9th, something like that. Why is that? What created all these photons for every proton? We thought about it a little bit and then realized that the question was upside down. The question was not why there are so many photons, but why there are so few – so few protons and electrons. And we basically worked out what the modern theory is. I'm going to tell you what it is tonight. And for a brief period of time, the conditions to explain the imbalance of particles and antiparticles were called the Dimopoulos-Susskind conditions. Somewhere during that time, a little bit later, two young Russian mathematicians came to me and they said they were writing a book about – or not writing a book, yeah, they were – no, they weren't writing a book. They were looking for contributions to a book celebrating Andrei Sakharov's career. And they brought me a list of papers about this thick and asked that I look through them and see what was interesting and write about it. So I started to look through them and sure enough, I discovered exactly the same paper. It was remarkably the same as what Savas and I had done, with the exception that it was done in 1967, 13 years earlier, and had been lost completely to the Western world. The Russians knew about it, but at that time there wasn't a lot of communication between the Russians and the West. So I did indeed write a little historical paper about Sakharov's contribution and made the mistake of calling them the Sakharov conditions. They are now known as the Sakharov conditions, justly so, of course, that they're known as the Sakharov conditions. And there are basically three conditions that I'm going to go through and explain to you that if they are satisfied, and they are satisfied, then there will be an imbalance of matter over antimatter. The real question is the magnitude of it. Let me say something else. This same kind of question is going to come up later. Is the thing we're asking really a legitimate question? Okay, maybe the world just started with more particles than antiparticles, end of story, but it focuses your attention a lot when you have a number. When you have a number, a specific number, and in this case it was the 10 to the 8th photons per proton, numbers focus your attention and say this number needs an explanation. So the excess in itself was just a fact that you could have said, well, maybe it's just that way. But once there's a number, and in particular if the number is some oddball number, like 10 to the minus 8th, you start to think maybe this thing needs an explanation. So the explanation of the, yeah. How does it relate to entropy?
Roughly speaking, for a black body thermal spectrum, the entropy is simply the number of photons. It's just the number of photons. And a black body radiation carries entropy. It's thermal. And so within some factor of order one, I forget there's some pies in it, simple factors, the entropy of black body radiation is just the average number of photons in the gas. So when people speak about the entropy of the universe, in many contexts what they're often talking about is just the number of cosmic microwave background photons. That's what they mean by it. And so when I say, when Bob Wagner came to me and said, why is the entropy of the world so large, he was asking me why the number of photons per proton is so large. You might have asked, how did that small number of protons make such a large number of photons? But that is not the right way to think about it. As I pointed out to you last time, or time before, in the very early universe there were huge numbers of protons and huge numbers of anti-protons. And they were basically equal to each other with the number of protons and the number of anti-protons being approximately the same, or the number of quarks and anti-quarks, same true for electrons and positrons, being approximately the same as the number of photons, thermal equilibrium at some very high temperature. And then it cooled. When it cooled, the photons were left over. They were left over because they decoupled. The universe became transparent and the photons just hung around and we see them today. The protons and anti-protons annihilated each other. The universe expanded fairly slowly and there was plenty of time for them to find each other by and large and annihilate each other and all that was left over was the slight excess. So the question became not why there are so many photons, but why was there this tiny excess of 10 to the minus 8? In other words, the number of protons minus the number of anti-protons, where is it? This number over here divided by the number of photons. But the number of photons and the early history of the universe was approximately the same as the number of protons plus anti-protons, number of protons plus NP bar. And that's a number which is about 10 to the minus 8. So there's a number to compute. Why does this small number appear? We don't know the reason for the number. That's because we don't have a complete theory, but almost any theory that we write down which explains it always gives a small number. So we'll talk about a little bit what the kinds of conditions are necessary. And it turns out sufficient to make an imbalance of matter over antimatter. Now is it matter over antimatter? How come it didn't come out antimatter over matter? That's largely a definition. That's largely a definition. The thing we call matter is the thing that we're made out of. So that's definition. All right, let's begin with a hypothesis which is really believed to be wrong, but we'll come to it. Namely the baryon number. What does baryon number mean? If there are only protons, baryon number can be interpreted in terms of quarks. It's basically the number of quarks minus the number of anti-quarks in the world. I think it's three times that to be exact. And the reason it's three times that is because it was originally defined in terms of protons and neutrons being baryons, and a proton and a neutron has three quarks in it. So that's called b, the baryon number of the world. So that's the number of quarks minus the number of anti-quarks times three. 
If all that exists- Student, one divided by three. Say it again? Student, one divided by three. Yeah, divided by three. Thank you. Divided by three. Or three times the baryon number is, yes, you caught me, divided by three. The baryon number of a proton is one and it has three quarks in it. Good. Okay, so that's called baryon number. And there are other kinds of objects that carry baryon number besides protons and neutrons, but they're all unstable. Even the neutron is unstable. Nevertheless, there are other kinds of objects. We can mostly focus by thinking about quarks themselves if we like, and the statement that there's a baryon excess is the statement that there were in the early universe more quarks than anti-quarks. The number of quarks and anti-quarks separately were about the same as the number of photons. And so, question is how it got that way. Okay, let's suppose for the moment that baryon number is like electric charge. One of the things about electric charge is that it's conserved. It doesn't change with time. Now, baryon number is not like electric charge. Electric charge is the source of Coulomb forces, long-range electric fields which create long-range electrostatic forces. Baryon number itself is not a source of some kind of Coulomb type force. Of course, the protons are electrically charged. Therefore, they make conventional Coulomb forces between each other. They make electric fields. The neutrons are not electrically charged. They don't make electric fields. So what we would say is it's the charge of the proton, not the baryon number of it which is creating any kind of long-range field. And baryon number itself may be conserved, it may truly be conserved, but it is not exactly like electric charge. It doesn't exhibit this tendency to make long-range forces. All right, but suppose it's conserved. Suppose it's conserved. Then if there is ever an excess in the beginning, let's say, of the universe, whatever that means, then there will always be an excess and that excess will be sort of frozen in. If you change the number of quarks, you must change the number of antiquarks by the same amount if baryon number is conserved. And what's more, experimentally, baryon number appears to be highly conserved. Nobody has ever seen a proton disappear. We can talk more about experiments which search for the decay of protons and so forth. But the first approximation in our world, protons are extremely stable. What could they decay to? Let's ask a question. What could they, suppose a proton was to decay, what could it decay to? It must decay to things which are lighter than itself. It must decay to something which has a positive electric charge. So if a proton would, and if we want to assume that whatever it decays to is stable, there's really only one thing that it could decay to. It could decay to a positron and something electrically neutral. A proton could decay, a proton could disappear and become a positron. That conserves electric charge. It doesn't conserve energy. A positron is much, much lighter than a proton, but it would compensate by giving off a neutral particle. What kind of neutral particle is around? Photons. Photons are prime candidate. So a decay possibility for the proton would be, proton I'll draw it as a kind of Feynman diagram, a proton moving along would decay to a photon which we'll call gamma. Photons are typically called gamma and an electron antiparticle, E plus. 
That's a possible thing that could happen and we don't really know any deep fundamental reason why it can't happen. And maybe it does happen. We're going to talk about whether it does happen. In fact, we think it does happen. But for whatever reason, yeah. What about a neutrino? No, it can't decay to a, I mean a positron neutrino. No, no good. No good. A neutrino is a fermion. Two fermions make a boson and a proton is a fermion. So whatever the proton decays to, it must decay to something with an odd number of fermions. So it can't decay into a positron and a neutrino. Right? Yeah? If baryon number really was conserved, it would correspond to a symmetry of some sort. Yes. Is that for that or is no one ever found one? You can always, given any conserved quantity, you can always make up a symmetry. But then you can say, well, all right, is there really a symmetry? And the way to test it is always to ask whether baryon number is conserved. So it's a little bit circular. But yes, you're right. If baryon number was conserved, it would correspond to a symmetry. And if it's not conserved, it doesn't correspond to a symmetry. OK. Yeah? Is there, should you think about particles decaying and particles annihilating with each other sort of different things? Like a proton can annihilate with a positive, with a negative proton. Yes. So that, can you think of that as a decay or not? No, that's not a decay. It's not a decay of the proton. It's just called annihilation of a proton. But the important point here is that the baryon number doesn't change. Three quarks and three antiaquarks came together. The sum total, the baryon number is zero. And afterwards, it's just a bunch of photons and the baryon number is still zero. So right. So that's a fair question. But it does, and you could call it a decay if you wanted to, but it wouldn't correspond to a, right. OK. So for whatever reason, and we don't know the reason really fully, the standard model does not permit this decay incidentally. It doesn't happen in the standard model. But there are versions or extensions of the standard model which are perfectly viable in which this process could happen. Now one thing we know about it immediately, the decay rate of the proton cannot be very large. Our protons have been around for 15 billion years or whatever it is, 13.7 billion years, and they haven't disappeared on us. So if the decay time of the proton were a microsecond or even faster, which particle physics might permit, the protons just wouldn't be here anymore. So the lifetime of the proton is at least very, very, very long. But in fact, it's much, much longer than is necessary for the protons to still be here. The mean lifetime of a proton is more than 10 to the 32 years, so it's very long. And we don't completely understand why. In fact, we don't understand why at all, but we take it as a fact. At least a temporary fact. Let's assume it for the moment. That would mean that the total baryon number in the universe, but let's just say in a box, in a box doesn't change. And so the only theory of this excess that we could have is that it was built in from the very beginning and it still survives today. That would be a consequence of baryon conservation, baryon number conservation. I'm not saying that's true. I'm saying that's a consequence of baryon conservation, that this is the same today as it was at the very beginning. 
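As an added bookkeeping aside on the hypothetical decay drawn above (this is an illustration, not something worked out in the lecture): tabulating electric charge Q, baryon number B, and lepton number L for each particle shows why nothing except the conservation of B, together with the conservation of L, stands in the way of the process.

```latex
% Bookkeeping for the hypothetical decay p -> e+ + gamma (illustration only)
\[
  p \;\to\; e^{+} + \gamma
\]
\[
  Q:\; +1 \to +1 + 0 \ (\text{conserved}), \qquad
  B:\; +1 \to 0 + 0 \ (\Delta B = -1), \qquad
  L:\; 0 \to -1 + 0 \ (\Delta L = -1)
\]
% One fermion in, one fermion out, so spin and statistics are fine as well.
```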
Now there's another symmetry of nature, which again is not really a symmetry of nature, but you might have thought about it as a symmetry of nature. At the time that Sakharov put out his theory, it was only a couple of years after it was discovered that it wasn't a symmetry. We'll talk about it. But it's particle-antiparticle-reflection symmetry, the statement that particles, if interchanged with antiparticles, that you can't tell the difference. Of course, you can tell the difference between a proton and an antiproton because we're made out of protons. But if we changed all of our protons to antiprotons, if you were made out of antiprotons and I was made out of protons, don't get too close to me. Somebody showed me a proton. I would say, yes, that's a proton. Somebody showed you an antiproton. You would have exactly the same response. You would see the same thing that I saw. That's called particle-antiparticle symmetry, or charge-reflection symmetry. And for technical reasons, well, all right, that's a symmetry. It's not a symmetry. But up until around 1964 or 65, I forget exactly when, it was thought that charge-conjugation symmetry, this is called C equals charge-conjugation symmetry, it was believed that it was a symmetry. Particles go to antiparticles. Two other things which were thought to be symmetries of nature, which went along with this, were called P, which is P. P is for parity, but it has nothing to do with economics or fairness. It's left-right symmetry, mirror-reflection symmetry, that if there's a kind of particle which exhibits a handedness, for example, it rotates to the right when it moves forward, spins to the right when it moves forward, then it had been assumed that there would be a left-spinning particle which also moves forward, left-right symmetry. And that was called parity. And the final interesting symmetry for us tonight is called T. And T is time reversal. What this means is that any process that can happen in nature, just an ordinary lab, let's restrict ourselves to laboratory processes, let's not worry about the whole universe, any process that can happen in nature, we could take a movie of it. If we run it backwards, it is still a possible thing that can happen. Anything that can happen, the opposite can happen. Now it doesn't seem that way when you think about the real world, of course. The second law of thermodynamics says things get worse. But at the microscopic level, at the true microscopic level, we don't average over things and we don't do statistical averaging and so forth. Any process that happens in nature, it was thought that its time reversal was another possible thing that could happen. That's called time reversal symmetry. Now what is actually known mathematically, this is a mathematical statement about quantum field theory which I am not going to try to prove tonight. It's a hard theorem. But what follows from the basic structure of quantum mechanics and relativistic field theory is not that charge symmetry is a symmetry, particle, antiparticle, not that parity reflection symmetry is a symmetry or that time reversal is a symmetry, but the product of all three of them, CPT. I'll tell you what that means in a minute. CPT is a symmetry. And this for mathematical reasons, fundamental mathematical reasons, which you'll have to accept for now, it is not consistent to ruin it, to not have this symmetry. Now what does a symmetry mean? It means if you take any process, replace every particle by its antiparticle, reflect it in a mirror and run it backward.
It is still a possible process in nature. You have to do all three in order to be sure that it's a symmetry. Up until the 60s. So for a wave function, you can't just change the sign of T. No, you have to complex conjugate it too. So does time reversal always mean take the complex conjugate as well as change the sign of T? Right. No. You can see that from the Schrodinger equation. The Schrodinger equation is something like I d psi by dt equals, I don't know, d second psi by dx squared or something like that. That's not quite right. But supposing you change the sign of T, that changes the sign of the left-hand side, but it doesn't change the sign of the right-hand side. So changing the sign of T doesn't work. But if at the same time you complex conjugate and change the sign of T, which also requires you to change the sign of I for complex conjugation, then it does work. But this is something we don't need to get into right now. CPT is a symmetry. But let's suppose for a moment, let's go back to Sakharov or even before Sakharov, before the facts are actually known, let's suppose that either charge conjugation or C times P, which is the usual one that people focus on, that that was a good symmetry. C times P means change every particle to its antiparticle and change your left hand with your right hand. That was actually thought to be a good symmetry of nature. That would entail that there's some kind of symmetry which allows you to change the sign of every particle to its antiparticle. You'd also have to change left to right, but that goes along for the right. If you believe that, then you might ask, well, gee whiz, if there is a complete symmetry between particles and antiparticles, why should it be that at the very beginning there was an imbalance of one versus the other? Now nobody can tell you that there isn't an imbalance. It just dates back to the very beginning. But you might also wonder, what's going on here? The laws of physics seem to be completely symmetric between the two kinds of things, particles and antiparticles. And yet for some reason, there was this small imbalance of size 10 to the minus 8. It doesn't sound right. The modern theory of bariogenesis begins with the idea that there was a balance, that particle and antiparticle were balanced. Again, not for any good reasons, but just for whatever initial condition you started with, there was no bias toward particle or antiparticle. That's an assumption. It can be justified in some frameworks, but it's... So then how is it possible then that it got imbalanced? The only way that it's possible for it to get imbalanced is if the conservation of barion number is not correct. In other words, if processes can happen, and here's one, if processes can happen in nature in which a proton becomes a positron, that is a violation of barion conservation, which allows the barion number of the universe to change. That would be the first requirement for a theory of bariogenesis that was based on the assumption that the initial starting point was balanced between the two, that if you're going to wind up with an excess of quarks over anti-quarks or barions over anti-barions, meaning protons and neutrons, you must have a mechanism which violates the conservation. So that was Sakharov's first condition. Condition number one, Sakharov conditions. Sakharov conditions started with number one, barion number violation. Violation means violation of a conservation law. 
And as an example, the process of a proton becoming a positron and a photon is an example, if it happens in nature. That was condition number one. Let's just talk about that for a minute. If barion number conservation is not a good conservation law of physics, then it must be a very, very weakly, weakly broken one. As I said, protons are very old. They didn't disappear. So it must be very old. It must be very old. The protons are old. And whatever mechanism, such as this kind of decay, it must have a very, very, very small probability per unit time. One can ask in current theories, current unified theories, which do some of which, most of them, in fact, everyone, no, I take that back, every known unified theory, and even when they're not unified, if they're coupled to gravity, every known fundamental theory violates barion conservation like this. You can ask in the known theories, why is the proton so stable? And the answer tends to be something like this, that the theory has Feynman diagrams, has processes in which the proton comes in from the left, meaning from early, out goes an electron, out goes a photon, but somewhere in the guts of this Feynman diagram, there are all kinds of particles. Let's not specify exactly which ones are in there. But among them, assume that there's at least one or more particles which are very, very heavy. In other words, that it requires a particle type which is very, very heavy in there, particles which have not been discovered yet. Now one of the reasons to believe this is that the standard model by itself, with its ordinary known particles, does not permit this to happen. And so in order to make it happen, you would have to have new additional particles that were not part of the standard model, and that certainly means that they're heavier because they haven't been discovered yet. But imagine making them very heavy. How heavy? 10 to the 16th GeV, or 10 to the 16th times heavier than a proton, which is not an unnatural number for heavy particles. Then what's true is that this kind of process is extremely unlikely. Extremely unlikely means that the quantum mechanical amplitude for it is suppressed by inverse powers of this heavy mass. Say for example, it might be, let's call this the heavy mass m, the Feynman diagram will contain a 1 over m squared just because it's so hard to make a heavy particle. Just because it's so unlikely that heavy particle doesn't last very long, it has to melt into the rest of the Feynman diagram. But if the particle is heavy enough in there, then this will be a very, very small probability process. That's what drives, that's the kind of thing which keeps the proton stable in modern unified theories. I don't know if it's right. I'm not saying it's right, but this is a mechanism that the extra particles that are there are all heavy enough that they suppress this enormously. All right. Now I will tell you something else. That this 1 over m squared, if for some reason the proton is given a lot of energy somehow, then this isn't really 1 over m squared, but it's really some energy m minus some energy squared. If the proton has excess energy from some other place, this can well be a much smaller suppression and I'll tell you where that extra energy comes from in a minute, where it can come from. But in the ordinary world, a proton sitting around, there's nothing to give it at an enormously heavy kick, an enormously high kick. It doesn't have a huge amount of energy. 
Sometimes when they collide or anything else, don't have huge amounts of energy, so this is where you begin. Theories of this type can explain why the proton is so stable. Of course, you could say they don't explain anything. They just tell you that for some reason that the particles which are necessary for the proton decay must be very heavy, and that's true. Any energy that's not included in its rest mass squared, nevertheless we go in here and I'll tell you where such energy can come from in a moment. But as I said, all theories, all known theories, and I would venture to say any theory that ever will exist will have the possibility of baryon violation. So let's agree tentatively that baryon violation is not taboo, that processes such as this can happen, they're not forbidden by any fundamental law of physics, it's just an accident of the particle spectrum that the proton is as stable as it is. Let's take that as an assumption, a working assumption. Baryon violation by itself is not sufficient to give you an average excess of protons over anti-protons. Why not? Because for every event, where is it? Every event like this which can happen, which can cause you to lose proton number, this is a source of decreasing baryon number, you start with one unit here and you have no units over here, for every process like this there is the charge conjugate process in which an anti-proton comes in, a electron, a true electron goes off, and the anti-particle of the photon is the photon, the anti-particle of a photon is itself. So if we believe in particle-anti-particle symmetry, then for every process like this, a process like that happens, and on the average, there will be as many of these kinds of decays as these kinds of decays. So if we started out with an equal population of quarks and anti-quarks of baryons and anti-baryons, when we try to rely only on the violation of baryon number, it would not be a very efficient way to create an excess. What do I mean by that? I mean, think about it for a minute, supposing there was a randomness that, let's see how to say this, oh, I might add something else, incidentally, that in the very early universe there were lots of electrons, positrons, and photons around, and it's perfectly possible for the opposite to go. An electron, a positron and a photon can come together and make a proton. An electron and a photon can come together and make an anti-proton. So there's a statistical balance of things going on. It's only statistical that there would be as many protons as anti-proton, because these processes are statistical processes, but the imbalance of protons over anti-protons is not a statistical effect. Yeah? Good question. On that lower diagram, on the left-hand side is the baryon number equal to negative one there? Here, yes. Negative one goes to zero. Yeah. Right. Negative one goes to zero. Here one goes to zero. And the reason that it can't be a statistical effect is that it is quite true that the baryon excess is a small number, 10 to the minus eighth. But ask how many protons are there in the entire universe, known universe, observable universe? And the answer is about 10 to the 80th. How big would you expect the excess of protons over anti-protons to be if it were simply a pure statistical effect? Square root of n. Right? You know that the statistical fluctuation in a variable just due to random statistics is of order the square root of the number. 
If you have random heads, tails flips and you collect, let's say, a thousand flips, the number of heads will be 500, but that's up to the margin of error. And what is the margin of error? The margin of error is the square root of the number of flips. And so you expect that the number of heads will be 500 plus or minus, what's the square root of 500, the margin of error will be about 25. Okay, now let's come to the world which has 10 to the 80th protons in it. What is the margin of error or the statistical fluctuation, the average expected statistical excess of protons? It would be the square root, which would be 10 to the 40th, right? 10 to the 40th. 10 to the 40th over 10 to the 80th is 10 to the minus 40th. So if the excess was 10 to the minus 40th, we could say it could be statistical, but it's not. It's much, much bigger than that, yeah. It depends on the number of protons and antiprotons in the very early universe that became our observable universe. Oh, assuming that they started out balanced. Right, they started out balanced. And if there were 10 to the 160th protons and antiprotons in that early universe, then that could be 10 to the 80th. It could be. It could be. The other thing that would be very hard to justify, extremely hard to justify, is if it was a statistical fluctuation, why would it be the same everywhere? That would be very, very hard. Right. You'd expect a patch over here with protons, a patch over there with antiprotons. And so you can ask, what is the experimental evidence that the neighboring or even very distant galaxies are not anti-galaxies? And as I understand it, there's very good evidence. The good evidence is if the population of galaxies was sort of symmetric between galaxies and anti-galaxies, then you would expect cosmic rays, especially very high energy cosmic rays, which are thought to be cosmic in origin, to have as many nuclei as anti-nuclei, or as many anti-nuclei as nuclei. We do see, for example, helium nuclei in cosmic rays. As I understand, nobody has ever seen an anti-nucleus in cosmic rays. You do see antiprotons, but that's fairly easy to explain. Even if we just had, even if we didn't have any cosmic rays of antiparticles coming in, a high energy particle that hits the atmosphere would make antiprotons. But it's extremely difficult to make an anti-nucleus. That would take an incredible piece of luck of a very high energy collision with the atmosphere creating an anti-nucleus, not likely. So the complete absence of anti-nuclei strongly suggests that the universe is not equally populated with galaxies and anti-galaxies. So there's something to explain. It's not statistical, and we need to explain it. Do you see helium nuclei, or do you just see any of these that look like they correspond to helium? I believe you see helium nuclei in cosmic rays directly. And detailed statistics of it, I don't know. It's not my field of physics, but that's where the argument lies, at least one of them. We also don't, there's another thing we don't see. If our galaxy, for example, was particles, and Andromeda was antiparticles, then in the region in between, you would expect there to be both particles and antiparticles. I mean, there's plenty of electrons around, for example, circulating around in the region between galaxies. You would expect there to also be plenty of positrons, and then you would expect there to be lots of positron-electron annihilations, and positron-electron annihilations are very easy to detect.
They produce pairs of photons of very definite energy, and those pairs of photons could be observed. And it's not that we don't see electron-positron annihilation, but we don't see nearly enough of it to account for the possibility that neighboring galaxies would be anti-gallaxies. So that's almost certainly ruled out that there are anti-gallaxies out there, and certainly ruled out that there's an equal population of them. Okay, so particle-antiparticle symmetry tends to suggest rather strongly that the universe was created symmetrically, although it doesn't prove it. So the next element of the argument is that in order to account for the fact that there are more protons than anti-protons, this charge conjugation symmetry must fail. The idea that particles and antiparticles are symmetric in the laws of physics, and that the charge conjugation times parity, it's enough that it would be bad enough charge conjugation or charge conjugation together with anything else, any symmetry that involved into changing particles and antiparticles. If you had that symmetry, you would have a very hard time explaining. Another way to say it is if we allow, yeah, if we allow baryon violation, that's not enough. That will just give you this statistical effect. You need something to push it in one direction. You need something to bias it toward particles rather than antiparticles. You need something in the laws of physics that will cause this baryon violation to be biased so that one kind of thing happens more than the other. The implications of that are that you need violations of particle-antiparticle symmetry, in particular the so-called CP symmetry, but it's basically just particle-antiparticle symmetry. Again, is there particle-antiparticle asymmetry in the world? Yes, there is. We know with absolute certainty experimentally that particles and antiparticles don't behave the same way. A particular example, it's rather, the examples are hard to come by, but once you have one or two or three, you know that the laws of physics are not symmetric between particles and antiparticles. The simplest example to explain is the so-called b-meson. A b-meson is a particular kind of particle made of a quark and an antiquark. It's made of a bottom quark and an anti-down quark. Bottom quark and an anti-down quark. Bottom quark and antiparticle form a meson. They form a meson. The meson is bound and it's called the b-particle. Now it has an antiparticle, you just interchange the bottom quark for the anti-bottom quark and the down, and the anti-down quark for a down quark and you get the anti-b-meson. Both the b-meson and the anti-b-meson are electrically neutral, but they are not their own antiparticles. This one is made of b-quark and anti-down quark. This one is made of anti-b-quark and down quark or whatever the opposite was. These particles decay and they decay into pi mesons. Let's see, this one here we forget. I'll just write down one possibility at the, oh boy, I've forgotten what a b-meson decays and they decay into lots of things, but a charge, oh I know, they decay into a k-meson and a pi-meson. And I think the b-meson becomes, I think it's a k plus and a pi minus. This is not important. The fact is that there's a very definite decay into a k-meson and a pi-meson. The b-meson is electrically neutral so the decay products are electrically neutral. That it follows that the anti-b-meson can decay into the antiparticles which is a negatively charged k-meson and a positively charged pi-on. 
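One conventional way to quantify the rate difference just mentioned (added here as an illustration; the notation is the standard particle-physics convention, not something written on the blackboard in the lecture) is a CP asymmetry built from the two partial decay rates:

```latex
% Standard convention for quantifying the particle-antiparticle rate difference
\[
  A_{CP} \;=\;
  \frac{\Gamma(\bar{B}^{0}\to K^{-}\pi^{+}) \;-\; \Gamma(B^{0}\to K^{+}\pi^{-})}
       {\Gamma(\bar{B}^{0}\to K^{-}\pi^{+}) \;+\; \Gamma(B^{0}\to K^{+}\pi^{-})}
\]
% A_CP = 0 would mean particle and antiparticle decay at the same rate for this channel;
% the measured value is nonzero, which is the point being made in the lecture.
```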
The only important thing is that it's a process in which a particle can decay into other particles versus a process in which the antiparticle can also decay into the corresponding antiparticles. Now these are, the rates for these things to happen are measurable, they are measured and they are different. They are different. One of these, I can't remember which one, is about two-thirds more important than the other. And so in this case it's a fairly gross violation of particle-antiparticle interchange. It's definitely a real effect. There are lots of indirect measurements; it's been known for a long time that particle-antiparticle symmetry is not a good symmetry, is not a symmetry. Now once this can happen, once this can happen, meaning to say that there are fundamental processes in nature buried deep inside Feynman diagrams somewhere that are imbalanced between particles and antiparticles, it's no longer the case that this decay and that decay have to have equal probability. There is a rule, there is a rule that the total decay rates, the total half-life of the proton and the antiproton have to be exactly the same. That's a, that's a theorem for relativistic field theory. But the way that these two can be different from each other is there's more than one way that a proton can decay. A proton can also decay, for example, into a mu plus and a photon, and an antiproton can decay into a mu minus and a photon. And what the theorem says is that if you calculate all of the possible ways that the proton can decay and you consider the half-life or the total rate, the total rate of decay of the proton, it must be exactly the same as the total rate of decay of the antiproton. But what it does not say is that the decay rate of the proton to electrons or to positrons and photons must be the same as, you know what I mean. It doesn't say that any particular decay has to have a symmetry. In particular, if at some fundamental level this charge conjugation symmetry is violated by something in the theory, then it will allow this to not equal, let's say this here, not equal to, that's a special symbol I just invented, not equal to, it allows that and not only allows it, basically insists that these two not be the same. Once that's true, it says there's a bias. Somewhere in the laws of physics there is a bias toward either protons or antiprotons or matter versus antimatter, some kind of bias. And that's something which is absolutely necessary to add to baryon number violation, to give it a directionality, to give it a push in one direction rather than the other. So Sakharov's second condition, incidentally, I think Sakharov's paper was a, I've forgotten when CP violation was first discovered, but Sakharov's paper was very quickly after that. When the CP violation, the particle-antiparticle symmetry, was discovered to not be correct, within a year or so, I don't remember exactly, Sakharov wrote down his basic conditions. So that's number two, C and CP or particle-antiparticle asymmetry in the laws of physics. This is expected in any theory that we know, any theory that we've studied for particle physics always has it, including just the standard model. In fact I said something wrong, the standard model does allow baryon violation, in fact it not only allows it, it insists on it at some level. So every theory that we know about, as a theoretical statement, insists that baryon number violation happens. Basically we know that CP is violated, and particle inversion, particle-antiparticle symmetry, is violated.
Now there's one remaining, yeah. So if the total decay rates are the same, then why don't we have equal numbers? Just the mu particle is heavier than the electron. So we could be in a situation where, we could be in a situation for example where it's hot enough, hot enough to, what I need to explain to you is first why this can happen rapidly, even though I said that the proton is very stable. It has to do with the environment. Let me come back to it, it's a good question. All right. How do we overcome the fact that the proton is so incredibly stable? Remember what I said, the proton lifetime is something like about 10 to the 33, 10 to the 34 years or longer. The universe has only been around for 10 to the 10th years. So on that scale the proton is very stable, so why don't we get to ignore this kind of thing? And the reason has to do with the fact that the universe in its very early stages was very, very hot. Because it was very hot, the protons or the quarks in particular, we can substitute quarks here, it doesn't matter, because it was very hot they were constantly engaged in very, very high energy collisions. The high energy collisions meant that the protons moving in this plasma, in this very hot gas, had lots of energy. How much energy? It depends on the temperature. But at some temperature you get to a high enough point where the average kinetic energy, the average extra excitation energy of the proton, is high enough so that even these heavy particles here do not suppress the proton decay or the baryon violation. In other words, to put it short, a proton at rest, forget at rest, a proton in an environment where it isn't constantly being knocked around and having an enormously large excess energy of some kind or another is very stable. But when it's heated up to a high temperature, a high enough temperature, this excess mass here is not an important factor and the proton decay would happen quickly. So if we go back to the very early universe when the temperature was very hot, then these kinds of processes can happen. Now this statement that the total decay rate is the same for protons and antiprotons is a statement about zero temperature. It's a statement about a proton at rest in an environment where it has no excess energy. When it's being kicked around and when it has extra energy, then that is not necessarily the case. In fact, it's generally not the case. In an environment which has some energy around, which is kicking the protons around, the decay rates don't have to be the same. So that is not a problem. The problem has to do with the CPT symmetry. I said that there was no particle inversion symmetry, but there is a symmetry, at least in all known quantum field theories, in all known quantum field theories, string theories, any theory we know how to write down which has relativity and quantum mechanics built into it. Charge conjugation times reflection times time reversal. Again, what that means is you exchange every particle for its antiparticle, you reflect in a mirror, and you run the film backward. That is a symmetry of all theories. So what does that mean? That means, among other things, that in thermal equilibrium, if the universe was just static, if it wasn't changing, but it was hot, in thermal equilibrium, in thermal equilibrium, forward time is the same as backward time. In thermal equilibrium, backward and forward in time are the same thing. There's no asymmetry of time reversal.
So in thermal equilibrium, where the universe is not changing, an imaginary universe which is just a pot, which is there, a hot pot, it would have time reversal symmetry. If it has time reversal symmetry and it has C times P times T, then it must have CP symmetry. CP just means particle changed to antiparticle, for our purposes. If particle goes to antiparticle and time goes to, I was going to say anti-time, and time goes to minus time, if that's a good symmetry, but the world has no bias toward one time or another, then you're stuck again. You're back to having a particle-antiparticle symmetry, and you cannot have a proton-antiproton excess in thermal equilibrium. That's a theorem. That's a theorem that's been known for a long time, that in thermal equilibrium, the thermal equilibrium will come to a configuration with equal numbers of protons and antiprotons. But the universe was not in thermal equilibrium at early times. It was expanding, and it was expanding fast at early times. It was expanding fast enough that it could not be considered in equilibrium. If it's not in equilibrium, that means that forward time and backward time are different. That's the reason. If the universe is expanding, forward time and backward time are not the same. So in a rapidly expanding phase of the universe, we don't have to worry about time symmetry. It is definitely not symmetric. If time symmetry is broken just by the expansion of the universe, then we're in business. Then we have enough asymmetry of all possible kinds: the baryon violation allows a change in the baryon number, the CP violation allows a directionality for it, and the out of equilibrium... Out of equilibrium. Can you define what that is, exactly? Yeah, it just means things are changing rapidly in a particular direction of time. It means the universe is expanding fast enough. That's all it means. So that running the thing backwards does not look like the original thing. In a world where the universe is running in one way, namely expanding, it is out of equilibrium. It has to be enough out of equilibrium. It's not enough for it to be very, very slowly expanding. It has to be expanding rapidly enough that these microscopic processes don't have time to adjust themselves to the equilibrium configuration. But whatever it means, it means that backward and forward times are different. A picture of the universe, a movie of the universe, allows you to tell which way is forward in time and which way is backward in time, just by the expansion and the fact that it's cooling. The fact that at early times it cooled, it cooled rather rapidly in the beginning. That's enough to completely ruin the time reversal symmetry, and if the time reversal symmetry is ruined, the CP symmetry is also going to be ruined. So those were the three conditions of Sakharov. All three are believed to be really satisfied in the real world. And it's also believed that they are enough, that if all three of these are true, there is really just no way, it would be a complete accident with no rationale to it, if there wasn't some excess created. The problem is that nobody knows enough about the physics of the early universe and the physics of very high energy collisions, the physics of very, very hot temperatures, the nature of the particles that are in here, the details of what drives the CP violation and so forth. Nobody knows enough about it to be able to make a calculation of what the imbalance is of protons to antiprotons. We simply don't know enough.
So what we know is all three ingredients that are both necessary and probably sufficient to explain an imbalance, those are there, but the ingredients needed to make a computation to show that this number is 10 to the minus eighth times Nq plus Nq bar, that's out of range. We don't know how to do that. That's the status of this particular problem. As I said, it's called baryogenesis and it will await a much more detailed theory of both early cosmology and particle physics at very high energy. Any questions about that? We are now finished with baryogenesis. But the fact that we know that we have a measurement of 10 to the minus eighth, I imagine that puts some constraints on what these theories could look like. It does. It does. It does. Another question, of course. It does, but they're hard to use. You're right, but to my knowledge nobody has used it in a really effective way to constrain things, just too many variables, too many things. You're not going to map out a neat-looking subspace. Yeah. You're going to look at a set of hundreds of parameters. We just don't know enough. Yeah. Now, the connection between that and the positron-electron numbers that you got previously, is that this asymmetry here which allows that to be connected? I don't understand how the matterness of the quarks and the matterness of the electrons are connected. Every time a proton disappears, a positron appears. That's an antimatter particle. The loss of baryon number must be made up for by an increase in the positron number. Anything that decreases the proton number will increase the positron number. As long as all of the processes that you're thinking about conserve electric charge, there's no alternative to the statement that if the baryon number shifts one way, the electron excess must shift the other way. Right. Yeah. I've heard claims that in the standard model, the CP violation that we see in the standard model is not big enough to explain the baryon excess. I have heard claims like that too. You don't believe them? No, no, I do believe them. I have no reason not to believe them. I just am a little bit skeptical that anybody really knows how to do the calculation. I thought that was one of the big reasons why we needed physics beyond the standard model. I'm a little bit skeptical. My involvement in this ended with stating these three conditions. Then thousands of people, not thousands, maybe tens of people, started to try to make computations based on standard models. Stephen Wolfram was one of them, and he invented Mathematica, I think, to do the calculations. Well, I looked at it and I said, look, there's no way that they're going to be able to do this because there are just too many unknowns about the early universe and so forth. It may be correct, and I don't know. I haven't followed this story for a long time. It may be that we know that CP violation in the standard model is too weak to drive it. Once you admit CP violation into the physics altogether, there's no reason to expect that at high energy it might not be tens and hundreds of times larger. Once you've opened the door to it so that it's no longer a principle, then you don't really have much control. If you had equal numbers of protons and antiprotons in the initial universe, all the electrons that we see today in our universe came from the antiprotons. Well, I'm not sure that's right. I'm not sure that's right. There could have been a population in the very beginning which was balanced between... Oh, I see what you're saying. Well, in that sense, yes. I'm not sure...
You could have done the whole analysis in the opposite way by thinking about the electrons instead of the protons. And... The lingo of baryogenesis is a sort of historical relic. It has to do with the fact that we don't know how many neutrinos are out there, so people focus on the baryons, but don't worry about it. Was the 10 to the minus 8, is that today's number and has it always been the number? Because I'm getting out of this. The 10 to the minus 8 is today's number for the ratio of baryon number, of protons, to photons. But has there been a period when there was a lot of annihilation of protons and antiprotons? There was a period when there was a lot of annihilation between protons and antiprotons. It basically removed almost all the protons and antiprotons. But that was the point at which the temperature fell low enough that the processes that would create protons and antiprotons shut off. See, at very high temperatures, there's very high energy photons. Two photons come together and they create one way or another a proton and an antiproton. Those photons have to have enough energy to create a pair of protons. That means they have to be at least two GeV worth of energy. As long as the temperature is high enough, these things are going on backward and forward in equilibrium. As long as there are enough high energy photons around to be able to cause the proton antiprotons to appear, and it goes the other way. But then when the temperature falls below a certain threshold, there simply aren't enough high energy photons around to create protons and antiprotons, but protons and antiprotons can create photons. Photons are massless. Protons and antiprotons have lots of energy. It can go this way, but it has a hard time going this way. So once the temperature falls below that threshold, the protons start annihilating each other, but there's not enough energy to... Of course, when a proton and an antiproton collide and make photons, those photons have a lot of energy. But then they just go out into the soup and their energy gets lowered by coming to equilibrium with the lower temperature background stuff. So as the temperature goes down, the number of available high energy photons decreases and you can't make the proton antiproton, but you can go the other way. So at that point, that's when the annihilation starts and eventually all the proton antiprotons get eaten up. When did this process stop? When there were no antiprotons. And that was when, possibly... Oh, that was pretty early. Yeah, that was quite early. That was before nucleosynthesis, temperatures in the GeV range. I'm not sure exactly when it stopped. Before the 300,000 years? Oh yeah, yeah, before the first three minutes for sure. Some seconds maybe, I'm not sure. We'll get up to the meeting. In an earlier lecture, we were counting up the energy and there were four different types. There was photons and there was matter and dark matter. The contribution from photons, if I remember correctly, was negligible. Today? Yes. Today. Right. So after all of this annihilation went on and this early matter was turned into photons, where did all these photons go if they're no longer around now? They're around. There's a microwave, a cosmic microwave background. There's 10 to the eighth of them for every proton. But how is it that it can account for such a negligible portion of the total energy in the universe? 
Because remember that radiation energy density decreases like one over the scale factor to the fourth power and matter energy density decreases like one over a cubed. Right. So if we need this very massive particle to make this work, does that mean the standard model is incomplete? Yes, it does. Yeah. Right. If you, your top two diagrams there where you have the unequal ones, they're very unequal and you start with the p's and p bars and so on. Say it again. Say it again. If it's very unequal and you started initially with p's and p bars, then the p bars would decay, because it's very unequal, into the e-minuses. Then you'd end up with the e-minus p equivalents. So you start with just all p's and p bars and no e's. Uh-huh. The p bars, those are in that second process and you would get e's equal to p's. Oh, I see what you mean. Yeah? Yeah. Yes. You try to make a theory that way. I think you'd quickly flounder on the fact that at some point the universe was very hot and at that point there were these loads and loads of p's and p bars. At least if you wanted to follow standard thinking about... What's that? It's very, very hard to explain why it would be that unequal. Particularly since CP violation is a very small thing. So it'd be very hard to understand why it would be grossly unequal. And keep in mind that these decay rates, yeah, are very small, but, okay, I want to move on. That's what's known about matter, antimatter imbalance in the universe. And as I said, I don't think it's going to be too soon that we know a lot more. All right. Now, the next thing we want to move on to is inflation. Inflationary universe. The inflationary universe idea was put forward to explain another one of these things which it's not entirely clear needs to be explained. Or wasn't entirely clear needed to be explained at the time. And the question was, why is the universe so terribly homogeneous and isotropic? This became a rather critical issue when the cosmic microwave background was discovered. And the cosmic microwave background quickly became a rather high precision thing. The high precision thing being that in short order the black body curve had been measured. The temperature of the cosmic microwave background had been measured carefully. It was known to be about 2.7 degrees with some precision; the error bars were fairly small. Today they're microscopic. Today the error bars just can't be seen, okay. So the temperature was very well defined, looked like a Planck distribution, like a black body distribution, almost exactly, or exactly as far as you could tell. But moreover, it was the same in every direction. So suddenly this rough idea that the universe is isotropic became a high precision idea. Now this took years. I'm condensing history into something smaller. But at least by today in any case, the idea that the universe is isotropic is a very, very high precision thing. So you can ask why is the universe so isotropic? Today it's isotropic. It must have been isotropic very early. No particular reason why an anisotropy, a lumpiness in the distribution, would decrease with time. In fact, quite the opposite. Lumpiness tends to increase with time because of gravity. Gravity tends to take lumps and magnify the size, the magnitude, of the lumps. 
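Before going on, here is the dilution law invoked at the top of this answer written out in symbols; this is a sketch in standard notation, with the scale factor a(t) the usual textbook convention rather than anything written out in the lecture:

\[ \rho_{\text{radiation}} \propto \frac{1}{a^4}, \qquad \rho_{\text{matter}} \propto \frac{1}{a^3}, \qquad \Longrightarrow \qquad \frac{\rho_{\text{radiation}}}{\rho_{\text{matter}}} \propto \frac{1}{a}. \]

Each photon's energy redshifts as the universe expands while the rest energy of matter does not, so even with roughly 10 to the 8 photons per proton, radiation ends up as a negligible part of today's energy budget, which is the point of the answer above.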
So, uh, it means the universe must have started very early being extremely homogeneous and isotropic. When I say very early, I mean at the time before there were galaxies, before there were, um, planets, at a time that the black body photons originated. In other words, at the decoupling time, the universe was extremely isotropic. Now, it was of course known that it couldn't be completely isotropic or homogeneous, let's say. If it was exactly homogeneous, it would stay homogeneous. Anything which is exactly uniform and allowed to evolve, uh, will stay uniform. And the universe is not uniform, it's full of galaxies and it's full of clusters of galaxies. It has a lumpiness that's there. The lumpiness that's there clearly was much smaller to begin with, smaller in magnitude, for the simple reason that lumpiness tends to increase with time. Let me just explain what I mean by that. If you start with a world which is completely uniform, of course it stays that way. But let's suppose there was a little bit of over-density here. A little bit of over-density, now that's, uh, that's a little re- a big region, a fairly big region, where the density is a little bigger than neighboring region. What happens? What happens in grav- in a gravitational theory is the opposite of what happens in other kinds of theories. In other kinds of theories, what tends to happen is this, if there's an over-density here, it will diffuse out and be eliminated. For example, you have an over-density of ink dropped into water. You have a spot of ink. What happens over time? It diffuses out and, uh, and, um, disappears and becomes homogeneous. In gravity, the opposite happens. And the argument is very simple. If you have an over-density here, because gravity is attractive, universally attractive, it will tend to attract the stuff around it, pull it in, decreasing the, um, the density outside and increasing it inside. In other words, it's kind of a runaway situation. It's a runaway situation where a little bit of inhomogeneity will tend to reinforce itself. So if you have over-density, under-density, over-density, under-density, and so forth, in some pattern, the tendency of gravity will be to suck stuff out of the under-dense regions and put it into the over-dense regions, and thereby magnifying the degree of inhomogeneity. So the fact that we see inhomogeneity today does not mean that the universe very early was as inhomogeneous as it is today. It must have been much less inhomogeneous just by running the argument backward. In fact, cosmologists rather early were able to estimate, by running the theory backward, they were able to estimate at the time of decoupling just how much inhomogeneity was there in order that the galaxies could nucleate out of that inhomogeneity. In other words, the picture is the universe was very homogeneous, but little bits of inhomogeneity, little bits of ripples, little bits of excess depletion, excess depletion of some kind, and those over-dense regions eventually collapsed by this mechanism, formed galaxies and clusters of galaxies, and the under-dense regions formed voids, and that's what we see today. So you can estimate, and it was estimated by people like Jim Peebles and other people, early cosmologists in the 60s, and I'm not sure exactly when, how much inhomogeneity was necessary at the time of decoupling. And the answer was, here's the way you quantify it. 
You look at the lumpiness and you characterize it by a delta rho, where delta rho is roughly the mean excess density in a lump relative to the background, and you divide it by the density itself. That's the fractional over-density in a typical over-dense region, compared with the density itself. That's the dimensionless measure of how much inhomogeneity there was, and it was pretty early recognized that this was a number that had to be about 10 to the minus 5, somewhere between 10 to the minus 4 and 10 to the minus 5. So here we were again with the same kind of situation, a question, why is the universe homogeneous? Well, does that really require an answer? It just started that way. But now there was a number. There was a number. In addition to just saying the universe was homogeneous, it is also clear that it wasn't exactly homogeneous. It wasn't homogeneous, with a specific numerical magnitude to it, and once you have a specific numerical magnitude, you want to know why that's the magnitude. Now, it's not that we learned why this is the magnitude. We did not. We do not really understand why this is the right number. But the existence of this number focused attention on the question, and the first question was, why is the universe so homogeneous? Later we'll worry about why it's inhomogeneous. The first-order fact was that it's very, very homogeneous. The explanation of that today, which seemed rather far-fetched when it was put forward by Alan Guth in 1980, was that the universe simply expanded by many orders of magnitude. And of course, when something expands, it stretches, and if you stretch out, if you have an inhomogeneous universe with lumpiness in it, and you stretch it out enough, you'll make it homogeneous at least on the scales that are relevant. So the idea was the universe inflated, which means expanded, and in particular exponentially expanded for some period of time, and stretched itself out so much that it flattened itself out, you know, it's like blowing, literally like blowing up a balloon. The balloon is a crinkled, rough-shaped thing when it's collapsed, you blow it up, and it stretches out and flattens out. That's an analogy which you shouldn't take too far. Yeah, no idea. It's not exactly, if we look, I look around this room, it doesn't look isotropic to me. I look out on an average over large distances. You know, it's like the surface of the Earth. It's full of hills and mountains and valleys and so forth. You've got to explain those. You explain them by geological phenomena and so forth. But on the whole, the Earth is very smooth. And the universe is the same. If it's homogeneous, or if it's homogeneous, I was going to say if it's homogeneous, it's going to be isotropic. No, that's not true. I'm getting at it, but wouldn't you expect certain directions to exhibit much greater masses? Why? Because you start out with a small amount of inhomogeneity, but it's a random process that you would expect that in some directions you would see a great deal more. No, it's not random. It's not random. It's not random. I see what you're getting at. You're saying if it were random, you might expect that somewhere out there there would be large over densities if it were random. But with some statistical probability, you might find a large lumpiness. It's not random. And we're going to talk about the pattern of it. Basically, it's a, you know, it looks like a, now squeeze this down to its size, 10 to the minus 5 relative to the background. On the whole, it's very smooth. 
But nevertheless, there are wiggles in it, and the size of the wiggles is very small. That's the implication, and they're uniformly distributed through space. They are homogeneously, isotropically distributed. There are no large, tremendously large lumps out there. They're all very small, small in magnitude, not small in size, but small in magnitude. And it's more or less like a very, very smooth earth with ripples on it. Again, there's an inverse relationship between early time and high temperature. So there was a particular time when inflation started and stopped, and that relates to particular temperatures. Is there any connection or is it just random? I mean, is the temperature causal of the effect? During the inflationary period, temperature was not an important aspect of things. During the inflationary period, temperature was not a terribly relevant factor. It was the exponential expansion which was the single most important thing. And I'm going to begin, let's see, where are we? We still have a little bit of time to talk about some preliminaries. It looks like next week will be our real week for inflation, and the equations. I'm going to take you through the basic equations of inflation, show you how it works in some detail. But I think what I'll do for the rest of the evening is talk about friction. What does friction have to do with anything? It has everything to do with everything, but tonight I'm just going to tell you about friction. The equations of friction. Write it down because I'm going to use it. And it's very simple. When I speak of friction, I'm speaking of viscosity. I'm thinking about something like a stone falling through a viscous fluid. Why am I doing that? I'm doing that because it will come up. Take a stone falling through water, or honey. And I'm just reminding you of the equations for it. It's falling due to a force, which is a gradient of potential energy. Let's call the height here, what shall I call it? I'm going to call it phi, which is a stupid name for the height. But it's not a stupid name for a field, and as you might expect, what we're going to be talking about is fields. But nevertheless, the analogy is phi is the height of the stone, and there's a force on the stone which is related to its potential energy. In this case, it could just be the gravitational potential energy, but let's just call it a potential energy V of phi. Let's assume the force is downward, which means V of phi increases in the upward direction, and what is the force? The force is the derivative of the potential energy, but minus it, minus the derivative of V with respect to, just a derivative, ordinary derivative, dV d phi. Let's write Newton's equations now. Newton's equations for the stone, for simplicity, let's just take the stone to have unit mass. What's the equation of motion? F equals ma, the force on the right-hand side is minus dV d phi, and that's equal to the mass, which I'm setting equal to 1, times the acceleration, which is phi double dot, the second time derivative of phi. Okay, this is F equals ma, and there's nothing special there. In a uniform force field, which would just mean dV d phi is just constant, the stone would fall with a uniform acceleration, and it would just very quickly pick up a lot of speed and continue to accelerate. Alright, that has ignored the viscosity of the fluid that it's moving in, which let's say is something like honey. Alright, now that means that there's another force on the right-hand side here. 
That other force is zero if the object is at rest. The viscosity exerts no force on an object at rest. It's moving through the fluid, which creates the force, and so the force on the right-hand side is going to depend on the velocity. The larger the velocity, the more the viscosity, and for simple fluids the viscosity is actually proportional to the velocity itself. So there's a force on the right-hand side which is proportional to the velocity. Which direction is it in? It's obviously opposing the velocity, so it's got a minus sign in front of it here, and there's some coefficient in front of it that's called the coefficient of... I forget what it's called. I don't know what it's called. The drag! Alright, what's the letter for it? Who? Anybody know? Who? B. B? Gamma. Whatever it is. The sign has to... can't be the same for both forces. Why not? You want one force to pull it down and then the other force to slow it down. The dV d phi pushes down, right? Alright. It's moving with a downward vertical velocity, right? Moving with a downward vertical velocity, that means phi dot is negative, right? Minus phi dot is positive. This force is up. As long as it's moving down. As long as it's moving down. Okay. What's that? The derivative of the potential is the negative of the force. So that's why... The dV d phi is down. It's negative. The dV d phi being down, it's negative. But this one is positive. It's positive because phi dot is negative. Alright? It's falling down. So, in the beginning, when the stone starts to fall, phi dot is zero. And basically it starts out accelerating exactly as it would without this. But very quickly, phi dot will increase and unless as you move down the force gets bigger and bigger rapidly, this is going to increase because phi dot is increasing until it balances the dV d phi. At that point, the force is canceled and that's called the terminal velocity. In particular, if the force is uniform, if the downward force, for example, like the force of gravity is uniform near the surface of the Earth, then this is just, let's just call it F: F minus phi dot gamma. That's the total force. And at some point it stops accelerating because these two balance each other. And of course that happens when phi dot reaches the terminal velocity, and the terminal velocity phi dot is equal... Now I am getting my signs confused. The force is negative, right? The force is negative because it's down, but the terminal velocity happens when F, when phi dot is F over gamma. Now my signs are confused. Do I have a problem here? Oh, no, it's okay. F is negative so the terminal velocity is negative. Terminal velocity is negative because it's falling. That's the basic theory of friction. And what does it do? It slows things down. In particular, if V of phi has a shallow slope, if V of phi, let's plot V of phi. Let's plot phi this way. Let's plot phi, plot it horizontally even though phi is the vertical direction. Let's plot it horizontally and let's suppose that V of phi is a rather shallow hill. If it's a rather shallow hill and the viscosity coefficient here is large, the stone falling along this potential energy here will simply take a long, long time to roll down the hill. That's the situation we will want to be in, and phi will not be a stone, it will not be the position of a stone, it will be the value of a field, but we will want to be in a situation where a field evolves very slowly because of a lot of friction. 
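To pin down the falling-stone equations just described in one place, here is a restatement in the conventional symbols (unit mass, drag coefficient gamma); this is only a sketch of what is being said above, not a transcription of the blackboard:

\[ \ddot{\varphi} \;=\; -\frac{dV}{d\varphi} \;-\; \gamma\,\dot{\varphi}. \]

When the force term is roughly constant, the acceleration dies away once the two terms on the right cancel, which defines the terminal velocity

\[ \dot{\varphi}_{\text{terminal}} \;=\; -\frac{1}{\gamma}\,\frac{dV}{d\varphi}. \]

For a shallow slope and a large gamma this terminal velocity is tiny, which is exactly the slow descent the lecture wants.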
And that will drive inflation and we will come to it, I think, well I don't know, should we go on a little bit? I'm a little afraid to overdo it tonight. Okay, let's go on for a little bit. Let's go on for a little bit. Good, let's go on for a little bit. Now what we're going to consider is classical field theory. The universe is filled with some field. This field is going to be a scalar field. Now, where does this come from? What scalar field? It was made up. It was simply made up in order to be able to explain the isotropy and the homogeneity of the universe. At the time this was created it was a long shot, a rather crazy idea, but it seems to be right. Okay, so here's the idea. The world contains, in addition to the electromagnetic field, the gravitational field and all the usual fields, one more field: it's a scalar field and it's called the inflaton, phi. Why? Because it has to do with inflation. Now, we're going to assume that phi is pretty much uniform in space. We could assume it's not uniform in space, but as the space expands it will tend to stretch out the variations in phi. So let's just assume, for simplicity, that phi is uniform in space and think about the energy stored in a field, in a scalar field of this type. Does everybody remember what the energy density of a scalar field is? It contains a kinetic term with time derivatives. It's the energy density. It contains, this is the energy density, let's just call it the energy of a scalar field. It contains one term which is phi dot squared. This is just the kinetic energy of the field, not the kinetic energy in the sense of movement in space, but in the sense of energy due to time dependence. In this case, the dot is a partial derivative with respect to time, right, d phi by dt, squared. Then, typically, there are also terms in the energy which have to do with gradients in space. Gradients in space also store energy. But since we've assumed that the field is homogeneous and is not varying in space, which we can justify but we can do that later, there are no gradients in space. And so this is the only term in the energy density that has derivatives in it. Then the other thing that can be there is a potential energy, a potential energy which is plus V of phi. It's just a thing that's made up, that different values of the field have different energy. It doesn't have to do with derivatives. Just in having a field of a given magnitude, there's an energy density associated with it. That's called the field potential energy. And it's not by accident that I'm giving these the same notation. Phi is a field. Up here, it was the coordinate of a stone, but it's not by accident. Okay. Now, let's follow the field and let's follow it in a box, in an expanding box. As always, the box that we focus on is expanding with the universe. And how big is it at any given time? How big is that box? That box is of size A cubed, the scale factor cubed. Let's say it's a unit box in coordinate space. The size of the box is A cubed. This is the energy density. This is the energy density. So to get the energy, I wrote energy here, but this is really energy per unit volume. If I want to write the energy of the field in the box, I have to multiply it by A cubed. But A is time dependent. It depends on time. So we have a time dependent energy expression, A cubed of t. Kinetic energy plus potential energy. 
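Written out, the energy bookkeeping just described looks like the following. This is a sketch in standard notation: the factor of one-half in the kinetic term is the usual convention, which the spoken version drops, and the spatial-gradient terms are omitted because the field is taken to be uniform:

\[ E(t) \;=\; a^3(t)\left[\tfrac{1}{2}\dot{\varphi}^2 + V(\varphi)\right], \]

with a(t) the scale factor of the comoving box and V(phi) the field's potential energy density.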
You can think of this, if you like, as formally, mathematically being the same as the energy of a particle, kinetic energy plus potential energy, except with a coefficient out here which depends on time, something you wouldn't ordinarily write down. You might have a funny situation where the mass of a particle might depend on time, and then this might depend on time. But ordinarily you wouldn't write that down. Nevertheless, that's what we have. We have an energy, kinetic plus potential. Let's assume on A, no, A is just a scale factor. If there was spatial dependence, we would have to add in here a term for spatial gradients. Let's assume for the moment that the field is uniform in space. If it's not, eventually it will be because the expansion will stretch it out and get rid of the gradients and stretch it out. So it's reasonable to assume that after a period of time, there are no gradients, no gradients of the field in space. Just on a conceptual level, we're thinking of associating this field value with a point in space. But if we assume that it's uniform, then all we need to do to find the total energy is to multiply by the volume of the box, which is time dependent. Is there a question? Is there any physical thing we can think of for the feed-off, in other words, that kinetic, as you say, is not moving. I'm not sure if that was the final question, but what is there any subtle way of thinking of that? Well, I'm so used to it that I think of it as sort of obvious. What's the field do is we try to get in stronger or weaker? We have to find out. We have to solve the equation of motion. That's like asking what's happened to this particle. Is it moving up or moving down? It will depend on the initial conditions. It will depend on how long we wait, and it will especially depend on the sign of the V d phi, namely the sign of the force. It's doing it everywhere simultaneously. Now we can back off that and study what happens when it's not, but this is the easy problem to study. Okay, now how do we find the equation of motion when we know the kinetic energy and we know the potential energy? There's various ways we could do it, but the most efficient way is through Lagrange's equations. Lagrangian, let's write the Lagrangian now, the Lagrangian is the difference of kinetic energy and potential energy. So you pick up your copy of the theoretical minimum and you look up Lagrangian's equations with this. Phi is like the coordinate, phi is like the coordinate, and here is the Lagrangian. The only new thing that wouldn't be there for an ordinary particle is this A of t. Alright, so let's work out Lagrangian's equations. The first term is d by dt, or I suppose we can make it, d by dt of partial of L with respect to phi dot. Everybody remember that? And what's on the right-hand side? Right. That's Lagrangian's equations. So partial of L with respect to phi dot, this is the equation that we get out of it, partial of L with respect to phi dot is just phi dot times A cubed of t. This is partial of L with respect to phi dot, but this isn't t yet. dL by d phi dot is phi dot times A cubed of t. Is that obvious? Good. Now we have to take its time derivative, d by dt. And that we have to set equal to A cubed times minus A cubed times dv d phi, or just ordinary derivative dv d phi. We could call dv d phi, we could call it the force, so minus dv d phi, we could call it the force, but notice that there's this extra explicit time-dependent thing there. This doesn't quite look like Newton's equations. 
There would be Newton's equations if A was constant. Okay? But A is not constant, so let's work it out and see what it is. First of all, d by dt, there are two terms. One comes from hitting phi dot, the other comes from hitting A cubed. So let's first do the first one. The first one is A cubed times phi double dot. That's d by dt of phi dot times A cubed. And what's the second term? That's phi dot times the time derivative of A cubed. Okay? The time derivative of A cubed is 3A squared times A dot. Do I have that right? Yeah. That's in the parentheses over here. And that's equal, let's just call it the A cubed times the force. The force on phi. Okay? It's very tempting, and I will do it, to divide out the A cubed, since it appears both here and here. Let's divide out the A cubed. But I better divide over here also, huh? Okay. So let's clean it up. It's phi double dot plus 3A dot over A. A squared divided by A cubed is 1 over A, so it's A dot over A, times phi dot is equal to F. What is A dot over A called in cosmology? The Hubble constant, which is not generally a constant, but it's H. That's of course... Everywhere is in space, but not necessarily in time. Maybe the same in time, but it may not be. So we'll come back to what it is in time, but keep in mind it could be time dependent. A double, sorry, well A double dot, phi double dot. Phi double dot from here, plus 3H times phi dot equals F. Let's write this one this way, plus... Equals F. This is exactly the same equation as the falling stone with a viscosity coefficient 3H, and a force which is just a gradient of V. So our model for the way this field evolves can be envisioned by just supposing that phi was the position of a particle on a hill where the height of the hill was V, or the gradient of the altitude of the hill was the force, dV d phi. A ball rolling down the hill except that there's a viscous drag force proportional to the Hubble expansion rate. So the Hubble expansion rate behaves as a kind of friction. That is why we went through this exercise over here. If the Hubble friction is strong, if H is big, and the force here and if the hill is reasonably flat, not terribly flat but somewhat flat, then this could be like a ball moving through motor oil in Wisconsin on a cold day. It just left to its own devices without the friction term, it might roll down the hill in a few seconds. With a large friction, it could take years to roll down the hill depending on the magnitude of the friction. So this is called the cosmic friction term in the equation of motion of a scalar field. And it has the effect, it has the simple effect of slowing down the evolution of the system and keeping the ball from rolling down the hill. The next time we're going to use this to study the cosmology of a universe which contains a field like this. In other words, we're going to look at the FrW, sorry, the Friedman equation with an energy density which is given by this. So we're going to study how the universe expands and evolves under the influence of an energy density which is slowly, slowly, slowly rolling down the hill here. That's the phenomenon of inflation, that the way the universe responds to this very small, slowly moving field. Okay, next time.
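Collecting the steps of the derivation above into one place, as a sketch in standard notation (again with the conventional one-half in the kinetic term):

\[ L = a^3(t)\left[\tfrac{1}{2}\dot{\varphi}^2 - V(\varphi)\right], \qquad \frac{d}{dt}\!\left(a^3\dot{\varphi}\right) = -\,a^3\frac{dV}{d\varphi}, \]

\[ \Longrightarrow\qquad \ddot{\varphi} + 3H\dot{\varphi} = -\frac{dV}{d\varphi}, \qquad H \equiv \frac{\dot{a}}{a}. \]

The 3H phi-dot term is the cosmic friction discussed above: formally identical to the viscous drag on the falling stone, with the Hubble rate playing the role of the drag coefficient.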
(March 4, 2013) Leonard Susskind examines one of the fundamental questions in cosmology: why are there more protons than anti-protons in the universe today? The answer lies in theory of baryogenesis in the very early universe.
10.5446/15056 (DOI)
The goal tonight is not to tell you why the Higgs boson is the best thing since flush toilets. You've read all of that. Anybody who's here has probably read all of the hype, the excited, reckless superlatives that have hit the newspapers and so forth. The Higgs boson is going to explain the origin of the universe and this and that. I'm not going to do that. First of all, before I say it, the excitement and the enthusiasm is justified. It's not that it's not justified. It is justified. The history is fantastic. It's an unbelievable event and so forth. But that's not what my thing is. It's not what I do well. What I do well is explain how things work. My goal tonight is going to be to show you as far as I can in one hour, which is tough, which is hard and it may not work, but as well as I can, to explain to you the nuts and bolts of what Higgs physics is about. One of my closest friends, incidentally, is named François Englert. François Englert would be appalled if he knew that I was standing here talking about the Higgs effect, since François Englert was the discoverer of it. So from time to time, they may call it the Brout-Englert-Higgs effect, but I also may slip and slide and just call it the Higgs effect because I tend to be like everybody else. All right, so how it works. First of all, there's a lot of moving parts, a lot of pieces that I would have to explain to you first to really do it right. And I'm going to try to explain those pieces in little modules, shall we say. It's a highly quantum mechanical effect. It cannot really be understood without quantum mechanics. And so I would begin with a course in quantum mechanics. And let's say the course in quantum mechanics consists of just one thing. Things are quantized. Quantized means that they come in discrete integer quantities. The most famous example of this is angular momentum. Angular momentum, the rotational momentum of an object, is quantized, which means it comes in discrete steps, and the discrete steps are one Planck unit, one unit of Planck's constant. We'll take that as all we really need to know, for the most part, about quantum mechanics tonight. The next concept, which is an easier one, at least I think it's an easier one, it's a classical concept, is the idea of a field. A field is just a condition in space. It could be the electric field, it could be the magnetic field, it could be a gravitational field. Whatever you think of as being present in space, which characterizes the behavior of space at that instant in space and time. So space can be filled with fields. Now ordinarily, you imagine in empty space, empty space is the thing we call a vacuum, from a quantum mechanics point of view, the vacuum is just a state of lowest energy. Nothing there, no energy other than what quantum mechanics requires there to be there. And so you ordinarily think that the fields that could inhabit space are zero in empty space, the electric field, the magnetic field, and so forth, but there's no important requirement of physics that says that that should be the case. Just imagine a world filled with electric field. How could that electric field get there? Well it might be there because there are capacitors placed infinitely far away. So far away we can't tell they're there. There we would have a world in which there was an electric field. It would just be there. Empty space would have an electric field. Now when you think about fields, we're beginning a little bit too early. 
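For concreteness, the one quantum-mechanical fact being leaned on here can be written in standard notation (a sketch, not the lecturer's blackboard):

\[ L_z = n\hbar, \qquad n = \ldots, -2, -1, 0, 1, 2, \ldots \]

That is, any component of angular momentum comes only in whole-number multiples of the reduced Planck constant; half-integer values also occur for spin, but as stated above they are set aside for tonight.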
So what I was saying to the audience, two things. The first was that quantum mechanics we're going to summarize by one simple statement: that things are quantized in quantum mechanics. Quantized means they come in discrete bits. The most important example is angular momentum, not necessarily the most important example overall, but the most important example for me tonight will be angular momentum. And angular momentum has to do with rotating objects and so forth. Angular momentum in quantum mechanics, unlike classical mechanics, comes in discrete units. The unit is Planck's constant. You can't have a tenth of a unit of angular momentum. You can only have angular momentum 0, 1, 2, 3, minus 1, minus 2, minus 3. You can also have half integers, but we're not going to worry about that tonight. But no, you can't have angular momentum pi, only discrete integers. That's the first fact that I want you to remember. The other fact from quantum mechanics that we'll have to remember also is the uncertainty principle, but we'll come to it. Now the other thing we spoke about was fields. Fields are things that can fill space: electric field, magnetic field, gravitational field, other kinds of fields that exist in physics. They are functions of space. They can vary from place to place. And they affect, for example, the way things move. An example would be an electric field affecting the way a charged particle moves. Now the other thing I said was you can imagine a world in which, for all practical purposes, empty space is filled with a field. An example would be if I went out to Alpha Centauri at that end and, excuse me, I don't know what's out there. Yeah, whatever's out there. And place some capacitor plates. Big capacitor plate out there, big one out there, make an electric field in between. They're so far apart that we can't see them, so we would say that the world is a world that exists with an electric field. And we would say that charged particles move in peculiar ways, but that was just a fact of nature. Generally speaking, fields cost energy. The space without an electric field has zero energy. With an electric field, it has energy. And if we were to plot the energy of a field, a typical field (it could be electric, it could be magnetic, it could be something else), generally we imagine the field energy as a function of the field: horizontally, imagine the value of a field, vertically its energy, zero field right here. And we imagine exciting the field, causing it to vibrate. Causing it to vibrate by giving it a push in some region of space, causing the field nearby to vibrate. Those vibrations are quanta of the field. They are particles. The quanta of vibration of the field are particles. Now you might have a situation where there is more than one field relevant. Let's call it phi and phi prime or whatever you want to call it, doesn't matter. Then instead of plotting the field as one dimensional, we might plot it as two dimensional; now this is not space, this is the value of some collection of fields. And then the energy would depend on both fields. Here's an example, an energy function which looks like that, which simply says that no matter how you displace the field, it costs energy. Now imagine that this upside down paraboloid here or whatever it is was nice and symmetric, nice and rotationally symmetric in the field space, exactly like the top of my hat here, which as you can see is symmetric. And so the field as a, oh yes, where is my little desk, that's it, right. 
The field as a function of position would most likely, if the energy is as low as possible, just sit at the bottom of the potential energy of the field. Just to lower the energy as much as possible. So we might think of the field at every point in space, a little ball which can be made to oscillate back and forth and do things and those are just oscillations of the behavior of physics in a local region of space. As I said they often correspond to quantum particles, those oscillations, but for the moment they're just oscillations. Now one of the things we could do if we had a field whose values were like the position in the hat here would be to start it out displaced from the origin, let's say up to here, and then start it moving in a circle. Just in the same way you could take this ball and if the hat was really nice and smooth and symmetric give it a push and it would go around in a circle. That circular motion of the field is very, very similar in a way to angular momentum. It's not angular momentum in space, but it's a kind of angular momentum that exists in the field space. That angular momentum like all angular momentum are quantized, come in integer multiples of Planck's constant. What do they correspond to? They correspond to something else that is also quantized in nature, the value for example of electric charge. So in modern physics the way one thinks about the electric charge in a region is that in some region of space, a particle, a charged particle, a charged particle is viewed as an excitation of the field in which the field is made to spin around in the internal space of the field. Not in real space, but in the internal space of the field. That's one way, in fact it's the main way that we think about charge as a kind of rotation in an internal space. Okay, now what I want you to do is imagine taking the hat and turning it over. Imagine that the potential energy was not, turn it over this way, excuse me, this way is the way that the potential energy is minimum at the crown of the hat, but if the potential energy really looked like that so that it was maximum at the top of the hat, then the top of the hat would not be a position of equilibrium, it would be a position of unstable equilibrium, it would look like this. Turning over the hat, the crown of the hat now, this is the way the hat, the real hat, looks like this. And it doesn't, okay, let's just, let's make the, how's that? It looks like a hat, yeah. What kind of hat does it look like to you? Looks like a sombrero, right? Looks like a Mexican hat. This is just called this kind of potential energy function, a Mexican hat, believe it or not. It's called a Mexican hat. It turns back up, the top is unstable, if a ball was put at the top, it would roll down and where would it go, it would go to the brim of the hat. If for some reason the potential energy of a field was like this, then the state of lowest energy would not be at zero field, it would be out here. Now that's kind of interesting. It would be a vacuum, a world, which had a field, just like having an electric field except it's not an electric field, in which the value of the field at every point in space was not zero. You might notice it. How would you notice it? Well, you might notice it because it might affect other things and indeed it does affect other things, as we will see. But there's now something interesting you can do that you couldn't do here. 
Over here if you wanted to set this thing into rotation, you would have to displace the field a little bit because it doesn't mean anything to rotate right at the center. If you wanted to set up a rotation, you'd displace the field and then give it a flick. So making a charged particle costs some energy. Here you can imagine setting this thing into rotation with just a little flick that costs no energy. It costs no energy because you don't have to ride up the side of the hat. In other words, you could have a motion in which that's not a... You got it. You got it. You understand. You got to have a motion in which that field slowly wound around the top of the potential. In fact, it could do it everywhere simultaneously, not in real space, but in this field space. That would correspond again to a charge if rotation in this internal space corresponds to some kind of charge. But now the whole world, if the whole field was moving like that, would have a little bit of charge in it, a charge density, charge filling space, and essentially no cost of energy. That phenomenon is called a condensate. It's called spontaneous symmetry breaking, but it's also called a condensate, a condensate in space of charge. Now, you might say, okay, look, I want to find the lowest energy that the vacuum can have, that empty space can have. My best bet is to make the field not move with time. Just like a ball at the bottom of the sombrero hat here, there's also kinetic energy of motion, causing the field to move around in a circle like that would cost some energy. So you would say the true lowest energy state of the world should be with a field either here or here or here. It could be anywhere along the rim of the hat, but it should be standing still, right? The problem with no angular momentum or no charge, empty space should not have charge. The problem with that is the uncertainty principle. Let me remind you what the uncertainty principle says. It says that if you have a object and you're interested in its position x in ordinary space now and its momentum p velocity, if you like, the uncertainty principle says that the uncertainty in its position times the uncertainty in the momentum is greater than or equal to what? Planck's constant. You can't have something both standing still and having zero momentum. If it's stand, sorry, you can't have something standing still, namely no momentum, and also localized at a point. Delta p times delta x is greater than h bar. Same thing here. If you know where the field is on this Mexican hat, if you know with great precision, then it follows from the uncertainty principle that it must have a very large uncertainty in how fast it's moving around here. Ah, that's interesting now. That would say that you can't have empty space with no charge in it. You can't have empty space with no charge in it because if you lay the field down at this point, you know where it is on the rim of the hat, and if you know where it is, there's a necessary uncertainty in the charge, the charge being like the angular momentum. All right? So where are we then? If this were the case for electric charge, for ordinary electric charge, we would say that the vacuum empty space not only is filled with charge in a certain sense, but a totally uncertain amount of charge. Totally uncertain, and this is a quantum effect, a totally uncertain amount of charge. 
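A common way to write the Mexican hat energy function described above, together with the uncertainty trade-off being invoked, is the following. This is a sketch: the quartic form and the symbols lambda, v, theta are standard textbook choices, not notation taken from the lecture:

\[ V(\phi) = \lambda\left(|\phi|^2 - v^2\right)^2, \qquad \phi = \rho\, e^{i\theta}, \]

so the minimum lies on the ring of field values with magnitude v (the brim of the hat), and moving the phase theta around that ring costs no potential energy. Schematically, pinning down theta makes the conjugate charge Q, the angular momentum in field space, correspondingly uncertain, in the same spirit as the position-momentum relation quoted above, something like Delta theta times Delta Q of order hbar.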
There would be equal probability, let's take a little volume of space, there would be equal probability that the charge was zero, or that the charge was one, or minus one, or two, or minus two, three, minus three. Now this is truly odd. This is not something you should try to visualize because you can't visualize an uncertain amount of charge, but nevertheless, that is what a region of space would look like. If you measure its charge, it could be anything from minus infinity to plus infinity. Okay, now I want you to imagine that you have an extra charged particle, an extra charged particle, and you throw it in. You don't know initially what the charge is, but what does that do? It displaces the charge by one unit. Let's suppose it was a positive charge. You've displaced the charge by one unit, and so if it was zero to begin with, it's now one. If it was one to begin with, it's now two. If it was two to begin with, it's three. If it was minus one, it's zero, one, minus one, minus two, and so forth. But that's exactly the same as what we started with. We started with something which had an uncertain amount of charge, equally likely for any value of charge, and what did we end up with after we threw the charge in? Exactly the same thing. What if we plucked the charge out of this thing? Same thing. So a condensate is a funny configuration of space where with respect to whatever kind of charge we're talking about, it's so uncertain that you wouldn't even realize it if you put an extra one in or pulled one out. Now the real world is not like that with respect to electric charge. We know if we have a charge in space. So it's not like that with respect to electric charge. However, there are materials that behave like this, superconductors. Superconductors are exactly like this. So it's not unheard of. It's not a totally new thing to have a condensate of charge where in a region the charge is completely uncertain. Okay. That was module number one, if you like. It's or what is sometimes called the spontaneous breaking of symmetry. Module number two, the standard model. Now we come to particle physics, and I'll give you a short course in particle physics. First of all, particles have mass, and the mass can be anywhere from zero... We're talking about small particles now. We're not talking about railroad engines or stars. We're talking about small particles. We'll call them elementary particles. But there's also a maximum mass they can have. If they were bigger than that, they would form a black hole. If they were more massive than that. If a point particle was more massive than something, it would form a black hole. And it would be something different. So up to some maximum, and that maximum is called the Planck mass. It is not a very large mass. It's neither a very large mass nor a very small mass. It happens to be about 100,000th of a gram, a small dust moat. But that is the heaviest of a chart that an elementary particle can be without turning into a black hole. And if you ask now, where on this chart, from 0, this is called M Planck, up to the maximum, where are the ordinary particles, the electrons, the photons, the quarks? They are way, way down here. The largest mass of a known elementary particle is about 10 to the minus 17 of the Planck mass. Why are the particles so light? Well, one answer is, in order to detect massive particles, you have to have a lot of energy. In order to have a lot of energy, you need a big accelerator. We've only made accelerators up to some size. 
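As a quick numerical check on the mass scale just described, here is the standard definition of the Planck mass; the formula and the top-quark value are outside facts brought in for illustration, not numbers given in the lecture:

\[ M_{\text{Planck}} = \sqrt{\frac{\hbar c}{G}} \approx 2.2\times 10^{-5}\ \text{g} \approx 1.2\times 10^{19}\ \text{GeV}/c^2, \]

which is indeed roughly a hundred-thousandth of a gram. The heaviest known elementary particle, the top quark at about 173 GeV/c^2, then sits near 1.4 times 10 to the minus 17 of the Planck mass, matching the "10 to the minus 17" quoted above.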
And so for all we know, the rest of this is filled with particles. And that's probably true. That's probably true. But what is special about these particles? Well, first of all, let me name them. And then I'll tell you what's special about them that makes them clump up at zero mass. Let's name them the particles of the standard model. They come in two varieties. It is not important that you know the difference. Well, I'll give you a rough idea of what the difference is. They come in two varieties called fermions and bosons. The fermions are all the particles that make up matter in the usual sense. The electron, which I'll just call E. Well, the neutrino goes along with the electron. That's a new electron. The neutrino. Quarks. There's a variety of different quarks. Incidentally, there are several different kinds of electrons. We call them electron, muon, tower. It doesn't matter. But they're very electron-like and several kinds of neutrinos. The electrons have the electric charge. The neutrinos don't. And then there are quarks, a variety of different kinds of quarks, up quarks, down quarks, this kind of quark, that kind of quark. And those quarks, several different kinds of quarks. You know what the role of them are. They make up the proton. And that's about it for fermions. For bosons, on the other hand, there's, first of all, the photon gamma, gamma for a gamma ray. Photon, there's an object called the gluon, G. It's very much like a photon. It's very much like a photon. But it doesn't have anything to do with atoms. It has to do with nuclei and protons and neutrons. It plays the same role in holding the nucleus, or better yet, the proton together as the photon plays in creating electrical fields inside an atom. So there's the gluon. And then there are two others called W bosons and Z bosons. For the most part, we won't be interested in any of them, except the photon here and there. But mostly, we'll be interested in the Z boson. That's it. That's the standard model. That's all there is to it, with one exception. I've left something out. It's the thing you came to find out about tonight. So we'll come to it. If there was no X boson, then this would be it. Now, what is special about this set of particles? What's special about them is for reasons that I'm going to come to, reasons that I will come to, all of these particles in the standard model, as I've laid it out here, with nothing else in it, would all have mass equal to 0. They would be massless. And I'll explain why that is in a little while. We often hear that it's the role of the Higgs boson to create mass for particles or to give the particles their mass. That's the expression that I've heard over and over. The Higgs gives particles. Why do the particles have to be given mass? Why can't they have mass of their own? Why do they have to be given mass? Well, as it turns out, for reasons we'll explain, this set of particles is exactly the set of particles which would have no mass if this was all there was. Now, in part, that explains, in part, it explains why the particles, why these particles are so very light. It's because they're massless. They have no mass. Well, not quite. We can't live with that because we know that particles really do have mass. Next question. I'm going to draw some figures over here. What do these particles do? What kind of processes are they involved in? The basic process of the standard model, this is an oversimplification. But it's qualitatively right, is that the fermions, there's a fermion moving along. 
And I will describe a fermion by a solid line. Solid because it's what makes up stuff. Solid line, that's moving from one point in spacetime to another point in spacetime. What the standard model does is it causes the emission of bosons. An electron moving along can emit a photon. Electron moving along can emit a photon. And that's connected with the electric charge. Any electrically charged particle can emit a photon. A photon. That's the first thing that the standard model does. Now, this, of course, is just quantum electrodynamics. It does not have to be the electron. It could be any electrically charged particle. Next, the quark. Let's see, do we have room here? Yeah, we'll just do it. The quark. Quark, let's just call it Q. The quark can emit a gluon. Precisely the same pattern. The quark emits a gluon. Now, the quark can also emit a photon, if it happens to be electrically charged, and quarks are electrically charged. But electrons cannot emit gluons. Gluons are the things that bind quarks together to hold them together into protons and neutrons. And then there's one more important process for me tonight. There are two more processes, but I'll just write down one here. And it involves either an electron or, incidentally, a neutrino. A neutrino cannot emit a photon; it has no electric charge. It cannot emit a gluon; it's not a quark. But both electrons and neutrinos and quarks, for that matter, can emit the Z boson. Where's the Z boson? Here's the Z boson right here. And when they do so, the Z boson being electrically neutral, the electric charge of whatever's here doesn't change. So this is another process that the standard model describes. Now, first of all, why are the bosons massless? Well, the photon is massless. We know that. It travels with the speed of light. Now, could we make a theory in which the photon had some mass? Yes, we could. But the more important thing is that we can make a theory in which the photon doesn't have a mass. Why? Because the photon doesn't have a mass. Using the same kind of theory, the Z boson would not have a mass and the gluon would not have a mass. Everything would be massless. These would be the processes that could happen. These would be the particles. They would all be massless. OK, now, how do fields, how do fields give particles mass? Or better yet, more simply, a simple example. I'm going to show you a simple example now. The simple example is how a field can affect the mass of a particle. We'll come back in a moment to how it can give mass to something which didn't have mass. But let's take a more modest question. How may fields affect the mass? Or better yet, how might they make different masses for different particles? So I'm going to show you an example. This example is a little bit contrived, but it's a real example. A water molecule. Water molecules have the basic property that they're little dumbbells. They have a plus end and a minus end. Electrically charged plus end and minus end. They're actually not; they're more like Y's, a Y with three ends. But we can think of them as having a plus end, dumbbells, and a minus end. Now, the mass of a water molecule, water molecules have mass, the mass of a water molecule doesn't depend on its orientation. If we turned it over and made a water molecule with its minus end here and the plus end here, it would have exactly the same mass. Why? It's the symmetry of space. Space is the same in every direction. 
And so by symmetry, we would say that the water molecule standing up straight has exactly the same mass as the water molecule standing on its head. Let's not worry for tonight about whether it's lying on its side. Quantum mechanics tells us we don't have to worry about anything but standing up straight and lying on its head. All right, so that's true about water molecules. Their mass is the same if they're standing up straight. And think of water molecules now as particles. Think of them just as particles. We don't know what they are. They're just little elementary particles. We can't see them. And so we have two kinds of particles, the upstanding and the standing on its head particle with exactly the same mass. Now, I thought I had a purple, no purple. I told them to put purple. We'll have to use orange. Where? We over here? Underneath the board. Underneath the board. Me under here? Oh, good. All right. I have my color coding and my notes here. And if I blow it, I'll be terrible. OK, so it looks brown to me. It is brown. OK, let's create a region in which there's an electric field. We're going to make a field. It could be between two capacitor plates. The capacitor plates could be far apart. It doesn't matter. But let's put them in the capacitor plates here and here. And inside that region, let's create an electric field. The electric field, in this case, pointing up. That means it pushes plus charges up and minus charges down if I have my signs right. And let's take one of these water molecules and insert it in here. Once I insert the water molecule in here, the energy of the upstanding water molecule and the upside down water molecule are different. Which one has less energy? The one with the plus up has less energy, and the one turned over has larger energy. The water molecule itself is electrically neutral. It has no electric charge. But it's a little dipole. It has a pair of charges. And which one has more energy depends on the sign of the electric field. OK, so there we are. We have two water molecules, two types of water molecules, two different particles. We've given them different names. We could call it water and scotch. And water molecule has one energy. The scotch molecule has another energy. And there they are. Well, by E equals MC squared, this also tells us that the two molecules have different mass. Now in practice, this would be a tiny different mass between them. But they would have different mass. So the same effect of this field, which exerts itself on charged particles, does something to neutral water molecules. Incidentally, notice that it doesn't exert any net force on the water molecule. The water molecule moves smoothly through it with no force, no net force acting on it. But there is a difference in the two configurations of the water molecule. And so it's as if we had particles of two different mass. So this is just an example of how a field creates mass. In this case, it increases one mass and decreases the other mass. Incidentally, if you read some of the literature and they'll tell you about how the Higgs field gives a mass, I've read any number of places that it's something like space being filled with molasses. It is not like space being filled with molasses. The vacuum is not sticky. And one of the things that molasses would do, well, the idea is that massive particles move slower than massless particles. So the idea is that molasses slows them down. But fields don't slow particles down. 
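The water-molecule example can be made quantitative with the standard dipole formula; this is a sketch, with p and E standing for the molecule's dipole moment and the applied field, symbols not used explicitly above:

\[ U = -\,\vec{p}\cdot\vec{E}, \]

so the plus-end-up orientation has energy minus pE and the flipped one plus pE, a splitting of 2pE, and by E equals mc squared the two configurations differ in mass by Delta m = 2pE/c^2, a real but, for any laboratory field, absurdly tiny difference, exactly as stated above. Note that nothing here pushes the molecule sideways; the field only shifts the energies of the two orientations.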
If you give the particle a push in this direction, it will just continue to move because there's no net force on it. It will just slide right through this thing, frictionlessly, no impedance, no friction, no molasses. The other analogy I once heard is that it was like trying to push a snow plow through heavy snow in the Arctic. It's got nothing to do with it, whatever. That's a lazy way to explain it. And it's a wrong way to explain it. OK, so there we are. But now let's think of this in a slightly different way. The electric field in here can also be pictured in terms of photons. A field is another way of talking about a collection, a condensate of photons, an electric field. We can replace the electric field by a condensate, the same kind of condensate, the same kind of condensate of photons. Let's draw photons by just a little squiggly lines. Fill this up with photons. How does it know which way the electric field is pointing? Well, photons have a polarization. They could be up or they could be down. So just imagine this thing being filled with photons, but not filled in the usual way, but filled in a condensate. What does a condensate mean? A condensate means that if I pull one out, it doesn't make any difference. If I put an extra one in, it doesn't make any difference. That's the meaning of the condensate. So it's an indefinite number of photons. That's what a field is, indefinite. And if you pull one out, nothing happens. And now let's reintroduce the water molecule. Let's just draw the water molecule moving through here. Now I'm going to make the water molecule. I've already blown my color coding. Here's a water molecule moving through here. And what is it going to do? It has charged particles inside it. The charged particles can emit and absorb photons. They emit and absorb photons. We've made the photons green now. So it emits photons. But when it emits a photon, putting an extra photon in doesn't matter. And so we usually draw that by just putting a cross at the end. A cross simply means that throwing an extra photon in doesn't affect anything. Photon is emitted and just is absorbed. Or it just disappears into the condensate. As this object, the dumbbell, moves through the electric field, it's constantly emitting and absorbing these photons, which get lost in the condensate. That is another way of talking about how the field affects the particle. And depending on whether the photons are polarized up or down, this effect of constantly being absorbing and emitting photons will have the effect of shifting the energy of the two configurations of the dumbbell. That's simply an example of how a field can affect the mass of a particle and how it can be thought of in terms of particles and condensates. That's what I want you to keep in mind, that picture. OK, now let's come to elementary particles, not dumbbells, not molecules. First question, is there any reason why a particle or an object just can't have a mass? Does it need an excuse to have a mass? Does it need anything called the Higgs phenomenon to have a mass? Well, there are lots of things in nature that have mass and have nothing whatever to do with the Higgs phenomenon. Let me give you an example. Imagine you had a box. And let's make that box out of extremely light stuff, the lightest stuff you can think of. But it's a box with good reflecting walls and fill it with lots of high energy radiation, bouncing off the walls, but never getting out. It's made out of massless stuff. The photons are massless. They have no mass. 
The box we're imagining is made out of stuff which is exceedingly light. Doesn't have much mass. But there's plenty of energy in there. Lots and lots of energy. Well, E equals mc squared. And so this box will behave exactly as if it had a mass. We didn't need anything to give mass, just energy. That's all it took. Are there any particles which are like this, which get mass having nothing to do with Higgs or anything else? Yes. The proton. The proton is a particle which is made out of quarks. Quarks, three quarks, and a bunch of gluons. G's, a bunch of gluons, a large number of gluons. Quarks and gluons in the standard model are massless. Does that mean that the proton would be massless if the quarks and gluons were massless? Not at all. If the quarks and gluons were massless, the effect on the proton would be about a 1% or even less change in its mass. Not much at all. Where does its mass come from? It comes from a kinetic energy of these massless particles rattling around in a box. A box being created by the proton. So mass doesn't have to come from black holes or another example. Black holes have mass. It doesn't come from the Higgs phenomenon. It doesn't have anything to do with Higgs. So what is it about the models of the particles of the standard model which require us to introduce a new ingredient? So I'm going to concentrate on the electron. Let's concentrate on the electron. We don't need all of this. What I need to tell you about is the Dirac theory of electrons. But really, we don't have to know very much about the Dirac theory. All we have to know is that electrons have spin. Furthermore, if an electron was moving very fast down the axis here, let's say with close to the speed of light, we really accelerate that electron, then there's two possibilities. The spin of the electron can be right-handed like that. Think of my thumb as the direction of motion of the electron. It can be going that way, like my right hand, or it can be going that way, like my left hand. Ooh, I didn't realize I could do that. Now, two kinds of electrons. Right-handed and left-handed. Now, do right-handed electrons always stay right-handed? Can they flip and become left-handed? Can the right-handed become a left-handed? The left-handed become a right-handed? Yeah, that's exactly what the Dirac theory says. But if it was moving with the speed of light, it couldn't. Why not? Because if a thing is moving with a speed of light, time is infinitely slowed down, and nothing can happen to the object. It just moves along, but nothing can happen internally to the object. So if its mass was zero, it couldn't flip. But in the Dirac theory, this flipping back and forth between, I tend to do it this way, but that's not right. This way, this way, this way, this way, that is intimately associated with the mass of a particle. And in fact, the mass of a Dirac particle is simply proportional to the rate at which it flips from left to right. That's the Dirac theory in a nutshell. Mass is the rate for the electron to flip back and forth from left to right. Of course, the faster it's going, the slower it will flip, but that's all right. You take that into account. So mass is left to right to left to right. And we could draw the motion of an electron in the following way. Here's the electron moving down in the axis. At first, it's right-handed, so it's going this way. And then it's left-handed. It's going this way. And then it's right-handed. Can you tell the difference? Maybe not, but that's OK. 
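In equations, the statement that mass is the left-right flip rate is just the standard Dirac mass term (textbook notation, not something written on the board here):

\[
\mathcal{L}_{\mathrm{mass}} = -m\left(\bar\psi_L\psi_R + \bar\psi_R\psi_L\right),
\]

which is precisely the operator that turns a left-handed electron into a right-handed one and back; the coefficient $m$ sets the flip rate in the zigzag picture being drawn.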
And in between, it jumps from one to the other. The probability or the rate at which it jumps is a measure of the mass of the electron. So it jumps back and forth and back and forth. Now I'm going to ask you to believe something really crazy. Remember the Z boson? Where's the Z boson? The Z boson was emitted. It could be emitted from electrons. It could be emitted from neutrinos. But let's concentrate on electrons. It is not the same as the photon. And the thing which emits it is not the same as the electric charge. It is another kind of charge, a completely separate kind of charge. It's like charge, but it emits Z bosons. We need a name for it. We don't have a name for it. Well, we do have a name for it. It's a very awkward name. It's called the weak hypercharge. I don't like that. Because it's the thing which emits the Z bosons, I call it zilch. Zilch. Zilch is like electric charge, but it's not electric charge. When a particle which has zilch accelerates, it emits a Z boson. It may also emit a photon if it also happens to have electric charge. Now electrons, both right-handed and left-handed, have the same electric charge. But left-handed and right-handed electrons do not have the same zilch. In the standard model, and this is part of the mathematics of the standard model, the left-handed and the right-handed electrons have different zilch. The left-handed electron has zilch of plus 1, and the right-handed electron has zero zilch. I didn't make this up. In fact, my friend Steve Weinberg didn't make it up. If anybody made it up, he's up there, or down there. I don't know where. And it is just the way it is. It is the way the mathematics of the standard model works that the left-handed and the right-handed particles have different zilch. And now we have a puzzle. When the electron moves along and it flips from left to right, that means the zilch goes from plus 1 to 0. But zilch is like electric charge. It's conserved. How can the zilch go from 0 to 1? It can't. It can't. And that's the reason that the electron in the standard model doesn't have a mass. Because the left-handed and the right-handed have different values of a conserved quantity. And so left can't go to right, period. No mass. How do we get around this? We get around this by introducing a new ingredient. And the new ingredient is called the ziggs boson. It's not the Higgs boson. Not yet. We haven't gotten to the Higgs boson yet. We've gotten to the ziggs boson. The ziggs boson is one new ingredient. It is closely connected with this Mexican hat-type configuration here. It's a kind of particle, but it forms a condensate. You can't tell how many are there. You can put one in. You can take one out and so forth without changing the vacuum. So we have one more ingredient. It's a condensate that space is filled with. And the nature of the condensate is that it doesn't have electric charge. It has zilch. And it's a condensate, meaning that if you put a zilch in, nothing happens. If you take one out, nothing happens. And let's ask now what that means. The left-handed electron coming in has a zilch of 1. Let's call it a z of 1. The right-handed has z equals 0. Back to the left-handed, z equals 1. Is that possible? Only if you emit something at this point which carries off that z equals 1. A ziggs. The ziggs gets emitted. It carries a z equals 1. But what happens to it? Where does it go? It goes into the condensate. It gets lost in the condensate. You put one in and it just gets absorbed into the condensate. And so the electron goes on its merry way.
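In conventional standard-model language, rather than the talk's zilch bookkeeping, this cartoon is the Yukawa coupling. A minimal sketch, with the caveat that the real left-handed electron sits in a doublet and the hypercharge assignments are a bit more involved than plus 1 and 0:

\[
\mathcal{L}_{\mathrm{Yuk}} = -\,y_e\,\bar e_L\,\phi\, e_R + \mathrm{h.c.}
\;\;\longrightarrow\;\;
-\,\frac{y_e v}{\sqrt{2}}\left(\bar e_L e_R + \bar e_R e_L\right)
\quad\text{when}\quad \langle\phi\rangle = \frac{v}{\sqrt{2}},
\]

so the forbidden direct mass term is replaced by one in which the field $\phi$ (the "ziggs") soaks up the mismatched charge, and the electron mass comes out as $m_e = y_e v/\sqrt{2}$.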
The condensate absorbs the zilch and it goes from 1 to 0. But then it can borrow a particle back from the condensate. Borrow one back. It doesn't even have to borrow it. If you pull one out, nothing changes again. And so it goes on its merry way from left-handed to right-handed, from left-handed to right-handed. Every time it switches, it emits a particle carrying this zilch quantum number, which then just gets absorbed into the condensate. That's the mechanism by which a field, and in this case it's a field which forms a condensate by itself, it doesn't require capacitor plates, it just requires the energy to be such that the field naturally gets shifted. And that's the mechanism by which electrons, quarks, and the various partners of those particles, the mu particle, the tau lepton, all those ordinary and extraordinary particles, the fermions, get their mass by this phenomenon here. The phenomenon doesn't really have a simple name. Well, it does have a name. It's called the spontaneous breaking of chiral symmetry. But this is what it is. What about the Z boson? I told you before, the Z boson is like a photon. Photons are massless. How does a Z boson get a mass? So I'll just show you something very similar that happens to the Z boson. Let's remind ourselves what a Z boson can do. It can take any particle which has a zilch, and in particular, this green ziggs particle. You can take the ziggs particle, and the ziggs particle can emit a Z boson. It has charge, not real charge, but zilch, and zilch emits Z bosons. All right, so now let's ask what that means. That means that a Z boson moving along can do something a little bit similar to this. It can absorb some zilch out of the condensate. But now it has zilch. Originally, it was just a Z boson. Z bosons don't have zilch. It absorbs some zilch, and it becomes a ziggs. The Z boson becomes a ziggs, but then it can emit a ziggs, which gets lost in the condensate again. And the Z boson just moves on its merry way, constantly going back and forth from being a Z boson to being one of these imaginary, not imaginary, ziggs particles. That's the nature of the way that particles get mass from fields. This phenomenon of the Z boson getting a mass is called the Brout-Englert-Higgs phenomenon. This is the one that's called the Higgs phenomenon, the Z boson getting a mass. Now this could have happened to the photon. Had there been a condensate of ordinary charged particles, the photon would have become massive. We would all be dead if that were the case. Massive photons would not be healthy for us. And so we are very lucky that this phenomenon here did not apply to ordinary electric charge. Will we ever discover the ziggs particle? Sure. We discovered it long ago. It's just part of the Z boson. The Z boson was discovered, I mean, it was postulated in 1967, or even before that by many people. But it was discovered, I don't even remember, around 1980. I forgot when the experiment was going on. The experiment at SLAC first discovered the existence experimentally. But when it was discovered that there was a Z boson, that it had a mass, and when its properties were studied, the properties were not only consistent with, but required, that it was a thing which went back and forth and back and forth and back and forth between pure Z boson and the ziggs particle. So they've existed. We are not in doubt about them. We never were, at least not for many years. So far, I have not mentioned the Higgs boson. So what is the Higgs boson?
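Before turning to that question, one more piece of standard bookkeeping behind the Z-boson story just told: the gauge-boson masses come from the same condensate (textbook formulas, not derived in the talk),

\[
\left|D_\mu\phi\right|^2\Big|_{\phi = v/\sqrt{2}} \;\supset\; \tfrac{1}{8}v^2\!\left(g^2+g'^2\right) Z_\mu Z^\mu \;+\; \tfrac{1}{4}v^2 g^2\, W^+_\mu W^{-\mu},
\]

giving $m_Z = \tfrac{v}{2}\sqrt{g^2+g'^2} \approx 91$ GeV and $m_W = \tfrac{gv}{2} \approx 80$ GeV, while the photon combination stays massless. With that noted, back to the question of what the Higgs boson itself is.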
Well, the Higgs boson has to do with this condensate. It has to do with this condensate. But it's a different kind of excitation than sliding around the edge of the sombrero here. It does not have to move. It's not something which has to do with sliding around here. It has to do, I'll tell you, two different ways to think about it. You have a condensate, and you can imagine the condensate has a density. A density of these fictitious particles in the condensate. Imagine something which changes the density of them, kind of like a sound wave, a compression wave of some kind, which squeezes them closer and further and closer apart, makes them more and more less dense. That kind of vibration is what a Higgs boson is. Another way to think about it is that it doesn't have to do with sliding around the periphery of the sombrero. It's go to a place in space and start the field oscillating this way, in and out this way. The further away it is, the stronger the condensate. The closer to the center, the weaker the condensate. So when it sloshes back and forth, it's kind of a compressional wave in the condensate. That mode, that phenomenon, that oscillation is what is called a Higgs boson. The Higgs boson is like the sound wave propagating through the condensate. The reason it has been so important is because it was the one element that had not yet been discovered. As I said, the Ziggs was discovered long ago. The Z and the W, the electrons, and all the others were discovered long ago. And so the next question, which I'll try to answer in a couple of five minutes, is why it was so hard to discover the Higgs, what we discovered about it, and very, very quickly what the future might or might not bring. Try to do this in a couple of minutes. OK, so what kind of thing does the Higgs boson itself do? Now we're talking about the Higgs boson, not the Ziggs boson, not the Z boson, the Higgs itself, the one that's been so elusive all these years. It's called H. And what it can do with some probability is, for example, create, we read this from left to right, the Higgs boson moving along in time, time is now to the left, can create an electron, an apositron, it can create a pair of quarks. It can also create other things, a mu particle, or a top quark, or a bottom quark, all of the different quarks, electrons, also neutrinos, all of various fermions can be created in pairs when a Higgs boson decays. You say, if it's like a sound wave, why does it decay? Well, believe me, sound waves decay. If they didn't decay, you'd continue to hear my voice ring forever and ever, wouldn't you? So sound waves do decay. And it is possible to think of sound waves as decaying by creating particles. So the Higgs boson decays. It decays quickly if it exists. If it really exists, it decays quickly, either into an electron positron or a pair of quarks, or maybe some other of the fermions that exist in nature. You can read this diagram in two different ways. Oh, incidentally, the probability that the Higgs decays like this is proportional to the mass of the particle that it decays into. The heavier the mass, the more strongly that particle is coupled to the Higgs boson. So heavy particles are favored, and light particles are not favored. Now, you can read this diagram in either direction. You can say the Higgs boson decays, but you can also say an electron and a positron confuse together to make a Higgs boson. Well, if we want to make Higgs bosons and see them in the laboratory, we want to read the diagram from right to left. 
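For reference, the statement that the decay probability tracks the fermion mass can be made quantitative. The standard leading-order width (a textbook result, not derived in the talk) is

\[
\Gamma\!\left(H \to f\bar f\right) = \frac{N_c\, m_H\, m_f^2}{8\pi v^2}\left(1 - \frac{4m_f^2}{m_H^2}\right)^{3/2},
\]

with $N_c = 3$ for quarks and $1$ for leptons. The coupling itself is $m_f/v$, so the rate grows like the fermion mass squared, which is why heavy fermions are favored and electrons are nearly hopeless.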
And we want to say, this is a process whereby a pair of electrons can come together and make a Higgs boson. We've been colliding electrons and positrons for a long, long time, almost as long as I've been a physicist, not quite. We've been colliding electrons and positrons together, and nobody was ever able to discover the Higgs. Now, one reason in the early days is it turns out that the Higgs is a fairly heavy particle. I will tell you what its mass is, but it's a fairly heavy particle. And unless you have enough energy, you don't have enough energy to make the Higgs boson. But there's a more important reason. In fact, SLAC in the later days of SLAC's life had plenty of energy to make the Higgs. The problem was the weakness of the coupling: the smallness of the mass of the electron translated into a very weak, improbable cross-section. Too small in effect, too unlikely to make the Higgs. And so when you collide electrons together at high energy, electrons are just not favorable. They're too light, and because they're light, they tend to not make Higgses with any appreciable probability. Well, how about quarks? We can collide quarks together. The usual quarks that make up the proton and neutron are also very light. And because they are light, they are also unlikely to ever make a Higgs boson. Well, I'm sure they were made at SLAC, but never in appreciable enough numbers that it was possible to detect them. So that was the main difficulty. The lightness of these particles was a thing that essentially prohibited us from making Higgses in abundance at SLAC or in other laboratories where collisions took place. What is the most favorable particle, the most likely particle, for the Higgs to decay into? The heaviest, the heaviest of the fermions. And the heaviest of the fermions is called the top quark. The top quark is hundreds and hundreds, thousands of times heavier than the electron, many, many, many times heavier, many thousands of times heavier than the electron. And the Higgs preferentially will decay into top quarks. So we'll just call those the top quarks. They are quarks, but they're very heavy. 170 times the mass of a proton, basically, which is heavy. Top and anti-top. Top quark and anti-top quark. So you say, well, look, now it's easy to make the Higgs boson. You just, oh, actually, it is in fact not possible for the Higgs to decay to two top quarks, because the top quarks are too heavy. But if you read it the other way and you take a pair of top quarks and collide them together, you can make a Higgs. So it's easy. We just go in the laboratory, take a pair of top quarks, collide them together, and make a Higgs. Well, the problem is that it's not so easy to find top quarks in nature. Why not? They decay very rapidly to the other quarks. They're not sitting around. You can't put them into the accelerator and accelerate them. They disappear in a tiny fraction of a second. There are no top quarks sitting around, not even buried inside protons and so forth, not even buried inside other kinds of particles. There are no top quarks around. So we have to make the top quarks somehow in the collision. How do you make a top quark? So here's a way to make a top quark. A gluon can come along. This is a gluon now. And remember what gluons do. They couple to quarks. One possibility is that the gluon can make a top quark and an anti-top quark. Well, there's plenty of gluons around, as we'll see in a moment. So why don't we just take a gluon and make a top quark and an anti-top quark out of it?
The reason is because gluons are very light. They're almost massless. They don't weigh very much. Top quarks are very heavy. There's simply not enough energy in the gluon to create a pair of top quarks. So what we have to do is we have to take a pair of gluons. Here's a process that you can imagine. Take a pair of gluons with a lot of energy, moving toward each other with a huge speed, plenty of energy, let one of them make a pair of top quarks for a short period of time, and then let the other one come and be absorbed by one of the top quarks. There we have it, a pair of top quarks created by a pair of gluons, a pair of high energy gluons smashed together to make a pair of top quarks. Once we've created that pair of top quarks, the top quarks can come together and make our Higgs boson. The way we usually draw this is to just draw a gluon, gluon, and then a triangle, Higgs. These are top quarks going around the loop here. That's the most efficient process for making Higgs bosons. But where do you get gluons from? Aren't gluons floating around? Well, yes, they are. The proton is filled with gluons. The proton, the mass of the proton is maybe 50% energy from gluons or something like that; it's filled with gluons and quarks. You take two protons and you collide them together, and the gluons inside the protons can collide during the collision and do this. That was what was detected at LHC. LHC is a proton-proton collider. It collides protons together. And it's a very indirect way: two protons collide together, a gluon from each one of them scatters, they collide, create a pair of top quarks, and then the top quarks have plenty of energy to come together and create the Higgs boson. That's the process that was discovered at LHC. And it took a long time to get there. It was a hard thing to do. It was a very, very hard thing to do. But now it's done. We know the mass of the Higgs boson. It's 125 GeV, about 127 times the mass of the proton. And that's, I think, a finished fact. Before I quit, let's talk about the near future. What have we learned? We've learned that the standard model is essentially correct. We've learned the standard model is essentially correct. Everything seems to fit together. The Higgs boson fits together with it, though it's not the Higgs boson, really, that gives the particles their mass. It's the ziggs boson. But the Higgs boson is just what's left over when you think of these density oscillations. The last remaining piece is now in place. It's finished. But is everything fitting together exactly right, quantitatively right? Well, that we don't know. We don't know. There's one hint, one hint of a discrepancy. And I'll tell you what the hint of that discrepancy is. Here, I drew this picture. Let me draw it again over here. It's the process of creating a Higgs by two gluons coming together: gluon, gluon, top quark going around the loop, and Higgs. Now, this same process, once the Higgs is created, also allows the Higgs to decay. But it's not so easy to see gluons in the laboratory. They're difficult to work with. That's not the best process for looking for the Higgs boson after you've created it. The best process is to replace the gluons by photons. I don't have to even change the picture. Photons. It's exactly the same process, except with photons out here. Once the Higgs is created, by whatever can create it, it can decay into two photons. It's an intricate process. It involves a lot of theory and a lot of calculation, a Feynman diagram.
Not easy to calculate, but you can calculate it. And it depends on the properties of the top quark going around here. At the moment, at the moment, and I'm not an expert at this, I can only quote what I'm told, that at the moment, the Higgs boson that was produced in the laboratory appears to decay into two photons a little too quickly, about one and a half times too quickly. Now, everybody agrees that that is not a statistically really significant fact yet. But what will it mean if it persists? It doesn't seem like a big deal, one and a half times too fast. But the point is that theorists have the ability to calculate that rate very accurately. A one and a half times too big a rate is serious. It means something is going on. The most likely thing that would be going on is that there's another kind of particle, in addition to the top quark, that has not been discovered yet, that can also participate in the same kind of, it's called a triangle diagram. Some other kind of particle. That, of course, would be big news. If there's something there that is not described by the standard model, that would be big news. It could be a supersymmetric particle. It could be anything, all kinds of things. This is something to watch for now. The buzzwords are the decay of the Higgs into a pair of photons and an excess of about one and a half. I think it's a two sigma effect, whatever that means. It means something to statisticians. It means that it's not so robust, but it could be right. If it turns out to be right, it means that we've discovered something unexpected. Well, it might be even something that's expected, but something new beyond the standard model. Remember, the standard model is over 40 years old, well over 40 years old. And so, 1967, am I right? 77, 87, 97, 2007. Now getting on 50 years old. So discovering the Higgs boson wasn't really discovering anything, it was confirming something. If this should be off by a factor of one and a half, one will have discovered something absolutely new. So if you want to watch, if you want to be a spectator in the sport and you want to watch what happens, this is the thing to watch for next, whether the Higgs decays are consistent with the standard model. OK, that's really finished. Thank you very much, and I hope you all got something out of it. One or two questions? What would cause two different fermions to have different rates of chiral oscillation? Good. The answer is going to be an unsatisfying one. What would cause different fermions to have different masses? That's the same question, essentially: different rates of oscillation are the same as different masses. The answer is the coupling strength, the coupling constant that couples the relevant particle to the Higgs field. What happens? The particle moves. So each one has a separate coupling constant. It emits this, what did I call it, the ziggs? It emits the ziggs, which gets lost. There's a coefficient here, which is basically a probability. Each one, and we don't know why they are what they are. We know how to parameterize it, but we don't know how to explain it. For each kind of particle, let's say the electron, or the mu particle, or whatever it happens to be, there's a different constant there, and that constant is the constant which determines the rate and the mass. It's the same constant which comes into telling you how rapidly the Higgs decays into these particles, and therefore, the heavier the particle, the stronger the decay.
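As a rough illustration of how spread out these coupling constants are (the numbers are standard values, not figures quoted in the talk): with $m_f = y_f v/\sqrt{2}$ and $v \approx 246$ GeV,

\[
y_t \approx \frac{\sqrt{2}\times 173\ \mathrm{GeV}}{246\ \mathrm{GeV}} \approx 1, \qquad
y_b \approx 0.02, \qquad
y_e \approx \frac{\sqrt{2}\times 0.000511\ \mathrm{GeV}}{246\ \mathrm{GeV}} \approx 3\times 10^{-6},
\]

a range of nearly six orders of magnitude that the standard model parameterizes but does not explain.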
That's a good question. I meant to say something about it, I forgot. So, do we know the value? We know the value of the vacuum expectation value of the Higgs field. Yeah, we've known that for a long time. 240 GeV. The value of the expectation value is, in one language, simply the displacement of the field. In another language, it's the density of the condensate. Think of it either way, as the density of the condensate or as the value, the displacement, of the field. And that's why an oscillation in the magnitude of the field is the same as a density fluctuation. What's that? Does the condensate contribute to the vacuum energy? Apparently not. Apparently not. Well, yes, I think it does, but there are many, many other things that give it an energy density. And for whatever reason, they almost all cancel out. This is one of the great mysteries of, yeah. Yeah, right. So that's a very good question to which we don't have an answer at the moment. OK, I hope you got something out of that. I had fun preparing it and figuring out how to try to explain it. For some of you, you probably got something. Others are just mystified and wondering what I'm talking about. That's it. Thank you.
(July 30, 2012) Professor Susskind presents an explanation of what the Higgs mechanism is, and what it means to "give mass to particles." He also explains what's at stake for the future of physics and cosmology.
10.5446/15040 (DOI)
Okay, the last time I explained to you Kruskal coordinates for a black hole, let's go over it. And tonight I want to show you how you can think about the formation of a black hole, a real black hole, and how much of this Kruskal story, the diagram we drew on the blackboard last time, really is physically meaningful. Not all of it is. Okay, so the last time I showed you that the geometry, the metric of a black hole, of a Schwarzschild black hole, has the property that very near the horizon, that's the horizon over there. It's like flat space. The horizon is a kind of polar coordinate, a hyperbolic polar coordinate point, similar to the behavior of accelerated coordinate frames. Somebody standing outside, and this wedge over here, this quadrant over here, is outside. The rule is, first of all, that 45-degree angles represent the motion of light rays either going in that direction or going in this direction. Somebody outside the black hole means out here, and this quadrant, then quadrant, let's label them, quadrant one, quadrant two, quadrant three, and quadrant four. And as you're going to see, quadrant three and quadrant four really don't have any physical significance for a real black hole. It's quadrant one and quadrant two, which have real significance. Outside the black hole, somebody at rest, outside the black hole, somebody at rest, of course, is experiencing a sense of acceleration. This is I'm experiencing a sense of acceleration right here. I've gotten used to it, so I don't notice it, the fact that I'm being accelerated. But if somebody who had been in outer space too long and free fall too long lost the sense of what a gravitational field is like, and you plunked them on the surface of the earth, they would be experiencing a sense of upward acceleration. And in fact, in this diagram, somebody at rest does appear to be accelerating away from the horizon. Come in and go out, and it's that acceleration, which is what we experience as gravitational acceleration near the surface of the earth, but in this case, near the horizon of the black hole. So this is a family of observers, further and further out, very far from the black hole, way out here. In close to the horizon here, the closer you get to the horizon, the more you have to accelerate, that's the curvature of these lines here, the more you have to accelerate to keep out of the black hole. And in any case, this is the exterior, the lines of constant time, they look like this, red lines are lines of constant position, the green lines are lines of constant time, and they accumulate up near this blue line up here. So this is t equals 0, t equals 1, t equals 2, t equals 3, and t equals infinity is way, way up on this 45 degree surface here. So that's the outside of the black hole, and the inside of the black hole is quadrant number two. In quadrant number two, for a real black hole geometry, not for flat space, but for a real black hole geometry, there's something nasty, and the nasty is the singularity, and the singularity looks like this. I don't know what it looks like, because I've never seen one, but it's a bad place with lots and lots of tidal force. And the problem with it is once you find yourself in quadrant two, there's no way out. The only way to get out, even if quadrant three was meaningful, which it is not, but even if it were meaningful, you can't escape from here to out here or to out here without exceeding the speed of light. 
Your trajectory somewhere has to have a direction which is shallow within 45 degrees, and that means exceeding the speed of light. So that means if you don't exceed the speed of light, the only place you can wind up is somewhere on the singularity, so you're doomed. If you're outside and you make a bad mistake, you could fall in, but if you don't make a mistake, you can jump back out and get out. So people on the outside or light sources on the outside, flashlights, whatever you like, they can produce signals which go in or they can produce signals which go out, and that's the character of a black hole. So that's all there is to black holes. The geometry is somewhat warped. It's not just flat space. It's somewhat warped and curved, but this is basically it. Could you say what the axes of that diagram are? Time and space. Time and space as seen by an in-falling observer. And so the lines of constant position that you drew for what stationary observer outside the... Yeah, that's correct. So for somebody falling in who doesn't accelerate, the coordinates that they see which are not too different from ordinary flat space in free fall, at least until they get near the singularity, time and x, those are the coordinates that such a person would use. So to them, the space looks flat pretty much, pretty much, especially near the horizon over here. To somebody on the outside, they experience this acceleration. And the acceleration is basically the same as the accelerated reference frame in flat space. Okay, now, it's very convenient to redraw pictures like this. Space is infinite here. It goes on and on and on and ends up somewhere in the next county or further than that. And time may come to an end at the singularity over here, but these directions offer 45 degrees. These light light directions, they are also infinite. And so I can't draw the entire space time on the blackboard. For many purposes, it is useful, convenient, provides some visual tools, provides some intuition to be able to draw the entire space time on a finite piece of the blackboard. Well, you can imagine why. You want to see everything that's going on. You want to get a complete overview of space time. You'd like to draw it on the blackboard in some finite region. Let's start doing that, but let's start with flat space. Let's start with good old ordinary flat space and take all the flat space time. When I say flat space, I mean flat space time. And let's redraw it or re-coordinateize it so that we do a coordinate transformation which pulls the whole thing into some finite region on the blackboard. I'll show you how to do that. This is useful, incidentally, for geometries and space times which have rotational symmetry. Rotational symmetry means that the same in every direction. They mean the same thing, rotational symmetry is when we say a gravitational source has rotational symmetry, means the same in every direction. And when a system is the same in every direction and has rotational symmetry, it's usually useful to describe it. Now I'm talking about ordinary polar coordinates. When a system has rotation symmetry, it's the same in every direction. We don't care so much about the angle. Everything is the same independent of angle. What we care about is how things vary as we move away from the center, the radial direction. And so for many, many purposes, we don't need to think too much about the angular direction. And we just think of space time, flat space time, ordinary flat space time as having a time axis. That's the time axis. 
It goes from minus infinity to plus infinity. And it has a radial direction. The radial direction is like that. The radial direction never gets negative. R is never negative. And polar coordinates, R is always positive. And so the entire space time is to the right of the time axis here. Only because being to the left of the time axis would mean negative R, and R is always positive. It's the distance from the origin. So here's everything, the entire space time. So let's put some markers on it to be able to visualize some landmarks. First of all, this horizontal axis is t equals zero. And it's t equals zero right over there. Let's put in t equals one. t equals one looks like this. t equals two looks like this. t equals minus one is like this. t equals minus two is like this. And if I wanted to draw the entire set of possible times, it would take me up to heaven and down to hell. So this is all possible times. And we could also plot all possible radial distances. And the radial distances would just look like this. This would be R equals one. R equals two. R equals three. And in all cases, I will imagine using units so that the speed of light is one. C equals one. That simply means that I work in, if I'm going to use seconds for time, I use light seconds for distance. And then light rays. Light rays will move at 45 degree angle. Well, that's not quite right. Let's think about the motion of light rays. If a light ray, let's draw another picture over here. This is R equals zero. This is polar coordinates. This is the radius here. It's just one instant of time looking and including the angle now. If a light ray comes in radially toward the origin, it hits the origin. And then what does it do? It goes back out. I meant to draw it straight. But this is just saying that a light ray in polar coordinates that happens to be aimed directly at the origin from far away. Imagine it's coming in from very far away. If it happens to be aimed right to the origin, it will go to the origin and then go back out. On this diagram, it will appear to bounce, to reflect off the origin. Here's R equals zero. This is R equals zero over here. And so a light ray coming in from long in the past would look like, I think I'll use another color, a light ray coming in from the past will look like this. Moving at 45 degree angle, it will get to the origin at some time. And when it gets to the origin, it will simply start back out. It hasn't really reflected off anything, of course. It has just gone through the origin, came in, and went back out. So a light ray starts out in the very remote past coming in at 45 degree angle from a place that we call light, like infinity, past light, like infinity. But all it means is the place where all light rays come from in the infinite past. And they go out to the infinite future. Of course, there are other light rays that do the same thing. This one comes in a little bit earlier and goes out a little bit earlier. Another one up here. But they all look pretty much the same. They come in at 45, they bounce off the origin, and they go out at 45 degrees. Now just for fun, let's think about a light ray which doesn't pass through the origin. A light ray which doesn't pass through the origin will miss the origin. It comes in and it misses the origin. 
When it's very, very far away, you can't really tell that it isn't done, unless you have very good precision, angular precision, extremely fine angular precision, watching that light ray way, way back there, it's hard to tell whether it's going to hit the origin or not. So it starts out coming in pretty much the same way. But it never gets to the origin. In fact, it almost appears to be repelled away from the origin and then goes back out. That repulsion is not a real repulsion, but it is what we call centrifugal force. It really is centrifugal force that keeps the light ray from hitting the origin. Or you can just say it doesn't hit the origin because it wasn't aimed at the origin. Same thing. But what does it look like? It comes in pretty much the same way at first. And instead of going through the origin, it gets a point of closest approach. The closest approach is over here. At that point, it starts going back out. So a light ray which is perfectly radially tuned to hit the origin looks like a 45 degree ingoing and outgoing line. But a light ray which misses the origin looks like, in fact, sort of like, it looks like a hyperbola. But whatever it looks like, that's it. I'm just telling you these things here to give you some landmarks and to give you some perspective on what we're going to do next. Next thing we're going to take this diagram and we're going to squeeze it and stretch it in various ways to get the whole thing on the blackboard. That's going to deform it. It's not going to look the same. But we're going to keep one thing fixed. The one thing fixed is that light rays will always be assumed to move on 45 degrees. In other words, any squeezing and squashing and stretching that we do should preserve the fact that light rays move in 45 degree angles. That's a useful thing to do because then we can see how light rays move. And we can see what going faster than the speed of light, what going slower than the speed of light looks like. So I'm going to show you how to do that mathematically. Excuse me. Yeah. There's nothing special about the origin. There's not a black hole there or anything. No, no, no. At the moment there's nothing special. It's just a particular point that we may want to focus on is there may be something there, but no need for it. Okay. So light rays in this diagram are bent just because of the particular way that we chose the coordinates. Yeah, that's right. It's just a choice of coordinates at the moment. Okay. Now, there may be good physical reasons to want to use a particular choice of coordinates. When studying the sun, there is no black hole, but the symmetry of the problem makes it convenient to think about the sun as being the center of something. And right. So that's all we're doing. All right. Next step. Let's assign some coordinates. I'm not going to call this T. I'm going to give it another name. I'm going to call a vertical axis capital T. Capital T and the horizontal axis capital R. I don't want to confuse them with other T's and R's that we're going to use and that we have been using. So we're going to call it capital T and capital R. But there is another way to coordinate this, and that's to use coordinates at 45 degrees angle. In other words, to introduce axes along 45 degree angles here and here. Of course, I don't mean to imply that there's anything behind R equals 0. We're only interested in this region here. But we're going to introduce new coordinates, and this coordinate over here, that's not going to be T or R. It's going to be T plus R. 
T plus R varies as you go from here to here to here. If I were to plot constant T plus R, constant T plus R would be these lines. Oops, I didn't draw it well. Let's draw another picture over here. R T, constant T plus R will look like this. These will be the lines of constant T plus R. OK, so this T plus R, and of course, there's also T minus R. T minus R, the lines look this way. But really we're only interested in this half over here. So the lines of T minus R, constant T minus R, look like that. These are useful coordinates, as we'll see. T plus R, that's called T plus. T minus R is called T minus, just as you might expect. OK, so we have a different way of describing the same spacetime, the same flat, good old flat spacetime. Instead of T plus R and T minus R, we have T plus and T minus. Now, what about this line here? This, what was this line? This line was just plain R equals zero. R equals zero. So what is that? What is R equals zero? That's T plus equals T minus. Let's write T plus equals T minus. T plus equals T minus, that's the same thing as T plus R equals T minus R, or R equals minus R. The only solution to R equals minus R is R equals zero. So this line over here, which was R equals zero, is also just T plus equals T minus. That's a way to describe it. OK. Now we're going to change coordinates, and the way we are going to change coordinates is to introduce two new coordinates, these are called light like coordinates. T plus and T minus is called light like coordinates. We're going to change light like coordinates. We're going to introduce two new light like coordinates. One of them is called U plus, and one of them is called U minus, and U plus is a function of t plus. So we're going to write u plus is some function of t plus. It's not going to involve t minus at all, and u minus is going to be, in fact, I think the same function of t minus. All right, but this function is just a coordinate transformation in which u plus and u minus independently, sorry, t plus and t minus just independently transformed to some new coordinate. Good. What kind of function am I going to use? I'm going to use a function of t plus, this is f or u, this way u equals f of t plus, and the function is going to look like this. When you go out to t, it doesn't ever become absolutely flat. It's always got a slight positive slope, also down here it also has a slight slope, but it'll go to one over here, u plus, u plus, u plus of infinity is one and, or just f, let's call it f. f of infinity is one and f of minus infinity is minus one. Now, anybody know a function that does that? There are many functions which do that. What's that? Yeah, a hyperbolic tangent is the one that's usually used. Hyperbolic tangent goes to one asymptotically, it's the usual one that's used but there's nothing special about it. Hyperbolic tangent, hyperbolic tangent, f of t plus is, and if you don't know what a hyperbolic tangent is, don't worry about it, it's just a function which looks like this, tanh of t plus. Okay, now think about what happens as we start moving on this graph further and further outward. T plus is getting bigger and bigger and bigger. What is u plus doing? It's never going to get bigger than one, right? It's never going to get bigger than one. So, let's draw our same space time with these new coordinates, I think I erased them but let's put them back. U plus equals hyperbolic tangent of t plus and same thing for minus. As we move out, let's redraw our picture. 
As we move out, let's start over here and move out, how big does u plus ever get? U plus never gets bigger than one. So, this diagram, if this is the u plane, not the t plane but the u plane, u plus never gets bigger than one, here it is, right there, it never gets bigger than one. So, that means all of the space time has been mapped somehow down into here. What about u minus? u minus is equal to hyperbolic tangent of t minus. What this says is as we move down in this direction, t minus never gets bigger or less negative than minus one. t minus is getting very, very big and negative as you go down here. t minus is getting bigger and bigger but negative as you go down here but u minus sort of maxes out again at minus one and here's t minus equals minus one. t minus never gets too far, t plus never gets too far. This is, I'm sorry, I said t plus and t minus never get very far, I meant u plus and u minus. u plus equals one, this is u minus equals minus one in fact. And what about this vertical line here? The original vertical line over here was t plus equals t minus, right? t plus equals t minus, that's the same as r equals zero. If t plus equals t minus, then that says that u plus equals u minus. So, in fact, this line happens to be u plus equals u minus and it looks like a vertical line. All we've done, if you had, don't follow the details, all we've done is squished the geometry this way, squished the geometry that way and squished it onto a finite triangle. You'll understand this better when I draw in these coordinates. Now, what is the trajectory of a light ray at outgoing, let's say, an outgoing light ray in terms of the t pluses and the t minuses? Well, an outgoing light ray is at a constant value of which, t minus, I guess, t minus would be constant and fixed on here, t plus would be constant and fixed for an ingoing light ray. But if t plus is constant and fixed along some trajectory, it means that u plus is constant and fixed along that trajectory, likewise for t minus and u minus. The upshot is that if you make a coordinate transformation like this, it is still true that radial light rays move on straight 45 degree angles. These are light rays that come in from the remote past, bounce off the origin and go back out. Again, bouncing off the origin is just a figure of speech, you just pass through the origin. So we've succeeded in taking all of space time and mapping it to a triangle that we can draw the whole thing on the blackboard. Let's draw in some more coordinates, let's draw in some more detail. In particular, let's take these horizontal slices which correspond to different times. Let's imagine starting at the origin at where is t equals zero? Here's t equals zero and moving outward, outward, outward will allow ourselves even to exceed the speed of light and just move outward in our imagination. What does that look like? That of course looks like starting at a point over here and just moving outward. And where do you eventually get to? You get to there. So this point must be spatial infinity, r equals infinity. It's where you get to if you just walk along this line till you get out to infinity. But what about this line over here? t equals one. If you plot that in these coordinates, it'll start about over here, but it will go to that same point. Likewise, t equals minus one. So what we've done is we've squished all of spatial infinity, we've squished it into a point right over here. And it's called space-like infinity. Yeah. No, they're not exactly straight. 
It depends of course on the particular choice of function, but they won't be straight. No, let's draw them more carefully. One of them will be straight. The one will be straight will be the t equals zero. This one will be straight. Then the next one will look like this. It'll have some curvature. What about way, way, way up there? t equals a hundred billion. t equals a hundred billion is going to be very close to this edge. So as we keep putting in more of these, they accumulate and get closer and closer to this edge up there. You've got to get an infinite number of them in there. Time really does go on and on forever, at least in flat space. And so this is what the series of timelines would look like, different times. On the other hand, now let's look at fixed spatial position. Fixed spatial position, we can mark off different locations. Here's r equals one, here's r equals two, r equals three, r equals four, r equals five, tata tata, and they again get all squished up near r equals infinity. Where do they all go? They all go vertically upward, and on this diagram, they all go up right up to this point up here. You can check that by doing the transformation, and that's what you would see. More and more and more of them. It's the same diagram, just in different coordinates. And this coordinates, as I said, everything gets mapped to a finite region of spacetime. Again, I emphasize, light rays move at 40, radial light rays move at 45 degrees, they come in and they go out, and so forth. Okay, this is called, this process of squeezing everything onto a finite piece of the blackboard without changing the speed of light. It's called compactification. You take everything and you compactify it. But it's also called making a Penrose diagram. This is a Penrose diagram of flat spacetime, Roger Penrose. Sometimes it's called a Carter Penrose diagram because a physicist named Carter, I think actually invented it first, invented Carter, but it's usually called a Penrose diagram. Good. So that's flat spacetime laid out on the blackboard. Now, the question is, supposing you took the black hole, is the black hole geometry? It also goes, oh, let's put some names of points. This is called spatial infinity or r equals infinity. This one up here, what would you call that? That's what you get when you just go vertically upward. That's time equals infinity. Right, these green, t equals infinity. That's this point. What about this one? That's t equals minus infinity. All right, so this, I'll give you the name. This is called future time-like infinity. This is called past time-like infinity. This is called spatial infinity or space-like infinity. And what about these lines here? These 45 degree angles. These 45 degree angles are where all light rays come from. Light rays come in and go out. They don't really go out because it takes an infinite amount of time for them to get out. But a light ray beginning in the infinite remote past starts out its life coming in along one of these light light trajectories here. Bounces off and goes back out. The standard terminology is to call this script i minus, sometimes called scri minus, script i, scri minus. And I don't know why, I don't know where the i came from. I know what the plus and minus means. This is called i plus or script i plus. And these are called past light-like infinity and future light-like infinity. The place where, the place where light rays begin and the place where light rays end. And that's the terminology. 
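A minimal numerical sketch of the compactification just described, using the tanh map from the lecture (the function and variable names are illustrative choices, not anything standard):

import numpy as np

def penrose_point(t, r):
    """Map an event (t, r) of flat spacetime, with r >= 0, into the Penrose triangle.

    Each light-like coordinate t +/- r is squashed independently with tanh,
    so radial light rays (t + r = const or t - r = const) stay at 45 degrees.
    """
    u_plus = np.tanh(t + r)    # never exceeds +1
    u_minus = np.tanh(t - r)   # never drops below -1
    # Diagram axes: vertical ("time") and horizontal ("space") on the blackboard
    T = 0.5 * (u_plus + u_minus)
    R = 0.5 * (u_plus - u_minus)
    return T, R

# r = 0 maps onto the vertical edge R = 0; large r at fixed t crowds toward
# space-like infinity, the single point (T, R) = (0, 1); large t at fixed r
# crowds toward future time-like infinity, the point (T, R) = (1, 0).
print(penrose_point(0.0, 0.0))    # (0.0, 0.0)
print(penrose_point(0.0, 50.0))   # approximately (0.0, 1.0)
print(penrose_point(1e6, 2.0))    # approximately (1.0, 0.0)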
And that's what flat space looks like when compactified. Now, what about the black hole? The Schwarzschild black hole. What does that look like? We can do exactly the same thing. We can do exactly the same thing. Take this whole diagram, introduce again, I guess we can call this R, the original Kruskal coordinates. And we can again introduce light-like coordinates, T plus R and T minus R, and do exactly the same operation. And what will we get? What it will look like, I'm going to draw it over here. And then I'll show you how it works. Okay, that's everything, the whole thing drawn on the finite blackboard. And I think you can probably see what goes where. The exterior of the black hole, that's this quadrant over here. The square. It's square if drawn properly. And we can again draw coordinates. Here's T equals zero. Here's T equals one. Here's T equals two. Here's T equals three. T equals four. That corresponds to these lines which all pass through this point over here. They also pass through this point over here. The origin over here, which is the horizon, is right over here. Okay, what about the vertical lines, the red vertical lines? Those are constant position. Those are observers who are hovering above the black hole at a constant radius. They look like this. Way out, even further out, even further out. In closer, here's somebody hovering close to the horizon. Here's somebody hovering very close to the horizon. The closer you hover to the horizon, the more you have to accelerate to keep out. So it's not a comfortable thing to be hovering above the horizon, because you're being pulled or jerked or whatever, you're being accelerated. This is the exterior of the black hole. What about quadrant two? Quadrant two is over here. Quadrant two is over here and it is the place where you're doomed. Here's the singularity. Now, the singularity is not off at infinity. It takes a finite amount of proper time to fall from the horizon to the singularity. It's a finite excursion from here to here. And so the singularity is a finite time away. Somebody who falls through doesn't get to experience an infinite time. They will hit the singularity. And this is it. Here it is. It's the whole thing, the whole black hole geometry laid out in a Penrose diagram. If you're over here, you're doomed. If you're out here, you can escape or you may go in. Now, you should ask yourself, what is this other half? If this is the outside of the black hole, what is this other half over here and what's down here? What's down here in particular, what is this thing I drew down here? That's quadrant three and quadrant four. Quadrant three and quadrant four have no real meaning for a real physical black hole. As we're going to see, we're going to work out a real physical black hole, and they really have no meaning. They're not really on the diagram of a real black hole. Nevertheless, you can ask what kind of geometry is described by this full, what is called the extended Kruskal diagram. It seems to have two exterior regions. It seems to have two exterior regions, a second exterior region over here where you can escape outward in this direction or you can fall in from this side. It seems to have two exterior regions connected together at the horizon. That's the horizon. That's the place where r is equal to 2mg, r equals 2mg, the Schwarzschild radius. Remember, it's the place where 1 minus 2mg over r changes sign; where, as you come in, it suddenly changes sign, and it goes off in a light-like direction.
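For reference, the formulas sitting behind this picture, in one common convention (the lecture's 2mg is 2MG, with M the mass, G Newton's constant, and c = 1); these are standard textbook expressions, not rederived here:

\[
ds^2 = -\left(1-\frac{2MG}{r}\right)dt^2 + \frac{dr^2}{1-2MG/r} + r^2\,d\Omega^2,
\qquad
X^2 - T^2 = \left(\frac{r}{2MG} - 1\right)e^{\,r/2MG},
\]

where $X$ and $T$ are the Kruskal coordinates of the diagram. The horizon $r = 2MG$ is the pair of 45-degree lines $X = \pm T$, and the singularity $r = 0$ is the curve $T^2 - X^2 = 1$. As a number, restoring $c$, the horizon radius $2MG/c^2$ is about 3 kilometers for one solar mass.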
All right, so that's what it looks like, but supposing now we take a fixed slice here that looks like this, go right through from here to here. What does that space look like? It's space now. It's not space time, space on this green slice, so this green sheet surface. What does it look like? I'm going to tell you what it looks like. You start out way out here, you start out way out here and space is very big. Namely, the celestial sphere is very, very big. Way, way out here at large r. So you see a very big region of space. I'm just drawing it as a plane. As you come in, you're moving closer and closer to this horizon. What happens to the radius of the sphere? It shrinks down to r equals 2mg. So you could make, you could draw it like this. It's getting smaller and smaller. That's not very good. I don't like that. I don't want to. Let's try to do better. Let's look almost edge on. Then as we come in, it shrinks down these circles. Okay, let's go back a second. Each point on this diagram represents a two-dimensional sphere. Do you understand why it represents a two-dimensional sphere? Remember, we're using polar coordinates. Right. So just as in polar coordinates on a plane, each radial distance represents a circle. Each radial distance in polar, three-dimensional polar coordinates represents a sphere. At each point here, there's a sphere that somebody could move around on, a two-dimensional sphere. And the two-dimensional sphere is very big way out here. As you come in, the two-dimensional sphere gets smaller and smaller until it gets to some minimum size over here. And that's the horizon. That's r equals 2mg. Now, what happens if you just keep going on this diagram, on this sort of fake diagram, half of which has no real meaning? That we'll see. Well, it starts expanding out again. It starts expanding out again until it gets very big. It looks like you can pass through from one side to the other by going through what people call a wormhole. Okay. That's also called an Einstein-Rosen bridge. It connects to what appear to be external regions to the black hole, which get bigger and bigger and bigger as you move away from it. And it looks like the black hole is connecting to two universes, two asymptotic regions. You might think, well, you could pass through going from here to here. You can't. Let's think about somebody who wants to start on this side and wants to pass to this side. If they start out here, anywhere's on this side, and they want to pass to this side, their trajectory has to exceed the speed of light. There's no way to get from this quadrant to that quadrant, even if you start very, very long in the past and you're willing to get way up into the future here, you still have to exceed the speed of light. So, in fact, this diagram is a little bit misleading. It's everything at an instant of time, but there simply isn't really time to go from one side to the other. You cannot go from one side to the other. And so the Einstein-Rosen bridge is not really a bridge. One way to think about it is that it opens up and closes again before anything can pass through it. But the best way to think about it is just to look at this figure and say, yeah, if you could exceed the speed of light and move horizontally, yes, you would move from here to here and going through the neck at the center there, but you're not allowed to do that. You can only move on 45-degree angles or less, and so you really can't get from one side to the other. 
This has been a source of all sorts of science fiction, passing from one universe to another through the center of a black hole, but as you can see, it doesn't happen. These kind of wormholes, or whatever you want to call them, which you cannot get through, they're called non-reversible wormholes, meaning to say you can't traverse them. You can't go from one side to the other. I'm sorry to disappoint you, but that is the way it is. But in fact, as we're going to see in a few minutes, the meaning of the left-hand side, there's no real meaning to the left-hand side anyway. There's no place to go. There is no real left-hand side. The left-hand side is really absent from here. So let's talk about now creating a black hole. Creating a black hole, not in a laboratory that's too hard, but creating a black hole in infinite space by having some in-falling matter. We're going to take very special kind of in-falling matter. What we're going to do is imagine making a black hole. We start with no black hole. There's no black hole. But what we have is a very, very distant shell. It's not a shell made out of iron or anything like that. It's a shell of incoming radiation. Somebody way, way, far away created an incoming shell of radiation, a thin shell of radiation. Radiation carries energy. Radiation carries momentum. And the shell is coming in. It's coming in with the speed of light. At some point, as we will see, this shell gets close enough together that there's so much energy in a small region that a black hole forms. So this is the simplest version of black hole formation. A star isn't really made up out of things which fall in with the speed of light. This is the problem of a black hole being formed by stuff coming in with the speed of light on a thin shell. It's the simplest of all relativity problems. So let's see if we can figure out how it works. You need to know really only one important thing, two important things. The shell moves in with the speed of light. And the other thing is a version of Newton's theorem. Anybody know what Newton's theorem is? About gravity? For a spherically symmetric gravitational region? In particular, for a mass distribution which forms a shell. Supposing you have a mass distribution which forms a shell, it doesn't matter what whether it's moving or doing anything. If you have a mass distribution which forms a shell for which there's nothing on the interior, then there's no gravitational field on the interior. The interior is completely without gravitational field. So if you have a shell of material spherically symmetric with nothing on the interior, there's a gravitational field on the outside. This is Newtonian. There's a gravitational field on the outside which is identical to a gravitational field of a point mass at the center, same amount of mass. But on the inside of the shell, there is nothing. So outside, it looks like a point mass. Inside, it looks like there's nothing. That's Newton's theorem or a special case of Newton's theorem that on the interior of a distribution of mass which forms a shell, you see no gravitational field. On the outside, you see the gravitational field as it would be if all of that mass was concentrated at a point. That's even true if the shell is moving. As the shell moves in, more and more space will look like the field of the point mass and less and less will look like empty space. But interior to the shell, it always looks like nothing's there. Exterior to the shell, it looks like a point mass. 
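Stated as a formula, and using \(R_{\rm shell}\) as my own label for the shell's radius, Newton's theorem for a spherically symmetric shell of total mass \(M\) says

\[
\phi(r)=
\begin{cases}
-\dfrac{MG}{r}, & r > R_{\rm shell},\\[6pt]
-\dfrac{MG}{R_{\rm shell}}, & r < R_{\rm shell},
\end{cases}
\qquad
\vec g = -\nabla\phi =
\begin{cases}
-\dfrac{MG}{r^2}\,\hat r, & r > R_{\rm shell},\\[6pt]
0, & r < R_{\rm shell}.
\end{cases}
\]

Outside, the field is exactly that of a point mass \(M\) at the center; inside, the potential is constant and the field vanishes.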
This theorem is also true in general relativity. The statement of the theorem is if you have a shell of infalling or a shell of mass of any kind, then the interior region is just flat spacetime, just as if there was no source. The exterior region, it doesn't look like the Newtonian point source. What does it look like? It looks like the Schwarzschild solution. So if you have in general relativity, on the inside of a shell of mass, flat spacetime, on the exterior of it, it looks like the Schwarzschild black hole, a C-H black hole. So if you had a static non-moving shell, what you would do to construct the actual solution is you would sort of paste together an interior here which was flat with an exterior which was, which is Schwarzschild. This would do the same thing in Newtonian physics. Paste together no gravitational field on the inside with standard gravitational point source on the outside, and that's the way you find solutions of Newtonian physics. You do exactly the same thing in general relativity. Okay, so now we can, let's redraw the rest of this black hole here for a minute. Let's put it back. I erased part of it, but I want to put it back. Okay, so let's go back up to here. Flat spacetime. In flat spacetime, I'm going to redraw it. I want to get rid of all these decorations and just put in nothing, nothing in spacetime. But now I want to add the idea of an incoming shell of radiation. You can think of the incoming shell of radiation as a sort of pulse of incoming photons, a pulse distributed on a nice sphere of incoming photons. In this picture here, they come in from light like infinity. So here's the pulse. Let's draw the pulse of incoming photons, arranged around. Let's put it a little bit later. Let's make it a little bit later. Put it up here. There's the pulse of incoming radiation. It forms a shell. Where is the interior of the shell? At any given instant, where is the interior and where is the exterior of the shell? So that's easy. Here's a line at constant time, space. Where is the interior of the shell? The interior of the shell, in other words, the interior region, interior to the shell, that's the part of the spacetime that's inside the shell near r equals 0. The part of the spacetime that's outside the shell is out beyond the blue line here, and it's out here. Where is the incoming shell, and what does Newton's theorem or the general relativity version of it say? It says that on the interior of the shell, that's everything in here. Everything in there is flat spacetime. In other words, it's correctly represented by the spacetime that we drew here, which was just a representation of flat spacetime. So first of all, on the interior of the shell, which is to the lower left here, the spacetime of just the flat Penrose diagram is the correct representation of everything that's going on. Now what about out here? Out here, we're told that there's a gravitational field. The gravitational field does not look like flat spacetime, so this region out here is wrong. This is not the correct representation of the physics or the geometry of this infalling shell out beyond the shell. What is the correct representation? The correct representation is the Schwarzschild black hole. So let's redraw the Schwarzschild black hole to get rid of all the junk on it. Just redraw it. I lost my black pen. Okay, so here it is. And now somebody on the outside throws in the shell. The shell comes riding in. And what does it look like? It looks like this. Oh, I used blue before. 
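Here is a sketch of the gluing for a static shell (the general relativity version of Newton's theorem is Birkhoff's theorem; the constant \(a\) below is something I add just to make the time coordinates match at the shell):

\[
ds^2_{\rm inside} = -a^2\,dt^2 + dr^2 + r^2\,d\Omega^2,
\qquad
ds^2_{\rm outside} = -\Bigl(1-\tfrac{2MG}{r}\Bigr)dt^2 + \frac{dr^2}{1-\tfrac{2MG}{r}} + r^2\,d\Omega^2,
\]

with \(a^2 = 1 - 2MG/R_{\rm shell}\) so that clocks agree at \(r = R_{\rm shell}\). The interior is flat spacetime (just with time running slow relative to infinity), the exterior is Schwarzschild, and the two are pasted together along the shell, which is the same procedure being applied here to the light-like shell.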
Well, all right, let's use red. Here's the incoming shell. It comes in. Experience is nothing special at this point over here. It just keeps going, but eventually it's a light ray. It's a light ray, a radially incoming light ray. It must move on a 45-degree angle. It moves in and just eventually hits the singularity. All right, so that's what the light ray would look like on the Schwarzschild geometry. But now which part of the diagram is correctly representing the physics that we're doing? The interior of the shell is not correctly represented by the black hole. The interior of the shell is correctly represented by flat spacetime. It's the exterior which is correctly represented. The exterior is out here. We can make it a little bigger. Just I didn't mean to slice it so thin. Let's make it a little bit bigger. That just means the shell starts in earlier. The orange part here is the correct description of the spacetime of the in-falling shell. And the interior in here is not correctly represented. So how do we put these two together? How do we put these two together in order to make a single geometry? And it's pretty easy. You just take the wrong part of this and throw it away. And you take the wrong part of this and you throw it away. And you take the two right parts and sew them together, or glue them together along the joining line where the in-falling shell is. You take this, you put it up there, redraw it, and it looks like this. Let's see if we can see what's going on. You see I've taken this diagram over here and simply placed it on top of the flat spacetime drawing so that the shell matches across them. The shell is the thing they have in common. The in-falling shell is the region that the two of them have in common. And on one side you plaster the black hole geometry. On the other side you put in the flat space geometry. That is the geometry of a black hole that is formed by an in-falling shell. Now excuse me while I consult with Newton. Any questions about this? No discontinuity. But we're going to put that in and see where it is. No, nothing special happens. Okay, where is the horizon? Where is the point of no return? Where on this diagram is somebody doomed? What you do is you start up in that corner there. That's the corner where the singularity appears to meet scri plus. That's infinitely far away so don't worry about it. Scribe plus is very, very far away, very, very late time. But now run a 45 degree line from there down. It's not a place where anybody gets hurt if they fall through it. It's just a place where if you do fall through it you cannot escape from the singularity. You will hit the singularity. If you're on the outside of it, here you can get away. You can escape to scri plus for example. But notice something very, very curious. Even over here you're doomed. You're still in flat space time. You're still in flat space time. The shell hasn't even gotten in yet. You can't see the shell. If you look backward on your light cone you'll not see the shell. The shell is coming in at you at the speed of light. Light hasn't gotten to you yet. You don't know the shell is coming. And yet if you're standing over here, in other words if you were somebody who was just falling over here, just in free fall over here, you're already behind the horizon even though the shell hasn't even gotten in yet. You're still in flat space time. Not only don't you feel anything drastic. You don't feel anything. Nothing coming at you but nevertheless you're doomed. You're doomed. 
You cannot get out. So the horizon extends all the way back to here. This is inside the horizon. What happens when the horizon is inside the shell? It hasn't collapsed down to the Schwarzschild radius. When the shell hasn't collapsed? It hasn't reached the Schwarzschild radius. Right over here is where the shell crosses the Schwarzschild radius. Okay, so let's go back and draw a picture. Here's the incoming shell. It has a certain amount of mass. Associated with that mass is a Schwarzschild radius. All right, so there's some massive shell, mass meaning energy. It's made of light but it still has energy, therefore mass. And here is the Schwarzschild radius of a black hole with that mass. If we had a black hole of that mass, it would have a Schwarzschild radius that's that big. As long as the mass is on the outside, well we can say that the mass is on the outside. That's all there is to say. But once the mass, once the light crosses past the Schwarzschild radius, it's trapped. It cannot get out. It has formed a black hole for sure. Where does that form on here? Okay, right over here. Here's the in-falling shell. It crosses the horizon right over here and then it's on the inside and there's no way for that shell to change its mind and get back out. Out here, the shell could change its mind and accelerate back out. Well, I suppose shells of photons don't have minds, but if they had a mind, the photons could turn around over here. Once they get past here, they cannot turn around. They can try to turn around, but they won't get very far. They'll hit the singularity. So this is the point at which the shell crosses its own Schwarzschild radius. Yeah. Is there a meaning of the word horizon until it gets to that point? Yeah. It still means something? The definition of the horizon is the place which separates those people who can get out from those people who can't get out. Can't they get out before a collapsing shell crosses? No. Look, we'll get a picture of it. If you're over here, the shell hasn't crossed. What do you mean before? Pick a time coordinate. This diagram you drew over here. If I'm inside the horizon, it's flat space, there's no gravitational field. Why can't I just walk across here or shine a light? Because by the time you get to here, the shell will have- You can't get to the top of that. Yeah. I think the only answer is to look at this diagram. You look at this diagram and you say, I'm over here. I'm going to try to get out. But by the time you've gotten out there, you simply can't get out. You have to exceed the speed of light. The diagram is the whole story. There's no more story than that. If you're here, you can't get out. If you're here, you can't get out. So the horizon has materialized before the shell has actually gotten into its own Schwarzschild radius. So what's the explanation of this? At least what I think is an apparent paradox. Inside the shell there's no gravitational field; it's flat spacetime. Why can't I just walk across the radius? What do you mean walk? Here's walking. You're walking with the speed of light. It's pretty fast walking. I mean the shell is collapsing with the speed of light. If it's a lot farther away from the horizon than you are, you ought to be able to beat it out. Here it is. What it's saying is that if we're sitting here and a light shell is a million miles away, we're doomed. If the light shell is a million miles away, we are still doomed. It's coming this way. That's right.
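To put a number on the radius being drawn here, with \(c\) restored the Schwarzschild radius associated with the shell's total mass \(M\) is

\[
R_s = \frac{2GM}{c^2} \approx 3\,\text{km}\times\frac{M}{M_\odot},
\]

so a shell carrying one solar mass of energy is trapped for sure once it has squeezed inside a sphere roughly three kilometers in radius (about 2.95 km, to be a bit more precise).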
You can't walk fast enough to get out of its way. If that light is coming in at you, you may be trapped even though you don't know that there's a light ray coming in. So the proof is the diagram. But with this analogous to radiation coming towards us, what we can't get past it, is not a shell coming in. It's not a shell coming in. The horizon is a funny thing. It actually forms before the material gets past the Schwarzschild radius. Its definition is it's the point of no return. It is the point which if you're on this side of it, you can't get out. If you're on that side of it, you can't get out. It's not a point. It's a cone. So if there was a mirror right at the outside of the Schwarzschild radius, and the light comes in, hits the mirror and turns around, bounces back out, what would that diagram look like? Well, not likely you can really do this. You're saying supposing there was a mirror right outside here. I think the point is you can't have a mirror that has no weight. If you could, now surely you can do this to some extent. You can build a mirror around a region like this, and the light ray would reflect off. If you really succeeded in building a mirror, the light ray would reflect off, and it wouldn't... Yeah. Yeah, you wouldn't have a black hole. Yeah, so we can draw it on here. Here's the mirror. The mirror is at a certain distance away from the horizon, away from the center. There's the mirror. The light comes in and bounces off, and that's all that happens. So it's kind of not until it actually crosses the horizon that you're committed to being inside the mirror. Yeah, that's right. Question? Yeah. Just to test my understanding. This light could decide to turn around at this point, but there's no way that this gentleman over here could tell that light to turn around over there. Not even make a mirror really fast. Not over here he can't. If he's over here, he can't get the material out for a mirror over here. He could have made a mirror in the past. He could have made a mirror in the past and protected him. Well, let's see. Can it? Yeah, maybe so. If he was over here coming in, he could decide to shoot out a mirror and have that light ray reflect off the mirror over there. But once he gets past this point, once he gets past that point, he cannot shoot out the mirror out to this point. He can't get it out there in time to rescue himself. Of course, in practice, he wouldn't even know that that light ray was coming in. Well, he might know. He might have Bob could be, or let's see if I can get Alice. Alice is the one who always goes in the black hole. It's just a convention. I didn't invent it. Alice goes in the black hole, but up till this point over here, she's together with Bob and they're having a conversation. And she tells Bob, look, I got an idea. I want you to go out there and then shoot in a shell. And this is going to be a lot of fun. I'm going to get trapped behind the horizon and annihilated at the singularity. And in fact, I know that that's going to happen even before I can see this shell coming at me. So isn't it interesting? So she could arrange with Bob to contrive, to send in that shell, know that it was going to come in, be over here. But once she gets over there, it is too late for her to do anything about it. She can't get a mirror out there. She can get a mirror over to here. But getting it over to here is no good. Everything's behind the horizon already. Anything back here will hit the singularity. She's got to get the mirror out to here. Okay. So. 
I'm curious about what happens if the mirror is inside the horizon. The mirror seems to, it has to get annihilated. Yeah. The mirror will just hit the singularity. The mirror is just another thing which is doomed. What about any structure that had been built there when it was flat spacetime? You could build a structure over here. Well, let's say a structure over here which reflected light. So that light coming in here would hit the structure, reflect off, and go into the singularity. Mirror, Alice, light, all of it winds up on the singularity. So is there actually a particular time, or is it like infinitely far away? No, it's not infinitely. It's not infinitely far away. There's a finite amount of proper time between here and here. What about the corner there? The corner there is infinitely far away. But only the corner. Only the corner. Right. But the implication is that somebody falling through the horizon has a finite amount of time to survive. Well, they could try to accelerate like mad once they pass the horizon over here and try to save themselves by sticking to that corner. The problem with that is that it would take an infinite amount of acceleration and an infinite amount of energy to make that much of an acceleration. You basically have to accelerate yourself up to the speed of light. So, once you cross there, you're doomed. But you could last a long time if you could accelerate hard enough. Actually, that doesn't work. The more you accelerate, the less time you experience. That's time dilation. It doesn't work. It doesn't work. But yeah, I take it back. Your best strategy is to stand still and just accept your fate. Just to make sure I understand. So, if there's a shell, let's say, a million light years away, no matter what its mass is or its energy equivalent to mass, that thing, if it's going inward, it's going to form a black hole at some point. And if I'm inside that thing, but I'm far enough away from its eventual center, then I could just hang out here and I don't have a problem. Is that right? Sorry, you're at the center of it? No, I'm not at the center. I'm inside it. Inside the shell? No, I'm inside the shell far away from the center, the eventual center. There's no problem for me in the interior? No, sorry, you're inside the shell before the shell crosses you. Yeah, you can just hang out and let the shell cross you. Yeah, absolutely. Yeah. Of course, the shell might hit you and make some damage, but that's a separate issue. Translating, transforming that coordinate system to this one. Would this be analogous here to a particular point in time, whatever that means? That's a point in time. This is a point in time, this picture. Would... some point in time. Would it relate that the distance from the shell to the event horizon is greater than the distance from the event horizon to the person in between? Is that a different way of looking at this picture? Or how fast he accelerates to the eventual event horizon? He can't get through in time before the shell comes in. All right, so let's ask what the relationship between these pictures is. Is... Or, I don't know, maybe I should draw it bigger. Let's draw it bigger and understand. No, we can do it over here, right over here. All right, so here's our Penrose diagram for the formation of the black hole. Here's the horizon. Here's the infalling shell. The infalling shell hits right over here. Okay. Now, in order to speak of an instant of time, which time do we use? Time is just a coordinate.
We can make coordinate transformations. An instant of time really means you pick a surface. You pick a surface which is everywhere's space-like. You pick a surface, here's a surface, here's another surface, but you could pick it, you know, in various kinds of ways. Here's another surface. And these series of surfaces define a time variable. Time is one value on here, another value on here, another value on here, and another value on here. If you follow the time, at this time over here, there's a shell far away. The shell is far away. No horizon has formed yet. So, this point has not been reached yet. A little bit later, the shell is still outside the horizon. All right? You're inside, you're in here. This point over here corresponds to the position of the shell, and so far, nothing has formed on the, well, there's a horizon over here. There is a horizon. At this point, right over here, at that time, right over there, that's the time at which the shell passes its own Schwarzschild radius. It's the time that the shell passes into the interior of the horizon. Here's the horizon. The shell passes into the interior, and once that happens, it can't get out. Okay? The Schwarzschild radius grows larger and larger. What's that? The Schwarzschild radius grows larger and larger, right? It goes up that way. Oh, yeah. You're talking about this. Yeah. Yeah. It's not the Schwarzschild radius. Schwarzschild radius is by definition always the same thing, but the size of the horizon, right at this point here, the horizon starts, okay, the horizon starts very small. Only if you're in right near the center here are you doomed. If you're out here, you still get a chance to get away. That corresponds to being here or here. The little tiny horizon has formed. The little tiny horizon has opened up. If you're in here, you're dead. If you're out here, you got a chance. All right? Same thing here. If you're in here, you're dead. If you're out here, you got a chance. And then without the shell haven't even gotten to you yet, the horizon starts to expand until it gets to this point over here. At that point, the shell crosses it. At that point, the incoming shell crosses the growing horizon. And past that point, the shell is on the inside. The horizon is on the outside. And it should stop growing afterwards. It should stop growing. It does stop growing. It stops growing right at that point. Above it is constant radius line. Yeah, this is a constant radius line. That's correct. It doesn't look like it, but it is. Yeah. Did you draw an observer who's sitting at half of the Schwarzschild radius on that diagram? Yeah. Stationary at half of the Schwarzschild radius? Yeah. Well, all right. Yeah, at some point, it just crosses the horizon. It's half the Schwarzschild radius and crosses the horizon. I thought he wouldn't experience any gravity until it passes to the shell. He doesn't experience any gravity until he crosses the shell. But nevertheless, he's dead. He's not dead yet. He's not dead yet. Nevertheless, it's too late for him to escape. There's no gravity until that. Right. Crossing the horizon does not mean that you sense any sudden gravity taking place. That only happens when you cross the shell. But when you cross the horizon, you know what that means. That means you simply don't get away. You can't get away. So in that sense, the horizon is a little bit ghostly. It doesn't have a material significance in the way the shell does. Somebody falling through it doesn't experience it. In fact, they're down here. 
Somebody falling through it. There's nothing there but flat space. Nevertheless, from that point, cannot travel fast enough to escape the fact that that shell is coming in. This says then that if you start out early enough, and you move faster than the shell grows, then the horizon grows. That's the horizon, though, then you can escape. That's right. Your velocity has to be the same. That's right. From here, you can escape. From here, you can't escape. So your velocity has to be greater than the velocity of the horizon. Well, yes, yes, yes, yes, yes. But that's correct. But you don't see any horizon opening up at you. That's right. You'll have to, you'll have to, you'll have to, that's right. But there's a much simpler way of kind of envisioning it. You have two shells, if the distance between the observer to the inner shell is larger than the inner shell, which is the horizon shell to the radiation shell, no matter how fast he goes, he can't get to that point before they crossed. So he's, he's doomed. That's a much simpler way of envisioning it, I think. That seems third of me. It's a, it's a coordinate, it's warped my brain. You try to do a calculation with that. You find out where, for example, the horizon forms. Try to do that. You'll find, you'll find you can't do it, and you wind up drawing the picture, taking this line, going backward to that point and saying, that's where it is. I don't know. I mean, I assure you, if you want to get a good, get a good picture of what's going on, the way to do it is to become familiar with these diagrams. If you try to reason it out without a good picture in front of you that accurately represents the relationships between the different parts here, and when the relationship, the important relationship, incidentally, is not the relative size of things. Here, there's a huge region of space and time up in this corner here. It's huge, infinite. The same little circle over here might be a tiny little region. So these diagrams are not correctly or faithfully representing size scales. What are they representing? They're representing the motion of light rays. And in Motman, in representing light rays, they're representing what things are in causal relationship to other things. What signals can propagate? Who can send a signal to whom? Can a signal from here get from Alice in here get the bob out here? It reflects, the technical language would be that it reflects the causal structure, cause and effect. Cause and effect mean what can influence what? And the rule is a thing can only influence the things in front of it where light can get to. It cannot influence the things outside the light cone. So these diagrams were intended. They were built, first of all, to be able to draw everything on the blackboard. And second of all, to reflect faithfully the causal relations. What causes an effect? Who can send signals to whom? Who is trapped if they're stuck having to go slower than the speed of light? Those things are correctly reflected here. And so for that reason, these diagrams are very valuable. I can't think without them. If I have to answer a question of the kind you're asking, I have to draw the diagram. And the diagram says things much more clearly than trying to picture, I can't do it. I simply can't. I can't guess whether my guess is a right or I can't tell whether my guess is a right or wrong until I've drawn the diagram. Okay. How long did it take you to be able to be comfortable and intuitive with these diagrams? Not long. 
Just play with them for a while. Yeah. So this is sort of the opposite of what you said in quantum mechanics. Instead of quantum mechanics, you can't think of it in terms of pictures. You have to go to the mathematics to understand it. Yeah. But yeah, keep in mind, I mean, these are, in a certain sense, these are abstract diagrams. But you're right. I mean, you're right. That's absolutely right. General relativity is classical physics and it can be represented. It can be drawn. It can be even pictured to some extent. All right. It's not so easy to draw, to picture four dimensional curved spacetime. But that's hard. Yeah. But still, you can go a lot further in visualizing it than you can quantum mechanics. I think that's right. I think that's an interesting fact. Quantum mechanics is much more abstract in that sense. Okay. We're going to quit a little early tonight.
(November 12, 2012) Leonard Susskind develops the coordinate transformations used to create Penrose diagrams, and then uses them to describe the physics of black hole creation. This is the fourth installment of a six-quarter series exploring the foundations of modern physics. In this quarter Susskind focuses on Einstein's General Theory of Relativity.
10.5446/15039 (DOI)
We want to come now. We're not going to get deep into solving Einstein's field equations. They're awfully damn complicated. If you were to sit down, even to write them down explicitly is complicated. If you really wanted to write them down. As I said before, the principles of general relativity are pretty simple, but it's computationally nasty. Almost everything you try to do gets complicated fast. There's a lot of Christoffel symbols. I forget how many, but a lot of them, independent ones, even more. The difference of the curvature tensor, each Christoffel symbol has a bunch of derivatives. The curvature tensor has more derivatives. The equations get complicated. Hard to write on a single piece of paper. The best way to solve them, in fact, the best way to even write them down is just to feed them into your computer and Mathematica. Mathematica will spit out answers whenever it can. On the other hand, as I said, the basic principles are simple, but going anywhere past the basic things tends to be computationally intensive. We won't do much computation. We'll just concentrate on the meaning of the symbols. Then I'll tell you what happens when you try to solve them in various circumstances. We may get a chance to do a little bit of solving in thinking about gravitational waves next week. No gravitational waves this week, just the equations. Topic tonight is Einstein's field equations. Before we do that, we should talk first of all about the corresponding Newtonian concepts. Let's talk about Newton's version of Einstein field equations. Newton didn't think about fields. He didn't have a concept of field equations. But nevertheless, there are field equations which are equivalent to Newton. First of all, it's always a two-way street that masses affect the gravitational field and the gravitational field affects the way masses move. John Wheeler had some way to say it, which was very clever. I can't remember what it was. You got it. Always these two-way things. So let's talk about the two-way street in the context of Newton. First of all, field affects particles. That's just a statement of F force on a particle, gravitational force on a particle, can be written as minus m times the gradient of the gravitational potential. The gradient of the gravitational potential, the gravitational potential, I usually write as phi, and phi is a function of position. So everywhere's in space due to whatever reason, everywhere's in space, there is a gravitational potential called phi of x, and it varies from place to place. You multiply it by m, the mass of the object, and you take the gradient, and that tells you the force on the object. That's one aspect of field, phi of x, tells particles how to move, and in this case, by telling them what their acceleration should be, if we write that this is equal to ma, then the m's cancel, and we just have the rule that acceleration is equal to minus the gradient of the gravitational potential. So that's field tells particles how to move, and on the other hand, masses in space tell the gravitational field what to be. The equation that tells the gravitational field what to be is the equation, Poisson's equation. Does everybody know what del squared phi means? Del squared phi means the second partial of phi with respect to x squared plus the second partial of phi with respect to y squared plus the same thing as z. 
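In symbols, the two halves of the Newtonian story just described are (a standard restatement, nothing beyond what was said):

\[
\vec F = -\,m\,\nabla\phi \;\;\Longrightarrow\;\; \vec a = -\,\nabla\phi,
\qquad
\nabla^2\phi \;\equiv\; \frac{\partial^2\phi}{\partial x^2}+\frac{\partial^2\phi}{\partial y^2}+\frac{\partial^2\phi}{\partial z^2}.
\]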
It's called del squared phi is equal to something which on the right hand side has the distribution of masses which are the things which are causing the field to be there. So what is on the right hand side? That's a convention. That's a convention. Well, it's not completely a convention, but there's a 4 pi. Yeah, it is a convention, because another factor here is Newton's constant, and if you don't like 4 pi's, you can absorb them into Newton's constant. They'll reappear in other places, so I don't think you're getting away with it by not putting the 4 pi here. Newton's constant, 6.7 times 10 to the minus 11th meters per blah blah whatever it is, times one other thing, the density of mass. The density of mass is a function of position. It's also a function of time. Masses can move around. So on the right hand side we have mass densities, sources. On the left hand side we have the gravitational field, phi. So we have these two aspects. Field tells particles how to move, and mass particles, in other words, mass tells field how to curve, or how to do whatever it is that it does. You can solve this equation, in particular, in a special case, in a special case where rho. First of all, what does rho mean? Rho means the amount of mass per unit volume, mass per volume. In the case where rho of x is concentrated, let's call it a star. It doesn't have to be a star. It could be a planet. It could be a bowling ball, but let's say a spherically symmetric object, a completely spherically symmetric object of total mass m, and it does not matter whether the mass is uniformly distributed in there, meaning to say it has to be symmetric with respect to rotation, but it could be more dense in the inside than the outside. It doesn't matter. Once you get on the outside of where there is any mass, once you get out beyond the region where there is mass, you can solve the equation uniquely, and the solution of the equation is phi is equal to minus the total mass times g. The g is there because there's a g in the source equation here, mass, that's the integrated amount of total mass divided by r. That solves this equation. If you take this phi, which depends on position, and you calculate del squared, you will find that it's zero everywhere outside the, okay, so this is the solution outside. This is the outside solution. I don't care about anything but the outside solution, so I might as well shrink this to a point. If I shrunk it down to a point, then this solution would be valid everywhere except at that point. The point has a total mass m. All right, so that would then be Newton's equations. If I then plug this in. Excuse me, any questions? Yeah. So in this case rho is constant on the inside of that mass and. No, not necessarily constant. But symmetric. Symmetric, yeah. And zero outside. Okay. Yeah. So it's just continuous at the bottom. Not necessarily. The mass density could go to zero. Right. But it doesn't matter, you know, that doesn't matter. Okay. The point is that if I take phi written this way and then I take its gradient, it just puts another r downstairs, differentiating one over r with respect to radius, which is what del does, makes it one over r squared, and winds up giving you f is equal to little m, the mass of the particle, big m, Newton's constant divided by r squared. So it's all there. And this is a field way, a field kind of primitive field theoretic way to think about gravitation. Instead of action at a distance, we have a gravitational field. 
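Collecting the formulas just described in one place (same content, only written out):

\[
\nabla^2\phi(x) = 4\pi G\,\rho(x),
\qquad
\phi(r) = -\frac{MG}{r}\quad(\text{outside the mass}),
\qquad
\vec F = -\,m\nabla\phi = -\frac{mMG}{r^2}\,\hat r .
\]

One can check that the middle expression satisfies \(\nabla^2\phi = 0\) everywhere outside the source, and that its gradient reproduces the inverse-square force law on the right.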
It's still really action at a distance because in Newton's theory, if you move around a mass, phi instantly reacts to it and changes. But it's a way of writing the theory, making it look like a field theory. Okay. This is the thing that we want to replace right here. We want to replace it by something which makes sense relativistically, which makes sense in general relativity. Just to get a little bit of a handle to get started, let's remember something about the Schwarzschild geometry. We pulled the Schwarzschild geometry out of a hat. Of course, the point is that it's a solution of Einstein's equations. This is Newton's equation. It's a solution of Einstein's equations. But let's just remember what we wrote down. We wrote down a metric. I'm not going to write the whole thing. I'm just going to remind you what G naught naught was. The time-time component of the metric was 1 minus 2 mg over r. Oh, I erased what the solution was, didn't I? Let me put back the solution for a minute. Phi equals minus mg over r. Okay. Now let's write Schwarzschild over here. The only part of it that I'm interested in right now is just to remind you, G naught naught is 1 minus 2 mg over r. I've set the speed of light everywhere equal to 1 as usual. So c is equal to 1; I don't need it in this equation. And now I see that to the extent that this can be identified, that 1 minus 2 mg over r can be identified with anything over here, we could think of this then as something like del squared. Let's see, I'm going to get the sign right. I always have trouble with the sign here. I think it's del squared G naught naught. Is it minus or plus? I have written down plus, but I think I mean minus because of this sign, because of that sign here. Del squared of G naught naught. G naught naught is just 1 minus, oh, it's 1 plus 2 phi. It's equal to 1 plus 2 phi. 1 plus 2 phi. So del squared G naught naught is just twice del squared phi, the 1 having no derivatives. Derivative of 1 is 0. Del squared G naught naught is just twice del squared of phi. So del squared G naught naught is equal not to 4 pi, but to 8 pi G rho. Now this should be taken with a grain of salt. This is just a mnemonic device to remember the relationship between some aspects of general relativity and matter. So already this begins to sound like matter or mass is affecting the geometry. When we make this correspondence between Newton's phi and Einstein's or Schwarzschild's metric, we see that roughly it looks like, I use the word roughly because we're going to be more precise, but it has the rough texture of del squared of G naught naught is equal to 8 pi G rho, matter telling geometry how to curve, so to speak. All right, but of course these are not Einstein's equations here. Einstein's equations are a good deal more complicated than that. Oh, the other half of the story, of course, is that in general relativity, this equation over here is replaced by the statement that once you know the geometry, once you know G naught naught, the rule is particles move on spacetime geodesics. So this equation becomes, is replaced by the geodesic rule, and the Newtonian field equation is replaced by something which I'm kind of naively just writing in this form. We're going to do better. We're going to figure out exactly what goes there. Okay, before we do, before we write down the field equations, we need to understand more about the right hand side. The right hand side is the density of matter, density of mass. Mass really means energy, E equals mc squared.
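Written out, the mnemonic being set up here (and it is only a mnemonic, as emphasized; the honest version involves the full Einstein tensor) reads

\[
g_{00} = 1 - \frac{2MG}{r} = 1 + 2\phi,
\qquad
\nabla^2 g_{00} = 2\,\nabla^2\phi = 8\pi G\,\rho,
\]

using \(\phi = -MG/r\) and \(c=1\), with the overall sign depending on the signature convention, which is exactly the sign being fussed over on the blackboard.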
If we forget about C and set it equal to 1, then energy and mass are the same thing. And so really what goes on the right hand side is energy density. We need to understand more what kind of quantity in relativity energy density is. It's part of a complex of things which includes more than just the energy density. It's part of a complex, in other words, it's part of some kind of tensor whose other components have other meanings. So let's go back and review quickly a little bit about the notion of conservation, in this case conservation of energy and momentum. In a simpler case that we're going to discuss in a moment, conservation of charge. Conservation, densities, flows of things like charge and mass. Let's just review a little bit that we've gone through before, but let's do it again. Let me start with electric charge. Electric charge is simpler than energy for reasons we will come to. Electric charge, the total electric charge of a system we can call Q. That's a standard notation for electric charge Q. I don't know where it comes from. Charge density, in many situations, charge density is called rho, but I don't want to confuse it with the energy density or the mass density over here. So I'm going to give it another name. I'm going to call the charge density sigma. And what is charge density? Charge density is you take a small volume, a differential small volume, you take the total amount of charge in there divided by the volume. So it's charge per unit volume in the limit of small volume, differential volume. And we'll call that sigma. Sigma equals schematically Q divided by volume. At least it has units of charge divided by volume. We could draw this in another way. Notice if we draw time this way and space this way and draw a little element of space over here. Now that little element of space, a little volume of space in this picture is two-dimensional. Why is it two-dimensional, that little element of space? Because I didn't draw the third dimension, that's all. All right, that's really three-dimensional, but it's too hard to draw it on a blackboard. And now we have some charge which is moving around and passes into that little region there. There's other charges out here which don't pass through it. When I want to count the charge in a volume of space at a given instant of time, it's almost like asking for the charges which pass through a little cubic area here, which is now being represented as a little square area. All right, so it's charge per unit volume and you can draw it like this. Next concept, this is a sigma. The next concept is the flow of charge, also called current. Now what we do is we take a little window. Let's forget this picture for a moment. We take a little window in space. This is a window in space now. It's a window, like a window, a room doesn't have windows, but I mean literally a small window. It's not necessarily where the window is. It's any place I want to put it. It can orient itself in any direction. The window is characterized by an area, which I'll take to be infinitely small, infinitesimal. An orientation and a sense of direction through the window. A window pointing that way is the opposite of a window pointing that way. All right, so this window is characterized by a little area and an orientation, an angle, and a sense along it. The current has to do with the amount of charge passing through that window per unit time, per unit area. 
Windows have areas they don't have volumes, but we also have to have a clock and allow the clock to proceed for a small amount of time in order to ask how much charge flows through that window per unit time. Charge J, and in particular if the window is oriented along the X, M axis, X1, X2, X3, or X, Y, and Z, if the window is oriented along the X, M axis, then J, M is a vector which is equal to the charge through that window per unit area, per unit time. Now, in this case here, sigma was a charge divided by a product of three lengths, volume. Here it's a charge divided by a product of two lengths times a time, but in relativity, time and space are nice and symmetric to each other. This dividing by area times time is again dividing by three lengths. One happens to be time-like, two happen to be space-like, but it has the same units as sigma, if C is set equal to 1. If C is set equal to 1, then space and time have the same units. Both of these can be thought of as charged through a window. In this case, the window is completely three space dimensions. In this case, the window is two space dimensions and one time dimension, but they're similar creatures. Sigma the charge density and the current of charge, this is called current, space current of charge, those four things together form a four vector. They form a four vector in the sense of relativity, sigma and J sub m together form a four vector J sub mu, or super mu, with J zero being the charge density and J1, J2 and J3 being the three components of the current. One more thing, conservation of charge. Conservation of charge is a local idea. What do I mean by saying it's local? Well conservation of charge could allow, just sheer conservation, could allow a charge, this blackboard array service may have a little bit of charge on it, might allow it to disappear over here and instantaneously appear over here. In fact, it could disappear over here and reappear at Alpha Centauri. I always use Alpha Centauri as some place which is so far away that it doesn't matter. That would mean that if that were possible, you would say, well charge is conserved, but I would say who cares if charge is conserved if it can just disappear? If it can just disappear arbitrarily to some very distant place, it's just as good as saying it wasn't conserved. In my laboratory, it just disappears. Charge doesn't disappear that way. If it leaves the laboratory, it passes through the walls of the laboratory. That means it passes through windows. That means it cannot leave the laboratory without a current flowing and that current has to flow out of the walls. That idea is called continuity and there's an equation that goes with it. The equation is the continuity equation. If I take a little box and I look at all the charges passing out through the walls, all the flow of charges passing out through the walls, if I'm interested in the charge per unit time that disappears out of the box, let's say it's a unit box, then the amount of charge in that unit box is just sigma if the volume is equal to 1 in some units. The charge per unit time that is leaving the box is minus sigma dot. Why minus? Because it's leaving the box. If it's leaving the box, sigma is getting smaller. That has to be equal to the sum of the currents passing out through the box and by some Gauss theorem or something, that's just equal to the divergence of the current. The same current here. 
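Summarizing the definitions in one line (nothing new, just notation):

\[
J^\mu = \bigl(J^0, J^1, J^2, J^3\bigr) = \bigl(\sigma, \vec J\,\bigr),
\qquad
\sigma = \frac{\text{charge}}{\text{volume}},
\qquad
J^m = \frac{\text{charge through a window normal to } x^m}{\text{area}\times\text{time}} .
\]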
The divergence of the current is simply the total amount of current passing out of that small box, the divergence of the current inside the vicinity of the box. That's the current passing out and this says that sigma dot is minus the divergence of the current. We can also write this of course as, well first of all we can write it as sigma dot plus the divergence of the current is equal to zero. And that means the derivative of sigma with respect to time and this means the various components of the derivative of the current with respect to the corresponding direction of space. So that current is 3-vector current. It's a 3-vector current. It's equal to zero. And now if I write that sigma is J naught, this just becomes the nice elegantant t. T is x naught, sigma is J naught, this just becomes the nice equation that derivative of Jm, J mu, space time index, not just space index. Space time index J mu with respect to x mu is equal to zero. Incidentally I suppose if I wanted to be really systematic I would put a sum over m here. Sigma dot, sigma by dt plus sum on m, the J1 by dx1, the J2 by dx2, the J3 by dx3. This just becomes the derivative of Jmu with respect to x mu where I have now used the summation convention. Here summation convention implied or implicit. Same thing is equal to zero. It becomes a nice tensor type equation. J is a four vector, x are four components of space, and this has the nice look of a good equation, the derivative of a tensor with respect to position. In curved coordinates, in general if you had a thing like this, in curved coordinates, this would be correct in ordinary coordinates, in curved coordinates you might replace this by the covariant derivative. Remember about covariant derivatives of tensors. It turns out in this case it doesn't matter for charge currents, it doesn't matter, but in general it would matter when you go to curved coordinates you should replace all derivatives by covariant derivatives. Otherwise the equations are not good tensor equations. Why do you want tensor equations? You want tensor equations because you want them to be true in any set of coordinates. So anyway that's the theory of electric charge, flow, current, and the continuity equation. This is called the continuity equation, and the physics of it is that when charge either reappears or disappears in a small volume, it is always traceable to currents flowing into or out through the boundaries of that region. Now let's come to energy and momentum. Energy and momentum are also conserved quantities. They can be described in terms of density of energy, density of momentum, density of each component of momentum. You can ask how much energy in a form of particles or whatever it happens to be, including the mc squared part of the energy, you can ask how much energy is in a volume, you can ask how much momentum is in a volume, just look at all the particles within a volume and count up their momentum. Photons or electromagnetic radiation has both energy and momentum, and that energy and momentum can be regarded as the integral of a density. So in that sense, each one of them, each the energy and each component of the momentum are like the charge. They're conserved, they can flow. If an object or a thing is moving, then the momentum and energy may be flowing, and the question is how do we represent the same set of ideas for energy and momentum? Now there is a difference between charge and energy and momentum. Electric charge is an invariant. 
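Before moving on to energy and momentum, here is the continuity statement of the last few paragraphs in one line, together with the covariant form that was just mentioned (the last equality is the standard identity for the covariant divergence of a vector, which is why, for the charge current, the distinction turns out to be harmless):

\[
\dot\sigma + \vec\nabla\cdot\vec J = 0
\;\;\Longleftrightarrow\;\;
\partial_\mu J^\mu = 0,
\qquad
D_\mu J^\mu = \frac{1}{\sqrt{-g}}\,\partial_\mu\!\bigl(\sqrt{-g}\,J^\mu\bigr) = 0 .
\]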
No matter how the charge is moving, the charge of an electron is always the same. The charge of an electron does not depend on its state of motion. Therefore charge itself is an invariant. Charge is an invariant, the density of charge and the current of charge are not invariants. For example, if I have a given charge and I look at it in a different frame of reference, here's my charge, but I walk by it, I have trouble doing it, you know what I mean? I walk by it with a certain velocity, I think I did that pretty well, didn't I? Yeah. Okay, let's try it again. Yeah, good. Certain amount of charge, I look at that charge and because of Lorentz contraction, I say that the volume of that charge is one thing. You sitting still assign a different volume to it, right? One of us assigns a smaller volume, let's say you assign a bigger volume, I assign a smaller volume because I see it Lorentz contracted. If we take the charge and we divide it by the volume, we will not agree about the value of the charge density, but that's okay. Charge density is the component of a four vector, it's not an invariant. Charge density is a component of a four vector. Similarly, you see charges, well let's say, yes, you see charges standing still and you say there is no current. Why? Because charges are at rest, all of them. I'm moving and I see a wind of charges passing me, I say there's current. Right? We're both right, of course. Charge density and charge current are not invariants, they form together a four vector. Okay? Now, energy and momentum are more complicated. The total energy and momentum, not the density of them, but the total energy and momentum are not invariant. I see a particle standing still, the whole particle, not the density, I see a particle standing still and I say there's some energy there of a certain magnitude. You're walking past it and you see not just the E equals MC squared part of the energy, but you also see kinetic energy of motion. You walking past the particle or the object sees more energy not because of any Lorentz contraction of the volume that it's in, but just because the same object when you look at it has more energy than when I look at it. The same is true of the total momentum, not the flow, not the density of it. The same is true of momentum. You see an object in motion, you say there's momentum there. I see the object at rest, I say there's no momentum. So energy and momentum, unlike charge, are not invariant. They together form the components of a four vector. In that four vector, P mu includes the energy and the components of momentum P m where m labels the directions of space. Each one of these is like a Q. It's a conserved quantity, each one of them is like a Q. Now energy can be in motion and we can ask before it's in motion, energy has a density associated with it. Energy has a density, oh incidentally, E is also P naught. Good. Come over to here for a minute. Density was the time component of J. Because it's a density, we'll assign it a component zero, zero for time, it's the time component of a density. Let's think about the time component of the energy. P naught is the energy, but let's think about the time component of its density. Not the total energy, but the time component of the density. In other words, how much energy is in a small volume? Not the total energy, but energy within a small volume. We're going to call that T naught naught. Now where the notation T came from? I don't know. No, I don't know where, oh yes. 
At some point in time it was tension, but it's a long historical evolution. T naught naught. Now what are the two indices naught naught? The first naught indicates that we're talking about energy. The second naught indicates that we're talking about its density. So you can think of this then as the time component, meaning the density, of a thing which is itself a time component, namely the energy. T naught naught, and it's a function of position. If you integrate it over position, it tells you how much total energy you have. But energy can also move. Energy can move from place to place. And like momentum, sorry, like charge, when energy disappears out of a region, it does so because it passes out through the walls of the region. And so energy also has a flow. It's exactly the same idea. The amount of energy passing through a window per unit time is the current of energy, if you like. We don't call it the current of energy. We call it T naught 1. T naught naught is density of energy. And again, I'll emphasize the fact that its density is one of these zeros. I think it's this one. And the fact that its energy is the first entry here. Next, the flow of energy along the direction x1. The amount of energy passing through a window oriented along the x1 axis, that's called T naught. Not because it's energy, but then one because it's a flow along the x1 direction. Likewise, there's a T naught 2 and a T naught 3. These three together form the flow of energy. And this is the density of energy. The same way as the continuity equation is derived, the continuity equation for energy is derived, and what does it say? For the moment, the first index here is just passive. It just tells us what we're talking about. We're talking about energy. It's the zero, one, two, and three here, which are like the components of the current here. All right, so what it tells us is that the covariant derivative with respect to x mu of T naught mu is equal to zero. This is the analogous continuity equation for energy. But everything that I said about energy, we could now say for any one of the components of the momentum. So let's go to the components of the momentum now. Let's say the component Pm. The component Pm also has a density. So let's put it up here, component M of momentum. It also has a density, and that density is called T1, well, M, Tm naught. The naught here indicates that it's a density. The M here indicates that we're talking about the mth component of the momentum. So we read this. The density of the mth component of momentum, it's also a function of x. And likewise, we can consider the flow of the mth component of momentum. Mth component of momentum flow along direction n of x. Momentum is a conserved quantity. It can flow, each of its components can flow along some direction. Let me give you an example, some examples. Good question? Yeah? Can you say x is really the m component of x, correct? Here and here? That just means a function of position. Any position. It means a function of position. Yeah. It means x, y, and z, all of them. So we can go a little further, and we can say the same equation is true even if we replace energy by a component of momentum. In other words, we could replace naught by n. But now we have an equation like this for all four possibilities. We can just call this key nu mu. For each nu, in other words, nu could be time, in which case we're talking about energy, or t could be space, in which case we're talking about, right. 
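(In the same notation, with T naught naught the energy density and T naught m the flow of energy along the x m direction, the continuity equation just described for energy, and its generalization to any component nu, reads as follows; again, covariant derivatives would replace the ordinary ones in curved coordinates:
\[
\frac{\partial T^{00}}{\partial t} + \frac{\partial T^{0m}}{\partial x^{m}} = 0,
\qquad\text{and more generally}\qquad
\partial_{\mu} T^{\nu\mu} = 0 . )
\]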
Now we have basically what it comes down to is that the flow and densities of energy and momentum form a tensor with two indices. One index tells us who we're talking about, energy over momentum. The other index tells us we're talking about density or flow. And that is called the energy momentum tensor. The energy momentum tensor, whoops. Let's see. The energy momentum tensor has an interesting property which I have not proved. But for example, take tm naught. That is the density of the mth component of momentum. Compare it with t naught m. That's the flow of the energy. This is flow of energy. This is density of momentum. It's a general property of relativistic systems, which I'm not going to prove now, which tells you that this matrix is symmetric. tm naught is equal to t naught m. We're not going to prove it. It takes a little bit of work to prove it. It's proved relativistic invariance allows you to connect this with this. And there's a theorem of relativistic mechanics that are relativistic field theory, essentially. All relativistic field theories. The energy momentum tensor is symmetric. So let's add that in. And then we have the energy momentum tensor of big square matrix. t naught 1, t naught 2, blah, blah, blah. Well, t naught 3, that's all. t 1 naught, t 1 1, and so forth and so on. We'll come back to the meaning of these elements in a little while. This one is clear. This is energy density. These are fairly clear. They're flow of energy. This one is momentum density, the flow of momentum. So they're pretty clear what they mean. But we're going to find out that some of these elements have another meaning connected with pressure, things like pressure, things of that nature. We'll come back to that in a while. But at the moment, the important idea is that the flow and density of energy and momentum are combined into an energy momentum tensor. And each component of the energy, well, the energy momentum tensor satisfies a continuity equation for continuity equations, one for each type of stuff that we're talking about. OK, we'll come back to pressure in a little while. Is essentially the second rank or index of the tensor just because it's not invariant? The total energy momentum is not invariant, like total charge? Total energy and momentum is not invariant. That's the first index. I'm comparing it to the charge and current equation. You got the extra index essentially came out of that, but you don't have any. This mu here does not tell us what quantity we're talking about. We know we're talking about electric charge. So we could write this in another way. We could say this is J charge mu. I just put the Q up here to remind us what stuff we're talking about. The mu over here tells us about the direction of flow when it's time-like, its density, when it's space-like, its flow. So the second index is the thing which distinguishes flow along which axis. The first index here tells us what quantity we're talking about. Same here. T naught tells us we're talking about energy. This M tells us we're talking about the flow of it along the axis M. How much of it is flowing out to a window-oriented along the M axis? But fortunately, we don't have to remember T naught M and T naught N. They're equal. That's the symmetry. We don't have to remember which or... It's easy enough to remember that one of these must correspond to the fact that it's energy and the other does flow, but it's not important to remember which is which because they're interchangeable to indices here. 
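(Laid out as the square array the lecture describes, with the symmetry property included, the energy momentum tensor looks like the following sketch; the reading of the purely spatial entries in terms of pressure and stress is the part deferred to later:
\[
T^{\mu\nu} =
\begin{pmatrix}
T^{00} & T^{01} & T^{02} & T^{03}\\
T^{10} & T^{11} & T^{12} & T^{13}\\
T^{20} & T^{21} & T^{22} & T^{23}\\
T^{30} & T^{31} & T^{32} & T^{33}
\end{pmatrix},
\qquad T^{\mu\nu} = T^{\nu\mu},
\]
with the first row and column carrying the energy density and energy flow, the T with m naught entries the momentum densities, and the T with m n entries the flow of momentum.)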
Okay, so now let's return... Excuse me. That entire matrix is symmetric. Is that correct? Yes, it is. That's just the M naught. No, that's right. The entire matrix is symmetric. Right. That's correct. We'll come back to its structure in a little while, maybe not tonight, but... And some of the meaning of its elements. But for the moment, what we've learned now is that the notion of energy density is incomplete. It's part of a multiplicative thing. It's part of a... Well, the word is tensor, of course. It's part of a tensor. Well the right-hand side of this equation is part of a tensor. The left-hand side must also be part of a tensor, but whenever you have a tensor equation, you can't have a tensor equation that says some particular component of a tensor is equal to some other, the same component of some other tensor. Let's say that is an example. Let's take some particular component of a four-vector A. Let's say A3. It's a four-vector, and this is the third component of it. And I assert that there's a law of physics that says that A3 is equal to B3. Three being the z-comb of the z-direction. Does that make sense as a law of physics? Well, it only makes sense as a law of physics if it is also true that A2 equals B2 and A1 equals B1. Why is that? Why can't you just have a law that says that the third component of a vector along the z-axis is equal to the third component of some other vector and not have that the other two components are equal? Well, it's an example that if it is always true in every frame of reference that the third component of A is equal to the third component of B, if it's true in every frame of reference, then by rotating the frame of reference, we can rotate A3, we can rotate the third axis until it becomes the second axis. And so if it's true in every frame of reference that A3 is equal to B3, then A2 must be equal to B2 and A1 must be equal to B1 if it's true-be-true in every frame of reference. That's an example of why equations need to be tensor equations of the form A sub m equals B sub m or vector equations, full vector equations. Okay, when you go to relativity, the same thing is true even concluding the time component of equations. If it were true, for example, in some frame of reference, no, in every frame of reference, that a certain four vector, now this is a four vector, this is the time component of it, is always equal to B0 in every frame of reference, then the only way that can be true is if all components are equal in every frame of reference, A mu is equal to B mu. For the same reason, Lorentz transformations are not so different from rotations, and unless all the components are equal, you'll always be able to find a frame of reference in which A0 will not be equal to B0 unless all four components are equal. So good laws of physics must be tensor laws of physics, in particular if they're to be true in every frame of reference. Now here we have an equation that involves a right-hand side, which is a particular component of some tensor. It's a component, a not-not component of the energy momentum tensor. Let's not worry too much about whether this left-hand side was just a guess of what the left-hand side might look like, but the right-hand side is the energy density, and it's what you expect to be on the right-hand side of Newton's equations. 
Sorry, the right-hand side of Newton's equations, but the right-hand side of Einstein's equations must involve not a particular component of a tensor, but it must generalize to something that involves all components of the tensor. So that means Einstein's generalization of Newton must read something like this. The right-hand side must be 8 pi g, same 8 pi g, times T mu nu, a special case being when mu and nu are both equal to time, and then that becomes the energy density. But if the equation is to be true in every frame, it has to be a tensor equation. What has to be on the left-hand side? What has to be on the left-hand side must also be a tensor with two components, a rank two tensor. Otherwise the equation doesn't make sense. So on the left-hand side must be some tensor with the same kind of tensor structure. It must be symmetric because the right-hand side is symmetric and have whatever properties the right-hand side has, but it's not something which is made up out of matter, it's made up out of the metric. It's something which is made up out of the metric, it has to do with geometry and not with masses and sources. So the left-hand side we will just say, we'll call it capital G mu nu. The only thing we know about it is it's made up out of the metric, it probably has two derivatives in it. To compare it with this here, it will involve the metric in some form and very likely two derivatives of the metric. That's the kind of thing we would like to find, to put on the left-hand side, and when we find a good candidate for it, we can then ask if we're in a situation where non-relativistic physics should be a good approximation, does this G mu nu reduce to just del squared G naught naught? Perhaps it does, perhaps it doesn't. If it doesn't, then we throw it away and try to find a different rule. So we need to know what kind of thing G mu nu can be. So let's explore the possibilities. G mu nu is a tensor made up out of the metric. It has two derivatives, or at least it must have some terms which have two derivatives to make it look like that. So it's not just a metric by itself, has to be a thing with two derivatives. What kind of tensor can we make out of the metric and two derivatives? All right, so we've already talked about one. It was the curvature tensor. Let me remind you about the curvature tensor. The curvature tensor was made up out of the Christoffel symbols. Now, for our purposes tonight, I'm just writing down these equations as reminders. You start with the Christoffel symbols, and I'll just remind you what they look like. This is equal to 1 half G sigma delta times d tau, d tau means d by dx tau, G delta nu plus derivative with respect to nu of G delta tau. The first two terms are gotten by interchanging tau and nu, and then the last one is minus derivative with respect to delta, I believe, of G nu tau. Now, the only important thing right now is this involves the derivative of a G. It involves the first derivative of a G, one derivative. All right, next, what about the curvature tensor? Now, I'm going to write the curvature tensor down, but the only important part of it is that it involves another derivative. So here it is, the curvature tensor on all its glory, R mu nu tau sigma. And this is the thing which tells us that there's real curvature. If any component of this is non-zero anywhere in a region of space, the space is curved. So what is this equal to? It's equal to, I think, d mu gamma nu sigma tau minus d nu gamma sigma mu tau. 
And then there's another term with gamma nu lambda sigma gamma mu tau lambda. I'm not sure there's any reason to rewriting this down. It's just the general overall structure which is interesting for reasons mu lambda sigma gamma nu tau lambda. Summation convention assumed. Main point is the Christoffel symbol, which is not a tensor, has first derivatives of the metric. And the curvature tensor has first derivatives of the Christoffel symbol. It also has things which are quadratic in the Christoffel symbol. This means these terms here have second derivatives of the metric, the derivatives of derivatives. This term here has also two derivatives, squares of first derivatives. So this is the kind of thing we like to see. We like to see an R, we like to see a tensor which involves two derivatives of the metric. And that's it, two derivatives of components of the metric. That's a candidate, or components of this, various components of it, are candidates to appear on the left-hand side of this equation. But wait, R has four components. The curvature tensor, the Riemann curvature tensor, has four components. The left-hand side of this equation only is allowed to have two components. Why? Because the right-hand side has two components. So this can't be the left-hand side of the equation itself. What can you do to it to make a thing with two components? You can make a thing with two components by contraction, by contraction of components. Remember the rule, if you set sigma, for example, I'll tell you what happened, you could set sigma equal to tau in sum, you would get zero. If you actually plug in what the curvature tensor is, if you set sigma equal to tau in sum, you will get zero. If you set sigma equal to nu in sum, you won't get sigma, you won't get zero, you'll get something that we can call the tensor R mu tau. I have a question. You're going to run into a problem contracting this because you get a covariant tensor where you need a countervariant tensor. You can all, exactly what I was coming to next. Okay, right. We can build a tensor from a tensor with four components, you can build a tensor with two components by contracting indices, but you have to be careful that you don't get zero. You will get zero, some of the symmetries of this thing, the various minus signs that appear here. In fact, the various minus signs that appear here will wind up giving you zero when you contract sigma with tau. They will not give you zero when you contract sigma with nu or sigma with mu, but the two tensors you get by contracting sigma with nu and sigma with mu happen to be the same tensor apart from a sign. So there's really only one thing you can build. It's a theorem, well-known theorem, I don't know whose theorem it's called. There's only one tensor that you can build out of two derivatives acting on the metric which has only two indices. It's called the Ricci tensor and it's a contraction of the Riemann tensor. It has less information. The Riemann tensor has a lot more components than the Ricci tensor. It has less information, meaning to say it doesn't, it itself can be zero without the full Riemann tensor being zero. So this is called the Ricci tensor. And as always, if you have a tensor, you can raise and lower its indices. That means there's also a thing called R mu tau. There's also a thing called R mu tau and also R mu tau with both of them raised. You can raise and lower indices using the metric tensor. 
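(For reference, the objects being recalled on the board, written out in one common convention; index placement and overall signs differ between textbooks, so take this as a sketch rather than the blackboard's exact ordering:
\[
\Gamma^{\sigma}{}_{\tau\nu} = \tfrac{1}{2}\, g^{\sigma\delta}\!\left(\partial_{\tau} g_{\delta\nu} + \partial_{\nu} g_{\delta\tau} - \partial_{\delta} g_{\nu\tau}\right),
\]
\[
R^{\rho}{}_{\sigma\mu\nu} = \partial_{\mu}\Gamma^{\rho}{}_{\nu\sigma} - \partial_{\nu}\Gamma^{\rho}{}_{\mu\sigma} + \Gamma^{\rho}{}_{\mu\lambda}\Gamma^{\lambda}{}_{\nu\sigma} - \Gamma^{\rho}{}_{\nu\lambda}\Gamma^{\lambda}{}_{\mu\sigma},
\qquad
R_{\sigma\nu} = R^{\rho}{}_{\sigma\rho\nu},
\qquad
R = g^{\sigma\nu} R_{\sigma\nu} . )
\]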
And so you ask, now I'm telling you, for each such tensor there's another one with upper indices. Another fact about the Ricci tensor is that it happens to be symmetric. In particular, R mu tau equals R tau mu. Now that you can just check by using its definition, I don't know a simple quick argument about it. All of these things are fairly complicated. This Ricci tensor is symmetric. It's just a property that it has, defined the way it is. And so a possible left-hand side would be the Ricci tensor: R mu nu equals 8 pi g T mu nu. Now there's another one that you can make. There's another one you can make. Another tensor that you can make. This was not unique. Only one other, only one other possibility. And that's to begin by contracting mu and tau. You can contract mu and tau by multiplying this object that's called R. It's a scalar. It's a scalar. It's R mu mu, which is also equal to g mu tau times R, oops, g mu tau times R mu tau. The action of this g lowers the index tau and makes it into mu, and then it becomes just exactly this thing here. These are the same object. This is called the curvature scalar. It's a scalar. It has no indices left at all. So it's not what we want on the left-hand side. But we can multiply it by g mu nu. Multiplying it by g mu nu does give us a tensor. That's another possibility. I'm not recommending either of these at the moment. I'm just saying from what we've said up till now, either of these could be possible laws of gravitation. They both involve second derivatives of the metric tensor, equaling something on the right-hand side, which looks like a density of energy and momentum. OK, which one shall we pick? Well, we know one more thing. We know one more thing, and that's the conservation of energy and momentum, or better yet, the continuity equation for energy momentum. If we believe that energy and momentum has the property that it only disappears if it flows through the walls of systems, in other words, if there's a flow, then we are forced to the conclusion that d mu, the covariant derivative, of T mu nu is equal to 0. That's the continuity equation. If G mu nu is to be equal to 8 pi g T mu nu, it follows that the covariant derivative of G mu nu must also be equal to 0. So the first thing we could do is we could check whether either of these two tensors satisfies this relationship. If not, then the left-hand side simply can't be the right-hand side unless we give up conservation and continuity of the energy and momentum. All right, I'll check this one for you. Let's just check this one, and then I'll tell you what the other one does. Let's check, let's calculate d mu of g mu nu R. I want to put some parentheses around this. OK, the first fact is covariant derivatives satisfy the usual product law, that this is equal to d mu g mu nu times R plus g mu nu d mu R. That's just the product rule for derivatives; covariant derivatives are no exception. This is true. Now, what about the covariant derivative of the metric tensor? What is the covariant derivative of the metric tensor? 0. 0. 0, covariantly. That's where we started. That's how we calculated the Christoffel symbols, by starting with the assumption that the covariant derivative of g is equal to 0. And the reason for that is because covariant derivatives are by definition tensors which in the special good frame of reference are equal to ordinary derivatives. But the ordinary derivative of g in the good frame of reference, the good frame of reference being the frame of reference in which the derivatives of g are 0.
So the covariant derivative of g is equal to 0. This term is not there. And now r is a scalar. The covariant derivative of a scalar is just the ordinary derivative. Scalers don't have any indices. Derivatives, covariant derivatives of scalers are just ordinary derivatives. What do I write here? r plus. That went away. Yeah. That went away. Okay, what about this? This is just equal to g mu nu times the derivative of r. Well certainly in general, the derivative of the curvature is not 0. In general, the derivative of curvature, the curvature scalar, if the curvature scalar was constant everywhere, as we know that's not true, we know that there can be geometries which are more curved in one place, less curved in other places, even flat in some places. Certainly it is not the case that the derivative of r is identically equal to 0. And the g mu nu here doesn't help. You can lower the g mu nu, get rid of it, and we'll just say that the derivative of r is equal to 0. So here's what we find. First of all, the covariant derivative of this guy over here is not equal to 0, but it is equal to just g mu nu d mu r. So it doesn't work. No good. Bad. What about this one? Well we do the same thing. We calculate d mu of r mu nu, and it's a little harder, but not much. A little bit harder, I'll tell you what the answer is. It's equal to 1 half g mu nu d mu r. Again, it can't be 0 for the same reason that this one can't be, and happens to be exactly 1 half the corresponding covariant derivative. But now we know the answer. So we take g mu nu r and subtract off 1 half r mu nu, or better yet, take r mu nu and subtract off 2 or 1 half. So what do I know? r derivative of r mu nu is equal to 1 half. Yeah I think I got it right. But I now have an obvious thing to do. We combine this with this r mu nu with some coefficient times g mu nu, and then we will get a thing whose appropriate derivative is equal to 0. So what is g mu nu? g mu nu is r mu nu minus 1 half g mu nu r. It's a theorem that there is nothing else made up out of two derivatives acting on the metric that has the property that it's covariantly conserved. This would be called covariantly conserved. Just is nothing else. So either, oh I take it back of course, you could have twice it or half of it or 17 times it. But now it just becomes a question of matching this equation to Newton's equations in the appropriate approximations where everything is moving non-relativistically. And one of two possibilities, either there is some correct numerical multiple of this which matches this or there isn't. If there isn't, then we're in trouble. Then we're in trouble. When I say it matches it, we look at the time, time component. The time, time component of this equation has rho on the right hand side. It has t naught naught on the right hand side. T naught naught is rho. So we take this equation in the non-relativistic limit. Everybody moving slowly, not too strong a gravitational field. We plug it in and we hope that with some appropriate numerical coefficient here, this equation and this equation are the same, in an appropriate limit. The answer is yes they are the same and they're the same with coefficient one, numerical coefficient one. It's a piece of luck that it turned out with coefficient one, nothing deep about that. And this is what Einstein, this was Einstein's calculation. He knew what was going on pretty much but he didn't quite know what the right equation of motion was. I believe in the beginning he actually did try r mu nu equal to t mu nu. 
And eventually found, realized that it didn't work. I don't know how many weeks of work it took him to do all of this but in the end he discovered capital G mu nu equals r mu nu minus one half g mu nu r. This is called the Einstein tensor. This is the Ricci tensor and this is the curvature scalar. So this is now known as Einstein's field equations and they do reproduce Newton in the appropriate limit. But now we see something interesting. We see that in general the source of the gravitational field is not just energy density but it can involve energy flow, it can involve momentum density and it can even involve momentum flow. Now as a rule the momentum flow or even the energy flow, certainly the momentum flow but even the momentum density are much smaller than the energy density. Why do I say that? It has to do with the speeds of light in the formulas. If you put the speeds of light into the formulas, just like energy is always huge because it gets multiplied by c squared but on the other hand momentum is typically not huge because it's just mass times velocity so velocity is slow. If you're in a non-relativistic situation when velocity is slow, energy density is by far the biggest thing, the other components of the energy momentum tensor are much smaller, typically decreased by powers of the speed of light. So in the various non-relativistic situations, the only thing that's important on the right hand side in a frame of reference where the sources are moving slowly, in a frame of reference where the sources are moving slowly, the only important thing on the right hand side is rho. It's also true that in the same limit the only important thing on the left hand side is the second derivative of g, of g naught. So in a non-relativistic limit these things match but if you're outside of a non-relativistic limit, places where sources are moving rapidly or even places where the sources are made up out of particles which are moving rapidly even though the whole thing may be not moving so much. Other components of the energy momentum tensor do generate gravitation. They do generate curvature and it's not just energy which, or mass if you, as we sometimes call it. So, another question. In Newtonian mechanics, so you can derive the continuity equation by analyzing a little differential element of say a fluid or gas or whatever. Is there anything analogous, is there an analogous way to come up with this? I mean this sort of seems like, you know, let's try this, let's try that. I'm sorry. Huh? Say it was. Is there some sort of basic physical way to come up with this as opposed to? This equation. Yeah. Without using the continuity equation? No, to obtain the continuity equation. Well the logic here is a little bit different. The logic is that the continuity equation basically comes from the idea of conservation and local conservation. One of this crap where you have something disappear and reappear on an alpha centauri. That logic is just as good in relativity as it is in non-relativistic physics. Stuff having to pass out through the boundaries in order to disappear. So in some sense the continuity equation is more fundamental from this point of view. But there is another point of view which is the, which we'll talk about maybe next time, which is the action formulation, which is much more beautiful and much more condensed where we introduce a principle of least action for the gravitational field. And all of this just pops out. Hard calculations. They're not easy, but it pops out. 
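(Putting the pieces of this argument together in one place, again in one common sign convention: since the covariant derivative of the metric vanishes and R is a scalar,
\[
\nabla_{\mu}\!\left(g^{\mu\nu}R\right) = g^{\mu\nu}\,\partial_{\mu}R,
\qquad
\nabla_{\mu}R^{\mu\nu} = \tfrac{1}{2}\, g^{\mu\nu}\,\partial_{\mu}R,
\]
so the one combination that is automatically conserved, and the resulting field equations, are
\[
G^{\mu\nu} \equiv R^{\mu\nu} - \tfrac{1}{2}\, g^{\mu\nu} R,
\qquad
\nabla_{\mu} G^{\mu\nu} = 0,
\qquad
G_{\mu\nu} = 8\pi G\, T_{\mu\nu},
\]
which, in the slow-motion, weak-field limit the lecture compares against, reduces schematically, with the signs and factors buried in the definition of g naught naught, to something of the form \( \nabla^{2} g_{00} \approx 8\pi G\,\rho \), matching Newton.)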
What's the term you put in for getting the, what term do you put in for the Lagrangian? The curvature scalar. That's all. Just the curvature scalar is the Lagrangian density. Right. Questions? Oh sorry. The curvature scalar plus some things for matter, for the matter, for the sources. Yeah. So the reasoning here seems to be looking for something that's a tensor, a rank two tensor I guess, and that satisfies the continuity equation. And it consists strictly of geometrical stuff. So could there be something else? There is not, it's known that there isn't. It's known that there is nothing else. Is that a physical thing or a mathematical one? Well, it's a mathematical fact, but is it a physical thing? Well, I can't think of any simple physical argument for it, but it is a fact. And was that fact known by Einstein or was that the term of the Lagrangian? I, oh, no, it's an easy fact. The only tensor, this was known from Riemann, the only tensor that you can build that involves only two derivatives of the metric was the Riemann curvature tensor and things that descend from it by contraction. So yes, Einstein knew that it had to be built up out of the curvature tensor. And it's not hard to go through, to exhaust all possible things that you can do with the curvature tensor, or he essentially did it, all possible things that you can do with the curvature tensor to make a thing with only two indices, there's only two things you can do. One of them is R mu nu and the other, and this is not hard to prove. It's very straightforward. So Einstein presumably knew that, that this particular combination satisfied a continuity equation. That was Einstein. So he must have had to do a little bit of work to calculate this thing over here, d mu of R mu nu, this one. He had to do a little bit of work to calculate that. I assume that what he did was just plug in. And it would take you about 15 or 20 minutes to do it. As usual, the principles are simple, but by the time you've manipulated all the indices and written down the Christoffel symbols and worried about getting the signs right and so forth, it'll take you a good 15 minutes to get this done. So I can envision him having done this little calculation, noticed that it agreed with this calculation, and that if he combined them in the right proportions, he would have what he needs. As I said, there is another argument. The other argument is much more elegant, but this was the first round of things that he did. Yeah. Two questions. One is, when you did that contraction from the tensor with four indices to two of the indices, the stuff that got thrown away, is there any physical content to that? Oh yes. Oh yes. Oh yes. Oh yes. Yeah. Yeah. Another way, yeah. Right. Okay. So, yeah, we'll come to this. There's quite a bit of content to it. But let me point out one thing. Just as in Maxwell's equations, there are solutions which don't involve sources. You can find solutions of Maxwell's equations that don't have any sources. Or you can find solutions in regions of space in which there are no sources. Let's consider the case with either no sources or when we're in a region of space where there are no sources. Outside of all the sources. Then on the right hand side we have zero. So let me just show you that there's a simplification in that case. It's just a little simplification, but the equation does become simpler in that case. Let's suppose that T mu nu is equal to zero. In other words, R mu nu minus 1 half g mu nu R is equal to 0, or better yet, R mu nu is equal to 1 half g mu nu R.
Now, let's calculate r by, on the left-hand side, contracting nu and mu. It's equivalent to lowering an index here. We have to do the lowering on this side also, and then setting nu equal to mu. Do you remember what g is with one upper and one lower index? It's the Kronecker delta. Go back to your notes if you don't remember. The g with one upper and one lower index, which means Kronecker delta, ordinary Kronecker delta. In general relativity, the Kronecker delta is always considered to be a thing with one upper and one lower index, but it means the same thing. 0 and mu is not equal to nu 1, 1 it is equal to nu. It's the unit matrix. The reason, OK, let's just remind ourselves of this. All right, so let's now contract mu with nu. That means set mu equal to nu and sum. What do I get on the left-hand side? On the left-hand side, I get r. On the right-hand side, I get 1 half r times this object g mu mu, or better yet, delta mu mu. What is delta mu mu? 4. Go for it right there. 4. There are four pieces. Each delta, 0 is 1, delta 1 is 1, delta 2 is 1, delta 3 is 1. This is just 4. And we get the stupid equation that r is equal to 2r. The only solution of that is that r is equal to 0. Not the curvature tensor, not even the Ricci tensor, but just the curvature scalar. That does not imply that the Ricci tensor is 0. It only implies what it implies, which is the Ricci scalar is equal to 0, in which case? In this special case, when this is equal to 0, you can drop this term. 1 half g mu nu r is just 0. Einstein's field equations become a little simpler in what is usually called the vacuum case. The vacuum case means in regions of space where there are no sources, in regions of space where there are no sources, the whole set of Einstein equations is just the equations that r mu nu is equal to 0. The solutions are not trivial. They contain gravitational waves with no sources, just like there are electromagnetic waves. But another example is the Schwarzschild metric. Now, the Schwarzschild metric itself is analogous, roughly speaking, analogous to a point mass. Outside the point mass, there is no matter, nothing. So just like Newton's equations, with everywhere's outside the mass, the equation is the same as it would be for just empty space. The Schwarzschild metric is also one for which everywhere's outside the singularity. The equation that's satisfied is the vacuum Einstein equation. So a simple thing, it's not simple to do. It's a real nuisance to do. But a conceptually simple thing to do is to take the Schwarzschild metric and calculate r mu nu and check that it's equal to 0. You will find that it is. If you take the Schwarzschild metric, it's a bunch of g mu nu's, sit down and spend the rest of the day calculating Christoffel symbols, and then curvature tensors, and contracting them, or put it on the Mathematica, and you'll find out that it's exactly equal to 0. Now, of course, it's ambiguous at the singularity. Singularity, everything is so crazy, it doesn't make any sense. But the components which you're contracting become infinite there, sort of like the point mass, too singular, too undefined at the origin to have a value. But everywhere's away from the singularity. Yes, the Schwarzschild metric is what is called Ricci flat, saying that this is equal to 0, is sometimes called Ricci flat. It is not the same as flat. So gravitational waves satisfy this. Schwarzschild metric satisfies it, except at the singularity. And that's the basic facts about the Einstein field equations. 
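(The little trace argument just made, written out, using delta mu mu = 4 in four dimensions:
\[
R_{\mu\nu} - \tfrac{1}{2}\, g_{\mu\nu} R = 0
\;\Longrightarrow\;
g^{\mu\nu}R_{\mu\nu} - \tfrac{1}{2}\,\delta^{\mu}{}_{\mu}\,R = R - 2R = -R = 0
\;\Longrightarrow\;
R = 0
\;\Longrightarrow\;
R_{\mu\nu} = 0 . )
\]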
I think we'll quit there. Yeah? The same analysis, but you leave t in? Let's say it again. You do the same analysis that you did there, contracting, but you leave t and you don't set it to 0. You leave t? Oh, OK, good, good, good. Yeah, yeah, you can write it. OK, so you can do that. Let's do that. OK, let's see what we get. We have R mu nu. Doesn't matter whether it's upstairs or downstairs. R mu nu minus 1 half g mu nu R is equal to 8 pi g T mu nu. Now, what are we going to do? We're going to contract these two indices. This is going to give us R minus 2R is equal to 8 pi g T mu mu. I contract the index or, what's the same thing, T mu nu g mu nu. OK, so that now tells us, let's see, this is minus R, minus R equals 8 pi g t. And let's just call it t. t is by definition the scalar that you get when you contract the two indices here. So what do we have? We have R is equal to minus 8 pi g t. And now we can put that back into the Einstein equations. Put this back. We get R mu nu minus 1 half g mu nu times R, which is plus 8 pi g t, equals 8 pi g T mu nu. And now let's take this thing here and shift it to the right. This was a plus sign here, huh? So we get 8 pi g T mu nu. 8 pi g multiplies T mu nu minus 1 half g mu nu t, where t is gotten by contracting. So here's what we get. In other words, the Einstein field equations can be written with R mu nu on the left hand side, but on the right hand side, you have to compensate by subtracting 1 half g mu nu times t. Is there any physical significance to the scalar t? Yeah, it's called the trace of the energy momentum tensor. And yeah, the trace of the energy momentum, OK. I'll tell you when it's 0. It's 0 for radiation, electromagnetic radiation. It's 0 for massless particles like photons or gravitons. For electromagnetic radiation, it would be 0. For particles with mass, the energy momentum tensor, this thing is not 0. So. Yeah, on a similar question, can you describe the physical meaning of the Ricci scalar and the Ricci tensor? The Riemann tensor's obvious. Yeah, the Riemann tensor has to do with going around a little curve and seeing what kind of rotation you do. I don't know a simple answer to that. I don't know any particular physical significance or geometric significance. I'm sure there is one, but whatever it is, it's not really very transparent. They're much simpler objects than the full Riemann tensor. On the other hand, it's kind of difficult to visualize their individual meaning. So I'm going to say no, that I don't know a good, simple way to think about these things. Yeah, I think one thing there is basically they're averaging when you're doing it in a direction. They're averaging over direction. They're averaging over directions. Yeah, right. They are. Right. Right, so you asked whether you lose information. Well, the answer is yes. You can have geometries where R mu nu is equal to 0, where the Riemann tensor is not equal to 0. And an example that we'll explore a little bit is gravitational waves. Gravitational waves, just like electromagnetic waves, they don't require any sources. Of course, really, we really expect that in the real world, an electromagnetic wave is made by an antenna or something. But as solutions of Maxwell's equations, you can just have electromagnetic waves that just propagate from infinity to infinity and just no sources. In the same way, you can have gravitational waves, which also have no sources. Those gravitational waves satisfy R mu nu equals 0, but they are most certainly not flat space.
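(For reference, the trace-reversed form of the field equations just worked out on the board, with t the trace of the energy momentum tensor:
\[
t \equiv g^{\mu\nu} T_{\mu\nu},
\qquad
R = -\,8\pi G\, t,
\qquad
R_{\mu\nu} = 8\pi G\left(T_{\mu\nu} - \tfrac{1}{2}\, g_{\mu\nu}\, t\right) . )
\]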
There's all sorts of distortions of space going on. So I think maybe a possible somewhat satisfying thing is to see what a geometry looks like that has R mu nu equal to 0, but for which the curvature tensor itself is not equal to 0. And we'll explore a little bit what a gravitational wave looks like. So that will help. Same thing, of course, is true of the Schwarzschild metric. As long as you stay away from the singularity, there's something there. There's real curvature there. Tidal forces, all sorts of stuff, but R mu nu is equal to 0. So yes, R mu nu has less information in it than the curvature tensor. It actually depends on the dimensionality. In four dimensions, there is less information in the Ricci tensor than in the Riemann tensor. Turns out in three dimensions, the amount of information is the same; you can write one in terms of the other either way. And in two dimensions, all of the information is in the scalar. And so all there is is the scalar, and you can make the other things out of it. What's there on you? So that must mean there's some invariance by connecting the Ricci tensor and Ricci scalar that, I don't know how to phrase this, but there are combinations that you can subtract, that information out of the Riemann tensor, not having to affect the gravitational field. Would that be the gravity waves? Or what was that for? Well, I'm not sorry. It's a mathematical. There's missing information, and that missing information has no effect on the momentum energy tensor, correct? Because the left-hand side is not the momentum tensor. Right. So how is that missing information manifest? It means that for a given source, there can be many, many solutions, many of which, they all have the same left-hand side, namely, T mu nu. But they simply have different physical properties. So for example, the simplest situation is to say, what if this is 0? If it's 0, does it mean that there's no gravitational, no interesting geometry at all? No. It allows gravitational waves. Roughly speaking, for any energy momentum tensor you put there, construct a solution, and then add gravitational waves on top of it. Roughly speaking, that's not exactly true, but it's roughly speaking true, that for any solution, you can always add gravitational waves. So the gravitational waves must be something that contains more information than just a Ricci tensor, than they do. No, no fudge factor? Yeah? No cosmological constant? No, no. Well, the cosmological constant can be thought of as part of T mu nu. Yeah. No, we haven't said anything about the cosmological constant, whether or not it's there. Yeah, OK. From this point of view, the cosmological constant can either be thought of as an energy momentum tensor, or a component of the energy momentum tensor. Let's call it T mu nu cosmological. For the cosmological constant, it would be some lambda times g mu nu. So then, if there was a cosmological constant, you might write 8 pi g times lambda times g mu nu. Or this number here is just a constant. Let's just get rid of it and call it some, I guess it is really the cosmological constant that appears there. It only depends on geometry, a number times geometry. You can shift it to the other side of the equation and think of it as part of the geometry, part of the geometric side of the equation. Or you can leave it on the right side and think of it as part of the energy momentum. So the cosmological constant can be thought of in either way.
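(One way to write the cosmological constant term being described, with the caveat that the lecture deliberately leaves the overall constant loose and normalization conventions for lambda differ, is either as a piece of the source or as a piece of the geometry:
\[
T^{\Lambda}_{\mu\nu} \;\propto\; \Lambda\, g_{\mu\nu}
\qquad\text{or, equivalently,}\qquad
R_{\mu\nu} - \tfrac{1}{2}\, g_{\mu\nu} R + \Lambda\, g_{\mu\nu} = 8\pi G\, T_{\mu\nu} . )
\]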
Did Einstein use the equation you derived tonight to calculate the orbit of Mercury, which was then observed by astronomers? So that equation was used. Do you know of a source that would show how that calculation was done? Good question. Let me see if I can find one. It's probably in one of Einstein's papers, all of which I have at home, and I'll see if I can find it. Well, I know how it was done. You want to see it actually done, though. It was done by, he didn't have the Schwarzschild solution, but he did have the approximation to the Schwarzschild solution at a fairly large distance. Now, the surface of the sun is way, way out at a distance much larger than the Schwarzschild radius of the sun. Of course, the sun is not a black hole, but outside the solar radius, the geometry is exactly the same as the Schwarzschild geometry. He didn't have the Schwarzschild geometry, but he had a pretty good approximation to it at 100 times larger distance than the Schwarzschild radius. It's more than 100 times, a million times larger than the Schwarzschild radius. So he knew how to make a good approximation that was true far from the center of a gravitating object. And then he just solved orbits in that geometry. And again, in an approximation, the thing that allows you to do that is the fact that the sun is so much bigger than the Schwarzschild radius. And that means the corrections from Newton are small. You can think of it as just small corrections from Newton, and work it out in a kind of perturbation theory. Perturbation theory means just small corrections to something you already know. Most likely what he did was just take the Keplerian orbits for a light ray, which means a straight line, and do a little bit of perturbation theory, and work it out. And then you asked me about the Mercury or the bending of light. I forget what you asked me. Which was it? Mercury? Mercury, yeah. So undoubtedly he just took the ordinary Keplerian orbit, took the Newtonian solution plus a small correction, and fit the small correction on the left-hand side to the small correction on the right-hand side. But then Schwarzschild immediately, I think within a year or maybe less, calculated his exact solution of the equations. And from there, you could do much, much better than that. OK. You know that pretty much the ordinary 1 over r squared potential, the force law, is the only power law force law which allows the orbits to be closed without a precession of the orbit. It's a curious fact, and it's somewhat accidental. For more, please visit us at stanford.edu.
(November 26, 2012) Leonard Susskind derives the Einstein field equations of general relativity and demonstrates how they equate spacetime curvature, as expressed by the Einstein tensor, with the energy and momentum within that spacetime, as expressed by the stress-energy tensor. This course is the fourth installment of a six-quarter series that explores the foundations of modern physics. In this quarter, Susskind focuses on Einstein's General Theory of Relativity.
10.5446/15036 (DOI)
Stanford University. Okay, let's begin. Let's begin tonight. I want to just go through a very quick review of some equations. And I want to move on. Tonight, I hope to get to some physics and to discuss gravity, if nothing else, in a uniform gravitational field. We've already talked, of course, about a uniform gravitational field. That's just the, well, you all know what that is, just near the surface of the Earth. But the question is, how would you describe that uniform gravitational field in general relativity? Okay, we want to get there. We want to understand what kind of metric describes it. But let's just write down some, just to remember some equations, just to have some review. First of all, we talked about covariantly differentiating. The idea: covariant derivatives act on tensors, in particular on vectors. And we worked out the covariant derivative as it applies to particular tensors. In particular, first of all, the covariant derivative, let's call it dm. The covariant derivative of the metric itself is always zero. Oh, let me say one more thing about the covariant derivative. The point of the covariant derivative is to write a tensor such that, I'll get this straight, such that in Gaussian normal coordinates, it is just the derivative. In Gaussian normal coordinates, the coordinates which are closest to Cartesian coordinates at a particular point in the space, the covariant derivative is just the derivative. But in other coordinates, it takes on a more complicated form if we want it to be a tensor. Alright, so the covariant derivative. Question about Gaussian normal coordinates. So I was searching on the web and found things that said they're the same as synchronous coordinates and showed that basically the spatial coordinates aren't coupled with time in the metric tensor. And that's what they said about it. You know, I may be using Gaussian normal coordinates in too restricted a sense. What I mean by Gaussian normal coordinates, I may be using the term in too restricted a way. The only trouble is it's the only term I know. Let's call them the flattest possible, the most Cartesian coordinates at a point. Alright, can you make a, can somebody make a, you know, a series of letters out of that? The most Cartesian coordinates at a point. What does the most Cartesian mean? Alright, it means, it means exactly what we said before, that the metric at that point, at that point, is a Kronecker delta and its first derivatives are zero. That's what I mean by the most Cartesian coordinates at a point. I may be misusing the term Gaussian normal coordinates. My education was, well, it left something to be desired and I sometimes use terms inappropriately. Okay, we'll just call them the best coordinates. They are the best coordinates at a point. The best coordinates at a point have derivatives equal to zero, sorry, have the metric having derivatives equal to zero. And the problem of finding a tensor which in the best coordinates at a point reduces to nothing but the ordinary derivative is the covariant derivative. Alright, so here's the covariant derivative. It acts on any component of the metric and always gives zero. Why? Because in the best coordinates the derivative of the metric is zero. Okay, now next we considered the covariant derivative of a vector, d r of v s, and I chose to study a vector with covariant components. It's the covariant derivative of the covariant vector. And here's what we found it is.
It starts out as just the derivative of vs and then has a term which would be zero in the best coordinates at a point. Alright, what is it? It's plus gamma rs, let's call it pvp. Remember the summation convention was summing over p here. Oh, yes, there's a plus or minus, I believe it's minus. It's minus. And the minus is a convention. And we worked out what these symbols are, the gamma prs, they are not tensors. They're made up out of derivatives of the metric. That means in the best coordinates there's zero. So they are zero at a point in the best coordinates because the derivatives of the metric are equal to zero. But if they were a tensor, then they would have to be zero in every coordinate and they're not. Alright, so these gammas, let's write them down. Gamma prs is equal to gpn. Remember gpn is the inverse metric, the metric except with upper components, times partial of gnr with respect to xs plus partial of gns with respect to xr minus partial of g, oh, there's a factor of two, excuse me, a factor of two, minus partial of grs with respect to xp. This shouldn't take too long to memorize it. If gpn, okay, that's just to raise some index here, but now you look at it, it has gnr, it has gns, and it has grs. Wait, did I do this right? No. grs with respect to p, no, no, with respect to n, n, n. There it is. Now I have it right. It has all three combinations of g, blah, blah, with respect to x, blah, blah, and they come in three combinations, two of them positive, one of them negative, and a half here. That's the Christoffel symbol with one upper index and one lower index. And that just goes right into here. That tends to make covariant derivatives rather complicated objects. If I wrote down a metric for you, that's a nuisance to write all these things down, but this is what it means. Next, let's talk about covariant derivatives of vectors with contravariant components. Let's, uh, let's call it dmvn. As always, it starts out with a simple derivative, but I'm not going to work it out for you. I'm not going to work it out. You work it out the same way you worked it out for covariant derivatives of covariant tensors, but there's a simple trick. You can do it yourself, do it at home. I'll tell you what the trick is. You begin by writing vn with upper index as equal to gn, now give me another letter, p, vp. Okay. And now you say, I want to construct a covariant derivative of vn. Since the covariant derivative is a derivative, it's a derivative in the best frame. It is a derivative. It will satisfy here. This is a product. This is a product of two things, and it will satisfy covariant derivative of gnp times vp plus gnp times covariant derivative of vp. Let's put an index down here, m, m, m, and bracket around here. What this says is simply that the derivative of a product is the sum of two terms. The derivative of the first term, that's this one, times vp plus the derivative of the second one times g. This holds for covariant derivatives. It holds for just about every kind of derivative you can ever think of. And now, what do we know about the derivative of the metric? Now, this is the metric, the covariant derivative of the metric. Remember that the metric in the best coordinates has no derivatives. So in the best coordinates, the metric itself with lower indices is constant at least to first order, at least to first derivatives. That means the inverse tensor, the inverse matrix, is also constant to first derivatives. 
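(Collecting what was just reviewed, in the index names used here, with the minus sign being the convention just mentioned, the covariant derivative of a covariant vector, the Christoffel symbol, and the statement about the metric are:
\[
D_{r} V_{s} = \partial_{r} V_{s} - \Gamma^{p}{}_{rs}\, V_{p},
\qquad
\Gamma^{p}{}_{rs} = \tfrac{1}{2}\, g^{pn}\!\left(\partial_{s} g_{nr} + \partial_{r} g_{ns} - \partial_{n} g_{rs}\right),
\qquad
D_{m}\, g_{rs} = 0 = D_{m}\, g^{rs} . )
\]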
And so one can also argue that the covariant derivative of the metric tensor with two contravariant indices is also zero. We know how to differentiate a thing with lower indices. That's this formula over here. If you just plug that in here and manipulate it a little bit, you'll find out how to differentiate, how to covariantly differentiate a tensor with upper indices. I'll write the formula for you. Here it is. It's the same kind of thing as up here. Derivative with respect to m of vn, derivative with respect to xm of vn, they always start out that way. And then now plus, this you'll work out by doing this little exercise here. It's a simple little exercise. It takes a few minutes. And now it's plus gamma mn, let's call it p, I guess. What did I call it? I'd like to keep my notation consistent in mr. So r, vr. Okay, let's look at it for a minute. As before, it begins with just a simple derivative. Then it has a term which would be zero in the best coordinates, because in the best coordinates, the gammas are all zero. This would be zero, but it's not zero in general coordinates. This has one lower index, one lower index here. It has one upper index, has the same upper index here. Here it has a lower index contracted with an upper index. So the formula makes sense. It satisfies the rules. Upper indices contracted with lower indices, summed over, and indices generally in place. It's plus. It's plus. It's plus. That's the little peculiarity that you get by doing this exercise. It's plus. That's all you really have to remember that when you go from covariant to countervariant, the way the indices fit together, you can't mistake it. A lower index and an upper index, that same lower index has to appear on this side. There's nothing else it can be but the lower index of the Christoffel symbol. The upper index has to appear here, and then there has to be a contraction. Okay, that's the covariant derivative of the contravariant components of a tensor. Good. Now we come to an idea called parallel transport. We've already more or less spoken about it, but let's spell it out in detail. Supposing you have a curved space. Oh, I should write down one more thing just to write it down for now. The curvature tensor, I think I won't. We won't need it too much tonight. We won't need it too much tonight. All right, and now we have some vector field. Let it be a vector field living on the space. I'm interested in knowing whether, when I move along here, whether the field stays parallel to itself, whether the vector field stays parallel to itself. All right, so I have some vector at each point here, at each point along the curve, and I want to know whether the field stays parallel to itself. Parallel to itself means that the covariant derivative is zero along that direction. That's what it means. Covariant derivative is the difference between the vectors except written in the best coordinates. If the derivative of a vector from one point to another is zero, that vector is maintaining a relationship of being parallel to itself as you move along. Okay, so that's actually a definition, but let's see, we have written down, I'm going to write down again the covariant derivative because we're going to use it, dmVn, let's write that down for the moment, is equal to partial of Vn with respect to xm, I'm going to write it down again, plus gamma nmRVr. Okay, that's the covariant derivative. Now, I want to consider the derivative along the trajectory here, along this curve, how the vector changes from point to point. 
That simply corresponds to taking the covariant derivative, Vn, and multiplying it between these two points by dx, let's see, dxm. If I differentiate with respect to xm and then multiply by dxm, that can just be called the small change in the vector going from point to point, from dx, except keeping track of the fact that the coordinates themselves may bend as you go from point to point, and that's why you have to use the covariant derivative. This is the change in the vector V in going from one point to its neighbor. What is it equal to? Let's just call, let's just give it a name, dVn. It's a small change in the vector going from one point to a neighboring point. More precisely, it's the small change in the vector in the best coordinates. Okay, that's it. Now, what is that equal to? It's equal to dVn by dxm times dxm. I'm simply multiplying this equation by dxm and then plus gamma m Rn Vr dxm. I did nothing here except multiply this by dxm and then sum over m. So let's erase the middle expression here. This is the covariant change in a vector going from one point to another. Now, this thing here, that's simple. That's just the derivative of something with respect to x times dx. That can just be called the differential change in V. That's the differential change in V. So look at this equation. The covariant differential change in V is the ordinary change in V plus this Christoffel symbol multiplied by dx. This is the formula which tells you how a vector changes from point to point. Now, supposing I'm interested in defining a vector which is parallel to itself as I move along this curve. Parallel to itself means it doesn't change as you move along here. At each point, you erect some best coordinates. In those coordinates, you test whether the vector is changing. If not, you say, all right, good, the vector is constant along a little segment there. Go to the next segment. Erect best coordinates at that point. Do the same thing. If at every point, the change in the vector is zero, this equals zero, the vector is said to be parallel to itself along that curve. As you move along that specific curve, the vector maintains a relationship of being parallel to itself. The best coordinates are always parallel to themselves? No, not necessarily. Once you find the best set of coordinates, you can rotate it. No, they don't have to be parallel to the curve, but they have to be fixed and well-defined, but they could be parallel to the curve. The best coordinates are not unique. Just like an ordinary flat space, the Cartesian coordinates are not unique because you can always rotate them. You can also rotate the best coordinates at a point. That's not important. That won't change whether your vector stays parallel to itself. All right. Moving a vector from one point to another, such that this covariant differential change is zero, is called parallel transport of the vector. If you move a vector along the curve in such a way that the differential change in its contravariant component plus the Christoffel symbol multiplying v times dx along that trajectory, that being zero is called parallel transport of vector along a certain curve. Now, the funny thing about curved spaces is if you take two points and you start a vector over here and parallely transport it along this curve here, you'll get some other vector here, but if you choose some other curve and parallely transport it, in general, it will come out different. So the notion of parallel transport is relative to a particular curve. 
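(Written out, using the covariant derivative of the contravariant components given a moment ago, the change of the vector along the curve and the parallel transport condition are:
\[
D_{m} V^{n} = \partial_{m} V^{n} + \Gamma^{n}{}_{mr}\, V^{r},
\qquad
DV^{n} = dV^{n} + \Gamma^{n}{}_{mr}\, V^{r}\, dx^{m},
\qquad
DV^{n} = 0 \ \text{(parallel transport along the curve).} )
\]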
This is what I showed you about cones. When we talked about having a cone with a round top or something like that, if we parallel transport a vector from here to here versus transporting it around the back of the cone, you will get a different final result. So parallel transport... One question? Yeah. And of course, when that happens, it's because there's curvature in between. So I know I'm misunderstanding, but tell me where I'm wrong. I think in this v, this vector is a vector field. Okay. What I said was for a vector field, but now just imagine that you could imagine that there was a vector field or you could just imagine that the vector field was only defined along the curve. In fact, we don't even have to... We can even ask, how do we construct a vector along this curve such that it's everywhere parallel to itself along the curve? What we do is we solve this equation. We solve the equation that when we move the vector, we move it in such a way that this cancels this. So my question is, if it were a vector field, then it would be independent of the path. Yeah. Sure, the vector field would be independent of the path, but there might be one path along which this vanishes everywhere and another path along which it doesn't. Still, we get to the end. When you get to the end, you have a unique answer, but one transport of the vector, you would say that the vector was covariantly constant along one curve and not covariantly constant along another curve. Now, you're right. We're defining something slightly different. Instead of saying there's a vector field, let's just suppose we start with a vector over here. And now I'm going to fill this curve up with vectors. Let's imagine it's been filled up. So roughly speaking, there's a vector field now along this curve. Let's not worry about what it's doing over here. And then just require of this vector field along the curve that it's covariantly constant. And this is the equation that says that. Question? Yeah. This isn't the main focus, but are you also preserving the length of that vector? Yeah. Yeah. That's right. Covariant, yes. Then you can prove that this equation conserves the length of the vector. I didn't assume it, but one could prove it from here. Yeah, that's right. The reason why you chose, you chose the sharp edge is because you made it differentiability, isn't it? Say it again. The cone, you didn't make it very sharp. It's because you made it differentiability. Yeah. So it wouldn't have mattered, even if I made a sharp cone. If I want to specify what the curvature is in here, then I'd better make it nice and rounded so that it has a finite curvature. The curvature at the tip of a sharp cone would be infinite. So I wanted to avoid that. Yeah. The tangent vector of that curve, is it always parallel? No. But when it is, the curve is a geodesic. That's exactly what we're coming to now. All right, so we've defined. Yeah, let's see what's going on here. Let's take the cone, slice it, and open it up on the blackboard. OK, so it's, slice it, and open it up. So it looks something like this. The, here's, it's empty out here. All right. Now let's take a curve that just goes around like that. That corresponds to exactly the kind of curve that I drew over here, right? Goes around and comes back to the same point over here. This point and this point are the same point. OK, now let's take a tangent vector, as you said. It's pretty clear that this tangent vector is changing. What were they vector which didn't change look like? 
A vector which didn't change would look like this, like this, like this, like this, and we come back to here. We come back to this direction over here. So the answer is no. In general, the tangent vector to a curve is not covariantly constant. But in fact, this is not a geodesic. This is not a geodesic of a cone. In fact, you can see, just flatten everything out. This is a circle. It's not a, it's not a geodesic. It's not the shortest distance between these two points. The shortest distance between those two points is that. Drawing back on the cone, it would look something like this, the shortest distance between those two points. OK, so now let's, now we want to get right to what you said. Question of the tangent vector and the notion of a geodesic, and just to say it since you've already said it, or since I already said it. The notion of a geodesic. A geodesic is a curve. One can define it in two ways. You can define it as being the shortest distance, the shortest curve between two points. Better yet, a curve whose distance is stationary. It could be the longest distance between two points. If it's the distance is stationary, then we will say that the curve is a geodesic. But a better, not necessarily a better definition, in fact I think it is a better definition though. A better definition is to look locally along the curve and require that the curve be as straight as possible. OK, now what does that mean? I'll give you, I'll give you some intuitions and I'll give you a precise statement. If at each point along the curve, the derivative of tangent vector is zero, then that curve is as straight as possible. But let's think what this means. I'll give you an intuition, I'll try to give you an intuition for it. Suppose you have a curved terrain. Now, this is a two-dimensional example, but there's nothing particularly two-dimensional about the notion of a geodesic. Now, let's suppose you have a car that you drive on this terrain. Also assume that the size of the car, in particular the distance between its wheels, is small by comparison with any curvature. In other words, the car is very small compared to the hills and the valleys and so forth. And now, take your steering wheel and point that absolutely dead ahead, straight, and start driving. Don't turn the wheel. Make sure in advance that it was pointing, you know, go to your mechanic, tell him to set it straight ahead and don't deviate. Just go straight ahead. The curve that you will execute on this space will be a geodesic. Another way to say it, the same thing, is that the tangent vector along the curve is constant. The tangent vector along the curve is constant, so let's define the tangent vector. Here we have a curve. Now, go to a point. We're going to construct the tangent vector right at this point and take a neighboring point. It's separated by a little interval called dxm, okay? A small interval. And now, draw a vector through those two points. Make that vector be a unit vector. And then take the limit in which the second point approaches the first point. That's called the tangent vector. I need one more equation here. Remember that ds, well, okay. No, let's, let's, no. The way that you construct the tangent vector is very simple. Tm is equal to dxm divided by something, divided by the distance between those two points. Just divide by the distance between those two points, and that's what we call ds. Remember what ds is? ds squared is the square of the distance gmn dxm dxn. Just divide it by ds. That defines a unit vector. 
You can prove that it's a unit vector. Oh, you don't do that. It's a unit vector at every point along the space, along the curve. That's called the tangent vector. And it points in the direction between two neighboring points. The direction specified by two neighboring points, and its length is one. Okay, so that's the notion of a tangent vector. And now what about the notion that the tangent vector is constant? That's the statement that if I plug in the tangent vector for v, the right-hand side or the left-hand side of this equation should be zero. So let's write that equation down. Okay, plugging in t into that equation, it says that d tm, let's call it tn, plus gamma mrn tr dxm. Let's see what I did. All I did was plug in for v, t. You have a dv and a v. Here you have a dt and a t. And set that equal to zero. Go along each point and check whether as you move along this curve, this quantity is zero everywhere along the curve. If it is, the curve is a geodesic. It corresponds to setting your steering wheel dead ahead and moving in as straight a line as you can. That's the notion of a geodesic. And we can write it in a little neater form. We can write it in a slightly neater form. Let's divide both sides of the equation by ds by the little distance between neighboring points here. We're differentiating, which means we've taken a little interval here, dx, and now we're going to divide it by ds. That says that the derivative of tn with respect to distance along the curve, this is the derivative of the tangent vector with respect to distance along the curve, plus, well, let's write it equal to minus, is equal to minus gamma nmr, tr, but then dxm by ds. What is dxm by ds? Tm. So I have a little equation here that only involves the tangent vector. It also, of course, involves the Christoffel symbol, but let's suppose we're given that, let's suppose we're given the Christoffel symbol, then the equation of motion of a geodesic is just this equation. Gammas are made up out of the metric, so if we know the metric, we know what to put on the right-hand side, and we can say one more thing. Since t itself is a derivative, we can write this as a second derivative. d second xn by ds squared. I've written out that tn is the xn by ds, and then differentiated it again, ds squared. The second derivative of xn along the curve is equal to minus gamma nmr, tr, tm. Does this look like anything familiar? Probably not, but it is actually. If we were to think of s as time along the curve, some measure of time as we move along the curve, then this thing would be acceleration. Second derivative of the position with respect to time, if we imagine that s is like time. We'll come back to that. So if s were like time, or if s were increasing uniformly with time, then this would be the acceleration. Acceleration is equal to something that depends on the metric and t and some more t's here. We'll deal with them later. This has the look of a Newton equation. Acceleration is equal to something that depends on the gravitational field. The metric is the gravitational field. So this is the equation that replaces Newton's equation, as we'll see, for the motion of a particle in a gravitational field. In other words, in some sense, a particle in a gravitational field moves in the straightest possible trajectory. Now, it moves through the straightest possible trajectory through space-time, not just through space. So now we have to come to space-time. So far, we've just been studying the mathematics of curved spaces. 
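Before following the lecture into space-time, here is a rough numerical sketch of the geodesic equation just written down, d^2 x^n/ds^2 = -Gamma^n_{mr} (dx^m/ds)(dx^r/ds), again using the unit sphere as a stand-in curved space (my own choice of example). On the sphere the "steering wheel held straight" curves should be great circles, so the check at the end asks whether the integrated curve stays in the plane picked out by the initial point and initial direction. The integrator and step sizes are arbitrary.

```python
import numpy as np

# Unit sphere, coordinates (theta, phi), metric diag(1, sin^2 theta).
# Geodesic equations: theta'' = sin th cos th (phi')^2,  phi'' = -2 cot th theta' phi'.
def rhs(s):
    th, ph, vth, vph = s
    return np.array([vth, vph,
                     np.sin(th) * np.cos(th) * vph**2,
                     -2.0 * np.cos(th) / np.sin(th) * vth * vph])

def geodesic(state, ds=1e-3, steps=3000):
    state = np.array(state, float)
    for _ in range(steps):                                 # plain RK4 in the arc-length parameter
        k1 = rhs(state); k2 = rhs(state + 0.5 * ds * k1)
        k3 = rhs(state + 0.5 * ds * k2); k4 = rhs(state + ds * k3)
        state = state + ds / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

def embed(th, ph):                                         # the same point seen in ordinary 3D space
    return np.array([np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)])

th0, ph0 = np.pi / 2, 0.0                                  # start on the equator...
vth0, vph0 = -np.sqrt(0.5), np.sqrt(0.5)                   # ...heading north-east with unit speed
v3 = np.array([0.0, vph0, -vth0])                          # the 3D velocity at that particular starting point
plane_normal = np.cross(embed(th0, ph0), v3)               # the plane of the expected great circle
end = geodesic([th0, ph0, vth0, vph0])
print(np.dot(embed(end[0], end[1]), plane_normal))         # ~0: the curve stays on a great circle
```

That is still the purely spatial story.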
Curved spaces as Riemann would have understood them. The mathematician Riemann was the one who invented most of this mathematics for curved spaces. These are ordinary curved spaces in which distance is governed by the Pythagorean theorem, and the square of the distance is always positive. Minkowski space is a space which also has a natural measure of space-time distance along curves, a space-time distance between neighboring points, or even distant points. Minkowski space is a space with a notion of distance between neighboring points, in fact, a notion of the square of distance. Distance is usually defined in Riemannian geometry in terms of its square. In Minkowski geometry, again, we have a notion of the square of the distance between neighboring points, and what is it called? The proper time between the points. The proper time. And it's also a sort of notion of distance between neighboring points. Given two neighboring points in space-time, labeled by t and x, there is a proper time between them. Let's not call it the space-time distance; let's call it the proper time. It is equal to dt squared, not plus dx squared, but minus dx squared. And if there are more coordinates, let's say y and z, it would also have minus dy squared, minus dz squared. That's called d tau squared. So tau is the proper time. And it's conventional to rewrite this as minus ds squared. Sometimes you use s, sometimes tau. I'm not going to use s, so I'm going to use tau. But according to convention, this is equal to minus g mu nu dx mu dx nu. Now what does that mean? g mu nu is the metric of space-time. x mu stands for t, x, y, and z. And according to standard convention, t is called x0, x is x1, y is x2, and z is x3. That's convention. All right, so mu and nu run from 0 to 3. This has exactly the same form as before. I'm using mu and nu because it's conventional when talking about space-time to use mu and nu. The only new thing is the metric tensor. Let's write down the metric tensor. The metric tensor now has a minus 1 that corresponds to the dt squared. Notice there's an extra minus sign here: minus 1, 0, 0, 0; then 0, plus 1, 0, 0; then 0, 0, 1, 0; then 0, 0, 0, 1. That's the matrix, or the metric tensor, g mu nu. As we read it, it reads minus dt squared plus dx squared plus dy squared plus dz squared. And what are all these zeros doing here? The zeros are telling you that there are no cross terms, dt dx, dt dy, or anything like that. These are special coordinates. These are Minkowski coordinates, and they play the role in relativity of Cartesian coordinates. In special relativity, this is all there is to it. In general relativity, g mu nu becomes a function of space and time. Let's just call it g of x. It becomes a function of space and time, not just a function of space, but a function of space-time. But one thing, what can you say? In what way is this matrix different from delta mu nu, which is just 1, 1, 1, 1 down the diagonal? The answer is it has a minus 1 in one place. But there's also an invariant concept here. The invariant concept is that this one has one negative eigenvalue and three positive eigenvalues. The metric of space-time always has one negative eigenvalue and three positive eigenvalues. We're not going to spend much time dealing with that. The equations always take care of all of that. But what does it mean that there's one negative eigenvalue and three positive? It means there's one dimension of time and three dimensions of space. I could write a metric that looked like this with two minus signs here.
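A quick numerical aside (mine, not the lecture's) on the signature statement: the eigenvalues of the Minkowski metric are one negative and three positive, and that count survives any linear change of coordinates at a point, while the two-minus-sign metric just mentioned shows two negative eigenvalues. The random matrix is only there to stand in for a generic coordinate change.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])                # the Minkowski metric in (t, x, y, z) order
print(np.linalg.eigvalsh(eta))                      # one negative and three positive eigenvalues

J = np.random.default_rng(0).normal(size=(4, 4))    # a generic linear change of coordinates at a point
print(np.linalg.eigvalsh(J.T @ eta @ J))            # different numbers, but still one negative, three positive

two_minus = np.diag([-1.0, -1.0, 1.0, 1.0])         # the hypothetical metric with two minus signs
print(np.linalg.eigvalsh(two_minus))                # two negative eigenvalues: two time directions
```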
That would have dt squared, let's say. Excuse me, that would have two positives and two negatives. That would be unallowed in relativity. It would correspond to a crazy space with two time directions and two space directions. Einstein realized that the metric of space-time should have one negative and three positive eigenvalues. But we don't need to worry too much about that. Other than that, all of the equations are the same as we've done up till now. Yeah. Did mathematicians explore spaces like this before Einstein? I don't think so. I don't think so. Not to my knowledge. You'd have to ask somebody who knows a little bit about the history of geometry. But to my knowledge, no. So I think Minkowski and Einstein, or as far as I know, the first one to explore these. Yeah. The equation you wrote for D-taus where it has three minuses and one positive. Yeah. So it's minus that metric. Did I get this wrong? No, I got it right. There's two minus signs. These are both conventional. Minus g mu nu, right. Yeah, there are two minus signs there. Why? Just in order to keep my notation consistent with everybody else's. So, right. Okay. Other than that, as I said, everything is exactly the same. You just, if you want to know whether the space is flat, now what does flat mean? Flat does not mean that there's a coordinate system in which the metric is a chronic delta. It means that there's a coordinate system in which the metric looks like this. It's the chronic delta except with one negative sign. That's the notion of a flat space time. A flat space time is one with one negative and three positive, but in which there do exist coordinates in which the metric can be brought to this very simple form. That's the notion of a flat space time. And if the space time cannot be brought to this form, then it's curved. How do you check whether it's curved? You do all exactly the same things. Could you have three negative and one positive? What's that? Could you have three negative and one positive? You could. Actually, it would be basically the same thing, but, uh, uh-huh. Is there a special symbol like the chronic delta but it tells me one negative? Yeah, it's called eta mu nu. And so one, yeah, that's a good point. When we write this, if we were dealing with these coordinates, we would write this as eta mu nu dx mu dx nu. The general equation, sorry, the general equation is to put a g mu nu of x here. The special case in which you're dealing with these special flat coordinates, I told you before, don't use the term flat coordinates, in which you're dealing with these special Minkowski coordinates, which are like Cartesian coordinates, is that this is called eta mu nu times dx mu dx nu. Okay, so that's what we inherit from special relativity. This is all inherited from special relativity. Okay, are we okay? Everybody clear? There's a minus sign missing somewhere there. There's a minus sign missing on one of the eta's. On one of the what? Oh, good, thank you. Yeah, right. That's correct. All right, so for a while now, we're going to do special relativity. In other words, we're going to deal with a flat space. We're going to deal with a flat space, and we're going to wind up looking at it in polar coordinates. Not polar coordinates as Euclid would have used them, but polar coordinates as Minkowski would define them. The purpose is to define the notion of a uniformly accelerated reference frame. Now, in special relativity, there is a bit of a problem with the notion of uniformly accelerated reference frame. 
If you have a bunch of points which are a fixed separation, and you start accelerating them, and you keep them at that same fixed acceleration, and you're going to accelerate with uniform acceleration, then they will maintain the distance between them, but distances are supposed to transform when you accelerate something. So if we were to just start these points moving with a uniform acceleration, we would actually discover that in the rest frame of the points, the distance between them was what, getting larger and larger? I think larger and larger. That would mean, for example, if there were strings between them, that as they start moving in an attempt to keep their distance uniform, the string would stretch and eventually break. That's not what normally would be thought of as a uniformly accelerated reference frame. What's nice about a uniformly accelerated reference frame is that the distance between points stay the same, but if you had strings connecting the points, they wouldn't get stretched, all sorts of things like that. Another thing about a uniformly accelerated reference frame in special relativity, if you wait long enough, uniform acceleration will lead to moving with greater than speed of light. So uniform acceleration, to the extent that it exists and makes good physical sense, is not as simple as just moving these points all with the same acceleration. I'm going to tell you how to construct what a relativist would call a uniformly accelerated reference frame. But to do so, I want to go back a step. I want to go back to Euclidean space and talk about polar coordinates, because the uniformly accelerated coordinate system is the analog of polar coordinates. Okay, so let's start with ordinary space, y and x. And I want to introduce polar coordinates. Polar coordinates mean an angle and a radius. So here's a point. It's characterized by a radius and an angle. And here's some equations which you're all familiar with. Okay, you know those equations. You know that cosine squared plus sine squared theta is equal to one, which is the same as saying that x squared plus y squared is equal to r squared. Finally, two other equations. Cosine of theta is equal to e to the i theta plus e to the minus i theta over two, and sine theta equals e to the i theta minus e to the minus i theta over two. You can check that cosine squared plus sine squared is equal to one, just simple identity for all possible theta. These are the basic equations governing polar coordinates. Ah, yes, in this one. Thank you. And I in that one, very good. Okay, now, what's the equation of a circle? The equation of a circle is just that r is equal to a constant, and there's a circle. Imagine a point moving around with uniform velocity, uniform angular velocity. With uniform angular velocity, the acceleration of that point is uniform on the circle. It's uniform, the acceleration is uniform on the circle. What direction does the acceleration point? To the center. Okay? But the magnitude of the acceleration is constant. Okay, now, what does this have to do with relativity? Relativity, we write basically the same equations to define a uniform accelerated point. That's the light cone. What is a uniformly accelerated point in special relativity? It's a point moving on a hyperbola. It is accelerated. It's not moving with constant velocity. Constant velocity would mean it had a, you know, that its direction would be fixed. It has no velocity over here. It's moving straight upward. This is time. Time. This is x. 
At this point, the velocity is zero. At this point, it's moving forward. As it moves up and up and up, it gets closer and closer to the speed of light, but never exceeds it. Okay? And the question is, what is the equation? What are the analogous equations for hyperbolas, basically? This is a hyperbola. All right, so let me just write them down. Cosine and sine of theta are replaced by the hyperbolic versions. Hyperbolic cosine, and let's call the hyperbolic angle omega. Omega is what increases as you move along the hyperbola. It's a kind of angle, and it plays the role of the angle in Euclidean space here. It's measured off along the curve here, along the hyperbola. And this is hyperbolic sine of omega. The definition of hyperbolic cosine and sine is very similar. This one is equal to e to the omega plus e to the minus omega, all over 2, and this one here is e to the omega minus e to the minus omega, over 2. The symbol over here means divided by 2. No i in this one. No i in hyperbolic geometry. You can check very easily that cosh squared omega minus sinh squared omega is what? One. Sinh, by the way, is pronounced cinch. This is analogous to that equation over here. And now define x, the analog of this equation over here: x is equal to r cosh omega, and t, the vertical coordinate, is equal to r sinh omega. These actually define r and omega. These are the definition of r and omega, if you like. It's a coordinate transformation. It's a coordinate transformation from ordinary x and t, flat space x and t, or just Minkowski coordinates x and t, to r and omega. Which one do you think is like time, omega or r? Can you tell? Well, if you're on this horizontal axis, where cosh omega is just 1, then if you increase r, you're just going out along this axis. So r is like a space coordinate. You're just going out along this axis. If you increase omega from this point, you're traveling upward, in analogy with traveling around the circle. Traveling around the circle, the analogy is moving upward along the hyperbola. So omega is like time, a time-like coordinate, and r is a space-like coordinate. Now, just as there's a uniformity to the circle, at any point you can define r, and it's constant on circles, you can define r in this case, and it's constant on hyperbolas. So here's the hyperbola r equals 2. Here's the hyperbola r equals 1. Here's the hyperbola r equals 3. And finally, if I take x squared minus t squared along one of these hyperbolas, it's just given by r squared times cosh squared minus sinh squared, but that's just 1. So a hyperbola is just fixed r, constant r; let omega vary, let the angle vary, and that defines the hyperbola x squared minus t squared equals r squared. So that defines coordinates. Those coordinates, in special relativity, are the closest thing that exists to uniformly accelerated coordinates. This point here moves in an accelerated trajectory. This point moves in an accelerated trajectory. The distance between here and here is the same as the distance between here and here. That, you can check, is the same as the distance between here and here. That means that as the Lorentz frame of reference rotates, not rotates, but accelerates, the distance between neighboring particles, let's think of these as a sequence of particles, stays the same.
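A few lines of sympy (my own check, nothing beyond what was just said) confirming the identities behind these coordinates: cosh squared minus sinh squared is one, each curve of constant r is the hyperbola x squared minus t squared equals r squared, and the coordinate velocity along such a curve is the hyperbolic tangent of omega, which never reaches the speed of light.

```python
import sympy as sp

r, w = sp.symbols('r omega', positive=True)
x = r * sp.cosh(w)                                   # x = r cosh(omega)
t = r * sp.sinh(w)                                   # t = r sinh(omega)

print(sp.simplify(sp.cosh(w)**2 - sp.sinh(w)**2))    # 1
print(sp.simplify(x**2 - t**2))                      # r**2 : constant r really is the hyperbola x^2 - t^2 = r^2
print(sp.simplify(sp.diff(x, w) / sp.diff(t, w)))    # sinh/cosh = tanh(omega): dx/dt along the curve, always below 1 (c = 1)
```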
What's unusual here, and different than an ordinary accelerated frame of reference, is that the acceleration along these trajectories is different. Let's look at a trajectory which comes in very, very close in here, very small r. It makes a sudden change of direction. That indicates that this trajectory has a much larger acceleration. Incidentally, the accelerations, the proper acceleration, the actual acceleration that an observer would feel everywhere along this trajectory is uniform. Same acceleration here as here, as here, as here. Just in the same way that a particle moving in a curve here feels the same acceleration inward as here, as here. The acceleration along the trajectory is uniform, a proper acceleration. What does a proper acceleration mean? It means the acceleration felt by somebody over here. How much of a push he feels being accelerated over here? It's uniform along these things, along these hyperbolas, everywhere is the same, but the acceleration is different here, here, here, and here. The message is this, that if you want to define uniform acceleration, you have to have acceleration which is constant in time is okay, but you're necessarily going to have different accelerations at different points of space, at different r. This never exceeds the speed of light, and the proper distance, the proper distance between points stays fixed. Remember, proper distance is, yeah. If you make it here, it's not an angle, it feels like it's a view of the city. That's right, it's called a hyperbolic angle. It's not an angle, it doesn't go from zero to 2 pi. That's exactly right. It goes from minus infinity to infinity. Yeah, I'll tell you what proper acceleration is. Yeah, it's a derivative with respect to proper time. Yeah, proper acceleration would be d second x mu, not by dt squared, but by ds squared, where s would be, or d tau squared, excuse me, d tau squared. That's the notion of proper acceleration. You measure the acceleration not with respect to ordinary time, but with respect to the proper time. Okay. It's the rate of change of the proper velocity, which is dx by d tau, with itself with respect to tau. It's everywhere the same along these curves, but differs from one point to another. The further out you are, the smaller the acceleration. The straighter, the less curved the trajectory. In fact, you can write an equation for the acceleration. For the proper acceleration. The proper acceleration, a, along such a trajectory, depends on only one thing. It depends only on r. Okay, it depends only on r. In units in which c is equal to one, the acceleration, let's take a particular curve here and call it capital R. Capital R represents its distance along here. Capital R, not little r, but capital R is just a particular value of little r. Then along that hyperbola, the acceleration is given by one over capital R. The smaller capital R, it's actually the same thing as going on here. The smaller little r for the bigger the acceleration needed to hold it in place. Okay. Okay, so the acceleration is one over r. Now, this doesn't make good unit sense. What's the units of acceleration? Length divided by time squared, right? Okay, let's write that as length squared divided by time squared times one over length. Okay, so acceleration has units of one over length times velocity squared. Okay. If we work in general units, not set the speed of light equal to one, the acceleration here is equal to c squared over r. That means that for a fixed r, the acceleration is very, very large. 
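Here is a short symbolic check of the claim that the proper acceleration along one of these hyperbolas is constant and equal to c squared over r. The explicit parametrization of the worldline by its own proper time is my own filling-in of a standard formula; it is not written out in the lecture.

```python
import sympy as sp

R, c, tau = sp.symbols('R c tau', positive=True)
t = (R / c) * sp.sinh(c * tau / R)     # the observer riding the hyperbola r = R,
x = R * sp.cosh(c * tau / R)           # parametrized by its own proper time tau

# sanity check that tau really is proper time: c^2 (dt/dtau)^2 - (dx/dtau)^2 should equal c^2
print(sp.simplify(c**2 * sp.diff(t, tau)**2 - sp.diff(x, tau)**2))     # c**2

# proper acceleration: the Minkowski magnitude of the second proper-time derivative
a_squared = sp.diff(x, tau, 2)**2 - c**2 * sp.diff(t, tau, 2)**2
print(sp.sqrt(sp.simplify(a_squared)))                                  # c**2/R, the same at every tau
```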
You have to go to very, very large r before you're talking about a moderate acceleration, because of the square of the speed of light here. Yeah, question? No, okay. Is r the x-intercept of the curve? Yeah, r is the x-intercept of the curve. That's exactly right. And this acceleration is exactly the acceleration, the ordinary acceleration, that you would experience at this point moving on that curved space-time trajectory. Yeah. The more curved it is over here, versus here and here, the more acceleration. Okay, so I'll use c equals one generally, but keep in mind that this acceleration is very big unless r itself is very big. Okay, let's go a little further. Yeah? It looks like what curve you're going to get is very dependent on where you put the origin. That is correct. That is correct. The accelerated reference frame is only defined relative to a particular origin. Okay? That's right. Yes, that's correct. But nothing physical should be relative to the origin; we'd better find that no real physics depends on it. I'll tell you what we're doing. We're replacing the somewhat arbitrary set of coordinates. And in those arbitrary coordinates, we're going to write the equation of motion of a geodesic. And we're going to see that the equation of motion of a geodesic looks like a particle falling in a uniform gravitational field. But only if we really work it out in detail. Okay, so I'll show you what we're doing. We'll get there. We'll get there. Yes, you're right. There are many possible coordinate systems where you could put the origin any place you like. And for that particular coordinate system, we have a well-defined r and omega, but we could move the origin if we liked. But let's not. Let's keep the origin fixed. Okay, so here is the acceleration. And now let's talk about the metric. Let's talk about the metric. First, the metric in ordinary polar coordinates. Let's look, in polar coordinates, at the ordinary distance squared. That's equal to dr squared, plus what? Anybody remember? Something involving d theta squared. No, no, no. This is not a sphere. This is not a sphere. This is just a plane. You're thinking about the sphere. It's just r squared: r squared d theta squared plus dr squared, on the plane. This just says that for a fixed separation of theta, the actual distance along here increases linearly with r. So for a given d theta, the distance associated with d theta increases linearly with r. Therefore, the square of the distance increases quadratically. This is the metric of the flat plane in polar coordinates. Now notice, first of all, that the metric is not the Kronecker delta. It has dr squared, that's one, but it has r squared d theta squared, so that's r squared. If we were to write the metric as a two-dimensional matrix for this, it would look like this. It wouldn't be the Kronecker delta. It has this r squared here, position dependence. Why? Because the coordinates are curved. Space is not curved. Completely flat space, curved coordinates. The analog here is d tau squared is equal to r squared d omega squared minus dr squared. Now, I've taken only two dimensions. For the moment, I'm going to ignore y and z. We don't need y and z now. We're going to be thinking about a particle falling in a gravitational field; what plays the role of the vertical axis will be the x-axis, and y and z will be the other coordinates, and they won't matter for the problem we're going to discuss. So the problem we're going to discuss involves just r and omega: time and distance from the origin.
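As a sanity check on "curved coordinates, flat space" (my own sketch, with the coordinate substitutions taken straight from what was just written), one can plug the transformations in directly and watch both metrics come out: x = r cos theta, y = r sin theta reproduces dr squared plus r squared d theta squared, and x = r cosh omega, t = r sinh omega reproduces r squared d omega squared minus dr squared.

```python
import sympy as sp

r, th, w, dr, dth, dw = sp.symbols('r theta omega dr dtheta domega')

# Flat plane: x = r cos theta, y = r sin theta
x, y = r * sp.cos(th), r * sp.sin(th)
dx = sp.diff(x, r) * dr + sp.diff(x, th) * dth
dy = sp.diff(y, r) * dr + sp.diff(y, th) * dth
print(sp.simplify(dx**2 + dy**2))                    # dr**2 + r**2*dtheta**2

# Flat space-time: x = r cosh omega, t = r sinh omega  (c = 1)
X, T = r * sp.cosh(w), r * sp.sinh(w)
dX = sp.diff(X, r) * dr + sp.diff(X, w) * dw
dT = sp.diff(T, r) * dr + sp.diff(T, w) * dw
print(sp.simplify(dT**2 - dX**2))                    # r**2*domega**2 - dr**2
```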
That's the metric. We want to see what that metric has to do with gravitation. Gravitation is supposed to have something to do with the metric. So here's what we're going to do. Remember, as we move outward here, the acceleration is changing. If we go very, very far away, let's think about a place far enough away that the acceleration is picking acceleration. What's a good acceleration? How about 10 meters per second per second? Let's go out far enough out here that the acceleration is g, 10 meters per second per second. It's accelerating to the right. So where was our formula? Our formula was that the acceleration is equal to 1 over r times c squared. And we're going to set that equal to g. So that means r is equal to c squared over g. We have to go very, very far away. Measured in meters, what is c? c is 180, no, not 180, 6,000, about 3 times 10 to the 8th meters per, 3 times 10 to the 8th, so 3 times 3 is 10 times 10 to the 16th. This number here is 10 to the 17th. This is only 10, so this whole thing is 10 to the 16th. We have to go out 10 to the 16th meters before we find a place which is accelerating with the ordinary gravitational acceleration. But let's go out there. Moreover, if we don't move too much along the r direction, this acceleration won't change much. Let's imagine we fix on a certain distance r, given by this. If we set c equal to 1, of course, then this is just 1 over g. But we have to keep in mind that 1 over g corresponds to a very, very large distance. The 1 is the thing which is big. It's really c squared. Let's go out through a distance r where the acceleration is g and then think about a region, a region of r which is close to r. This corresponds to being near the surface of the earth, not allowing things to change too much. So we're going to move a little bit away from that r, and we're going to call a distance along that y. I guess I called it y, not x. So the distance from this hyperbola, the space-like distance from this hyperbola, is y to the specific hyperbola at that place. In other words, I'm writing that r, little r, is just equal to big r, which I'm going to keep fixed, plus y. Making new coordinates, these coordinates are sort of tailored to be around this point, which is moving with the gravitational acceleration g, the neighborhood of that point. And now I'm going to rewrite the metric. I'm going to rewrite the metric. Rewrite the metric. Let's see. Where can we rewrite that? Let's erase this. Just rewrite it. d tau squared is equal to r squared. That's this little r squared, but that's big r squared, plus 2ry, plus little y squared, right? All times d omega squared, minus dr squared. So now I have an even more complicated-looking metric here. Well, it's a little more complicated, not much, and here it is. But we're going to focus about the region of r. So let's divide out r from this. Let's write it in this form. 1 plus 2y over r, plus y squared, is it over r squared? Yes, r squared. I'm factoring r squared out of here, and now I'm going to multiply this r squared d omega squared, minus dr squared. One last step. I'm going to call r times omega, big r, times omega. I'm going to give it a new name. I'm going to call it little t. I've called the original time here big t. I'm going to call this thing little t. Where is it? This is little t. And now I get a metric that looks like this. 1 plus 2y over r. What about this? This is much smaller than this. Remember that r is enormously big. r was how big? 10 to the 16th meters. 
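Two quick checks of the numbers and the algebra here; the rounded values of c and g are my inputs, and the symbolic steps mirror the ones just described, with d omega written as dt over R.

```python
import sympy as sp

# How far out do you have to sit for the acceleration to be an everyday g?  R = c^2 / g.
c_val, g_val = 3.0e8, 9.8                      # rounded standard values, m/s and m/s^2
print(c_val**2 / g_val)                        # about 9.2e15 meters, roughly 10^16 m, about a light-year

# Expand the metric around r = R, with y = r - R and little t = R*omega  (c = 1 units)
R, y, dt, dy = sp.symbols('R y dt dy', positive=True)
dtau2 = (R + y)**2 * (dt / R)**2 - dy**2       # r^2 domega^2 - dr^2, with domega = dt/R and dr = dy
print(sp.expand(dtau2))                        # (1 + 2*y/R + y**2/R**2)*dt**2 - dy**2, written out term by term
print(sp.expand(dtau2).subs(y**2, 0))          # dropping the tiny y^2/R^2 piece: (1 + 2*y/R)*dt**2 - dy**2
```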
And y, I'm assuming that y is a couple of hundred feet or something. We're talking about falling in a gravitational field. So y is a small number compared to r. y squared over r squared is much, much smaller. r is small. y is 10 feet or 100 meters. How far did that guy fall? 24 miles. That's tiny compared to 10 to the 16th meters. And y squared over r squared is even much smaller. So I'm going to drop this. So we did a whole bunch of stuff. And at the end of the day, we get a metric that apart from this, which is small, just looks like good old metric, dt squared minus dr squared. It just looks like space and time in the more or less ordinary way. d times squared minus d space squared. But with a little correction, a little correction, and that little correction is what accounts for gravitation in this accelerated reference frame. Now, first of all, keep in mind that we're really talking about flat space. So far, we have not introduced any curvature. This is really flat space. So any gravitation that we find here is, in a sense, the same fake gravitation that we find in the accelerated elevator. We're studying physics in an accelerated coordinate system. It's the elevator being pulled in this direction here. And what do we expect to find? We expect to find that in that elevator, there's an effective gravitational field. The effective gravitational field is associated with this over here. It's associated with this over here. So let me tell you what the connection is now. And then we're going to, well, maybe instead of telling you, what we'll do is work out the equation of motion of a particle in a metric like this. All right. Here we have a metric. And that's all there is to it. It's really simple. Oh, what is r? r is related to the acceleration. Where do we write it down? We said that the acceleration is c squared over r. If I set c equal to 1, let's set c equal to 1, then, oh, sorry, g. g is c squared over r. The gravitational acceleration, we chose r so that it was accelerating with 10 meters per second per second. That gave us this equation. In c equals 1 units, this is just 1 over r. So 1 over r is g. 1 over r is g. And look what we have. We have just 2y times g. Have you ever seen the expression y times g in studying gravitation in the uniform field? The potential energy. y times g. Actually, it's not quite the potential energy. It's the potential energy divided by the mass, or just called the gravitational potential. This is y times g is the gravitational potential. And this is 1 plus twice the gravitational potential. That's extremely general. In any kind of gravitational field, as long as the gravitational field is more or less constant with time, not doing anything too radically relativistic, this coefficient multiplying dt squared is always 1 plus twice the gravitational potential. But why do I call this the gravitational potential other than that it just looks like the gravitational potential? Because if I work out the equation of motion of a particle in this metric, I will find that the equation of motion is essentially, as long as the particle is moving slowly, as long as there's a good Newtonian approximation, as long as things are not too relativistic, then we'll see that that particle falls along the y-axis, like a particle in a uniform gravitational field. So here's our basic metric, and we want to find out how particles move in that metric. What's the rule for how a particle moves? Anybody know? Yes, sorry, it could be r or y. 
You're absolutely right, we could set this equal to y now, why is that? Because dr is equal to dy. That's a good point, okay, dy squared. Right, so we're writing the metric in the y-t coordinates. This is only good for the vicinity, we made some approximation here, we threw away a term, we said it was very small, it's only good in the vicinity of this distance from the center. In other words, it's only good in the region where the acceleration is really g. So question about big r, it seems like, I mean, you picked that, so the constant acceleration curve would approach the asymptote, t equals x, basically, or I mean, that's what you did. Is this sort of a computational trick or whatever device? To do what? I mean, you could have constant acceleration beginning at any point along x. That's correct, you could, they will always be hyperbolas, they won't asymptote to the same, that's right, that's correct. Yeah, so that's right, so there's a uniformly accelerated frame everywhere, but that's exactly the same as an ordinary physics. There's a uniformly accelerated frame with an origin over here, there's a uniformly accelerated frame with an origin over here, in other words, there's just a bunch of different elevators whose floors are at different height, at a particular instant. Right, that's all that's going on here. And we can think of this curve here as being the floor of the elevator. The elevator is being pulled in that direction now, so it's an elevator on its side, the elevator is being pulled this way. And somebody is experiencing a gravitational field and being held down, I'm sorry, you need some arms. Right, so the elevator is being pulled that way. We can take this coordinate R to be the bottom of the elevator and the height above the floor of the elevator we're calling Y. Okay, Y is the height above the elevator and here's the metric, what do I do with it? Here's the metric of space-time in the elevator, in the uniformly accelerated elevator. All right, we had to do a little bit of work to get there, but that's what it is. The metric tensor in the vicinity of a point accelerating with the acceleration of gravity near the Earth's field, that's it. Okay, now what are the rules about particle motion? Most of you probably know them. The rule about particle motion is that particles move on geodesics. Not geodesics of space, but geodesics of space-time. In other words, we take the metric of space-time, whatever it is, and we go through exactly these same operations and here is the equation of motion of a particle. This is the equation of motion, was the equation of motion which says geodesic. Go straight ahead, but not straight ahead in space now, but straight ahead in space-time, but otherwise the equations are the same. Let's write them out. I want to write them slightly differently. d second x in by, now, we don't want to, yeah, okay, d tau squared. This is called the proper acceleration. As long as the elevator is moving slowly, in other words, down here, if it hasn't been accelerating long enough to get up near the speed of light, then the proper time and the ordinary time are essentially the same, and this is just the ordinary acceleration. This could be, what would we choose? We would choose x to be y. We want the y component of acceleration along that direction. So this would be the y component of acceleration. That's the left-hand side, and the right-hand side is minus gamma. Now, the n component stands for y. This is the y component of motion. 
So we have minus gamma y for the y direction. m, r, and what is t? t is dxr by ds dxm by ds. Okay, now fortunately, there's a whole bunch of these now. m and r can run over all the possible coordinates, and they're all true for. So it looks like, well, there's basically 10 such contributions, but most of them are extremely small, as long as the elevator is moving slowly, and as long as the object that we're interested in, namely the object, the particle, whose coordinate is y, as long as all of the motion is slow, then only one of these combinations is significant. t is dt by ds for slow motion, remember? It's essentially one, because s and t are d tau, excuse me, this should be tau, tau for the relativistic case. These are the components of four velocity, but the components of four velocity for a particle which is moving slowly, the only component that's important is the time component. The time component of velocity, what does that mean? Whoever heard of the time component of velocity? Well, it's just one. It's just one. It's derivative of time with respect to time. In this case, derivative of time with respect to proper time, but proper time and time are almost the same. So the two entries here with r and m standing for time, those are big, and they're just about one. They're both equal to one. What about a space component divided by d tau? Well, that's proportional to the actual ordinary spatial velocity. We're assuming the spatial velocity is small, small compared to the speed of light. So the only important contribution here comes when r and m are time components, and that's minus gamma y time time. d second y by d tau squared is equal to minus gamma y time. That must be the gravitational force. The Christoffel symbol must be the gravitational force. Must be the derivative of the gravitational potential energy. So let's go back to the definition now. Here it is up here. Here's the Christoffel symbol that we're interested in. And what do we need? We need the one with two time components and one y component. So let's see what we get. What did I have here a minute ago? P, what was this, P? And what's down here? M and R? N and R, right? No. R and S. R and S. Yeah. OK. And we're going to throw away some small things. OK. So G, P, N, what is this P here? This P here is supposed to be y. So there's a G y something. A G y something. But the only G y something is G y y. And that's one, or maybe minus one. I'm going to lose the signs because I'm getting tired. No. The G y y is equal to plus one. So the only G that appears here is G y y. The only component of the metric that has a y index. All right. So this one is just one G y y. And what do we get here? We get partial of G y something R with respect to, what is this? These are supposed to be time components. And this is supposed to be y component. G y R with respect to time. This one is the same thing. G y R, that's zero. There is no y R component here. That's just zero. There's only one component. There's only one term. And it's this one. And if you work it out, it gives you minus derivative of G. Now let's say R and S are supposed to be time components. So this is time time divided. And what is N? N is supposed to be y. This whole Christoffel symbol is nothing but the derivative or minus the derivative, one half. There is a one half there. One half minus. Derivative of the time component with respect to y with a factor of one half. That's what's on the right hand side over here. 
I think this should be plus derivative of G time time with respect to y. Does this look at all familiar that the right hand side of an equation of motion contains a derivative with respect to y? Potential energy, right? Force is equal to derivative of potential energy. Yeah. So somehow, GTT must be the potential energy. But look at this. Here's GTT right here. Here is GTT. This term is constant. It doesn't have a derivative. But this term does have a derivative with respect to y. The derivative of this term with respect to y is just 2G. So this is just equal to, oh, is it two? A minus or plus? There's a minus. Yeah. There's a minus that cancels. There's a minus. There was a half here. Yeah. The half is canceled, and there is a minus sign. I'm too tired to remember where the minus sign came from the fact that the tau squared is minus GTT. The derivative of GTT with respect to y is just 2G. Get rid of this half. And what is this all equal to? It's equal to minus G. That's the equation of motion of a particle in a uniform gravitational field. So we went through what is a rather complicated derivation. But what we learned is the following. That first of all, space time has a metric. And the metric can have fairly complicated structure in arbitrary coordinates. In uniformly accelerated coordinates, it has this extra term there. And the equation of motion for a geodesic, at least as long as things are going slowly and as long as the Newtonian approximation, the equation of motion for a geodesic is just Newton's equation in a uniform gravitational field. Uniform gravitational field, constant acceleration, it's what we expected. But to do it properly using metric and using Christoffel symbols and all these things, it's a fairly complicated procedure. Einstein, of course, guessed this. He guessed all of this in the opposite direction. He didn't know about Christoffel symbols. He didn't know about uniform accelerated coordinate systems. And some ways along here is somehow the place where we started. I don't know exactly how we started. But all along here, this was the observation that motion is geodesics in a particular metric and the metric of uniform acceleration just looks like this. So we've come back. We've come around sort of full circle from the first lecture where we talked about accelerated elevators giving rise to gravitation. And we've come around full circle. So far, we haven't gotten to real gravitational fields. These are not real gravitational fields because they really correspond to flat space up there. If we were to take this metric and calculate the curvature tensor, it would be exactly zero indicating that they do exist coordinates where the metric has the simple form like this. So this gravitation that we're experiencing here is really exactly how we celebrate gravitation due to an accelerated frame of reference, not due to real gravitating matter. Now, we can sort of guess what the effect of real gravitating matter would be instead of putting 2y times little g. What's the gravitational potential in a, let's say, we have a gravitating object. And again, we can use a y coordinate over here. Yeah, that's one over y. Right. So we can expect that when we study an actual gravitational field of a gravitating object, we're going to get Newton's constant here divided by y. And that's the Schwarzschild metric, but not quite. We're going to work out the Schwarzschild metric. No, you made a mistake. What's the potential energy of a gravitating object? Try it again. What did you say? 
You got the sign wrong. You got the sign wrong. Right. So we get this. Oh, but that's weird. Because that means that the time that means when y is large, this is small. That's good, because this is positive. But something crazy happens at the point where, sorry, this is not 2g. This is 2gm, the mass of the gravitating object. Something happens when y is equal to 2gm. This coefficient is just 0. Something very peculiar there. Is that to do with approximation? No. This has to do with the horizon of the black hole. Right, so I plugged in the gravitational potential here, guessing that the same kind of thing would happen. And it does. This gttt, this gttt by dy, that is just the derivative of the gravitational potential. So if we want our equations to work out and look like a falling mass in a gravitational field, we want to plug in the gravitational potential here, gm over y. But then we find something really odd. This coefficient vanishes somewhere. That vanishing is the horizon of the black hole. All right, so next time we're going to get into the subject of the Schwarzschild metric, we're not going to derive it now. We need field equations to derive it. We haven't discussed field equations yet. We've only discussed geometry, and we've discussed the motion of a geodesic in a gravitational field. And so this is just a little demonstration of how geodesics give rise to Newton's equations. And then we jumped ahead and said, OK, let's guess that that's more general and see something like this. Immediately, we discover something odd, and that oddness is the oddness of the black hole metric. OK, I think we're finished for tonight. Yeah? It should be what? It should be what? Where you have the gamma and the data. What should it be? It should that be the eta? No, no, no, no, no. Look, the metric is not eta. The metric has this extra term here. Now, you said, but wait a minute. The original metric, the original flat space metric, was just eta. And that's correct. If you were to have done this calculation in the original flat space coordinates, you would have found no force. That's because the eta is constant, and it has no derivative, has no Christoffel symbol. So if you were to work in the original coordinates, you would find no gravitational force. That's not surprising. If you're an outer space, and you have an elevator, and the elevator is not being accelerated, no force. Things move on straight lines in that frame. In the accelerated frame of reference, particles move on curved trajectories, and those curved trajectories are the, they're not really curved trajectories, but they look like curved trajectories in the new coordinates, and their motion is governed by Newton's equations with a right-hand side. You see, the fact that the Christoffel symbols are not a tensor, that's important. If they were a tensor, then if they were 0 in one frame, they would be 0 in every frame. And since the derivative is the gravitational force, it would tell you that if the gravitational force is 0 in one frame, it's 0 in every frame. But that's not true. The gravitational force in a non-accelerated frame is 0. The gravitational force in an accelerated frame is not 0. That's the equivalence principle, equivalence of gravitation and acceleration. So it's important that this thing on the right-hand side is not a tensor. If it were a tensor, and it was 0 in one frame, it would be 0 in every frame. That's the idea of the value of the force Newton's equation, depending on what frame of reference you're in. 
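Here is a compact symbolic version of the calculation that was just done at the blackboard, with the signs kept track of by the machine rather than by memory. The sign conventions (g_tt = -(1 + 2 phi), g_yy = +1), the helper function, and the rounded values of Newton's constant and the solar mass at the end are my own bookkeeping and inputs; the physics it reproduces, namely that Gamma^y_tt is the derivative of the potential, that slow-motion geodesics give Newton's equation, and that the guessed coefficient 1 - 2GM/y vanishes at the horizon, is exactly what the lecture states.

```python
import sympy as sp

y, g, G, M = sp.symbols('y g G M', positive=True)

# Slow motion in the metric dtau^2 = (1 + 2*phi(y)) dt^2 - dy^2 with c = 1, so g_tt = -(1 + 2*phi), g_yy = +1.
# The only Christoffel symbol that matters is Gamma^y_tt = -(1/2) g^yy d(g_tt)/dy, and with dt/dtau ~ 1
# the geodesic equation reduces to d^2y/dtau^2 = -Gamma^y_tt.
def y_acceleration(phi):
    g_tt = -(1 + 2 * phi)
    Gamma_y_tt = -sp.Rational(1, 2) * sp.diff(g_tt, y)     # g^yy = 1
    return sp.simplify(-Gamma_y_tt)

print(y_acceleration(g * y))          # -g        : the uniform field of the accelerated elevator
print(y_acceleration(-G * M / y))     # -G*M/y**2 : Newton's inverse-square law

# The coefficient 1 + 2*phi with phi = -G*M/y vanishes at y = 2*G*M in c = 1 units:
print(sp.solve(sp.Eq(1 - 2 * G * M / y, 0), y))            # [2*G*M]
# Restoring c, the coefficient is 1 - 2*G*M/(c**2 * y), so it vanishes at y = 2*G*M/c**2.
# For one solar mass (rounded standard values) that is about 3 kilometers, the horizon scale:
print(2 * 6.674e-11 * 1.989e30 / 3.0e8**2)                 # about 2.95e3 meters
```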
Can you give a little bit more feel for why we needed to have a huge R to make this? To make the acceleration modest. It was only in order to be working in a reasonable non-relativistic framework. Yeah, it was in order to compare with non-relativistic framework. We don't want the acceleration to be so big that in a small amount of time we get up near the speed of light. We want the acceleration. But I chose, I decided just to put in the ordinary gravitational acceleration here. So that's c squared over R. Well, that's just the way it is. In these coordinates here, the place where the acceleration is modest is very far away. After all, the gravitational field of the Earth is pretty modest. You can think of a lot stronger gravitational fields, and we will talk about them. OK. For more, please visit us at stanford.edu.
(October 15, 2012) Leonard Susskind moves the course into discussions of gravity and basic gravitational fields. The Fall 2012 quarter of the Modern Physics series concentrates on Einstein's theory of gravity and geometry: the General Theory of Relativity. This course is the fourth of a six-quarter sequence of classes that explores the essential theoretical foundations of modern physics.
10.5446/15034 (DOI)
Stanford University. Okay, shall we begin? Any questions before I begin? Quick ones. As I was saying, a good notation will carry you a long way. When a notation is good, it just sort of automatically tells you what to do next. I mean, you can do physics in a completely mindless way. Just, you know, it's like having tinker toys. It's pretty clear which end the stick has to go into, it has to go into the thing with the hole. You can't try putting a hole into a hole or forcing a stick into a stick. There's only one thing you can do. You can put the stick into the hole, and then the other end of the stick can go in another hole. It gives you some more holes that you can put sticks into. And the notation of general relativity is much like that. If you follow the rules, you almost can't make mistakes. But you have to learn the rules. The rules are the rules of tensor analysis, tensor algebra, tensor analysis, and the question which we're aiming at now is to understand enough about tensor analysis, metrics, to be able to distinguish a flat geometry from a non-flat geometry. Now, that seems awfully simple. Flat means like the plane, non-flat means with bumps and lumps in it, and you would think we could tell a difference very easily. Sometimes it's not so easy. For example, as I mentioned last time, if I take this page, that page is flat. If I roll the page like this, it looks curved, but it's not curved. It's exactly the same page. The relationship between the parts of the page, the distances between the letters on my page, and so forth, they don't change, at least not the distances between the letters measured along the page. They don't change when I do various things with a page. So a folded page, or a, I don't want to call it curved, because curved would be the wrong thing, but what's the right word for, what's that? Well, it's not really, no, I don't want to use the word deformed either. Just a curl. Let's call it curled. You stretch it, and you don't deform it. That is not introducing curvature into a space. Now, technically it introduces what's called extrinsic curvature. Extrinsic curvature has to do with the way a space, in this case the page, is embedded in a higher dimensional space, three-dimensional space of this room. Here's the page. When the page is laid out flat like that, it's embedded in the embedding space in one way. When it's curled like this, it's embedded in the same space in another way. And one says that there is extrinsic curvature. The extrinsic has to do with the fact that it's in an external space, and it has to do with how the space, how this page is embedded in space. But it has nothing to do with the intrinsic geometry. If you like, you can think of the intrinsic geometry as the geometry of a tiny little bug that moves along the surface, cannot look out of the surface, only looks along the surface, crawls along the surface. He may have surveying instruments by which he can measure distances along the surface, can draw triangles, measure the angles of the triangles within the surface, and do all kinds of interesting geometric studies, but never looks out of the surface, and therefore never detects or notices the fact that it might be embedded in the higher-dimensional space in different ways, just learns about the intrinsic geometry. The intrinsic geometry, meaning the geometry, that is independent of the way the surface is embedded. General relativity is about a Riemannian geometry, a lot of geometry, is all about the intrinsic properties of the geometry. 
It doesn't have to be two-dimensional, it can be any number of dimensions. And the basic thing which defines the geometry and distinguishes it from other geometries is to imagine sprinkling a bunch of points, I don't want to ruin my page here, let's do it on the back of the page, sprinkle a bunch of points, draw lines between them, sort of triangulate the space, and then state what the distance between every pair of neighboring points is. Specifying those distances specifies the geometry. Sometimes that geometry can be flattened out on the page without changing the lengths of any of these little links. Let me give you an example. If I draw a bunch of triangles whose edges represent the links, oops, I'm not doing it very well, a triangular lattice. That's a triangular lattice, it's built up out of triangles, and if every triangle is an equilateral triangle, this, this, this are all equal, all of these are equal, then in fact, imagine that at the nodes here, we can, you know, what's the right word? They're like hinges, they're hinged. The hinging would allow us to sort of fold this thing, but as long as we keep these lengths the same, and don't change them, keep them all equilateral triangles, this can always be laid out on the desk as flat. On the other hand, if we were to, let's say, take several of these links, all the ones coming into some point over here, this one, this one, this one, this one, this one, this one, and this one, six of them, and we were to double the size of them, then this point would have no choice but to come up out of the blackboard, or into the blackboard. There would be a bulge at that point, and you can't push that bulge back into the blackboard and flatten it out without changing the lengths of the links. A curved space is basically one which cannot be flattened out, in fact, without distorting it. And it's an intrinsic property of the space, not extrinsic. Okay, so our goal, as I said last time, matches closely the question of whether there is a real gravitational field present, or whether the apparent gravitational field is just due to an artifact of funny coordinates, funny space-time coordinates. The question of whether a space is really flat or really curved, or whether it's just that we've presented that space with curved coordinates. Those two mathematical problems, is there really a gravitational field there, or is it just an artifact of curved space-time coordinates, and is the space of the top of this table really flat, even though it may look curved because I introduce, I won't draw on the table, but curved coordinates of various kinds. So the question is, how do you tell, really, if a space is flat? What are you given? Typically, you're given the metric tensor. But before we go into that question, and it's a hard question, it's going to take us a while; we probably won't get completely to the point of answering the question tonight. I have a question? Yeah. And I don't mean this facetiously, I really mean this sincerely. What is flat? How do we define it? You'll see, we're going to define it. We have not defined it yet in any mathematical sense. I tried to give you an intuition, it means you can flatten it out on the table, but of course that's not good enough. I will tell you exactly what it means. In fact, I told you last time, but I'll tell you again.
All right, what it means is that the metric tensor can be chosen to be just the Kronecker delta symbol, but we'll come to it. All right, before we come to it, we need to understand a little bit about tensors. We've talked about them a little bit, but I want to formalize it tonight a little bit. Tensors, scalars and vectors. Scalars and vectors are special cases of tensors, and tensors are a general category of objects. They have indices and they transform. The most important thing is that they transform when you go from one coordinate system to another. In particular, we're going to be interested in spaces with quantities, fields on them, so that at every point in space, there may be some quantities associated with that point, and those quantities will be tensors. There will be tensors, there will be other kinds of quantities also that will not be tensors, but in particular, we'll be interested in tensor fields. A field is a thing which can vary from point to point, and the simplest kind of tensor is a scalar. A scalar, S of X, is a quantity at each point of space where everybody in all coordinates agrees about its value. So, the transformation properties, in going, let's say, from the X coordinates to the Y coordinates, XM to YM, in changing coordinates, the value of a scalar field at the actual point of space, or space-time, but we will get to space-time, the value of that scalar field does not change. So, we can write that as S prime of Y equals S of X. The prime here simply denotes the fact that I'm talking about a quantity in the coordinate system Y. Without a prime, it will refer to the coordinate system X. There are two coordinate systems. Here's the X coordinate system. These are the lines of constant X, or the surfaces of constant X, and then every point is labeled by a collection of X's. How many X's and Y's? Well, that depends on the dimension of the space. If the space is one dimensional, then one X, just one coordinate, will label where you are. If it's two dimensional, two coordinates, X1 and X2, or Y1 and Y2, if it's three dimensional, three coordinates, X1, X2, and X3, or Y1, Y2, and Y3. And whatever the X coordinates are, the Y coordinates are different. But we assume there's a correspondence, that if you know the value of the X's for a particular point, then the Y's, so let's write that, YM, where M runs from one to however many, is a known function, assumed to be a known function of the X's. This X stands now for the whole set of X's, X1, X2, X3, X4, however many, and we assume that we can invert the relationship, so that if we know the value of Y of a point, we can also know what its X is. So that's a coordinate transformation of some kind, and that coordinate transformation can be pretty complicated. We'll assume it's continuous, we'll assume we can differentiate when we need to differentiate, but nothing more special than that. Okay, so scalars transform trivially. If you know the value of S at a point, you know it no matter what coordinate system you use. Next, there are vectors. Vectors, and there are two kinds of vectors, we spoke about them last time, and I'm going to tell you now a little about the geometry, what it means. There are contravariant vectors which have an index upstairs, and there are covariant vectors. I shouldn't really use the same symbol because they're not the same thing at the moment, but let's just use the same symbol, V, V for vector, with an index downstairs.
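In LaTeX form, the setup just described can be written compactly (this is only a restatement of the blackboard formulas, with the dimension of the space written as n):

\[
y^m = y^m(x^1,\dots,x^n), \qquad x^m = x^m(y^1,\dots,y^n), \qquad m = 1,\dots,n,
\]

with both maps assumed smooth and invertible, and a scalar field satisfying

\[
S'(y) = S(x)
\]

at the same physical point. The flatness criterion the lecture is heading toward is that coordinates can be chosen in which the metric tensor is just the Kronecker delta, $g_{mn}(x) = \delta_{mn}$, everywhere.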
The rules, we'll write down rules for transformation in a moment, but let me first tell you a little bit about the difference between covariant and contravariant, intuitively what it means. Let's suppose for the moment we're located right at this point over here, we have some coordinates, these could be the X coordinates, and I'm going to draw them as straight lines for the moment because at the moment I'm not interested in the fact that the coordinates curve and vary their directions from place to place. I'm mostly interested in the fact that the coordinate axes may not be perpendicular. They may not be perpendicular, and what the implication of non-perpendicularity is for these coordinates. Furthermore, the distance, let's call this, we can call this X1, we can call this X2, they're not necessarily perpendicular relative to each other, they could be perpendicular relative to each other, but not necessarily, and the actual physical distance, let's say measured in meters, from X1 equals 0, here's X1 equals 1, here's X1 equals 2, here's X1 equals 3, and so forth, here's X2 equals 1, the actual physical distance, let's say measured in meters, or centimeters, or whatever the units are, between X equals 0 and X equals 1 is not necessarily one unit. They're neither perpendicular to each other, nor do these labels represent actual physical distance along the axes, they're just names of points. Here's the point X equals 0, and so forth and so on. Okay, now let's introduce some vectors, two vectors, not necessarily two, here I've drawn a two-dimensional space, there could be more axes, and for each axis I will introduce a vector, which is basically a vector extending one unit of coordinate space along the particular axis. I'm going to give this vector a name, it extends from this point to this point, X equals 0 to X equals 1, and I'm going to call it E1. The 1 here stands for this one, X1 and not X2 and not X3, in other words it's along the X1 axis, and there's another vector along the X2 axis, which I'll call E2, and these are vectors, these are vectors, and think of them as physical little arrows, and of course if there's a third coordinate, pointing out of the blackboard, there's an E for that and so forth and so on. In fact, we can label these E sub I; the E sub I are little vectors, and as I goes from 1 to 3 or 1 to 4 or whatever it is, these run over the various directions. Now, next issue, take an arbitrary vector, take an arbitrary vector, let's call it V, an arbitrary vector, for example, one over here, V, can be expanded as a sum of terms, some coefficient times E1, plus other coefficients times E2, plus more coefficients times E3; in other words, it can be expanded as a sum of terms, the first one having E1, and the coefficient of E1 we'll call V1, plus V2 E2, plus V3 E3. The things which are vectors in this formula are the E's. The V's are actually numbers, but they are components of the vector. They're components of V, and they tell you how much of each one of these vectors is present in the sum, V1 E1, plus V2 E2, plus V3 E3, and so forth. These coefficients are called the contravariant components of the vector V. It's just a name. Now, there's nothing in what I did that required me to put the index upstairs on the coefficients and downstairs on the E's. It's a convention to write it in this form here. All right, so first of all, you see what the contravariant components are.
They're the expansion coefficients, the numbers that you have to put in front of these three vectors to expand a given vector. Next thing, let's define the projection of a given vector onto the x-axis and the y-axis. How do we do that? The definition is the dot product of the vector V with the vectors E1 or E2. Let's think about the next thing. It would be the dot product of the vectors of the vector V with just the ordinary dot product with, let's say, E1. Let's start with E1. Now, if we were just using conventional Cartesian coordinates, perpendicular to each other, and if these really were unit vectors, if the distance representing each coordinate separation here was one unit of whatever the units were dealing with, then these coefficients here would be the same as these dot products. The dot products and the coefficients would be the same. However, when we have a peculiar coordinate system with angles and with nonunit separations between the successive coordinate lines here, this is not true. Let's see if we can work out what V dot E1. Incidentally, V dot E1 is called V sub 1 with a covariant index. Notice how things fit together. We can also write this here as VMEM, summation convention. This means V1 E1 plus V2 E2 plus V3 E3. Notice I've concocted things so that an upper index always goes with a lower index. We're going to come back to that. It's called contraction of indices. But just notice that I've done it in such a way that the summation always involves an upper index with a lower index. So definition, this is the contravariant component of the vector V. The covariant components are the dot products of the vector with the basis vectors. We might as well call them by their name. These are called the basis vectors. And the covariant components are the dot products of these. So let's see what we get. Let's see if we can figure it out. Let's plug in for V, its expansion here, and then take its dot product with E. So what is this equal to? This is equal to VMEM from here. And now we're going to dot it into E1. But why fix E1? Let's just go for it right now. The general case, the dot product of V with the nth basis vector. And this will be E sub n. EM dot EN is something new. It's something new. Let's isolate it. It has two lower indices. And it's the metric tensor. It is the metric tensor. Let's go a little bit further to see what its connection with the metric tensor is. The length of a vector is the dot product of the vector with itself. Let's calculate the length of V. And we'll see various ways we can write it. It's V dot V. Now, V dot V, let's calculate V dot V. From here, we can write that the first V is VMEM. That's the first V. But then let's take its dot product of the second V. The second V is VNEN. I have to use a different index for this summation than for this summation. I mustn't mix up the summation indices. This is VMEM. This means V1E1 plus V2E2 plus V3E3 times V1E1 plus V2 and so forth. But now I can write this in the form VMVN times the quantity EM dot EN. Let's call this quantity EM dot EN. Let's call it GMN. And now we have a formula VMVN GMN. This is the character of the metric tensor. The metric tensor tells you how to compute the length of a vector. The vector could be just DX, DXM and DXN. And then we would just be computing the length of a little interval between two neighboring points. GMN. Okay, so I just put this all on the blackboard here to give you a little picture of the difference between covariant and contravariant indices. 
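Written in LaTeX, the relations just assembled on the blackboard are (summation convention, with indices running over the dimension of the space):

\[
V = V^m e_m, \qquad g_{mn} \equiv e_m \cdot e_n, \qquad V_n \equiv V \cdot e_n = g_{mn} V^m, \qquad |V|^2 = V \cdot V = g_{mn} V^m V^n .
\]

Here $V^m$ are the expansion coefficients and $V_n$ the projections onto the basis vectors; the metric is the object that converts one into the other and gives lengths.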
Contravariant indices are the things that you use to construct a vector out of the basis vectors. Covariant indices are the dot products with the basis vectors. They're different geometric things, but they would be the same if we were talking about ordinary Cartesian coordinates. Okay, so that's, I just inserted that discussion in order to give you some kind of geometric idea of what covariant and contravariant means. And also what the metric tensor is. The metric tensor is constructed out of these basis vectors by taking the dot products. But we'll come back to these things. Yeah, all right. This was just for you, you know. You asked me to do it. Yes. I assume this is what you asked me to do. It is totally what I asked you. So in orthogonal coordinates, the two sets of components ought to be the same, orthogonal and with unit separations between them. They ought to be the same. And that's also true in curvilinear coordinates where the unit vectors are always orthogonal? No. In curvilinear coordinates, typically, in general, the coordinate directions are not orthogonal to each other. The coordinate directions, you know, the direction along which one of the coordinates increases keeping the other ones fixed. That's what we're talking about. We're talking about the same thing. Right. But for some curvilinear coordinates, they are. Oh, yeah. And so in that case, the components would also be the same. Is that true? Not quite. Not quite, because it's also the question of the length between neighboring coordinate lines. Cartesian coordinates not only mean that they're orthogonal, but also mean that the separations are the same here, here, here. This is the same as this. So let me give you an example where the coordinate axes are orthogonal to each other, but where the separations are not the same. Polar coordinates. Ordinary polar coordinates. The lines along which the radial coordinate increases are just the radial lines. And the lines associated with the angular coordinate are just the circles, and clearly at every point, a circle is perpendicular to the radial line passing through the same point. So these coordinates are, I didn't draw them very well, but these coordinates are orthogonal to each other. Okay. But the separation distance between, let's call this theta equals zero over here, and let's call this theta equals theta one over here. And the separation between these two points is not the same as the separation between these two points. It's not the same as the separation between these two points. So these are not Cartesian coordinates. And according to the rough definition I gave of these E's, the E's along the theta direction would be increasing their length as you went up here. So it's more than just perpendicularity; perpendicularity, and equality of the distance unit as you go a fixed separation in X, more than that goes into defining Cartesian coordinates. Okay. So now let's come to tensor analysis. Yes. You used two different phrases. Sometimes you talk about covariant and contravariant vectors, and sometimes about components. Can we talk about covariant and contravariant vectors? Oh, yeah. Before the metric tensor was introduced, when we just have abstract coordinates and vectors, there are covariant vectors and contravariant vectors. And they're different things. dx would be a contravariant vector and d phi by dx, where phi would be some scalar, that would be a covariant vector. Different kinds of things.
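Returning to the polar-coordinate example for a moment, a quick check in formulas, using the standard relation between polar and Cartesian coordinates (a small worked illustration rather than anything written on the board): with $x = r\cos\theta$, $y = r\sin\theta$,

\[
dx^2 + dy^2 = dr^2 + r^2\,d\theta^2, \qquad\text{so}\qquad g_{rr}=1,\quad g_{\theta\theta}=r^2,\quad g_{r\theta}=0 .
\]

The coordinates are orthogonal, since $g_{r\theta}=0$, but a unit of $\theta$ does not correspond to a fixed physical length, since $g_{\theta\theta}=r^2$ grows with $r$; that is exactly the sense in which polar coordinates fail to be Cartesian. And for a vector in these coordinates, $V_\theta = g_{\theta\theta}V^\theta = r^2 V^\theta$, so the covariant and contravariant components differ even though the axes are perpendicular.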
As we'll see, once the metric tensor is introduced, and I've introduced it here by introducing these unit vectors here, or these vectors here, once the metric tensor is introduced, it's possible to make a correspondence between the covariant vectors and the contravariant ones, in such a way that every vector can be represented either by covariant or by contravariant components. But we'll come to that. We'll come to it. At the moment, I should be only talking about covariant and contravariant components of a vector. In this language here, we had one vector, and we had its contravariant components and its covariant components. So think about components. Good. But I may sometimes lapse into calling them contravariant vectors, by which I mean the contravariant components of a vector. Yeah. OK. Now, tensors are objects which are characterized, the thing which characterizes a tensor and makes it a tensor, is the way that it transforms under coordinate transformations. We talked about this a little bit, but in fact, I want to do it again. v is some vector. It has contravariant components, and it has contravariant components in the y-coordinates and in the x-coordinates. If I change coordinates on here, keeping the vector fixed, not going to change the vector, but change the coordinates, I will change its components, clearly. So how do the contravariant components change when I change coordinates? Here's the rule. This is the mth component of v prime, where v prime means the components of v in the y-coordinates. Prime means y. Unprimed means x. All right. v in the prime coordinates, I'm basically reviewing what we said last time, is given in terms of the unprimed coordinates by dym by dxn, vn. This is the object in the unprimed coordinates, and to go to the y-coordinates, it's gotten by multiplying by dy by dx times v. An example is dy, dym. That would be the primed version of a little interval, and that would obviously equal dym by dxn times dxn. So this is sort of the archetype of a contravariant vector component, and you can see that it, in fact, transforms in exactly this way. The covariant components, a covariant component of a vector, for example, if you have a scalar, s, and you differentiate it with respect to y, or x, let's say y, the mth one, this becomes the primed component of a covariant vector, or the primed covariant component of the gradient vector. This is the gradient of s, and I've differentiated it with respect to y, not x. So this is a primed component, and this is equal to ds, well, let's write it this way, ds by dxn, dxn by dym. Notice the difference, and notice how the notation carries you along. On the left-hand side, I have a y with a superindex, upstairs index m. On the right-hand side, I also have a y with a superindex m, but on the right-hand side, I'm going to have a dxn. Of course, I'm going to have a dxn. A dxn upstairs has to be balanced by a dxn downstairs, and you'll get familiar with this. You're allowed to contract indices one upstairs and one downstairs, and you can see the pattern. You can see the symmetry of this relationship here. Likewise here, here we have a lower index corresponding to y, the index m. On the right-hand side, I have something below the fraction bar also, dym, and then above the fraction bar, I have dxn, but that's compensated for by another dxn downstairs here, appearing in the ds by dxn. So the notation pretty much carries you along. This is the standard form for the transformation property of contravariant components.
This is the standard form for the transformation properties of covariant components. And here's an example of a contravariant one. Yeah. From these two examples, it looks like if we were to just consider dimensionally what the contravariant or the covariant represent, it would seem like the contravariant vector is one where the units are distance, while the covariant are one over distance. Is this true in general? Not in general, but I take your point, and I think, no, it's not true in general. It could, for example, be a velocity, in which case it would be a length per unit time, or it could be a force, components of a force, in which case it wouldn't be those kind of… But units have to match. So in here we have, for example, this could be a force on the right, a force on the left, and a dy by dx, which is dimensionless; the y and the x are perhaps both measured, let's say, in meters. Should they all be partial derivatives? They're all partial derivatives. Why are they partial? They're partial because there are several coordinates here. There's x1, x2, x3, and x4, and this is the derivative of one of the y's with respect to the x's keeping all the other x's fixed. Right. All right, so this is the transformation property of a covariant object, and the corresponding thing for a covariant vector in general would be the analog of this, w'm… oh, sorry, w'm… I'm just using w and v just to give different letters for things. This is equal to dxn by dym, wn. Same pattern as over here. Again, dx by dy and so forth. Yeah. When you look at this, you look at this and you say, oh, there's a lower index here of type y. Why? Because prime means y. All right, so prime is associated with y. There's a lower index here, m, which is of the y type, and so there's got to be a lower index below the fraction bar of type m on this side, and the equation balances, the equation balances, and you get a feel for these things in a while. Good. All right, now what about a tensor of higher rank? A tensor of higher rank simply means a tensor with more indices, and the simplest example of tensors with more indices are just products of vectors. So let's take an object. W, not this w now. Let's see. Yeah, let's take a tensor w. Maybe I should call it T. Let's call it w. W prime, in other words, in the y coordinates, it has two indices m and n, and I'm going to take one of those indices to be an upstairs index and another to be a downstairs index, contravariant and covariant. A tensor like this could just be the product of two vectors, one with a contravariant index, one with a covariant index. I won't bother writing it out, but what makes this thing a tensor is its transformation properties. So let me show you what the transformation properties are. And the prime components, again, for each index, for each index, there's the same kind of pattern. This is a primed component. This is the index m, so there must be a dym upstairs. There's going to be a dxp downstairs. Now, what about this index here? This is a y index because it's primed, downstairs. So there must be a dy downstairs, therefore there's got to be a dx, and let's call it q, upstairs. And now when we multiply that, we multiply that by w, the tensor in the unprimed frame. There's a p downstairs, there's got to be a p upstairs here. There's a q upstairs, there's got to be a q downstairs. So this tells me how a tensor of rank two, with one contravariant and one covariant index, transforms. For each index, there's a dy by dx or dx by dy, and you simply track where the indices go.
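For reference, the transformation rules written out so far, in LaTeX (prime meaning the y coordinates throughout, with a sum over every repeated upper and lower pair):

\[
V'^{\,m} = \frac{\partial y^m}{\partial x^n}\,V^n, \qquad
W'_m = \frac{\partial x^n}{\partial y^m}\,W_n, \qquad
W'^{\,m}{}_{n} = \frac{\partial y^m}{\partial x^p}\,\frac{\partial x^q}{\partial y^n}\,W^{p}{}_{q} .
\]

The pattern extends index by index to any number of upper and lower indices.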
m upstairs, m upstairs, n downstairs, n downstairs. We're going to have a w here, unprimed w. It's going to have some components, let's call them p and q, and the p's and q's have to balance. This is very general. If I take a tensor with any number of indices, oh incidentally, let me do one other example. Supposing we had a tensor with two contravariant indices. Oh, let's make it two covariant indices, mn. Just as another example, a tensor with two covariant indices, how does it transform? Again, there are downstairs y's, ym, another downstairs yn. Again, m and n go with y. What do I put upstairs? Well, the only thing I put upstairs is x. Call this one p, call this one q, and now this is wpq. So this is the transformation property of a thing with two indices purely covariant now. And the general pattern is the same. For every index, you do the same thing, either dy by dx or dx by dy, and you sum over repeated index indices. m and n are not repeated indices. They're on this side, and they also appear on this side, but p and p are repeated, and q and q are repeated. So this is a double sum. This is a double sum over p and q, because p and q are repeated. All right, this is the basic notational device. Who invented it? Einstein was the one who dropped the summation symbol because he realized he didn't need it. I don't know. Riemann Gauss was in there. I don't know who invented all of this notation, but it's very systematic. All right, so a tensor is an object which is characterized by its transformation properties. Now, notice something about tensors. If they are zero in one frame, let's start with a scalar. If a scalar is zero in one frame, it's zero in every frame. They're equal. Now, supposing a vector is zero in some frame, let's say the x-frame, for a vector to be zero, it doesn't mean some component is equal to zero. It means all of its components are equal to zero. A vector is only zero if all of its components are zero. If all of the components of the contravariant vector vn are zero, then obviously the transformation property is such that all the components of the prime vector are zero. Likewise, with any tensor, if all of its components are zero in one frame and one coordinate system, then all of its components are zero in every frame. That means that once you've written down an equation in tensor form, you can always, of course, transfer everything to the left side of the equation and set it equal to zero. If an equation of that type is true in some frame, it's true in every frame, and that's the basic value of tensors. They allow you to express equations of various kinds, equations of motion, equations of whatever it happens to be in a form where the same exact equation will be true in any coordinate system. That's, of course, a deep advantage to thinking about tensors. There are other objects which are not tensors, which have the property that they may be zero in some frames and not zero in other frames. A frame means coordinate system now. Tensors have a certain invariance to them. Their components are not invariant. Their components change from one frame to another, but the statement that a tensor is equal to another tensor, the tensor WPQ is equal to, let's say, TPQ. Oh, incidentally, when you write a tensor equation, the components have to balance. It doesn't make sense to write an equation like WPQ equals TPQ. Oh, you could write it. You can write it all you want. 
But since this left side transforms differently than the right side, then even if this were true in one coordinate system, it would not be true in another coordinate system. The transformation properties of the two sides are different. All right, so normally we wouldn't write equations like this. We might say in some particular coordinate system, coordinate system of the blah, blah, blah kind, this might be true. But then if you change coordinates, it won't be true. The kinds of equations which are true in every frame are ones in which the indices balance. They transform the same way. So it's true in one, yeah. What is it that's invariant? The truth of the equation. If this is true in one coordinate system, then it will be true in every coordinate system. So for a vector, at least the contrary, its magnitude will be the same. Oh, its magnitude will be the same. But it's the... The individual components in different coordinate systems... Look, we can always write this as W minus T. W minus T equals zero, right? If every component of this W minus T object is zero, then it's true in every reference frame. So is there anything analogous to the magnitude of the vector? No, it's not the magnitude of the vector. It is the vector itself. It's one thing... It's true that if the magnitude of a vector is zero, in ordinary geometry, the vector itself is zero. That will not be true in relativity. It does not follow from the magnitude of a vector being zero that the vector is equal to zero. The magnitude of a vector and the vector itself are two different quantities. The magnitude of a vector is a scalar. The vector itself is a complex thing that points in a direction. To say that two vectors are equal means that the directions are the same and their magnitudes are the same. And a tensor of higher rank is a more complicated object which points in several directions. It's got some aspect to it that points in one direction and another there. We're going to come to what their geometry is like soon enough. But for the moment, it's defined by the transformation properties. Okay. As I said, the importance of tensors is that when a tensor equation is true in one frame, it's true in every frame. Next, operations on tensors. Things you can do to tensors that make new tensors. We're not at this point interested in things that you can do to tensors which make other kinds of objects which are not tensors. We're interested in the things we can do to a tensor, operations we can do to them, which will make new tensors. Okay. And in that way, we can make a collection of things out of which we can build equations, the equations being the same in every reference frame. Okay. So let's write down some set of operations and then I'll go through what they are and how you do them. They're very simple. Well, most of them are simple. The last one is not simple. Well, first of all, you can multiply a tensor by a number. I didn't even write this down. You can take any tensor and multiply it by a numerical number. It's still a tensor. I'm not even going to bother with that one. Okay. One, addition of tensors. That's the first operation we'll talk about. Two, an addition of course includes also subtraction. If you multiply a tensor by a negative number and then add it, it's subtraction. Multiplication of tensors. Mixed tensors. Three, contraction. And I'll tell you what that means. You may or may not know the word at this point, but we will know the word soon. Contraction. 
And four, differentiation of tensors, but not ordinary differentiation, covariant differentiation, and we will define that, I think, tonight. Covariant differentiation of tensors. I think those are the four basic processes that you can do on a tensor to make new tensors. Differentiation with respect to what? Well, differentiation with respect to position. These tensors might be things which vary from place to place. They live at a point, they have a value at a point, at the next point they have a different value, at the next point they have a different value. And learning to differentiate them is going to be fun and hard. Not very hard, a little hard. Okay, adding tensors. You only add tensors if their indices match and are of the same kind. For example, if you have a tensor T, with an index M and a bunch of more upstairs indices, contravariant indices, and maybe a collection of downstairs indices, and you have another tensor of the same kind, plus S, with an index M, and S does not stand for scalar here, dot dot dot, dot dot dot, down to P. In other words, their indices are of exactly the same kind. This might be M, N, R, and so forth. This could be P, Q, whatever. If the indices match, then you are permitted to add them and construct a new tensor, which I'll just call T plus S, with indices M dot dot dot down to P. In other words, the thing which is just the sum of the components defines a new tensor, which is the sum of the two tensors. It's obvious that this transforms the right way. If T transforms by multiplying it by a bunch of dx by dy's and dy by dx's, and S transforms the same way, then if you just factor out the transformation coefficients, the dx by dy's and the dy by dx's, then you can see easily that the sum is also a tensor, transforms as a tensor. You can do the same thing with minus, incidentally. No difference. Also, T minus S is a tensor. And this is the basis for saying that tensor equations are the same in every reference frame. T minus S equals zero is a tensor equation. It's the equation that the tensor T minus S is equal to zero. Okay, next, multiplication of tensors. Now, unlike addition, multiplication of tensors can be done with tensors of any rank. Rank means the number of indices, independently of whether those indices are upstairs or downstairs, and here's the way it works. Let me start with an example, multiplying two vectors. Supposing we have a vector with a contravariant index. Now, we multiply it by some other vector, to make life a little more complicated. Let's multiply it by a vector with a covariant index. This is a tensor. It's a tensor with one upstairs index M and one downstairs index N. You have to remember which one is which. Do not cross them: if this is M, this one is M, if this one is N, that one's N, and so forth. This is a tensor with one covariant index and one contravariant index. But the set of values, one value of this thing for each M and N, defines the components of a tensor with two indices. You could have done the same thing with some other vector. Let's continue to call it W, but one with another upstairs index. This would have been some other tensor with one upstairs and another upstairs index. I use upstairs and downstairs because I constantly have to remind myself which one is covariant and which one is contravariant. Upstairs and downstairs are easier to think of, but they're the same thing. You can put a product sign in here if you like, just to keep track of the fact that you're multiplying. What's that? This is not dotting the vectors.
We're going to talk about that in a moment. This is making a tensor of different rank, higher rank just by juxtaposing these together. How many components does this object have? Well, let's work in four, we're going to be interested in four dimensions in a little space times. Let's say the number of indices is four, the number of values that M can take on is three dimensions, it would be three, but in four dimensions this would have sixteen independent components. Sixteen independent components, four for this and four for that. This would be a sixteen component object. It is not the dot product. The dot product only has one component, it's a number. Sometimes it's called the outer product, but it's just a tensor product, just a tensor product of two tensors. And it makes another tensor. Typically, a tensor of different rank than either one of them. The only way you can make a tensor of the same rank is for one of the factors to be a scalar. A scalar is a tensor and you can always multiply a tensor by a scalar, take any scalar, multiply it by Vm, and it's a tensor with one index upstairs, in other words. Multiplying any tensor by a scalar gives back a tensor of the same kind, not the same tensor, it's multiplied by s, but of the same type, of the same generic type, same number of indices in the same place. That's the only situation where when you multiply a tensor by some other tensor, you get back a tensor of the same type. Generally, you get back a tensor of higher rank with more indices, obviously. Where these tensors will come in, we'll find out where they come in soon enough, but so far this is just a notational device. So far, think of it as a notational device. Everybody happy with it? It's really easy. It's very hard to make mistakes. Are there any rules which are kind of like switching sides and Vavw is equal to wv? Yeah. Well, OK, let's be careful. Yeah, let's take this one with two upstairs. Vmwn is equal to wnvm, but it is not equal to vnwm. Vm times wn is the same as vm times wn. It doesn't matter which is, you know, 7 times 3 is 3 times 7. We're not doing quantum mechanics. Every multiplication of numbers and components are numbers. Components are numerical values. So on the other hand, when you write this as a tensor, you must remember that the first index refers to the v and the second index refers to the n. You must remember your convention that the first index here was associated with v and the second index here was associated with w. But the transformation properties are just the transformation properties of a thing with two contravariant indices. Yeah, there's nothing abnormal about multiplying components of vectors. They are just numbers. And when you multiply two numbers, you can multiply them in any order you like. Same with adding them. OK, good. Incidentally, how do we prove that a thing like this is a tensor? Well, we just write down the transformation property of v and the transformation property of w, and that tells us what the transformation property of v times w is. And it's more or less manifest. I already gave some examples of it that products like this continue to be tensors, tensors with the appropriate index structure. So that's good. We have addition. We have multiplication. And now we have contraction. OK, contraction is also very easy, an easy algebraic process. But in order to prove that contraction leads to tensors, we need a little tiny little minor theorem. No mathematician would call this theorem. They would call it a maybe at most a lemma. 
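Before that lemma, it may help to have the two operations defined so far written compactly in LaTeX (just a restatement of what was said, with the dots standing for any further indices of matching type):

\[
(T+S)^{m\cdots}{}_{\cdots p} = T^{m\cdots}{}_{\cdots p} + S^{m\cdots}{}_{\cdots p},
\]

which is only allowed when the index structures match, and

\[
T^{m}{}_{n} = V^{m}\,W_{n},
\]

the outer or tensor product, which in four dimensions has 4 times 4, that is 16, components, one for each pair m, n.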
OK, here's what the theorem says, or the lemma. Consider the object dxb by dym. I've mostly used m's and n's and p's and q's for the indices; I'm going to start using a's and b's. There just aren't enough letters in the alphabet to take them all from the same range in the alphabet. Take the dxb by dym. Multiply that by dym by dxa. Implicit in this formula is a sum over m. Implicitly, this means sum over m. What is this object? Anybody know what this object is? This is the change in xb when you change ym a little bit, times the change in ym when you change xa a little bit, summed over m. You change y1 a little bit, then you change y2 a little bit, and so on; what does that do to xb? What is this thing? I didn't hear what you said, but it's probably right. Let me write down a slightly more general formula. Let's take df of x, or df. df, where f is both a function of x and y. I mean, it's a function of x, but because x depends on y, it's also a function of y. df by dym, dym by dxa. You know what this is? The change in f when you change y a little bit, times the change in y when you change x a little bit. What does that give you? It gives you df by dx with an index. It gives you df by dxa. This is the change in f when you change xa a little bit, and it's been calculated by a sum of steps where first you change y1 a little bit in response to a change in xa, then you change y2 a little bit in response to a change in xa, and that gives you df by dxa. What if f happens to be xb? Then, carrying out the formula, it tells me that it's the derivative of xb with respect to xa. That looks like a stupid thing. What does that mean? What's the derivative of xb with respect to xa? What's the derivative of x1 with respect to x1? No, derivative of x1 with respect to x1. 1. What's the derivative of x1 with respect to x2? 0. So this thing here is just the Kronecker delta ba. Is it only true for particular coordinates? No. No. No, it's true for any set of coordinates. Nope, true for any coordinates. Notice it has one index upstairs and one index downstairs. Delta ba, we're going to find out that delta ba by itself happens to also be a tensor. That sounds a little weird. It's just a set of numbers, but it is a tensor with one upper and one lower index. We'll probably come around to it eventually. OK, that's the little lemma that we need in order to understand index contraction. So let's do an example of index contraction and then define it more generally. Darn it, I have my lemma on the blackboard that I need. OK, let's do it over here. All right, let's, as an example, let's take a tensor which is composed out of v and w. v has a contravariant index m and w has a covariant index n. This is the tensor tmn. And now what contraction means is to take any upper index and any lower index, combine them together, identify them, and sum over them. In other words, take vmwm. This means v1w1 plus v2w2 plus v3w3 and so forth. You've identified an upper index with a lower index. You're not allowed to do this with two upper indices. You're not allowed to do it with two lower indices. But you can take an upper index and a lower index. And let's ask how it transforms. All right, so first let's write down the transformation properties before we set the indices equal to each other: vmwn prime. The primed version of it is dym by dxA, dxB by dyn, times, now let's see, vA wB. Let's check and see if this equation makes sense.
The transformation properties, as always, have for each index in the tensor one of these dy by dx's or dx by dy's. On the left-hand side, we have y's. This is the y components of something. And here we have an upstairs index m, so we have a dym by dxA. Here the y index is a lower index, so this one is lower, dxB by dyn. And then we multiply by the same tensor, except the tensor in the unprimed components, ab, the ab components. This is the transformation property of the rank 2 tensor with one index upstairs and one index downstairs, the primed version of it and the unprimed version of it. Now let's set m equal to n and contract the indices. Contract means identify an upper and lower index and sum over them. So we now have vmwm. How many indices does this object have? What kind of object is this? It's a scalar. It's a scalar that has no indices. There's no indices. Well, what about these indices? No, no, these indices are summed over. These are summed over. This is not the one component of something or the two component. It's like a dot product. The one component, the two component, the three component all added together, the result has no components. They're summed over. All right, let's see what it is. All we have to do, the primed version of it is equal to, now all we have to do is identify m and n: dym by dxA, dxB by dym, now that m and n have been identified, times vA wB. And now here's where our little lemma comes in handy. dxB by dym, that's the thing over here. I've written them down in opposite order. Here is dxB by dym. Here's dxB by dym. Here's dym by dxA. Here's dym by dxA. And so I have exactly this combination appearing, and that's just deltaAB. So this monstrosity over here is just deltaAB. And deltaAB is simply the instruction to set A equal to B. That's what it is, a little machine which says set A equal to B, and the result is that this thing is equal to vAwA. This says set A equal to B. Okay, I set A equal to B, or set B equal to A, sorry, vAwA. Set B equal to A. And look what I have on the left. On the left I have the contraction of the upper index with the lower index. It doesn't matter that I call it A, that's not important. I have the contraction of the upper index with the lower index in the unprimed frame, and that's equal to the corresponding quantity in the primed frame. I could call this M if I like. It's a summation index, it doesn't care what I call it. Okay, what does it say? This says that the object that I've made has no components, and it has the same value in every frame. What would you call it? You call it a tensor. You call it a scalar. Excuse me, you call it a scalar. So by contracting two indices, I make another tensor, in this case, a scalar. It's easy to prove, and you can do this yourself, that if you take any tensor with a bunch of indices, M, N, R upstairs and P, Q, S downstairs, like that, any number of them, take one index from the upper indices, and an index from the lower indices, set them equal to each other, then I would have T, with indices N, M, R, and P, R, S. How many indices does this object have over here? One, two, three, four, five, six. Is it really a six-indexed object? No. Because I've summed, this is, R is not really an index anymore, it's been summed over. So this is a four-indexed object. I've taken a six-indexed object, in other words, a tensor of rank six, and by contracting an upper with a lower index, I have lowered the rank of the tensor by two. Yeah? Why does that have to be an upper with a lower index? Why can't it be an upper with an upper index?
Okay, what you would find, good. So let's do that. Let's start with this expression over here. Now, these both have upper indices. So this has to be dyn by dxb, and then we would have, alright, do you recognize that? This has two upper indices, this has two upper indices. Here's the prime component, and in both cases we have dy by dx, dy by dx, okay? Now, supposing I set m equal to n and sum. This is an illegitimate notation, but let's do it anyway. This object is not this object. This has a dx by dy times a dy by dx. That's the thing which is delta ab. This has a dy by dx times a dy by dx. This is nothing in particular. It's certainly not just the Kronecker delta ab. So, since this is not the Kronecker delta ab, this does not become vAwA. So the transformation property of this thing over here with two upper indices contracted like that is just some bastardized thing with no particular... No, no, no. The inner product of two vectors is one upper and one lower. This is really the generalization of the inner product, the contraction of the inner product. Yeah, the contraction is the generalization of the inner product of two vectors. Yes, that's exactly right. One has to be contravariant and the other covariant. We're going to be doing it in a moment. As soon as we introduce the metric tensor, we'll talk about the inner product of vectors. And this also requires each index having the same range of values, otherwise it's... Yeah, yeah, sure. It's some space of a given dimensionality, so the range of values of the index runs over the dimensionality of the space. Can I ask a question about something that came up a while ago? When you mentioned that if you... What you said about making the components all zero and pushing everything to one side of the equation, having that be zero... When you can do that, therefore you can write the same equation in any frame. Zero is the same as zero in a different frame, yeah. So it seems to me, and I don't know if this is right, that what that really seems to determine is that these transformations are linear. It does? Absolutely. In other words, by linear, you mean that... Yes, it depends on the fact that they're linear. Yeah. Yes. Yes, indeed. That is correct. Okay, we've defined... Now let's come to the metric tensor. The metric tensor plays a big role. Here I've illustrated it by a particular construction, the way that this Em dot En is the metric tensor. But let's define it on its own terms, abstractly. Again, things we've done before, but let's do them again. Okay, the definition of the metric tensor is, if we have a differential length element, dxm, which just represents the components of a displacement vector. Let's call it a displacement vector. Go to a point in the space called x, x1 to xn, and now displace it and call that little vector dx. It has components, contravariant components dxm; the contravariant components, if you want to remember what they mean geometrically, what they mean is these expansion coefficients in terms of some basis vectors. But it's easier once you get the hang of it just to proceed with the notation. dxm is the contravariant components of a little displacement vector. And now we ask, what is the length of that vector? Well, I haven't told you enough to tell you what the length of the vector is. This could be some arbitrarily shaped complicated space. And specifying what the space is is in effect specifying what all the lengths of the little elements are. So we take the dxm and let's take the squared length.
It's always easier to start with a squared length, Pythagoras's theorem, but Pythagoras's theorem becomes the more, and in coordinates which are not orthogonal, Pythagoras's theorem takes a more complicated form. It's still quadratic in the dxs, but it involves a dxm and a dxn, and a quantity gmn. gmn. In general, this gmn will depend on where you are. So it depends on x. In particular, if you have a complicated curved space of some sort with curved coordinates and you pick some little dx over here, then the length of it will not only depend on the dxs, but it will also depend on where you are in the space. And so this is basically the most general thing you can write down which is quadratic. I'm going to stick with a case of four dimensions just because I have it firmly planted in my head four dimensions and we'll keep with that. But how many independent components are there of this gmn thing? You know? Ten. See why? Well, to begin with, there are sixteen. There are four x's. You can multiply any one by any other one, and so you start with sixteen. But the x1 times dx2 is exactly the same as dx2 times dx1. So there's no point in having a separate g12 and g21 set them equal to each other. And then if you count, you'll count that there are ten independent ones. Oh, for three space, it's six. Yeah, for three space, for three-dimensional space, it's six. Right. Well, here's where you count. x1, x2, x3, x4, x1, x2, x3, x4. Make a sort of matrix. Sixteen entries. One, two, three, four, five, six, seven, eight, nine, ten, eleven, twelve, thirteen, fourteen, fifteen, sixteen. Okay. But the one... Here's a... Let's see, where are we? Here's the one-one element, one-two element, and so forth. You might as well take the one-two element and the two-one element to be the same. They both multiply dx1 times dx2. All right? So we have an element for here, and let's just cross it. An element here, an element here, an element here, and an element here. That's dx1 squared. Here's dx2 squared. Here's dx3 squared. dx4 squared. We have the one-two element, the one-three and the one-four, the two-three, the two-four, and the three-four. But we don't need to count separately, the one-two and the two-one, so we don't have to put anything, any new ones down here. Okay? How many are there? One, two, three, four, five, six, seven, eight, nine, ten. Ten independent components of GMN. As you say, in three dimensions it would be six. How about in two dimensions? Three. Three. The two diagonal elements and one off diagonal, which has to... Which you can take to be the same as the other off diagonal. Okay, so ten independent elements to the metric tensor. So far we haven't proved it's a tensor. I call it a metric tensor, but let's now prove that it's a tensor. GMN. Now, the basic guiding principle is that the length of the vector is a scalar. We have a little vector somewhere, everybody agrees on the length of it, although they don't agree on its components. That's the underlying principle about a metric space that the length of a vector is a scalar, that everybody agrees about the length of the vector. Okay? So, let's go from X coordinates. What happened to them? I thought I... Didn't I move this? Oh, well, nope. I did move it, but it seems to me I've seen that before. All right, so the length of the little dx vector, here it is, all right, again, the squared length, the squared length. It's GMN of X dxM dxN. But now let's go to the Y coordinates, the prime coordinates. 
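In symbols, what has been set up at this point is the line element (just the formula from the board, written in LaTeX):

\[
ds^2 = g_{mn}(x)\,dx^m\,dx^n ,
\]

with $g_{mn} = g_{nm}$, which is where the count of independent components comes from: in four dimensions, 4 diagonal entries plus 6 independent off-diagonal entries, ten in all. In the y coordinates the same squared length gets re-expanded, which is the step being taken next.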
This must be equal to G, let's call it, let's call it, I don't know, call it GPQ, DYP, DYQ. There's something wrong, this isn't quite right. That's because this should be the primed components of the tensor, the tensor in the primed coordinates, in the Y coordinates. Okay, let's rewrite this by writing that dxM is equal to the partial of xM with respect to yP, times DYP. The little differential element of X separation is the derivative of X with respect to Y times the little change in Y. And let's do the same thing for dxN. dxN is equal to the partial of xN with respect to YQ, times DYQ. And now plug these two expressions into here. Okay, so what will we get? We'll get that this is equal to GMN of X, partial of xM with respect to yP, partial of xN with respect to YQ, DYP, DYQ. So look at this side over here, and look at this side over here. This has DYP, DYQ, this has DYP, DYQ. This object over here must be G prime. They play the same role. This is the set of objects, the PQ objects, which you multiply by DYP, DYQ, to find the length of the vector. And so we found the transformation property. The metric tensor in the primed frame is given by the metric tensor in the unprimed frame times our good old friends dx by dy times dx by dy. This is just exactly the transformation property of a tensor with two covariant indices. So we discover that the metric tensor is indeed really a tensor. That's the first fact about the metric tensor. It really is a tensor. It transforms as a tensor. And it has many applications. There are many questions about the metric that we're going to say more about in a minute. Okay, let me go on. It's getting a little bit late. The metric tensor has two lower indices, and that's because it multiplies these differential displacements which have upper indices. Now, the metric tensor is also just a matrix. It's a matrix with MN indices. This is just a matrix, GMN. I'll write it in a three-dimensional space: G11, G22, G33 down the diagonal, and then G12, G13, G23 off the diagonal. And furthermore, it's a symmetric matrix: G12 and G12, G13 and G13, and G23 over here. Yeah, you can call this one G31, but it might as well be taken to be equal to G13 because dx1 times dx3 is the same as dx3 times dx1. So you can just take it to be a symmetric matrix. Now, what do we know about symmetric matrices? There's one more fact about this matrix. G, as a matrix, has eigenvalues. The eigenvalues are never zero, and the reason the eigenvalues are never zero is because a zero eigenvalue would correspond to a little vector of zero length. And there are no vectors of zero length here; you know, every direction has a length associated with it. What do you know about matrices which are symmetric and don't have zero eigenvalues? Do you know anything? One, they are Hermitian, but they're also invertible. In other words, they have inverses. The metric tensor has an inverse. That means that there's a matrix which, when you multiply it by the metric tensor, by the metric matrix, gives you the unit matrix. I'm going to write the equation down and then we'll see what it means. The inverse matrix, the inverse matrix to the metric tensor, is also called G, but you put the indices upstairs. GMN upstairs. GMN with indices upstairs is also a tensor, and it's defined by the property, by the defining property, that as a matrix, it's the inverse of the matrix with two lower indices. Now, how do you write that? This is the last thing we're going to do tonight. How do you write that fact, if we have two matrices?
Let's just take two matrices, A and B. One is called AMN and the other is called BPQ. How do we multiply two matrices together? We multiply two matrices together by identifying an index in and summing over the index. That gives us the product of the matrix, the MQ product. I'm going to write the equation and I think you'll recognize it, but if not, we're going to come back to it. GMN times GN, let's call it P, summed over N, contracted. This is a legitimate expression here if GNP is truly a tensor. If GNP is truly a tensor with the upstairs index, then this is a legitimate product of two tensors with the index N contracted. Can you guess what the answer is for this product? Equals delta MP. What is delta MP? That's a matrix, thought of as a matrix. It's the unit matrix, the identity matrix. It's the identity matrix. What this equation says is the product of the matrix GMN with the matrix GNP with upper indices is the unit matrix. It identifies this object over here as the inverse matrix to this. This is called the metric tensor with contravariant indices, two contravariant indices. This is the metric tensor with two covariant indices. This is the metric tensor with one contravariant and one covariant index. But in any case, the definition of this matrix over here is just the inverse. And the inverse is such that when you multiply it with the original matrix, you get the identity matrix back. This will play an important role, the fact that there is a metric tensor with upstairs indices and downstairs indices. And we'll come to it. I think we'll quit for tonight. I think we've done enough for tonight. We'll talk just a little bit more about the metric tensor next time. And then we'll go on to the subject of curvature. The subject of parallel transport, curvature, differentiation of tensors. So far, everything I've told you is easy. It's just getting the notation and following the notation. The idea of a covariant derivative is a little more complicated, not much. But it's essential. We have to know how to differentiate things in a space if we're going to do anything useful. In particular, if we're going to study whether the space is flat, we have to know how things vary from point to point. The question of whether a space is flat or not has to do, fundamentally has to do with derivatives of the metric tensor and the character and nature of the derivatives of the metric tensor. So the next time, we'll talk a little bit about tensor calculus, differentiation of tensors, and especially the notion of curvature. I hope we get the curvature. Yeah. I'm going to confuse about what space these things are. You've got the space you drew on the board originally. And you've got the space with coordinates that we've been transforming about. And at each point there, there can be a value, a scalar, a vector, a three-dimensional vector, something. Those things don't live in that space today. Well, they're functions of position in the space. But then they have to live in their own space. But remember that the space that they live in has the same dimensionality as the number of indices, one, two, three, is the dimensionality of the space and it's the same as the number of coordinates. That's my idea. It's not a really like that or I can construct a map from the points of the three-dimensional space to two dimensions. You could. You could. You could. But these are all right. All right. So what? All right. So let me give you another answer to tell you what space these things live in. 
Here is the curved or uncurved space that, well, maybe, that everything is a function of. At every point on that space, there is what's called a tangent space. The tangent space has exactly the same dimensionality as the space itself. But roughly speaking, you can think of it as a space of flat planes which are connected to every point. At every point, there is a tangent space. The tangent space has the same dimensionality as the space itself. And a mathematician would say that these tensors live in the tangent space. I don't know if that helps you or not. It does help. The tensors, vectors, and so forth live in the tangent space. They have components in the tangent space. And that's the mathematical way to describe the space that they live in. Question? Yeah. The delta. Lower. Can't hear. I'm sorry. Yeah. The equation, last equation you have there, could only be the identity matrix if m equals p. Does it still make sense if m is not equal to p? No, no, no, no. This is a symbol which is 0 if m does not equal p and 1 if m does equal p. The equation makes sense for every m and p. m and p. For example, it says that g1ngn2 is equal to 0. It says that g1ngn1, whoops, 1 equals 1. So there's an equation for each m and p. Some of them say that the right-hand side is 0 and some of them say that the right-hand side is 1. I understand that. I'm asking, you also said that it was the identity matrix and that's only true for that one case. Being n not equal to p also makes sense. It's just not identity matrix. No, no, no, no, no, no, no, no, no, no, no, no, no, no. This whole thing is the identity matrix. It's not a particular component of it is the identity matrix. This whole thing is the matrix. All right, so let's write it out. Let's write it out for 2 by 2, 2 by 2, 2-dimensional space. This would be g, let's, well, the right-hand side is the matrix 1, 0, 0, 1. Okay? Let's actually do it. Okay. Let's do it. This is going to be a pain in the neck, but let's write it out. We start with a matrix which is g11 with lower indices, g12, g, I'll call it 2, 1, but g21 is the same as g12 and g22. That's this matrix. Now we multiply it by another matrix whose indices are upstairs and this is g11, g12, g21, g22. Let's multiply it. If you don't need to do that, my point is that in that case m equals p. In what case m equals p? Put it down here, your example, m equals p. In half case, or m is not equal to p. Sorry. In your example. Which example? Your two matrices here. These two? Up, down, left, right. Here. Okay. m equals p. Nothing up to the range of p. I think the point is m and p are up. The point is m and p are all in three space or four space. They're in the same space. That's why the matrix is going to be symmetric. I'm not sure what you're worried about. I'm going to now write this equation in its full blown glory. 1, 0, 0, 1. No, no, no, no. Right. So what does it say? G11, G11, G11 plus G12, G21. This times this plus this times this is equal to 1. It says, I'm not going to write all of them. It says G11 times G12 plus G12 times G22 is equal to 0. There's four equations here. One for each entry. Four equations. The product of this times this is equal to 1. The product of this times this is equal to 0. And so forth. I'm not sure what else, I'm not sure what you're, what you're, which is in, everything's in n by n matrix. They're all n by n. m equals p. m equals p? That's because it's in three space or four space. They're, the dimensionality is the same, right? m and p are in the same dimension. 
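In symbols, the defining property and the two-by-two computation being written on the board are:

\[
g_{mn}\,g^{np} \;=\; \delta_m^{\ p},
\qquad\text{i.e.}\qquad
\begin{pmatrix} g_{11} & g_{12}\\ g_{12} & g_{22}\end{pmatrix}
\begin{pmatrix} g^{11} & g^{12}\\ g^{21} & g^{22}\end{pmatrix}
=
\begin{pmatrix}
g_{11}g^{11}+g_{12}g^{21} & g_{11}g^{12}+g_{12}g^{22}\\
g_{12}g^{11}+g_{22}g^{21} & g_{12}g^{12}+g_{22}g^{22}
\end{pmatrix}
=
\begin{pmatrix} 1 & 0\\ 0 & 1\end{pmatrix}.
\]

That is four equations in two dimensions, one for each entry; in n dimensions there are n squared of them, and together they say that the matrix with upper indices is the matrix inverse of the one with lower indices.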
They're in three or four or five or whatever. Let's say three. Let's say whatever. One, two, three, four, right? Yeah. So I think the point he's trying to make is just that it is a symmetric space. It's three by three. Is that what you're saying? That it's a three by three matrix? Or two by two or two by four. They have to be the same space. Yeah. They have to be because it's the same space. Yeah. Those are indices that go between the same values. Yeah. They go from one to three. m runs, in this case, from one to two. In a three-dimensional case, they would run from one to three. Each of these would be square matrices. n by n square matrices. Yes. Is that what you're saying? Okay. Yeah, that's true. They're all square matrices, all n by n matrices, right? But they correspond to four distinct equations. Well, n squared, some number of distinct equations. Some number of distinct equations. Each independent component. Yeah. Are they positive definite? Do they have to be positive definite? Do they have to be positive definite? Now what do you mean by positive definite? The eigenvalues. Okay, good. For a conventional space with a positive metric where all distances are positive, the answer is that it has to be positive definite. In the context that we will be coming to, relativity, they will not be positive definite. And why is that? Because if you remember from special relativity, the distance, the spacetime distance between two points is t squared minus x squared minus y squared minus z squared. Or, depending on convention, x squared plus y squared plus z squared minus t squared. So the answer in relativity, in general relativity, is that it's not positive definite. But if we were, like Riemann, talking about a conventional geometry in which all distances are positive, then the metric would be positive definite. Is that what you're asking? Yeah. Okay. So at this moment, we're just doing ordinary geometry, and we will come to Lorentz geometry. Lorentz geometry is nothing but throwing in an extra sign, the signs, but we'll come to that. It's worth learning first about differential geometry and tensor analysis in the more conventional situation. I don't understand why that can't be zero. Why would you say zero? Why in that context? Why? Why would you say that would be zero? None of the eigenvalues are zero. None of the eigenvalues are zero. That's what allows you to invert the matrix. Is it under the other stuff on the board there that's written underneath that one? Yeah. So we took four-dimensional spacetime, and then we got a matrix with ten independent components. Ten independent components. Right. So does that in some way inspire string theory, or does it have anything to do with that? Absolutely no connection. Ten is ten, but no connection whatever. Coming back to the question of invertibility, is the metric tensor being invertible a property of Riemannian geometry, but not of more general ones? Because looking at the Lorentzian geometry, it seems like a vector on a light cone has a length of zero. But it's not an eigenvector. It's not an eigenvector of the metric tensor. For example, in just two-dimensional spacetime the metric would be one, minus one, and that has eigenvalues one and minus one. No eigenvalues are zero. For more, please visit us at stanford.edu.
(October 1, 2012) Leonard Susskind introduces some of the building blocks of general relativity including proper notation and tensor analysis. This series is the fourth installment of a six-quarter series that explores the foundations of modern physics. In this quarter, Susskind focuses on Einstein's General Theory of Relativity.
10.5446/15033 (DOI)
Stanford University. So the questions that were asked to me tonight are more or less by accident, they're exactly the ones that I want to address tonight. Big gravitational fields, linearity versus non-linearity, and gravitational waves. Again, working out the equations of general relativity is always unpleasant. And we're not going to do it on the blackboard, they would fill the blackboard even for simple things and they probably would not be terribly illuminating. To learn this subject really how to compute and how to solve its equations and so forth, I think you just have to sit down and do it. On the other hand, the principles are straightforward enough and it's easy enough to say what you get when you do solve the equations. So that's the way we'll talk about gravity waves by writing down the equations and then writing down the solutions. Now we're interested in what could be called weak gravitational waves. Weak gravitational waves means that the amplitudes of the gravitational waves are small enough that you can make approximations such as the amplitude squared of the gravitational field is zero. When a quantity is small and you're expanding an equation about the smallness of that quantity, the usual rule is to ignore things with a higher order in that small quantity. All right, so we start with an equilibrium. We're talking now about fluctuations or perturbations about an equilibrium situation and the simplest equilibrium solution of Einstein's equations first. Einstein's equations, let's take the case without any matter, no energy momentum tensor. Then the equations of motion are just r mu nu minus one half g mu nu r is equal to zero. That's called the Einstein tensor, g mu nu equals zero. And it can be simplified. If you take the trace of both sides, the r here is the trace of the Ricci tensor. If you take the trace of both sides, you will discover that r, the scalar curvature r is equal to zero. And once you know that, you can ignore the second term in the equation. It's simpler. That doesn't really matter because we're not going to write down the details anyway. But this is the equation of Einstein in a context in which there is no energy momentum tensor on the right-hand side. All right, what's an equilibrium situation? An equilibrium situation, first of all, means a solution which has no time dependence. It's really, and which also doesn't have any matter on the right-hand side, matter is another word for the energy momentum tensor on the right-hand side, no right-hand side. There's really only one equilibrium situation. The equilibrium situation is just empty space. Empty space is time independent. The empty space, I mean empty flat space, no curvature, no interesting gravitational field, and in that case, the metric, g mu nu. Now I'm not going to say the metric is equal to such and such. The metric depends on the coordinates you use. If I wrote that the metric of a flat plane is just a Kroniker-Delta symbol, you would correct me and say, no, the metric of a flat plane is not the Kroniker-Delta. A possible metric expressing the flatness of space would be the Kroniker-Delta. But if I used other coordinates, the metric would not be the Kroniker-Delta. Use curved coordinates, use polar coordinates, use any other kind of coordinates. What's special about flat space is that you can find coordinates in which the metric has a nice simple form. The same is true in general relativity. 
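For the record, the simplification mentioned a moment ago comes from taking the trace of the vacuum equation, that is, contracting it with the inverse metric (in four spacetime dimensions g^{mu nu} g_{mu nu} = 4):

\[
g^{\mu\nu}\Bigl(R_{\mu\nu}-\tfrac{1}{2}\,g_{\mu\nu}\,R\Bigr)
\;=\; R-\tfrac{1}{2}\cdot 4\,R \;=\; -R \;=\; 0
\qquad\Longrightarrow\qquad
R=0,\quad\text{and therefore}\quad R_{\mu\nu}=0 .
\]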
Flat spacetime with no gravitational field, there is a choice of coordinates in which the metric has a simple form of eta mu nu. Let me just remind you what eta mu nu is. Eta mu nu is a matrix. Let's just write the matrix. I think with the notations we've been using in this class, I think it's 1 minus 1 minus 1 minus 1, 0, 0. No, sorry, that seemed correct. 1, 0, 0, 0, 0, 1, minus 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1. Which row and column corresponds to time? The first one, T x, y, z, T x, y, z, this is, and so forth. That's the metric of flat spacetime, 1 minus 1 and 3 pluses. It's an equilibrium solution. It is a solution of the Einstein field equations. The Einstein field equations say certain components of curvature are equal to 0. This is a geometry or this is a metric that has no curvature at all. So it is a solution. If you plugged it in, you would find it's a solution. In fact, it would be trivial, one step. Now let's think about something which is close to empty space. Far away, some complicated thing is happening far away, too far away to be interesting to us or at least to be close enough for us to directly observe. There's something going on. A binary pulsar is rotating around and doing some complicated emission of gravitational waves. Close by, the gravitational field may be very strong. Even the gravitational waves might be rather strong. But if you go far enough away, the gravitational radiation, the gravitational waves that are produced by this thing are going to be very weak. What does weak mean? Weak means that the true metric can be chosen, again I emphasize, can be chosen to equal eta mu nu plus something small. Small means that its components are much smaller than the ones which appear here. And that small thing is usually called H mu nu. As far as I know, it's called H because H is the letter after G. I don't know any other reason for H. Which mu nu, unlike eta mu nu, is in general a function of position. It's also, when I say position, I mean position and time. It's a function of the coordinates. It varies from place to place. And it might describe a wave. We'll come back to waves in a moment, but what are we going to do with this? We're going to take this metric, calculate from it R mu nu, and set it equal to zero, and that's going to give us an equation for H. Now the equation that you actually get from H for H, it's not big enough to fill the whole blackboard, but it's big enough to be quite unpleasant. And so I'm just going to be schematic. I'm going to show you what goes into it. First of all, what is R? R is a combination of components of the curvature tensor. I'm not going to write the curvature tensor. I'm just going to remind you what it contains. The curvature tensor itself, the thing with four indices, contains one term which is a derivative with respect to position, with respect to x. And I'm not even going to write which x. It contains lots of different derivatives depending on the index structure. Derivative of the Christoffel symbol. So it contains a first derivative of the Christoffel symbol, and secondly, it contains the Christoffel symbol quadratically squared, or Christoffel symbol times another Christoffel symbol. This is really all, this is as much detail as we're going to do. I could put the indices in here and remind you where the indices go, and there are various terms which look like this, there are various terms which look like that, but this is good enough for us. What about gamma? What about the Christoffel symbol? 
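Collecting the setup so far in symbols, and taking the sign convention described here, one minus sign going with time and three plus signs (the opposite overall sign is equally common, and nothing below depends on the choice):

\[
\eta_{\mu\nu} \;=\;
\begin{pmatrix}
-1 & 0 & 0 & 0\\
 0 & 1 & 0 & 0\\
 0 & 0 & 1 & 0\\
 0 & 0 & 0 & 1
\end{pmatrix},
\qquad
g_{\mu\nu}(x) \;=\; \eta_{\mu\nu} + h_{\mu\nu}(x),
\qquad |h_{\mu\nu}|\ll 1,
\]

with rows and columns ordered t, x, y, z, and with h a small perturbation that depends on position and time.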
If you remember, that can, well first of all, it contains a factor of one-half. It contains the metric tensor with components upstairs. The metric tensor with components upstairs is simply the inverse matrix of the metric tensor itself. It can also be expanded in powers of h. The first contribution to it is just the eta symbol itself. So the inverse, eta is its own inverse matrix, and so it begins with a g to the minus one, I'll be more specific in a minute, and then it has derivatives of g. It has various kinds of derivatives of g. Now come back to here. G inverse, that contains again things like eta, probably minus h, but whatever it is, it contains the eta symbol plus a small correction. The metric tensor with upper indices, if you only had eta, would basically be the same eta, and then h would appear as a correction. All right, so g inverse here, this would contain things like, this is a wiggly equal sign, which means not equal at all. So it contains things like eta minus a small correction or plus a small correction, that's not important for us. But then the derivative of g, that only contains h. Why? Because the derivative of this metric tensor is just zero. All it contains is zeros and ones. Derivative of zeros and ones is zero. And so in the approximation that we're working in, the derivative of g just contains derivatives of h. Let's just call it derivative of h. Okay, let's look at what we have here. This is the Christoffel symbol. It contains one term which has one power of h, incidentally, h is small, and we'll also assume that its derivative is also small. If you have a small number, a small function, and you differentiate it, it's equally small in general, unless it has very sharp points or something, which we'll assume it doesn't. So the derivative of h is also considered to be a small thing. Eta times h is once small. H times the derivative of h is twice small. It's quadratic in the fluctuation or in the small gravitational field, so we ignore it. Too small to be important. If h is 0.01 and derivative of h is 0.01, then h times h is 0.001. Whatever, very small. So we can ignore this. And eta times h we'll just call derivative of h. Let's call it derivative of h. So the Christoffel symbol itself is just in the approximation of weak gravitational radiation. The Christoffel symbol itself is just proportional to some collection of derivatives of h. What about the Ricci tensor then? The Ricci tensor will contain derivative of gamma, and that's going to be second derivative of h. It's going to contain various kinds of second derivatives of the gravitational field, incidentally. H is called the gravitational field or the field of a gravitational wave. And then over here, it's going to contain plus derivative of h times derivative of h. Gamma times gamma is quadratic in derivatives of h. So immediately we say this is much smaller than this, and we ignore it. So whatever the devil this Ricci tensor is, it's composed out of simple second derivatives of the metric tensor. From that we can conclude that Einstein's equations have a relatively simple form. There's still plenty of indices around, and the number of different terms involving second derivatives of h are significant. 
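The lecture deliberately keeps these expressions schematic; for reference, the standard linearized formulas that the counting above describes are, up to overall sign conventions and with indices raised and lowered by eta:

\[
\Gamma^{\lambda}_{\ \mu\nu} \;=\; \tfrac{1}{2}\,\eta^{\lambda\rho}\bigl(\partial_\mu h_{\rho\nu}+\partial_\nu h_{\rho\mu}-\partial_\rho h_{\mu\nu}\bigr)+O(h^2),
\qquad
R_{\mu\nu} \;=\; \tfrac{1}{2}\bigl(\partial_\alpha\partial_\mu h^{\alpha}_{\ \nu}+\partial_\alpha\partial_\nu h^{\alpha}_{\ \mu}-\Box\,h_{\mu\nu}-\partial_\mu\partial_\nu h\bigr)+O(h^2),
\]

where \Box \equiv \eta^{\alpha\beta}\partial_\alpha\partial_\beta and h \equiv h^{\alpha}_{\ \alpha}: first derivatives of h for the Christoffel symbols, second derivatives of h for the Ricci tensor, just as argued above.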
It's complicated enough, I don't want to write it down on the blackboard, but it is basically built out of second derivatives, second derivatives with respect to position and second derivatives with respect to time, and maybe even some terms which have a derivative with respect to position and a derivative with respect to time. And furthermore, there are several components of it. There's not just one. How many components of it? Well, the Ricci tensor has a mu and a nu, so it has two indices. What comes out has two indices, but the whole works, whatever it is, is composed of second derivatives of h. So our equation of motion is some kind of equation that looks like this. Equations of that form, especially in relativity, are usually wave equations. Just to remind you what a wave equation for an ordinary wave looks like, let's say a wave moving down the z-axis. A wave moving down the z-axis also satisfies an equation that looks like this. Let's say the wave field, let's call it phi. The wave field phi will satisfy an equation like d second phi by dt squared is equal to d second phi by dz squared. This is the simplest wave equation that you can imagine, and the solutions to it are waves which move either to the right or to the left. You can add waves which go to the right and the left. It's still a solution, so you can take a wave which looks like this, add it to a wave moving to the left. The sum is still a solution because the equation is linear. So this is, or let's write it this way, this minus this is equal to zero. If we had more directions of space, the structure of the equation would be a little more complicated. Instead of just the derivative of phi with respect to z, we would also have a sum of terms plus second derivative of phi with respect to x squared plus second derivative of phi with respect to y squared equals zero. But it's evident that there's a family similarity between the kind of equation which occurs here and the kind of equation which occurs here. In fact, by clever manipulation, you can make these equations look just like wave equations. This is not just one equation, incidentally. There's an equation for each mu and nu. Let's put a bracket around here and just remind ourselves that there are four times four, sixteen components of R mu nu. Now they're not all independent, and it really is quite a bit simpler than that, but still, in principle, there are sixteen components, sixteen equations. In that respect, it's somewhat similar to Maxwell's equations. Maxwell's equations have the form of wave equations, for example, for the electric and magnetic field, but there are several components. There are three components of electric field, three components of magnetic field. Sounds like there's only six equations, but in fact, there are eight equations, Maxwell's equations. This is the same sort of thing, several equations, but all of similar form. Any questions? One question, doesn't that derivative put two indices, doesn't it actually have four indices? Does it have what? Four indices. Two go with the derivative and two go with the h. Yeah, yeah, but they were contracted in many ways. Remember, yeah, that's correct. Remember where R mu nu came from? It came from a thing with R mu nu sigma tau, with upstairs or downstairs indices, and then we contracted them, with the nu upstairs here, so that nu disappeared as an index, and we just got the Ricci tensor. So yes, with four derivatives, sorry, with two derivatives and two components of h, we certainly can make how many? Whatever number.
We can make a number of different combinations, much more than just two indices, but the combination that we're interested in is contracted in such a way that there's only two indices and only two equations left over. Yeah? On that contraction when you have the index upstairs, doesn't part of the contraction involve introducing the metric again or not? Yes, it does. Good. Right. So part of contracting involves introducing the metric again, but again, the same thing is true. The metric is eta plus h, r is composed out of derivatives, so it already is small. It already is small, and if you multiply it by another small number, it's doubly small. I think that's what you're asking. Well yeah, but then you'll get terms of the form h times the second derivative of h. Yes, but their count is infinitely small. Okay, so they're small and then they're... Yes, anything that involves an h multiplied by another h or a derivative of h times another derivative of h or an h times the derivative of h is dropped in this approximation. This is an approximation, but it's a well-defined approximation in that it is the... Well, technically it's the linearization of Einstein's equations, which simply means you throw away everything of higher power. Yeah? But it is a good approximation when the gravitational radiation or the gravitational wave is weak. Okay, so we have equations of this form, which we're not going to specifically write down. They're kind of messy. But before we discuss them, or before we discuss the solutions, you might ask yourself, well, I originally told you that it's not that the metric of flat space is equal to eta mu nu. It's that it can be chosen to be equal to eta mu nu. That means that there are other ways to represent flat space, make a coordinate transformation. Make a coordinate transformation on eta mu nu and the metric changes, but it's still exactly the same... Exactly the same solution. And so what that means is that there must be solutions of these equations which look like they have... Which look like they're non-trivial, but really are just that all they really represent is flat space, but in coordinates where the coordinates have little ripples in them. In other words, supposing I take the flat blackboard, I maintain that the flat blackboard is a solution of Einstein's equations. Well, not really. Well, it is, yes, but not in the sense I mean here. It is a flat space. It can be represented by a metric delta mn, let's say. m and n stand for x and y. On the other hand, there's nothing to prevent me from introducing coordinates with little wiggles in them. Well, we could start with coordinates. Let's say we start with coordinates which are the good Cartesian coordinates. Let's call them x standing for x and y. Well, yeah, let's call them x and y. All right, x and y. And then we can introduce new coordinates, x prime and y prime, where x prime is just x plus a small change. We could call that plus f sub x of x and y. That's a coordinate change. A coordinate change is x prime is some function of x and y. And y prime is some other function of x and y. That's a coordinate change. Here the coordinate change is taken to be almost no coordinate change at all, plus a small little correction. Same thing for y. Y prime is equal to y plus, let's call it f sub y of x and y. I have not changed the space in any way. All I've done is change coordinates and perhaps put some little wiggles. 
If f has some wiggles in it, then the new coordinate axes might have, the new coordinate lines might have some little wiggles in them. And now the metric, if I rewrite the metric for the prime component, so let's say, okay, let's take the case, let's assume the metric in the primed coordinates is just the x prime squared plus dy prime squared. I'm working backward. I'm supposing that the primed coordinates are the nice coordinates, the nice Cartesian coordinates, and the x and y coordinates are the slightly curved ones. All right, what is the metric? The metric, this is just a blackboard now. We're not doing space time, just a blackboard. The metric of the blackboard is the x prime squared plus dy prime squared. Let's work it out for fun in x and y. Okay. So what is dx prime? We work out what dx prime is, dx prime, comma, dx prime is equal to dx plus the derivative of f sub x with respect to, let's call it, x sub xm. f sub xm could be either x or y times dxm. Likewise for y, we figure out what dy prime is. That's dy plus a small correction. You plug it into here. What do you find out? You find out that the x prime squared plus dy prime squared is not dx squared plus dy squared. It contains the x squared plus dy squared, but also contains other cross terms, dx times this and dy times that. If you work it out, what you find out is that this is equal to dx dy, sorry, dx squared plus dy squared. Now I'm assuming that f is small. I'm going to assume that f is small, and that means when I squared dx prime squared, I can drop dx squared. Likewise for y. So what I'm going to get is a correction. And the correction can be called hmn dxm dxn. In other words, just by squaring out the x prime squared and dy prime squared, substituting in, you'll discover that there's a small correction to the metric tensor. Now does a small correction to the metric tensor mean that the geometry of the blackboard has changed? No, it just means that the coordinates that I've used have wiggles in them. You can work out, it's nice to work out, it's a little exercise, work out what the correction in the metric is, dropping anything which is quadratic in f, dropping anything which is higher order, and I'll tell you what you get. You get that hmn, m and n now just run over x and y, is equal to dfm by dxn plus dfn by dxm. That will mean nothing to you until you try to work out an example, or until you try to prove it. This is the correction to the metric. It's a correction to the metric which is small. Why? Because I chose f to be small. It has the form of a small perturbation on the metric, but it doesn't represent anything, it represents just the trivial change. So there are some, let's call them perturbations, perturbations mean a small change, perturbations on the original form of the metric which are trivial, they don't represent anything. Likewise here, there are small fluctuations that you can write down which just represent curvy linear coordinates in space time. How do you get rid of that? How do you get rid of it and eliminate the phony solutions which automatically solve the equation because they're just flat space but which don't represent any real physics, and you do it by imposing more equations, more equations on the metric that sort of cut away and divide out and eliminate the unwanted spurious solutions. 
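The "little exercise" mentioned above, kept to first order in the small functions f, goes like this:

\[
dx'^{\,m} \;=\; dx^m+\frac{\partial f_m}{\partial x^n}\,dx^n
\qquad\Longrightarrow\qquad
dx'^{\,2}+dy'^{\,2} \;=\; \delta_{mn}\,dx^m dx^n+\Bigl(\frac{\partial f_m}{\partial x^n}+\frac{\partial f_n}{\partial x^m}\Bigr)dx^m dx^n+O(f^2),
\]

so the apparent perturbation is h_{mn} = \partial f_m/\partial x^n + \partial f_n/\partial x^m. It looks like a small change in the metric, but it is only flat space written in slightly wiggly coordinates.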
I could write down the equations, it's not important, they're just equations which tell you to get rid of the spurious coordinate ambiguities, and you can do that in a wide variety of ways, a wide variety of conditions that will allow you to eliminate the unphysical, meaningless solutions of Einstein's equations. Once you've done that and you've imposed all of the Einstein equations here, once you've done that, incidentally, the equations become pretty simple and in fact all they become is wave equations, they become wave equations, they become several different equations, they become first of all just perfectly ordinary wave equations for the components of H, H mu nu. Same kind of wave equations that I wrote down, or I'll write them down, I guess I erased them. d second phi by dt squared minus d second phi by dx squared, blah, blah, blah, y squared and z squared is equal to zero. So let me write them now in the correct form, instead of phi we have H here, H mu nu. Each component of the metric tensor, or each component of the fluctuation, satisfies a wave equation. That means that all the components of the wave equation, sorry, all the components of the metric just move down the axis like waves, and there are waves. On the other hand there are also some constraints. The constraints come, there's a lot more equations, how many equations are there? Well there are more equations than one for each mu and nu. The reason that there are more equations is because you have to divide out these spurious fake solutions that are there because of the coordinate ambiguity. Once you do that you find out that the physical solutions, the ones that really have meaning, I'm going to classify them now for you, supposing we have a wave moving down the z-axis. Let's suppose we have a wave moving down the z-axis. What does a wave moving down the z-axis look like? A wave moving down the z-axis, if it were just a simple wave, would just be phi is equal to something like some number times sine of k times x minus z, sorry, times z minus t, times t minus z. Z minus t, t minus z, doesn't matter, same thing. This is a wave that at a fixed instant of time is just a sine wave and it moves down the axis with unit velocity. Unit velocity here means the speed of light. Now each one of the components will have a solution like that. Each component will be proportional to sine k times, what is k incidentally? It's the wave number. It's the frequency of the wave. It's the frequency of the wave. It's also the wave number, the number of oscillations per unit length. Of course, it can be anything, it can be any number. Short wavelengths have large k, long wavelengths have small k. Okay, so that's a typical solution of a wave equation and the solutions to this equation here are all of the same type. H is equal to some function, well there's a coefficient here, let's call the coefficient phi naught. Phi naught is just a number. Each component is a function of t and z which is proportional to the same thing, sine k times t minus z, times a number which we can call H naught. Let's put mu and nu here, mu and nu. And this is not a function of position. It's just a numerical coefficient that multiplies the sine of t minus z, and that's about all you can write down. This is the nature of a gravitational wave. Each component of the metric behaving like a wave moving down the axis. However, as I said, there are more equations.
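The extra conditions alluded to here amount to a choice of coordinates, a gauge choice; the lecture does not commit to a particular one, but a standard textbook choice is the harmonic (de Donder) gauge, under which the linearized vacuum equations collapse to ordinary wave equations with exactly the plane-wave solutions just described:

\[
\partial^\mu \bar h_{\mu\nu}=0,\quad \bar h_{\mu\nu}\equiv h_{\mu\nu}-\tfrac{1}{2}\eta_{\mu\nu}h
\qquad\Longrightarrow\qquad
\Box\,\bar h_{\mu\nu}=0,
\qquad
h_{\mu\nu}(t,z)=H_{\mu\nu}\,\sin k(z-t),
\]

with H_{\mu\nu} a constant coefficient for each component, constrained further by the gauge condition and the leftover coordinate freedom, which is what singles out the transverse components discussed next.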
The additional equations, there's a good number of equations here and when you impose them all, you find out, oh, let's, before we do this, let's just remind ourselves something about electromagnetism. In electromagnetism we do the same thing, we solve Maxwell's field equations. We can solve Maxwell's field equations either for the vector potential or the electric and magnetic fields. But when we write down all the equations, we find some constraints. The constraints are called transversality of the field. Does everybody know what transversality of an electromagnetic field means? It means that the electromagnetic field, the electric and magnetic fields always point in the direction perpendicular to the motion of the wave. Not only are the solutions waves, but they're transverse waves, meaning to say that the electric field and the magnetic field, the magnetic field points one way, the electric field points the other way and the whole thing goes down the axis, oscillating as it goes down the axis. Very similar things happen here. The waves have to be transverse. To say that the waves are transverse means that the time components and the z components of H are zero. The only components of H which are allowed to be nonzero are the components in the plane perpendicular to the direction of the wave. So if you substitute this in to all the equations, the full set of equations here, including the ones which remove the fake spurious fluctuations, you find that a gravitational wave has a very, very simple form. Well, it does have this form. A set of numbers, H nu, but the only components that are allowed, let's say this is z, x and y, I'll call i and j. i and j represent the directions in the plane perpendicular to the motion of the wave. And the only components which are allowed are H ij times sine kz kz minus k times z minus t. So if you're looking down the z-axis, if you're looking down the z-axis, you see in front of you the xy plane. You see in front of you the xy plane, there's the xy plane, and the xy plane at each z and t. In other words, a slice, here's not my slice, I slice it at a given location at a given time, at a given location at a given time, the metric of the two-dimensional plane. I slice the wave into two-dimensional planes as I go down the axis here at a fixed instant of time. And what I have here is a metric for each z, which looks like, where is it? Right over here. With only two components, with the components only in the plane perpendicular to the motion of the wave. And the components are simply numbers. At each z and t, the metric is simply a set of numbers. There's one more equation, one more equation that comes from Einstein's field equations, and it says that the trace of H ij is equal to zero. In other words, H xx plus H yy is equal to zero. That's it, that's the whole set of equations, and what it tells you is that the metric of a gravitational wave is H naught. H naught equals zero. H z naught is equal to zero. H zz is equal to zero. In fact, H anything with either a time component or a space, or a z component, are equal to zero. The only components which are not zero are H xx, H yy, and H xy. H yx. The metric tensor is symmetric. The metric tensor is symmetric, and so H xy and H yx are the same. And finally, H xx plus H yy is equal to zero, which means that H yy is minus H xx. H yy is equal to minus H xx. That's pretty simple. What does it mean? First of all, all of these vary with z and t. They vary only with z and t. 
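Collecting the conditions just stated for a wave moving down the z-axis, with rows and columns ordered t, x, y, z, and giving the two surviving numbers their usual names h_+ and h_x (the "plus" and "cross" amplitudes, a labeling introduced here just for convenience):

\[
h_{\mu\nu}(t,z) \;=\;
\begin{pmatrix}
0 & 0 & 0 & 0\\
0 & h_{+} & h_{\times} & 0\\
0 & h_{\times} & -h_{+} & 0\\
0 & 0 & 0 & 0
\end{pmatrix}
\sin k(z-t),
\qquad
h_{+}\equiv H_{xx}=-H_{yy},\qquad h_{\times}\equiv H_{xy}=H_{yx}.
\]

Everything with a time index or a z index vanishes (transversality), the trace vanishes, and only these two independent polarizations remain.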
If the wave is going down the z-axis, then by definition the variation is along the z-axis. And so H xx is some number to begin with times sine kx times sine kz minus t. This one is H naught yy sine k times z minus t. And likewise, this one is H naught xy sine kx. Yeah, OK. OK, so think of a series of layers. Let's look at it at a fixed instant of time. At a fixed instant of time, for example, time t equals zero, as we move down the z-axis, there's a perturbation on the metric. The metric tensor is a little bit different from just a flat space metric. It oscillates as you go down the z-axis. And the components that oscillate are the components that have to do with the metric of the plane perpendicular to the wave. What does it mean to have an H xx? Well, let's just take the case of H xx. If there's an H xx, that means that the xx component of the metric is a little bit different than one. The xx component of the metric was originally one, that's eta, eta xx. It's one plus this little bit. Blah, blah, blah. Plus this little bit. That means that proper distances along the x-axis are a little bit different, that actual distances, let's say between x equals zero and one, or let's say between x equals minus one and one, that the actual distance, the metric tensor is the thing which tells you the actual distance between points. The coordinates are just labels. The actual distance between point minus one and point plus one is a little bit different than it would be if this perturbation weren't here. Either a little bit larger separation or a little bit smaller separation depending on the sign of H xx. For example, if H xx is positive, then the actual physical distance between here and here would be a little bit larger than one meter, supposing we're working in meters. That would mean that a meter stick doesn't quite extend from minus one to one. What about G yy? Well, we'll take H xx to be positive. If H xx is positive, did I say that? Yeah, I did get that right. The distance between minus one and one is a little bit bigger than a meter, and therefore a real meter stick would not look. It would be a little bit shorter than the interval between minus one and one. Not because the meter stick is necessarily shorter, but because the coordinates are such that on that surface there, the distance between minus one and plus one would be a little bit bigger than a meter. What about G yy? G yy is equal to one minus H xx, minus H xx because Einstein's equations tell us that the sum of these two are equal to zero. That in turn tells us that a meter stick oriented along the y-axis, let's say from y equals one to minus one, is again one minus one. A meter stick would look a little longer. The distance between plus one and minus one would be a little bit less than if you only had the one here. Our meter stick over here would look a little bit longer. This would just be nothing but a squeezing of the coordinates so that you would be choosing coordinates along the x-axis where the meter stick was a little bit less than one unit and the meter stick along the y-axis was a little bit greater than one. It would be nothing at all, it would be nothing significant except for the fact that it changes as you go down the axis. As you go down the axis from one point to another or if the wave were passing you, imagine better even yet, the wave was passing you. The wave is passing these meter sticks. After a brief amount of time, the h's will change sign, not the sign but S-I-G-N. 
The sign wave oscillates so after a brief interval of time, one half cycle of the wave, hxx has changed sign, hyy has changed sign and the result is after a brief interval of time, the meter stick will look a little bit short in these coordinates along the y-axis and the meter stick will look a little bit long along the x-axis. Yeah? The meter stick is held together electromagnetically, it's not changing length so you're talking about the same thing. No, no, no, these are coordinates. Right, you're talking about two little test masses, they would move back and forth. In fact, two little test masses would move back and forth but not because, yeah, the meter stick is one meter long. It's always one meter long but an oscillating wave is going back and forth, perturbing it. A real wave, a real wave of curvature, this is real curvature and if you like, I think one way to imagine it, well, the time dependence of this wave really does exert tidal forces, they're tidal forces. The wave has curvature, the wave is moving past here, it's a kind of tidal force and the nature of the tidal force is to actually cause the meter stick to be compressed, stretched, and the way that it's compressed and stretched is that when it's compressed horizontally, it's stretched vertically, when it's compressed vertically, it's stretched horizontally. So if you like, this wave going by a meter stick, or for that matter, if it were just two test masses, if it really were just two test masses, the two test masses would oscillate back and forth, if instead of taking a meter stick, you took a square piece of plywood, the square piece of plywood would be deformed this way and then reverse, this way and then reverse. Now it's interesting, there is another solution, there is another solution in which not Hx and Hy are nonzero, but in which Hxy is nonzero. That would be a metric which would look like a chronicle delta, delta Mn or delta Ij plus a little matrix which had an Hxy over here and an Hxy over here. That still has Hxx plus Hyy equal to zero, there is no Hxx, Hyy. If you think about it a little bit and just spend a little bit of time figuring out what this means, it's also exactly the same kind of thing except that the squeezing and stretching are along the 45 degree axis. Stretch this way, squeeze this way, oscillating. It's not a new thing, it's not a new solution except that it's the original solution turned by an angle. In fact any solution that you write down which is a superposition of these two kinds of solutions, any solution is a linear superposition of the two solutions and they correspond to some orientation of axes where along those axes the compression and expansion are along those tilted axes. That's all there is, that's all the kind of gravitational waves there are. You pick a set of axes perpendicular, first of all you pick the direction that the gravitational wave is moving. Yeah. What can you use to measure this compression that does not change? A strain gauge. A strain gauge. This will create honest real stresses in the piece of plywood. A strain gauge will register then. And this wave is going past, if the wave was static and not moving then this piece of plywood would really be unaffected. Everything would be simultaneously squeezed or stretched the same way. The rulers which measure the plywood, the strain gauges which measure but it's the oscillating character of the solution which really does create real honest stresses and strains in it. It does have real curvature and this is what it does. 
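As a small worked version of the meter-stick discussion: the proper length between coordinates x = -1 and x = +1, to first order in h, is

\[
L_x=\int_{-1}^{+1}\!\sqrt{g_{xx}}\;dx=\int_{-1}^{+1}\!\sqrt{1+h_{xx}}\;dx\;\approx\;2\Bigl(1+\tfrac{1}{2}h_{xx}\Bigr),
\qquad
L_y\;\approx\;2\Bigl(1-\tfrac{1}{2}h_{xx}\Bigr),
\]

so a positive h_{xx} stretches proper distances along x and, because h_{yy} = -h_{xx}, squeezes them along y; half a cycle later the roles reverse. The cross polarization h_{xy} does exactly the same thing along axes rotated by 45 degrees, as described above.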
What triggers the gravitational wave like electromagnetic waves triggered by a current state? Or by a moving charge. For example one charge, charge of an electron going around a proton will create electromagnetic waves. And the same thing, a star or a black hole or a pulsar or whatever it is. In orbit around another one, a binary pulsar for example, the famous binary pulsar, two very very concentrated masses which happen to be rotating around each other at a rather small distance. Neither one is a black hole. The whole thing is not a black hole but it is a very very strong gravitational field. They're going around each other and in going around each other the accelerating masses create the gravitational waves. So now near the binary pulsar this is a bad approximation. The gravitational field is too strong to linearize the equation this way but if you get back a few radii, a few distances away then the wave spreads out, dilutes itself and gets weak enough that this becomes a good approximation. So if you have the binary pulsars over there and you're standing back, the wave is coming at you, let's say the axis between you and the binary pulsar is the z-axis then what you'll see coming past you will look like this. And what it will do is it will cause stresses and strains along perpendicular to the line of sight to the pulsar. Yeah. Yeah. It could be a real physical linear state. I think the way you measure it is by the stresses and strains that are induced in it. It's hard to say that it's not a meter because it's by definition a meter stick. But in essence it's very... Yeah. Yeah. In fact if you had a steel meter stick and a wooden meter stick they would probably react differently. They would probably react differently. So once these stresses are applied, once the stress due to the curvature tensor here, and it's a real thing, the curvature tensor, it is a kind of tidal force. Once these tidal forces are applied, the meter sticks, the two meter sticks, the steel one and the wood one will stress by different amounts in general. So if you measured the wooden meter stick with the steel meter stick, I'm not sure which one would be bigger and which one would be smaller but they wouldn't match up precisely. Yeah. So would this be how you would build a revenue wave spectra? Yeah. That's kind of the point I was making before. If a steel meter stick has got a big elastic module, it's not going to shrink very much when the wave goes past. But if you have two test masses that are floating right at the ends of the meter stick, they have no elastic modules. They'll move right with the spacetime. That's right. Well, no. See the move relative? No, they won't move quite with the spacetime. They'll be accelerated. They'll be accelerated. Yeah. So they'll start ringing. Yeah. So they'll start ringing. Now, if you want to detect a gravitational wave, what you want to do is create some system which has a resonance at the frequency of the gravitational wave. You know what a resonance is? A resonance means that the system has its own natural frequency, frequency of oscillation, which is the same as the oscillation frequency of the wave. Then you would have a driven system being driven at its own natural frequency, and under those circumstances, the response will be particularly big. 
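The "they'll be accelerated" remark can be made quantitative. In the standard treatment (not derived in this lecture), two nearby free test masses separated by xi feel a relative, tidal acceleration driven by the second time derivative of the wave:

\[
\ddot{\xi}^{\,i} \;\approx\; \tfrac{1}{2}\,\ddot{h}^{\,\mathrm{TT}}_{ij}\,\xi^{\,j},
\]

which is why a static h does nothing, while an oscillating wave shakes the masses and can ring up a resonant detector tuned to its frequency.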
So if you wanted to detect the gravitational wave, oh, let me say that the gravitational waves that we expect in astronomy from different sources, believe me, I'm not an expert at all, but I know enough about it to know that the H mu nu, which are produced by a distant gravitational source, are very, very small. They're so small that the dimensionless strain on a steel rod or something like that would be something like 10 to the minus 21. Really, really small effects. The fact that people even contemplate measuring them is quite astonishing, but it seems it can be measured. And... No, no. There is a system that measures it, the LIGO system. There is a system that's built in two places. I think one in Louisiana and some other place, which is measuring the interference of laser beams. I'm not sure what you're asking. LIGO or LIGO? LIGO. LIGO was a gravitational detector. It's not a steel rod. It's a... A pair of mirrors, which are monitored by laser beams, interference effects due to the relative motion and so forth. But basically, the gravitational detector is a system which is allowed to be sent into oscillation either this way or that way. Is there a velocity that comes out of those solutions? Oh, yeah. The speed of light. Oh, okay. It's not put in artificially in advance by... No, no. It's the speed of light. So if you were... If you saw a gravitational collapse, a gravitational collapse or the collision of two black holes would make... And that happens from time to time. When I say it happens, I mean to say that you can estimate roughly from observed facts how many black holes are out there, and you can guess how many black hole collisions take place every year. And if I remember, I think within range of such... Of the best-imagined detectors... If I remember, I think it's about one black hole collision per year. It's a lot. And right, so in principle, if it was possible to detect a black hole collision through other means other than gravitational radiation, you ought to see the signal at the same time as the gravitational radiation. Gravitational radiation is a pretty weak effect when you're far back, but from the collision of two black holes, it's an enormous effect when you're... It's much bigger than any other kind of radiation that's emitted. So if the collision is taking place at large cosmological distances, it can be the case that the only way to detect it would be through gravitational radiation, gravitational waves like this. That's why it's interesting, because it's a new window for astronomy. Okay, that's what gravitational radiation is, or that's what gravitational waves are. They're real. You asked me if gravitational radiation has ever been detected. Yes and no. It's never been detected by direct detectors. But just like electromagnetic radiation, for example, from an orbiting system, radiation carries off energy. Carrying off energy means that as two systems are rotating about each other, they will lose energy, and in losing energy, they may speed up a little bit. They will speed up a little bit due to the loss of energy. And so the study of this binary pulsar that's out there, which has all the properties necessary for gravitational radiation to be emitted from it, when the timing of the precession, not the precession, the rotation of the two systems about each other, they are... The frequency is changing a little bit in time exactly as would be expected, quantitatively exactly as would be expected from the system if it were emitting gravitational radiation.
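To get a feel for the number 10 to the minus 21 quoted above: the fractional length change of a detector arm is roughly half the strain. As a rough order-of-magnitude sketch, taking a kilometre-scale interferometer arm of about 4 km (an illustrative figure, of the order of the LIGO arms mentioned above):

\[
\frac{\Delta L}{L}\;\sim\;\tfrac{1}{2}\,h
\qquad\Longrightarrow\qquad
\Delta L\;\sim\;\tfrac{1}{2}\times 10^{-21}\times 4\times 10^{3}\,\mathrm{m}\;=\;2\times 10^{-18}\,\mathrm{m},
\]

a few thousandths of a proton radius, which is why the measurement is done interferometrically rather than with anything like a meter stick.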
Gravitational radiation is the main thing which would cause its energy to change and it works out exactly right. So... What sorts of frequencies are we talking about? Milliseconds? I think milliseconds. Yeah, I think. You look it up, I'm not sure. It might be a little slower than milliseconds. Milliseconds is what a pulsar does when it rotates around its own axis. It might be a little slower than that. If I remember correctly, it's days. Oh, days. Okay, days. Yeah. Yeah, yeah, yeah, this is two stars going around each other. What am I saying, milliseconds? No. Milliseconds is for the rotation of the pulsar. These are two stars going around each other. It's not the... Take back what I said. Okay, yeah. We arbitrarily took X and Y as two coordinates that we decided to put them on the flat line. What about if we took X and T? In other words, how would it affect space time rather than just the space? This is the time direction. The wave has a directionality along with its moving. The wave is transverse, which means the only components that have physical significance are the ones in the plane perpendicular to the direction of motion of the wave. The other components are either not there or they correspond to a wiggle in the coordinates. The other components are either wiggles in the coordinates or they're not there. Yeah. I looked it up for the original binary pulsar that where the detection is at 7.75 hours. Okay. Geometric mean of a day and a million... Right. How much has it changed over the period that it's been observed, though? A very small number. About 30 seconds. Really? That much? Yeah, but it's been looked at for a decade. Yeah. So time is actually oscillating as well, as you just said. It's not time is not oscillating. The wave is oscillating with time. If it's good in one place, the wave would oscillate with time. Yes, but you were going to ask. Well, but time is when these coordinates, right, in the important differential space. Time is a coordinate. It is orthogonal to z. No, no, no, no, no. When I said orthogonal to z, I meant the space directions orthogonal to z. There is time. And the components of the metric, which have a time component in them, all are equal to zero. It's very much like an electromagnetic wave. The electromagnetic wave also has space and time, or electromagnetic field, also has space and time components. The vector potential has space and time components. In a wave moving down the z-axis, only the transverse space components of the vector potential are there. It's very much like that. You said before that h0,0 and h0,x,h0,y,h0,z are all zero. So there's no curvature in the time direction. Well, no, there is curvature in the time direction. Remember something, yeah, the h's in those directions are nonzero, but when you differentiate, you can differentiate with respect to time and get the time into it. So derivative of h, let's say x,y with respect to t is not equal to zero. And remember the curvature tensor has two derivatives on h, so you can get a whole collection of different kind of components. Yeah. Yeah. But still, it's a very special kind of curvature that's not generic. Okay, that's gravitational waves. You can learn more about them by any good book on general relativity, which will work right down the details of these equations. Now they will fill about one blackboard, and they won't be terribly illuminating, but the basic principle I've told you. Any other questions? Yeah. All right. We start out with Einstein's equations, which are nonlinear. 
We wind up with a wave equation, which I believe is linear. Only because we made the approximation of a very small h. Okay, so it's got nothing to do with getting rid of the nonphysical solutions? No. No, no. The linearity is because the wave is weak. Generally speaking, I tried to explain before, small oscillations about an equilibrium and the approximation that the oscillation is very small are linear. So this is linear because of an approximation that h is so small that the square of it is zero. Any other question? Yes. If you do this approximation like we did, and then you go back at the end and see how far off it was from an accurate solution, how much error is there? That's how big h is. If h is 0.01, the corrections will be 0.00. If h is 1.10, the corrections will be 1%. Whatever h is, the corrections will be of order the square of it. Yeah. So, right. So that's, look, that's something, yes, that's something you always do when you make a linearized approximation, you always check at the end that the corrections to it will be small. But they will be if h is small enough. Yes. If you make h much larger so there is a component of a higher order, would that affect the fact that i and j are perpendicular to the z-axis? Would that? No. Is that the same? No. But what it will affect is whether waves will pass through each other. They won't. They will scatter off each other. No, the wave will maintain, that's a good question, but the wave will maintain its transverse character. Yeah. The mechanical wave, the magnitude of the mechanical wave is related to the energy and the stiffness of the material. The energy and the stiffness of the material, yeah. Is there any sense in talking about the stiffness of the material? Yeah, yeah. The stiffness of a material determines the velocity of propagation through the material. So you can substitute for stiffness, you can substitute velocity of propagation. Electromagnetic and gravitational waves are extremely stiff. They are the stiffest things possible in that the speed of light, and that they are both governed by the speed of light. So in the old days when people thought about the ether and things and the mechanical properties of the ether, they attributed it to stiffness. Things are called Young's modulus or whatever it's called to the ether, and it was a very, very big coefficient of whatever it's called because the speed of light is so much larger than the speed of propagation, for example, through a metal beam. So we no longer really think of it as a stiffness, but if you did think about it as a stiffness, you would say that the space on which these waves move is very stiff. Well, it's really the stiffness divided by the density, isn't it? So either it's very stiff or there's not much density. Yeah, and only the combination of them appears in the propagation of light. That's true. It's just one combination named with a velocity which appears. True enough. Okay. Let's, in a little bit of time that we have, I just wanted to tell you about another way of thinking about the equations of general relativity. Again, the complexity of actually carrying out the equations is too complicated, but nevertheless, the basic idea is there. One of the things I've told you and emphasized to you, particularly in classical mechanics, is that the principle of least action is the most central principle of classical physics that we have. 
All systems that we know about, mechanical systems, electromagnetic radiation, quantum field theory, all of these things, the standard model of particle physics, they're all governed by an action principle. And the action is something which in field theory, in for ordinary particle mechanics, the action is an integral along the trajectory of the particle. So it's an integral over time. The action of a field is an integral of a space and time. A field configuration is a value of the field at every point of space and time. So we draw here's space time, all of it. I've gotten it all down onto the blackboard. Time goes this way, space goes this way, here's everything. A field configuration or a field history, even better, a field history is just labeled by, is basically a value of the field or fields at every point on here. So it's some field, let's call it generically phi, and it's a function of x and t. And that's a field configuration. Not every configuration is a solution of the equations of the theory. Just like not every trajectory is a solution of the equations of motion, every trajectory is a thinkable trajectory, but not every trajectory satisfies the equations of motion. In the same way, every phi of x and t is a thinkable possible value of a field in space and time, but they're not all solutions. And in general, for the best of our knowledge, every such field is governed by a principle of least action. The principle of least action involves some action which is composed out of the fields. This is just a generic term for field. It's an integral. It's an integral over space and time, dx dt, and x now means x, y, and z. So it's a four-dimensional integral of some thing, usually called the Lagrange density, which is a function of the fields and their derivatives. Let's call them phi sub mu, derivatives of the field with respect to x mu. And what do you do with this? You minimize it, or you make it stationary. You find the field configurations, let's use shorthand, which minimize the action subject to a constraint. You go out to a large space, some large distance, and some large times. You take some region of space-time, and you say what the values are on the boundary. It's the same deal as when you look at a particle motion, and you say I'm interested in the solution, giving that the particle goes through two points. Given that the particle starts someplace and ends someplace, then what's the trajectory in between? In the same way, you say supposing you know that the field at some late time has some value, let's say final, phi initial, and you ask what is the solution which interpolates between these two? The answer will be the field that, or the solutions that make the action stationary. Okay, so the point here is that the field equations, the Einstein field equations, can be derived from a action principle. It's a long computation, but the action is simple. I'm just going to tell you what the action is, and then you can go home and try to take that action and apply the Euler Lagrange equations to it, and see if you can derive Einstein's field equations. I guarantee you it can be done. I also can tell you I never finished it. I started it several times and never quite got to the end of it, but it's just tedious. It's straightforward, but tedious. So you go back and you look at Euler Lagrange equations. I'll tell you what the action is. It has some interesting pieces in it. First of all, let's do a simpler thing first. Let's just calculate the space-time volume. 
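For reference, "make the action stationary" for a field means solving the Euler-Lagrange equations for fields, which in this notation read:

\[
\delta S=0
\qquad\Longleftrightarrow\qquad
\partial_\mu\!\left(\frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)}\right)-\frac{\partial\mathcal{L}}{\partial\phi}=0,
\]

with the field held fixed on the boundary of the spacetime region, just as the endpoints were held fixed for a particle trajectory.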
I will tell you what I mean, but let's actually even think about ordinary space for a minute. Let's suppose we have ordinary space, and it has a metric, gmn. When I'm thinking about space, I use mn. gmn dxm dxn. That's the metric, ds squared. The distance between two neighboring points is given by this formula. Now supposing I'm interested in the volume of a small region of space, let's say dxdy. Let me begin by supposing the metric has two components, gxx and gyy. And I have a small interval dx and a small interval dy. I want to know the area. We'll call it the volume. Volume means three dimensions. Area means two dimensions, but they're the same concept. What's the area of this little square? The area of the little square, I would know it if I knew the distance, the actual proper distance associated with the x and the actual proper distance associated with dy. What's the proper distance associated with a little x dx here? Square root of gxx square root of gxx dx. Can you read that? Let's make it bigger. This distance is square root of gxx dx, and this distance is square root of gyy dy. So what's the area of the rectangle? It's square root of gxx square root of gyy dxdy. It's not just dxdy. The important thing is it's not just the x or dy. No more so than the distance between two points is just the x or dy. The actual physical area is a square root, in this case, is the square root of gxx gyy. Now, I've used a case here where there was only gxx and gyy. There could be other components of the metric. If you work it out, this is a special case. This is a special case where there's no off-diagonal gxy term. Here, let's think of it this way. You have a metric, you have a matrix, gxx gyy. Now, there may be some other components here, gxy and gxy. But what I tell you is that when there's no off-diagonal components, the volume element involves the product of gxx gyy. Can you guess what the general formula is? What replaces that product? The determinant. The determinant is gxx gyy minus gxy squared. The determinant is exactly right. So if you actually work out the volume element of a little square like this, or a little rectangle dxdy, the actual answer is not gxx gyy. It's gxx gyy minus gxy squared, which is the term as the determinant of the tensor g. It's usually just written with two bars to represent determinant and g. That's the formula for the volume element or for the volume of a small differential dxdy. What would be the, and this can be in curved space. This is in curved space or any space, whatever. Given a little rectangle dxdy in coordinate space, the area or in higher dimensions, it would be the volume, would just be the square root of g, square root of determinant of g times dxdy. If I have some large region and I want the total volume of it, I integrate this. The integral of square root of the determinant of the metric times dxdy, that is equal to the area of the entire region enclosed within some boundary. What about the volume of space? If we're talking about volume, in other words, three dimensional space, well it's exactly the same except you have dz here. Exactly the same square root of the determinant of the metric is called the volume element. It's usually called the volume element. And if you have four dimensional space time, it's useful to define a similar concept which would just involve the fourth component. Call it the space time volume. It's an interesting quantity. It's invariant. It's the same in every coordinate system. That's the important thing about it. 
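In formulas, the result just arrived at, including the off-diagonal case:

```latex
dA \;=\; \sqrt{g_{xx}\,g_{yy} - g_{xy}^{2}}\;dx\,dy \;=\; \sqrt{\det g}\;dx\,dy,
\qquad
V \;=\; \int \sqrt{\det g}\;\,d^{n}x ,
```

and the same pattern gives the three-dimensional volume element and the four-dimensional space-time volume element.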
If you re-coordinateize the region, change coordinates, this quantity stays the same. The volume, in some sense, the physical volume of a region of space time or the physical four dimensional volume of a region of space time. And this integral is an invariant quantity. Other invariants, other quantities which are invariant and which will not change when you change coordinates, are based on this quantity. If you take the volume element and you integrate it with any scalar, let's call the scalar S of x, S of x and t, the x dt, this also is an invariant quantity which doesn't depend on the coordinates. It's just the sum. You take each little volume element, you take its volume, and you multiply it by the square root, sorry, by the scalar field. You take the volume of each little element, multiply it by the scalar quantity in there and add them all up. That's a quantity which is really not dependent in any way on the coordinates you use, even though writing it down may use a certain set of coordinates. Action is a thing which is supposed to be invariant. Action in relativity theory is always supposed to be the same in every coordinate frame. Otherwise, the laws of physics which came out of it would depend on the coordinate system. If the laws of physics are supposed to be the same in every coordinate frame, one way of ensuring that is to make the action an invariant, the quantity which doesn't change from coordinate to coordinate, from coordinate system to coordinate system. So it's natural then to take the action to be, I'll call it d4x, meaning the x, dy, dz, dt, some kind, well, the square root of g, the square root of the determinant of the metric, times some scalar. Now let's just take the Einstein theory without any energy momentum tensor on the right-hand side. Just pure gravitational field, nothing else. What kind of scalar could I put there? How many scalars are there that I can make up only out of the metric? Well, indeed, so that's called r. It's the thing on the r, let's see, it's the, that's a scalar. Anything else you can think of? I'll tell you there's one more thing. The other thing is any number, seven or four or sixteen or whatever number you like. So you can add to this a numerical number. Numerical numbers are always regarded as invariants. Everybody who sees seven sees the same number. And the number that you put here depends on the laws of physics. It is a law of physics, it itself is a law of physics, has a name, what's it called? All right, I'll tell you what it's, I'll tell you what its symbol is, its symbol is lambda. Now what's it called? The cosmological constant. It's a term in the action which is just proportional to the space-time volume itself. The term in the action lambda times this integral is just the space-time action itself. The cosmological constant has not been, I have not included it in these equations here. These are the solutions, these are the equations without cosmological constant. So without cosmological constant, this is the action. And Einstein's equations are completely equivalent to saying minimize this or make it stationary, make its variation equal to zero away from a solution. That's minimizing something is making it stationary. Yeah? What's the stress energy tensor coming up with? The stress energy tensor would come in as additional terms in the action which would depend on the material and other fields. For example, there could be electromagnetic energy, there could be other kinds of energy, they would come in as additional terms here. 
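In the usual conventions, the action just described looks as follows. The overall 1/16πG normalization and the factor of 2 multiplying Λ are bookkeeping choices not fixed by the argument above, and for a metric of Lorentzian signature the determinant is negative, which is why the square root is written as √(−g):

```latex
S_{\text{EH}} \;=\; \frac{1}{16\pi G}\int d^{4}x\,\sqrt{-g}\,\big(R - 2\Lambda\big),
\qquad
\delta S_{\text{EH}} = 0
\;\Longrightarrow\;
R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} + \Lambda\,g_{\mu\nu} = 0 .
```

Setting Λ = 0 reproduces the vacuum equations referred to above.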
This is the vacuum. Yeah, this is the vacuum theory. This is the vacuum theory, for example, the theory which would govern gravitational waves. Other things could be in here, but those other things would be made up out of different fields. They always have a square root of g, they always have a scalar. If you remember your electromagnetism, the electromagnetic field is described by an anti-symmetric tensor, F mu nu. One of the things that you would put here would be F mu nu, F mu nu. That would govern the electromagnetic field. But let's leave that out. Let's leave that out. Here's the Einstein, the Hilbert-Einstein action discovered, I think, just about simultaneously by Einstein and Hilbert. Of course, Einstein already had that. He already had the whole works and knew how it all worked, but they independently sat down and said, can these equations be derived from an action principle? And both of them, Hilbert and Einstein, both came up with the same answer. This is it. Well, this is a short hand, but not just a short hand, a physical principle that contains all the Einstein field equations. Incidentally, what replaces this field? I said a field in space. What do you put in there? The metric. The metric. This is an action that's made up out of the metric. And you vary the metric a little bit. You vary the metric a little bit, and you look for places where you found a minimum of the action. Varying the metric a little bit, if you vary away from a solution, the action should grow bigger in any direction that you vary it. So this is the Einstein-Hilbert form of general relativity. As I said, you could sit down. Let's see what you'd have to do. You'd have to calculate the curvature tensor in terms of derivatives of the metric. You do that by first calculating the Christoffel symbols, then differentiating the Christoffel symbols, multiplying them together, sticking them all together, multiply it by the square- oh, do whatever contractions are necessary to produce the curvature scalar, multiply it by the square root of the determinant of the metric. It's a 4 by 4 metric. Four rows, four columns. The determinant is a great big thing made up out of lots of components. Get the determinant and then apply the Euler-Lagrange variational equations to that action. And it takes about five minutes to do the whole operation, and out of it you will get Einstein's field equations. If you can't do it in five minutes, you fail the course. As I said, I've been trying to do it for 50 years and I've never quite succeeded. A lot of my friends have. There's a little chapter in the Tewkes general relativity notes, and it's very involved because it does all the things you talked about. And he plugs certain things out of the air because he knows the answer and then just verifies it. So it's really not easy. No, no, no, no. As I told you repeatedly, the principles themselves are not terribly hard. For whatever reason, somehow in the, you know, there are people who have such a good visual feel for curvature and differential geometry that they can more or less see through it. You have your choice. Either you're a person with a great deal of visual ability to be able to see through the complexity of curvature, tensors and so forth, and figure out how to do these processes by almost visualization, or you work them out by the algebra. So if you're good at algebra, you do it one way. If you're good at visualizing geometry and you have a great command of differential geometry, you probably can short circuit a lot of that.
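Spelled out, the chain of operations just listed is the following; index and sign conventions differ from book to book, and this is one common choice:

```latex
\Gamma^{\lambda}_{\ \mu\nu} \;=\; \tfrac{1}{2}\,g^{\lambda\sigma}
\big(\partial_\mu g_{\sigma\nu} + \partial_\nu g_{\sigma\mu} - \partial_\sigma g_{\mu\nu}\big),
\qquad
R^{\rho}_{\ \sigma\mu\nu} \;=\;
\partial_\mu \Gamma^{\rho}_{\ \nu\sigma} - \partial_\nu \Gamma^{\rho}_{\ \mu\sigma}
+ \Gamma^{\rho}_{\ \mu\lambda}\Gamma^{\lambda}_{\ \nu\sigma}
- \Gamma^{\rho}_{\ \nu\lambda}\Gamma^{\lambda}_{\ \mu\sigma},
\qquad
R_{\sigma\nu} \;=\; R^{\rho}_{\ \sigma\rho\nu},
\qquad
R \;=\; g^{\sigma\nu} R_{\sigma\nu},
```

and the integrand of the action is √(−g) R, to which one applies the Euler-Lagrange variation with respect to the metric.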
I frankly have neither. I am not very good at algebra, and I certainly don't have the visual ability to be able to visualize four-dimensional curvature tensors. So repeatedly, I have sat down and tried to do this calculation, usually in the process of teaching the subject, and give up after about two hours. If I had to do it, I could. But since I don't have to, I don't. Don't ever tell any of my friends that I can't do it. Okay. Yeah, I know. They know I can't do it. It probably is, but it fills pages. I'm sure it is filled, written down somewhere. It doesn't help because when you have a thing of that complexity, it really doesn't help to follow the steps from one step to another. Either you do them yourself and you figure out what the relation between the steps is, or you don't. But very long calculation really is not very illuminating unless you try to do it yourself. And then when you run into trouble, you peek, oh, that's the next step. And then you understand why it's the next step. But I've never learned anything by looking at these long calculations. These days, people don't do them. People throw them on a computer, either numerically or analytically, and let the computer grind out because it's just too nasty and complicated. Yeah? In hindsight, this seems more motivated than the way Einstein did it originally. But I'm wondering, I think it went from what, 1905 to 1916 before he published General Relativity. What do you think? Is it apparent from his papers what was taking so long to put it all together? The only papers I've read were the early ones around 1907 and then the 1916 paper. And I read them 50 years ago. So I really don't know what, but part of it is that very, very few people knew very much about Riemannian geometry. Einstein didn't know, and he basically had to invent all of Riemannian geometry. I think one of his friends helped him with it, who knew some mathematics. It was a Marcel Grossman, I don't know. But 10 years to put all of this together doesn't sound like a lot to me. He literally had to build all of the ideas of curvature, so it took a long time. Yeah, I think he knew what he was doing pretty well. But you can go back and look yourself. The papers are in English. They were translated into English. You can find them. Okay, I think we're finished for the quarter. Yeah. Anyway, I could use a rest. I'm going to take a rest. For more, please visit us at stanford.edu.
(December 3, 2012) Leonard Susskind demonstrates that Einstein's field equations become wave equations in the approximation of weak gravitational fields. The solutions of these equations give the theory of gravitational waves. This course is the fourth installment of a six-quarter series that explores the foundations of modern physics. In this quarter Susskind focuses on Einstein's General Theory of Relativity.
10.5446/15021 (DOI)
Stanford University was talking about what happens during or from beginning, from before a measurement to after a measurement. They were talking about what happens to the evolution of an isolated black hole left to itself between the time that it was prepared and the time it was measured. Okay. Nevertheless, I could answer the question, is entanglement inevitably irreversible? If two things which are not entangled come together, interact with each other, do whatever they do, and become entangled, is it irreversible? Not necessarily. Anything that can happen, the opposite can happen. And now we're not talking about an external apparatus measuring a system. We're simply talking, for example, about two spins. Two spins interact with each other, come together, go off, and they're entangled. What happens is it possible to disentangle them by, for example, reflecting them, letting them come together and interact again, can they become disentangled? Yes. Yes, there's no rule that says entanglement is irreversible. On the other hand, if an entanglement happens and you take one of the electrons off and throw it away, then the other one is stuck there, left being entangled with a thing that just is not going to come back. Not going to come back unless, of course, somebody out there reflects it back in. So yeah, entanglement is reversible. But as far as you're concerned, the measurements that you do are irreversible. That's a very peculiar situation, and you have to think about it to understand what it means. We're going to move off the question of these very simple systems, spins, sometimes called qubits. Tonight at some point, but I want to just go over just very briefly. I think I probably discussed this in class last time, but nevertheless, it's so important that I want to hammer on it. First of all, there was a notion of a density matrix. If we have two systems, Alice's system and Bob's system, and Alice and Bob's system are far away from each other so that Alice doesn't get to do any experiments on Bob's system, Bob doesn't get to do any experiments on Alice's system, but they're entangled, either they are or they are not entangled, they may be entangled, is there a description that Alice by herself uses to describe just her subsystem? The answer is yes, and the working description of her subsystem is called the density matrix. So let me just review that very, very quickly, and go back over some concepts. If the wave function of the Alice-Bob system, the combined system of Alice's subsystem and Bob's, if the wave function is called psi of A and B, B for Bob, A for Alice, then Alice's density matrix is defined to be psi star, the complex conjugate of the wave function. Now let's say it's rho, called the density matrix, it's indicated by rho. It has subscripts A and A prime because it's a matrix, but it doesn't have anything to do with Bob, so it's purely a matrix with respect to Alice's variables. Let's indicate that by calling this Alice's density matrix. And it's given by psi star of A, B, A prime B, I always get confused, yeah, A prime, psi of A, B summed over Bob's variables. It doesn't depend on Bob's variables because they've been summed over. When you sum over something, it no longer depends on that set of variables, so this is only a function of Alice's variables, and it contains all possible statistical information that Alice can ever know about her half of a system unless Bob comes and interacts with her system. 
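Written out, the definition just given is the following; the trace formula next to it is the standard companion statement of "all the statistical information Alice can have," not spelled out explicitly in the lecture:

```latex
\rho^{A}_{\,A A'} \;=\; \sum_{B} \psi^{*}(A',B)\,\psi(A,B),
\qquad
\langle M \rangle_{\text{Alice}} \;=\; \operatorname{Tr}\big(\rho^{A} M\big)
\;=\; \sum_{A,A'} \rho^{A}_{\,A A'}\, M_{A' A}
```

for any observable M that acts only on Alice's variables.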
Of course, if Bob comes from wherever he is and gives her a system a shot of some sort, that will change her density matrix, but as long as Bob stays away from her system, Bob's and Palo Alto, Alice's on Alpha Centauri, as long as Bob does not interact directly with her system, her density matrix doesn't change no matter what Bob does. Let me give a little proof of that. Let's take Bob's subsystem to not be so narrow as just to be the little spin that he's carrying around, and also Alice's. Let's imagine that Alice's subsystem consists of everything in Alice's neighborhood, everything that she can have any contact with, and everything in Bob's neighborhood, maybe even including Bob, and now Bob is going to do something. He's going to do something, interact, of course he can only interact, interact may mean he may turn on a magnetic field, he may do all sorts of things to change the evolution of his subsystem. He's going to make a subsystem evolve in some way. One of the ways which Bob can cause his subsystem to evolve, many, many ways, but whatever they are, they are always governed by unitary matrices. Let's come over here and forget Alice for a minute and ask what Bob can do to his subsystem. He has a wave function, let's leave Alice out of the picture altogether, just let's suppose that Bob has a wave function, and he's going to manipulate the system somehow, he's going to manipulate the system somehow, and send it to a new wave function. The rule is the new wave function of his subsystem must be related to the old wave function by a unitary operator. Unitary operators are the things which govern time evolution. Go back, remember about time evolution, unitary operators, and that means that psi of B becomes replaced by some over B, some unitary matrix, BB prime, psi of B prime. In terms of wave functions and matrices, this is the description of what happens to Bob's matrix, sorry, Bob's column vector, this is basically Bob's column vector, describing his half of the world or his system, and to find out what happens to it no matter what he does, even if he doesn't do anything, but let's suppose he does something, it generates a unitary evolution of his wave function. Now, how do we describe the same thing in the composite language? In the composite language, if the wave function is psi of A and B, Alice's and Bob's variable, what happens to it when Bob manipulates his half of the world? Exactly the same thing with Alice's variables being just spectators, just being passive spectators. So psi of A and B just becomes psi of, whoops, let's put in there, B prime, this is sum of B prime. This is a function of B and you sum over B prime. Okay, that's again a sum over B prime of U of B and B prime, psi of A and B prime. A just passed through. Alice knew nothing whatever about this whole process. Her subsystem was not in any way influenced, her index, or her part of the wave function, or half of the dependence of the wave function, was uninfluenced by Bob's manipulation. So let's go back and take the manipulated wave function, here's the manipulated wave function, let's call it psi manipulated, the manipulated wave function of B and A, of A and B, and that's this. It's been modified by the action of Bob's unitary operator. Okay, now let's recalculate Alice's density matrix. After all, Alice's density matrix contains all information that she can ever know about her half of the world. And so let's see what happened to it. 
Well, we first of all have to take the manipulated psi star, we still have sum over B, and now we take the manipulated psi star, oh, we haven't written psi star, sorry about that, we have to write psi star, manipulated of, and we want to know about it with index A prime and B, A prime and B, and that we get just by complex conjugating. This tells us how psi changes, let's just work out the complex conjugate. The complex conjugate of A prime. Where is it A and B prime? This is A prime and B, just A prime and B. But here, since B prime is a summation variable, we do not want to write B prime here. B prime was the summation variable used over here, let's just invent another one called B double prime. That's going to be a summation variable in a moment. And U complex conjugate of B and B double prime. Yeah, U complex conjugate, and of course there's a sum over B double prime here. And the only reason I call it B double prime instead of B prime, was so as not to confuse it with B prime. Here's one sum and here's a separate sum. Now, here's something, what do we know about unitary operators? Unitary operators are operators that if we interchange B, if we interchange the two indices, and complex conjugate, what happens? That's called the inverse operator, or the Hermitian conjugate. In general, it's called the Hermitian conjugate. Specifically for a unitary, it becomes the inverse operator. Now, let's just call it the Hermitian conjugate. So, if we interchange the two indices, the Hermitian conjugate, and the B double prime, B, then this becomes Hermitian conjugate. So, this is the usual story. If operators operate to the right on ket vectors, the corresponding action on bra vectors is given by the Hermitian conjugate. So, now what we have to do is plug this in. We plug this in for the manipulated psi star, that's summation over B double prime, psi star of A prime and B double prime times U dagger of B double prime and B. That's a no-sense. Too many indices for me to be comfortable on the blackboard, but I think this is worth doing. Now, what about psi? Psi is the other half that comes into the density matrix, and that's U BB prime psi of what? Psi of A and B prime. Complicated little mess. Oh, what are we summing over? We're summing over all the B's basically. We're summing over this B over here, let's put a B summation here. We're summing over B double prime to get the manipulated wave form of this complex conjugate, but we also have to sum over B prime here. Sum over all the B's in sight. Each B is a summation variable by the time we're finished. All right, but now look at what we have. It looks complicated, but what is this thing over here? Well, not quite, yeah, one is essentially correct, but of course what you mean is the unit matrix. This is the product matrix of U dagger. This is a unitary matrix, so U dagger is U's inverse. B double prime B, BB prime, this is just the chronicle delta. B double prime, B prime. Let's instruct you to set B double prime equal to B prime when you do the sum, when you do one of the two sums, let's say the B double prime sum. So all of this then becomes sum over B prime, well, let's see, what have we got? We've set B double prime, yeah, we've done the sum over B, that's done. No sum over B, we just did it. And now we set B double prime equal to B prime. Psi star of A prime, B prime, psi of A and B prime. That's it. 
Now it doesn't matter whether we call B prime, B prime, or whether we call it B, because it's a summation variable, so let's do that, let's just call it B. Now what do we have? We have exactly, I hope, I pray, the original density matrix. So Bob, no matter what he does, as long as what he does is described by a unitary matrix, Bob will have absolutely no effect on Alice's density matrix. It does not matter whether they're entangled or not. Now who wears that I specify that this was a product state? That is what physicists call locality, the inability of Bob to influence any statistical prediction of Alice's by doing a manipulation on his own degrees of freedom. Locality is not violated, this form of locality, this form of locality is not violated in any way by the phenomenon of entanglement. That's the technical argument. Notice that it does depend on unitarity. So what did unitarity do with? Unitarity had to do with conservation of distinctions. What I call the minus first law, at least in its quantum mechanical version. So there's a funny interesting interplay, funny interesting interplay between the conservation of distinctions on the one hand and locality. If Bob somehow had access to changes which were not described by unitary operators, this would not have become the unit matrix over here and the final density matrix would, Alice's final density matrix indeed would have been modified. This is interesting the way the whole thing holds together. This is why it's so hard to change quantum mechanics. You want to play a little game? Let's let Bob, let's let Bob's evolution not quite be unitary, only approximately unitary. Well, then immediately you start sending signals faster than the speed of light. How? By instantly modifying Alice's density matrix. So things hold together tightly and it's hard to change them. Good, so yes? Is it correct to say that Bob's matrix is unitary? The evolution matrix is a vector? Yeah. You can't do it, it's not really an infectious system. He's not able to change Alice's system. He's changed his own system by doing something and that doing something was described by this unitary matrix. Okay, that's the part that confuses me. If he's doing something, isn't he going to be violating unitary? No, for example, he might be turning a magnet. He might be turning a magnet in such a way that it has an influence on his spin. What is that influence? It will change the state of the spin. For example, yeah. If you have a spin in a given direction and it's coupled to a magnetic field and you turn the magnetic field in some way, it will typically change the evolution of that spin. So Bob can do that. He can change a magnetic field that's acting on a certain spin. The result will be to change the way the spin evolves. That will change his own measurements of the spin, of his spin. If he does act by rotating a magnetic field, which is mathematically described by some sort of evolution, doesn't matter what, some sort of evolution that's described by a unitary operator, then he does change the results of his own measurements. For example, he rotates the magnetic field but he doesn't rotate his apparatus. The result is that he's going to change the probability distributions of his own spin, but he will not change any of the probabilities of Alice's spin. Okay. So are you saying quantum mechanics is always a basal calving? Or is the probability defined by the inability of Bob to influence Alice's measurements? Yeah, now the question is, so what's the big deal? 
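For reference, the calculation just completed, collected into a single chain, with ψ′ the wave function after Bob's manipulation:

```latex
\psi'(A,B) = \sum_{B'} U_{BB'}\,\psi(A,B')
\;\Longrightarrow\;
\rho'_{AA'} = \sum_{B}\psi'^{*}(A',B)\,\psi'(A,B)
= \sum_{B',B''}\Big(\underbrace{\sum_{B} U^{\dagger}_{B''B}\,U_{BB'}}_{=\ \delta_{B''B'}}\Big)\,
\psi^{*}(A',B'')\,\psi(A,B')
= \rho_{AA'} .
```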
Is that what you're going to ask? Yeah. I was guessing what you were going to ask. Why don't you ask it? Single particle wave function, put a photon through a beam splitter. It goes off in two different directions. You catch it at one detector faster than the speed of light, the wave function collapses. Whoa, whoa, whoa, whoa, what? You do what? You detect a photon at one detector. The wave function at the other detector collapses to zero, instantaneously. I showed you what happens. The density matrix of the other system does not change, period. It did average measurements, still changed. What happened? I know, I know, I know, people, look, I'm doing my absolute best to tell it to you as it really is and to cut through the crap that's out there. To cut through the crap that's out there, because there is a huge amount of, there is a pile a mile high. And we somehow want to cut through that. This is the mathematics of cutting through it. Now, nevertheless, there was some pretty damn smart people who are uncomfortable about something, something that they call nonlocality. I mean, you know, Einstein was no jerk, he was pretty smart. And they didn't call him Einstein for nothing. That's my favorite joke. John Bell was pretty smart. They didn't call him Einstein, though. Yeah. So there's something going on. There's something going on. Yeah, go ahead. I want to get, I'm reviewing last week, so make it quick. To me, this is real clear, I think, and that is your idea of locality means that the states don't affect each other. In other words, there's two things. The density. Yeah. And it seems that in the confusing part of quantum mechanics is that those two different things get mixed up somehow. Is that accurate or? No, it's fuzzy. But it's true. So I will try to tell you as I understand it what the concrete operational facts are. So I told you last time, I was a little bit tired last time, and I'm not sure I was completely clear. So I won't go over it again because it's very, very interesting. And it really is at the heart of all of the confusions about confusions. I mean, I must say, including my own internal confusions about quantum mechanics. When I first learned about, er, Einstein, Rosen, Podolski, that was the discovery. Well, I don't know if it was a discovery. I think maybe Schrodinger said some things about entanglement. But it was Einstein, Rosen, and Podolski who really called attention to how peculiar it was. Basically what they said is we have a very funny situation here where although we can know everything that can possibly be known about a composite system, we know nothing at all about its constituents. That's crazy. That violates my sense of reality. That violates my sense of, er, of what it means to know what a system is doing. Violates all sorts of things. I remember when I first heard about it, it was actually before John Bell sometime in the 60s when I was a student. No, no, I wasn't a student. I was a young assistant professor. And I started to think about it and I thought about it a lot and tried to figure out what it means. I was sort of inclined to try to think about it in terms of simulating quantum mechanics. And I still am. What are the limits? If, if, if quantum mechanics is so very strange and so very different than classical physics, does it mean that you can't fool somebody into thinking a system is quantum mechanical if all you have, your resources are entirely classical? And the answer is yes and no. 
So I know I talked about this last time, but it really is at the center of things. So let's go back over it quick. Quick as I can. We're going to build ourselves a quantum simulator. I think somebody built one on the, on the site. I came across it about an hour ago. Who, who's that? Yeah. Does it work? Well, it doesn't actually simulate yet. Not yet. But is that picture, is that picture your picture? It was a picture of Einstein and Heisenberg and Einstein says I don't like this and Heisenberg said what? Einstein says I think I'm not going to like this. Yeah. Right. Oh. Yes. Right. I think that captures it. Why go on? But okay, let's, let's talk about the nature of quantum simulators. First of all, I just want to simulate a single spin. I think I told you how to do it, but let's write down the steps. We have a computer screen. Here's our computer screen. On the screen, we have a picture of a detector, not a detector, an apparatus. And the apparatus can be oriented in any direction in three-dimensional space. We need to do some three-dimensional graphics here. And it has a window, and the window shows one of three possible states. One is called blank B. That's the detector or the apparatus before it's interacted with anything. So it can be B or it can be plus or minus one. B is not important. It's just to say before the detector interacts with anything. There's a button over here called M. And the operator of the system, of the computer game, basically all you ever get to do is press M, and then look at the result of the M stands for measure. It performs a measurement. Now, over here connected to the computer by wires, by cables, connected to the screen by cables, is a computer. And the computer stores information in its memory, and the basic information that it stores is two complex numbers, alpha up and alpha down. Those are supposed to be the components of the wave function of the spin. I didn't say yet that there was a spin here, but that's what we're talking about. Alpha up and alpha down. The computer only can generate alphas which satisfy alpha star, alpha up, plus alpha star, alpha down is equal to one. Whenever it generates them. And it can do something else, it can do two other things. It can update the alphas by solving the Schrodinger equation for them. There's some Hamiltonian, for example, it could be that the spin is in an imaginary magnetic field, in which case the alphas would change with time. The computer knows how to do that. It knows how to solve Schrodinger's equation. Computers can solve Schrodinger's equation beautifully. And it updates these as a function of time. Updates them according to the rules of quantum mechanics. One other thing, the cables are instantaneous. They convey information instantly. So we'll have one more element. The other element will be a random number generator. Let me just say some words about random number generators. Random number generators, sounds like we're introducing quantum mechanics. No, we don't need quantum mechanics. You can simulate to a very, very high degree of confidence a sequence of random numbers or a random number generator purely out of classical considerations. You know how to do it. You take the digits of pi or whatever and you make something that looks, would fool anybody, including a sophisticated mathematician, into believing that you were generating a set of random numbers. So there's a random number generator inside the computer. All right, the operator starts by initializing the system. 
And initializing the system simply means starting out some orientation for the detector, for the apparatus, doesn't matter what, and setting the alphas to some initial values. You can plug in what he wants the initial values to be, or the computer just sets them according to whatever rule it likes. Then the system runs for a period of time, an indeterminate period of time, updating the alphas until the operator decides to press M. A certain amount of time is elapsed, the operator decides how much time and presses M. At that point, the random number generator generates a plus one or a minus one, but with probability alpha star alpha up, that it's plus one, sorry, alpha up, alpha up star, that's plus one, or else alpha star alpha down, that it's minus one. Remember, the sum of these two alpha star alphas add up to one. So armed with the random number generator, the computer sends a signal with a probability of the appropriate probability to the detector, and the detector detects or registers what the computer sent it. That's one measurement. So as soon as that happens, the computer re-initializes the alphas. How does it re-initialize the alphas? If the detector detected plus one, then it sends back a message, it doesn't really need to send back a message, after all, this message got from here to here, so the computer over here knows whether it got plus one or minus one. If the detector saw plus one, then it sets alpha u back to one and sends alpha d to zero. In other words, it does what we call collapsing the wave function. According to the result, or according to the outcome of the measurement, it re-initializes the alphas. So whatever was measured, you start with probability one for the outcome, and probability zero for the, what shall we call it, the anti-outcome. And then it runs again. Solves a Schrodinger equation, does it again. The operator makes a ramp, makes whatever decision he likes. I mean, what makes him decide to press M? He's an experimental physicist. What makes experimental physicists do what they do? I don't really know, but he does what experimental physicists do. Now and then he presses M and does the whole operation over again. Oh, sorry. Before he does, he may decide to rotate the apparatus. Now, the computer in here knows what the orientation of the apparatus is. So when it calculates, it not only calculates the updated alphas, but it also calculates the probabilities for the rotated detector to detect whatever it's going to detect. That's all algebra and a little bit of, that's purely algebra. The computer makes the kind of calculations that you did, presumably did, makes the kind of calculations to calculate what the probability would be that if the detector was rotated in some arbitrary direction, then what's the probability that it will get up or down? Now, who is it that decides to rotate the computer? Well, it's the operator again, the experimentalist. The experimentalist not only can press M, but he can also reorient the detector any time he wants. This is about as much as an experimental physicist can do with that spin. Rotate the apparatus, measure. Rotate the apparatus again, measure. And what's the result of the outcome of this? The outcome of this is indistinguishable from what the experimental physicist examining the spin would see. So we've simulated quantum mechanics. Now we want to know whether we can simulate quantum mechanics of two spins, one on Alice's side of the world over here, and one on Bob's side of the world. Yeah? 
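A minimal sketch of the single-spin simulator described above, assuming a toy Hamiltonian and placeholder names; the lecture does not prescribe any particular implementation:

```python
# Minimal sketch of the single-spin "quantum simulator" described above.
# The choice of Hamiltonian, the time step, and all names are illustrative
# placeholders, not anything prescribed in the lecture.
import numpy as np

rng = np.random.default_rng()

# Pauli matrices
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

class SpinSimulator:
    def __init__(self, alpha_up=1.0, alpha_down=0.0, hamiltonian=0.5 * SY):
        self.state = np.array([alpha_up, alpha_down], dtype=complex)
        self.state /= np.linalg.norm(self.state)   # alpha-star-alpha adds up to one
        self.H = hamiltonian                       # the "imaginary magnetic field"

    def evolve(self, t):
        """Update the alphas by solving the Schrodinger equation for time t."""
        evals, evecs = np.linalg.eigh(self.H)      # exp(-iHt) via eigendecomposition
        U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
        self.state = U @ self.state

    def measure(self, direction):
        """Press M: measure the spin along a unit 3-vector, collapse, return +/-1."""
        n = np.asarray(direction, dtype=float)
        n /= np.linalg.norm(n)
        sigma_n = n[0] * SX + n[1] * SY + n[2] * SZ
        evals, evecs = np.linalg.eigh(sigma_n)     # eigenvalues -1 and +1
        amps = evecs.conj().T @ self.state         # components in the measurement basis
        probs = np.abs(amps) ** 2
        probs /= probs.sum()
        k = rng.choice(len(evals), p=probs)        # the "random number generator"
        self.state = evecs[:, k]                   # re-initialize: collapse the alphas
        return int(round(evals[k]))

# Example run: prepare "up", let it evolve a bit, measure along z a few times.
sim = SpinSimulator()
for _ in range(5):
    sim.evolve(t=0.3)
    print(sim.measure([0, 0, 1]))
```

The measure method plays the role of the M button: it samples plus or minus one from the stored amplitudes and then collapses them, which is the re-initialization step described above; the evolve method is the Schrodinger update that runs between button presses.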
I don't understand the purpose of the button M in ours. In other words, when he decides to put it, some amount of time is involved. Has that affected anything going on in the computer? Yeah, the computer has been updating all this time. Right, but when we talked about our base model of how spin works, and how it was funded, and I come into it at all. We did talk about Hamiltonians. We talked about Hamiltonians and how Hamiltonians update the state of the system. You're right, we didn't discuss how... We didn't discuss much about Hamiltonians for spins. Yes, we did. Yes, we did. I believe I described a spin and a magnetic field and how the average spin precesses. We didn't have the magnetic field, because that's how we started out talking about all this stuff, right? Then time should not be a factor. Okay, I understand that. Right, now there's no real magnetic field. There's just an imaginary magnetic field that the computer knows about. And whereas I come into it, it comes into the nature of the exact Schrodinger equation. Okay. Now, we do this over here and precisely the same thing over here, except instead of alphas, we have betas. Beta stands for Bob. Exactly the same setup, replicated twice. Bob does his measurement, Alice does her measurements. They're perfectly happy. Each one says, I have a spin. It's quantum mechanical. They don't talk to each other. And each one says, my spin is in a state. The language that we would describe this by quantum mechanically is they have simulated a product state. A product state in which Bob's... Sorry, Alice's wave function is this, and Bob's wave function is this, and the combined system is in a product state. Supposing now we want to simulate any state, a more complete description that includes entangled states. Well, we can do it this way. Let's start with a way that's sort of guaranteed to work. We take the computer and we put it in the middle, a single computer. We can think of the single computer if we like as a pair of computers connected by wires, but just by the fact that they're communicating with each other and interacting with each other really makes them a single computer. And in that single computer we now store four complex numbers. The four complex numbers that we store in there are alpha up, up, alpha up, down, alpha down, up, alpha down, down. These complex numbers here are not necessarily products. In fact, let's assume they're not products. In other words, this is not a product state. Let's assume it's a highly entangled state. Even a highly entangled state is described by four complex numbers like this. That's what's in the computer. And what does a computer do with it? It evolves the Schrodinger equation. First of all, we initialize. We start with some alphas. The alphas are allowed to run according to whatever Schrodinger equation describes them. And at the end, either Bob or Alice or both, without knowing what the other one is doing, press or as the case may be, don't press their measurement buttons. Once the measurement button has been pressed, an instantaneous signal goes back to the computer. The computer grinds out its random number generator, and whichever side was measured sends a signal to the detector with appropriate probability, either plus one or minus one, and then reinitializes itself, reinitializes itself in a way that doesn't touch, for example, if Alice does the experiment, then it doesn't touch Bob's end of things. 
If Bob does the experiment, the computer reinitializes his half of the wave function, which is another way of saying it throws away the piece that you didn't measure, throws away the pieces you didn't measure and renormalizes the wave function so that the total probability is warning. Exactly the same thing. There's no real difference with a single system you can simulate it, and you can simulate it in a way which would describe all the possible things that Alice and Bob could measure, whether or not the system was entangled. Where do we get in trouble? We get in trouble if after initializing, we try to separate the computers and break the wires. For example, here's what we might do. Given these alphas, we might put them into both computers. Take the set of alphas that we generated, put this one into this computer, this one into this computer. Now, since the computers are not going to talk to each other, they have to have separate random number generators. The random number generators will not talk to each other. Each one contains the full entangled state, and then we go through the same operation. What's going to happen? The computer is simply going to forget, for example, for a highly entangled state, let's say the entangled state is the singlet state, then the right answer would be that if Bob and Alice both measure, they should get opposite values for any spin. If they measure the x component of spin, they should get opposite values. If they measure y components of spin, they should get opposite values. But if when we separate them, they have separate random number generators, and the random number generators don't talk to each other, then guess what's going to happen? There's going to be no correlation of any kind between what Bob measures and what Alice measures. No, Bob may measure plus, and Alice may also measure plus, because their random number generators are not connected to each other. And so even though both computers may know about the full state of the system, if the random number generators don't talk to each other, then the results of the experiments will not contain the correlation. The correlation that Bob and Alice could later come together and report, Bob would say, my first experiments are a plus. If things were working right, then Alice should say, my first experiments are a minus. And so forth and so on, that correlation will not be there once you try to separate the computers. Are the random numbers correlated if you don't separate the computers? You can have a single random number generator that sends out signals to both computers. If it sends out a signal plus one to this computer, it sends out a signal minus one to that computer. So all you need is a single random number generator, but it has to be able to talk to both computers. So there's got to be wires in the world. There's got to be wires in the world connecting this side and this side for no other reason than they can both connect to the same random number generator. You've got four outputs there, but it sounds like you just described that the outputs are independent. Now you've got to think about it. Yeah, yeah, yeah, yeah. Yeah. Yeah. Well, that's sort of right. Once you've separated them, then what happens next after some measurements are made, doesn't preserve the relationship between them. They become disconnected from each other. Yeah, so think about it a little bit. I mean, even better. Try to simulate it. 
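Here is a sketch of the point just made, under the assumption that the shared state is the singlet (amplitudes alpha_ud = −alpha_du = 1/√2, alpha_uu = alpha_dd = 0) and that both sides measure along z; collapse and re-initialization are omitted for brevity, since each run simply re-prepares the state:

```python
# Sketch of the two-spin point made above: with one shared random number
# generator the simulated singlet shows perfect anti-correlation; with two
# disconnected generators, each holding a copy of the same state, the
# correlation disappears.  All names and the measurement axis are
# illustrative choices.
import numpy as np

# Singlet amplitudes, ordered (uu, ud, du, dd)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def sample_joint(state, rng):
    """One shared RNG samples the joint outcome (Alice, Bob) for z-measurements."""
    probs = np.abs(state) ** 2
    probs /= probs.sum()
    k = rng.choice(4, p=probs)
    a, b = divmod(k, 2)                  # 0 -> up, 1 -> down
    return (1 - 2 * a, 1 - 2 * b)        # map to +/-1

def sample_marginal(state, rng, who):
    """A disconnected computer: it knows the full state but samples only
    its own side's outcome with its own RNG."""
    probs = np.abs(state) ** 2
    probs /= probs.sum()
    k = rng.choice(4, p=probs)
    a, b = divmod(k, 2)
    return 1 - 2 * (a if who == "alice" else b)

N = 10000
shared = np.random.default_rng(0)
wired = [sample_joint(singlet, shared) for _ in range(N)]
corr_wired = np.mean([a * b for a, b in wired])

rng_alice = np.random.default_rng(1)
rng_bob = np.random.default_rng(2)
cut = [(sample_marginal(singlet, rng_alice, "alice"),
        sample_marginal(singlet, rng_bob, "bob")) for _ in range(N)]
corr_cut = np.mean([a * b for a, b in cut])

print("shared RNG    <AB> ~", corr_wired)  # close to -1: perfect anti-correlation
print("separate RNGs <AB> ~", corr_cut)    # close to 0: the correlation is gone
```

With the shared generator the product of outcomes averages to −1, the perfect anti-correlation of the singlet; with two disconnected generators each side still sees fifty-fifty statistics, but the correlation between them is gone, which is exactly the failure mode described above.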
This is this, I've never tried, but I don't think it should be that hard if you can do a little bit of computer programming. I've simulated much harder things than that when my son was a little kid. We had this thing called BASIC. Anybody remember BASIC? And he and I designed the basketball game, and it had friction, and it had, it was much harder than this. So that was my last experience of programming anything. I think you'll learn a lot by trying to simulate this. You have a detector called a cell phone, and I get a signal from Alice or Bob about some random event. It could be a spin, let's suppose, but we're, will you please speak concretely? We get a signal from Bob of some random event. What does it mean? Can you please be exact? Very exact. I'm getting two text messages, one from A, one from B, okay, and they're going to have a value of one or zero, each of them. Okay? And I get a million of these things, and I noticed the distribution that the A signal had a 50% chance of being one or zero, same with the B signal, okay? It could be from a spin. Yes, yes, okay, go ahead. Now, I look at the correlation that turns out that whether it's A or B, the other one is the same. In other words, it's correlated, a correlation of one. That's in the real world of entangled states. This could be, we don't know where it's coming from. I'm just getting information. I get information to A comes through, I don't look at it, B comes through. I know it's a 50-50 chance, I don't look at it, but as soon as I look at A, I know what B is, so the wave function completely changes. Now, the example I'm using is not a state, I'm just a Bob and Alice or sexting twins, one's a boy, one's a girl. You've got to hurry up because I've got to get on to particles. So there is nothing magical about that. There is nothing weird about that. If it's correlated like that, it's very probable they're identical twins instead of eternal twins. So what, how am I missing? What is weird about that? If you don't find it weird, good. What's weird is that you can't simulate it classically without having a central processor connected by wires. So what's weird about it? So don't those wires that connect the two distant computers represent non-locality? Oh, okay, now we ask. All right, those wires are there. For goodness sakes, surely Bob can send a message through the wires to Alice. It takes no time at all because they're instantaneous. Yeah, true enough if this were a real computer setup, but if Bob is restricted to only do the things quantum mechanics allows him to do, if he is restricted by the rules of quantum mechanics, then we find out amazingly that Bob can never send a message through the wires that violates locality. Why? Because all Bob can do is manipulate his own system with unitary operators and so that's what's curious here. Does that mean that the world is filled up with wires that we can't see? I don't think that's the message. I think the right message, but of course here you can hear with this room for debate what the right message of all of this is. The message I take away from it is basically quantum mechanics cannot be simulated by a classical system. The logic is too different. The two theories are too different. I don't take away the message that the world is full of wires that if only we could access them, we could send messages back and forth. Other people do, but I think at least if we know how to do quantum mechanics, we'll agree on the facts. 
We may disagree on basically whether there are or are not wires there. Are there really wires there or are there really not wires there? Well, perhaps the culprit is the word really. But this is, oh, there is one other element to it, which is quite interesting. We've made an assumption that Bob and Alice get to kind of randomly make decisions about when to press the button, about what orientations, about what to decide to measure. Bob sits here and says, hmm, okay, Northwest, hmm, Southeast, or whatever, makes these random decisions according to the principle of free will, so to speak. That's fine as long as Alice and Bob are outside the system and not part of it. We don't need to ask the question of consciousness and the question of free will. We'll just admit that Alice and Bob can make random decisions about when to measure various things. It's obviously a pretty good approximation. But once we take Alice and Bob and put them into the system so that they are governed by the laws of physics, then things get a little bit touchy. What do we mean by saying that Alice can decide to do this or Bob can decide to do that? And that, I think, is where the possible loopholes in all of this can come from. Now, I don't want to get too deep into heavy philosophical discussions of all sorts of psychological questions of consciousness. I don't know the answers. That's it. I don't know the answers beyond what I've told you. I am certain of two things about it. I am certain that there are a lot of people who think they do know the answers, and I'm also certain that they don't. That's it. A question. Actually, not a question. If you Google for Quantum Randy Challenge, you'll find a blog entry that has the setup for this simulation. Really? For the entangled case? It doesn't solve it. It says, if you can, here's a program. Your challenge is to fill in this blank place. I see. If you can, he'll give you a million dollars. Actually, he won't give you a million dollars. Somebody in Sweden will give you a million dollars. Yeah. OK, that's very interesting. I'm not the first one to think about it this way, but I've been telling this to people for 50 years. No. I don't know. I probably wasn't. I think maybe Bohr thought about it this way. Who knows? It doesn't matter. We could spend the rest of eternity talking about spins. Why? Because they really are interesting. But we want to get to something else. We want to get to particles. We want to get to more complex systems. The most complex system we've discussed is two spins, four-dimensional Hilbert space. We've gotten about to the point where in classical physics, we got through the first lecture. The first lecture was about these simple discrete systems. And then we rather immediately moved on to continuous motion of things like particles moving continuously with an infinite, not just an infinite, but a continuously infinite range of possibilities. That's where we have to go now. Now fortunately, we've set up so much of the apparatus of quantum mechanics that we can do it fairly quickly. There's a little bit of subtlety about replacing sums by integrals. That's about it. That's about the nature of the subtlety. Summs and integrals, we've expressed the general principles of quantum mechanics and then exhibited it in a variety of simple spin system examples. Now we'll plunge right into quantum gravity. No. We'll plunge right into the problem of a single particle moving on a one-dimensional axis. 
We'll take the axis to be infinite, although that's mostly because they don't have time to be more careful. If we want to get anywhere, we're going to have to be slightly less mathematically rigorous. So we have, and in order to be rigorous, you really have to contain a system in a finite volume and so forth, but we don't need to do that. So we have a line, infinite line. Along that infinite line, one particle can exist. We won't do two particles yet. One particle can exist in one dimension, and that line is labeled by a coordinate x. I mean, not right down the coordinate x, but also the states of the particle are also labeled by the position of a particle. Now here we are departing from classical logic. In classical logic, the states of a particle are labeled by a position and a momentum. If we were talking about the states of a spin, we would say the state of a spin is described by all of the, or maybe at least two of the components of spin. But we can't do that in quantum mechanics. We can't specify two components simultaneously. Same in classical mechanics, specifying, as I'm sure you know, specifying the position and the momentum is too much. And we'll see how that works. But for the moment, let's take it as a postulate that a complete description of the state of a particle, and you can't get any more out, is, for example, to know that it's located at a position x, and if it's located at a position x, we'll describe that by a ket vector. The ket vector x. That's the state of a particle when it's located at position x. Now, of course, we can consider more complicated, more complicated states. Linear superpositions of these in quantum mechanics, we are always allowed to make linear superpositions. And so the space of states is bigger than just the space of particles located at definite positions. Let's call the general state psi. This is the general state of a particle. It doesn't mean anything now. It's just an abstract symbol. And it's a vector in some vector space. The vector space now has a continuously infinite number of basis vectors. Now, strictly speaking, that's a mathematically bad thing, but we're going to play that game. If you want to understand more deeply the mathematics of quantum mechanics, you go and you learn about Hilbert space. But we're not going to do that tonight. So the general state of a particle is called psi. Remember what the wave function of a system is. The wave function of a system, in a particular basis, is the projection of a state onto the basis vectors. And we call that psi of A. Now we come to a particle. The basis of states are the position states. And what is the wave function? The wave function is the inner product of the state vector with the eigenvectors of position. And that's called psi of X. That's the wave function. That's the classical, not classical, but that's the standard quantum mechanical wave function of a particle. It plays the same role as in the simple systems we've discussed, the inner product of the state vector with A. Now we can say a little more. In this context, the probability that the system, whatever it is, exhibits property A is given by this thing squared. In fact, it's given by psi star of A, psi of A. That's the probability that the system is, but the outcome of an experiment to determine whether the system had property A, that's the probability. Likewise, in exact parallel, the probability to find the particle at X is equal to psi star of X times psi of X. I really haven't done anything that we haven't done before. 
Now, in quantum mechanics, like in any other sensible statistical theory, the sum of all probabilities should add up to one. So the sum of psi star psi should add up to one. But when probabilities refer to a continuous variable, we don't talk about the probability that the particle is located exactly at point X. That's too sharp. The probability that a thing is located exactly at any given X is zero. We talk about the probability that it's located within a range of X. Or better yet, we talk about the density of probability. We talk about the probability density, and we integrate this over a little range of X to find the probability that you're in that range of X. We do something like that here. If we want to know not whether the system is definitely specified at A, but we want to know whether it's at a range of A, we sum over them. But here we sort of have to do it. Total probability equal to one? That's the condition that the integral of psi star of X, psi of X, dX is equal to one. Total probability of the system has to add up to one. And the only, as I said, the only new thing here is that we have to replace sums by integrals. And that does make for a mathematical subtlety that we have to worry about, but it's sort of in the mathematical details. Okay, what about the inner product between two state vectors, one at X and one at X prime? Well, here we go back for inspiration, right back to, right back to this case. Zero if A is not equal to A prime and one if A is equal to A prime. And we're not going to be able to quite get away with that here. We're not going to quite be able to get away with a statement that X, X prime is equal to zero or one. We are going to say that if X is not equal to X prime, then it's zero. But we're not going to say that when it's equal to X prime, that it's one, that would not lead us anywhere. What we're going to say is that it's a function which vanishes unless X equals X prime, but has area one under it. We'll see how that works out. We'll see what that does for us in a moment. What it really comes down to is a different normalization here. But again, the basic idea that states are orthogonal, if X is not equal to X prime, that we will retain. But the numerical value of what we get when X is equal to X prime, we're going to take to be slightly different. And in fact, we're going to invent a function that, well, Dirac did, the Dirac delta function, of X minus X prime. The Dirac delta function is not a real mathematical function, but by now, I think mathematicians have defined it to death so that it is a real mathematical entity. But for our purposes, we can think of it in a very simple way. It's a function which is zero, well, let's take delta of X, not delta of X minus X prime, this is basically delta of X minus X prime with X prime equal to zero, the function delta of X. The function which is zero unless X is equal to zero, that's the same as saying this thing is zero unless X is equal to X prime. So if this is X, it's zero everywhere except X equals X prime, and at X equals X prime, it's a very high narrow spike. How high and narrow? High enough and narrow enough that the area under it is one unit. That means if it's approximated by a narrow interval of size epsilon, that it has height one over epsilon. And then you imagine a limit where you take epsilon smaller and smaller, one over epsilon higher and higher, always keeping the area under it equal to one.
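Summarizing the last few statements in the notation used earlier, with δ_ε the narrow box just described:

```latex
\int_{-\infty}^{\infty} \psi^{*}(x)\,\psi(x)\,dx = 1,
\qquad
\langle x | x' \rangle = \delta(x - x'),
\qquad
\delta_\varepsilon(x) =
\begin{cases}
1/\varepsilon, & |x| < \varepsilon/2\\[2pt]
0, & \text{otherwise}
\end{cases}
\quad\xrightarrow{\ \varepsilon \to 0\ }\quad \delta(x).
```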
That's the Dirac delta function — the Dirac delta function: zero everywhere except when X equals X prime, infinite in some sense when X is equal to X prime, but in such a way that the area under it is equal to one. All right, now let's use this to further elaborate on what the meaning of psi of X is. I'm going to guess and then we're going to check. I'm going to guess that the meaning of psi of X is the following, that if I want to write a general state vector, psi, that I can think of it as a sum over the basis vectors — but not a sum, an integral. Of course an integral, because the basis vectors are continuous. dX of some coefficients — some coefficients — well, let's write it this way: psi is equal to an integral, dX, of some set of coefficients. I'm going to guess what they are. I'm going to guess that it's psi of X times the state, the quantum state, where the particle is located at X, and then we're going to add them up for different X's. It's exactly the same thing that we did for ordinary quantum systems, except this was replaced by a sum. All right, now I've given you two distinct definitions of psi of X. One is that it's the inner product of the state vector with the eigenvector X — I'll call it the eigenvector of X. The other is that it's the expansion coefficients in writing psi as a sum over vectors. Now if you're a little bit uncomfortable with the idea of a continuous basis of vectors that you integrate over, you're in good company. The mathematics of this is, as I said, by now it has been made rigorous. But it took pages and pages and pages and books and books and books on something called distribution theory to make sense out of this with precision — but it's pretty intuitive. It's pretty intuitive and we'll stick with that. All right, so let's calculate — oh, let's put a prime here. This is a sum, an integral. It doesn't matter what you call the integration variable. Let's call it X prime. And now let's calculate the inner product of X with psi. As I said, there were two different definitions of psi of X. Let's check that they're the same. So here we have it. Integral dX prime, psi of X prime, inner product of X with X prime. What's the inner product of X with X prime? I think I wrote it someplace — where is it? I think I erased it. That's equal to the integral of dX prime, psi of X prime, delta of X minus X prime. Okay, what is that? All right, first of all, delta is a function which is zero everywhere except where X equals X prime. So that means that in doing this integral here, you'll only get a contribution when X prime is equal to X. That means the answer is going to come out to be proportional to psi of X. Furthermore, the area under this delta function is exactly one. If we imagine that this delta function is a pinched, narrow, high spike, so that the area under it is one, then this integral would tend to psi of X. The rule is, if you have an integral over a delta function like this, it simply picks out the value of the integration variable where the argument of the delta function vanishes, where X minus X prime vanishes. That's where the function is not zero. And the integral of a function with a delta function just gives you the value of the function at the point X. That's a rule. We might as well write that down. Let's write that down as a rule. Integral delta of X minus X prime, F of X prime is always equal to F of X. It picks out the value X prime equal to X.
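As a quick numerical illustration of the sifting rule just written down — this is a sketch of my own, using a box-shaped spike of width epsilon and height one over epsilon, and an arbitrary smooth test function:

```python
import numpy as np

def box_delta(x, x0, eps):
    """Approximate delta(x - x0): height 1/eps on an interval of width eps, unit area."""
    return np.where(np.abs(x - x0) < eps / 2.0, 1.0 / eps, 0.0)

x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
f = np.cos(x) * np.exp(-x**2 / 20.0)                 # an arbitrary smooth function
x0 = 1.3

for eps in (1.0, 0.1, 0.01):
    sift = np.sum(box_delta(x, x0, eps) * f) * dx    # integral of delta_eps(x - x0) f(x) dx
    print(eps, sift)                                  # tends to f(x0) as eps shrinks
print(f[np.argmin(np.abs(x - x0))])                   # f(x0) itself, for comparison
```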
That's a part of the definition of the delta function. Okay, so it's just equal to psi of X, and we find consistency: the thing that we called psi of X over here is the same as the thing that we called psi of X over here. All of this has its exact parallel for the ordinary systems. The top integral on that bottom board, yeah — I'm trying to figure out which are the basis vectors. These are the basis vectors. Let's compare that with psi is equal to sum over a prime — let's call it a prime — psi of a prime, a prime. Okay? If we use the fact that a and a prime are orthogonal and do exactly the same thing, except replacing sums by integrals, we will also find that the psi that appears here is the same as the inner product of psi with the basis vectors. So I've really done nothing that we haven't done before. I appreciate it's possibly a little unfamiliar, but all it is, is the same thing we've done before. So the psi of X prime sort of plays the same role as the alphas in the other sum. The X prime plays the same role as the A prime here. Yeah. The psi plays the same role as the psi. The sum is replaced by the integral. Sum over A prime is replaced by integral dX prime, and otherwise it's exactly the same. Now let's calculate the inner product. One quick question. It may be nitpicking or something, but on the left-hand side where you have the Dirac formula, lower right-hand corner — yeah — you've got a vertical line between the x's. Over here in the middle formula, there's no vertical line. Thank you. Okay. Great. Because it could have been a line or a problem. Got you. Got you. Got you. Right? Nope. Absolutely. How about the inner product of two vectors? Well, the inner product of two vectors is some inner product, but one way of calculating the inner product — let's go back to ordinary quantum mechanics, ordinary meaning for the discrete systems. We don't need this right now. How do we calculate the inner product? Let's take psi and phi. Inner product of psi with phi. We first write psi and phi as sums over the basis vectors. So this is equal to a sum over two sets of basis vectors, a prime and a. Psi star of a prime times a prime — that's the bra vector. We complex conjugate, of course, when we deal with bra vectors. That's the psi half of it. And then the phi half of it is the sum over a of phi of a times a. Phi of a being the coefficients of the phi vector, psi star being the coefficients of the psi bra vector. All right. Now, phi of a is just a number. And the inner product of a prime with a is just the Kronecker delta. So all of this adds up to the sum of psi star of a prime, phi of a, and then a Kronecker delta of a and a prime, which is exactly the instruction to set a equal to a prime. Now, I think you've seen that before. This is simply the rule about row vectors. If psi star of a is represented by a row vector, psi star of a1, blah, blah, blah, blah, blah, and phi is represented by a column vector, then this is just the rule to take this times this, plus this times this, plus this times this, plus this times that. So this is the common garden-variety quantum mechanical inner product. We don't need to go through the same exercise here. We can immediately write down what the result is going to be just by analogy. It's almost not even an analogy. It's just replacing: wherever we see a's put x's, and wherever we see sum put integral. The inner product of two state vectors for a particle on a line is integral dx psi star of x phi of x.
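Here is a small sketch — my own, with two arbitrary example functions — of how the discrete row-vector-times-column-vector rule turns into the integral when the line is chopped into a fine grid:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

# Two arbitrary wave functions, sampled on the grid.
psi = np.exp(-(x - 1.0)**2) * np.exp(1j * 0.7 * x)
phi = np.exp(-(x + 0.5)**2 / 3.0)

# Conjugated row vector times column vector, weighted by dx: this is just the
# Riemann sum for  integral dx psi*(x) phi(x).
inner = np.sum(np.conj(psi) * phi) * dx
print(inner)
```

Refining the grid barely changes the answer, which is the sense in which the sum goes over into the integral.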
It satisfies, incidentally, one of the basic rules of inner products. What happens to an inner product if you want to change the two vectors? What's the relation between the inner product psi phi and the inner product phi psi? Complex conjugation. Now what would happen in here if you want to change psi with phi? You'd wind up writing phi star here and psi here. That again would be the inner product, the complex conjugate. And for exactly the same reasons. All right, so this is equal to this for both discrete systems and for these continuous systems. Inner products, the inner product of a vector with itself is the norm of the vector. And just as in previous examples, you normalize the vector to one to express the fact that total probability adds up to one. So not much difference. Just a difference of notation. I think what gets to be confusing about it is you know both of these things. You know about these simple row vectors and column vectors that you know about functions, but you're not used to thinking of functions as vectors. Or maybe you are if you are good. But if you're not used to using final thinking of functions as vectors, f of x's, or function psi of x's, then you'll have to get used to it. But functions are vectors in a vector space. You can add them, you can multiply them by complex constants, and you can add them, you can multiply them by numbers. And so they satisfy the axioms of a vector space. Space of functions is a vector space. Now sometimes you may want, yeah. In more classical mathematics, would you put as the limits of integration minus and plus infinity? You would. Which incidentally says something. It says that we shouldn't be thinking about functions which grow at infinity, for example. We should be only thinking about functions which are square integrable. Now we're going to have to modify that. Square integrable means you square them, this is called squaring them, that the integral exists. Functions which grow at infinity won't do that. Now those are crazy because they would say that all of the, if you had functions which grew at infinity, where would the probability be? The probability would be all located off at infinity, and that's not what we want. We want to be talking about probability distributions which have a total finite probability under them. So that means we should take perhaps with a grain of salt the idea that we can write down any function. But for the moment we're going to be a little bit loose about that. Okay. I think I'll slow down here for a few minutes and would you digest that and ask for questions. Yeah. Can I go back to the end of this? No, no, no, no, no. Yeah, go ahead. No, I didn't want, I meant about this, but okay. Maybe this is quick. It seems like just about everything must be entangled with something almost all the time. Oh yeah. Yeah. You're entangled with all sorts of things in the environment. Now we haven't talked about the importance of a complex environment. It's not so important in the very foundations of the subject, but it is important to why we see the world the way we are, that we're constantly being bombarded and getting entangled with things. As a theory of distribution has been useful for physicists, or is it just... Oh boy. Certainly. Okay, so it's not just some, it didn't just clean up the direct delta function, I appreciate it. Oh, you mean, has it, well... I see what you mean. You mean has it done anything else besides clean up the... I don't know. I don't know. I don't know. 
I think it's made rigorous a set of concepts that physicists use all the time. Has it really changed? No, no. The theory of distributions... Yeah, I'll tell you what it did for physicists. It made them have to argue less with mathematicians. It was one of the many examples where mathematicians got dragged screaming and kicking by physicists. Of course, the opposite also happens, but it was an example where mathematicians at first looked at this thing and said, it's crazy, that doesn't make any sense. It's non-rigorous, and in time they came to peace with it through defining it in their own way. But no, I don't think the mathematical structure of it has influenced physics as much as you might have expected. Unfortunately, because there are big problems of confusion related to it in quantum field theory, and it hasn't resolved those problems. Okay. Okay, next place we want to go is observables. We want to talk about observables and give some examples of observables. Observables are linear operators. What are linear operators acting on a space of functions? They're operations on a function that gives another function. If you have a linear operator, let's call it M. For some reason, I've saved L to stand for observables. I've used previously M for a general linear operator, so let's continue to do so. Whatever M is, it's a machine, and thought of as an abstract operator in a space, it takes a state vector into another state vector, a side prime, let's say, or a vector in the vector space. But it also can be thought of as a thing which acts on wave functions. Every state has a wave function that describes it. And the machine, oh, that's why we called it M. Yes, I remember now. We called it M for machine. The machine can also be thought of as acting on a wave function to give another wave function. So let's think of it that way. Let's think of it in concrete terms as an action that we do on a wave function to produce another wave function. Now, it's linear. Linear means that when it acts on the sum of two functions, it gives back the sum of the results. It means that if it acts on a function which is a numerical product, a numerical multiple of some function, it gives back the same numerical multiple. All the ideas of what a linear operator are continue to be exactly the same. Let's talk about some linear op- oh, okay. That's the notion of a linear operator. Now, what about a Hermitian linear operator? I'll tell you the definition of Hermitian. We've talked about it before, but I'm going to give it to you in a condensed form. The condensed form is also true for all of the vector spaces we've studied. And you may recognize it if not go back and prove that it follows from the previous definitions. A Hermitian operator, Hermitian for operators is roughly speaking the analog of real as opposed to complex. Okay, here's what's true about Hermitian operators. If you have a Hermitian operator which I will now call L, then it has the property that if you take, if you act on a wave function psi, or a state vector psi, and take its inner product of phi, that that is equal to psi L phi, but not quite, what do I have to do with it? Complex conjugate it. Now, if L were not, let's go back a step. Supposing L was not Hermitian, would this be right? Not quite. What would we have to do? We'd have to put a dagger here. In other words, we'd have to roughly speaking complex conjugate L. Okay. If L is Hermitian, then the rule for that is that it's matrix element, it's when sandwiched. 
When sandwiched in opposite order, you don't have to complex conjugate L because it's already the analog of real. This is the condition that an operator is Hermitian. That's important because Hermitian operators are the class of observables. So let's work out one or two Hermitian operators while we're at it. I guess we may need this, so let's work out a couple. First of all, one of the simplest things that I can do to a wave function — there are many simple things I can do to it — but a very simple thing is to multiply it by x. An even simpler thing would be to multiply it by a simple number. But let's take something slightly more complicated and multiply it by x. Multiplying by x gives back a function. This is at minimum an operation on a function. Is it linear? Sure it is. If x multiplies psi of x, let's say, plus phi of x, it gives x psi of x plus x phi of x. In other words, when it acts on a sum, it gives the sum of the results. If it acts on a numerical multiple of a psi, it gives back the numerical multiple. So it's a linear operator. Question: is it Hermitian? Is it a Hermitian operator? Okay, so let's check. Here's what we have to take. We have to take psi, inner product, with x times phi. That's this one. Oh, sorry. I think I want to do it the other way, phi x psi, the top, phi x psi. And ask whether this is true or not. We'll check in a minute. No, x is not a number. x is the function x. We've multiplied one function by another function. That takes functions to functions. x is not a number. It's a variable. Any given value of it is a number. But it's a machine that takes the function psi of x and replaces it by x psi of x. All right, so let's calculate. The wave function of the ket vector is x psi of x. The wave function of the bra vector is phi star of x. The inner product is, let's put some brackets around it, bracket. This is a ket vector inside the red bracket, and this is a bra vector. And the rule for taking the inner product — whoops, I think I interchanged star and not star — but the rule for taking the inner product is to integrate. So let's get rid of a bracket here. And just integrate dx. Oops, sorry, dx. All right, that's in this order. What about this order? This order over here, the inner product without the star, without the star, the thing we're going to compare it with, is the same thing except with psi star x phi of x. Well, look at it. Is this integral the complex conjugate of this? Yes, it is. The complex conjugate of phi star is phi. The complex conjugate of psi is psi star. And what about x? x is its own complex conjugate. So yes, indeed. x is an operator which satisfies the basic rule of Hermitian operators, that if you interchange the psi and the phi, it complex conjugates. All right? If, yeah. What observable does x correspond to? Position. It is the position. Is the delta function an eigenvector of position? Well, maybe we'll come to that next time. I think I won't. Yes, it is. But we'll come back to that. x is an observable. What does it correspond to? It corresponds to the position of a particle. We will prove that by showing that the eigenvectors x, that the vectors x are eigenvectors of that operator, but not right now. We'll do that next time. It's easy. It's trivial. Just a quick question. Am I correct in assuming that when you write x up there, it's implied that that is really f of x? The x is a function of x. x is an f of x. Yes, it's implied that that x is a function of x. It has a whole bunch of values.
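A quick numerical check of the Hermitian condition for the position operator — again a sketch of my own on a discretized line, with arbitrary test functions; multiplication by x becomes a real diagonal matrix there, so it had better pass:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

psi = np.exp(-(x - 1.0)**2) * np.exp(1j * 0.4 * x)   # arbitrary test functions
phi = np.exp(-x**2 / 2.0) * np.exp(-1j * 0.9 * x)

def inner(a, b):
    """Discretized version of  integral dx a*(x) b(x)."""
    return np.sum(np.conj(a) * b) * dx

lhs = inner(phi, x * psi)            # <phi | X psi>
rhs = np.conj(inner(psi, x * phi))   # <psi | X phi>*
print(np.allclose(lhs, rhs))         # True: X satisfies the Hermitian rule
```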
It has a whole bunch of values? That's right. It itself is a function which is multiplying another function. All right? So it's an operation on a function which gives another function. Yeah. That's exactly right. Let's take another operation or operator, differentiation. If I differentiate a function, I get a function. That's the first test of whether it's an operator. If I differentiate the sum of two functions, I get the sum of derivatives. If I differentiate a numerical multiple of a function, I get the numerical multiple of the derivative. It's a linear function. Is it a Hermitian operator? So here we have to get ourselves a fresh blackboard and check once and for all whether it's, well, check whether it's Hermitian. Okay. What happens? Let's call a derivative operation. Let's call it capital D. D on psi gives us some psi prime. What about wave functions? D when it acts on psi of x by definition now. This is the definition of how D acts. It simply gives d psi by dx. I may sometimes lapse into calling this partial derivative with respect to x. That's fine. Doesn't matter. It's just a function of x. So it just differentiates. It creates a new function. It's a linear operator. Is it Hermitian? Okay. So let's write down the two sides of the Hermitian condition here and see if they're the same. That's all there is to it. The left side is integral phi star of x times the operation D on psi, which is just the psi by dx. What's the other way of doing it? Or the right-hand side? The other way of doing it is integral psi star. Not doing the same thing, doing something different. Psi star of x, the phi by dx. And the question is whether this is the complex conjugate of this. So let's write equals with a question mark. We don't know. Complex conjugate. So let's do this integral over here. First of all, the thing in the bracket here is just equal to integral psi of x d phi star by dx. Just complex conjugate. The complex conjugate of a product is just a product of complex conjugates. So the thing in the bracket is psi of x d phi star by dx. And we're asking whether this is equal to this. They look similar. This has a psi. This has a psi. This has a phi star. This has a phi star. But unfortunately, this one has psi differentiated. And this one has psi, you know what I mean, psi and phi. How do you go from this to this? Integration by parts. And integrate like this, assuming, what are we assuming when we say we can integrate by parts? We're assuming that far away the functions go to zero. That the endpoint contributions from integration by parts don't give us anything. Well, of course, the functions must go to zero far away because the integral of the total thing must be one. So we can integrate by parts. Everybody know what integrating by parts is? If not, find out fast. It's simply a rule. I'll tell you what the rule is. You can shift the derivative from one, if you have a product, dx. There's a dx here. If you have a product, psi of x times the derivative of phi star of x, then you can simply shift the derivative from one function to the other, but it's at a cost. A cost? A minus sign, right? Good. So this is equal to minus the integral phi star of x d psi by dx. Whoops. We're in bad shape. This is integral phi star d psi dx. This is minus the integral of phi star d psi dx. They are not equal. In fact, worse than that, they're absolutely the opposite of each other. Opposite in the sense of the negative of each other. Can we fix it? 
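Before the fix that comes next, here is how the trouble looks on a discretized line — a sketch assuming, as in the integration by parts, that the wave functions vanish at the ends of the grid. The natural matrix for d by dx is antisymmetric, so it picks up a minus sign under the dagger; the second check below anticipates the repair that follows:

```python
import numpy as np

n = 401
x = np.linspace(-10.0, 10.0, n)
dx = x[1] - x[0]

# Central-difference matrix for d/dx: (psi[j+1] - psi[j-1]) / (2 dx).
D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2.0 * dx)

print(np.allclose(D.conj().T, -D))                 # True: D dagger = -D, so D alone fails the test
print(np.allclose((-1j * D).conj().T, -1j * D))    # True: -i D is Hermitian
```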
Well, no, we can't fix this, but we can redefine an operator which is Hermitian. The redefined operator which is Hermitian is not the derivative. It is minus i times the derivative. The minus sign doesn't matter; we can put i or minus i d by dx. So let's go through it again and check whether now it's Hermitian. This is the new thing that we're going to call — and we're going to give it a different name. I'm not going to call it d, because it's not the derivative. It's minus i times the derivative. Anybody have a good suggestion for what I might call this? P, brilliant. P. P for? Momentum. Momentum. All right? And if I really wanted it to be momentum with units of momentum, what else would I put there? Where? Right or left? Right. Well, you can put it on the left, but then you'd better put it downstairs. All right, but we're not going to worry about h bar. We're going to use our usual convention of ignoring h bar for the moment. P is minus i d by dx. Let's see what happens. Over here, we have to replace this by an i over here. Minus i, thank you. Minus i. What about this one? If we have complex conjugated everything, we'd better change the sign, because the complex conjugate of minus i is plus i. And so, let's do it this way. Minus i, and we should get plus i here if it's Hermitian. Why? Because in the process of taking the complex conjugate, every i gets converted to a minus i, and every minus i gets converted to a plus i. Is it true now? Yeah. Why? Because this is equal to minus this. That's what we proved over here. Okay? So if we stick an i here, because i changes sign under the complex conjugate, this little operation here tells us that minus i d by dx is a Hermitian operator. It's a little odd, isn't it, that minus i d by dx should somehow be a real thing, should represent a real thing, and that d by dx should not. But let's check something out. Let's check something out right now. We're going to finish up in a moment. We haven't talked about expectation values yet tonight, but I think you can guess how to think about them. Expectation values in the state psi of an observable L are just the sandwich composed of psi on both sides and L in the middle. So let's take the expectation value of momentum. It's integral psi star of x times p acting on psi of x, and according to the rule here, p is minus i d by dx. Is this real or imaginary? You're wrong. It's real. So let's prove that it's real. I hope it's real. God, I hope it's real. All right. So let's integrate by parts. All right. This is equal — first of all, let's take out the minus i. Minus i integral psi star of x d psi by dx. Now let's integrate that by parts. This is equal to minus i, and then we can switch the derivative. We can switch the derivative to the starred variable: integral d psi star by dx times psi of x. But then in doing the integration by parts, we have to change the sign. So here we have a little theorem. The integral of this thing here with a minus i is the integral of this thing here with a plus i. What's the complex conjugate of this whole thing? To complex conjugate it, you replace psi star by psi — well, psi is over here — you replace d psi by dx by d psi star by dx, and you replace minus i by plus i. This over here is the complex conjugate of this. Can everybody see that, or should I explain it further? Let's put a minus i out here and write that this is equal to d psi by dx. Minus i d psi by dx. These integrals are clearly complex conjugates of each other.
D psi by dx is the complex conjugate of this. That one's the complex conjugate of this, and i and minus i are complex conjugates of each other. So, given that this is the complex conjugate of this and they're equal to each other, what do you say about a thing which is equal to its own complex conjugate? It's real. So we prove a rather startling fact: that the integral of psi star of x times minus i d psi by dx is real. No, yes, I did. And again, it's because when you integrate by parts, you pick up an extra minus sign. So instead of this thing being minus its complex conjugate, it's plus its complex conjugate. Okay, so what did we learn? We learned that this is a Hermitian operator. It's a Hermitian operator and therefore an observable in quantum mechanics. The next time we'll explore a little more. We'll discuss what this has to do with Fourier transforms. We'll discuss questions about what the probabilities for finding the particle at different locations are. But then we'll also talk about what the probabilities for having different momenta are. And then we'll move to the Schrodinger equation. We'll move to the evolution of the wave function. This, after all, now is really in the classic sense of the word a wave function. And we'll think about how it evolves with time. Oh, one other thing we could do tonight. I think we're probably at the point of diminishing returns, or you're exhausted. We could do one more thing. Let me do one more thing for you. Let's calculate an interesting quantity — and why it's an interesting quantity will come through in a moment. So let's take a quantity which seems very dull, but which is interesting, and calculate it. It's the commutator of x with p. Take any wave function, whatever it is, psi of x — anyone. And we're going to do two things with it. We're first going to multiply it by x, and then we're going to operate with minus i d by dx. This can be called px operating on psi. This is p, this is x. In the order first x, then p. Then we're going to calculate it the other way. We're going to write xp on psi. What is that? On the other half of the blackboard over here, we're going to write x times minus i d by dx psi of x. Let's see what we get. In fact, what we're going to do is we're going to compute the commutator, which means we're going to calculate this one with a plus sign and this one with a minus sign. Never mind the minus sign for now; before we subtract them, let's just calculate them. All right. This is just minus i. And now, the derivative of a product. The derivative of a product. This derivative here now has to act on the whole result of x acting on psi of x. x acting on psi of x gives a new wave function, and then p acts on it. So what do you get when you hit the product? You get two terms. The first term is the derivative of x times psi. The derivative of x is just 1. So the first term is just psi of x. And the second term is plus x d psi by dx. Everybody agree with that? The derivative hit x and left psi, and then the derivative hit psi and left x. That's the side over here. Now what about this side over here? Here, the d by dx doesn't act on x; it just acts this way. It just acts on psi of x and gives us minus i d by dx psi of x, times x — sorry, minus i x d psi by dx. Now let's subtract them. Let's take this one with minus and this one with plus. This corresponds to taking the commutator of x with p acting on psi of x.
The commutator of x with p is another operator and it acts on psi of x. What do we get when we subtract? Let's see, this one now has to come with a plus sign since we're subtracting. Since we're subtracting, we have to change sign here. And this term cancels. Yeah. This term cancels, goes away, and all we're left with is psi of x. That's all we're left with. This term cancels and we're just left with what is it? i psi of x. i psi of x. All right, so here's what we found. For any psi of x whatever, for any psi of x, the commutator of x with p acting on psi of x just gives, is it plus i? I think it's plus i psi of x. If an operator acting on any wave function gives another operator, if the operator xp acting on any wave function is the same as i acting on that same wave function, for any wave function whatever, it follows that this operator is equal to this operator. Another way of saying it is if xp minus i, put it all in a big bracket, when it hits any wave function at all, gives zero, then it means that this is zero. There's only one operator which gives zero when it acts on every state, on every vector. There's no vector which escapes when this thing hits it. The implication is that this thing is zero. There's only one operator which annihilates, which kills every vector and it's zero. xp minus, well, commutator xp is equal to i. If I put back the h bars, there would be an h bar here, i h bar. We've seen things like this before. You may have to go back to your classical mechanics. We've seen things before like this. We've seen Poisson brackets which had similar relationships. The Poisson bracket of x with p is equal to one. Earlier in this course, we discussed the potential relationship. We had reason to discuss why commutators. It had to do with Schrodinger's equation. Go back, but we discussed the relationship that there might be a relationship between commutators and Poisson brackets, and in particular, the relation involved this i h bar. This is actually where Dirac began his discussion of quantum mechanics. He said there must be something analogous to a Poisson bracket in quantum mechanics. The only thing he could think of which had the right algebraic structure, was anti-symmetric, it had a certain mathematical structure, was the commutator. He postulated that commutators and Poisson brackets were related with each other to an i h bar, and then proceeded to go exactly backwards from the way that we went. He said if this is the Poisson bracket, then this must be the commutator, and so there must be an operator p, whose commutator with x is i, and he went back through the operation and discovered that p was minus i d by dx. So he went in the opposite direction. We discovered p by just playing a game. He said how about this operated d by dx? What does it do? What we found is that it was not her mission, we had to multiply it by i to make it her mission. We explored it a little bit, found out some interesting properties of it, and the most interesting property is that its commutator is equal to i h bar, which suggests that it's connected with the Poisson bracket, the classical Poisson bracket being one. Of course the upshot of it is, and of course we need to do a lot more to convince ourselves that the operator minus i d by dx really behaves like quantum mechanical momentum. We can't just say, well, it looks like a Poisson bracket, it must be the quantum mechanical momentum. 
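Here is a symbolic check of the commutator calculation just carried out — a sketch with h bar set to one, letting a computer algebra system apply the product rule to a completely generic psi of x:

```python
import sympy as sp

x = sp.symbols('x', real=True)
psi = sp.Function('psi')(x)            # a generic wave function

P = lambda f: -sp.I * sp.diff(f, x)    # momentum operator, with h-bar = 1

xp_psi = x * P(psi)                    # X P acting on psi
px_psi = P(x * psi)                    # P X acting on psi; the product rule acts here

print(sp.simplify(xp_psi - px_psi))    # I*psi(x), i.e. [X, P] psi = i psi
```

Restoring the units would turn the i into i h bar, which is the form quoted in the lecture.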
We have to show that wave packets move with velocities that are governed by this, that it's conserved, that it has all the properties of momentum — but we will. Will there be some connection between this abstract operator, minus i times the derivative, and — what? — and the velocity of the mass? Well, we would like to show that in some way it's the mass times the velocity. So what we'll eventually show is that if you have a wave packet — in other words, a psi of x which is a concentrated distribution like this — that that wave packet moves along the x-axis with a velocity which is given by the expectation value of this momentum operator divided by the mass. That's what we would like to show: that the velocity with which this wave packet propagates along the axis — that velocity, you can call it the group velocity, it's usually called the group velocity, the velocity that the group of waves moves along — times the mass of the particle is equal to the expectation value of the momentum. Once we do that, and show that it's conserved, that'll take us a long way to calling it momentum. At some point, we have to say quantum mechanics is more fundamental than classical mechanics, and the definition of momentum should be through quantum mechanics, and then we should show it behaves in a certain way when particles are heavy and so forth. Okay. For more, please visit us at stanford.edu.
(February 27, 2012) Leonard Susskind spends some time in the beginning of the lecture discussing some of the basic qualities of systems to lay a foundation for the rest of the lecture and the class.
10.5446/15015 (DOI)
Stanford University. Okay, let's go on. In fact, let's forget for the moment the spin system. We'll come back to it for sure. Probably tonight, in fact, definitely tonight. But let's remind ourselves of, I think I wrote down four principles of quantum mechanics last time. And I want to write them down again and then move on to a fifth principle. In fact, they're not all independent. It's easier for me to explain them one at a time than to try to derive from a minimal, absolute, minimal number of them. And I don't think there's any particular advantage in doing that. So we'll just write down all the principles that I wrote down. The first one is that observables. Observables are the things that you measure. They could be called measurables. And if I had, if it was up to me, I would have called them measurables. But they were called observables. Observables are represented. And I'll just use the equal sign to indicate that. Observables are represented by Hermitian operators. I'll use the symbol L for the moment for Hermitian operator. Not all operators, not all linear operators are Hermitian. And it will not always be the case that the symbol L will stand for a Hermitian operator. But right now, L stands for a Hermitian operator, any Hermitian operator. Every observable is identified with a Hermitian operator. Now, the question of whether every Hermitian operator is a thing which can be measured is a generally hard question. You have to, you know, what can and can't be measured depends on what kind of materials you have available, all kinds of things. Generally, the abstract rule that, that, or the folk rule, let's call it, is that any Hermitian operator is identified with an observable and someday somebody will figure out how to measure it. Okay. But the other way that any observable is identified with a Hermitian operator, that's for sure. The eigenvalues of a Hermitian operator, let's call them lambdas. Lambda is the eigenvalue of L. The eigenvalues represent the possible numerical values that L can exhibit when it's measured. Okay, so eigenvalues, eigenvalues of L which equal lambda, no. They represent the possible observed or the possible outcome of experiment, possible outcome of an experiment to measure L of L measurement. Okay, I think we had a third one along the line here. Yeah. Physically distinguishable states. Now let's, let's just see what physically distinguishable states mean. Certainly up and down are physically distinguishable. And the meaning of saying two states are physically distinguishable is that there exists a measurement of some kind that you can do that can tell you the difference between an upstate and a downstate. In other words, somebody hands you a spin from out of their pocket, and they say, I created this spin. I prepared it in a state which is either up or down. Can you do an experiment to unambiguously tell me which one it is? You tilt your apparatus until it's pointing along the z-axis and you make the measurement. If the state was up, you will get plus one. If the state is down, you will get minus one. No ambiguity. The same is true for left and right. Different measurement, different thing that you would measure. You would measure the x component and you would tilt your apparatus along the x-axis, but same, same deal. On the other hand, supposing somebody, that same person came along and said, here, I'm going to give you a spin, here it is, and that spin I either prepared up or I prepared it right. I am not going to tell you which way I prepared it. 
I'm not going to tell you which way I tilted my apparatus when I did the preparation. I'm just going to tell you it's either left — sorry, it's either up or right. Can you do an experiment which will uniquely tell you the difference? The answer is no. For example, supposing you decide to measure the spin along the z-axis. Well, if it's up, you'll get plus one. If it's right, with 50% probability you'll get plus one. You can't be sure whether you are up or right. What do you mean — if you were at 45 degrees? Still you would have a probability for — yeah, yeah. It would be unambiguous? Yeah — it would not be unambiguous. Right, okay. So the next postulate. Excuse me. In your last statement, that's assuming it was prepared along the z-axis. If it had been prepared along the x-axis, then if you measure the x-axis first, you get the minus. If you measure the x-axis first, you'll find something out, but you could have found out exactly the same answer if it was along the z-axis. Yeah, in either case, there is some probability of getting the same answer, whether it was up or right. There's no experiment you can do which unambiguously will determine which it was. Anyhow, that leads to the notion — I'm not going to try to give a more precise definition of it — that leads to the notion of physically distinguishable states, by which is meant that there exists an experiment or a set of experiments that can unambiguously determine which of the two states you're talking about. The next postulate is that physically distinct or physically distinguishable states are represented by orthogonal vectors. So physically distinct, distinguishable — same thing, a little easier to write distinct. Physically distinct states imply orthogonality. All right, I said that observables are Hermitian operators. I could have just said linear operators. Let's leave it at linear operators for the moment, but let's add a postulate that the result of every experiment, every simple primitive experiment, is a real number. If it's a complex number, it really means that two independent things were measured. The actual results that come out of your apparatus, that come out of the needles on your apparatus, they're real numbers. So I'll add that as a postulate: the results of experiments are always real numbers, and therefore the eigenvalues of observables are real numbers. Now that does not prove that the operator is Hermitian — not enough, not enough. The other added thing is that the various eigenvectors with different eigenvalues, physically distinguishable states, are orthogonal. That's the third postulate, and that tells you that observables are Hermitian operators. It's Hermitian operators whose eigenvalues are always real and whose eigenvectors are orthogonal. That's necessary and sufficient. So this one — as I said, they're not all completely independent of each other. One, two, three, and let's write number four now. I forgot what four is. Oh. What was four? Yeah. The probability principle. The probability principle is also called Born's Rule — that's Max Born. Born's Rule — let's just call it the Born Rule — but it's a rule for probabilities. So far, none of this has much content. The added content — well, no, it has content, but the real bite here is for the prediction of probabilities for various experiments. And the Born Rule is the rule for how to calculate probabilities.
The Born Rule says if your system has been prepared in state A, all right, system in A, and you measure L, the observable L, now I'm using interchangeably the physical idea of the observable with the operator which represents it. I'm not going to make a special language where the operator will always be called the operator and the observable L. They come in pairs. All right, so if the system has been prepared in state A, and you measure L, you measure L, the outcome is going to be one of the eigenvalues. So the only question you can ask is what is the probability that you get the answer lambda? Okay, one of the specific eigenvalues. The answer is the probability that you get result lambda, that's the probability that you get out of the various eigenvalues that you get lambda, is equal to the square of the inner product of the state vector that the system was in with the eigenvector corresponding to normalized eigenvector, all vectors are normalized now, are the inner product of A with lambda squared. A with lambda, the inner product, sometimes incidentally the inner product is called the overlap, it's a measure, if we said that orthogonality is complete distinguishability, then a lack of orthogonality represents to some extent the inability to make the clear distinction between two states. Yeah, all right, so this is, yeah, question. I'm surprised to see lambda represented as a vector because it's a value. No, no, all right, the notation is every eigenvalue is associated with an eigenvector. I'll just write E vector. And the notation is that the vector labeled by lambda is the eigenvector associated with the eigenvalue lambda. That's a good point. The problem with trying to be a purist about notations is that notations get very complicated. They get so complicated that they're difficult to read, there are too many indices, there's too many different letters you have to use. So, yeah, sometimes we use a slightly excuse the word bastardized notation where we conflate a symbol for a vector with a symbol, in this case for an eigenvalue. Okay, yes, that's the probability principle or Born's rule. And let's just look at it for a minute. You might say, why the square of the absolute value? Well, in general, these overlaps are neither positive, not even real in general. Overlaps between two vectors are not necessarily real. In fact, in general they're not. We're talking about a complex vector space, and if the components of the vectors are complex, in general the inner product will not even be real, let alone positive. On the other hand, the square of the absolute value of the square, the square of the absolute value of the inner product, that is real. It's not only real, it's also positive. So this has a chance to be a candidate for probability, whereas the inner product itself doesn't. The inner product itself is called a probability amplitude. So a probability amplitude is a thing that you square in the sense of absolute value to compute a probability. Needless to say, the justification for these principles in the end is experiment. On the other hand, you could ask, how much can I bend them and still make physical sense out of the predictions? The answer is nobody has ever found a way to change the rules of quantum mechanics and still preserve a reasonably logical structure. So you'll eventually get familiar with these principles. Which of these principles took you from several possible outcomes to one actual outcome? 
Okay, if A is an eigenvector of L, if it is, if the starting state of the system happens to be an eigenvector of L, then it will only have an inner product with one of the eigenvectors. Right, so that's, again, these are good questions. If you say L ket lambda equals lambda ket lambda, it kind of shows you. Excuse me, what if lambda is an eigenvalue for multiple eigenvectors all the time? Yeah, still, the rule is the same. But what you're writing, right. Okay, I'll give you two answers. The first answer is don't worry about the case where the eigenvalues may be the same because it's very special. But that's not good enough. Let me say what the right answer is. The right answer is if you want the probability for a given value of lambda, and there's more than one eigenvector with the same value of lambda, then you add them. You simply add the sums of the squares of the probability addition. So the questions tonight are really good. Yeah. And what about the case where A has overlapped with several different numbers? Yeah, yeah, then you take the sums of the squares of them. And take the sums of the squares. If there are several different eigenvectors all with the same eigenvalue, first of all, you make them perpendicular to each other. You can always make them perpendicular to each other. Different eigenvalues. You know, even if they were the same eigenvalue, there's the theorem. Okay, let's go back to the theorem. The theorem says that the eigenvectors with different eigenvalues are orthogonal. It doesn't tell you anything about the eigenvectors for the same eigenvalue. So supposing you have two eigenvectors with the same eigenvalue, then it's also true that any linear combination of those two eigenvectors is also an eigenvector with the same eigenvalue. For example, let's suppose lambda 1 and lambda 2. Now, these are not two different values of lambda. These are the same value of lambda, but two different eigenvectors. Maybe we should write it like this. Lambda 1 and lambda 2. Two different eigenvectors both exhibiting the same eigenvalue. Then you can take any, what that means is that L on lambda 1 equals lambda times lambda 1. L on lambda 2 equals lambda, lambda 2. Multiply this by any complex number alpha. This one by any complex number beta. And add the two equations. What you find is that L on the linear combination, the linear sum alpha lambda 1 plus beta lambda 2 is equal on this side now you have lambda times the same thing as in the bracket here. In other words, if you have two eigenvectors with the same eigenvalue, you can add them with arbitrary coefficients and they remain eigenvectors. Now, given two distinct vectors, and all the possible linear combinations of them, you can always find perpendicular combinations. You got this one, you got this one. Now we can always add them or subtract them with coefficients to make them perpendicular. So the rest of the theorem is that if the two eigenvalues are the same, you have the freedom to choose two eigenvectors which are perpendicular, which are orthogonal. And the end of the day, to summarize the theorem, is that given a Hermitian operator, its eigenvectors can be chosen so that they form a basis, so that they're orthonormal perpendicular to each other and normalized. Okay, so where were we? That was an interruption, but I don't remember where I was. That's the problem with interruptions. We just finished born rule. You just talked about born rule. Yes, I was talking about born rule. Right, oh good. 
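Before moving on, here is a minimal numerical illustration of the Born rule — my own example, using the spin states discussed earlier, with 'up' and 'down' as the sigma-z eigenvectors and 'right' taken to be their equal superposition:

```python
import numpy as np

up    = np.array([1.0, 0.0])               # sigma_z eigenvector, eigenvalue +1
down  = np.array([0.0, 1.0])               # sigma_z eigenvector, eigenvalue -1
right = (up + down) / np.sqrt(2.0)         # the state prepared "right"

def prob(eigvec, state):
    """Born rule: probability = |<eigenvector | state>|^2."""
    return abs(np.vdot(eigvec, state))**2

print(prob(up, right), prob(down, right))  # 0.5 0.5: measuring sigma_z on |right>
print(prob(up, up), prob(down, up))        # 1.0 0.0: |up> gives +1 with certainty
```

If an eigenvalue were shared by several orthogonal eigenvectors, you would add such squares over all of them, exactly as just described.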
Okay, so now if you happen to have several eigenvectors, let's say lambda 1 and lambda 2, both with the same eigenvalue, then you construct the overlap of the state vector with lambda 1. You construct the overlap of the state vector with lambda 2. You square this one, you square this one, and you add them. In other words, you simply add the probabilities for the two possible ways that you could get the same lambda. Just a sum of probabilities. Thank you, Sanjay. It's already too late. I already got Oreos. But my wife would say, this is probably healthier than Oreos. Yeah, it is. What? Did you eat the Oreos? Excuse me. Okay. Yeah, go ahead. Loud and clear. Could we say also that the reason why those things are squared, regardless of whether they're complex or not, is so that the total probability adds to 1, right? Yes. Right. We know it's the sums of the squares of these things which add up to 1, for normalized states, for normalized vectors. That's right. I'm not sure which one implies which, but clearly they fit together very nicely. Good. Okay. Is it possible to get an unambiguous result from an unprepared experiment? I'm not sure what an unprepared experiment means, but I think what you're asking is: supposing somebody gives you a spin, and they don't tell you anything about how they prepared it. They just said, here it is. I give it to you. I tell you nothing. Then it is not possible to find an experiment which has an unambiguous answer. Right. Another good point. Okay. Now, now we need to come to a new idea and a new question. In classical mechanics, all of this, this whole story here, was summarized when I showed you a heads and a tails, and I said those are the two possible states of a simple system. We didn't talk about measuring it. It was all sort of obvious, and we didn't have to, because we know that measuring doesn't do very much to a system you measure — it doesn't change the system in any way. Here we come to a much more complicated story, but the story up till now is really the quantum mechanical analog of just specifying that you have a collection of states. The next question we asked was: how do states change with time? That we called, in the classical case, dynamical laws of motion, or dynamical laws of nature, or whatever — the rule for how a state changes. You remember there were some examples. The coin: a simple example would be, put the coin down and the rule is nothing happens. It just stays the same way. That's pretty trivial. Another possible law is that in each interval of time — we have a stroboscopic time that we can imagine — in each interval of time, it flips. Heads, tails, heads, tails, heads, tails. That's a perfectly good law. We talked about the same concept in the context of a die with six faces, or any number of faces, including a continuous infinity. Same basic idea. The laws of physics are the updating, from instant to instant, of a configuration. They tell you how the configuration changes. If you have good laws in classical physics that are deterministic, they tell you how the state changes, but there's also another rule, and the other rule we called reversibility. Reversibility was basically just the idea that states don't run into each other — that states which are distinguishable, namely different, heads and tails are distinguishable, two states which are different, will evolve and stay different. That was the idea of reversibility.
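As a toy illustration of the reversibility idea in the classical setting — not from the lecture's blackboard, just a sketch — compare a rule that flips the coin each tick with a rule that sends both faces to heads:

```python
# Two deterministic update rules on the two-state coin {H, T}.
flip  = {'H': 'T', 'T': 'H'}    # reversible: distinct states stay distinct
merge = {'H': 'H', 'T': 'H'}    # irreversible: the states run into each other

def evolve(rule, state, steps):
    for _ in range(steps):
        state = rule[state]
    return state

print(evolve(flip, 'H', 5), evolve(flip, 'T', 5))    # T H  -> still distinguishable
print(evolve(merge, 'H', 5), evolve(merge, 'T', 5))  # H H  -> history is lost
```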
What it comes down to is exactly that, that if you know the state at any instant, you not only will know what comes next, but you will also know what came before, because there's no chance that states run together, and two different states giving you the same outcome, then you wouldn't be able to tell where you came from. So reversibility is the idea we also called it information conservation, and I suspect I referred to it as the minus first law, minus first because it comes before everything else. Quantum mechanics also has a rule of both reversibility and information conservation. You can call it the minus first law, but it comes down to the same thing. States which are distinguishable, in the sense that I said that there exists an experiment that uniquely distinguishes them, stay distinguishable. So now we want to talk about how states evolve with time. We're going to come back and discuss this in the context of the spin. It's the simplest system of quantum mechanics, and it's a good idea to get it all down for that, but let's talk about it more generally. Okay, here's the postulate. The postulate is that if you have a system at some specific time, let's call it a time zero, and I'm going to start changing my notation for states. Instead of calling them A and B, I'm going to start using a notation which is more or less a standard notation in quantum mechanics. Call them psi. Psi is a Greek letter, and psi is a commonly used letter to represent the state of a system. Now the states of systems do change with time, so you have to think of these vectors as being functions of time, they evolve. So let's put that in here in the following way. Psi of t, this simply represents a state which changes with time. Which state could be any state? It's not the same after a certain time as it was to begin with, but we're following one specific state of a system that's been prepared and then allowed to evolve by whatever means things evolve. All right, the postulate is that psi of t can be obtained from psi of zero, this means time zero, by an operation on psi of zero, a unique definite operation which is governed by the laws, by the quantum mechanical laws for that system. So we have to do something to the vector, it doesn't stay the same. So let's represent what we do by the operation of some kind of operator. Now incidentally, we didn't have to start with time zero, we could have started with time something else, and evolved by amount t. So this t here really represents the time of evolution from the initial state to the final state. Not necessarily final, but the state at some time. The basic postulate, two basic postulates. The first is that u is a linear operator. That u is a linear operator, and the same u for any state, whatever the initial state is, you put that here and you hit it with the same u, the same operator. In other words, there is a, this is called the time development operator, it's a linear operator, and it evolves the state from one instant to another. You could try to ask, what would happen if you made a more general hypothesis? For example, you might have the hypothesis that u depends on the initial state, or you might have the hypothesis that u is an operation, but not a linear operator, some other kind of operation. You can try it, but you'll very shortly run into big troubles with all kinds of things. 
It's been tried — I guarantee you, it's been tried repeatedly. Every so often, some physicist will try to invent something, and then some other person will come along and say, look, that has troubles with locality, or it has troubles with causality, or it has troubles with the probability interpretation. And you're free to try. I've given up trying. I actually never tried. Right, okay, so u is a linear operator with all the rules for linear operators, but one other thing, and that's this idea of conservation of information, or conservation of distinguishability: that if you have two states, two different states of the same system, and you evolve them — let's call it phi of t, phi, capital phi — that's given by the same u of t, the same operator, times phi at time zero; then if phi to begin with is orthogonal to psi, in other words, if they're physically distinguishable and they're both in the same system, then they will remain orthogonal for all time. Is there anything to the fact that you used uppercase letters on the left and lower cases? The Greek letters are uppercase on the left and lower case on the right, is that just... I'm sloppy. No, right. So orthogonality breeds orthogonality. Physically distinct states evolve and distinguishability remains intact. It's the immediate analog of the minus first law of classical mechanics. So let's see if we can find out what it says. Oh, incidentally, it's very easy to prove from that — well, what does it say? It says that if psi and phi were initially orthogonal... I'm... All right. No. If they're initially orthogonal, if, then... Don't ask me to make capital letters. It's much quicker to make lower case ones, but I mean capital here. I mean uppercase. Then this is true and it remains true for all time. It's an easy exercise. I will leave it to you to prove that if orthogonality is preserved, then you can say a stronger thing — it's implied by this. You can say that if you take any two vectors, which may or may not be orthogonal, the inner product between them stays constant with time. There's a nice little trick. You take any two vectors, you expand them in basis vectors, and then you assume that the basis vectors remain orthogonal with time. Then you can prove that the inner product of any two vectors is time independent. So it follows from the orthogonality — the evolution of orthogonality — that the inner product between two states remains the same. The overlap between them remains the same. There's a sense in which the overlap between two states is, of course, a measure of their similarity, and what it says is the degree of similarity remains the same. Okay, let's see if we can figure out what that means for u. What it says — let's rewrite this by plugging in for phi of t and psi of t the product u times phi and u times psi. But before we do, we have to remember that we have to flip this equation. Oops. Chocolate pie. Is this chocolate? Yeah, it's chocolate. We have to flip it from a ket vector to a bra vector equation. Okay, so let's do so. This is psi of t equals psi of zero, not times u, but times what? U dagger, the Hermitian conjugate. Not the complex conjugate — it's related to the complex conjugate, but it's the Hermitian conjugate. All right, so now we plug this into this equation here, and what it says — let's write it out. A little bit of erasure. For arbitrary states — they don't have to be orthogonal; whatever they are, they have some inner product. So this is equal to psi of zero, u dagger of t, u of t, phi of zero.
All I've done is plug in for phi, u of t, for psi, u dagger of t. Now, the product u dagger times u is another operator. Because the product of two operators, we didn't talk about this, but the product of two operators is a very simple concept. You take a linear operator, and you apply it to a vector, and then you take the result, and you apply another operator to it. The process of repeatedly applying two operators gives you an operation which is called the product of the two operators. So that's what this is. You hit phi with u, and then you hit the result with u dagger. And what this says is that for any phi and psi, any phi and psi at all, that when you plug in here, u dagger and u, the product of u dagger and u, you get something which is exactly the same as had you not evolved the state. The inner product stays the same. Whoops, I think I made a mistake. One of these should be phi, right? So that's the principle of conservation of overlap, if you like. Well, here's a theorem. We don't need to. This is, again, very, very easily proved by going to a basis, very easily proved, that if the matrix elements is called the matrix elements or the product like this, if this equation is true for any pair of vectors, in other words, I'll write the theorem in general, if you have an operator, let's call it k for u dagger u, and for any pair of vectors, psi k phi is equal to psi phi for any pair of vectors, whatever, it follows that k is the unit operator. The unit operator is the operator which does nothing, just gives you back the same vector. So what follows then that u dagger times u must be the unit operator. That's the content of conservation of overlap. u dagger of t for any t times u of t is equal to the unit operator, meaning the operator which does nothing, just gives you back the same vector. So that's the question. Can you then say that one is the inverse of the other and matrix algebra? Yes, you can. Yes, you can. Indeed. Let's examine this a little bit now. In particular, let's examine it for very, very small time. For very small time, in other words, will you only evolve the system by a tiny little incremental time? Let's call the incremental time epsilon. Let's write that u of epsilon, now first of all, what is u of zero? No time evolution at all. Just one, just a unit operator. So for a very small epsilon, u must be itself close to the unit operator. Plus a small deviation, presumably of order epsilon, and that will be right. It will be of order epsilon, so there will be something of order epsilon times another operator here. I'm going to make use of freedom that I have, and I'm going to put a minus sign here. We're going to put an operator over here. This minus sign has no content yet, unless I know what the operator is over here. The minus sign just absorbs into the definition, and I'm also going to put an i here. Again, there's no content yet, because I haven't told you anything about h. h is some operator. Epsilon times h is just an operator, and i epsilon h is just an operator. So I have not really said anything other than for small epsilon, the operator u is close to the identity. How close? Of order epsilon. And then there may be additional things of order epsilon squared, epsilon cubed, but we're going to drop them and study this equation. We should give a name to this equation here. More specifically, we should give a name to operators which satisfy this rule. 
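A compact statement of what was just derived, together with the small-time expansion introduced at the end of the passage (H is, so far, just some unspecified operator; the minus sign and the factor of i are conventions absorbed into its definition):

\[ U^\dagger(t)\,U(t) \;=\; 1, \qquad
U(\epsilon) \;\approx\; 1 \;-\; i\,\epsilon\,H \quad \text{for small } \epsilon , \]

with corrections of order epsilon squared that are dropped.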
U dagger times u is equal to one, or as was said, that u dagger, the Hermitian conjugate of u, is the inverse of u. u is a unitary operator. So this is, again, this is a restatement of the principle of conservation of overlap. Let's now apply this equation, this condition, to u of epsilon and see what we find out about h. Okay, so let's bring it over to this blackboard. u of epsilon is equal to one minus i epsilon h. Now let's do it over here. U dagger is the Hermitian conjugate. The Hermitian conjugate of one is just one, but then we have the complex conjugate, plus i epsilon, and then we have to Hermitian conjugate h. Okay? All right, so here is u dagger and here is u. And now let's multiply them together and set the result equal to one, to order epsilon, to order epsilon. Okay, so what do we have? We have u dagger is one plus i epsilon times h dagger times one minus i epsilon h. That has to equal one. And now just expand it out to order epsilon one plus i epsilon h dagger minus h equals one. As I said, that's to the approximation that epsilon is so small that its square can be completely ignored compared to order epsilon. All right, so the ones cancel. And basically what we find is that h dagger minus h has to be zero. We cancelled out the ones. We get zero on the right-hand side. And so to order epsilon, h has to equal h dagger. This is the condition that h be Hermitian. Now I didn't call it h for Hermitian. I called it h for something else. I called it h for Hamiltonian. Okay, what it has to do with the Hamiltonian of classical mechanics will eventually become clear. The Hamiltonian for classical mechanics also enters into the equations for how systems evolve from one instant to the next. But of course that's not a good enough reason to call this the Hamiltonian. We want to see that it enters the equations in a way very, very similar to the way the Hamiltonian enters into classical mechanics. And it does. It is also the thing which... It's Hermitian, right? That means that it's an observable or it could be an observable. It's a Hermitian operator. It could be an observable and it is an observable. It's the Hamiltonian. It's the energy. Hamiltonian and energy are the same. This is what the energy of a system is. It's the Hamiltonian which generates the evolution of state vectors according to this rule here. Yeah? That could be any Hermitian operator at the moment. It could be any Hermitian operator at the moment. Now, what determines what Hermitian operator you put there? Well, it can't be... Yes, it could be any Hermitian operator associated with the system you're studying. If we're studying a system of one spin, we want to make it up out of operators that are associated with a spin. So, yes, it could be any operator. But it's either experiment or some... It's the same things which govern why you choose one Hamiltonian or Lagrangian in classical mechanics. Either experiment, some prejudice about the way things work, some symmetry principle, whatever you have that might give you a clue as to what H is. At the end of the day, you may have to resort to experiment to find out what whatever it is. Okay, so that's... Right, so now let's rewrite our equation. Let's rewrite our question. Yeah. Normally the expansion like that, normally then H is a constant, right? You don't really do that first order expansion. So it'd be... Well, H is an operator. I mean, it's not a function of time in that case, right? Okay. Right, now folks, that's a lot. 
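The order-epsilon calculation just described, written out in LaTeX (terms of order epsilon squared are dropped, as in the lecture):

\[ U^\dagger(\epsilon)\,U(\epsilon) \;=\; \bigl(1 + i\epsilon H^\dagger\bigr)\bigl(1 - i\epsilon H\bigr)
\;=\; 1 + i\epsilon\bigl(H^\dagger - H\bigr) + O(\epsilon^2) \;=\; 1
\quad\Longrightarrow\quad H^\dagger = H , \]

which is the condition that H be Hermitian.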
The situation with a possible explicit time dependence of H is the same as in classical physics. In classical physics, H can depend on time. For example, if you have a particle moving in a magnetic field and the strength of the magnets is being varied with time by changing the current through the electromagnet, then the Hamiltonian for the particle moving in a magnetic field is time dependent. Okay? Otherwise, if you have the parameters of the problem are not varying with time, then you say the Hamiltonian is not time dependent. Exactly the same. Exactly. You know that you do it especially like that. You've been using say H of zero. Yeah, yeah, no, that's right. That's right. In principle, H could be a function of time. Let's take the case where it's not, and what that corresponds to is the situation where the parameters of the problem are not time dependent. Where in classical physics, we call the time translation invariance. Remember in classical physics, time translation invariance, which means like every time is the same as every other time. Okay? You do the experiment, the same experiment at a later time will have the same output as the identical experiment at an earlier time. The statement of that principle, time translation invariance, is that H does not have explicit dependence on time. It's just an operator which doesn't depend on time. As I said, you can imagine situations where you're ramping up the current in an electromagnet or the universe is expanding or God knows what, there's some explicit time dependence, then H can be time dependent. Now do you remember what happens to energy conservation if the Hamiltonian is time dependent? Down the drain. Right. Same thing here. If the Hamiltonian is time dependent, and we haven't yet seen why there might be a conservation of energy, we'll come to that. But, um, all right. Okay, so let's come back then to this equation over here. What happens if the long thing, if it's all for the small epsilon thing? Sorry, what, sit again? This is for the small, infinitesimal small. Yes. Yeah, we'll come to that. Basically you just multiply them together. Right, we'll come to that. Okay. Or you can think of it another way. After you've updated by a small amount of time, you just do it again and again and again. All right. You just do it repeatedly, but we can use this to derive a differential equation for the way the state changes with time. Let's do that. Um, let's do that. Okay, so let's look at the state of the system at a time epsilon. And this time epsilon could be thought of as a time epsilon right after some arbitrary time, which I'm going to call zero, but it doesn't matter. Psi of epsilon is equal to 1 minus iH epsilon, minus iH epsilon times psi, let's say, at time zero. Or another way to say it is that psi of epsilon minus psi of zero, this is the small incremental change in the state vector. That's equal to minus i epsilon operation by H on psi. Now as I said, this zero here doesn't have to be zero. It could be any time as long as epsilon is epsilon units after the starting time. Okay, let's divide this equation by epsilon. You're allowed to do this because you can multiply vectors by numbers. So we're just multiplying both sides by 1 over epsilon, 1 over epsilon times this whole thing. Okay, what's this thing on the left-hand side? One over epsilon times the difference of two things, which are by which it's two slightly different times. That's just the time derivative of this vector. 
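The incremental update described at the end of this passage, in LaTeX (units in which h bar equals 1, as the lecturer notes later when Planck's constant is reinstated):

\[ |\psi(\epsilon)\rangle \;=\; \bigl(1 - i\,\epsilon\,H\bigr)\,|\psi(0)\rangle
\quad\Longrightarrow\quad
\frac{|\psi(\epsilon)\rangle - |\psi(0)\rangle}{\epsilon} \;=\; -\,i\,H\,|\psi(0)\rangle . \]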
We haven't talked very much about taking derivatives of vectors, but it's clear. You can do that. You can take the difference between two vectors. That's perfectly well-defined. They will be close to each other if epsilon is small. If you divide by the small epsilon, you'll get something which on the right-hand side is just the time derivative of psi. The time derivative of psi equals minus i H psi. At any given time, it doesn't have to be at time zero. At any given time, if you want to find out how the state of the system changes over an incremental time interval, this is your equation. Completely analogous to the way you update classical systems, but because knowing the state vector is not the same as knowing the values of the experimental outputs of experiments, you can know the state, and still there can be ambiguity or uncertainty in the value of an experiment. This tells you how states change, but it doesn't tell you how the results of experiment change. The results of experiments are still statistical, but with a continuously updated state vector. Okay, this equation has a name. The name of this equation is the generalized time-dependent Schrodinger equation. This is the general form. This is the Schrodinger equation. The specific versions, what Schrodinger wrote down, was a very specific version of this. We're going to come to that later in the course, but this is the general idea. The rate of, I don't need to say it again. This is what it is, but of course in order to use it, you have to know what H is. Question? Yeah. So all of this is dependent on the reversibility of the system? Yes. So I guess the conservation of the product. Right, so I just stepping back, how does that match with the fact that if you take an up vector and a down vector, and you measure them along the horizontal axis, they sort of both end up in sort of what I just told you in the intrestable states. That's a good question. You have to remember when you do that, you're intervening with an apparatus. In order to follow the system, under those circumstances, you have to include the apparatus as part of the system. You cannot interact with the system and not include the things which are interacting with it in the system. Classically, it's not such a bad scene, but quantum mechanically, if something interacts with the system strongly enough to affect it, well, strongly enough to measure it, you have to include it as part of the system. Now, this is the way a system evolves if it's isolated and completely not in contact with anything else. We will have to come back to ask how it evolves, what happens when you put it in contact with a measuring apparatus? We'll have to have a model for how measurements take place. This is the way the system behaves if nobody disturbs it during the course of the evolution from time zero to time t. Okay, we'll have to come back to that. It's an important question. Did you summarize the assumptions that the operator brings? Oh, boy. Yeah, really just two assumptions. Well, maybe three. First is that time evolution takes place by a linear operator called U, and that operator is independent of the state that it's acting on. Okay? With me? Okay. That's the first statement. The next statement is that inner products are conserved with time, and that tells you that U is unitary. Okay? From that, we went over to here and discovered that the small little incremental change is governed by a Hermitian operator. Okay? So we get to the idea of a Hamiltonian. 
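Taking the small-epsilon limit gives the equation named above, together with its bra-vector counterpart, which is used a little later in the lecture (the sign of i flips in going from kets to bras, and H itself is unchanged because it is Hermitian):

\[ \frac{d}{dt}\,|\psi(t)\rangle \;=\; -\,i\,H\,|\psi(t)\rangle ,
\qquad
\frac{d}{dt}\,\langle\psi(t)| \;=\; +\,i\,\langle\psi(t)|\,H . \]

This is the generalized time-dependent Schrodinger equation, written in units where h bar is 1.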
And then we just rewrote that and said, look, the small little incremental change could be thought of as 1 over epsilon times the time derivative of the state vector. Okay? So this is the Schrodinger equation. How do you solve the problem if there's a long time interval, not a short time interval? You basically solve this equation. And we'll talk about how you solve this equation. We'll do so for lots of examples. But, um, okay. All right. To go on and explain what this has to do with classical mechanics, or what it has to do with the corresponding concepts in classical mechanics, we need a couple of ideas about how you relate classical ideas to quantum mechanical ideas. We need the idea of what is called the expectation value of an observable. It's a bad name. It's a bad name because the expectation value as defined may have very little to do with what you expect the experiment to give. This doesn't have much to do with quantum mechanics. It's just to issue a probability theory. I think we've talked about this before, but I'll just remind you. If you have a probability distribution which looks like this, then the least likely answer that you could possibly get is right over here. It's also the expectation value. So it's a poorly chosen terminology. The right terminology should sometimes some fancy people. I won't name Murray-Gelman, but some fancy people like to call it the expected value. Well, it's even less the expected value than it is the expectation value. It's the average value. It's the average value, and the average value of something can be a value which the thing can't even take on. A possible value that it couldn't even have, the average value. For example, if you assign heads plus one and tails minus one and you flip the coin, what's the average? Zero, but you can't get zero. So it's got nothing to do with the expected value of an experiment. But nevertheless, it's called the expectation value, and we will call it the expectation value. But it's the average value. Let's talk about the average value of a observable given a particular state. All right, so our state is A. I said I was going to change the psi, but sometimes I will, sometimes I won't. Can we go back to the expectation value? Yeah. So to me, the middle is not the average. It really is the expected value. Is the value that's going to occur most frequently? No, no, definitely not. This one here, it never occurs. Never, ever, ever. Let's even make it worse. What are the x and y values? What are the x and y axis represent? P is the probability, x is some variable, something that you measure. This could be the probability for finding a particle at different locations, okay? All right. It could be that for some particular state, the probability, let's make it symmetric about the origin just to... It could be that the probability distribution looks like this. I meant to make this symmetric, and it's really very, very zero in a neighborhood of the origin here. There's no sense in which x equals zero is expected. You can do the experiment forever and ever and ever, and you will never get x equals zero. You'll either get an x somewhere in here or here. So the quality, the expected or the expectation value is sort of a misnomer. It is the average value defining averages in a very particular way. 
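The coin example can be written out explicitly; this assumes a fair coin, which the lecture implies but does not state:

\[ \bar{x} \;=\; \sum_i x_i\,P(x_i) \;=\; (+1)\cdot\tfrac{1}{2} \;+\; (-1)\cdot\tfrac{1}{2} \;=\; 0 , \]

a value the coin itself can never show.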
Now, for many, many smooth probability distributions, in particular, probability distributions don't have big holes in them, something like this, let's say, it can very often be the case, and very often is the case that the probability, that the average value is the most likely value to get. So it's just, if we call it the most likely value, what's the peak? The peak there is the most likely value. Yes, by definition, the top of the peak is. And in many, many cases, if you have nice simple probability, Gaussian probability, you know, bell shaped curves and so forth, then the expectation value or the average value is usually quite close to the average and peak value and so forth. You're all usually close to each other, but certainly there can be violent exceptions. So the coin or the spin, which can only be plus one and minus one and never zero, the expectation value and the average value are quite different. Okay, but that's just a little sermon and give me a chance to bed mouth my friend, Marie Guilman. Yeah. Yeah, well, okay, it depends on what you mean by average. There's a notion of average which is strictly mathematical and we're going to write it down in a minute. There is a unwritten law which I think it's safe to say nobody really understands, but it's the law of large numbers that if you do a thing enough times, how many times? I don't know. If you do a thing enough times, you do an experiment enough times, the average gotten by averaging the results of your experiment will be equal to the average defined mathematically to within what? To within the margin of error. What is the margin of error? There's a specific number. Does it really mean that every time you do it, the answer will come out within the margin of error? No. All right, so we're assuming standard probability theory. That if you do a thing and not plus this extra little ingredient, this extra physical assumption, I don't even know what to call it, that if you repeat an experiment enough times, the average of your data will be the mathematical average. Can anybody prove that? No. That's a law of large numbers. No, no, no, there's nothing you can do to prove it. The reason you can't prove it is because sooner or later, somewhere's in the multiverse, somewhere's is going to be an exception to it. No matter how many times you do it, a zillion times, a zillion is a number, incidentally. It's not infinity. A zillion is one with a logarithm of a zillion zeros after it or something like that. Right, if you do an experiment and there's enough repetitions of it, eventually somewhere, someplace, there's going to be a violation of it. This is closely connected to the second law of thermodynamics. You know, there was a principle, the second law of thermodynamics says that an entropy never decreases. Boltzmann was never able to prove that and he finally realized that the right law is the entropy probably never decreases, except when it does, which is rarely, except when it does. So I'm not going to try to give you an explanation of why probability theory works. This is, to me, very puzzling, but we will assume probability theory works. The statement that it works is the same as the statement that the average of your data will be, for a big enough experiment, will be the mathematical average. 
All right, so let's come over to this side here first and just say, supposing we have a collection of, we have a probability distribution for some variable, let's say for lambda, lambda one of the eigenvalue, we're going to measure an eigenvalue of L. Supposing we have a probability for lambda, there it is right there, the other thing I called prob of lambda, probability for lambda, and lambda can take on a whole bunch of different values, lambda sub i, let's call them, so this becomes the probability for lambda sub i, then the average, the definition of the average is the sum over i of lambda i times the probability for lambda i. That's the definition of the average, and we'll represent the average by the symbol, the bracket symbol, why it's labeled by the bracket symbol will come to. Okay, but that's the definition, standard probability theory definition of the average of a quantity. The quantity times the probability that the quantity takes on that particular value, summed over i, and as I said, that's standard definition of average. Okay, what is the average in a state A of the observable lambda? So let's say that A is expanded in a basis, which basis? The basis of eigenvectors of L. Remember, L is a Hermitian operator, it's eigenvectors form a basis, you can expand vectors in an orthonormal basis. So let's write this in the form alpha sub i, sum over i, lambda sub i. Remember that the lambdas are an orthonormal basis. What is the probability lambda in the state i? The probability for lambda i is equal to alpha star i, alpha i. I maintain, or this is the probability, and the average of lambda, what's called the average of lambda, is the sum over i, lambda sub i, alpha i star alpha i. So let's see what this is going to plug in. Let me prove that this is the same thing as taking the vector A, the bra vector A, and sandwiching between the bra vector A and the ket vector A, the operator L. The operator L is the operator whose eigenvalues are lambda. Let's see if we can prove that. It's actually quite easy. Let's just plug in, let's come to another blackboard, plug in, and see if we can prove that. All right, so let's calculate this creature over here. A is this summation, and let's use also that the bra vector A is the summation over i of lambda sub i alpha star i. All right, so we make a sandwich. The sandwich is the sum over i and j, why? There are two sums, one for the bra ket, one for the bra and one for the ket. Lambda sub j alpha star j, this is the bra vector A, and then we put L there, and then we put the ket vector alpha i lambda i. It's pure plug-inology. Now, what is L when it acts on lambda sub i? Lambda sub i's are the eigenvectors of L. So when L acts on lambda sub i, we still have sum i j, lambda j, alpha star j alpha i, and now L hits lambda i and it gives us lambda sub i times lambda i. Now all this junk in here is just a number for each i and j. For each i and j, this is just a number. So that means we are called upon to take the inner product of lambda i with lambda j. What is that? Right, it's one or zero depending on whether i equals j or not. Given that, this sum, the double sum collapses to a single sum in which lambda i and lambda j are identified, this just becomes sum on i lambda i, well, okay. Let's cut through some of the steps. If j is not equal to i, you get nothing. If j is equal to i, the inner product is just one. So this just becomes the sum over i alpha star i alpha i lambda i, which is exactly what we have over here. 
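The definitions and the claim being set up here, in LaTeX, with the lecture's notation: A is the state, L the observable, lambda_i its eigenvalues, and alpha_i the expansion coefficients:

\[ \bar{\lambda} \;=\; \sum_i \lambda_i\,P(\lambda_i), \qquad
|A\rangle \;=\; \sum_i \alpha_i\,|\lambda_i\rangle, \qquad
P(\lambda_i) \;=\; \alpha_i^{*}\,\alpha_i , \]

and the claim to be proved is that this average equals the sandwich \( \langle A|\,L\,|A\rangle \).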
So we've proved this relationship here that the average of any observable is just the sandwich, the bracket, the bracket, the bracket, the bracket, the bracket made out of sandwiching L, the observable, between a bracket, between a bra and a ket. That's the same bra and ket, the bra and ket associated with the state of the system. That's where this notation of bracket comes from. Okay, that's a good thing to know, that if you want the average of a quantity, just sandwich it between the bracket, the bracket representing the state. Any questions about that? Okay, now I'm going to tell you what we're going to do. What we're going to do is we're going to try to find equations governing the time evolution of these average values. The idea is that under suitable reasonable circumstances, if the probability distributions are nicely shaped, if they have a sort of nice bell-shaped curve, that by calculating the time evolution of averages, you're doing something pretty close to what classical physics would instruct you to do about calculating the equations of motion of the classical system. The way averages tend to evolve in time, as I said, under suitable circumstances, follows equations which look very, very much like the corresponding classical equations. So our next goal is to try to find the rules for the time evolution of these average values, expectation values. And that's not so hard. We have all the equipment we need to do that. Let's see if we can do it. All right, so at any given time, the average of lambda is given by psi of t, that's the state vector at time t, and operator L representing the observable times psi of t. In other words, I've just plugged in for A the actual time-dependent state of the system. This is the average of L as a function of time. I don't know what should we call it, but let's call it, okay, it's the average of L as a function of time. Again, this is a kind of wrong notation. It's not the average of L of t, it's the average of L as a function of time. We could. We could put the t outside the bracket. That's non-standard. Sub-t, but why don't we just go with standard notation? Standard notation and remember, well, okay, I'm not going to do it. I'm going to just call it average of L as a function of time. Okay, as a function of time. It is a function of time. That's the main thing, it's a function of time. If you do something else, we could call it L bar. Another notation for average is to put a bar on top of something. That's another notation for average, standard notation. And we could just call it L bar of t. It means the average as a function of time. Let's do it that way. What would we like to do with this? We'd like to find equations of motion for it. In other words, we'd like to find out how it changes with time by differentiating it with respect to time. I want to take this quantity and differentiate it with respect to time. So let's call it L dot. Now we have bars with dots on top of them. L dot of t is the time derivative of this. Now L of t, that's just some fixed operator. It's some fixed definite operator. All of the time dependence is in the state vectors. The state vectors change with time. The operators, those are fixed. Those are definite operators. So let's see if we can take the time derivative of this here thing. The time derivatives act on the psi of t's. 
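A one-line version of the proof just completed, using the orthonormality \( \langle\lambda_j|\lambda_i\rangle = \delta_{ij} \) and the eigenvalue equation \( L|\lambda_i\rangle = \lambda_i|\lambda_i\rangle \); the Kronecker delta is editorial shorthand for the "one or zero" statement in the lecture:

\[ \langle A|\,L\,|A\rangle \;=\; \sum_{i,j} \alpha_j^{*}\,\alpha_i\,\lambda_i\,\langle\lambda_j|\lambda_i\rangle
\;=\; \sum_i \lambda_i\,\alpha_i^{*}\alpha_i \;=\; \bar{\lambda} . \]

The time-dependent average being set up at the end of the passage is then \( \bar{L}(t) = \langle\psi(t)|\,L\,|\psi(t)\rangle \), the quantity whose equation of motion is derived next.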
When you take the time derivative of a product, and this can be a product of vectors, it can be a product of numbers, it can be a product of functions, well, functions, the rule is always the same. It's the time derivative of one times the other plus the time derivative of the other times the one. That's very general. So this is going to equal then d psi by dt times L psi, plus psi L d psi by dt. Oops, I erased a very important equation, or else I buried it under the blackboard. Let me write it over here. d psi by dt is equal to minus i H psi. That's what we're going to stick in over here. So we begin to see that the Hamiltonian is going to tell us something about how average values change with time. The Hamiltonian of classical physics tells us how the corresponding quantities change with time. The Hamiltonian of quantum mechanics will tell us how the averages change with time. We're going to also need the bra version of this. So let's write the bra version of this. The bra version of it is d by dt of the bra vector psi of t is equal to plus i psi H. H is Hermitian, and so we don't have to Hermitian conjugate it. It is Hermitian, but we do have to change the sign of i when we go from kets to bras. OK, so now we're just going to plug in. Let's just plug in, and we'll get L dot, yeah, right, L dot. OK, let's do this one first over here. This one is going to be plus psi, and there's a minus sign, minus sign from here, minus i psi L H psi. That's what we get from here. How about the other one? The other one is plus i psi H L psi. Or i times the expectation value. We have psi here, psi here. We have psi here, psi here. i times H L minus L H, psi. How about H L minus L H? Is that zero? Why not? H L, I don't know. Who cares which order we put them in? Well, operators don't necessarily commute. Commute means that you can freely interchange the order of them. In general, this is true of matrices. Products of matrices don't necessarily commute. Sometimes they do, and in particular, an operator commutes with itself. L times L minus L times L, that's OK. You can do it there. L times L minus L times L is zero. So is H times H minus H times H. But in general, L times H is not equal to H times L. It may be, but typically not. What is this animal called, H times L minus L times H? It's called the commutator of the two. So this is the definition. Given two operators, well, they could be just H and L. The combination H L minus L H is written with a square bracket, H comma L, and it's called the commutator. Commuting has something to do with passing freely between each other. Exactly what it has to do with taking the subway in New York, I don't know; people don't know about that. But it's called the commutator. And so what we have here is the equation that the average of L at any given time, sorry, the time derivative of the average, is equal to psi, commutator of H with L, psi. But this object over here is just the expectation value of the commutator of H with L. So we can think of this as an equation relating the time derivative of a certain average to the average of another quantity. Is it plus i? It's plus i. Or we can write L dot of t is equal to i times the average of the commutator of H with L. And I'll indicate that by a bar on top of it. So we have equations of motion now for averages in terms of averages.
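The computation described above, assembled into one line of LaTeX (h bar set to 1, and \([H,L] \equiv HL - LH\) as defined in the lecture):

\[ \frac{d\bar{L}}{dt}
\;=\; \Bigl(\tfrac{d}{dt}\langle\psi|\Bigr)\,L\,|\psi\rangle \;+\; \langle\psi|\,L\,\Bigl(\tfrac{d}{dt}|\psi\rangle\Bigr)
\;=\; i\,\langle\psi|\,HL\,|\psi\rangle \;-\; i\,\langle\psi|\,LH\,|\psi\rangle
\;=\; i\,\langle\psi|\,[H,L]\,|\psi\rangle . \]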
If the probability distributions for everything are nice and peaked, peaked enough that the averages are close to the highest point on the curve, and the curve is reasonably narrow, so there isn't an enormous amount of uncertainty, then this basically says that the time derivative of the classical, or the time derivative of the approximate L of t, is equal to the approximate average of the commutator of H with L. And so one writes (this, of course, is not quite right, but write it anyway) L dot is equal to i times the commutator of H with L. Now, this has to be taken with a little grain of salt. What it means is an equation among averages. It means an equation among averages, but it's often just written in this form, L dot is equal to i times the commutator of H with L. OK, now, before we finish, I want to remind you of a classical mechanics equation of motion. It's the equations of motion represented in terms of Poisson brackets. Do you remember Poisson brackets? Good old Poisson brackets? Fish brackets? OK, what were the Poisson equations of motion? Anybody remember? And if you write d by dt, exactly, d by dt of a classical function, and this could be a classical function of q's and p's. In classical mechanics, we have q's and p's. L, incidentally, is a standard notation for angular momentum. This could be the angular momentum, but right now I'm just using it to represent any observable. d by dt of L is equal to a certain Poisson bracket. And the Poisson bracket is the Poisson bracket of L with the Hamiltonian. Remember that? Probably some of you don't. I'll advise you to go and look it up. It's in the next to last lecture of the classical mechanics course, on Poisson brackets. And I will not go through it here. I just wanted to remind you of it. And I want you to notice the similarity. Where is it? Did we erase it? I think I erased it. The similarity of this with dL by dt, average value, is i, minus i or plus i, I've lost track, plus i, i times the commutator of H with L. Now, incidentally, let's just point out one thing. Poisson brackets have the property that if you interchange the two expressions, what happens to the Poisson bracket? It just changes sign. So do commutators. HL minus LH, if you interchange the order of them, changes sign. So we can also write this as minus the commutator of L with H. This is a remarkable similarity. And it suggests that we identify the Poisson bracket with minus i times the commutator. Now, I've left out a discussion of something. I just realized it just now. And I've completely forgotten about it. What happened to h bar? What happened to h bar? We've been working in units in which h bar is 1. I've been working in units in which h bar, Planck's constant, is 1. The question is, if we were to reinstate Planck's constant and make it not equal to 1, where would it go in these equations? And what's that? Yes, it goes in Schrodinger's equation. That's correct. But how do we trace it? Let's see. How do we trace it? Here's the equation here. I told you that H has the significance of a Hamiltonian, and therefore it has units of energy. So let's look at this equation. Psi cancels out on both sides as far as units go. Units of this psi are the same as the units of this psi. And so the right-hand side of the equation, apart from the psi, has units of an energy. The left-hand side of the equation has units of inverse time. Inverse time and energy don't have the same units. Inverse time is inverse seconds, and energy is joules. Or whatever.
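Side by side, the quantum equation for averages (with Planck's constant reinstated, as discussed at the end of this passage) and the classical Poisson-bracket equation of motion it is being compared with; the curly bracket is the classical Poisson bracket and the overbar is the quantum average:

\[ \frac{d\bar{L}}{dt} \;=\; \frac{i}{\hbar}\,\overline{[H,L]} \;=\; -\,\frac{i}{\hbar}\,\overline{[L,H]}
\qquad\text{versus}\qquad
\frac{dL}{dt} \;=\; \{L,H\} . \]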
So we have an equation here which doesn't make good dimensional sense. We need a constant in here. And the constant, we have to put a constant in here. Oh, let's see. Where does the constant go? Does it go on the right-hand side or the left-hand side? Where does Planck's constant go? Right-hand side or left-hand side? What? Either. If it goes in one place in one thing, it goes in the inverse in the other place. OK. So here's all you have to know is that Planck's constant has units of energy times time. Planck's constant has units of energy times time. So let's see. So energy times time. So we want to put Planck's constant here. Energy times time over time is equal to energy. That's where Planck's constant goes into these equations. And I apologize for having completely forgotten about it. But that's where it goes. And if we follow it through the equations, no doubt it appears over here. I think it appears in the denominator here. Yeah. You see, I got everything right except where I have h. I should have h over h bar. We could rewrite this this way. h over h bar. So in all of my equations, wherever we wrote h, the Hamiltonian, we really should have, if we want to keep the unit straight, h over Planck's constant. And that goes straight through to here. So really, the right equation, including the constants, is the L dt is 1 over h bar here. 1 over Planck's constant. Planck's constant is a small number. So this looks like it's very big. But on the other hand, this is L times h minus h times L. In classical physics, that would be 0. So whatever this is, classically it's 0. And quantum mechanically, it must be a small correction for the classical physics, to the extent that quantum mechanics is a small correction for classical physics. And so the commutator is itself something which is typically small in a order h bar. So that allows us then to guess, to guess, an identification between commutators and Poisson brackets. Now, this is not intuitive at the moment. It's just looking at two equations and saying, look, if there's going to be any connection between quantum mechanics and classical mechanics, then there must be a connection between the Poisson bracket and the commutator. Namely, Poisson bracket must be equal to minus i over h bar times commutator. Or it's usually written the other way, commutator is i h bar times Poisson bracket. Commutator is a small thing because there's an h bar there. Poisson bracket in classical physics is not a small thing. It's some number. I mean, it's some characteristic quantity which is not neither small nor big. And in the same units, the units that you do classical physics, the standard units, meters, seconds, whatever, commutators are very small. That's because they're almost 0. A times b minus b times a is almost 0. OK. No, I don't think so. I think i. We had a minus i on this side, minus i over h bar, multiply by h bar. And now multiply both sides by i and use that minus i times plus i is 1. It was Dirac who recognized this connection and built his quantum mechanics out of it. He built his quantum mechanics out of making an identification of Poisson brackets with commutators. But of course, the foundations of the subject, quantum mechanics is a consistent subject without having to make any reference to classical physics. The right way to think about things is that quantum mechanics comes before classical mechanics. Commutators before Poisson brackets. You have to derive classical physics as an approximation to quantum mechanics. 
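The identification being suggested here, written out in LaTeX; plugging it into the average-value equation above reproduces the classical Poisson-bracket equation of motion, which is the consistency check behind the guess:

\[ \{L,M\} \;\longleftrightarrow\; -\,\frac{i}{\hbar}\,[L,M],
\qquad\text{equivalently}\qquad
[L,M] \;\longleftrightarrow\; i\hbar\,\{L,M\} , \]

and with h bar restored the Schrodinger equation above reads \( i\hbar\,\frac{d}{dt}|\psi\rangle = H\,|\psi\rangle \).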
So here we see some formal similarity. At this point here, we have no idea why this commutator should have features in common with the Poisson bracket. Before we go, let me just say, let me write down some of the features of commutators and Poisson brackets to remind you. And again, all of this should be at this stage, should be a little mysterious. What is this funny connection between Poisson brackets? But let's pursue it anyway. OK. Let me remind you about Poisson brackets. First of all, if I have two quantities, I'll call them A and B now, or maybe A and B I use for the L. No, let's just call them L and H. They don't have to be the Hamiltonian and anything special. They're just L and H. All right. The first thing is the Poisson bracket of L with H is equal to minus the Poisson bracket of H with L. OK. The commutator of L with H is minus the commutator of H with L. So that's a parallelism that we can begin with. Now, let's take some other properties of Poisson brackets. There's a lot of properties of Poisson brackets, but let me just write one or two more. If I have two operators, or two not operators, but two quantities, let's call them L and M. OK. L times M is itself a classical quantity, and I can compute its Poisson bracket with H. Does anybody remember the answer? I'll remind you. I'll tell you. Go look it up. This is important. It's L, it's the Poisson bracket of L with H times M plus L times the Poisson bracket of M with H. Now, in classical physics, the order of multiplication doesn't matter. So you can bring the M on the left or the right. It doesn't matter. You can bring the L on the left or right. It doesn't matter. But this is the relationship for the product of two variables, the Poisson bracket. Go back to the next to the last lecture and check this out. This was one of the defining, not one of the defining, one of the properties of Poisson brackets. You can ask the same question over here. Supposing you have LM and you take its commutator with H, what do you get? This is an exercise. This is an exercise. All you do is you write that this is equal to LMH minus HLM. And now you start just juggling and adding some terms, subtracting the same terms. And eventually, you'll discover that this is the same as this, same relationship, that this is equal to L times the commutator of M with H plus commutator of L with H times M. The way to do this is just write LMH minus HLM and then write out these guys here. This is LMH minus LHM plus LHM minus HLM. Four terms here, two little canceling pairs, and you'll just get this back. It's a one-step thing. It's a really easy thing. So you begin to see some pattern that Poisson brackets and commutators must somehow be closely related. The last thing I'm going to show you tonight is about energy conservation. Now, we're not going to consider the most general concept of energy conservation. We're simply going to ask, is the average of the energy conserved with time? We have not even really gotten to what it would mean for energy itself to be conserved in any deeper way. But we can ask, we have all the ingredients now, which will allow us to ask how the average of any quantity changes with time, no, here it is over here, we can now specifically ask how the average of the energy changes with time. Let's do it. The energy I claim is the Hamiltonian, or its definition, if you like. What is energy? It's Hamiltonian. So let's check and ask whether energy is conserved if energy is the Hamiltonian. 
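The exercise stated above, written out; the trick is exactly the one described, adding and subtracting LHM:

\[ [LM,\,H] \;=\; LMH - HLM \;=\; (LMH - LHM) + (LHM - HLM)
\;=\; L\,[M,H] \;+\; [L,H]\,M , \]

which has the same form as the Poisson-bracket rule \( \{LM,H\} = L\{M,H\} + \{L,H\}M \), and both brackets change sign when their two entries are interchanged.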
This equation would read: the time derivative of the average of the Hamiltonian is equal to minus i over h bar times the commutator of the Hamiltonian with itself. The Hamiltonian with itself, that's H times H minus H times H. H times H is the same as H times H. Quantum mechanics is not that weird. It's weird, but H times H is the same as H times H. So this is zero. Whatever the Hamiltonian is, the time derivative of its average, of its expectation value, is always zero. And so it's conserved in that sense, at minimum in that sense. There's a stronger sense in which it's conserved, and we'll come to it. But even this weaker sense is kind of interesting, and it tells us we're on some kind of right track to be able to compare quantum mechanics and classical mechanics. OK, I'm going to stop now and take a few questions. And yeah. Here? Yeah. Probably. I just mean the average. I mean the average defined as the summation of the probability of lambda times lambda. So I don't know. What's the standard symbol in statistics for an average in this sense? How would they do it? The first moment of the distribution, you think, sometimes. The first moment of the distribution. Yeah. OK. E of lambda. Yeah. OK. I'm used to just calling it lambda. It's included in brackets. Hm? E bracket of lambda. E bracket of lambda. And that stands for what? The average, expected value. Expected value, even though it's not the expected value. You don't expect to get it. Yeah. OK. You've never seen the average value with the bar on top of it? Yeah. I mean, it's the normal convention that it means the sample mean, like the average of things. OK. Yeah. When I use the term average, I will mean this. It's also called the expectation value. And I will freely interchange the Dirac notation for the average with the bar notation. OK. And if that's not standard according to statistics books, too bad. So over here you wrote down the Schrodinger equation, and then you wrote down the, how did you go from the left-hand equation to the right-hand equation? Was that complex conjugation or was that Hermitian conjugation? Hermitian conjugation. OK. But remember that H is Hermitian. We proved that. That was the condition of unitarity, that H is Hermitian. But how can you interchange the order of psi and H but not d dt and psi? Where, where? On the left-side equation, you've got d dt psi. d dt is the operator. This one here? Yeah. The time derivative of a bra vector is a bra vector. The time derivative of a ket. Look, the time derivative is just another way of talking about differences. The difference of two bra vectors is a bra vector. The difference of two ket vectors is a ket vector. OK. In the limit, the difference becomes the time derivative, or the derivative. So the derivative of a bra vector is a bra vector. The derivative of a ket vector is a ket vector. If you want to take this ket equation, what is the bra that goes with this ket? The bra that's dual to this ket is the time derivative of the corresponding bra vector. So since time is not an operator, you don't have to worry. Right. Just think of this as taking the difference of two vectors. Yeah. It's just the difference of two vectors. Right. Exactly. And again, quantum mechanics is weird. But taking the difference of two vectors and replacing it by a derivative when the difference is small, that's perfectly legitimate.
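The energy-conservation statement, written out; whichever overall sign convention is used, the result is zero because the Hamiltonian commutes with itself:

\[ \frac{d\bar{H}}{dt} \;=\; \frac{i}{\hbar}\,\overline{[H,H]} \;=\; 0 ,
\qquad [H,H] \;=\; HH - HH \;=\; 0 , \]

so the average energy is constant in time, provided H has no explicit time dependence.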
You said that the form of Hamiltonian, we will choose depending on symmetry considerations. Well, yeah. In the derivation here, we did not put any symmetry consideration so far. No. So why is it that we are getting energy conservation out? How come we're getting what? Energy conservation out. How do we know that this is corresponding to energy? Well, we don't know that it's corresponding to energy until we do a lot more. We don't know that we can define it to be energy, the Hamiltonian. But that's not satisfying. We want to see in cases where we understand the classical physics and we understand what energy is, we're going to want to see that the quantum mechanics of the same system, the classical system, is an approximation to the quantum mechanical system. Physics is really quantum mechanical. Under certain conditions, such as a system is heavy, basically that, that it's heavy, that it's big, that it's large masses, then the system behaves approximately classically, which means that the average values move in ways which are consistent with the classical equations of motion. We're going to want to see that when we study such big, heavy systems that the thing that we call the quantum mechanical Hamiltonian is essentially closely connected with the thing that we call the classical Hamiltonian. But we'll come to that. We can't do everything all in one night. At the moment, I'm just showing you similarities. And we're going to want to do better than that. Whether we'll actually get to do better than that, I don't know. We can't do everything. But that's the logic. Yeah. OK, way back to the electron span. I think when we left off the last lecture, you were going to show what the probability is to choose the thing when it's all fitted. Yeah. Right, I had saved some time to do that. But I think I've done enough for tonight. Who was it who sent in the solution to it? I think he had it. Yeah. Yeah, well done. Right, so but we'll do it. We'll do it. I just run out of steam. Right, we'll come back to that. We're going to come back to that. So how can your postulates work without a collapse postlet? What's that? How can your postulates work without a collapse postlet? Well, we really want to derive the collapse postulate. But the derivation is deeply connected with something else that we haven't come to yet. And I think we're going to come to it next time, entanglement. The measurement process is really a process of establishing entanglement between a system and an apparatus. And until we've talked about the entanglement process, I think until we do, we simply have to adopt the language of collapse. But next time, next time, next time. Remind me about collapse. I'm close to collapse. OK. For more, please visit us at stanford.edu.
January 30, 2012 - In this course, world-renowned physicist Leonard Susskind dives into the fundamentals of classical mechanics and quantum physics. He explores the link between the two branches of physics and ultimately shows how quantum mechanics grew out of the classical structure. In this lecture, he continues his discussion of the vectors and operators that define the language of quantum physics.
10.5446/15013 (DOI)
Stanford University. The subject this quarter is a collection of things, but mostly special relativity and field theory. We're coming back now, not quantum field theory, we're not prepared for quantum field theory yet. We've learned classical mechanics, those who have participated up till now. Classical mechanics, the general structure of classical mechanics, what it is, how it works, the action principle, Hamiltonians, Lagrangians, all that kind of stuff, and I assume you know it. We also learned some elements of quantum mechanics. I can't believe I taught a whole quarter on quantum mechanics when we never got to the harmonic oscillator. When I tell people that, they say, what did you do? And I tell them, and they say, oh, that's good. But we learned the principles of quantum mechanics, but we didn't spend much time on examples in order to go forward with quantum mechanics and quantum field theory, we're going to have to do some more quantum mechanics for sure, but not this quarter. No quantum mechanics this quarter. We're back to classical physics, but now back to the relativistic end of things. We've studied non-relativistic mechanics, motion of particles, mostly motion of particles, F equals MA, Newtonian physics. We have not studied anything about light, and that's because light basically is a relativistic phenomenon. A phenomenon that has to do with special theory of relativity, and that's going to be our goal this quarter, the special theory of relativity and classical field theory. Classical field theory means electromagnetic theory, basically electromagnetic theory, waves, forces on charged particles, all that sort of thing, but in the context, very definitely in the context of the special theory of relativity. So that's where we're going to begin, special relativity, that's tonight. I suspect that most of you have at some point in your lives learned the elements of special relativity. How many people would not fall into that category? So I'm right then, just about everybody has learned special theory of relativity. How many people feel that I could completely dispense with it without losing any, you know, without, is it, do you feel that you would like to go through special relativity? All right, so I'm getting the rough answer that the answer is yes, and that's what I prepared to do tonight, but I had also been prepared that if everybody said we really understand special relativity completely, can you move on to the next thing? I would have said yes. But I'm glad you didn't because I like the idea of teaching from the beginning. Okay, so let's come to special relativity. Let's first talk about relativity. Well let's even go back a step before relativity, the idea of a reference frame. The idea of a reference frame, you know what, I'm not going to spend a lot of time on the basic philosophy and everything. You know what a reference frame is. We talked about reference frames in classical mechanics. It's a set of spatial coordinates, X, Y, and Z, usually based on Cartesian coordinates to specify the coordinates. You will want to specify where the origin of coordinates is. There's ambiguity in that. You can translate it from one place to another. And you will also want to specify the orientation of the coordinates. In other words, my orientation. I mean, they're free up to some rotation. You want to pick the X axis that way, the Y axis that way, the Z axis that way, or change it but specify it to begin with. And that's part of picking a reference frame, a spatial set of coordinates. 
In addition to a set of coordinates which could be measured by meter sticks, if you like, if you want to think physically or concretely about what a coordinate system means, think of space as filled up with the lattice of meter sticks so that every point in space can be specified as a certain number of meters to the left, certain number of meters up, certain number of meters in and out. And that's a coordinate system for space. And in order also to specify when something happens, not just where it happens, you have to specify a time coordinate. So a full coordinate system or a full reference frame consists of an X, a Y, and a Z, and a T. In other words, a time axis. And that's a reference frame. Now you can have different reference frames. Apart from the ability to rotate from one thing to another, you can also consider moving reference frames, reference frames that are moving relatives to some specific reference frame. We could speak of our reference frame by our, I mean yours and mine. Now let me take, let me not say yours and mine, your reference frame. Your sitting still. I will assume that for the most part you will continue to sit still tonight. But I will move around from time to time. I may march past you like this or I may march past you like that. And we will, if I think of my coordinates as a coordinate frame, a set of meter sticks that I carry with me and move with me as I move so that at every instant I personally am at the center of my own coordinates, then my coordinates are different than your coordinates. You will specify an event by an x, y, z and t. I will specify it by a different set of x, y, z and t's to account for the fact that I may be moving past you. In particular, we won't agree about the x's. If I'm moving along the x-axis relative to you, mine knows I will always say relative to me is six inches in front of my face. I think it's probably less than that. And I will say my nose is at x equals six. You will say my nose is not at x equals six. You will say my nose is moving and it's at some position that changes with time. And so you'll give a different coordinate to it. In ordinary pre-relativistic physics, we will also specify times pre-special relativistic physics. We will also specify times. And in pre-relativistic physics, we would assume that we all have watches. Our watches are synchronized. We go through some operation to synchronize them. We'll talk about operations to synchronize watches. But we'll assume they're synchronized over the given instant of time. All of your watches agree with each other and they agree with my watch. My watch agrees with your watch. That was an assumption that was made in all of pre-relativistic physics that time is time is time is time and there's no ambiguity associated with moving reference frames about what the time is at a given instant. What does relativity mean? Relativity means that for all coordinate systems which are related to each other by uniform velocity, all reference frames related to each other by uniform motion, the laws of physics are the same. The laws of physics are the same in every reference frame. The word is inertial reference frame. You know the term inertial reference frame. If one frame is inertial, then any frame moving with uniform velocity relative to it is inertial. What is an inertial reference frame? What is the first inertial reference frame that you start with? 
Let's just make it simple and say it's a frame of reference in which Newton's laws are correct and in particular where particles which have no forces on them move with uniform velocity. That's a reference frame. Every frame moving uniformly relative to it, not rotating relative to it, not accelerating relative to it is another inertial frame. It's a feature of Newtonian mechanics that the laws of physics, F equals ma, that together with Newton's law of attraction or Coulomb's law of attraction, there's the same in every reference frame. If you like the way I like to describe this, go to the proverbial railroad train moving down the x-axis with uniform velocity and think about you or me. I'm in the train. You're standing still. You'll have a set of laws and the way I like to think about it is laws of juggling. Juggling is a good thing that makes a lot of use of classical mechanics and inertia, forces, all the good stuff. If you know how to juggle at rest, you juggle exactly the same way in the moving reference frame. You cannot tell. Another way to say it is if you're in a moving reference frame, if I'm in a moving reference frame and everything is sealed so that I can't see outside, I cannot tell that I'm moving. I try to find out by doing some juggling and I find out that my standard laws of juggling work and I assume therefore that I'm at rest, but that's not right. All it tells me is that I'm in uniform motion. The principle of relativity is that the laws of physics are the same in every reference frame. That principle existed before Einstein. It was not invented by Einstein. Sometimes it's attributed to Galileo. I don't know if he said it or not. Newton would have recognized it. It is called the principle of relativity. What did Einstein add? Einstein added one law of physics. The law of physics is that the speed of light is the speed of light. C, that the speed of light is 186,000 miles per second, or that the speed of light is 3 times 10 to the 8 meters per second. That's a law of physics. Now if you combine the two things together, that the laws of physics are the same in every reference frame and that it's a law of physics that light moves with a certain velocity, you come to the conclusion that light must move with the same velocity in every reference frame. Why? Because the principle of relativity says the laws of physics are the same in every reference frame and Einstein announced that it's a law of physics that light moves with a certain velocity. That is puzzling. Let's quantify and make precise why it's so puzzling. Let's try to be very, very almost pedantic about it. Let's start with a reference frame. A reference frame is a coordinate axis in space. Now for our purposes now, space is just one dimensional. I don't care about y and z. I care only about x. So let's draw an x-axis. Here's x. And also a time axis. Here's t. Let's call this your reference frame. And in your reference frame, light moves with the speed of light, with the standard speed of light. And so if a light ray is sent out from the origin, time t and position x, it will move with a trajectory given by x equals ct. That's the motion of the light beam. And what does that look like on this map here, on this map of space and time? It looks like a straight line. X equals ct. It looks like a straight line like this. Now the faster the light moves, the closer to horizontal that line will be. In other words, in a given amount of time, the further it will move. So see some number. 
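The pre-relativistic relation between the two frames described here, together with the light ray it is about to be applied to, in symbols:

\[ x' \;=\; x \;-\; v\,t, \qquad t' \;=\; t, \qquad \text{light ray:}\quad x \;=\; c\,t . \]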
If I actually put in 186,000 miles per second, that line would be practically so horizontal that you couldn't see the difference from horizontal. It would mean that this light ray goes 186,000 miles in only one second. That would be a very horizontal line. So I will use some other units for the speed of light. Units in which I can see explicitly that the light ray is moving. It moves a certain distance in a certain time. That's its velocity. Okay, now let's add a second reference frame. A second reference frame moving relative to the first. The second reference frame has an x prime axis. Its axis is called x prime, and the center of it, just like the center of me, is moving relative to you. So because it's moving, it means that I, the center of my coordinate system, am also moving along a straight line. That's not a very good pen. Let's just try another one. Now I'm moving along a straight line, and that straight line is x equals vt, where v is my velocity relative to you. So I'm moving with x equals vt. My position relative to you keeps increasing as a function of time, and that's the trajectory x equals vt. Now I don't call this x equals vt. I just call it x equals 0. It's me, but I don't want to call it x equals 0. Let's call it x prime equals 0. That's x prime. We will agree to call my coordinates primed. I call my coordinates x prime and t prime. Next, what's the relationship between your coordinates and my coordinates? That's pretty easy. X prime is equal to x minus vt. This distance here is vt. My coordinate is called x prime. Your coordinate is called x. My coordinate is your coordinate, less the distance between us, which is vt. So that's x prime equals x minus vt. And what about our clocks? Well, if we make the assumption that Newton made, that all clocks can be synchronized and that the time that a clock registers has nothing whatever to do with how it moves, then before I even started to move, we could synchronize our clocks, and now I can start to move and the rule will be my time is the same as your time. This would be the transformation of coordinates between your coordinates and my coordinates. If you know when an event happens, you can tell me in my coordinates when it happened. Okay, now let's take this light ray, the light ray that moves with x equals ct in your frame, and ask how it moves in my frame. All right, so I want to find out how the light ray moves in my frame. I use x prime equals x minus vt, and along that light ray, x is equal to ct. All along this light ray, x is equal to ct. So that means along the light ray, x prime is equal to ct minus vt, which is the same as c minus v times t, which is also the same as c minus v times t prime, since t and t prime are the same. That's important. That's important. So according to my watches and my clocks and my measuring sticks, that light ray, which was emitted from here, does not move with the speed of light c, but moves with the speed c minus v. That's bad. Something's wrong. At least if Einstein was right, something's wrong, because Einstein announced the rule that all light rays move with the same speed. Okay, what about a light ray going the other way, incidentally? The light ray going to the left would be x equals minus ct. If you plug that in to x prime is equal to x minus vt, you get x prime is equal to minus c plus v times t.
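To keep the bookkeeping straight, here is that pre-relativistic transformation and what it does to the two light rays, written compactly (just a restatement of the steps above):
\[
x' = x - vt, \qquad t' = t ,
\]
\[
x = ct \;\Rightarrow\; x' = (c - v)\,t' , \qquad\qquad x = -ct \;\Rightarrow\; x' = -(c + v)\,t' .
\]
So under the old rule, the rightward light ray would appear to move at c minus v in my frame and the leftward one at c plus v, which is exactly the conflict with Einstein's postulate.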
In other words, a light ray going, if I'm moving to your right, a light ray going that way looks to me like it's going a little bit slower by this amount. A light ray going in the other direction appears to me to be going a little bit fast. That was what Newton would have said, and that is what Galileo would have said, and it's what everybody would have said, until the end of the 19th century when people started measuring the speeds of light extremely carefully and found out that no matter how things were moving the speed of light always appeared to be the same. The only way to reconcile this is to say something is wrong with the transformation law between your coordinates and my coordinates. Something is wrong with this simple-minded coordinate transformation here. So the first step of special relativity is to figure out how this has to change your coordinates versus my coordinates in order that the speed of light be the same in every reference frame. I'm going to take you through that first, through the steps. Even before that, we should go back and ask about perhaps assumptions that we made which were unwarranted. Well, there's a zillion assumptions we made, obviously, a lot of them, but the most important one and the one which is most at risk, and in fact the one which is wrong, is that simultaneity in every reference frame is the same. That if we begin with our clocks synchronized, and now I start to move, will my clocks remain synchronized with your clocks? In other words, is this the right transformation between moving clocks and stationary clocks? And that, of course, in hindsight is the one that we know was wrong, that the correct relationship is different and in fact that the whole idea of simultaneity is frame dependent, is reference frame dependent. Here's what I want you to imagine. Each one of you has a clock. All right? Your clock, your clock, your clock. You synchronize them by whatever means you synchronize your clocks. I have an equivalent collection of friends who are spread out exactly the same way relative to me as the front row is here relative to this gentleman here. Each one of my friends has a clock. I've made sure that our clocks are synchronized and now we're moving. We're moving relative to you. As we go past, we check each other's clocks and we check whether they're still, whether they are synchronized. And if so, if not, excuse me, if not, how much out of whack is each clock depending on how far down the line you are? Equivalently, we're asking what is the, we can ask the same thing incidentally about our meter sticks. We could ask as I pass you is my x equals one, one unit of your position relative to you if you get my point. And of course, this is where Einstein made the great leap. He said we have to be more careful. We have to be much more careful and define very carefully what we mean by synchronous and what we mean by lengths, what we mean by times. All right. So he said, okay, look, let me, let me think about experimentally how I would synchronize two clocks. Given his postulate. His postulate was the speed of light is the same in every reference frame. How do you go about synchronizing your clocks and how should I go about synchronizing my clocks? They said, well, if light goes that direction with speed C and that direction with speed C in every direction, you will do a little trick. 
Little trick is if we have two clocks, let's say this gentleman at the end over here and that gentleman at the end over there and they want to synchronize their clocks, they don't want to get up out of their seats. They're lazy. They don't want to get out of their seats. They want to stay where they're sitting. Then they find somebody right in the middle. Now how do they know the person is in the middle? They measure with meter sticks. I'm afraid you'll have to get out of your seats if you want to do that. But you measure with your meter sticks and you find that that's five meters or four meters to the center. You find that's four meters to the center. Go back to your seats and now check your clocks. All right. Now check your clocks by, we could also have somebody sitting in the middle there if we liked. We could check the clocks by when your clock reads 12 noon and when your clock reads 12 noon, each send out a flash of light, a flash bulb. If the two flashes arrive at the center at the same time, then your clocks were synchronized. You each said you're going to send out your light beam when your clock reads 12 o'clock. If the light at the center arrives at the same time, then your clocks were synchronized. That's the way Einstein decided to define synchronicity. Now what about me who's moving? I'm moving. I get to this point over here. When I get to this point over here, let's say that just happens to be the point that you emit your simultaneous light rays. At 12 noon according to your clocks, you each emit a light ray, but it doesn't quite get to the center at 12 noon. It gets to the center slightly later according to your clocks by which time I'm over here. Since I'm over here, your light ray will arrive at me a little bit late relative to your light ray. So I will say you guys are out of synchronous. You made some mistake. You didn't synchronize your clocks because the light arrived at me at two different times. I can either say that or I can say that the meaning of synchronicity is different in the two frames of reference. Okay, so let's go back to our two coordinate systems and be skeptical about exactly what synchronous means in the moving reference frame. In the stationary reference frame, we'll assume that synchronous means two points at the same horizontal level here, at the same T. But what about the moving frame of reference? The moving frame of reference will find in a minute that this point is not synchronous with this point, but a synchronous with some other point. And in fact that the whole surface here, that the moving reference frame calls synchronous, not yet, that's the wrong place, that's light, is someplace else. We're going to map out what the moving observer calls synchronous and how are we going to do it? We're going to do it by synchronizing the clocks according to Einstein's rule. Okay, so what we need now is a better drawing. Before we encounter a relativity problem, draw a picture. That's the first thing to do. You first draw a picture. The picture is always the same. It's X and T. X and T can be a reference frame that you can think of as stationary in your frame of reference. The other frame of reference, think of the railroad train moving down the axis. But the next thing to draw in before you draw anything else, draw in light rays, how light rays move. So draw in X equals CT and X equals minus CT, in particular X equals CT. We'll worry about X equals minus CT later. 
But as I said, if we were to use C equals 186,000 miles a second, we would draw a practically horizontal line. We don't want to do that. So we choose the speed of light in some units where it's easier to draw the picture. In particular, the natural choice of units is to make the speed of light one. How do you make the speed of light one? Well, you use length units, which are defined relative to your time units in an appropriate way. If your time units are seconds, use your length units to be light seconds. How big is a light second? Well, it's 186,000 miles, but never mind. One light second is a unit of length. And how fast does light go in units of seconds and light seconds? It goes one light second per second. That means that on a diagram like this where X measures light seconds and T measures seconds, light moves at 45 degrees. I'm sure you've seen this before, but I'm just spelling it out. And most backwards at the same 45 degrees. So that's the trajectories of light rays. Next, let's draw in another observer, another reference frame, a moving reference frame. And again, the reference frame is going to be moving X prime equals X minus VT, exactly like it is here. One picture superimposed on the light ray. This is X equals VT. What is X prime along here? Zero. This is also X prime equals zero. So let's put X prime equals zero. That's because X prime is X minus VT. And if X is equal to VT, X prime is equal to zero. So that's the moving reference frame. And let's try to figure out where T prime is, where the T prime axis is. All right. We begin by putting in three people. The one in the end, the one in the middle, and the one at the other end here. All right. So at the initial time right over here, we're going to put in a moving person. Not a stationary one, I should have said, not you people. My friends. My friends who are separated by equal distance is me, there's the one in front of me, and there was the one yet in front of him. Let's put in the first one. And he moves with the same velocity, and he moves parallel. Let's suppose that he's one unit ahead of me, but measured with your meter sticks. That means that this distance right over here is one unit. What does that say about the equation of this second blue line here? It says that it's X equals VT plus one. It's one unit ahead in the stationary coordinates. What about the next person who's two units ahead? Like that. And his equation is X equals VT plus two. Now person one over here has a watch. And as it happens, his watch says time zero, what is time zero? That's not a time on my watch. I'll take time zero to mean 12 noon. Time zero occurs right over here along his trajectory. That's time equals zero. In other words, the moving observer and the stationary observer agree by assumption on what time T equals zero means. The stationary observer, T equals zero all along the horizontal axis. That's essentially the definition of the horizontal axis. All times from the stationary observer are the same along here. But let's see if we can figure out where time zero is along this second axis. Along, let's give these people names. Fred, Mary, and Seymour. Is that the way it's supposed to be Seymour? All right, so the way this is going to work now is at this point, Fred is going to send out a light signal to Mary. And at some point, I don't know what point yet, Seymour is also going to send out a light signal toward Mary, but they're going to send it in such a way that they arrive at Mary at the same instant. 
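For reference, here are the worldlines now on the diagram, in the units where c = 1 (a sketch; Fred, Mary, and Seymour sit on the successive lines, as named above):
\[
\text{light rays: } x = \pm t ; \qquad \text{Fred: } x = vt \ (x' = 0) ; \qquad \text{Mary: } x = vt + 1 ; \qquad \text{Seymour: } x = vt + 2 .
\]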
So Fred sends out his light signal along here. Where does Seymour have to send out his light signal from in order that it arrive at exactly the same time? Well, if I start in this end and I start shooting light rays up at 45, I'm likely to miss this point. So I'll be smart. I'll work backward 45 degrees to here. Each light signal in the stationary reference frame moves at 45 degrees. You can see because the second reference frame is tilted relative to the first reference frame that the point occurs above what the stationary observer calls t equals 0. This point over here, the moving observer, will call t prime equals 0. Why? Because in the moving frame of reference, these two points sent light rays to the central observer over here that arrived at the same instant of time. So Mary will say, you guys sent me the light signal at exactly the same instant of time because it arrived at me at the same instant of time and I happen to know that you're equally spaced. Now it just becomes a little exercise to figure out where this point is. So I'm going to take you through that exercise. We're going to do it in detail. It's very easy. Let's give these points names. This is A and this is B. And what we want to figure out is what the coordinates of this point B are because we know B is synchronous with the origin in the moving reference frame. That will give us some information. In other words, B must be a point t prime, one of the points for which t prime is equal to 0. Okay, what about point A? How do we find point A? We find point A by recognizing that it's at the intersection of two lines. What is this line over here, the green line? That's x equals ct. This line is x equals ct because it's a light ray moving to the right. We've decided to set the speed of light equal to 1 or I decided. So it's really just x equals t. The motion of a light ray is just x equals t. And here's the light ray. That's this line and the other line along there that this point A lies on is x equals vt plus 1. How do we find the intersection? We substitute one equation into the other and it just says t is equal to vt plus 1. On the first equation I just said x equal to t or t times 1 minus v is equal to 1 or even better, t is equal to 1 over 1 minus v. So we now know where the time of this point A is. This is point A. What about the x of this point A? I would like the full set of coordinates of this point. So let's box around it. What's the x of that point? Well, you can write it as vt plus 1 and plug in what t is, but you can be smarter and you can say, look, this line is x equals t over here. The green line is x equals t. So all along the green line, x is equal to t. In other words, x is equal to 1 over 1 minus v. Now let's go to line AB. The next thing to do is to figure out line AB, which will ultimately figure out where it intersects x equals vt plus 2. It's a few steps. It's a few steps, but they're fun to do. So we'll go through them. And I don't know a shortcut for this. I don't know any shorter way to do this. What about the line AB? OK. Every line, which is at 45 degrees but pointing downward to the right, in other words, 45 degrees, but going upward to the left, is a line that has the property that x plus t is constant along that line. Every line moving upward to the right with 45 degrees is x minus t is a constant. Lines, which are at 45 degrees but upward to the left, are lines x plus t equals a constant. Let's take this one. It's x plus t is equal to something. What thing? 
Well, one easy way to find out what I should put on the right-hand side here is just to take one point on that line and plug in the value of x plus t. X plus t is the same all along this line. And in particular, at point A, x plus t happens to be twice 1 over 1 minus v, in other words 2 over 1 minus v. It's just this plus this, x plus t. So all along line AB, x plus t is equal to 2 over 1 minus v. So we've now found line AB. We want to find point B. So there's one more step. And that step is to find the intersection of line AB with x equals vt plus 2. So we take x plus t equals 2 over 1 minus v, and we combine it with x minus vt equals 2. That's this equation over here. X minus vt equals 2. And we solve the two simultaneous equations, and that will tell us what the coordinates of B are. All right, so how do we do that? Well, the first step we can just solve by subtracting the two equations. That will get rid of the x. All right, so let's subtract. On the left hand side, we have t minus minus vt, which means t times 1 plus v. It's equal to 2 over 1 minus v minus 2. Well, I've done the subtraction. All right, I want to subtract 2 from a fraction, so the best thing to do is to put them over the same common denominator: 2 minus 2 times 1 minus v, all over 1 minus v. I've just multiplied and divided by 1 minus v. Okay, if you look at this carefully, you'll see the two's cancel, 2 minus 2. The term with v here will become plus. This whole thing becomes twice v over 1 minus v. That's equal to t times 1 plus v. One more step to calculating t. It's just to divide by 1 plus v. So what happens if you divide by 1 plus v? Well, you divide by 1 plus v: 1 minus v times 1 plus v. Too many steps. I hate doing algebra on the blackboard, but I don't see any way around it. I could tell you to go home and do it, but what's 1 minus v times 1 plus v? 1 minus v squared, right? So we can just write this as 1 minus v squared and get rid of the 1 plus v. That's t. But it's not really t that I want. I want x and t at this point. So how do I find the x of point b? This is the t of point b. We could call it tb. What about the x of point b? Well, all I need to do is substitute into this equation, t is equal to tb. So let's see what it is. xb is equal to 2 divided by 1 minus v, minus tb. I'm subtracting t from this equation, minus tb. And tb is 2v over 1 minus v squared. Looks ugly, but it's not so bad. When v is plus or minus 1? What's that? What happens when v is plus or minus 1? Then bad things happen, right? Right. When v is plus or minus 1, that means that the observer here is moving with the speed of light. And we get into a kind of degenerate situation. Well, you just plug in. What does happen? Let's see. First of all, this one over here. When v is plus or minus 1, tb becomes infinite. So what's happening when v gets to be plus or minus 1, let's say plus 1, is that this blue line is tilting over to become parallel with the light ray. Not horizontal, parallel with it. And when it's parallel with it, they never intersect. So the intersection is at infinity. Part of the game here, of course, is to realize that we will eventually disallow things moving faster than or as fast as the speed of light. For the moment, that's a fair question, and something bad happens. So it begins to smell bad to have somebody moving with the speed of light. All right, we can simplify this. Multiply by 1 plus v in both the top and the bottom.
And we get everything over the same denominator. And let's see what we have. We have a 2v. We have another 2v with a minus sign. They cancel. And we just have a 2. The remaining thing is a 2. So it's not so bad. xb is equal to 2 over 1 minus v squared. The only difference between them is that there's an extra factor of v in tb. Now, why did I go to all that trouble? I went to all that trouble because I wanted to figure out where the surface, so the line, it's a surface in higher dimensions, where the surface, when there's also other space dimensions like y and z, where the line is which corresponds to being simultaneous with the origin here. And it's a line which passes right through that point. We could have found the other points in the line by using smaller or bigger intervals for the same argument, same argument, but with smaller or bigger intervals. And we would have found that all the places which are simultaneous with the origin lie on the line which passes through the origin and which passes through the point b. What do we know about that line? We know that it has a constant slope. What is the slope? The slope, of course, is the ratio of t to x all along that line. And what's the ratio of t to x? It's just tb over xb, which is just v. The ratio of tb to xb is the ratio of this to this, and it's just plain v. That's all. So the slope of this line here is v. Another way to say it is that this line is t equals vx. t equals vx. All along this line, the moving observer says everything is simultaneous. So however the clocks, however moving clocks work, all the clocks along here, if they're synchronized, if they've been synchronized by Einstein's procedure, they will all read the same time. They will all read t prime equals 0 all along here. Why? Because this operation here was exactly the operation to synchronize these two clocks. If this one reads t prime equals 0, so does this one, all along here. Now look at the diagram. Let me simplify the diagram. Let me make it get rid of a lot of the stuff on it. We have, first of all, a light ray moving at 45 degrees. We have x equals vt. That's the moving observer, the center of the moving observer. And then over here, we have t equals vx. Notice the symmetry of these two. x equals vt, and t equals vx. The meaning of this symmetry is that this line over here is just the reflection of this line. It's just related by interchanging t and x, flipping about the green line here. Another way to say it is that they're at the same angle. This angle here is the same as this angle here on this diagram. So we've discovered something interesting. If Einstein is right, and if the speed of light is the same in every reference frame, and you use light rays to synchronize clocks, then what's synchronous in one frame is not the same as synchronous in the other frame. That's the first thing. And the second thing is we've actually found what synchronous means in the moving reference frame. It corresponds to surfaces which are not horizontal, but which are tilted a little bit. Tilted by slope v. That solves the problem of what simultaneous means in different frames, and they're not the same. OK, next. Can we say more about the relationship between xt and x prime and t prime? We don't need this over. Well, we can just lift it up. Now, here's what we know. We know that x prime equals 0 whenever x equals vt. That x equals vt is the same as x prime equals 0. So let's begin writing x prime is equal to x minus vt. But we don't know that that's correct. 
All we know is that when x is equal to vt, x prime is equal to 0. This could be incorrect by some factor. Let's call it f. That factor could depend on the relative velocity of the two frames of reference. So let's write f of the velocity between the two frames. And I mean the magnitude. How fast is it going? The magnitude of the velocity. I use script v here to indicate the magnitude of the velocity. The velocity v can be positive or negative. The two reference frames could be moving either way. The moving frame could be moving backward. It could be moving forward. So v could be plus or minus. But the function that would go here, we might guess, is just a function of the magnitude of the velocity. Let's assume that for the moment. And that's the general answer for what x prime is. Although we don't know yet anything about what that function, f of v, is. What about t prime? Well, we know that t prime is equal to 0 whenever t is equal to vx. We knew that x prime was 0 when x equals vt. We know that t prime equals 0 when t equals vx. Just interchange x and t. So it must be then that this is equal to t minus vx times some other possible function of v; call it g. These two equations tell us that x prime is 0 whenever x equals vt. And it tells us that t prime is equal to 0 when t equals vx. In other words, it's just reflecting this relationship, that t prime is 0 when this is true and x prime is equal to 0 when that's true. OK, now we use some physics. We use the physics that Einstein said: the speed of light is the same in both reference frames. What does that say? The speed of light is 1 in the stationary reference frame. So it must also be 1 in the moving reference frame. What that says is that whenever x equals t, and x equals t is along the green line, it must also be true that x prime equals t prime. If we have a light ray and the light ray moves with velocity 1, this goes right back to the first thing that we demonstrated, when we took a light ray that went like x equals ct and then we transformed from one frame to another. Remember what we got the first time around? We got c minus v and c plus v. Let's do the same thing now and say, supposing the light ray satisfies x equals t, then it must also be true that x prime equals t prime. How can we arrange that? Well, it's actually very easy. Let's suppose x equals t. Then for that curve, x minus vt can be written as x times 1 minus v, and t minus vx can also be written as x times 1 minus v. What do we have to do to make this equal to this when x is equal to t? If x is equal to t, these two will be the same. So what's in the bracket here will be the same as what's in the bracket here when x is equal to t. If we want x prime to equal t prime, there's only one choice. That's to make f equal to g. That's the only way. f must be the same as g. If f is not the same as g, the motion of the light ray in moving coordinates will not be with velocity 1. So we come to the conclusion that Einstein's rule tells us that whatever the connection between these coordinates is, it involves the same function here. What more do we know? What more do we want to know? We want to know what this function of v is. We would dearly like to know what that function of v is. Once we know it, we know completely how the coordinates in two reference frames are related to each other. So we'll use one more ingredient. Incidentally, I'm practically regurgitating Einstein's first paper on the subject.
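Collecting the two guesses and the light-ray condition just used into one place (a compact restatement, with c = 1):
\[
x' = (x - vt)\, f(v), \qquad t' = (t - vx)\, g(v) ,
\]
and on a light ray, where x = t, both brackets equal (1 - v)\,t, so demanding x' = t' along it forces f(v) = g(v).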
I haven't read it for probably 50 years, but it has left obviously a very lasting impression on me. This is basically what he did in that first paper. Well, more or less. OK, so now he said, wait a minute. Who's to say which frame was moving? Who's to say if my frame is moving relative to you with velocity v, or your frame is moving relative to me with velocity minus v? Whatever the relationship between the two frames of reference is, it should be symmetrical. We could take the entire argument, and instead of starting with the x and t and drawing an x prime and t prime, we could do exactly the opposite. The only difference would be that as far as I'm concerned, you're moving with velocity minus v. As far as you're concerned, I'm moving with velocity plus v. So we can immediately write down what the relationship between x and t is in terms of x prime and t prime. Here it is. I'll write it down for you. Let's write it over here. It must be that x is equal to x prime. Now, what shall I write? Minus vt prime or plus vt prime? Plus vt prime. Why? Because the relative velocity has the opposite sign. Times a function of the same magnitude of the velocity. The magnitudes of the velocity are the same: if I'm moving relative to you with velocity v, you're moving relative to me with velocity minus v. Let's assume this function is just a function of the magnitude of the velocity, which is the correct answer in the end. And t is equal to, what is it, t prime plus v x prime, times the same function of v. The question is, are these two sets of equations compatible with each other? After all, I didn't really need to guess the relation between x and t over here. I could have just solved this equation for x, these two equations, for x and t. So I want to make sure that this equation and this equation are compatible with each other. I'm going to do it in a fancy way. Of course, that compatibility will determine what f is. Here's the way you would do it. Here you have x prime and t prime in terms of x and t. Now, supposing you take the x prime and t prime from here and plug them in over here, what will you get? You'll get x in terms of x and t and t in terms of x and t. How can that make any sense? It can only make sense if what you get is x is equal to x and t is equal to t. That's going to involve choosing f carefully. But I will show you, there's more than one way to do this. Maybe we should just do it by plugging in. Yeah, let's just do it by brute force. I had a fancier way to do it, but OK, let's do it. x is equal to x prime, which is what? Somebody's got to read it off to me: x minus vt times f of v. That's this guy over here. Plus v times t prime, which is t minus vx, also times f of v, all times f of v, right? All times f of v. So there's an f squared of v here, right? Now, as I told you, the only way this can make sense is if it just reads x is equal to x. It's the only way it can make sense. And if I did the same thing for t, plugging in t prime and x prime here, I'd better just wind up with t is equal to t. But let's look at it. What does it take? Let's combine things. We have x times f squared over here. And then we have v squared x times f squared, that's from here. Plus or minus? Minus. Minus v squared x times f squared. And then what about the t dependence? The t dependence is minus vt here, minus vt times f squared.
And here we have vt times f squared. This is minus vt times f squared. This is plus vt times f squared. Good, the signs are opposite. The t's cancel automatically. We've done something right, because the t's cancel automatically, leaving only the x's. And the x's have x times f squared minus v squared f squared x. Or in other words, x times 1 minus v squared times f squared. And that, by damn, has to be equal to x. How do we arrange that? Well, we arrange that just by choosing 1 minus v squared times f squared to be 1. In other words, it tells us what f has to be. And it tells us that f has to be 1 divided by the square root of 1 minus v squared. So cancel out the x. So we have 1 is equal to 1 minus v squared times f squared. f is equal to 1 divided by the square root. That's it. That's it. What about the other root? That's probably some interchange of plus velocity and minus velocity. Let's see. So that would be minus. It sounds like it's an inversion of coordinates or something. It's probably an inversion of coordinates. I don't know. I never thought about it. Yeah. That probably corresponds to the possibility of reflecting the coordinates, x goes to minus x. Good question. I'm not sure what the answer is. Let's ignore it for the moment. It's got two roots. I'm not sure what it says. OK, so now we have the answer. Oh, I know what the answer is. OK. The two roots are plus 1 over the square root of 1 minus v squared and minus 1 over the square root of 1 minus v squared. All right. Now I'll tell you which one it is. First of all, which root you pick, plus or minus, should not depend on the velocity. There shouldn't be some point in velocity at which you suddenly jump from one root to the other root. That would be a very discontinuous thing to do as a function of the velocity. So you say, what do I know? Well, what about for velocity v equals 0? Certainly for velocity v equals 0, in other words, when the two frames are not moving relative to each other, we want the two frames to be the same. So that says that x prime equals x, not x prime equals minus x. The two roots here would correspond to the two possibilities, x prime equals x or x prime equals minus x. Surely, when the frames of reference are at rest relative to each other, we want the solution to just be x prime equals x and t prime equals t. So that says, pick the positive root, plus 1 over the square root of 1 minus v squared. OK, let me write this more completely. Let me write it again. Let's write it the usual way. And I assume you recognize this. x prime is equal to x minus vt over the square root of 1 minus v squared, and t prime is equal to t minus vx over the square root of 1 minus v squared. Notice they have the same form except with x and t interchanged: x prime is x minus vt, t prime is t minus vx, all divided by the square root of 1 minus v squared. These are, of course, the Lorentz transformations. These are the Lorentz transformations, and they've been cooked up to make sure that in every frame of reference, the speed of light is 1. What do you do if the speed of light has not been chosen to be equal to 1? What if it's just c, where c is whatever it is in the units you choose? Then to figure out what's going on here, all you have to do is say, well, if I'm going to use other units, I must have these equations be dimensionally consistent. Now, a velocity is a length over a time. Well, an x is a length, a time is a time, this is a length over a time, so the numerator here, the two terms, are consistent with each other. That's fine, no problem. But what about the square root of 1 minus v squared? The units of 1 are just 1, they're dimensionless.
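Here is the consistency argument and its outcome gathered in one place, still in the c = 1 units (a summary sketch of the algebra just done):
\[
x = \big[(x - vt) + v\,(t - vx)\big]\, f^2(v) = x\,(1 - v^2)\, f^2(v)
\quad\Longrightarrow\quad f(v) = \frac{1}{\sqrt{1 - v^2}} ,
\]
taking the positive root so that v = 0 gives the identity, and therefore
\[
x' = \frac{x - vt}{\sqrt{1 - v^2}}, \qquad t' = \frac{t - vx}{\sqrt{1 - v^2}} .
\]
These are the formulas whose units are being examined next.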
The units of a velocity are a length over a time. So that does not have units of 1. So this velocity squared here must not really be velocity squared. The only way to make it dimensionally consistent is to make this v squared over c squared. In other words, had we kept the speed of light in everywhere, this v squared really would have meant the velocity in units of the speed of light. By setting the speed of light equal to 1, we equivalently said that velocities are measured in units of the speed of light. v equals 1 half would mean half the speed of light. So the only way to restore the dimensions and get the dimensions right in this equation is, wherever you see v squared, put v squared over c squared. Now this equation is not quite right. What's wrong with this equation? The left-hand side is a time. The denominator is OK. The right-hand side starts with a time, but is v times x a time? You need to divide by c squared. v is a length per unit time. But if I multiply that by a length, I get a length times a length over a time. That doesn't sound right. So this v must really be v over c squared. Yeah, v over c squared. Right, v over c squared. That's the only way to restore the units. And now these are the recognizable Lorentz transformations. Notice that as long as the velocity is very small compared to the speed of light, v squared over c squared is even smaller. If v is a tenth of the speed of light, v squared over c squared is 1 over 100. If v over c is 10 to the minus 4 or 10 to the minus 5, this v squared over c squared is truly a small number. And the square root just becomes 1. What do we get? We get x prime is equal to x minus vt. That's the good old Newton version of things. What about the second equation here? If v over c is very small, in other words, let's now plug in some numbers. Supposing v is 100 miles an hour. Let's even make it bigger. Let's make it 100 miles a second. c is 186,000 miles a second, and c squared is enormously big. So v over c squared is a very, very tiny number. And so this term here, if the velocity of the reference frames is relatively slow, is negligible. Also, in 1 minus v squared over c squared, the v squared over c squared is negligible. Therefore, the rule is t prime equals t. For reference frames moving slowly relative to each other, it's just the good old Newtonian type formula. This goes away. This goes away. v over c squared is negligible. In other words, wherever you see c in the denominator, ignore that term because it's just too small to count. So this is good. As long as we're moving slowly compared to the speed of light, we get the old answer. But when we get up near the speed of light, there are huge corrections. When we get up near the speed of light, big things happen. OK, those are the Lorentz transformations, which I assume you've seen before. You may have even seen the derivation before. Incidentally, what happens to y and z, the other components of space? Now, we've been very specific in talking about a frame of reference moving along the x-axis. The relative motion of the frames is along the x-axis. But what happens to the y direction when you do this? Well, physically, what does it mean? If your arm was the same length as my arm, stick your arm out. Stick your arm out. And we went past each other. The question is, when we met at this point, would our arms match? Or would yours be longer than mine?
Well, just by symmetry, just by symmetry of the problem, we could either refer to somebody sort of moving halfway between us, or it's clear that our arms are going to match. Because there's no reason for one to be longer than the other. So therefore, the rest of this Lorentz transformation would involve y prime equals y and z prime equals z. In other words, things only happen in the plane, the xt plane, where the motion is along the x-axis. x and t get mixed up with each other in this funny way. y and z are just passive. They don't do anything. They just stay the same. Your y and my y are the same, no matter how fast we're going. But our x's and t's get mixed up with each other in this way. That's special relativity. Let's talk very briefly about time dilation and space contraction. While we're at it, we might as well do it. We have all the ingredients we need. I'll just do a couple of illustrations. Again, draw a picture. Let's first talk about what we're doing. I have a meter stick and I'm walking past you. Or do I want you to be the, no, you're the ones who hold the meter stick. You hold the meter stick. You got the meter stick? Hold it there. Now, you know that that's a meter. I walk past you and as I walk past, I ask how long is your meter stick? I measure it relative to my meter sticks. But I have to be very careful what I mean by that. In particular, since I'm moving, if I'm not careful, I will be measuring the end points of your meter sticks at different times. In fact, I want to be measuring the end points of your meter sticks at a common time which is synchronous to me. Why do I want to do that? Because that's what I mean by the length of your meter stick in my reference frame. I mean the length of your meter stick relative to my meter sticks where the two end points of your meter sticks are examined at the same time in my reference frame. That's the definition of what I mean by the length of your meter stick. That I look at the end points of your meter sticks at a common instant of time. And I measure the distance between them with my measuring rods. So let's draw a picture to indicate what that means. The meter stick is at rest in your reference frame. It goes from x equals 0 to x equals 1, 1 meter. Here's one end of the meter stick. Here's the other end of the meter stick. Of course, they exist at all times. They're standing still. And so the end of the meter sticks are just vertical. They're just vertical lines here. Now I want to know, in my moving reference frame, so again, we draw in the moving reference frame. This is t prime equals 0. Here's t equals 0. This is x prime equals 0. This is x equals 0. And what do I want to know? In my moving reference frame, here's one end of the meter stick as I pass it right over here. Here's the other end of the meter stick at time t prime equals 0. Not at time t equals 0, but in my moving reference frame, I say that this is the two ends of the meter stick at time t prime equals 0. What do I want to know? I want to know what x prime is at this point. I want to know what x prime is at that point, at the point t prime equals 0. Well, this is not hard to figure out. First of all, let's see, t prime, let's go back and set the speed of light equal to 1. I don't want to carry around the speed of light. Here's the Lorentz transformations. And let's look at this point right over here and see if we can figure out what its coordinates are. It's x equals 1 and it's t prime equals 0. x equals 1, t prime equals 0. 
So t prime equals 0 means that t equals vx. This surface is t equals vx all along here. So that's the first thing I know. I know that t is equal to vx. And I also know that x is equal to 1. So let's see if we can find out what x prime is. x prime is equal to x, which is 1, minus vt; but t is equal to vx. Everybody got the logic? The logic is a little bit slippery here. Let me go back through it. We're trying to figure out what the moving observer ascribes to this point at the other end of the meter stick at an instant of his time. So we try to figure out what the coordinate x prime is at the end of the meter stick. What do we know? We know that t prime is equal to 0 over here and we know that x equals 1. So we use the Lorentz transformations. Why isn't x prime just equal to 1? Because this is the meter stick at rest. The meter stick is at rest in your frame. This is being measured from the moving frame. Yes, we're trying to measure it from the moving frame, which means we're trying to find out what x prime is over here. So we have two pieces of information. One is that t prime is equal to 0. The other is x equals 1. First step, use t prime equals 0. t prime equals 0 tells us that t is equal to vx. So t is equal to vx. We're trying to find x prime. So x prime is equal to x, which is 1, minus v times v times x, that is v squared x, divided by the square root of 1 minus v squared. And x, that's just equal to 1. In fact, I don't even really need to say it's equal to 1. I could just say it's x. But let's just say it's x equals 1. So what do we find? We find that x prime is equal to 1 minus v squared over the square root of 1 minus v squared, which is just equal to the square root of 1 minus v squared. So the moving observer says that at an instant of time, meaning to say along this surface of synchronous times here, the two ends of the meter stick are separated in his frame, in the moving frame, by a coordinate distance, a number of meters in the moving frame, which is given by the square root of 1 minus v squared. In other words, the meter stick looks in the moving frame of reference to be a little short. It looks a little short compared to what it was at rest. Notice, though, that they're really talking about two different things. For the meter stick at rest, we're talking about the distance between this point and this point as measured by stationary meter sticks. In the moving frame, we're talking about the distance between this point and this point measured by moving measuring rods. They're not really the same points of space and time here. So there's no real contradiction that one looks shorter than the other. You can do the other calculation, the sort of opposite calculation; I'll leave it to you. Think of the moving meter stick now. Here's the moving meter stick. If it's one unit long, what do we know about this line here? Is it x equals 1? No. It's a meter stick one unit of length in the moving reference frame. That means it's x prime equals 1. In the moving reference frame, it is 1 meter ahead of the other end of the meter stick, of the tail end of the meter stick. And so it's x prime equals 1, not x equals 1. And now the observer at rest sees the meter stick, the moving meter stick, being this long. So he has to calculate what this point is, what x is at that point. You can do it, and you'll find out that it's also shortened by the square root of 1 minus v squared. The moving meter sticks look short to the stationary reference frame.
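The length-contraction bookkeeping, collected (c = 1; the stick has unit rest length in your frame):
\[
t' = 0 \;\Rightarrow\; t = vx , \qquad x = 1 \;\Rightarrow\;
x' = \frac{x - vt}{\sqrt{1 - v^2}} = \frac{1 - v^2}{\sqrt{1 - v^2}} = \sqrt{1 - v^2} ,
\]
so a rod of rest length L is assigned length L\sqrt{1 - v^2/c^2} in the frame in which it moves (with the factors of c restored as before).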
The stationary meter sticks look short to the moving reference frame. There's no contradiction. They're just talking about different things. The stationary observer is talking about lengths measured at an instant of his time. The moving reference frame is talking about lengths at an instant of its own time. So they're talking about different kinds of things, different notions of what they mean by length. So you say the meter stick looks shorter? Well, forget look. Forget look. Let's be very precise about it. I hold my meter stick, and I move with it. And at some instant of time, remember, you people have synchronized your clocks. So at exactly 12 noon, somebody is going to see the tail end of my meter stick pass them, and somebody else is going to see the front end of the meter stick move past them. So they've got their watches synchronized, and they've instructed each other: at exactly 12 noon, see who along here is contiguous or adjacent to the two ends of the meter stick. And then after you've done that, compare notes and say, how far apart were you? That's exactly what we mean. Forget saying that it looks shorter. It means something very specific with synchronized watches and synchronized clocks. Just ask exactly where the ends of the meter stick were in your reference frame at an instant of your time. I can do exactly the same thing. I have all my buddies lined up. They also have clocks. We've also done the synchronization. My synchronization is not the same as your synchronization, but we've done our synchronization. Now you hold it. Who was holding the meter stick before? No, I was holding the meter stick a second ago. Now one of you people is holding the meter stick, and I march past, and I instruct all my people: at exactly our time 12 noon, look at the stationary meter stick, and see how far apart the ends are. In both cases, it will look a little bit short from the frame of reference in which the meter stick seems to be moving. If they didn't look the same from each perspective, wouldn't that imply an absolute reference frame? It would indeed. But since we've established and made sure that the relationships between x's and t's and x primes and t primes are essentially the same relations, except with v replaced by minus v, we know that that can't happen. The mathematics of each half of it will look exactly the same as the previous one. So you can go through it, check this out. The last thing to talk about before we move on to the motion of particles. Yeah? This isn't a meter stick, is it? Because our units aren't meters. What's that? Say it again? It's not a meter stick, because our units aren't meters. It's a light second stick. You mean this is not a meter? Yeah, because we're assuming c equals 1. Oh, indeed. Well, of course, it depends on our choice of time. You could use for time the time interval that it takes light to go one meter. But then we'd be right. Yes, you're right. You're right, but I'm also right. Good. OK. Yeah, you're right. What about time dilation? Time dilation we can pretty much work the same way. Supposing we have a moving clock. And for the moment, I'm going to assume my clock is moving with uniform velocity. In particular, we have a clock that's moving with the moving observer. My clock. I'm moving. And my clock is sitting right on top of me. It's not one of my friends' clocks. It's my clock. And it is moving along a trajectory like that. Here's the question I want to ask.
At the instant when my clock reads, to me personally, t equals 1, let's say (maybe we could be a little more general, but let's say 1), the question is: at that point, what is the time in your reference frame? So when I say t equals 1, do I mean t equals 1 or do I mean t prime equals 1? My coordinates are t prime. So I have my clock. It's my standard wristwatch. It was made by the Rolex company. It's a good watch. It works well. The guy sold it to me for $25. I got it in New York from a guy. But it works well. So it's t prime equals 1. One second, or one unit: t prime equals 1. And I want to know what the corresponding value of t is all along the horizontal surface. The horizontal surface is the surface that you call synchronous, that you call an instant of time. All right. Now, what else do we know? We need two things in order to pin this down. We also know that x prime is equal to 0. x prime is equal to 0 all along here. And t prime is equal to 1. Let's see if we can find out what t is. So we write out the Lorentz transformations. x prime is equal to x minus vt over the square root of 1 minus v squared. I don't think we're going to need that one. And the other one is, oh, I'm sorry, this is not the form we want. We want the other form, which goes: x is equal to x prime plus vt prime over the square root of 1 minus v squared, and t is equal to t prime plus vx prime over the square root of 1 minus v squared. I simply used the other form of the Lorentz transformation, where I write x and t in terms of x prime and t prime. And to do it, I've replaced v by minus v. So this is the other Lorentz transformation going the other way, telling me the x and t coordinates in terms of x prime and t prime. OK, so t prime is equal to 1. x prime is equal to 0. And what I want to find out is t. That's right over here. Let's see what it is. t is equal to t prime, which is 1, plus v times x prime, which is 0, all divided by the square root of 1 minus v squared. So t is 1 over the square root of 1 minus v squared. Right. So is the time over here bigger or smaller than one? Bigger. Bigger. The time interval over here is longer than the time interval measured by the moving observer, by a factor of 1 over the square root of 1 minus v squared. This is the origin of the twin paradox. The twin paradox involves reversing the motion of the twin over here. Two twins, one at rest, one moving. Reverse them, bring them back together again, and ask how much time has elapsed in the two reference frames. Well, one of these reference frames is not an inertial frame, the one which bends and goes around. But we've already calculated how much time there was between here and here. That's 1 over the square root of 1 minus v squared. In other words, we found that there's a little bit less time along here than along here. The same thing is true for the second leg of the journey. The implication is that the time measured in the moving frame is a little bit less, by this factor, than the time measured in the frame of reference of the twin that stays at home. And so the twin that stays at home is a little bit older than the one who moves. We're using here a basic idea that every clock participates in this in exactly the same way, and in particular, the clock that has to do with biological aging also slows down. But this is just a statement that whatever we've said here applies to every kind of clock, not just Rolexes.
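And the time-dilation bookkeeping in the same compact form (c = 1; the moving clock sits at x' = 0 and reads t' = 1):
\[
t = \frac{t' + v\,x'}{\sqrt{1 - v^2}} = \frac{1}{\sqrt{1 - v^2}} ,
\]
so one tick of the moving clock corresponds to 1/\sqrt{1 - v^2/c^2} ticks of the synchronized stationary clocks.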
Yeah? I have a hard time accepting... What's that? I can't hear you; say it again. I have a hard time accepting the hypotenuse being shorter than one of the sides. Yes. Isn't that interesting? We're going to come to that. Yes. That's right. That's exactly right. No, it's not right, but I mean, it's exactly something that you might be puzzled by. So now let's talk about exactly that. Let's talk about exactly the issue of the hypotenuse, and whether it's shorter or longer than the sides of the triangle. Yeah? You were saying that for the twin that goes away and comes back, less time has passed for him. But from his point of view, we flew away. No, no, no. Remember, these two frames are not symmetrical with respect to each other. One of them underwent an acceleration. They're really not symmetrical. The twin who got sent out and sent back knew that he was the one who went on the curved trajectory, because he experienced an acceleration. That twin was not in an inertial frame. So is that the cause then? Is it because of the acceleration? You can think of it that way. You can just calculate it, or you can think of the acceleration as the cause, if you like. It's a legitimate way to think about it. But the mathematics is pretty similar to the statement that if you take two points in the plane, ordinarily just two points in the plane, and you connect them by a straight line, or you connect them by a curve, don't be surprised that the length along the straight line is not the same as the length along the curve. In fact, the length along the curve is, of course, longer than the length along the line. And in the relativity case, we actually found that the time along the curve was a little bit less than the time along here. So there's something going on that's different than ordinary lengths, but it's a similar phenomenon: the length along the curve is not the same as the length along the straight line. The straight line is the shortest distance between two points. Well, in fact, in relativity... we've erased the diagram. Did I erase it? Yeah, here it is. As was mentioned, the time along the hypotenuse is less than the time along the vertical axis here. So there's something funny going on, and I want to talk about that funny thing that's going on. Let's just think about Euclidean geometry for a minute. Think about the plane, and let's think about different coordinate systems on the plane. Coordinate systems just related to each other by rotation. Now, there's an origin of coordinates. Let's call this x and y. This is now x prime and y prime. This is not a moving reference frame now. This is just a rotation of coordinates, and I'm using it to make a point. But let's take a point somewhere in the space over here. It's the point x, y. The two reference... well, let me call them reference frames; I really mean coordinate systems. The two coordinate systems do not ascribe the same values of x and y to this point. Obviously, the x and y of this point here are not the same as the x prime and the y prime. They're related to each other. If you know x and y and you know the angle between the coordinates, you can figure out x prime and y prime, but they're not the same. But there is something that is the same, no matter how you calculate it, whether you calculate it in the primed coordinates or the unprimed coordinates. And you know what it is. It's what? The distance from the origin. Everybody will agree on the distance from the origin. In fact, they'll also agree about the square of the distance from the origin.
So what is the square of the distance from the origin from the point of view of the unprimed coordinates? Well, it's just Pythagoras' theorem. It's just x square plus y squared. That's the square of the distance. What about from the point of view of the prime frame? In other words, the quantity x squared plus y squared for a point here is an invariant. Invariant means that it doesn't depend on which coordinate system you work it out in. It's invariant under changes of coordinates, which in this case are just rotations of the coordinates. So we say then that x square plus y squared is an invariant quantity. And it's the same no matter what frame you measure it in. Question is there an analogous quantity associated with the Lorentz transformations here, which is invariant? In particular, here's something we might guess. Let's try it out. Let's try it out and see if there's something similar going on in tx space. Here's t is x. This point is characterized by a t and an x, but it's also characterized by a t prime and an x prime. It has a t and an x or a t prime and an x prime. It has coordinates in the unmoving frame and the moving frame. What is the relationship between them? It's the Lorentz transformations. But we might guess, we might ask, is there something similar that's invariant? Let's make a guess that it's t prime squared plus x prime squared equals t squared plus x squared. Let's try that out and see if it's true. In terms of x and t, we know x prime and t prime, so we can work it out. Let's see if it's true. OK, so t prime, let's start with t prime squared. We just take this equation over here, t prime squared has. It has t squared. It has plus v squared, x squared. And it has minus 2 v tx divided by 1 minus v squared. Did I do that right? That's t prime squared. What about x prime squared? That has x squared plus v squared t squared minus 2 v tx. Right? Question, does this equal t squared plus x squared? Hell no. It does not. And in particular, one thing you can see immediately is that the xt term here adds to the xt term here. They do not cancel. There's no xt here. They can't be the same in general. And they're not the same. They're simply not the same. But now if you look carefully at it, you might notice that the x t term here you might notice that if we subtract them, the v tx terms do cancel. Let's see what else happens. So let's try subtracting them. This is t prime squared minus x prime squared. And let's not prejudice by what's on the right-hand side here. Let's find out what's on the right-hand side. t prime squared minus x prime squared is this difference. It contains the cross term here. This appears. They cancel now because of this minus sign. And how about x squared? Let's see what x squared has. x squared has v squared minus 1. v squared minus 1. And then this plus t squared. And t squared has 1 minus v squared. 1 minus v squared all divided by 1 minus v squared. 1 minus v squared is cancel. And you just get t squared minus x squared. This is minus, minus plus. OK, so we've discovered something. We've discovered that there's an invariant quantity under any Lorentz transformation. The combination t prime squared minus x prime squared is the same as t squared minus x squared. It's almost like Pythagorean theorem that there's a kind of notion of distance from the origin here, which is composed out of the base of the triangle, namely x, and the height of the triangle that stays the same if you do a Lorentz transformation. 
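The check just carried out, written in one line (c = 1):
\[
t'^2 - x'^2 = \frac{(t - vx)^2 - (x - vt)^2}{1 - v^2}
= \frac{t^2\,(1 - v^2) - x^2\,(1 - v^2)}{1 - v^2} = t^2 - x^2 ,
\]
the cross terms \mp 2vtx cancelling in the difference.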
In other words, if you use other coordinates, calculate x prime and t prime, you find out that the combination t squared minus x squared is the invariant. You could also take it to be x squared minus t squared, incidentally. x squared minus t squared and t squared minus x squared. Pick your, take your choice. This is an important thing to know. It's very important to know what the invariants are. The components of a vector or the components of a displacement are not invariant. They depend on which coordinate system you're using. The things which are invariant, things which all observers agree upon, are, in this case, the, I'll give it a name in a moment. Let's call it the spacetime distance. The spacetime distance, which is defined with this funny minus sign, is invariant. All observers. So it must be an important quantity. OK, let's see if we can figure out what it really means. In particular, let's suppose that between here and here, that we happen to be talking about the motion. In other words, let's assume, let's take a special case. Yeah. Let's take the two points to be along the motion of a moving observer. OK? All right. Then for this moving observer, what is t prime squared minus x prime squared? Well, x prime at this point is just plain zero. This is the point x prime equals zero along here. So it is just t prime squared. In fact, we can write immediately that t prime squared must equal t squared minus x squared. Why do I say that? This point over here is the point t prime and x prime, but it's x prime equals zero. So its distance from the origin, t prime squared minus x prime squared, is just plain t prime squared. On the other hand, because this quantity is invariant, it must also be t squared minus x squared. Um, what is this quantity t prime? It's the reading of a moving clock, a clock moving along here, which starts at t prime equals zero over here, will read t prime over here. That's what it is. It's the reading of a clock that has moved on a straight line between one point and another point. It's how many units of time have occurred in the frame in the moving, well, in the frame that connects these two points. It's called proper time. It's called the proper time, and everybody will agree on what the proper time is along that trajectory. They will calculate it the same way. They will take the two points and calculate the square of the difference of the time interval between the two points and the space interval between the two points in any frame whatever. You pick your frame, and you calculate t squared minus x squared for this point. And that tells you how much time evolved between the clock reading over here and the clock reading over here. It's called the proper time of that clock. But you can also just think of it as a measure of distance along a line connecting two points. It's not a measure of ordinary distance. It's a measure of spacetime distance, called the proper time. It's this minus sign here, which gives the triangles the peculiar property that the hypotenuse is shorter than the height of the triangle itself. Why is that? The minus sign makes it smaller. The minus sign makes it smaller. But in many other ways, this functions as the notion of a distance in spacetime between two points. Are there any questions about that? Yeah? So up there where we actually drove the Lorentz transformations, we used alve. The rove? Derived. I don't know. Some kind of a transformation, I don't know. We used angles and slopes and matched coordinates. 
But what this is saying is you can't do geometry, at least not a lot of it. No, no. That's a little late. But we never used any notion of lengths in this. We just used coordinates. We never really, yeah, that's right. You can't use ordinary notions of Euclidean geometry on spacetime. But still, we were very careful and simply defined coordinates in terms of how clocks would read, and we derove the invariant. We derove the invariant, and that's it. That's the invariant. Right. So relativistic spacetime is kind of like ordinary Euclidean spacetime, but with a funny difference that you take the difference between t-squares and x-squares, and that's the thing which everybody will agree on. And you can think of it as a kind of length. Now, there is one funny property of it. What about the points? Let's take the point. Let's go back to this here. Let's take one point at the origin, and let's take some other point. We can just use x and t. Let's just use x and t now. x and t. Another point at x and t. Is it possible that the distance between these two points, and these are different points. This one's at the origin. This one's not at the origin. Is it possible that this notion of distance just gives zero distance between them, even though they're not the same point? In ordinary Euclidean geometry, if the distance between two points is zero, the points are at the same place. Good. What about in this crazy kind of geometry with the minus sign? Supposing the distance from this point to the origin is zero, what does that mean? That means that x squared minus t squared is equal to zero. All right, x squared minus t squared can be equal to zero without x and t being zero, incidentally. x squared plus y squared equals zero implies that both x and y are zero. x squared is positive, y squared positive or zero. y squared is positive or zero. If the sum of these is zero, it means either one of them. Both of them have to be zero. But if x squared minus t squared is equal to zero, it doesn't say any such thing. What does it say? It says either x is equal to t or x is equal to minus t. One or the other, x is equal to minus t. One or the other, what does x equals t mean? It means a light ray. It means that a light ray could go from this point to that point. They're connected by a possible light trajectory. Same for the other one. x equals minus t would be over here. Again, light ray can go from one to the other. So one of the facts here is that if the distance between two points is zero, it doesn't mean the two points are the same point. It means that a light ray can connect them. Still, nevertheless, it does mean that the proper time between the two of them is zero. In other words, if a light beam could carry a clock with it, there would be zero ticks of the clock between here and here. Clocks that move with the speed of light simply don't tick. Stick, they don't tick. Right. Yeah? Could proper time just as well be called proper length? OK. Yeah, that's a good question. That depends on whether the two points, here are two points, which could not be the path of a moving object, of a clock. Why? Because the velocity, in order to go from here to here would have to be greater than the speed of light. If we assume nothing can go faster than the speed of light, then this can't connect these two points. That's the condition, basically, that x is bigger than t. Here, x is less than t. If x is less than t, a light ray can get from one point to another. If x is bigger than t, it can't. 
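To put some made-up numbers on the twin discussion, here is a small numerical sketch (c = 1 throughout): a clock carried at velocity v from the origin to the event (t, x = v t) accumulates the proper time sqrt(t^2 - x^2) = t sqrt(1 - v^2), which is less than the stay-at-home reading t, and for a light ray it is zero.

```python
import math

def proper_time(t, x):
    """Proper time between the origin and the event (t, x), with c = 1.
    Only meaningful when t**2 >= x**2, so a clock can actually get there."""
    return math.sqrt(t**2 - x**2)

t = 10.0                       # the stay-at-home clock reads 10
for v in (0.0, 0.5, 0.9, 1.0):
    x = v * t                  # event reached by moving at velocity v
    print(v, round(proper_time(t, x), 3))
# v = 0.0 -> 10.0    no motion: proper time equals coordinate time
# v = 0.5 -> 8.66    the moving clock reads less, 10 * sqrt(1 - 0.25)
# v = 0.9 -> 4.359
# v = 1.0 -> 0.0     a light ray: zero proper time between distinct events
```

The last line is the zero-spacetime-distance case just described: the two events are different points, but a light ray connects them and no proper time elapses along it.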
If we're talking about two points in this type of configuration, they're called, and there's a word for it, they're called a relation between those two points is called space-like. Space-like means that the space component is bigger than the time component, or better yet, that x squared plus t squared minus t squared is greater than 0. And the point is up here, so that a trajectory can connect those two things, then t squared minus x squared is greater than 0. This is called space-like, this distance, and this one is called time-like. If the two points are space-like connected, then you would call this quantity x squared minus t squared, you would call the proper distance between them. If they're time-like separated, then you would call t squared minus x squared the proper time. So yeah, you can think of it also as defining a proper distance, but only if the t component is larger than the x component. All right, we're close to the end. There's only one for today. I was going to talk about some relativity paradoxes, but I don't think we have time today. What was the other thing I wanted to mention? I've forgotten. Are there any questions? Yeah? The last table I just want to do here, put C's and I put those. Oh yes, absolutely. So where would you put the C in this equation? We could put it here, C squared t squared, or we could put it in the denominator here. A distance divided by a velocity is a time. So this would be, and this would normally be called proper time as units of time. If we go up to x squared minus t squared, and we put the C squared here, we would call this proper distance. But that's a pure convention. It doesn't matter where you put the C squared, whether you put it here or here. Just make things consistent. The definition of proper time is usually taken to be t squared minus x squared over C squared. The square root? Where square root? Because the units for that would be time squared. But for proper time, would that be cool? Oh, sorry, proper time squared. Yeah, proper time squared. Thank you. Right, that would be proper time squared. Yeah, it's square. It's like a Pythagorean theorem. The notion of distance is to take the square root of this thing. Most of the times, you don't take the square root. But yes, proper time is the square root of t squared minus x squared. Thank you. That's important. Yeah. Is it useful to think of the rest transformations of rotation? Well, yeah. The trouble is it's a rotation by an imaginary angle. Imaginary in the sense of imaginary. Yeah. It is literally a rotation by an imaginary angle. So yes, it is sometimes useful to think that way. Right? At the beginning, you mentioned that Einstein added the fact that C was a law of physics. Was that to match the experimental data at the time? Or was it to be careful? Well, I know the history a little bit. Of course, I don't know the history. I know what a history is told by a participant. I didn't know the participant, of course. All right. I think there's every reason to believe him. I think Einstein was motivated by Maxwell's equations. He took Maxwell's equations to be a law of physics. Now, he was only 16 years old at the time when he started thinking about it, according to himself. He was 16 years old. And what he knew about Maxwell's equations was that they gave rise to these wave-like solutions and that the solutions moved in a certain way. And he was puzzled because he tried to figure out what would happen if you moved along with a light ray. 
Then you would see a static electric magnetic field that had a wave-like structure that didn't move. And he knew somehow that that was not a solution of Maxwell's equations. So according to him, that was the motivation that he was puzzled by the idea of moving along with the speed of light, the light ray. But I think beyond that, he knew Maxwell's equations, or at least at some point, he learned Maxwell's equations and took Maxwell's equations to be laws of physics. And Maxwell's equations say lights move with the speed of light. So I'm inclined to believe that that was the correct history, that he didn't know at the time the Michelson-Morley experiment. But who knows? He didn't know that. So don't you just say he didn't know about it? According to his own historical testimony about it, he did not know about it. I think there's every reason to believe you. He was surely smart enough. I mean, so the logic makes sense that Maxwell's equations, in modern terms, we would say it differently. We would say here are these Maxwell equations, and there's some symmetry of the Maxwell equations, some set of coordinate transformations for which, if you do them, Maxwell's equations have the same form in every reference frame. If you take Maxwell's equations, which contain x's and t's, and you plug in the old Newton rule or the Galileo rule, x prime is equal to x minus vt, t prime is equal to t, you'll find out that in the new primed frame of reference, Maxwell's equations change their form. They don't have the same form that they had originally. If you plug in the Lorentz transformations, you'll find out that the Maxwell equations and the new coordinates are identical to what they were in the old coordinates. So I think in modern language, I think what Einstein did was to recognize that the symmetry structure of Maxwell's equations was not these transformations, but these transformations. But it was all encapsulated in one principle. He didn't have to really know Maxwell's equations. All he had to know was that Maxwell's equations were a law of physics, and the law of physics said that light moves with a certain velocity. And from there, he could just work with the motion of light rays. So yeah. Lorentz presents, they're called Lorentz transformations. Lorentz fits Gerald transformation. So he presumably had thought of them earlier for some other purpose. No, he thought about them for the same purpose. He did know about the Michelson-Morley experiment. But he envisioned them differently. Same equations, but he envisioned them differently. He envisioned them as effects on moving objects caused by their motion through the ether. So he envisioned that an object moving through the so-called ether, because of various kinds of ether pressures, would be squeezed and therefore shortened. Now, was he wrong? I suppose you can say in some way or another that he wasn't wrong, but he certainly didn't have the vision that Einstein had of a symmetry structure, of what is the symmetry required of space and time in order that it agree with the principle of relativity, of the motion of the speed of light. Nobody, I think, would say that including Lorentz would have said that Lorentz did what Einstein did. And furthermore, Lorentz didn't think it was exact. He thought it was a first approximation, that a thing moving through a fluid of some kind would get shortened and that the first approximation would be the Lorentz contraction. And in some higher orders, you would see some other things happening. 
So he fully expected that the Michelson-Morley experiment was not exact. He thought that there would be higher corrections, at higher powers of velocity over C, where you would see discrepancies. Now, Einstein said, look, this is really a law of physics. This is a principle. So he made a principle out of it. Not likely to go away. Even in Italy. All right, next time we'll talk about the motion of particles and how relativity helps you understand how particles move and things of that nature. For more, please visit us at stanford.edu.
(April 9, 2012) In the first lecture of the series Leonard Susskind discusses the concepts that will be covered throughout the course. In 1905, while only twenty-six years old, Albert Einstein published "On the Electrodynamics of Moving Bodies" and effectively extended classical laws of relativity to all laws of physics, even electrodynamics. In this course, Professor Susskind takes a close look at the special theory of relativity and also at classical field theory. Concepts addressed here include space-time and four-dimensional space-time, electromagnetic fields and their application to Maxwell's equations.
10.5446/15012 (DOI)
Stanford University. All right. Well, a mathematical interlude we're going to begin with. Mathematical interlude is again about linear algebra, about vector spaces, but about the idea of operators. But before we do, I want to let, before we get to operators, I want to say a few more things about vectors, a few more bits about the mathematics of vectors. Most of these bits have to do not so much with deep mathematics as with good notation. Good notation is worth an awful lot when you can just manipulate symbols in ways that are sort of prearranged and do it easily and comfortably. There's an enormous benefit in that for doing abstract mathematics, certainly, but also the abstractions, the mathematical abstractions of physics. The abstractions that we're going to talk about were largely due to Dirac, Paul Dirac, who really was the one who saw into the way that quantum mechanics really fits together. So let's talk about that. We have a space of states. The space of states, which we'll come back to, is as I said, a linear vector space, meaning to say that you can multiply states by numbers, to get new states, and these are abstract vectors. Abstract vectors, maybe I throw too much in when I say space of states, just abstract vectors, you can multiply them by numbers in our case complex numbers and you can add them. There are two kinds of vectors, there are the bra vectors and the ket vectors, they're roughly speaking related by complex conjugation, and a vector space has a dimensionality. The dimensionality is the maximum number of orthogonal vectors that you can find in that space, and once you know what the dimension of the space is, you can look for a basis of vectors. The basis of vectors is a mutually orthogonal collection of normalized, get the terminology down, orthonormal basis. Ortho means that they're all orthogonal to each other, normal, normal means normalized, and normalized simply means that the vectors are of unit length. A collection of them that is maximal, in other words that you can't find anymore because there are no more directions left, that defines a basis and the number of them is equal to the dimensionality of the space. Given such a basis, you can write any vector in the space as some kind of sum over those basis vectors. So we can write for any vector A that it's equal to sum over i. i here stands for the basis vectors. Of course, a basis is not a unique thing. Just as there are many sets of mutually orthogonal vectors in three-dimensional space, there are many sets of mutually orthogonal vectors in an abstract vector space, but we pick one. We pick a set. Having picked that set and we label it i, we can write any vector as a sum alpha sub i, exactly those alphas which we'll refer to a moment ago. Alpha sub i, a set of complex numbers, a set of complex coefficients times the ith basis vector. The ith basis vector, I'm labeling i. This is a ket representation, but now what we can do with this is we can take the inner product with it. I want to try to calculate these alpha sub i's in terms of some quantities involving the vectors themselves. So what we do is we take the inner product of both sides of the equation with the basis vector j, the ket vector j. On the left-hand side, we get the inner product of the vector a, which is the thing we're trying to express, and that's equal to the sum of i alpha sub i times the inner product of the vector j with the vector i. But the inner product of the vector j with this vector i is either one or zero. 
It's one if i and j are the same vector, and it's zero otherwise. Why? Because they're orthogonal if i is not equal to j, and they're normalized namely of unit length if i does equal j. So this bracket over here, this product of bras and kets is just a chronicle symbol delta j i, zero or one, and when you do the sum, it just picks out one and only one contribution. The contribution in which i is equal to j. So on the right-hand side, you just get alpha j, no sum. Sum is collapsed to one term and that's now equal to j a. So what do we learned? We've learned that the coefficient here in the expansion of any vector is just the inner product of the jth vector with the target vector, with the vector we're trying to describe. The fact that I wrote a j there is irrelevant. It could also be alpha i. It doesn't matter which i or which j, the i-th coefficient is in the inner product. Now, I can use that to rewrite the sum up here. I now know what the alpha sub i's are. These guys over here, so let's rewrite this sum over i. I'm going to start with alpha i, sorry, with i, and then write alpha sub i. I'm going to write alpha sub i next to it over here in the opposite order. That's allowed, nothing non-commutative at this stage. Now, I want to write alpha sub i. So I write i a. That's a basic formula that any vector can be rewritten in terms of its coefficients or in terms of its inner product with the basis vectors, times the basis vectors themselves, summed over i. It's kind of a pretty formula and it comes back over and over again. What it says is whenever you see this summation of i times i in that form, you can sort of throw it away. It's just a equals a. But it's an expression for a vector in terms of its components. All right, so that's one simple fact. The same thing is true for bra vectors. Exactly the same thing is true for bra vectors and you would write it in the following form. A bra vector is also a sum over i of the inner product of the bra vector with the basis vectors times the basis bra vectors. The basis bra vectors are just the complex conjugates, if you like, of the basic, of the basis ket vectors. Both of these equations are true and we're going to find out that they're enormously useful. They're enormously useful and powerful even though they were very, very simple. All right, now we come to the notion of linear operators. States in quantum mechanics are vectors. And we saw a little bit about how that works in the case of a single spin. Observables, observables mean the things that we measure. The things that we measure, the objects that we measure, the quantities that we measure. The observables are related to linear operators in the space. All right, so what is a linear operator? And it'll take us the rest of the evening to understand in any detail what it has to do with observables. So put that out of your head for the moment. We're just doing a mathematical interlude now. What is the notion of a linear operator? A linear operator is a process, if you like, which you apply to vectors. Somebody or other, I can't remember who, it might have been John Wheeler, like to describe these things as machines. You would say a linear operator is a machine with two little ports in it. Into one port, you put an input vector, and then you turn the wheel, and out comes an output vector. So it's a thing which acts on an input vector to give an output vector. So let's call M a linear operator. Linear is a linear we'll come to. So far, we're just talking about operator, a machine. 
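The expansion formula just written down, alpha_i = <i|A> together with |A> = sum over i of <i|A> |i>, is easy to check numerically. A minimal numpy sketch; the orthonormal basis here is an arbitrary one built from a QR decomposition, and the vector A is made up, so none of the specific numbers mean anything.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# An orthonormal basis of C^n: the columns of Q from a QR decomposition are orthonormal.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
basis = [Q[:, i] for i in range(n)]

# An arbitrary ket vector A.
A = rng.normal(size=n) + 1j * rng.normal(size=n)

# Components alpha_i = <i|A>; np.vdot conjugates its first argument, which is the bra.
alpha = [np.vdot(basis[i], A) for i in range(n)]

# Reassemble |A> = sum_i <i|A> |i> and compare with the original vector.
A_rebuilt = sum(alpha[i] * basis[i] for i in range(n))
print(np.allclose(A, A_rebuilt))   # True
```

The same check works for the bra expansion, since taking components of the bra just complex conjugates the alphas.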
In goes one end, out comes another end, and we'll talk about what it means for it to be linear in a minute. All right, so M is some operator, and that means that given any vector, when M acts on it, it simply gives another vector, unique. Given any A, it gives a unique B. That does not mean, the converse may not be converse, opposite or no, may not be true. It may not be that given a B on this side, that there's only one unique A that gives rise to that B. But it is true that M applied to any vector A gives a unique reflection of A, which is the process M acting on A. Now, what does it mean for M to be a linear operator? Well, first of all, M can act on Z times A, where Z times A is a vector. In fact, let's write that M can act on Z times A. Z times A is itself a vector, because you're allowed to multiply vectors by complex numbers. Z is a complex number. The rule about linear operators is if they apply on constants times a vector, they just give back the constant times what they would have given on A. So it just means that a Z, a complex number, a complex number can be brought through M. M acting on twice A gives twice what M would have given on A. M acting on three times A gives three times what M gives on A. M acting on I times A, the complex number of I, just gives I times whatever M would give on A. That's the first rule about linearity. And the second rule about linearity is that if M acts on the sum of two vectors, A plus B, then what you get is just a sum of what the machine spit out for A and what the machine spit out for B. What the machine spits out when you put in A plus B is just a sum of what comes out M times A plus M times B. And that's it. That's the notion of a linear operator. There's no more to it than that. Or at least that's the full set of rules about linear operators. Now let's see if we can be more concrete about them. Let's see if we can learn to manipulate with them. Let's suppose that M times A equals B. Here I've used B as a piece of the input, but now B is the output. A and B are just letters. I can use them any way I want. M times A equals B. I'm interested in the component of B along the axis I. By that I mean these objects over here, the alphas, they are the components of the vector A along the various basis vector directions. So alphas of I's are the components of the vectors. And I'm interested in the components of the output vector. I would like to know in a concrete way what are the components of B. So to find out, I just project them onto the direction I. A, M, I. All I've done is in both sides of the equation project or to project this. Take the inner product with the vector I. This by definition is beta I. In other words, if I were to expand B in the same way I expanded A, I would use the coefficients beta I. That's what we learned up on the top board. This is beta I. How about over here? What can I do with this to rewrite things in terms of components? I want to rewrite everything in terms of components so they actually become arithmetical operations that you can do. Well, here's where we use this trick. Now whenever you see a vector A, you can substitute for it a sum. Let's do that right in here, where A is, let's substitute I. I haven't substituted yet. Now, the summation variable is not I in this case. I have used as the external non-summation variable here. So let's sum over J. It doesn't matter what we call a summation variable, J A. I've stuck in for A the sum of the components times the jth vector, times the jth basis vector. 
But now A times J go up to the top again. That's just alpha J. So we now have a formula completely in terms of components. This is I mj. I'll get to this in a minute, but it's a number. Inner products are numbers. m times J is a vector, and you can take it to inner product with I. m times J is a vector, and vectors, ket vectors, have inner products with basis vectors, times J A or times alpha J. That's equal to beta I. So all we have to do is give this object a name. What is this object? Just to tell you again, you have some operator. It's an abstract entity. You have a collection of basis vectors. You apply the abstract operator to the vector J and take it's inner product with I and call the thing mij. It's a number. It may be a complex number, but it's a number. In other words, it's possible to characterize linear operators by what are called their matrix elements. These are called the matrix elements of the linear operator m. This times alpha J summed over J. I left out the sum over J here. Let's put it back in. Sum on J is equal to beta I. If somebody hands you the collection of mijs and somebody hands you a vector in the form of its components, you can then start sweeping through it and calculating what the components of the output are in terms of the input. mij is a matrix. It has two indices. It's a square matrix. If the dimensionality of the space is n, then m is an n. n is an n by n matrix, a square matrix. It's a square matrix. Alpha can be regarded as a column. Alpha can be regarded as a column, and the output is also a column. It's a column because a was a ket vector and b was a ket vector. So another way to write the same thing, this is one way to write m times a equals b. Well, here's one way to write it, abstract. Here's another way to write it as a concrete sum. And the third way is to recognize that this is just the multiplication of a matrix times a column vector. You write the matrix m by displaying all of its components, m1, 1, m1, 2, m1, 3, dot, dot, dot, m2, 1, m2, 2, dot, dot, dot, dot, dot, dot, dot. It's just an array that exhibits the matrix elements. We'll now learn to call these the matrix elements of m times the column vector a. Alpha 1, alpha 2, dot, dot, dot, alpha n. And what is the rule? This, of course, is supposed to result in an output, which is beta 1, beta 2, dot, dot, beta n. What's the rule that's implied by this equation over here? It's very simple. If you want the first entry over here, you take the first row and you take its inner product with a column here. m, not the inner, you can call it the inner product, m1, 1, alpha 1, plus m1, 2, alpha 2, plus m1, 3, alpha 3, right down to the bottom. That's exactly this formula. m1, 1, alpha 1, plus m1, 2, alpha 2, and so forth, is exactly the standard rule for multiplying a matrix by a column vector. All right, so now we have a third way to represent the action of a matrix on a vector. This is less abstract. It's concrete. If you know the matrix elements, you know exactly what to do with them. The only thing about it is it does depend on a particular choice of your basis vectors. The specifics of the matrix elements here and the column vector will depend on your choice of, once you pick a basis, you stick with it. But you could have chosen a different basis, in which case these components would have been different. The matrix elements would have been different. After all, the matrix elements are themselves related to the basis vectors. 
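Here is the same statement in concrete numpy form: once a basis is picked, the abstract operator is fully described by the square array of numbers M_ij, and acting on a vector is ordinary matrix-times-column multiplication. The 2-by-2 matrix below is invented purely for illustration; it is not any particular physical operator.

```python
import numpy as np

# Matrix elements M_ij of some operator in a chosen basis (made-up numbers).
M = np.array([[1.0 + 2.0j, 3.0],
              [0.5j,       4.0 - 1.0j]])

alpha = np.array([1.0, 2.0 - 1.0j])    # components of the input ket |A>

# beta_i = sum_j M_ij alpha_j, the rule derived above, written out by hand ...
beta_by_hand = np.array([sum(M[i, j] * alpha[j] for j in range(2)) for i in range(2)])

# ... and the same thing as a matrix product.
beta = M @ alpha
print(np.allclose(beta, beta_by_hand))   # True
```

The price of this concreteness, as the lecture notes, is that the numbers M_ij and alpha_i depend on which basis was chosen.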
And so you lose something and you gain something by going to a matrix type representation. What you lose is the geometric idea of a relationship between two vectors that's independent of the basis choice. What you gain is a computational, actual, mechanical algorithm for producing or for the machine itself. Here is the machine. The machine is a matrix. And what it does is it grinds through m1, 1, alpha 1, blah, blah, blah, blah, blah, blah. Then it goes to the next row, does the thing again, goes to the next row, does the thing again, and just spits out the final answer. These things, of course, also true for ordinary vectors in three-dimensional space. They're not the specific, we tend not to use them as much in three-dimensional ordinary vectors, but they're quite essential in quantum mechanics. Yes, sir? Marker. This may be getting ahead. Could the choice of basis vectors ever make a difference to whether or not you can see the solution to a problem? Well, the precise way you ask it, whether it can make a difference to the way you see through a problem, certainly a wrong choice of basis can leave you completely confused. I think what you really meant is can different choices of basis lead to different physical answers. And that had better not be. Then you're doing something wrong. I think it's smart. Yeah, yeah. It's often the case that a convenient choice of basis will make your computations especially simple. And a bad choice of basis might make them horribly complicated. That's true. But at the end of the day, the answers should not depend on it, the result of an experiment. So now we have the idea of a linear operator operating on it and several different versions of it. Question? Yeah? Is it legitimate? It seems that within a definition linear operator, you have an operator that gives you a ket vector from a raw vector relation first person. No, no, no, no. No, no, no, no, no. Linear operators act on ket vectors to give ket vectors. The complex conjugation is more loud. That's right. Complex conjugation. Very good. All right. Let's, yeah. The simplest kind of linear operator is just multiplication by a complex number. I just multiply a by a complex number and get another vector. It's basically the same vector, but twice as long or something. That is a linear operator. You might ask why their complex conjugation is a linear operator. I take the components of a vector and replace them by their complex conjugates. The answer is no. It is not. And you can check the definitions, multiplying the components of the vector, or not multiplying them, but complex conjugating them is not. What it does do is it takes a vector, it's a different kind of machine, complex conjugation. It's a machine which takes a bra vector to a ket vector or a ket vector to a bra vector. And that's a, that's strictly, it's called an anti-linear operator. It's not anti-linear. I mean, it's just, it's not the opposite of linear. It's just a word. Well, the reason it's a word, the reason it was called an anti-linear is because the process of complex conjugation is in some deep way related to antiparticles, but we're not there yet. Yeah. All right, now let's talk, let's talk about bra vectors. So far, we have defined our linear operators to act only on ket vectors. But every linear operator can also be given a definition. We need a definition, of course. We haven't defined the ket. We need a definition, but there is also a definition of any given linear operator acting on a bra vector. 
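Before that definition gets concocted, here is a one-line illustration of the earlier remark that complex conjugation is not a linear operator but an antilinear one: a complex number does come through the operation, but it comes through conjugated. Just a sketch with made-up numbers.

```python
import numpy as np

v = np.array([1.0 + 1.0j, 2.0 - 3.0j])   # an arbitrary vector
z = 2.0j                                  # an arbitrary complex number

conjugate = np.conj   # the 'machine' that conjugates every component

print(np.allclose(conjugate(z * v), z * conjugate(v)))            # False: not linear
print(np.allclose(conjugate(z * v), np.conj(z) * conjugate(v)))   # True: antilinear
```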
OK, let's see if we can concoct a definition, and then we'll stick with that definition. What might be the rules for the action of that same linear operator when it acts on a bra vector? First of all, the standard notation, and it's very, very good to keep track of notation and to use the same notation over. It's not only neat, but it keeps things in order. When you act on a bra vector, put the corresponding operator on the right side of it. It gives something. Take that something and take it's inner product. Let's put a bracket around it to indicate that M acting on A is itself an entity. And now let's take its inner product with another vector, B. Well, if I removed the bracket here, I would have a symbol which would look like this, A, M, B. And I really wouldn't know if what I'm supposed to do is act with M on B and then take the inner product with A, or act with M on A and then take the inner product with B. The answer is with the standard definition of how linear operators act to the left on bra vectors, it doesn't matter. In other words, we define the action of an M on an A to the left such that, and I'll show you that we can always do this, such that it doesn't matter which way we do it. A, M, B is the same as A, M, B that you can remove the brackets. OK, let's see that we can do this. In order to see it, let's start with the definition this way. Let's start with the definition this way. Where's my? Ah, OK. Start with the definition this way. And now I'm going to use that trick up on the top where you write a vector by inserting, this is a name for this operation here, it's called inserting a complete set of states. Complete simply means it's a basis. It's called inserting a complete set of states. It did nothing. It took A and it rewrote A, but it rewrote A in terms of its components. So let's do that over on B over here. We can do the same thing over here. We can write that this is equal to B, J, J summed over J. I'm not going to write the summation signs. Maybe I'll put them at the end out here. Summation I and J. So this is the same vector as B. Then put M there. Let's put the bracket around it. Put M there and doubt to exactly the same thing on the ket vector. Remember the ket vector can also be expanded in the same way. So here we go. A, I, I. What do we see here? We see this is alpha star I. Why did I put the star there? Because the bra vectors are complex conjugates. Alpha subs are I. There's an M, I, J. I in a product of I, M times J. And then there is beta J. That's all we have here. This is summed on I and J. M with I on the left and J on the right. That's M, I, J. J beta alpha I. And you can see from the way that we've written this that it doesn't matter if I first sum over J and think of M as acting on the ket vector beta, or I sum over I and think of M as first acting to the left on alpha. You get the same answer. It's just a double sum over I and J. You get the same answer. And so if you define, in other words, you can. You can define M in such a way that it doesn't matter which order, not which order or where you put the brackets. That eases your life. You don't have to remember whether you meant M acts on B and then you take the inner product with A or M acts on A and then you take the inner product with B. Same thing. OK, but there is a sum. There is. That's the power of a nice notation. But now let's consider another question. Supposing that the linear operator M on A gives B. Incidentally, I'm using A and B all over the place in different ways. 
There it was just two vectors. Here, A is the input vector. B is the output vector. So let's suppose that the machine M, when you stick A into it, outspits B. We could ask, what's the corresponding bra vector relationship? Is there a linear operator which has the property that when it acts on, I'm not going to call it M yet. It's not M. Let's call it scripty M. Is there, for every M, which acts on A to give B, is there a corresponding scripty M which acts on the corresponding bra vector B to give the corresponding bra vector over here? Well, yes, there is. And we're going to figure out how to figure out what it is soon enough, but it is not the original M. One example would be, what if M was just multiplication by a complex number? Then what would scripty M be? Multiplication by the complex conjugate number. So whatever M is, it's not necessarily the same as scripty M. It would be the same as scripty M if it was multiplication by a real number. But if it's multiplication by a complex number, then in general, it will not give the same relationship. We would normally just put a complex conjugate sign if we wanted to operate on the other direction. All right, the notation that is standard is to say that the operator which when acting to the left is the reflection, if you like, is the dual of the operator M acting to the right. That is not called M. It is called M dagger. The dagger is a operator version of complex conjugation. It's an operator version of complex conjugation or matrix version of complex conjugation. It's called Hermitian conjugation, but we'll get to these fancy words soon enough. Well, OK, this is abstract, but let's see if we can get to a concrete notion. Mij, the abstract operator, is equivalent to a concrete matrix. If I give you the concrete matrix, then I'm giving you M. The question is, in terms of that concrete matrix, is there a concrete matrix for M dagger? And the answer is yes. How do we find it? We find it just by proceeding dead ahead. I would like to know what the matrix elements, let's see which way I did it. Yeah. This is what I would call the ijth matrix element of M dagger. I don't know what M dagger is yet. I have a suspicion that something to do with complex conjugates, but this is its matrix elements, concrete set of numbers. Can we find out what they are? All right. Yeah, I think I have to step back for a minute. Let me suppose that M acts on the basis vector to give i prime. i prime is just the result of sticking the basis vector into the machine, into the M machine, and this is what comes out. Let's suppose that's the result. Now let's take jMi. That's equal to jI prime. All I've done is take the inner product with j. Here's the matrix element of the matrix M, and it's just this inner product. That was a trivial operation. I didn't do anything. I didn't do anything very significant. If this is true, then by definition, M dagger acting to the left, what does M dagger acting to the left give on i? Look at this equation here. M dagger when it acts to the left gives the bra vector b. So that's what all that dagger does is it sort of turns the equation over. So what does M dagger acting on i give? It gives the bra vector i prime. If M dagger acts on the bra i, it must give the bra vector i prime. It just flips the equation from bras to kets. So what we have over here then is i prime j. That's the matrix element. That's this matrix element over here. OK, so let's look what we have. We have the matrix element of M is the inner product of j with i prime. 
The matrix element of M dagger is the inner product of i prime with j. What's the relationship between these two? Transpose and complex conjugate? Just a complex conjugate. If you take two vectors a and b and you think of them in the opposite order, the bra a times the ket b versus the bra b times the ket a, the relationship between them is just complex conjugation. So this here is just the complex conjugate of this. So what we've derived is that the matrix element, the ijth matrix element of M dagger, is the complex conjugate of the j-i-th matrix element of M. So if I know the matrix elements of M, what do I do to get the matrix elements of M dagger? It's very simple. You complex conjugate, but that's not quite enough. You complex conjugate and you interchange i's and j's. You interchange i and j. Another way to write it is that M dagger ij is equal to Mji star. So now we know a lot. We know how to construct operators which act to the left to do the same mapping that they did when they went to the right. They map the bra a to the bra b if the original operator did the same thing on the kets. All right, that's the notion of Hermitian conjugation. This is called Hermitian conjugation. The word Hermitian is named after the mathematician Hermite, and Hermitian conjugation is the process of interchanging i and j and taking the complex conjugate. In terms of matrices, if I have a matrix, this is an easy way to remember it, M1, 1, M1, 2, dot, dot, dot, M2, 1, M2, 2, if that's the matrix representing M, then the matrix representing its conjugate, its Hermitian conjugate, is to simply interchange rows and columns. That means reflecting it about the diagonal and then complex conjugating. Interchanging i and j, that's like interchanging M1, 2 with M2, 1, you just flip it about the diagonal and you complex conjugate it. So if I want the matrix elements of the Hermitian conjugate, well, I can't say it anymore. Just do what I said. You interchange rows and columns and you complex conjugate. Interchanging rows and columns is the same thing as reflecting about the diagonal. You reflect about the diagonal and then you complex conjugate. Let me give an example. Supposing I have the matrix whose first row is 2 and 6 plus 7i, and whose second row is 4 minus i and 9 plus i. What's the Hermitian conjugate of that? You first flip it. Flipping it leaves the diagonal elements in place and interchanges the off diagonal elements. So you flip it: the first row becomes 2 and 4 minus i, and the second row becomes 6 plus 7i and 9 plus i. What is that process called, of flipping it like that? Transpose. That's called the transpose. So flipping it is a transpose, but then you have to complex conjugate it. Complex conjugating just means wherever you see i, put minus i. So this becomes 2 and 4 plus i in the first row, and 6 minus 7i and 9 minus i in the second row. And that's the matrix that represents the Hermitian conjugate of the original matrix. So it's easy. It's easy. It's concrete, and it's also abstract. It has an abstract side. The Hermitian conjugate just allows you to turn an equation from a bra vector to a ket vector. And in terms of concrete matrix elements, you flip i and j and you complex conjugate. And that defines M dagger. Now, as I said, in general, these matrices are thought of as complex objects. And the notion of complex conjugation is replaced by this Hermitian conjugation, the simultaneous flipping and conjugating. Now, what is a real number? A real number is one which is its own complex conjugate. If I write that z is equal to z star, you jump up and you say, that is a real number.
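The little board example can be redone in numpy in one line, and the same sketch checks the defining property of the dagger: the bra-side matrix element <B|M dagger|A> is the complex conjugate of <A|M|B> for any two vectors. The vectors A and B below are made up.

```python
import numpy as np

M = np.array([[2,      6 + 7j],
              [4 - 1j, 9 + 1j]])

# Hermitian conjugate: flip about the diagonal (transpose), then complex conjugate.
M_dagger = M.conj().T
print(M_dagger)   # rows: [2, 4+1j] and [6-7j, 9-1j], as worked out above

# Defining property: <B| M_dagger |A> is the complex conjugate of <A| M |B>.
rng = np.random.default_rng(1)
A = rng.normal(size=2) + 1j * rng.normal(size=2)
B = rng.normal(size=2) + 1j * rng.normal(size=2)
print(np.allclose(np.vdot(B, M_dagger @ A), np.conj(np.vdot(A, M @ B))))   # True
```

Here np.vdot conjugates its first argument, so np.vdot(B, ...) plays the role of the bra B acting on a ket.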
There's a corresponding notion for operators. It's the notion of an operator which is its own Hermitian conjugate. An operator which is its own Hermitian conjugate, that means you enter change and complex conjugate and you get back to the same thing, that is called Hermitian. Hermitian is the analog for matrices or the analog for operators of real. All right, so what it says a Hermitian matrix. Now, let's specialize to the case of Hermitian matrices. And just spell out Hermitian for you. Hermitian for matrices is the analog of real for numbers. A Hermitian operator or matrix is one which is equal to its own Hermitian conjugate. In terms of matrices, its Mij is equal to Mji star. Let's try to see if we can construct a Hermitian matrix. Let's see if we can construct a Hermitian matrix. The first thing about Hermitian matrices is their diagonal elements are always real. Take a diagonal element. The diagonal element i and j are equal. ij is the same as ji on the diagonal. So that says that for the diagonal elements, they're always real. So a Hermitian matrix, first of all, has real diagonal elements. 1, 7, I don't know, I like 7, minus 2.6 on the diagonal. Off the diagonal, they have the property that when you reflect them, they simply complex conjugate. So if we were to put i over here, we would put minus i over here for a Hermitian matrix. If you were to put 4 minus i over here, you would put 4 plus i over here. And if you had 5 over here, you'd put 5 over here. So a Hermitian matrix has the property, first of all, that diagonal elements are real. And the off diagonal elements, not symmetric, but a different kind of symmetry, a symmetry where when you reflect them, they complex conjugate. That's called a Hermitian matrix. A Hermitian matrix has an enormous importance. And Hermitian matrices have enormous importance in quantum mechanics. They represent the observable quantities, the measurable quantities, the things that you can measure. We're going to come to that, and we're going to do a bit of that, especially for the single spin, I hope, tonight. But before we do, we have to know a little more about Hermitian matrices. In particular, we need a concept, the concept of eigenvector and eigenvalue. I know that I'm going fast, and I know I'm quite aware that if you don't know these ideas, you're going to have to struggle tonight to keep up. But they're standard ideas. There are lots of textbooks on linear algebra. You can find it. And you can sit down quietly and learn these ideas in about two hours, I would say, two, three hours. OK, so let's talk about Hermitian. Well, let's first talk about eigenvectors and eigenvalues. We can even think about these things, incidentally, in ordinary three-dimensional space, where the vector space is just a space of ordinary three vectors. There are operators. They're represented by matrices. And those operators operate on vectors to give other vectors. And typically, if a matrix is at all interesting, or if an operator is at all interesting, whatever it mean to be interesting, well, if it's at all a little bit complex, it will generally have the property that if you put a vector into the machine, the vector that comes out will be pointing in a different direction. That will be generated by the vector. It's in a different direction. That will be generally true, generically true. So directionality is not preserved by linear operators as a rule. On the other hand, there may be particular vectors. 
And typically, there are particular vectors that when you apply a matrix, those vectors will depend on which matrix you're talking about, or an operator. There may be specific vectors associated with a specific operator that do simply get multiplied by numbers. In other words, they come out in the same direction. An example of a linear operation on a vector is a rotation about an axis. That can be represented by a matrix. In three dimensions, if you rotate the vector space about some axis, then generally speaking, an arbitrary vector will wind up in a different direction. However, there is one particular direction for which that doesn't happen, namely along the axis of rotation. If you rotate about the axis of rotation, the vector comes out pointing in the same direction. It comes out a multiple of itself. Now, in this case, simply equal to itself. All right, so that's the notion of the eigenvectors of an operator. They are the vectors, if there are any, they're not necessarily any. For example, in rotations about two and two dimensions, there are no eigenvectors. There are no directions which come out unaffected. But so in general, it may or may not be eigenvectors. But if there are eigenvectors, they're defined in the following way. Again, I'm going to use m for the generic operator. m acts on an eigenvector. Let's call the eigenvector i. i for eigen. Now, of course, i is not for eigen. Eigen begins with an e. But I'm going to call it i anyway. If m acts on one of its eigenvectors, eigenvectors are not things which are independent of the matrices. Given a matrix or an operator, it has certain eigendirections or certain eigenvectors. And the rule is, or the definition, is the set of vectors which just get multiplied by numbers. Let's call for the i-th eigenvector here, the eigenvalue is called lambda i, and it simply multiplies the input. Now, different eigenvectors of the same matrix will have different eigenvalues. There may be, and in general, that's the generic situation. If there are several eigenvectors, they will have different eigenvalues. However, the eigenvectors are not completely unambiguous. If you take an eigenvector and multiply it by a number, the result is still an eigenvector with the same eigenvalue. If I multiply this equation by a complex number z, I can bring the complex number z through, and I'll find out that any numerical, even complex numerical, multiple of an eigenvector is also an eigenvector. So strictly speaking, you should talk about eigendirections. The set of vectors is including or not worrying about their length, so to speak. But in any case, this is the definition of an eigenvector. All right, some matrices have eigenvectors. Other matrices don't have eigenvectors. There is something important about Hermitian matrices, or Hermitian operators. Hermitian operators not only have eigenvectors, they have a complete set of eigenvectors. Complete means that there are enough of them to expand any vector, and even better, they have an orthonormal family of eigenvectors. In general, the different eigenvectors will have different eigenvalues, as I said. But it will be true that if a matrix is Hermitian, it has, and the dimensionality of the space is n, there will be n mutually orthogonal eigenvectors. In other words, the eigenvectors of Hermitian matrices define a basis. I'm going to let you prove part of that theorem. It's a very easy theorem to prove, and I'm going to prove the other half of it. 
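Before the proof, here is a numerical illustration of the claim. The Hermitian matrix below is made up, though its entries echo the ones quoted on the board (real numbers like 1, 7, and minus 2.6 on the diagonal, and conjugate-mirrored entries like 4 minus i and 4 plus i off the diagonal); np.linalg.eigh is used because it is built for exactly the Hermitian case.

```python
import numpy as np

# A Hermitian matrix: real diagonal, off-diagonal entries that conjugate when reflected.
H = np.array([[ 1.0,    4 - 1j,  5.0   ],
              [ 4 + 1j, 7.0,     0 + 1j],
              [ 5.0,    0 - 1j, -2.6   ]])
print(np.allclose(H, H.conj().T))   # True: it equals its own Hermitian conjugate

eigenvalues, V = np.linalg.eigh(H)  # columns of V are the eigenvectors

print(eigenvalues)                  # three real numbers, as the theorem requires
# The eigenvectors form an orthonormal basis: V^dagger V is the identity.
print(np.allclose(V.conj().T @ V, np.eye(3)))   # True
```

Generic Hermitian matrices like this one have all their eigenvalues different, which is the case treated in the proof that follows.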
I'll prove the half that says that if you have two eigenvectors, with different eigenvalues, and that's the generic situation that the eigenvalues are all different, if you have two eigenvectors with different eigenvalues, we will prove that they're orthogonal to each other. Your job is to prove that you won't run out of eigenvectors until you have n of them. That's not hard. That's not hard. In any case, it's true. All right, so let's prove that if we have two eigenvectors, m is a Hermitian. Let's m is Hermitian. m equals a Hermitian matrix. I use matrix and operator synonymously, which means that m equals m dagger. OK, let's suppose that m is Hermitian, and it has an eigenvalue lambda i. The first observation is that the eigenvalues of Hermitian matrices are real. That even comes before what I said previously. If there is an eigenvalue, it is real. OK, how do we prove that it's real? Let's prove that it's real. Well, let's take this equation and flip it over into a bra equation. To flip it over into a bra equation, I flip i, I take m and I replace it by m dagger. Well, that's not going to do much, because m is equal to m dagger. And on the right-hand side, I flip i, and what do I do with lambda in general? Complex conjugate. Complex conjugate. So far, we haven't proved that it's real. Every time you flip from bras to kets, you have to complex conjugate all the numbers that are there. All right, but now, assumption, m is Hermitian. So it's equal to its own Hermitian conjugate. And now we have this equation. Now let's take the inner product of both of these equations with i. In the upper equation, we're going to take the inner product on the left with i. And what does that give us? That gives us ii times lambda i. Let's do the same thing over here, except now multiplying on the right. This and this are obviously the same. m with i on the left and i on the right is the same as this. But this is not obviously the same, not unless lambda is equal to lambda star. ii here is the same as ii here. That cancels out. And therefore, we come to the conclusion that if a matrix is Hermitian, if it's equal to its own Hermitian conjugate, then its eigenvalues are equal to their own complex conjugates and they are real. Eigenvalues of Hermitian matrices are real. That's the first step. Now let's prove that if we have two different eigenvalues, the corresponding eigenvectors must be orthogonal. Now remember what orthogonal means from a physical point of view. It means physically distinct that there's an experiment that you can do to distinguish the two of them unambiguously. That's what orthogonality means from a physical point of view. Let's prove that the eigenvectors of m with different eigenvalue are orthogonal. Different eigenvalue, physically distinguishable. You should keep that in your head. OK, so here we are. Let's start with m on i equals lambda ii. And m on j equals lambda jj. Two different eigenvectors, i and j. First thing we'll do is turn this one over and turn it into a ket equation, a bra. It is a ket equation. We'll take this one and turn it into a bra equation. So that looks like this. j, m dagger, but m dagger is the same as m, equals the real number lambda j. Eigenvector, eigenvalues of Hermitian matrices are real, times j. Now, here we have, OK, so we have two equations. What can you do to them so that we can relate them? What we can do with them to relate them is to multiply this one by j on the left and this one by i on the right. 
And we will get the same expression on the left-hand side of the equation. So let's do that. j, j, i, i. What we have on the left is the same, on the upper and lower equation. What we have on the right is not necessarily the same. So let's require that it's the same. Requiring that it's the same says that lambda i minus lambda j times the inner product of j with i must equal 0. I've just written a fence. I've just written that this is equal to this. And then transposed. So lambda i minus lambda j times the inner product of j i equals 0. Well, if the product of two things, and I have two things, lambda i minus lambda j and the inner product of j i with j, if the product of two things is 0, it says unambiguously at least one of them is 0. It usually says one of them is 0. The inner product of i and j may or may not be 0, but let's take the case where we know that the eigenvectors correspond to different values of the eigenvector. Sorry, that the eigenvalues are different. That's what I'm interested in. I'm interested in eigenvectors corresponding to different values of the eigenvalue. So by assumption, lambda i is not equal to lambda j. The conclusion is that the inner product of i with j must be equal to 0. So what does that mean? That means that i and j are orthogonal. So we now have a theorem. The eigenvectors of Hermitian matrices with different eigenvalue are orthogonal. Let's just take the generic case where all of the eigenvalues are different. We can come back to what happens when some of them may be the same. It's not important right now. If all the eigenvalues happen to be different than each other, then all the eigenvectors are mutually orthogonal to each other. If there are enough of them, and your job is to prove there's enough of them, they form a basis. In other words, this collection of things called i and j here is a special case of the kind of basis which we labeled with small i and j. But it's the basis of eigenvectors. It's the basis of eigenvectors of the Hermitian operator. As I said, Hermitian operators will be identified with observable quantities. And to say that eigenvectors are orthogonal has something or other to do with physical differences that are measurable unambiguously between those states of the system. OK, we are now prepared to state the principles of quantum mechanics. We have enough mathematical information or enough mathematical formalism to state the entire show. We can put on the blackboard now what quantum mechanics is and what it says and what the rules are or what the correspondences are, what the mathematical, sorry, what the physical meaning of these various things are. You won't get the, you won't understand this the first time around. It'll be foreign. It'll be peculiar. What is he talking about? But of course, we'll work it out in examples. We will see how the ideas work. So don't get disturbed if what I throw at you now sounds a little bit meaningless. That's OK. By the time we're finished with some examples, it should be clear how we use this. All right, first of all, there are measurable quantities. The result of a measurement is always a real number. It takes at least two measurements and two somehow two compatible measurements, two measurements that can both be performed simultaneously to measure a complex number. The basic ingredients and the basic measurables are real numbers. That's what comes out of your detector. 
You can measure two of them if you can measure two of them, if it's possible to simultaneously measure two things, and put them together into a complex number, but it really is just two measurements. So each measurement, each thing that you measure, has a real result associated with it. The physical measurables, the called observables, the quantum mechanics that called observables, the things that you can measure, the observables are identified with Hermitian matrices, or Hermitian operators. Observables, whoops, not Q, it's like a Q form. I won't write equals. A represented by, we'll just put an arrow, are represented by Hermitian linear, of course, operators. First thing. Let's just think, in studying the spin in a rough and loose way, we sort of identified some observables. The observables that we identified were the components of the spin. The observables that we could identify were associated with our apparatus. Point your apparatus in some direction, and measure the spin along those directions. We said, we called, if we measure the spin along the z-axis, we called it sigma z. We can measure it. Or we could turn our apparatus on its side, and measure sigma x. Or we can turn the apparatus from the x-axis to the y-axis, and measure sigma y. We talked about those things. We didn't talk about sigmas being Hermitian operators. In fact, we didn't talk about them being operators at all. But we did talk about measuring them. These will become Hermitian operators in time. Next, the possible values that a given observable can take on, what are they for sigma z? What are the possible plus 1 and minus 1, right? The possible observable values. Measured in an experiment, those observable values are the eigenvalues. They're real because the matrices are Hermitian. The eigenvectors, what about the eigenvectors? The eigenvectors correspond to states. States, vectors, states, vectors. The eigenvectors are the states in which the corresponding quantities are definite. In other words, in which the measurement of them is unambiguous. Oh, let's follow through on this here a little bit. For all three of these observables, the eigenvalues had better be plus or minus 1. Now let's come to the eigenvectors. The eigenvectors, for example, the eigenvector with a given eigenvalue is a state of the system where if you measure that particular observable, you will definitely get the eigenvalue associated with that eigenvector. So the eigenvectors, those are, I think I have it backwards, we should put eigenvectors on this side. And on the left side, the physical meaning of them, the physical meaning is states in which observable has unambiguous result. Example, we talked, for example, about the states up and down. Those are states in which sigma z had definite values. When you're up, sigma z is definitely 1. When you're down, sigma z is definitely minus 1. Any other vector, sigma z is statistically either plus or minus 1, but with a randomness. So up and down must be the eigenvectors of sigma z. Left and right were the states in which sigma x was unambiguously either plus 1 or minus 1. So left and right must be the eigenvectors of sigma x. Whatever sigma x is, we're going to try to figure out what sigma x is knowing its eigenvalues, knowing its eigenvalues and its eigenvectors. And what was the other three? The other two called in and out, I think. In and out were the eigenvectors of sigma y. So we have three connections and now a fourth connection. 
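Before the fourth connection, here are the first three written compactly, in the lecture's notation with m standing for a generic observable:

\[
\text{observable}\;\longleftrightarrow\;\text{Hermitian operator } m=m^{\dagger},\qquad
m|i\rangle=\lambda_i|i\rangle ,
\]

the possible results of measuring m are its (real) eigenvalues \(\lambda_i\), and the eigenvector \(|i\rangle\) is the state in which measuring m gives \(\lambda_i\) unambiguously. For the spin, the eigenvalues of each of \(\sigma_x\), \(\sigma_y\), \(\sigma_z\) are \(\pm 1\), with eigenvectors right and left, in and out, and up and down respectively. The fourth connection, probabilities, comes next.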
Given a general state, not an eigenvector, not an eigenvector, given a general state, a, somebody made an electron or a system of some sort and left the eigenvector. So if we make an electron of some sort and left it in the state a, then we can talk about what is the probability that we measure the eigenvalues of n at different values. There may be several different eigenvalues. In this case, there would be plus 1 and minus 1, but more generally, 1,000 eigenvalues, a million eigenvalues, whatever the number is, by the label i, what is the probability that when you measure m, you get lambda i? That's the last piece of the thing here. So on the left-hand side, we put probability, physical meaning, probability. And I'll write it just as the probability to measure i. No, to measure lambda i. So I'm sorry, to measure lambda i, that the result of the experiment comes out a particular eigenvalue of the observable m, whatever m is. That is related, not related to, it's essentially equal to. Well, it comes in two steps. First, you calculate if you have the vector a, the vector, which you start, that's the state of the system. You take it's inner product with the eigenvector i, the projection of a onto i, the component of a in the i direction. Now, that in general is a complex number. That is not the probability. You multiply it by its own complex conjugate to get a real positive number, and that is the probability. We can multiply this by its complex conjugate or we can just write it i a. This i a and a i are complex conjugates of each other. We can also write this as the absolute value of i a squared. The individual components before we square them are called amplitudes. They're called probability amplitudes. Probability amplitudes are things which you multiply by their complex conjugates to find probabilities. They're real. They're positive. And that's the fourth postulate of quantum mechanics. Now, they're not all independent. I use them because they're reasonably intuitive. In fact, you can probably get rid of at least two of them and prove them from the others. But we don't need to do that. This is a good starting point. And these are the basic four postulates of quantum mechanics. The other postulate is that when you measure a quantity, you prepare the system in a state of definite value of that quantity. I won't bother writing that down. When you measure something, you are, in effect, pushing it into a state of definite value of that quantity, depending on what the outcome of your experiment is. If your outcome measures some particular, the outcome of an experiment is some particular eigenvalue, that means you push the system into the state, into the eigenstate of that quantity with that eigenvalue. I won't write that down. It takes too many words to write it down. It's under the P, the probability. Is it in here? I don't know. Is it in here? Yeah. What's next to the P? What is that? I can't read your hand, buddy. P-I-O-P. Prob. P-I-O-B. Prob. It's probably probability. This reads the probability of measuring lambda. Oh, it is lambda i. The probability of measuring lambda i is the square of the inner product of A with i. It's, if you like, it's the projection of A onto the i-th direction, or the component of A in the i-th direction. Now, there's an implication here. There's an implication here about the nature of state vectors coming from the assumption that total probabilities are equal to 1. They have to be normalized. What's that? They just have to be normalized. Yeah. Yeah. 
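In symbols, with \(|A\rangle\) the state the system was left in and \(|i\rangle\) the eigenvector of m belonging to \(\lambda_i\), the fourth connection reads

\[
P(\lambda_i)=\langle A|i\rangle\,\langle i|A\rangle=\bigl|\langle i|A\rangle\bigr|^{2},
\]

where the complex number \(\langle i|A\rangle\) is the probability amplitude; multiplying it by its complex conjugate produces a real, non-negative probability. Requiring that these probabilities add up to 1 over all i is exactly the normalization condition about to be worked out.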
Let's work that out. If you want the total probability to add up to 1, then you want the sums of all these objects, when you sum over i over the various possible values that you can measure. You want that to equal 1. So you want the sum over i for any allowable state A. We want the sum of A, i, i, A to equal 1. Now, i is a basis. And if you go up to the top equation there, you have equations which tell you how to deal with exactly this combination. What is the sum over i of i, i, A? It's just A. So what this says is that the inner product of A with itself should equal 1. The total length of A should be equal to 1. That's not too surprising that that's what this says. This says the sums of the squares of the components of A add up to 1. For ordinary vectors, that would just say the length of the vector was 1. And that's what this says. The length of A is equal to 1. It's equivalent in terms of the alphas. We got really how the alphas are up there also. It's equivalent to the sum of alpha star i alpha i equals 1. Well, i, capital I. And so there is one combination of the alphas. This is one real equation. It's not a complex equation. It's a real equation. And that means there's one real combination of the alphas which has to be constrained. Alphas are not completely free. Or the alphas are not completely free. Roughly speaking, the sums of the squares or the sums of them times a complex conjugate has to add up to 1. So that means there's less fewer independent quantities than you thought. There is another fact which we have not gotten to yet. We will get to it. You can sort of see it from here. All probabilities, any probability that you ever may want to measure or that you ever may want to calculate or compare with experiment, is always connected with a real quantity like this. The collection of such real quantities is invariant under a certain operation. The operation is multiplying the state vector by a phase. Everybody know what it means to multiply a number of a thing by a phase? You multiply it by an e to the i theta. If you multiply all the components of A by the same overall phase, then every probability is unchanged. Every probability is unchanged. Even if you rotate bases, it's unchanged. So physical quantities don't respond to a change of the phase. That means there are two fewer degrees of freedom, two fewer parameters in the specification of a state than the collection of complex numbers that would define a vector in the abstract space. Two, one is the phase, the overall phase, and the other is the magnitude of it. We'll come back to that. OK, that's taken us a long way. You're probably saturated, but I'm not, so we're going to continue. I'm on a high. Questions? Yeah. The complex number also has to be, so it can't be negative. Yeah, that's right. So these are always positive. Think times it's complex conjugate is always positive. Is the evolution postulate? Is the what? Is the what? The evolution postulate. We haven't gotten to that yet. How things change with time. No, we're at the level now of where we were in classical physics when we just said the state of the system is just a point in a set of things. That's about where we are. That's how far we've gotten. We have not talked about how things change, in particular how the states of systems change. We'll come to that. So back in the beginning when you had this example of measuring these qubits, and you had the apparatus at an angle, and you got the cosine of theta. 
OK, I guess my question was that we somehow got the idea that cosine of theta is going to be the average of your measurements. Should it be cosine squared? I see where you're going. OK, by and by, I'm going to do it. Can you say that last postulate one more time so I can get it better? When you measure something, you push it into the state you are measuring. No. When you measure something, you get a result. The result is statistically determined, meaning to say that you may get one result, you may get another result. But once you get that result, then you know what state the system is in. It's in the eigenstate of the quantity that you measured with the eigenvalue that came up on your detector. Remember, when we had our apparatus and we measured sigma z, we said that it not only measures sigma z, but it prepares the system in a state of definite sigma z. It leaves it over in a state in which sigma z is definite for the next measurement. That means it leaves it in an eigenstate of sigma z. OK. Let's see if we can apply this to the spin system. We've got a very general formulation, and now we want to try it out and see how it works. So I want to start with the three observables sigma x, sigma y, and sigma z and see what I can learn about them. First of all, sigma z. Now let's start in the basis associated with the measurement of the spin along the z-axis. Let's take that as the appropriate basis. Sigma z has two eigenvectors. From the experiments that we talked about, we would have learned, had we done the experiment, that sigma z has two eigenvalues and two eigenvectors. The two eigenvectors we've called up and down, and they're associated with sigma z equals plus 1 and minus 1. Those must be the eigenvectors and eigenvalues of sigma z. And so we can immediately say sigma z times up is equal to up, plus 1 times up. Why? Because the measured value when we measure sigma z is going to be plus 1. That's the eigenvalue. And up is the eigenvector because in the up state, sigma z is definite. What about sigma z on the down state? What is it? All right, is sigma z definite in the down state? Yes, it's always minus 1. It's going to be minus 1. And that means sigma z on down is minus 1 times down. It's not sigma z times down equals up. That wouldn't be an eigenvector. An eigenvector has the property that when the operator acts on it, it gives the same direction back times the eigenvalue. So here is what we know about sigma z. Let's see if we can compute its matrix elements and exhibit it as a matrix. But when I say exhibit it as a matrix, I must pick a basis. I'm going to pick the basis up and down. First of all, one thing, up and down are orthogonal to each other. They must be according to these principles because if sigma z is an observable, it corresponds to a Hermitian matrix. If there are two different eigenvalues, the eigenvectors must be orthogonal for a Hermitian matrix. And so, first of all, we learn that up and down are orthogonal. That's one fact. So it doesn't matter whether we write up down or down up. I mean, we could write it in the opposite order. Let's see if we can compute the matrix elements. The matrix elements are just the array that we get when we consider the four possibilities. They're going to be two by two matrices. And the four possibilities are going to be, first of all, we take the inner product of this with up. That'll be the up, up matrix element or the one, one matrix element. What is that going to be? I'm not sure where I want to do it. So let's do it over here.
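Collecting what has been said about sigma z before the matrix elements are computed, and writing u for up and d for down:

\[
\sigma_z|u\rangle=+|u\rangle ,\qquad
\sigma_z|d\rangle=-|d\rangle ,\qquad
\langle u|d\rangle=0 ,\qquad
\langle u|u\rangle=\langle d|d\rangle=1 ,
\]

and the four entries to be filled in are the matrix elements

\[
(\sigma_z)_{ab}=\langle a|\sigma_z|b\rangle ,\qquad a,b\in\{u,d\} .
\]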
Let's calculate up sigma z up. That's equal. Sigma z acts to the right to give us up plus 1. Well, plus 1 is just 1 times up. Just the inner product of up with itself. And since up is a basis vector, it's normalized. And this is just equal to 1. It's magnitude is equal to 1. I can choose it to be equal to 1. It's positive. Yeah, it's positive. It's real. And it's just plain 1. So up here in the up, up or the one, one place, it's just 1. Now, what about the off diagonal element? The off diagonal element, that's an off diagonal element there. That's going to be equal to the inner product of down with up. What's that? 0 over here. I would get exactly the same thing if I put down over here and up over here. Then I would use this equation over here. Let sigma z act on down and then take its overlap with up and I would get 0 again. And the last one is down sigma z down. What is that? Well, you go over to here. Sigma z times down is minus down. So this is just equal to minus down down. And what's that? Minus 1. So this is the matrix that's associated with sigma z. So this is the matrix that's associated with up. When it acts on a vector, on a column vector, in particular, when it acts, what is the column vector that's associated with up? It's just a 1 in the top and a 0 in the bottom. All right, so let's just try this. 1, 0. What do we get? We get 1 times 1 is 1. 0 times 0 is 0. 0 times 0 is 0. Same thing. It's guaranteed that we'll get the right, that the eigenvectors are what they're supposed to be, namely just up and down, and that the eigenvalues are plus 1 and minus 1 in the right way. All right, so we found one of the sigma matrices, sigma z. These are, of course, well, not of course. They're called the Pauli matrices, sigma x, sigma y, and sigma z. Let's see what we can find out about sigma x. Now, if I use the x basis, if I use the basis of left and right, of course, I would just find exactly the same thing. But I want to stick with the up-down basis. I want to stick with the up-down basis, but find out what the matrix elements of sigma x are. Now, what is the property of sigma x? The property of sigma x is that its eigenvectors are left and right. So for example, sigma x on right, that equals right. And what about sigma x on left? Equals minus left. Those are the two possible values that sigma x can take on. Left and right are the eigenvectors, in other words, the states in which sigma x has definite values, plus 1 and minus 1. And this is what I know about sigma x. But I want to find its matrix elements in the up-down basis. So what I really want then, let's come over to here and see if we can compute in the same basis. What do I do with the Pauli matrix? Did I erase it? It worked so hard to get it. So let's do it again. Sigma z is equal to 1 minus 1, 0, 0. Now we want to try to get sigma x. So the first step is erase the blackboard. And let's start, one by one. There are four matrix elements. One by one. Up, sigma x, up. Now what we have to know is that up is equal to left plus right over the square root of 2. You remember that? We originally defined left and right to be up plus or minus down over the square root of 2. But then we can solve for up and down in terms of left and right. Sorry, it's right minus left. With the notation we had last time, it was down, not sorry, it's right minus left over root 2. So all we have to do is plug them in. On the left here, we have left. You know, I'm going to abuse notation and just write this as left plus right over square root of 2. 
This is a meaningless symbol, but I think you know what I mean. I mean the bra vector, left plus the bra vector, right over the square root of 2. But too many vertical strokes and I get confused. So let's just write it as left plus right over the square root of 2, sigma x left plus right over square root of 2. First of all, I can take out, this is the, what is this thing? This is sigma x, up, up. There it is. It's the up, up entry. One that goes up in here. All right, first of all, there's a factor of 2 that I can take out. Square root of 2 from here and a square root of 2 from here. That's all together 1 half. And now let's look at this. What happens when sigma x hits left? It gives back left except with a minus sign. What happens when sigma x hits right? It gives back right with a plus sign. So when sigma x acts on this, what it winds up giving, you can take the sigma x away and just remember what it does. It gives right minus left instead of right plus left. It hits the vector right and gives plus 1. It hits the vector left and gives minus 1. Again, these are not things to be numerically added. These are vectors, the right vector minus the left vector. And now all we have to do is take the inner products. Well, the inner product of right with right is what? 1. The inner product of right with left is 0. The inner product of left with right is 0. And the inner product with left with left minus 1 when you account for this minus. The result is they cancel. 1 minus 1. Or equivalently, the inner product of, well, OK, 0. So first of all, it's a 0 over here. That's the up-up matrix element. It's easy to get the down-down matrix element. The down-down matrix element, you would do exactly the same thing. You accept that you would start with right minus left, right minus left, right minus left, sigma x. And then you would say that when sigma x hits right, it's plus right, when it hits left, it gives minus left. So when sigma x acts, it takes right minus left to right plus left. And what's the inner product of right plus left with right minus left? 0 again. Right with right is 1. Left with left is minus 1. And the cross terms are just 0. So there's a 0 over here. Now, looks like all we're going to get is 0s. But that can't be right. So let's try up-down. All right, we try up-down. Let's put up-down there. Up will be right plus left. We'll have sigma x. And then down is right minus left. What happens when sigma x hits right plus left? It makes right minus left. And now what's the inner product? The inner product of right with right is 1. The inner product of minus left with minus left is also plus 1. So they add. And again, the cross terms between right and left are 0. So we get 2 divided by 2, which is 1. You get 1 up here. And if we work it out, we'll get another 1 over here. So now we've got sigma y. Let's just stop for a minute and say, what is it that we have? We have a pair of linear operators, or a pair of matrices, whose eigenvectors are, this fellow's eigenvectors here are up and down. This fellow's eigenvectors are right and left. They're exactly what we were talking about on the previous blackboard. The states in which the observables have unambiguous results are the eigenvectors. That's what we are. Now, the last one is sigma y. What do we know about sigma y? I'm not in and out of the eigenvectors. The one we just did is sigma x, right? The one we just did is sigma x. You wrote sigma y. You wrote sigma y. Oh, sigma y. Sorry. Actually, I don't think I wrote sigma y. 
I think I just failed to cross the x. I think, I don't know, maybe I wrote sigma y. All right, the next one is sigma y. And here I'm going to leave some of the algebra. This is very easy to you, but let's see what the rules are. Oh, incidentally, no, I think we're OK. We know the rules. OK, so there was in and out. I cannot remember which one was which. So I'm going to fake it. And if it comes out wrong, it comes out wrong. I think in was up, plus I think i times down. And out was up minus i times down. And I think I left out factors of square root of 2. And incidentally, you can check that in and out are orthogonal to each other. OK, there's in and out. Can we find a matrix which, in and out, are eigenvectors with eigenvalue plus 1 and minus 1? It's getting late. It's very easy. Leave it to you. I will simply tell you what that matrix is. It's unique. It's unique. If you know the eigenvectors and you know the eigenvalues, it's unique. One way of doing it is to solve for up and down in terms of in and out. Once you've solved for up and down in terms of in and out, then you know how the matrix acts on in and out. You're finished. The last one in the multiply here, now I am writing sigma y. Similar to this one, but it's o minus i and i. So a little exercise. You don't really need to go and prove this. You can do a different thing. You can take this matrix and show that these are the eigenvectors of it. Just show that these are the eigenvectors of it. You take this matrix and show that it's eigenvectors. What does that mean? It means that it's eigenvectors are 1i and 1 minus i. The square root of 2 doesn't matter. Eigenvectors are eigenvectors. They don't care about the square root of 2. These are the three Pauli matrices that represent the components of the spin in different directions. I think that's enough for tonight. What I was going to do, and I have in my notes, was exactly this question. What happens if you have a component of spin along an arbitrary direction? Can we show, using what we know, that the problem, we'll do this next time. We'll do this next time. If we have a component of spin along an arbitrary direction and we want to calculate, using the principles, we want to calculate what the probability is for the measurement of sigma z in various, either plus or minus. We should be able to do it. We have enough information. I think it's probably more than we can yet handle for tonight. I think we did a lot for tonight. Probably more than. Am I wrong to say this? The use needs to be negative to make those two more positive? Does one of the use need to be negative to make those two negative? No. You know what you're missing? You're missing that you have to complex conjugate. If you want the inner product of in without, you multiply 1 times 1 and then add i not times minus i, but i times plus i. Yeah. The only way that somebody like me knows so quickly why what you're thinking is having made the same mistake, of course, sometime in the past. OK. All right, we'll finish for tonight, I think, unless there's questions. For more, please visit us at stanford.edu.
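Putting the three matrices assembled in this lecture side by side, in the up–down basis (u, d for up and down; r, l for right and left):

\[
\sigma_z=\begin{pmatrix}1&0\\0&-1\end{pmatrix},\qquad
\sigma_x=\begin{pmatrix}0&1\\1&0\end{pmatrix},\qquad
\sigma_y=\begin{pmatrix}0&-i\\i&0\end{pmatrix},
\]
\[
|u\rangle=\begin{pmatrix}1\\0\end{pmatrix},\;
|d\rangle=\begin{pmatrix}0\\1\end{pmatrix};\qquad
|r\rangle=\frac{1}{\sqrt2}\begin{pmatrix}1\\1\end{pmatrix},\;
|l\rangle=\frac{1}{\sqrt2}\begin{pmatrix}1\\-1\end{pmatrix};\qquad
|\mathrm{in}\rangle=\frac{1}{\sqrt2}\begin{pmatrix}1\\i\end{pmatrix},\;
|\mathrm{out}\rangle=\frac{1}{\sqrt2}\begin{pmatrix}1\\-i\end{pmatrix},
\]

with the first vector of each pair belonging to eigenvalue +1 and the second to −1. The little exercise left to the reader — check that these really are the eigenvectors, and that in and out are orthogonal once the complex conjugation in the bra is remembered — can also be done numerically. A minimal sketch, not part of the lecture, assuming NumPy is available (the variable names are mine; "inn" is used because "in" is a Python keyword):

```python
import numpy as np

# The three Pauli matrices in the up/down basis, as written on the board.
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)

# The claimed eigenvectors, as column vectors in the up/down basis.
up, down = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
right, left = (up + down) / np.sqrt(2), (up - down) / np.sqrt(2)
inn, out = (up + 1j * down) / np.sqrt(2), (up - 1j * down) / np.sqrt(2)

for matrix, plus, minus in [(sigma_z, up, down),
                            (sigma_x, right, left),
                            (sigma_y, inn, out)]:
    assert np.allclose(matrix, matrix.conj().T)              # Hermitian
    assert np.allclose(matrix @ plus, +plus)                 # eigenvalue +1
    assert np.allclose(matrix @ minus, -minus)               # eigenvalue -1
    assert np.allclose(np.linalg.eigvalsh(matrix), [-1, 1])  # only possible results

# <in|out> = 0: np.vdot conjugates its first argument, which plays the role of the bra.
assert abs(np.vdot(inn, out)) < 1e-12
print("All checks pass.")
```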
January 23, 2012 - In this course, world renowned physicist, Leonard Susskind, dives into the fundamentals of classical mechanics and quantum physics. He discovers the link between the two branches of physics and ultimately shows how quantum mechanics grew out of the classical structure. In this lecture, he works through some of the mathematics behind vectors and operators as used in physics.
10.5446/15011 (DOI)
Stanford University. Alright, so last quarter we studied classical mechanics. And classical mechanics is fairly intuitive. The basic ideas are things that you can visualize. You can picture in your head motion of particles, motion of objects. It gets a little hard when you start thinking of ten particles all at the same time. But nevertheless, the whole thing is based on concepts that are drawn from pretty much the everyday world. Of course, it gets more and more abstract as you go along. The crazy French who were obsessed with elegance kept making it more and more and more mathematically elegant. Lagrangians, Poisson brackets, blah, blah, blah. It turned out all to be tremendously important. The abstractions, though, are not all that difficult to understand. And the basic concept is the concept of updating. Well, before that, the concept of a space of states. What is the state of a system? Even before that, the concept of a system, a closed, isolated system that doesn't interact with anything and whose motion and whose evolution, whose history you wish to predict. Classical mechanics is predictive. It's deterministic. And the first concept, as I said, after the concept of a system, is the concept of the state of a system. Systems can have a collection of different states. We talked about some very simple examples. The first day of the quarter last time, I just described for you the most elementary simple systems consisting of nothing more, for example, than a coin. Heads and tails. The only states of the system are heads and tails. They form a mathematical set, the states. Two of them for heads and tails, two points, one space, two states. And we can draw them as dots. This one stands for heads. This one stands for tails. And, you know, it's easy to picture in your head. From there, we talked about equations of motion. Equations of motion or laws of evolution. For the simple, very simple system of a couple of points, heads, tails, we formulated some very, very simple possible laws of motion. The simplest being, nothing happens. If you start with heads, you stay heads, heads, heads, heads. If you start with tails, you stay tails, tails, tails, tails, tails. That was a very simple law of motion. A more complicated law of motion. If you begin with heads, you go to tails. If you begin with tails, you go to heads and just oscillate back and forth. The basic idea is the idea of updating. If you know everything there is to know about a system at an instant of time, then classical mechanics, everything that there is to know, essentially means everything that's needed in order to predict the future. If you know everything that needs to be known, then classical mechanics consists of a set of equations which tells you how to update the state, given its value at any given time, and a goal from one time to the next. That basic idea, for example, well, I don't think I need to describe the examples again. We described them at length last time. That's extremely intuitive. In going from these very simple discrete examples to a world of continuous motion, indeed, we have to abstract. We have to replace the idea of a small number of states, a small finite number of configurations, by an infinite number of configurations. For example, the motion of a particle along a line, we have to label those states by the points along a line, as well as their velocities. 
It does get more complicated, but nevertheless, classical physics consists of equations which tell you how to update the state at any given time, and it's completely deterministic. As I said, easy to visualize, or at least by comparison, easy to visualize because it makes use of concepts from the ordinary world, which your brain is rather hardwired to be able to understand. Now, I always give a little sermon at this point, and I'm going to give it again, probably you've heard it a dozen times, about the way one has to think about physics beyond classical physics when you move past classical physics. I don't think it can be said enough, and it's so rarely said that I think I will say it again. And it's that once physics moves past the realm of parameters that has to do with ordinary experience, either objects which are so small that they're far beyond what ordinary physics is about, or velocities which are so fast that they're far beyond ordinary velocities, whenever we move out of the range of parameters that we're familiar with from ordinary experience, we inevitably run into things that we cannot visualize. Nobody can visualize the motion of an electron. It's just not wired for it. Nobody can visualize four-dimensional space, or four-dimensional space-time, let alone ten or eleven dimensions. It's not the way physics is done. I often read some of the stuff that goes back and forth between some people, I suspect people in this class, about trying to explain to themselves, to each other, to ask questions of each other about certain abstract physical questions. And I notice very much that a lot of the mistakes, a lot of the confusion happens because of trying to visualize things using the old-fashioned or standard intuitions that you're hardwired for. You may think that physicists, good physicists, are especially good at visualizing these things. They're not. I cannot visualize five, six, seven dimensions any better than you can. But I know how to use abstract mathematics to describe it. There's no substitute. There really is no substitute for the process of abstracting and using mathematics to describe the things which are beyond your ability to directly visualize. The example that I always give, and I'm going to do it again tonight, I just want you to really focus on it and realize that you're not going to understand quantum mechanics by trying to visualize it as some funny form of classical mechanics. It won't work. You'll always get it wrong. But just the idea of visualizing space of dimensions different than three dimensions. In particular, I always used to say, OK, how many people here can visualize five dimensions? And everybody would say, no, we can't. Of course, there were a few fakers, a few clowns in the class who would say, I can do it, I can do it. But they couldn't. And they know they couldn't. So they could visualize five dimensions. But then I began to realize also, you've probably heard this before, that it's equally hard to directly visualize two dimensions. And so I would say to the class, OK, how many people now can visualize two dimensions? And almost everybody would raise their hands. And I would say, no, you're wrong. Close your eyes, try to visualize two dimensions. I see it. I see two-dimensional curved space. What they're seeing is not two-dimensional curved space. What they're seeing is a two-dimensional surface embedded in three dimensions. Even try to see one-dimensional space. One-dimensional abstract, one-dimensional space is just a line. 
You cannot see that line in your head without seeing it as a line drawn in a plane. And the plane you can't see without seeing it as embedded in three dimensions. You can't see it as a line. Why can't you? I hate to tell you this, but you're prisoners of your own neural architecture. Your neural architecture was built for three dimensions. All right? How do you get around it? Well, you don't get around it by training yourself to see four or five, six dimensions. You get around it by training yourself to think abstractly about it. If three-dimensional space is a point x, y, and z, then four-dimensional space is x, y, z, and w. And you learn to manipulate the symbols. Now, that's not to say that after a while you don't start to gain a new kind of intuition for things. You do. But the intuition is not the process of physical visualization of the same kind as when you realize a wave as a wave in the ocean. So I warn you about that. And I will tell you quantum mechanics is about as abstract as anything can be. Well, perhaps things are even getting a little more abstract now. But again, quantum mechanics is about things that your evolutionarily developed neural structure is not prepared to deal with directly. How do you deal with it? Abstract mathematics again. And relativity, it's also true of relativity. It's worse in quantum mechanics. OK, let's begin with quantum mechanics at the very, very simplest, most primitive question you can ask. What is the state? Well, I think we can ask the most primitive question is what is a system? Now, I don't think I can answer it. It's sort of undefinable. You start with the idea of a system. And I don't think I will try to give it a definition. But in particular, a closed system, a closed system is one which at least temporarily is not in interaction with anything else. It's isolated. It is not interacting with anything else. And so it can be described by itself as a lonesome entity with nothing else involved. In classical mechanics, or let's, it's not even a question of classical mechanics. It's a question of classical logic. The logic, the whole logic of quantum mechanics is different than the logic of classical mechanics. The logic of classical mechanics, again, begins, of course, with a system which I won't try to describe. But the next step is the space of states, the collection of possible states of a system. If the system is a discrete system like a coin or a die, die, you know, the six sides, the six faces die, it can have six states. Of course, a real die, a real genuine die in three dimensional space, of course, can be in all sorts of orientations. It has infinitely many states. But what I mean by a die now is I mean an abstract mathematical die, which, whose states just consist of one of six numbers. The space of states is a set. One, two, three, four, five, six. This is the space of state of a die. This is the space of states of a coin. Now, again, an abstract coin which only has two states, heads or tails. This would be heads, this would be tails. The abstract space of a point particle moving along a line, what is that? Anybody know what that is? What's the abstract space of a particle moving along a line? Well, I'll remind you that the term is phase space. The space consists of the set of possible positions of the particles and the possible momenta, the possible velocities. And so, for a particle moving on a line, the possible states of the system form a plane and the plane has a position, sometimes called q, a momentum, sometimes called p. 
And every point on that plane is a possible state of the point particle moving along the line. But it's a set. In this case, it's a continuously infinite set of two dimensions. In this case, it just consists of two points. In this case, it consists of six points. And we can now start building some logical concepts for classical physics. For example, concepts such as or and and, what do they mean? Or and and are concepts which apply to collections of states. Let's consider a proposition. Proposition such and such. The proposition could be that the die is located, has a value which is even. Is even valued. Okay? That's a subset of the space of states. In fact, it consists of three of them, if I'm not mistaken. Three odd, one, three and five. And another proposition would be the die is odd. No, did I say odd? Odd, even, whatever the other possibility is. That's another non-overlapping set of configurations. Let's take another case. Let's take two propositions, one of which again is that the die is odd. And the other proposition is that the die is less than or equal to three. All right, so let's see. Is one, three, five, is two, four, six, and less than or equal to three is this set over here. So now they're overlapping sets. And we can ask for concepts like and or. For example, the proposition that the die is both less than or equal to three and odd is the intersection of these two subsets. One and three is odd and less than or equal to three. So there's a concept of and. And that's in this subset and in this subset, the mathematical concept is the intersection of two sets. Is the intersection of two sets or the sets which satisfies both things. And that's the concept of and. Given two propositions, each proposition is a collection of states. The concept of and is intersection. What about the concept of or? Or becomes the concept of union. What what set of states here either is less than or equal to three or odd. It's the inclusive or incidentally inclusive or means it can be both. All right, that's not the intersection of the two sets. The union of the two sets, the union meaning everything in both sets. Everything in here is either less than or equal to three or it is odd, inclusively, with both being allowed. So the inclusive or translates into the union and the and the concept of and is the intersection of sets. That's the logic of classical reasoning to a large extent. Here's a concept. Two sets are completely non-overlapping. This one and that one. That simply means the and statement is false. There are no states in there for which both things are true. That's sort of the whole logic that is connected with the space of states being a set and it's the logic of set theory. We're not going to do the logic of set theory, but it's important for me to tell you right now that the logic of quantum mechanics is different. It is not based on the idea that the state of a system is a sec. The state of a system is something entirely different and only in certain limits, only in certain limits where systems behave classically does whatever a space of states of quantum mechanics, whatever it is, does it approximately reduce to the concept of a mathematical set. In fact, just to tell you what it is, it's a vector space. We'll talk about that. But before we do, before we start to talk about the abstract mathematics of vector spaces and spaces of states of quantum systems, let's talk about experiments and describe the simplest possible system in quantum mechanics. It's the coin of quantum mechanics. 
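To keep the die example straight in set notation (the proposition names are just labels for the subsets discussed above):

\[
\text{odd}=\{1,3,5\},\qquad
\text{even}=\{2,4,6\},\qquad
\text{at most }3=\{1,2,3\},
\]
\[
\text{odd and at most }3\;\longleftrightarrow\;\{1,3,5\}\cap\{1,2,3\}=\{1,3\},\qquad
\text{odd or at most }3\;\longleftrightarrow\;\{1,3,5\}\cup\{1,2,3\}=\{1,2,3,5\},
\]

and two non-overlapping propositions such as odd and even have empty intersection, \(\{1,3,5\}\cap\{2,4,6\}=\varnothing\), so the and-statement is false. That is the classical, set-theoretic logic; the quantum coin about to be described does not work this way.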
The system with two states heads and tails. We have to completely erase the blackboard and start over again. So we have some system, and whenever we measure that system, whenever we measure something or other about that system, namely the analog of whether it's heads or tails, we either get heads or tails. Let's give that a mathematical notation. I'm not going to call it a coin. I'm going to call it a qubit. A qubit is a system which can have two states and q stands for quantum. The analog for classical physics is called a cbit, classical bit, and it's simply the idea of a coin, heads or tails. We're going to see that a qubit is a much more interesting, well, complicated for sure, but a much richer idea, even though it only has, in some sense, two states. Okay, so the qubit. The qubit, when it's measured, we're going to talk about measuring in a moment. It's either heads or tails. But let's describe it another way. Let's give it a degree of freedom, a mathematical degree of freedom, and I'm going to call that mathematical degree of freedom sigma. Sigma is a traditional name for a, for the degree of freedom describing a qubit, and sigma can either take on the value one or it can take on the value minus one. It's one when the qubit is heads, and it's minus one when the qubit is tails. And there's no more content in it, other than heads and tails, but it allows us to stop being able to write equations for something. We can also describe this another way by a picture. We're going to describe sigma equals one by an arrow pointing up, and sigma equals minus one by an arrow pointing down. No implication here of anything geometric for the moment. Just a notation. A notation, three different notations. Heads or tails, sigma equals one or sigma equals minus one, and arrow pointing up and arrow pointing down. Okay, now let's imagine an experiment. An experiment involves more than just a system. It involves a second system, and the second system is called the apparatus. In classical mechanics, we never bother thinking too much about the details, unless we happen to be experimental physicists, and I'm really going to go into the laboratory and do an experiment. We sort of don't worry very much about, at least in a deep sense. We think the system can be described completely without ever worrying about how you detect it, how you measure it, or anything else. But in quantum mechanics, one cannot get away ignoring the concept of an apparatus. So let's introduce an apparatus. And I'll tell you what an apparatus, we're going to make a mathematically abstract apparatus. I'm not going to tell you how it works. It's a black box. It's a black box. We're going to call it A. A for apparatus. It's a box and it has a little window. Here's the window. It also, it really is a box. It's like a carton. It's like the carton you get in a mail which says, this side up. This side up. You want it to stand upright. Here's a little window. By a window, I mean a little screen. And numbers appear on that screen. And it has some sort of detector on the end of a wire that senses the qubit. And it senses whether the qubit is plus one or minus one. So you start with your qubit and you don't know what state it's in. The experiment is to determine what state it's in. So the qubit begins in the unknown, not the unknown state, but unknown. We don't know whether it's one or minus one. And we're going to do an experiment. The experiment, as I said, I'm not going to tell you how the detector works. It's not important for us. 
It's only important that detectives do exist. And we're going to, we will talk later about the process of detection, but not for now. So it has a sensor over here. Bring the sensor over to the qubit. And what does it do? A number lights up in here. Either plus one, oh, first of all, let's start. Before we do the detection, the apparatus is in the neutral state, the blank state. No number there at all. So let's just call that the blank configuration of the apparatus. We take the apparatus over close enough to the qubit to interact with it. And if it's a good, faithful apparatus, it will then register a number here, and that number will either be plus one or minus one. Plus one or minus one, depending on which way, whether it was heads or tails. That's the active measurement. Now, you may or may not be there to look at the apparatus. This is a secondary. The apparatus has detected and recorded what has gone on here. So that's something that, in classical mechanics, of course, we do the same thing. There's nothing particularly quantum mechanical about this, but we tend never to think about it in formulating the basic principles of quantum mechanics, classical mechanics. Okay, so that's an experiment that measures the state of, or measures whether sigma is plus one or minus one. But we can also think of it as another way. We can think of it not so much as a measurement, but as a preparation, a preparation of the system. Here's the idea. Supposing we, in fact, do measure sigma to be plus one. Supposing that's the outcome of the experiment. Then we can erase the one that it wasn't and start over. And then say, let's back off, reset the apparatus, reset it, means start it over in the blank state, and then redo the experiment again. Let's say we do it fast enough so that there wasn't much time for anything else to happen. What happens? What happens is that the detector records again, and if it's a good detector, and the system hasn't had a chance to change very much, it will simply record the same number again. It will confirm the previous experiment. This is important, that experiments can be confirmed. Again, no difference between classical and quantum mechanics in this respect. We just emphasize it in quantum mechanics. Experiments can be confirmed. We can then do it again and again and again and again, and we will continue to get the same answer that we got in the first place. So we can say it a little bit differently. The first experiment, instead of saying that it determined whether it was heads or tails, we can say it prepared it in a state, which was in this case heads. After the first detection, it's now known to be heads. It has been prepared in the state heads. We can do the experiment over and over and over again and always get the same answer. So a device that measures is also a device that prepares a system in a given state. Of course, the same thing would be true had the device determined in the first round that sigma was minus one, then it would continue to detect the same thing. Of course, there can be bad detectors. Not all detectors are well made, and not all detectors work the way you want them to work, in which case it just wasn't a good detector. It didn't do what you wanted it to do. What does heads and tails have to do with heads? What's the purpose of having a head and a tail? There's just two possibilities. You want to know what it actually has to do with a head and a tail? There's just two possibilities. I just use that for familiarity. 
Doesn't have anything to do with heads and tails. Okay, now we're going to do something new. We're going to do a new experiment. First, we're going to start with a preparation. The preparation determined that the spin was, or a spin, I call it a spin. I'm giving away, it is a spin. The qubit is in the state plus one. The qubit in the other notation is an arrow pointing up. We've prepared it. We now know, we can do it a few times to check, but once we check it, we now know that this qubit is up and if we continue to do it, we will continue to get the same answer. But now we're going to do something funny. After we've determined or prepared the qubit, we're now going to take the detector and turn it over. We're going to do a, I don't know, this side up. We've turned the detector over and now we're going to probe the qubit again. And what do we learn in this case? We discover. We discover that instead of getting plus one, we get minus one. By turning the detector over, we get the opposite answer. What is this telling us? What is this telling us about the nature of this system? What is telling us, oh, and of course then, if we take our upside down detector and continue to do the experiment over and over, we'll continue to get minus one. Of course, what happens if after we do it 35 times, we determine, we turn the detector back upright? There we get plus one. What we are learning is that whatever this system is, this particular qubit has a sense of directionality to it. That when I drew it as up versus down, there really was a sense in which orientation is being distinguished. By turning the detector upside down, I interchange up and down. Well, that's what I normally mean by up and down. I turn it over, look at it upside down, and it changes the sign of the qubit. That's an indication that the qubit has some directionality in space, spatial directionality associated with it. Now, not all qubits do have spatial directionality associated with them, but by doing this experiment, we discovered that whatever this qubit is, it really does have a sense of orientation, an upness and a downness to it in space. So that's a piece of interesting information. We might begin to suspect that it's a vector. We might begin to suspect that since it has a directionality in space, it's a vector. When it's pointing up, it's different than when it's pointing down. We can make, well, yes. So that's the basic idea of a qubit which also happens to have a sense of directionality. It's like this pin, and our detector, when we measure it, either measures it up or measures it down. But now we're going to do something else. Oh, here's what we might believe. Given that the detector has an orientation to it, and it does have an orientation to it, there's a right side up and a wrong side up. In fact, in the detector, we might even think that there's something which is itself an orientation, something built into the detector which has an orientation in it. And you know what we might think? We might think that this detector is detecting the value of the component of a vector along the direction of the detector itself. So when the detector is right side up, and it detects this thing here, it's detecting the component of some vector in the plus direction. When you turn the detector over, it's measuring the component of the same object except with respect to an axis which has been flipped. So you might start to think, this thing is a vector, this qubit. So now we do something to say, all right, let's do something else now. 
Let's turn the detector on its side. Let's turn the detector on its side. We have initially determined that the qubit was up. We've definitely made sure it's up. We've done it a thousand times with the right side up detector, and we know for sure it's up, it's not down. And now we take the sideways detector, A this side up over here, with its internal arrow pointing in this direction over here. And what should we expect ordinarily? Ordinarily, we would say, well, if this thing is going to measure, now not the vertical component of some vector, but the horizontal component of some vector. We know that that vector was pointing upward by the initial preparation. What should you get here? Well, what's the component of this vector along the horizontal axis? Zero. So the answer is, we should get zero. If this were classical physics, and this were a classical little vector, that's exactly what we would get. So we go and do the experiment, probe it, and we get not zero, but either one or minus one again. Which one do we get? One or minus one? One and minus one sort of seem to be symmetrically located relative to the axis of the detector. So I say, oh, I got one. Let's do this experiment a whole bunch of times. Let's go back. Let's go back. Get us a new qubit. Throw this qubit away. We're finished with it. Get us a new qubit. Make sure that it's pointing up and do the same experiment again. Well, we might get plus one again, but again, we might get minus one. We do it many, many times, exactly the same experiment with the qubit known to be pointing up, and we measure what we would have liked to think was the horizontal component of it, and we always get plus one or minus one, randomly. Randomly, but in such a way that if you do it a great many times, then the average value is zero. As many plus ones as minus ones. So you do it a great many times, and you find every single one of them is either plus one or minus one, but on the average, it adds up to zero. You said you threw the qubit away. What if you kept the qubit and looked at it again? Well, if you reoriented it, if you reoriented it and made sure it was up again, then it wouldn't matter whether it was a new qubit or not. If you look at the same qubit with the side doiers and you look at it. Okay, so good. Good question. All right, good. Good. Let's go back. Good question. We started with a qubit known to be up. We made a detection and we got plus one. The implication of that is the qubit is lying in the horizontal direction. Somehow, somehow we've detected a plus one, but a plus one relative to a new axis. Not the original axis, but a new axis. Supposing we now take that same qubit, don't modify it, but do the same experiment over and over and over again. We will always get the same plus one. Once that qubit has registered plus one and we don't disturb it, do the experiment again and again. We will continue to get plus one. In other words, we will confirm the experiment. The difference between the two actions that you projected from going from topside to bottomside, that's 180 degrees or flat, whereas the other is only 90 degrees. That's better. That's better. When I turn the detector by 90 degrees, I no longer have definiteness about the answer. The answer is random, but only the first time. Once I do it, and I determine whether the horizontal component is plus one or minus one, then it's determined, and if I do the same horizontal experiment over and over again, I will continue to get the same answer. Okay? 
Well, then I could come back and say, okay, now having done that and been pretty sure that the horizontal component of it is in the plus direction, I can come back and turn my detector again. Guess what will happen? I will find a random result, half the times plus, half the times minus, but again, once it's determined and I do it over and over and over again, it stays the same. This is probably not important, but on your sideways detector, your arrows are, I think, going the wrong way. Well, yeah. How so? They point from A. Oh, oh, oh, maybe this was an A here. Yeah. This side, off my finger. Okay. All right. That's interesting. Okay. So therefore, after having done this experiment and found out that I get a plus one, I do it over and over and over a few times, I get a plus one each time, let's say then I take the detector and turn it around by 180 degrees. What do I get? Minus one. But what happens if I only get turned by 90 degrees? Random. But what random means, a particular kind of randomness, equal probability for up and down. Equal probability for plus one or minus one. And that means that if I were to do the same experiment, the whole thing over and over, perhaps with a whole bunch of different qubits, I have a whole reservoir of qubits over here, lots of them. I don't even know what direction they're in. They're in. Whatever they're doing. And I take them one at a time, I subject them to the first experiment. If I get up, I keep them. If I get down, I throw them in the trash. At the end of the day, what I've gotten myself is a whole bunch of qubits known to be pointing up. Now I take one of them and subject it to the sideways experiment. I may get one or I may get minus one. I record what I got. Throw it away. Go to the next one. Do the same experiment. Plus one, minus one. Record what I get. Take this whole collection, large collection of them, and do the sideways experiment for all of them. What will be the average result? 0. 50%. 0. I will have as many plus ones as minus ones. And so the average will be 0. Let's write that in this case here by saying that the average value, we need a symbol for the average value, but after this experiment here, we will discover that the average value of the x, of the horizontal component of the qubit here, is 0. Classically, if we would have created a little vector in the up direction when we measure the horizontal component, we simply get 0. In the real world, in the real world of quantum mechanics, we always get plus or minus one, but in such a way that the average is 0. Is that true even if you don't throw away the ones that were measured to be minus one? Yeah, actually it's still true. Yeah, that's correct. Yeah, the answer is yes. Yeah. In the description, either the detector or the qubit seems to have some type of memory to it. Some type of? Yeah, that's right. We're assuming that the law of motion of the qubit is that nothing happens to it. So it remembers when you detect it, and then you detect it again instantly after. It hasn't changed. Right, that's correct. So later on, we're going to consider possibilities where during the intervening time between measurements, some interesting thing may happen, but at the moment we're supposing nothing interesting happens in between. It's the analog of the classical coin whose law of physics was just nothing happens. We don't talk about collapsing a wave function. That's a possible phenomenon, but no, let's not talk about that yet. 
Let's talk about the operational things that we might do to discover the laws of quantum mechanics. And then we'll put some words on them, such as collapse of the wave packet. Okay. So things are funny. In some average sense, the horizontal component of something is zero, but in each individual instance, you only get plus one or minus one. Now let's go a little further. Oh, incidentally, the same thing would be true if instead of turning the detector this way, we turned it this way so that the internal vector was pointing outward. It would still be true. Same thing. Okay, now let's do a little different experiment. Same setup to begin with, but instead of rotating the detector by 90 degrees, let's rotate it by 45 degrees. Any angle, let's not do 45 degrees. Let's rotate it by an arbitrary angle so that the internal detector orientation is now characterized by some angle theta relative to the vertical. So it's been rotated by angle theta and do the same experiment now. We take the first qubit, subject it now to the detector oriented at angle theta. What do we get? Plus one or minus one. Do it again. What do we get? Plus one or minus one. What do we get? Do it again. Plus one or minus one. Never anything in between, but in this case, the average value of the experimental value of what comes out here is not equal to zero, it's equal to, what do you expect it to be equal to? Cosine of theta. Cosine of the angle. Exactly like the x component of a vector would be, or exactly like, yes, exactly like the component of this vector along the detector axis. The component of this vector along the detector axis would be smaller by amount cosine theta, classically. Quantum mechanically, we simply get plus one, minus one, plus one, plus one, minus one, plus plus plus whatever in such a way as to average to the cosine of the angle. As the detector gets more and more upright, cosine theta gets closer and closer to one, and that just means it gets closer and closer to the value, to the answer, if the detector were perfectly upright. Now and then, you might get a minus sign, but mostly plus signs, and as soon as the detector gets perfectly right side up, all plus ones. What about when the detector is almost upside down? Well then you get mostly minus ones, but here and there are a few plus ones, and so it seems that the average values of sigma are behaving as if sigma was the component of a vector. So this is something really different. This is because you've prepared these as ups. If you just took raw qubits, you'd get all of them, and you'd get zero, right? The cosine theta is only because you started with the ups. Yes. Yes, that's true. Why do we need a collection of qubits? Why do we need one? A collection of qubits? Since nothing is happening to the qubit, why can't you take the same one again and again? If you take the same one and do it over and over again with the same detector, you'll just get the same answer over and over. You'll just confirm the previous measurement. So if you take one qubit known to be up because you've made it, you know that it's up, and you subject it to this angularly distorted, angularly rotated experiment, you'll either get plus one or minus one, but if you do it the second time, you'll just get the same answer you got the first time. You won't learn anything new. Incidentally, if that weren't the case, there would be no way to confirm that the answer that you got the first time was in any sense the right answer.
The only way that you confirm the answer is right is by checking it. And one way to check it is just, you know, if I close my eyes and I look at you and I see you upright, and then I close my eyes again and look at you again, I've confirmed that the first time you were upright. But the other one is also prepared in the same state. Say it again. The other one is also prepared in the same state. This one here? The second one? Now, what I'm saying is if you only have one of them, then you do the experiment, you either get plus one or minus one, and then if you do the same experiment again on the same one, over and over, you just get plus one. But if you have a sequence of them and you do the experiment with a sequence of them, all having been prepared the same way, then you'll get a statistical distribution. So if you turn off the detector and turn it on again, will it still have memory of the same unit? Switch it off. That's what I mean by measuring it again. Switch the detector off and turn it back on. So does the detector seem to store memory of... The detector has an orientation to it and when I say... How does it know you changed or did not change the unit? Change the qubit. Say it again. How does it know you changed or did not change the qubit? No, it knows that it didn't change. They didn't change the detector. But when you switched the qubit and now it's reset, but if you don't switch the qubit, it gives you the same reading. It turns into dollars. It knows. It gives the same answer. It gives... there was only one qubit. You did the same... Isn't it, I mean, the detector isn't remembering anything. It's just the qubit has changed. I have a bunch of students in the audience. Some of them are sitting upside down. Some of them are sitting right side up. If I randomly open my eyes at a student, 50% is standing right side up or not. Close my eyes, look at another student, 50% right or wrong. But if I open my eyes and then close them and look at the same student, it's always the same way. How did my eyes know that the student was going to... that I wasn't looking at a different student? The answer is I just looked at the same student. Okay? So the whole thing is not that weird that we can't keep track and remember which of these things that we're looking at. Yeah. So basically we can think of it as the act of observing causes you to prepare that particular qubit. So once you observe that one, you prepared it. So now you're always going to get that answer. Until you intervene by... Something else. If you change qubits, you could leave the measuring apparatus in the same position. You'll still get plus one? Okay, so what do we do now? You're not going to want to change the orientation of the measuring device, but you just measure a sequence of different qubits. Are there always going to be a plus one or minus one? It depends. It depends on how they were prepared. Somebody might have given you a sequence of qubits which happened to be up, down, up, down, up, down, up, down. You might have gotten them from someplace. You got them from a, you know, from a qubit store where they worried that they didn't have good quality control. But if all of these qubits were prepared the same way and identical way, and then you send them through here, they will continue to be identical. Right. Okay, so what this is telling us is there's something funny about the notion of the state of a system. 
It's not as clear-cut and not as simple as the state of either a coin which is either heads or tails, or a little vector which has components in different directions. But for example, when you measure the component along an axis perpendicular to the known direction of the vector, you get zero, not so. So there's something intrinsically different. And the difference does trace to the big difference between the notion of a state in quantum mechanics and a state in classical mechanics. Can I add one quick? You had here everything up plus one. One by one you turn it and some were plus one and some minus one. And each one you bring up that which was minus one then becomes minus one back on the original. If you do the experiment, say again. If you bring one down and let's say that it's minus one down here. Okay. Then you turn the apparatus up again and now you said it will be minus one from now on. No, no, not if you turn the apparatus. If you turn the apparatus. If you turn the apparatus up, I thought you said if you turn your apparatus up, that if it measured minus when it was turned, that it would stay minus when you turned the apparatus up. When you turn the apparatus, then you get a statistical distribution. Okay. When you turn the apparatus. Once you have the qubit that comes out of this apparatus, let's say it was plus one. Let's say it came out plus one. And now you take that qubit over here and you turn the apparatus back up again. Then you're going to get a random plus or minus. Then it'll be random when you turn it back up. So in that way then... Every time you turn it, it'll be random. You'll end up with a random bunch of qubits. You'll end up with a random bunch of qubits. Yeah. Yeah. Now it depends on the angle. If the angle is not too large, if the angle is small, then mostly you'll get pluses. Right. Mostly you'll get pluses. You've postulated a two-state system and then a detector, but what would be the difference in logic in your signal, but is it variable because your detector is a two-state system and it seems like you get the same output? Same. Same. You've postulated a two-state system plus and minus one. Yes. But then you have a detector that always produces plus and minus one. If you postulate it in variable, to start with your detector seems like it would still give you what your detector was. It still gave you what? It seems like you're really saying your detector is a two-state system that you can orient and it doesn't seem to imply that... No, no, you're absolutely right. That at some point we have to come back to this and say the detector is a system and it has states and understand the combined system as a quantum system composed out of two quantum systems. But let's not do that yet. Let's divide the world into detectors and systems and then come back later and say, look, a detector really is a system and we have to be able to describe it quantum mechanically also as a system and to understand the entire thing as the interplay between two quantum systems. But that will take some steps before we get there. What if we change the axis of rotation? Does it matter? You mean if we rotate this way? Yeah, no, it doesn't matter. I mean, if we do one rotation using one axis and then different axis. Yeah. Right. You're right in guessing that the whole story is much more complex. Well, the story is more complex but not inconsistent with what I said. So don't try to guess too far ahead. Don't try to guess too far ahead. 
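Here is a toy simulation of the sequences of measurements being discussed. It assumes the usual rules, which have not yet been formalized at this point in the lecture: the probability of plus one is cosine squared of half the angle between the qubit's prepared direction and the detector, and a measurement leaves the qubit pointing along, or opposite to, the detector axis depending on the result. The helper name `measure` is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def measure(state_angle, detector_angle):
    """Measure a qubit known to point along state_angle with a detector along
    detector_angle.  Assumes P(+1) = cos^2(delta/2) for the angle delta between
    them, and that afterwards the qubit points along (+1) or opposite to (-1)
    the detector axis.  Returns (result, new_state_angle)."""
    delta = detector_angle - state_angle
    result = 1 if rng.random() < np.cos(delta / 2) ** 2 else -1
    return result, (detector_angle if result == 1 else detector_angle + np.pi)

state = 0.0                                              # prepared 'up'
print([measure(state, 0.0)[0] for _ in range(5)])        # all +1: repeats agree

r, state = measure(state, np.pi / 2)                     # turn the detector sideways
print("first sideways result:", r)                       # random +1 or -1
print([measure(state, np.pi / 2)[0] for _ in range(5)])  # all equal to that first result

r, state = measure(state, 0.0)                           # turn the detector up again
print("up again:", r)                                    # random again
```

Note that the detector keeps no memory in this sketch; all of the "memory" lives in the state angle, which each measurement resets.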
Let's take the simple version of it first and then we'll eventually come to a much more rich understanding of it. So for now is it correct to think of the detector as something that could potentially return a continuous value from plus one to minus one, but the system will only be, the cubicle will only be a continuous value. For now, yes. Right. If you like, you may think of, yes, for now we may think of the detector as a thing which can be oriented in any direction and when we measure it, it behaves like a classical system. Yeah. Eventually we'll have to reconcile that with the idea that detectors themselves are quantum systems and have to respect the same laws as the systems themselves. OK. OK. Now let's, I leave you with that to ponder now and we're going to separate from this for a while and discuss mathematics. We're going to discuss the mathematics of the state space or the space of states of a quantum system. And as I said, it's not a set. It is not a set in the same sense that the set of configurations of a die is a set of six configurations. It's more complicated. All right, but we need some mathematics. We need some mathematics. We need some abstract mathematics. And the mathematics, as I said, is not the mathematics of set theory. It's the mathematics of vector spaces. Now let's just discuss the word vector for a moment. You're familiar, of course, with the idea of a vector. A vector is an arrow pointing in a certain direction in ordinary space and that's one concept of a vector, but there's a more abstract notion of a mathematical vector space and mathematical vector spaces can encompass many, many kinds of mathematical objects which are not necessarily pointers along a axis in space. I generally, when we get to this point, I generally make a new name, a non-conventional name that nobody else that I know of uses for the notion of an ordinary vector in ordinary space. In other words, a thing pointing in ordinary space. I call it a pointer to distinguish it from the abstract mathematical concept of a vector. We're going to talk now about the abstract mathematical concept of a vector. Don't think of it as pointing in three-dimensional space. Ordinary three-dimensional space. We'll discuss in what space it points. So let's talk about vector spaces. A vector space is a collection of mathematical objects, just like numbers are a collection of mathematical objects. And numbers, just the real number axis, for example, is a vector space. It's a special case of a vector space. It's a one-dimensional vector space. Complex numbers are two-dimensional vector spaces. We're going to be talking about vector spaces for the moment, unknown dimensionality. And we're just going to write down a mathematical notation for an object in the vector space. And as I said, vector space is now just a name for a collection of mathematical objects. Completely abstract. A is a vector. And that's the notation for an abstract vector. It's Dirac's notation. And it consists of a vertical line, a crooked line, and a letter in between. And that stands for the vector a, whatever the vector a is. What can you do with vectors? Well, you can add them. I'm not going to tell you how to add them. I'm just going to tell you that, given any two vectors, a and b, you can form this sum. And this sum is another vector. Let's call that other vector, the vector c. So a vector space consists of a collection of objects which can be added. Numbers. I'm sorry. Just confused, vector versus vector space. 
Vector space is a collection of vectors. Right. So vector space is a collection of vectors. The same distinction between number and numbers. Numbers are all in numbers. Number is a specific number. OK. So as I said, numbers are a special case of this. You can add them. You can do something else with numbers. You can multiply them by other numbers. Well, eventually we will talk about multiplying vectors, but not yet. What we can do is multiply vectors by numbers, ordinary numbers or complex numbers. And we're going to be thinking about complex numbers. I hope everybody is comfortable with complex numbers. If not, you've got to quickly get up to speed on complex numbers. But we don't need to know too much. You need to know that a complex number is a combination of a real number and an imaginary number and how to multiply, add complex numbers. And you need the notion of complex conjugate. How many people here are familiar with the concept of complex conjugate? Most people, right? Almost everybody. So I'm not going to go in it now. But basically for every complex number, there is another complex number with its complex conjugate. It's a mapping between numbers and their complex conjugate. So it's an important notion. Keep it in mind because it's going to come back. But for the moment, ordinary vectors, let's go back to pointers, pointers in space. Ordinary pointers, you can multiply by ordinary numbers, positive or negative. A pointer which is of a certain length in that direction, if you multiply it by two, it's just twice as long in the same direction. Multiply it by minus one, it just becomes in the same direction except in the opposite direction. So yes, you can multiply vectors by numbers. And in a complex vector space, we're now going to talk about complex vector spaces, you can multiply vectors by complex numbers. Now, do not try to picture this in your head as any kind of pointer in some direction. Just accept we're now talking about a mathematical abstraction where a vector, any given vector, can be multiplied by any complex number. I'm going to use, let's see, what shall I use for complex number? In my notes, I used C, but I don't want to use C. Z. Z is a standard notation for, you know, X plus I, Y is equal to Z. So you can take any complex, sorry, any vector, multiply it by any complex number, and it is still some, I'm not going to try to draw it for you. Stop trying to think about drawing vectors. If there is a vector and there's a complex number, you can multiply them and it gives you some new vector, let's call it, well, let's call it A prime. So you can multiply vectors by numbers, complex numbers. And that's about it. That's all there really is to a vector space. Any set of objects for which you can do this is a vector space. I'll give you some examples of vector spaces. If A's and B's are just real numbers, then you can add them to make new real numbers. But you cannot multiply them by complex numbers and still expect to get real numbers. Okay? You can multiply them by real numbers and get other real numbers. So the real numbers are a real vector space. How about the complex numbers? Are the complex numbers a complex vector space? Yes. You can add them and get other complex numbers. Any two complex numbers you can add. And any complex number you can multiply by another complex number and get back a complex number. So the complex numbers are the simplest example of a complex vector space. Okay? Let me give you another example of a complex vector space. 
This one's incredibly complicated, but still pretty easy to write down. Take any function of a variable. Let's call it psi of x. Now x may be an ordinary real variable. A function on a line. Function on a line, but it's a complex function on a line. It has a real and imaginary part. Complex function on a line. Can you add two functions and get another function? Certainly. So functions, just functions are things you can add to get other functions. Can you multiply a complex function by a complex number and get a complex function? Yes. Take any complex function, multiply it by a complex number, you get another complex function. So functions form a vector space. Now that's a complicated idea, and I don't want to get that complicated yet. So let's erase that from the blackboard. I'll give you another vector space. Another complex vector space. Let's just make some symbols, a bracket, and put in entries, two entries into the bracket. And the two entries are themselves complex numbers, alpha 1 and alpha 2. Alpha equals a complex number, both alpha 1 and alpha 2. And alpha 1 and alpha 2 can be anything. The collection of such symbols is also a complex vector space. Here's the rule. Supposing you have two of them, here's one, and here's another one, beta 1, beta 2, and you add them. What does that mean? Definition. The definition of the sum of two, these are called columns. The sum of two columns like that is just to add the entries, alpha 1 plus beta 1, alpha 2 plus beta 2. So given any two abstract symbols in which the entries are filled in with complex numbers, any two of them, alpha and beta, we can add them and get a third one. So column vectors like this, little columns of two entries like this satisfy the first rule. And the second rule is that if you want to take a column and multiply it by a complex number, all you do is multiply each entry by the complex number. You multiply each entry separately, and then you construct something which is just z times that. So just pairs of complex numbers like this form a vector space. It's a very abstract concept, but not very hard. How about triples of them? Sure. So any length of column like that. What you don't want to do is add a column of two to a column of three. You don't have a rule for that. So columns of different length form different vector spaces, completely different abstract vector spaces. Any question about the meaning of the abstract? And this is not hard, but it is abstract. Are those the only two requirements for a vector space? Yeah, well, I think it's also important that there be a zero vector. Zero vector is the unique vector, which when you add it to any other vector, it gives back the same vector. I didn't feel like writing it down, but... Closure requirement? A general closure requirement? Does all be closed under a particular space? Closed under addition? I think, right. Closed under multiplication? Which I think you've illustrated. Yeah, yeah, that's right. Is this the same as matrices? Is this the same as matrices? Is it what? Matrix. Matrix. Yeah. This is a... Usually the term matrices in this class in any case will be reserved for square matrices. Square matrices are collections of numbers which form square arrays as many rows as columns. But the general notion of a matrix can have different numbers of rows and columns, and the vector would be a matrix with only one column, in this case two rows. 
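The two-entry columns above can be played with directly; here is a minimal NumPy sketch checking the two defining operations, addition and multiplication by a complex number, plus the zero vector mentioned in the question. The particular entries are arbitrary examples.

```python
import numpy as np

# Two elements of the two-entry complex vector space: columns with entries
# (alpha_1, alpha_2) and (beta_1, beta_2), chosen arbitrarily.
alpha = np.array([1 + 2j, 3 - 1j])
beta = np.array([-2 + 0.5j, 4j])

# Rule 1: the sum of two columns is another column (add entry by entry).
print(alpha + beta)                       # entries (-1 + 2.5j, 3 + 3j)

# Rule 2: multiplying by a complex number z multiplies each entry by z.
z = 2 - 3j
print(z * alpha)                          # entries (8 + 1j, 3 - 11j)

# The zero vector: adding it leaves any vector unchanged.
zero = np.zeros(2, dtype=complex)
print(np.allclose(alpha + zero, alpha))   # True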
So yes, this would be a special case of a matrix, but in many contexts, one reserves the term matrix for a square matrix, and I will use it that way. I will use the term matrix to mean a square matrix. So I didn't want to use that terminology. Okay, are you thoroughly saturated with abstract mathematics? This should be straightforward for you. Okay, now we introduce another abstraction. Another abstraction. Basically, it's the abstraction of a... Given a complex vector space, the complex conjugate of the vector space, the complex conjugate of the vector space is sort of the space of complex conjugate vectors. Just like you can have for any number a complex conjugate... any complex number, there's a complex conjugate to go with it. So in that sense, the complex number system has a dual version, which is the complex conjugates of the original numbers. So do the complex conjugate vector spaces, and roughly speaking, the complex conjugate vector space, so the dual vector space, usually called a dual, is roughly speaking the complex conjugates of the original vectors. We'll make that more precise as we go along. But what it is, is a separate vector space whose elements are in one-to-one correspondence with the elements of the original vector space. So if there is a vector a in the original space, it's right this way, then there is a vector in the dual, it's called the dual vector space, but as I said, it is more or less the complex conjugate, and it's written this way. It's just written as a backward Dirac symbol. We'll learn to call these things bra vectors and these things ket vectors, but no, this is the ket vector, this is the bra vector, but for the moment, they're just symbols. An example, if the vector space is just the space of complex numbers, then the dual vector space is just the space of complex conjugates of those numbers. So there's a one-to-one correspondence between vectors and their complex conjugates. Let's write down first some postulates about the complex conjugate vector space and then test it out by seeing if we can find a concrete representation of the notion of the abstract postulates. We'll write down some abstract postulates and then check, just as we did here, we said here's a concrete representation of the notions that we wrote down over here. Let's write down the abstract postulates about the dual vector space. So first of all, for any vector, there is a dual vector and they're one-to-one correspondence. Next, if you take two vectors and you add them together, then the dual of the sum is just the sum of the duals. So if you take a vector which is made up out of the sum of two other vectors, then its reflection in the dual space is just the sum of the individual dual vectors. These are postulates. They're postulates, if you like, but then that's all obscure. The third assumption is that if you take any vector and multiply it by a complex conjugate, sorry, by a complex number, then its reflection in the dual space, anybody want to guess? Of course, it involves the dual vector to A, but not multiplication by Z, but multiplication by the complex conjugate of Z, Z star. Z star is the complex conjugate of Z. This would be true of complex numbers. If you take a complex number and multiply it by another complex number, then the complex conjugate of the whole thing is the product of complex conjugates. That's all this is saying. So we think of the dual vector space as basically just a complex conjugate vector space, in some sense. 
Okay, let's see if we can make any sense out of this in terms of these column vectors here. Good question. Is A plus B the same as B plus A? Yes, yes. A plus B is the same as B plus A. Good question. Yes, when you add vectors, and you can see that from here. I should have, all right, I'm sorry. Somebody asked me, are there any other rules about the definition of vector spaces? And indeed, there are, among other ones, that may be all there is, that it doesn't matter which order you add them in. Yes. But you can see that here. The sum of two vectors is just given by numerically adding the complex numbers here. And of course, numerical addition of complex numbers doesn't matter which way you order them. Good. So that's a good point. Okay, so let's come back now to, let's put this up on top, and come back to this concrete, non-abstract representation of the complex vector space. Here we have a vector. This object is representing the vector A. The one that I erased with beta, that's representing the vector B. But right now, I don't need the vector B. Okay? What about the object? If this object represents A, then what is the object which represents the dual vector, the corresponding dual vector? And the answer is, it's the same pair of numbers except they're complex conjugates. The same pair of numbers, not quite, they're complex conjugates, alpha one star and alpha two star. But just to keep track that we're talking about of the different vector space, we write it in row form, not as a column but as a row. This is just a trick to remember that this object belongs to the dual vector space, this object belongs to the original vector space, and that they're different. All right? So the dual vector space image of alpha one and alpha two is alpha one star, alpha two star. That's the dual vector. And then you can check. You can check with this definition are the postulates, namely, that if you add two vectors and then take the dual, whether or not you get the sum of duals, and you can check whether when you multiply this by a complex number and then take its dual, whether it just multiplies by the complex conjugate number. The answer is yes. So just laying out a pair of complex numbers like this makes a vector space, and the dual vector space is the same pair of numbers except complex conjugated. And laid out in a row, laid out in a row just to remember that it's not the same creature, doesn't live in the same space of things. And that's the idea of a complex vector space. As I said, again, very abstract and very easy. Final concept. Well, it's not the final concept, but A. The N plus first concept. I'm not sure what N is. How many people find this hard? It's not very hard. How many people find it familiar? Quite a lot of people find it familiar. That's good. Those who don't find it familiar apparently didn't find it hard. That's also good. The product of two vectors. The inner product of two vectors. The inner product of two vectors. It's the analog of the dot product for pointers. The analog of the dot product for pointers. The inner product between two vectors is strictly speaking the inner product between a vector and a vector and a the dual of a vector. Given two vectors, let's call them A and B. One does not form the product of A with B, the inner product. Instead, one takes B and constructs its image in the dual vector space. And then takes the product of the vector A with the dual vector B. In Dirac's language, you take a ket vector and you multiply it by a bra vector. 
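The "you can check" step for the dual-space postulates can be done in a couple of lines before following the inner product further. A NumPy sketch, with the bra represented as the conjugated row exactly as on the board; the helper name `dual` is made up for illustration.

```python
import numpy as np

def dual(ket):
    """Bra corresponding to a ket: complex-conjugate the entries and lay them
    out as a row (here, the conjugate transpose of a column)."""
    return ket.conj().T

a = np.array([[1 + 2j], [3 - 1j]])    # column (ket) with entries alpha_1, alpha_2
b = np.array([[0.5j], [-2 + 1j]])
z = 1 - 4j

# Postulate: the dual of a sum is the sum of the duals.
print(np.allclose(dual(a + b), dual(a) + dual(b)))       # True

# Postulate: the dual of z times a ket is z-star times the dual of the ket.
print(np.allclose(dual(z * a), np.conj(z) * dual(a)))    # True
```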
And when you do so, you make a bra ket or a bracket. A bracket. The bracket, of course, is this thing over here and the two together. I think this one is called the ket and this one is called the bra. So it's a bra ket. So the inner product of two vectors is really a product of a vector and the dual of a vector. Now, is it the same as multiplying the bra vector A with the ket vector B? Question. Is it the same to multiply the bra B with the ket A or the bra A with the ket B? And the answer is no. The answer is no. It is not the same operation, but in fact, they're closely related. The axiom, we're still talking about axioms now. The axiom is that these two are related by complex conjugation. I have a complex conjugated A to go from the ket vector to the bra vector. I have a complex conjugated B to go from the bra vector to the ket vector. So if this is a product of A, roughly speaking, times B star, this is the product of B star times A. This is the complex conjugate of this. So A, another postulate, another axiom of complex vector spaces, is that the inner product of A with B is the complex conjugate of the inner product of B with A star. I mean, in normal vector space language or whatever, nothing to do with quantum mechanics per se. I mean, there the inner product is just all the vectors are in the same vector space. In ordinary vector space. So this is still called an inner product, even though it's kind of the special thing with the complex conjugate. The usual vector spaces that you think about are real vector spaces. And for a real vector space, for a real number, there's no difference between the number and its complex conjugate. So the real vector space satisfies the same postulates, except the complex conjugation is completely a trivial operation. I mean, you could have a vector space where you take the inner product, two complex vectors, and it doesn't involve any conjugation. You just multiply them component-wise and take the sum. Yes, you could. You could invent such things. Then, of course, the inner product or a vector with itself would not be real. That's okay. Who cares? So you could invent such things. Turns out not to be terribly useful in the context of quantum mechanics. So, you know, it's largely a question of what's useful, and what's useful winds up being this concept. But yes, you could certainly define such a thing. It's not a commonly defined thing. It doesn't seem to have a lot of use. This structure occurs over and over in all kinds of places, not only quantum mechanics. I mean, this complex vector space is a very, very common thing in mathematics. And, okay, let's check now. Let's see if we can postulate a construction for the inner product of the dual vector beta. Remember when you complex conjugate, you take times the original vector alpha. This is supposed to be now, I guess I called it B, A. And I haven't, well, I've written down the notation for B and the notation for A. Column vector for A, row vector for B, complex conjugated. I will tell you now what the rule is. Again, we're making up rules which will satisfy those postulates up there. And it's a very simple construction. It's just beta one star times alpha one plus beta two star times alpha two. Notice you take the first entry of the dual vector times the first entry of the vector alpha, add it to the second entry times the second entry. So it's actually just a generalization of multiplying beta times alpha, but it's also a generalization of the dot product. 
If you remember about dot products of ordinary vectors, you simply take the sums of the products of the components. If you have two pointers and they're labeled by components, both of them, then the dot product is just the sums of the products of the components. Does that satisfy the rules up here? Where are the rules? I think I must have erased the rule, huh? I did erase the rule. The rule was... Did something happen? Where was the last one? The last one? Yeah. Lost it. Okay. What happens if you interchange alpha and beta or A and B? Then this becomes alpha one star alpha two star times beta one, beta two, if I interchange them, and that's equal to alpha one star beta one plus alpha two star beta two, and that's just a complex conjugate of this. It really is the complex conjugate. So interchanging alpha and beta or A and B really does interchange the value of the product and its complex conjugate. That's the notion of a complex vector space and an inner product on the complex vector space. Now, let's just take a couple of special cases of inner products and say, what about the inner product of a vector with itself or a vector with its image in the dual space? Let's take the inner product of A with A. What do I know about that? Well, first of all, let's... I'm setting B equal to A. That tells me that the inner product of a vector with itself is equal to its own complex conjugate. I just substituted for B, A, the inner product of a vector with itself is equal to its own complex conjugate. What kind of number is equal to its own complex conjugate? A real number. So the inner product of a vector with itself is always real. That's pretty obvious just from here. If I take alpha and multiply it by its image, I get alpha star alpha plus alpha 2 star alpha 2. Each one of these is real. It's also something else. Positive. Alpha star alpha, a number of times its own complex conjugate, is always positive. So it's real and positive. The inner product of a vector with itself is real and positive and it's considered to be the square of the length of the vector. The square root of the inner product of a vector with itself defines the length of the vector. So complex vectors have real lengths, real and positive lengths. Also, the square of the... Both the square and the length itself are defined to be positive. So that's not too obscure. Every complex vector has a length which is the square root of the sums of the squares of its components, but squares now mean multiplied by complex conjugate. Next notion, an important notion. The notion of orthogonality of vectors. The notion of orthogonality of vectors, as we'll see, is a very important one. I'll give some examples in a few minutes. The notion of orthogonality between two vectors is exactly the same as it is for ordinary vectors, that the inner product, read dot product if you like, the inner product is zero. The length of two vectors is zero. They are said to be orthogonal. It defines all perpendicular. Okay, so two vectors are orthogonal if the inner product between them is zero. Let's see if we can make up some orthogonal vectors. It's very easy. Oh, oh, what about reading, what about if I take the inner product, the opposite order? It's just a complex conjugate of zero, but the complex conjugate of zero is zero. So saying the two vectors are orthogonal, it doesn't matter which order you put them in. The notion of orthogonality doesn't depend on order, although the general idea of inner product does. So now we have the notion of orthogonal vectors. 
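The concrete inner-product rule, beta one star times alpha one plus beta two star times alpha two, is what NumPy's `vdot` computes, since it conjugates its first argument. So the conjugate-symmetry axiom and the reality and positivity of a vector's length can be spot-checked; the entries below are arbitrary examples.

```python
import numpy as np

a = np.array([1 + 2j, 3 - 1j])      # ket |A> with entries alpha_1, alpha_2
b = np.array([0.5 - 1j, 2 + 2j])    # ket |B>

# <B|A> = beta_1* alpha_1 + beta_2* alpha_2  (vdot conjugates its first argument).
print(np.vdot(b, a))

# Interchanging the two vectors gives the complex conjugate: <A|B> = <B|A>*.
print(np.isclose(np.vdot(a, b), np.conj(np.vdot(b, a))))   # True

# <A|A> is real and positive; its square root is the length of |A>.
norm_sq = np.vdot(a, a)
print(norm_sq)                      # (15+0j): real and positive
print(np.sqrt(norm_sq.real))        # the length of |A>
```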
In a real vector space, how do you define the dimensionality of the real vector space? Well, give me a definition of the dimensionality. I don't know the definitions, of course, but anybody have one? The number of entries in the vector? Yeah, but let's work in a more abstract way without, yes, that's of course correct, but... The maximal number of orthogonal vectors that you can find in the space. So if the vector space is two-dimensional, ordinary two-dimensional vector space, there are two mutually orthogonal vectors, you cannot find the third vector which is orthogonal to both of them. So the maximal number of mutually orthogonal vectors in three dimensions, where's my... I've got three mutually orthogonal vectors. Of course, there are many sets of three mutually orthogonal vectors, but given three, I cannot find the fourth one. So a three-dimensional vector space has a maximal number of three mutually orthogonal vectors, and the same is true, or the same definition, its definition, is true for a complex vector space. The maximal number of orthogonal vectors that you can find is equal to the dimensionality of the space. It does correspond to the number of entries here, the number of components. It is the same, and you can prove that. That's a little exercise. Prove that the number of components of the vector is the same as the number of mutually orthogonal vectors. But let's just talk... Are there any mutually orthogonal vectors we can write down? Sure, loads and loads of them, but a very simple case would be one zero. Now, one and zero are one and zero complex numbers? Well, yeah, real numbers are special cases of complex numbers. So a real complex number can be real. Here's one, and here's another one, zero, one. This one has a one in the first place, it has a zero in the first place, this one has a zero, and you get the idea. The inner product is zero times one plus one times zero. Okay, so the two vectors, or... Let's just write it in terms of column vectors. One zero and zero one. One zero and zero one are orthogonal vectors. The inner product of this one with the dual of this one is zero. That's because one times zero plus zero times one is a zero. So there are plenty of orthogonal vectors around, but in two dimensions, and this is two dimensions, once you've found two of them, you cannot find the third. So there's a little exercise, prove that there can be no vector, which is both orthogonal to this and orthogonal to that. That's not hard to do, that's very easy to do. Oh, yeah. Zero, zero is always orthogonal to everything. So one should be careful and say the maximal number of non-zero orthogonal, mutually orthogonal vectors, good point. Okay, that's a mathematical interlude for tonight. Complex vector spaces. And the next time we will discuss how the space of states, and what it means for the space of states of these simple qubits to be two-dimensional vector spaces, very, very different than saying that they consist of points in a set. The space of states is in fact a vector space, and then try to make contact with the mathematics of vector spaces and the strange things that are happening, or that seem to be happening here with these funny properties of measurements. That's where we want to go. 
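Here is a sketch of the "little exercise" posed above, in the two-entry column space; this is standard linear algebra, not worked out in the lecture. Suppose a column with entries gamma one and gamma two were orthogonal to both of the columns written on the board. Then

```latex
\begin{pmatrix} 1^{*} & 0^{*} \end{pmatrix}
\begin{pmatrix} \gamma_1 \\ \gamma_2 \end{pmatrix} = \gamma_1 = 0 ,
\qquad
\begin{pmatrix} 0^{*} & 1^{*} \end{pmatrix}
\begin{pmatrix} \gamma_1 \\ \gamma_2 \end{pmatrix} = \gamma_2 = 0 .
```

Both entries vanish, so the only such vector is the zero vector, and the maximal number of mutually orthogonal nonzero vectors in this space, its dimensionality, is two.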
So we've done some physics, some experimental physics on this blackboard up there, and we've done some abstract mathematics over here, and next time we want to connect them and show what the abstract mathematics, how it represents, or how it can be used to represent this unusual kind of logic of qubits. Q. What do you think strictly of the patterns is it going to be meaningful to define a multiplication operation, which would be Baylor star alpha 1, energy star alpha 1? You're asking... Yes, the answer is yes. Yes, absolutely. We'll construct inner products, outer products, all sorts of products. Yeah, yeah, that's right. The outer product of two vectors is a matrix. And of course, how do you represent it? Well, the outer product of two vectors is represented this way. It is a matrix, but we'll come to that. All the postulates apply to a vector space of functions, for example. It is less obvious than... A vector space of functions can be thought of as column vectors, but where the entries are continuous. So we have alpha, alpha is the function, and alpha of the first entry, alpha of the second entry, instead of that we have a continuous family of alphas, alpha evaluated at point x1, alpha evaluated at point x2, alpha evaluated at point x3, and of course we can't really write it as a... Literally as a... as a discrete collection like that. But if you like, think of these alphas here, alpha 1 and alpha 2 as just a function of two variables, a function of an entry 1 and 2. Instead of a function of x, it's a function of a variable which is either 1 or 2. So yes, complex functions form complex vector spaces. The inner product, we will discuss that. Right. Back on the upper left board, where you have that apparatus turned at angle theta. If you consider the sequence of inputs going to outputs, the proportion of minus 1s to 1s, we can get them by cosine theta. Is there anything interesting about the distribution of the minus 1s within the 1s? What does that mean? Well, like higher order statistics maybe. Still doesn't mean... Oh, oh, oh. They're just completely random. Completely random. Yeah, just completely random variables. You're talking about the relationship between the different replications. The whole bunch of spins, and you subject them all to the same. And the question is, I think you're asking whether there are correlations between them. No. No, they're completely independent, uncorrelated variables. So still with the qubit experiment there, does it make sense to say that when you're measuring a qubit, you're affecting it, so you're actually preparing it in a new state. Yes, yes, yes, yes. That's exactly right. That's why the first time does something, and the second time it just keeps it. And that's a big difference between classical mechanics and quantum mechanics. In classical mechanics, you can measure something sufficiently gently to not affect it, to not disturb it. In quantum mechanics, whenever you measure something, you always disturb it. So this is an example of how measuring something disturbs it and puts it into a new state. And let's say that's a good point. Yes. Okay. For more, please visit us at stanford.edu.
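As a brief preview of the remark above that the outer product of two vectors is a matrix (the lecture defers the proper treatment), here is a speculative NumPy sketch of the ket A times the bra B for two-entry columns; the vectors are arbitrary examples.

```python
import numpy as np

a = np.array([1 + 2j, 3 - 1j])      # ket |A>
b = np.array([0.5 - 1j, 2 + 2j])    # ket |B>

# The outer product |A><B|: a 2x2 matrix whose (i, j) entry is a_i * conjugate(b_j).
M = np.outer(a, b.conj())
print(M)

# Acting on any ket |C>, it gives |A> times the number <B|C>.
c = np.array([1j, 1.0])
print(np.allclose(M @ c, a * np.vdot(b, c)))   # True
```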
going way beyond the material of the course, but let's just talk about it for a minute. The model we talked about had a Hamiltonian or an energy, which is just c times p. p can be positive or negative. The wave functions for a given p are just e to the i p x, and there's nothing better about e to the plus i p x or p being positive or negative. They're both perfectly acceptable plane waves, but one of them has positive energy and one of them has negative energy. Just reading off here. Now, negative energy is not such a good thing in the real world, because if there were particles of negative energy, the world would be unstable. The vacuum, empty space, would not be the state of lowest energy. You could take empty space, which we can say has zero energy, and then just start putting in particles of negative energy. Particles of negative energy will lower the energy, and anything that lowers the energy tends to be favored. You can make lots and lots more and more particles of more and more negative energy. You have to get some positive energy from some place, but you could, for example, radiate photons and at the same time produce more and more of these negative energy particles. If there were particles of negative energy, empty space would just be unstable, unstable with respect to creating more and more positive and negative energy particles. You have to conserve energy, so you'd have to do both, but that would happen. If there are only positive energy particles, by contrast, there's no way that energy conservation will allow you to make them. If there are only positive energy particles and the vacuum has zero energy, it's the lowest energy state and nothing can happen to it to create particles. A good stable world requires all particles to have positive energy relative to the vacuum. This is not a good stable world. What do you do about it? Well, it was Dirac who understood what to do about it, but only in the case of fermions. Now, what are fermions? Fermions are simply particles that cannot be in the same state. Two of them cannot be in the same state. That's a rule. Where that rule came from, it comes from relativistic quantum field theory, but it's the same as the Pauli exclusion principle. If you know a little bit of chemistry, you know that two electrons in an atom can't be in the same state, and you have to fill the atom with particles of different energies, different states. And the same is generally true. Particles that participate in a Pauli exclusion principle are called fermions. Why they're called fermions, I don't know. I mean, they should be called Paulions. It's obvious they should be called Paulions. And particles which don't participate in an exclusion principle are called bosons, though of course, properly, they should be called Einsteinons, because it was Einstein who first studied them. Never mind. Fermions and bosons: fermions can't be put in the same state, bosons can. So what Dirac said was, look, I have a very simple solution to all of this. Just imagine that the vacuum, empty space, is really a state in which all the particles of negative energy are present simultaneously, filling up the vacuum. You simply can't put any more in because you're not allowed to put particles into the same state, but everything else is empty.
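For reference, the formulas behind the model under discussion, written out in symbols with h bar set to one (this is just the statement in the transcript, nothing more):

```latex
H = c\,p, \qquad \psi_p(x) = e^{\,ipx} \;\;\Rightarrow\;\; E(p) = c\,p ,
```

so every plane wave with p less than zero is a perfectly acceptable state, but it carries negative energy, which is what drives the instability argument just given.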
In fact, if you were to say, think about it for a moment, let's assume that the real vacuum is the state of absolute lowest energy. Absolute lowest energy. Now take absolute empty space. What you think of is ordinarily empty space. Is that the state of lowest energy? No. I can put one particle of negative energy in. Lower the energy. So there's a state of lower energy. Whatever state you say, I will find another negative energy particle to put into it and lower its energy. The only way to have the state of absolute lowest energy is to simply fill it full of all the particles which have negative energy. In other words, all the particles with negative momentum. Sounds crazy, but it turned out to be right. But you can only do that with particles which have the property that you can't put more than one of them into the same state. If you could put more than one of them into the same state, there would be no end to your ability to keep dumping negative energy particles into the vacuum. This is a reasonable theory. It's a Dirac theory. It's a very simple, basic, baby version of the Dirac equation is what it is. It's an okay theory for neutrinos, or for one-dimensional neutrinos. It is not a good theory for photons. Photons are bosons, and you simply can't play the same game. If there were negative energy bosons, you could simply dump them into the same state more and more and more of them and simply destabilize the world. So no, this is not a good theory of bosons. It's a good theory of fermions. But that's taking us way beyond this course, this quarter's material. We've covered, in a sense, a lot about the foundations of quantum mechanics and very simple quantum mechanical systems. We've covered almost nothing about what we could call the applications of quantum mechanics, so not even the applications of quantum mechanics, but studying quantum mechanical systems. We didn't even get to the harmonic oscillator. That's a disgrace. But we're not finished with quantum mechanics. Next quarter, we're going to do relativity and field theory, but then we're going to have to come back to quantum mechanics in order to study more about the basic structures that appear in quantum mechanics. So it's sort of embarrassing to teach a whole course in quantum mechanics and never get to the harmonic oscillator, but that's where we are. So you can ask for your money back, I don't know. On the other hand, it does seem people got a lot of fun with the ideas of entanglement. They didn't expect such a response, particularly this, and some of it was extremely interesting and well done. Some of the chatter back and forth on the Internet, I actually did follow it. And sometimes it was generally good and sometimes excellent. So I got some sense of satisfaction out of teaching that. I'm going to go on a little bit today about the Schrodinger equation. Not the general Schrodinger equation, but the special Schrodinger equation, the Schrodinger actually wrote down. The connection with the things we've learned up till now is through the wave function. And the wave function is simply the amplitude that a particle is located at a particular position. In other words, it's the inner product of whatever the state vector happens to be with a state of definite position. And we can also write a momentum space wave function. The momentum space wave function is just the projection of the same state, whatever it happens to be, onto a momentum eigenstate, eigenvector. 
The momentum eigenvectors and the position eigenvectors, or rather the momentum wave function and the position wave function are related, and they're related by Fourier transform. That's the basic structure of position and momentum. If a particle has more than one direction to move in, for example, let's suppose it can move in the x, y plane, then you characterize the position of a particle by two coordinates. The two coordinates commute. You can measure both the x and y position of a particle simultaneously. Now, is that something that follows automatically from what we've set up till now? No, not at all. It's really an empirical fact. It's an empirical fact that the structure of the quantum mechanics of particles only conforms to experiment if we assume that the different coordinates, the different directions that a particle can move in commute, and that means that the wave function is really a function, not of a single variable, but a function of a collection of variables, the collection of variables being the position, the various coordinates of the particle. If we have several particles, then the wave function becomes a function of the coordinates of all the particles. We're not going to go into that today, but I just tell you that the wave function thought of, the wave function of a collection of particles, is not a function of three-dimensional space. It's a function of all of the coordinates of all of the particles in the same way that phase space, the phase space of a particle is described by a collection of a large number of coordinates and momenta in the case of classical physics. The wave function of a particle is a collection of a system, is a function of the collection of all of the coordinates. So for example, if you are 10 to the 23rd particles, psi would be a function of 10 to the 23rd coordinates. Okay, so that's the basic structure. And you can go to momentum space by Fourier transform and write the wave function instead as a wave function of all of the momenta of all the particles. You can even have mixed situations, mixed representations, where you may describe a wave function in terms of the coordinates of some of the particles and momenta of the other particles. This is also allowed, but we're going to focus on the case of one-dimensional motion again. We're going to come back to that and just discuss the behavior of the wave function, its properties, just a little bit, mainly properties of the Schrodinger equation. First, the uncertainty principle. Okay. Where does the uncertainty principle come from? It comes from the fact that x, the coordinate, and p don't commute. Whenever two coordinates or two observables don't commute, it becomes impossible to measure both of them simultaneously because there aren't eigenvectors which are eigenvectors of both observables. In particular, the eigenvectors of x are extremely different. Every eigenvector of x is extremely different from every eigenvector of p. The eigenvectors of x are sharp, narrow, highly localized functions in space. The eigenvectors of momentum, first of all, complex, e to the i p x, but they're real and imaginary parts, are oscillations over all of space. So, they're sort of space-filling. They're not space-filling. It's not that the particle fills space. It's just that the particle is everywhere is equally likely to be everywhere when its momentum is known. So, that's the most primitive observation about the uncertainty principle. 
If you know the position of a particle, you certainly don't know its momentum or if a particle is in a position eigenstate. It's very far from a momentum eigenstate. If it's a momentum eigenstate, it's very far from a position eigenstate. That's the most primitive version of it. You can say more qualitatively, and then we'll come to quantitative, we'll come to quantitative precise theorem in a moment. But if we look at the wave function as a function of x and its relationship biforia transform dp over the square root of 2pi psi twiddle of p e to the i p x, and I won't bother writing the other relationship. The other relationship is the one that writes psi of p in terms of psi of x. And just start fooling around trying to construct psi of p or psi of x, one from the other, biforia transform. You very quickly discover that the narrower the distribution in x, the broader the distribution in p will be. That's a property just of the foria analysis, that the narrower psi of x is, the broader psi of p is, and vice versa. If psi of p is very narrow, it means that it's basically a single plane wave, then it's very broad in x. And so that's a pattern of foria transform. The narrower a function is, the broader its foria transform is. That's the qualitative understanding of the uncertainty principle. But let's try to do a little bit better. Let's try to see if we can define the notion of uncertainty and then see if we can prove a theorem. I'm going to prove a slightly simplified version of the theorem, mostly because there's just too much on the blackboard if I prove the full general theorem up. Prove a simplified version of it, but the simplified version of it contains the basic ingredients. All right, first of all, what is meant by the uncertainty in a variable? You have some distribution for that variable. And the first thing to do before discussing the uncertainty in it is shift the axes so that the average of x, let's say this is x, so that the average of x is zero. You can always do that. Whoops. If the average of x is not zero, let's suppose you have some wave function like that. x equals zero is over here. If you put your center of coordinates over here, obviously the average of x, the expectation value of x will be negative. If you put your coordinates over here, the average of x will clearly be positive. And as you move the origin, you'll eventually find the position where the average of x is exactly equal to zero. So let's simplify our story by shifting coordinates. This is just a shift of coordinates. Just shifting our coordinates until the average value of x is equal to zero. So average of x is equal to zero. Then what, now we've done it, with average of x is equal to zero. What is the uncertainty? The uncertainty by definition is the average of x squared. And looking at the average of x, it's zero because anything on this side is balanced by something on this side. x is positive over here, x is negative over here. So looking at the average of it, the average will cancel if you choose your axes in the right place. But the average of x squared, x squared measured relative to this located axis here will not be zero. x squared is not zero here, x squared is not zero here. In fact, x squared, the only place where x squared is zero is right at the origin. And so there's no way that the average of x squared can be zero. In fact, the broader the wave function, the larger the expectation value of x squared will be. That's clear. 
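The "other relationship" that was not written on the board a few lines above is just the inverse transform; for reference, in the standard convention consistent with the one shown and with h bar equal to one, the pair reads:

```latex
\psi(x) = \int \frac{dp}{\sqrt{2\pi}}\,\tilde\psi(p)\,e^{\,ipx},
\qquad
\tilde\psi(p) = \int \frac{dx}{\sqrt{2\pi}}\,\psi(x)\,e^{-ipx}.
```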
So a good measure, a good measure of the width of the distribution, you can call it the uncertainty in x, and it is the definition really of the uncertainty in x is the average of x squared. The average value of x squared with a given probability distribution, given that the average of x itself is equal to zero, that is the uncertainty in x. And it's just called delta x squared. Just go, it's positive number, delta x squared, and the broader the distribution, the larger it will be. All right. How do you calculate it? If you know the wave function, very simply, you take the integral of psi star psi of x, that's the probability to find the particle at x times x squared. That's the expectation value of x squared. That's the average of x squared probability to find the particle at position x times x squared. That's the definition of the square of the uncertainty. That's the square of the uncertainty of x. And as I say, it's definition, but it's a good definition. Let me rewrite it up here. Delta x and I'll square it is equal to the integral of psi star psi times x squared dx. All right. Now, what about the uncertainty in momentum? The uncertainty in momentum is defined in a very similar way. Oh, one other point, one other point. This is equal to psi x squared psi. It's the expectation value in the state psi of x squared. Thought of as a quantum mechanical, this is a quantum mechanical formula, the expectation value of x squared. That's the uncertainty. What about the uncertainty in momentum? In fact, for that matter, what about the uncertainty in anything? The uncertainty in anything, assuming its average is equal to zero, is just the average of the square of that quantity. That's a general definition. Incidentally, if you don't want to shift the coordinates so that the average is equal to zero, you can use another definition. And the other definition is that it's the expectation value of x squared minus the expectation value of x squared. These are two different things. The expectation value of the square of something is not the square of the expectation value. In particular, if the expectation value of x is equal to zero, it does not follow the expectation value of x squared is equal to zero. This is a more general definition of the square of the uncertainty in x. But as I said, you can always shift your axes around so that this is equal to zero and just choose this one here. Okay, what about the expectation? What about the uncertainty of momentum? The uncertainty of momentum is defined in essentially the same way. You can define it if you like using the Fourier transformed wave function, and then it would just be I'm not going to use this definition, but let's just do it. Delta p squared would be the integral psi twiddle of p psi of p times p squared dp. This would be the probability that the particle has momentum p times p squared integrated over p. But I'm not going to do it this way. What I'm going to do is just observe that this quantity over here is nothing but psi p squared psi. And now I'm going to work in the position representation. We can do that because we know what the operator p is when it acts on a wave function psi of x. What does p do? What does the operator p do when it acts on a wave function in the x basis? Well, first of all, whenever you take expectation values or inner products, you have an integral dx to do. From the bra vector here, you have psi star of x. The ket side of this is p squared times psi. And what is p squared? p is minus i d by dx. 
I'm using now that the momentum operator p is minus i d by dx. And p squared is just minus d second by dx squared. Second derivative. That's what p squared is. You act twice with p once and then twice. So this is equal to the integral dx psi star of x d second psi by dx squared with a minus sign. The minus sign coming from minus i times minus i. Now you look at this and you worry a little bit. There's a minus sign here. How can the uncertainty in p be negative? What does it mean to have a negative uncertainty in p? There's a minus sign here. What's going on? What's going on is that the integral in here is negative. That's all that's going on. The integral is saying how do you see that the integral is negative? You integrate by parts. So I'll just remind you the rule for integrating by parts is if you have, we're going to use this over and over. If you have two functions, f and g, then the integral f dg by dx is equal to minus the integral df by dx times g dx. This is true as long as f and g go to zero appropriately at infinity so that the boundary terms don't contribute in the integration by parts. Just look at the structure of it because we'll use it over and over. You can switch the derivative from one factor to the other factor at the cost of a minus sign. That's the basic rule. This is the second derivative. The second derivative is also, of course, the first derivative, namely it's the first derivative of the first derivative, d psi by dx. Let's do an integration by parts. All we have to do is switch the d by dx to psi star and change the minus sign. This becomes dx. Let's put the dx over here. The d by dx can switch to the psi star at the cost of a change in sign. This becomes plus and this becomes d psi star by dx. Now look what we have here. We have d psi by dx and we have d psi star by dx. This factor is the complex conjugate of this factor. They're just complex conjugates of each other. What happens if you multiply a function by its complex conjugate? First of all, it's real and second of all, it's positive. The integrand in here is positive. The sign got changed to plus and this is most definitely a positive integral here. So p squared, this is the expectation value of p squared. We can always shift p until it is zero. Now the real question is can you have both the average of x and the average of p equal to zero? That's not so obvious. This is yes. You can always shift. First you shift to x equals zero and then you can shift to p equals zero. There's a trick. I think I'll come, let's come back to it. Ask me again. I don't want to get off the track. The answer is yes. You can take a wave function and shift it both in x and in p until both of them have average equal to zero. We'll come back to it. I'll explain to you why in a little while. But let's suppose we've done that. Okay? And let's see if we can find a relationship between the uncertainty in x and the uncertainty in p. Not a relationship. Well, I suppose it's called a relationship in mathematics but inequality is what I want. And inequality. The inequality should be such that when delta x is small, delta p is big. When delta p is small, delta x is big or better yet that the product of delta x times delta p cannot be smaller than what? Then h bar. Now I've lost h bar in here. I've lost h bar because I've set it equal to one. We'll put it back later but if you want to put it in here now we should probably put it in here as a one over h bar squared here. An h bar squared here and an h bar squared here. 
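Before restoring the dimensional factors, here is a numerical sketch of the two formulas just written, evaluated for a concrete real wave function with the average of x equal to zero. The Gaussian and the grid parameters are arbitrary choices, not from the lecture, and everything is in units with h bar equal to one.

```python
import numpy as np

# A real, normalized Gaussian wave function on a grid, centered so that <x> = 0.
sigma = 0.7
x, step = np.linspace(-12, 12, 4001, retstep=True)
psi = (2 * np.pi * sigma**2) ** (-0.25) * np.exp(-x**2 / (4 * sigma**2))
print(np.sum(psi**2) * step)                 # ~1.0: normalization check

# (Delta x)^2 = integral of psi* psi x^2 dx  (the average of x is already zero).
dx2 = np.sum(psi**2 * x**2) * step

# (Delta p)^2 = integral of (d psi/dx)(d psi*/dx) dx, after the integration by parts.
dpsi = np.gradient(psi, x)
dp2 = np.sum(dpsi**2) * step

print(np.sqrt(dx2), np.sqrt(dp2), np.sqrt(dx2 * dp2))   # ~0.70, ~0.71, ~0.50
```

For this Gaussian the product of delta x and delta p comes out right at one half, which is the standard sharp constant in these units (h bar over two with the dimensional factors restored); the product can never be made smaller than that.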
But let's not, do I have it in the right place? Yes, I do. But let's work with h bar equal to one, and later on when we're finished I'll put back the dimensional quantities. Okay, so here we are. What tools do we have to construct inequalities? There's basically one. There's a zillion inequalities in mathematics, but probably the most famous, the most useful, and the only one that I know that's particularly useful is the triangle inequality. Everybody know the triangle inequality? Some people do, some people don't. So I will tell it to you, and it comes in two forms. It's a statement about triangles, of all things. What it says is, if you have a triangle, the sum of any two sides is bigger than the third side. Obviously true, right? The sum of this side and this side. Why is that? Because the shortest distance between any two points is a straight line. So the shortest distance between these two points is this one, and the sum of the other two sides is certainly bigger. So if we have three sides, that's the statement. There's also another version of it. I'll tell you what, I think I'll prove the other version, but you know the other version very well also. Let's call this one A, let's call this one B, and let's call this one C. Okay, so the length of A plus the length of B is bigger than C, bigger than the size of C. Okay, the length of A plus the length of B is also, by the same argument, I mean it is the same thing, bigger than the size of the vector A plus B. If these are vectors, think of these as vectors: here's a vector A, here's a vector, here's, let's put it this way. C is not A plus B, is it? It's A minus B in this case, right? I think I want to think of it this way: A, B, C. Okay, it's still a triangle. The length of A plus the length of B is bigger than C, which is the same as saying it's bigger than the size of A plus B. Let's square it, let's square it. Then this side becomes A squared, the square of the side A, plus B squared, plus twice the length of A times the length of B, and that is bigger than this guy over here. But what's the square of A plus B? It is A squared plus B squared plus twice the dot product of A and B. Where did I get that from? I wrote that A plus B squared is just A plus B dot A plus B. That's the square of the length of a vector, and that's A squared plus B squared plus twice the dot product of A and B. And so we have now an inequality from which we can erase A squared and B squared from both sides. Here's another form of the triangle inequality. Let's erase the twos. And what it says is, if you have any two vectors, given any two vectors, there's one vector, here's another one, A, B, the product of the length of one of them times the length of the other is always bigger, I think this two disappeared, is bigger than the dot product. There's another way to think about that. What is the dot product in terms of the length of A and the length of B and something else? What else do you need to know to calculate the dot product? Cosine of the angle. So A dot B is of course nothing but A B cosine of the angle. And the cosine of an angle is always less than one. So the triangle inequality is pretty trivial. It simply says that the product of the magnitudes of two vectors is always bigger than the dot product between them, because the dot product always has a cosine in there. Now this is true for vectors in two dimensions, three dimensions, a hundred and zillion dimensions. It's even true for vectors in complex vector spaces.
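Written out, the chain of steps being described on the board is the following, with |A| denoting the length of a vector. This is a reconstruction of the argument as spoken, and the last line is just the Cauchy-Schwarz inequality:

\[
(|\vec A| + |\vec B|)^2 \;\ge\; |\vec A + \vec B|^2
\;\Longrightarrow\;
|\vec A|^2 + |\vec B|^2 + 2\,|\vec A|\,|\vec B| \;\ge\; |\vec A|^2 + |\vec B|^2 + 2\,\vec A\cdot\vec B
\;\Longrightarrow\;
|\vec A|\,|\vec B| \;\ge\; \vec A\cdot\vec B,
\]

which is consistent with the fact that the dot product is |A| |B| times the cosine of the angle between them, and the cosine is at most one.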
It's just generally true for vectors in any kind of vector space that the magnitude defined as the square root of the inner product of the vector with itself that this inequality is true. In fact, let's just write it as A dot B. And let's square it because we're going to use it in the squared form. A squared B squared is greater than A dot B squared. I've just squared, I haven't done anything very significant. And that's the inequality that we're going to use. Yes, it should be greater than or equal to. The only time it's equal is when A and B are in the same direction. Right, that's right. Greater than or equal to. When cosine theta is equal to one. All right, that's going to be our trick. So I don't think I need this anymore. We'll come back to it. Yeah, we will. Kind of looks promising. We have a delta x squared here, sort of like a squared. We have a delta p squared here, sort of like B squared. And we want to show that it's bigger than something. So the trick is to define the right a's and b's in terms of size and x's and so forth. And then just use the theorem. But I'm going to make one simplifying assumption. I've already made two simplifying assumptions that the expectation value of x and p are zero. That I will show you how to arrange. And that's there's no there's no trick there. The simplifying assumption I'm going to make only because I don't want to fill up the blackboard with more algebra than I need than necessary. This is something you can sit down with at home and go through it for the more general case. I'm going to assume that the wave function psi of x is real. If I didn't, we'd be chasing around size and size stars and I would eventually tire you out and you would probably lose the thread of the argument. So I'm going to assume that psi is real. It'll save us some algebra. It's not general, but it's very easy to fill in the steps for complex psi. Easy, but it takes a few more lines. And I just after sitting down and doing it on a piece of paper, I just decided it was too much for a blackboard presentation to do. Better to do it for the simple case. So psi is real. Okay, so here we are. We can get rid of the size stars then. And I think I lost an equation here. Using the fact that p is minus i d by dx, we proved that this is equal to integral d psi star by dx d psi by dx. Just remind you, just quickly to remind you how we did it. p squared was a second derivative. Well, minus a second derivative and then we did an integration by parts to take one of the derivatives and put it over here. Okay, so that's delta p squared. And what we're going to try to prove is that the product of this times this is bigger than something. Product of this times this is bigger than something. Okay, and to do so, it's just a trick. Now think of a and b as vectors. In fact, you can think of a and b as either bra vectors or ket vectors. Since psi is real, it doesn't matter whether they're bra vectors or ket vectors, but here's what it's going to be. The one vector a, I'm writing them now in vector in bra ket notation. The triangle inequality is perfectly true for bras and kets as well as for ordinary vectors. All right, so the vector a is going to be described as going to be the vector whose wave function is not psi of x, but x psi of x. A is the vector whose wave function is x psi of x. In other words, it's x operating on psi. And b is going to be the vector p times psi or basically minus i d psi by dx. 
When I write equals here, I mean to say that a is the vector whose wave function is psi of x, b is the vector whose wave function is d psi by dx. And we're going to apply the triangle inequality to that. Okay, notice that with this definition here, delta x squared is a squared. Look at it. The inner product of a with itself, a squared, is just the integral of psi star psi times x squared. So this is here a squared with this notation. What about this one? This one here is essentially b squared. It's the inner product of b with itself. How come I don't have a minus i d by dx here? If I put minus i d by dx over here, I would have minus i times minus i, which is minus one. How come I don't get a, how come I have a plus one instead of a minus one complex conjugation? Yeah, so this is b squared. That's a squared and a squared times b squared is this. It is also delta x squared times delta p squared. So delta x squared, delta p squared, a squared, b squared. The triangle inequality is literally talking about delta x squared, delta p squared. So now let's look at the other side of the inequality here. The other side of the inequality is a dot b squared. So let's compute a dot b. It's the inner product of these two vectors. And then we square it. Then we take its absolute value and square it. Because we take its absolute value, I don't have to keep track of i's, minus signs, and the nth, they'll all go away because I take the absolute value. And so the inner product of a dot b is going to be equal. It's going to have an integral, oh, one thing. I did say that I'm going to choose psi to be real. So all of these psi stars don't matter. They're just psi. Every place you see psi star, you can think of psi. Okay, so let's take a dot b squared. That's going to be a, which is psi of x, x. I've just written this in the opposite order. And then times b, which is going to be d psi by dx. As I said, I'm not worried about signs or i's for the simple reason that in the end I'm going to take the absolute value of this. So that doesn't play any, that will play no role. And let's look at this quantity. Now, we have a dot b squared. So I can multiply this by itself. If I wanted to multiply itself by itself, I could multiply it by another integral of the same type, y d psi by dy. This is dx. This is dy. But all I'm doing is writing the same thing twice. So if I can evaluate this, I'm also evaluating this. Okay, so here's the trick. The trick is to say, d psi by dx times psi, does that ring a bell to you? d psi by dx times psi? No, no, no, no, not the derivative of the log. It would be the derivative of the log if it was in the denominator. You're on the right track, though. It's the derivative of the square of psi. It's one half the derivative of the square of psi. The derivative of the square of psi is equal to twice psi d psi by dx. So apart from the factor of two, what we have here is x times the derivative of psi squared with respect to x. Everybody see that? Yell out if you don't. Okay, so I can remove this psi here and replace this by psi squared. And at the same time, put a factor of two, one half, so I do the same thing with the other factor. Same thing here, psi squared, another factor of two, which makes us all together a factor of a quarter. And as I said, this is just a, I don't even know why I'm doing it, I should just write square, but I just wrote the square by writing the same thing down again. Okay, next, integration by parts. 
You have a derivative here, it's a derivative of psi squared, but it's still just a derivative. And when you have a function times a derivative to be integrated, you can switch the derivative to the other function at the cost of a sign. But really, there's no cost in sign because there's another sign from this one over here. All right, so there's no real cost in sign. And besides, we're taking the absolute value. So this is equal to the integral of psi squared times dx by dx. And likewise over here, the integral of psi squared dy by dy. The dx by dx, that's familiar, that's just one. And this is just squaring the same thing again. We are finished. We're finished because, what is this integral? A quarter of, well, forget the quarter, what's the integral of psi squared? Psi squared is psi star psi. If psi is real, psi star psi is the same as psi squared. What's the integral of psi squared? One. One. It's the integral of the probability over all space. So each of these integrals is one if the wave function is normalized. We assume that the wave function is normalized. If we assume that the state vector is normalized, this whole thing is just one quarter. And we have proved now the uncertainty principle from the triangle inequality: that delta x squared times delta p squared is bigger than one quarter. The one quarter, I remind you, came from two factors like this. Or, if we wish, we can take the square root of both sides. Delta x delta p is bigger than one half. Now, what about units? x and p are not inverse to each other. So the product of an x and a p can't be a pure number. What is inverse to an x is a p over h bar. So really, this is delta p over h bar. If I put back the units, basically delta x delta p is bigger than a half h bar. This is the theorem that asserts the uncertainty principle, that the uncertainty in position times the uncertainty in momentum is bigger than h bar. Now, this would be true for any direction of space. Pick any direction of space, or any particle, any coordinate, any generalized coordinate in mechanics: its uncertainty times the uncertainty in the corresponding conjugate momentum is bigger than h bar. Bigger than h bar over two, as a matter of fact. Any questions about the answer, about this particular version of the uncertainty principle? Yeah. In the integration by parts, the other term, I guess it integrates to zero, right? Which term? The other one? Let's write it out: f prime g plus f g prime, that's the derivative of f times g. Now, if you integrate this equation, it says the integral of f prime g plus the integral of f g prime is equal to the integral of d by dx of fg. But the integral of a derivative is just the difference of the thing itself evaluated at the two endpoints of integration. So this is fg between the two endpoints, infinity and minus infinity. The symbol stands for fg evaluated at infinity minus fg evaluated at the other end of integration. And what I'm assuming is that psi goes to zero at infinity, which it would have to do if it's normalized. If its integral is finite, if the probability distribution is normalized, it means that the wave function has to go to zero. So this is zero. And this is just the rule that you can interchange where you put the derivative at the cost of a sign. Okay, any other question about the uncertainty principle? No, it's not a small point. When you prove that something is bigger than something else, you can then ask the question, can it ever be equal to that something else?
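Putting the pieces together, here is the complete chain as reconstructed from the lecture, for a real, normalized wave function with both averages shifted to zero. The vectors |a> and |b> are the ones defined above, with wave functions x psi(x) and d psi/dx; the factor of -i in p psi is dropped because only the absolute value of the inner product matters:

\[
\langle a | a \rangle = (\Delta x)^2, \qquad
\langle b | b \rangle = (\Delta p)^2, \qquad
\langle a | b \rangle = \int x\,\psi\,\frac{d\psi}{dx}\,dx
= \frac12 \int x\,\frac{d(\psi^2)}{dx}\,dx
= -\frac12 \int \psi^2\,dx = -\frac12 ,
\]
\[
(\Delta x)^2 (\Delta p)^2 \;\ge\; |\langle a | b \rangle|^2 = \frac14
\quad\Longrightarrow\quad
\Delta x\,\Delta p \;\ge\; \frac12
\;\;\Bigl(\text{or } \Delta x\,\Delta p \ge \tfrac{\hbar}{2}\text{ with units restored}\Bigr).
\]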
And the answer may, I mean, you know, we may have proved an inequality which was not the tightest inequality we could have proved. Could have been that somebody could come along smarter and proved a tighter inequality that it's got to be bigger than something else, a stronger inequality. In this case, it's not true. You can find wave functions for which delta x delta p is equal to h bar over two. So they're called minimal uncertainty wave packets. So it's not a trivial point. This doesn't apply to time and energy. Well, yeah, it does. But the circumstances under which you use it are less clear. And I don't really have time to explain now. Let's make it at some point in these course, we will probably want to think about it. But do remember that the time dependence of a wave function, the time dependence of a wave function, psi of t as a function of time, is a sum over the eigenvectors of the energy. Let's call it a sum over all the energy eigenvectors of e to the minus i e t, that's time, times the energy eigenvector times a function. Let's call it phi of e. So the relationship between the wave function as a function of time and the wave function as a function of energy is once again, you sum up things in energy with phases like this to get things as a function of time, that's the Fourier transform relationship. So it's, but exactly how you use it is, I'll give you an example of how you use it. If you had some wave function moving past you, it's moving past you, and you ask how long does it take for the wave function to actually pass you. And this could be a light beam, incidentally. But how long does it take for the wave function to move past you? In other words, what's the uncertainty in the time at which the wave function might have maxed in maximum, how long does it take the maximum to get past you, that you can call delta t, uncertainty in time that the wave function arrived right at your nose. It's related to the uncertainty in the energy of the wave function. The bigger the uncertainty in the energy, well, let's say the other way, the smaller the uncertainty in the energy, the bigger the uncertainty in when that wave packet will pass you, for example. That's an example of the energy time uncertainty relation. So any pair of things which are related by Fourier expansion, and this is a Fourier expansion, it's writing psi of t as a sum or an integral over oscillations. Any two things which are related in that fashion will have a uncertainty principle. There are lots of other conjugate pairs like this. The angle of supposing you have a rigid body rotating about a point with an angle theta characterizing its position, then the angle theta and the angular momentum have exactly the same kind of relationship. Uncertainty in angle times uncertainty in angular momentum is also H borrow, I think it's H borrow but two also. So very similar kinds of things. Any other questions? Okay. Let's turn to the Schrodinger equation. The Schrodinger equation is the equation for the way a wave function changes with time. It also governs the way expectation values, if you know how the wave function changes with time, then you can try to deduce how expectation values of observables change with time. One of the things we would like to do is to see the relationship between the motion of wave packets governed by the Schrodinger equation and their classical counterparts, namely the motion of systems, systems in this case being a particle. 
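Going back for a moment to the other conjugate pairs mentioned above, they can be written in the same spirit. These are heuristic statements reconstructed from the discussion; as noted in the lecture, the precise meaning of the time uncertainty takes more care than the position-momentum case:

\[
\psi(t) \;=\; \sum_E e^{-iEt}\,\phi(E)\,|E\rangle
\qquad\text{(expansion in energy eigenvectors, a Fourier-type relation)},
\]
\[
\Delta E\,\Delta t \;\gtrsim\; \frac{\hbar}{2},
\qquad
\Delta\theta\,\Delta L \;\gtrsim\; \frac{\hbar}{2}
\quad\text{(angle and angular momentum for a rotating body)}.
\]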
We would like to see that as long as the wave packet is narrow and not too crazily spread out and not if it's a nice bell shaped curve and stays a bell shaped curve for some length of time, that the wave function or the center of the wave function moves in a way which is familiar from classical mechanics. We need to be able to do that to make any relationship between the classical and the quantum mechanical things that we've been talking about. Okay, now we talked about this. I'm going to let some, let's first write down the general Schrodinger equation. I'm going to take you back a couple of lectures now, take you back a couple of lectures and go through the derivation of how a wave packet moves. The answer is the wave packet moves so that its center satisfies the classical equations of motion. I'm not going to do that in a new way. I'm going to go right back to the old way that we've already talked about. And to do that, we first of all, in order to know how the wave function changes, we need to know the Hamiltonian. And let's just remind ourselves how it works. The way state vectors change with time, I, the psi dt, is governed by the Hamiltonian acting on psi. If we were to take for our Hamiltonian, p squared over 2m, that's the Hamiltonian for a classical particle. Why we should take it for a quantum mechanical particle is not obvious. And what might we add to that to put the particle in a force field? We might add a potential energy function. Potential energy function might be plus v of x. Okay, so that would be the classical Hamiltonian for a particle moving in a potential energy function v of x. And we can take that, we can lift that right from classical mechanics and think of it as a quantum mechanical Hamiltonian. And then check afterwards, is the wave packet really moving the way classical mechanics would say that the classical particle moves? Okay. Alright, so let's try that. We will just work, in a little while, we'll just work with this abstract equation here. But let's just see what it says. H on psi, H on psi. Instead of working with abstract ket vectors, we can work with wave functions. And what this equation would say would be I, psi of x by dt is equal. Now, p squared, what is p? p is minus i d by dx. p squared is minus d by dx squared or d second by dx squared. There's a one over two m minus one over two m times psi. But what is v of x? Is v of x an operator? Yes, v of x is an operator by definition. This is a definition now. I told you what the operator x does. The operator x, when it multiplies a wave function, just multiplies it by x. The corresponding thing for a general v of x is the meaning of the operator v of x is that it multiplies the wave function by v of x. We didn't talk about that much. We didn't talk about functions of x. But functions of x are just defined in the obvious way, as thought of as operators. They operate on wave functions just to multiply by v of x. Is that the question of v of x as a function of position? Yeah. Yeah. Functions of operators, yes. v of x is a function of position. x multiplies psi of x, v of x also multiplies. And you can take this as definition. When you make a definition, of course, you're later going to want to prove that it was a useful definition. You don't just make definitions. You make definitions and then show that they either are valuable because they're easy to work with, or they're valuable because they relate things to previous things you already knew. 
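In symbols, the starting point being described is the following, with h bar set to 1; the potential term is the guess lifted from classical mechanics, to be checked against classical motion below:

\[
i\,\frac{\partial}{\partial t}\,|\psi\rangle \;=\; H\,|\psi\rangle,
\qquad
H \;=\; \frac{p^2}{2m} + V(x).
\]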
What we're going to want to show is that with this definition of how v of x acts, that the Schrodinger equation will, in fact, make particles move according to classical orbits, or at least make wave packets, insofar as they don't spread too much, move according to the classical equations of motion. All right. So that's one term, d-sec, one over two m, second derivative of psi. And the other term is just plus v of x times psi of x. When v of x acts on psi of x, it just multiplies it. And this becomes the Schrodinger equation. This is the Schrodinger equation for how the wave function changes with time. It's also depending on time. Let's at the same time write, you know what I want to do with this? I'm going to be interested in the psi by dt. So I think I'll multiply away this i. What do we do? We multiply. If we want to get rid of an i, we multiply it by a minus i, right? Minus i times i is one. So multiply by minus i. That'll get rid of this minus sign and put an i here and put a minus i here. That's psi dot, if you like. The left-hand side is psi dot. Let's also write down d-si star of x and t with respect to t. It's not a new equation. It's just a complex conjugate of this one. And that one will be minus i over two m d-second psi star by dx squared plus i v of x psi of x. Did I make a mistake? Everybody's talking at once. Oh yeah, good. Good. Psi star. Good. Right. Those are the Schrodinger equations. They're equivalent to each other. I just wrote them both because I'm going to need them both. Okay. Now what do we want to do? We want to see how various expectation values change with time. If we identify the expectation value with more or less the center of the wave packet, if the wave packet looks like this, the center of the wave packet will be close to the expectation value. And so let's identify the center of the wave packet with the expectation value and ask how expectation values change with time. Let's see if we can work it out. So let's start with x. Let's see if we can calculate d by dt of the expectation value of x. And that of course is d by dt of the integral psi star of x x squared, no, just x, psi of x. When you wrote down the d over dx at the start, you said dt. dt, d by dt. In other words, the velocity of the center of the wave packet. That's all this is. It's the velocity of the center of the wave packet. Why does it change? It changes because psi changes. How do we know how psi changes? We know how psi changes because we know the Schrodinger equation. So this is going to be a little exercise, a little bit too tedious for my taste to do on the blackboard. It's not very tedious. But still, I may cut little pieces, take some shortcuts, but I'll tell you when I do. Okay, so this is what we want to calculate. And this of course, x is just x. It's not dependent on time. But psi star and psi are dependent on time. So when I differentiate, I get a term from each one of these. And this will give me integral d psi star of, well, let's just write d by dt psi star x psi plus integral psi star x psi dot. All right, one term coming from differentiating with respect to time psi star, another one from differentiating psi. Each one has x in there. And now of course, this is just psi star dot. All right, first thing is, we don't really have to calculate both of these. These are complex conjugates of each other. We have psi star dot here, we have psi dot here, we have psi here, psi star here and x is real. 
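For reference, the two equations just written down, and used repeatedly below, are (a reconstruction, with h bar equal to 1 and p = -i d/dx):

\[
\frac{\partial\psi}{\partial t} \;=\; \frac{i}{2m}\,\frac{\partial^2\psi}{\partial x^2} \;-\; i\,V(x)\,\psi,
\qquad\qquad
\frac{\partial\psi^*}{\partial t} \;=\; -\,\frac{i}{2m}\,\frac{\partial^2\psi^*}{\partial x^2} \;+\; i\,V(x)\,\psi^*,
\]

the second being just the complex conjugate of the first.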
So in fact, all we have to do is calculate one of them, and add its complex conjugate. That's all we have to do or equivalently, just take twice the real part. Adding something to its complex conjugate, the imaginary parts cancel and it's just twice the real part. All right, so we can use that. I'm not sure we'll need it, but let's say it's twice the real part of this. Twice this and keep in mind that we throw away anything that's imaginary. We throw away anything that's imaginary because we add the complex conjugate and that will throw it away. Okay, so now let's plug in d psi star by dt over here. And that gives us minus i over 2m d second psi, let's see, what are we doing? It's plus, I'm sorry, it's plus i. It's the upper equation there. The second psi by dx squared, I think it's minus i v of x psi. Okay, now the first thing we can see immediately is that the term with v of x is pure imaginary. The psi star, there's x, the psi, psi star psi is certainly real. x is real, v of x, the potential energy is real, and there's an i there. So the first thing to conclude is that the v of x doesn't contribute at all. It doesn't contribute at all because when you add the complex conjugate, it will cancel. So in this equation, we can throw this away over here. Psi star x v of x psi, that's real, with the imaginary part cancels. So we can throw this away. Okay, next, what's the next step? Integrate by parts. That's always the next step. It's always the next step. It's integrate by parts. We have here a derivative. It's a second derivative, but a second derivative is also a first derivative. Let's write it that way. It's d by dx of d psi by dx. Incidentally, partial derivatives have no, well, yeah, they do mean something. They mean derivatives with respect to x without changing time. Yeah, okay. All right, so that's our equation. And now I want to integrate by parts, which means I want this d by dx over here to shift to the other factor. I want to shift it to the other factor and I'm going to get two terms. Let's look at what the two terms are. The first term, the d by dx will just hit x. What does that give? What is d by dx of x? One. So that will give me twice integral psi star. The x will disappear when d by dx hits it. And there is d psi by dx, right? d psi by dx. This derivative got shifted to here. There's an i, I think, over 2m, right? There's an i that I left out over 2m. Oh, what else did I leave out? Integration by parts. Minus. That's one term. Now, again, you might think this is dangerous i here. We should throw it away because it's imaginary, but it's not imaginary. It's real. We'll see that in a moment. In fact, we'll see exactly what it is in a moment. But let's do the other term. The other term has an i over 2m. i over 2m. There'll be an x. The x does not get differentiated now. x. And then there is the psi star of x. Now the derivative hits the psi star, and we get derivative of psi star times derivative of psi. Okay, this derivative now hits the psi star. There's an i here, an explicit i, and this is to be integrated over x. It's still underneath the integration here. How about this? What's the reality property of this? Is this real or imaginary? This is probably a minus. Yeah, there is a minus. Okay, but it's imaginary. This is the complex conjugate of this, and x is real. So this integral here, the integral here is perfectly real. It gets multiplied by minus i, and the whole thing is imaginary. It will get eaten when you add the complex conjugate. 
So the only thing that's left is this. i over 2m with a minus sign times psi star, d psi by dx. Now I'm going to tell you exactly what this is. What is minus i d by dx? It's p, right? It's p. Minus i d by dx is p. So what this is, is it's the expectation value of p divided by m, p over m. Just look at it. Minus i d by dx on psi, that's the same as p on psi. Multiply it by psi star, that's the bra vector. Integrate it over x, that's just taking the inner product. In fact, it's real. In fact, it is really real. It looks imaginary because of this, but if you go and study it carefully, you find that it's real, and what it is is the expectation value of the momentum divided by m. So now we've proved an interesting theorem. We've proved that dx by dt is equal to p over m. We've seen this before. This is the classical relation between velocity and momentum. Mass times velocity is momentum. Now what we found is that this is going to be a feature of the motion of a wave packet. If we have a wave packet that moves in a nice way so that it holds itself together, which it may or may not do, but if it does, if we're in a regime of parameters where the wave packet moves like a nice wave packet, then we will find that the expectation value of its momentum defined in this way will be related to the expectation value of the velocity of the wave packet through the standard classical mechanical equations. Okay, that's half of mechanics. What's the other half of mechanics? We found that the p, the, we found, oops, I just gave it away. The x by dt is p over m. What's the other half? dp by dt. That's Newton's law. Time derivative of momentum is called force. So let's see if we can do the same thing with momentum. Yeah. In your summary, when you said that if everything is kind of grouped nicely, in that way, we didn't have a nicely grouped assumption in our derivation. Was it just that that's going to, it's going to look classical with this nicely grouped? Right. Yeah. There was no, there was no assumption here. There will be, as we'll see, a certain assumption in the other equation. No assumption here of that type. If the wave packet is really crazy, it might not mean anything very sensible, but still that was a, nothing I did up till now depended on any approximate notion of how the wave packet moves. Okay, we got this. Let's see if we can get the other half. Now, I don't guarantee to be able to get through it without, without all sorts of errors of i's and minuses, but let's try this one. Alright. P, the average of P, is gotten by acting with P on psi. And that's minus i. It'll give us a minus i, which I'll put over here, but then d by dx, d psi by dx. We could put the, we could put the derivative over here by an integration by parts and a sign change, but we'll leave it over here. So we're looking for d dx. We're looking for d dx. No, no, no, no, we want the time derivative of the momentum. What do we want to see? We want to see that the time derivative of the momentum is governed. Let's write Newton's equation. P dot is equal to force. That's it. What is force in terms of the things I've written on the blackboard? Minus dv by dx. So this is what we're going to want to show, minus dv by dx. But we'll show it in the form that d by dt of the expectation value of the momentum is minus the expectation value of dv by dx. dv by dx is also a function of x. If v is a function of x, dv by dx is a function of x. It's called the force. 
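To keep track of where we are, the first half of the calculation, as just completed, and the target of the second half can be written as (a reconstruction of the blackboard statements, no approximation involved so far):

\[
\frac{d}{dt}\langle x\rangle
\;=\; \frac{d}{dt}\int \psi^*\,x\,\psi\,dx
\;=\; \frac{1}{m}\int \psi^*\left(-\,i\,\frac{\partial\psi}{\partial x}\right)dx
\;=\; \frac{\langle p\rangle}{m},
\qquad\text{and next we want}\qquad
\frac{d}{dt}\langle p\rangle \;=\; -\left\langle \frac{dV}{dx}\right\rangle .
\]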
So what this says is the average rate of change of momentum is the average of the force. The force, the dv by dx is minus f. Okay. So that's what we want to show. We want to show that the p by dt is dv by dx. That's our goal. Okay. And how are we going to show it? Once again, we're going to show it by using the Schrodinger equation to tell us how things change with time. Okay. So let's see now. This may get a little bit unpleasant. I'm not sure. What I'm debating, all right, let's start working it out. And at some point I may say, well, I know this goes away and this doesn't go away, but we'll see how it goes. Okay. So first of all, this gives us integral psi star dot times d psi by dx minus i. And then there's another term. Where did that come from? That came from differentiating psi star with respect to time. Then there's going to be another term where d by dt will hit psi. Okay. So what will that be? That will be again minus i integral psi star. Now, the order of differentiation doesn't matter. So this can be written as d by dx of psi dot. I'm just wondering at one point I'm going to regret not having done an integration by parts. But it won't matter because if I do the integration by parts, I will regret it. So I might as well just go ahead. Is there an extra i? No, I don't think so. I don't think so. No, I think it's right. Okay. Okay. Oh, you know what? I think it will be convenient to integrate this one by parts. If I integrate this one by parts, I will get a plus i here and a d psi star by dx here. Now, the reason it's convenient is because the thing in here is the complex conjugate of the thing here. Okay. Psi dot star, well, that's the complex conjugate of psi dot. d psi by dx, that's the complex conjugate of psi star. But notice it's i times the difference. What do I get when I take i times the difference of two complex conjugates? You get a real thing again. All right. So I think it's just the real part of this. Well, let's just keep going. My i, let's just keep going. We can erase this and remember in the end that the answer has to be twice the real part of this. Twice the real part of this. And we don't have to keep track of both of them. Okay. Now, what do we do? We plug in the Schrodinger equation and pray like mad that it all goes well. Okay. So psi dot is equal to i over 2m d second psi by dx squared minus i v of x psi. This i cancels this i and gives a minus sign. So this i, well, is an i here. This will make minus. And the i over here times the i over here, I think will make plus. Okay. We have a somewhat unpleasant thing here. How unpleasant is it? Let's see. Not too bad. This piece, I believe, is pure imaginary. It is. It is. It is. This is pure imaginary over here. The way you see it, the way you see it is again, integrate by parts. Just this piece here. If I integrate this piece by parts, we'll get a derivative, we'll get a term with a single derivative of psi and a second derivative of psi star. Okay. Okay. What we'll discover is basically the difference between a thing and its complex conjugate. It will be a minus sign. Okay. Let's just see it. Yeah. Let's just do it. If I differentiate or, sorry, integrate by parts, that'll bring the derivative over to here. It's pure imaginary. I know that it is. Whenever you have a quantity complex conjugated times the derivative of the unconjugated thing and you integrate it, it's always pure imaginary. We've used that trick before. I'll just tell you right now. This piece is pure imaginary. 
It's not going to be of any significance. The significant piece is the integral of d psi star by dx times v of x times psi. That's all that's left after we take into account the little intricate details of complex conjugation. I can't remember whether we doubled it or not yet. I think we double it. I think we double it. Yeah, yeah, yeah. We double it. We have to double this. Yes, we have to double it, or add it to its complex conjugate, I would say. Right? We have to add it to its complex conjugate, I think. In fact, let's do that right now. Let's add to it its complex conjugate, plus the integral of psi star times v of x times d psi by dx. All right, I've taken this and complex conjugated it, and v is real. Okay, now we're almost there. We have d psi star by dx times v of x times psi, plus psi star times v of x times d psi by dx. Let's pull out the v of x. Let's pull it way out here, v of x. v of x times this. Now what is this thing here? Psi times d psi star by dx plus psi star times d psi by dx. It's the derivative of psi star psi with respect to x. This is the derivative of psi star times psi plus psi star times the derivative of psi. That is nothing but the derivative with respect to x of psi star psi. So let's put that in. And the next step, integrate by parts. Alright, integrate by parts. That'll take the derivative over to here, and it will give me minus the integral of psi star psi times d v by dx. Or, if I put the psi over here on the right, what is this thing? The integral of psi star psi times d v by dx, that's the expectation value of d v by dx. This whole thing is then minus the expectation value of d v by dx. In other words, it's exactly this. So we found, after a slight bit of painful integration by parts and a little bit of algebra and throwing away some things which I told you are pure imaginary, which they were, we find that d p by d t is exactly equal to minus d v by dx, but in the sense of expectation values, in the sense of the expectation value. Now why do I say, this also is exact. I didn't do anything illegal here. I didn't do any approximation. I'll tell you where it may be important that the wave packet have a nice shape. The expectation value of d v by dx is not the same as d v by dx evaluated at the average position. Let's just take v. Let's just take v. The expectation value of v of x. No, let's call it f of x, the force. The expectation value of f of x, which is after all minus d v by dx, the expectation value of a function of x, is not the same thing as the function of the expectation value of x. This represents the expectation value of x, the center of the wave packet, and this is a function of the center of the wave packet. It is not the same thing as the thing we calculated in general. This is the expectation value of f of x. Let me give you an example where they're very, very different, where they could be extremely different. Suppose, for example, that f were equal to x squared. Supposing the force happened to be x squared, and supposing the wave packet consisted of a pair of bumps. Let's see, is that what I want? I think that's what I want. Yes, that is exactly what I want. Centered about zero. What is the expectation value of x? What's f of the expectation value of x? Zero, right? On the other hand, what is the expectation value of x squared? Certainly not zero. This bump has an x squared that's equal to the x squared of that one. It's certainly not zero.
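Stepping back, the calculation that was just completed, before the x squared example, can be summarized as follows; this is a reconstruction in which the pure imaginary kinetic pieces have already been dropped, as in the lecture:

\[
\frac{d}{dt}\langle p\rangle
\;=\; \int \frac{\partial\psi^*}{\partial x}\,V\,\psi\,dx \;+\; \int \psi^*\,V\,\frac{\partial\psi}{\partial x}\,dx
\;=\; \int V\,\frac{\partial(\psi^*\psi)}{\partial x}\,dx
\;=\; -\int \psi^*\psi\,\frac{dV}{dx}\,dx
\;=\; -\left\langle \frac{dV}{dx}\right\rangle
\;=\; \langle F(x)\rangle,
\]

which is the expectation-value version of Newton's law. And the two-bump example can be made into a minimal worked case, with a made-up parameter a marking the positions of the bumps: if F(x) = x squared and the probability distribution consists of two equal narrow bumps near x = +a and x = -a, then

\[
\langle x\rangle = 0 \;\Longrightarrow\; F(\langle x\rangle) = 0,
\qquad\text{while}\qquad
\langle F(x)\rangle = \langle x^2\rangle \approx a^2 \neq 0,
\]

so replacing the average of the force by the force at the average position fails badly for a split wave packet like this.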
When wave packets are not nice single bumps, which are mainly characterized by their centers like this, then in general, you could not write that the time rate of change of the momentum is equal to the force evaluated at the expectation value of x. It's only if the wave functions are nice and so that concentrated over a fairly narrow range. If they're concentrated over a reasonably narrow range, then the expectation value of f of x is the same as f of the expectation value of x. This is where we have cheated a little bit in saying this looks like the classical equation of motion. That depends on the wave packet being nice and coherent and well localized. What are the circumstances under which a wave function will remain nicely localized? Well, I'll tell you what the circumstances are. The circumstances are that the particle is heavy. If the particle is heavy and takes two things, the particle being fairly heavy and the potential energy not having too many spikes, not having spikes in it or something like that, spikes. When the wave function hits spikes, it tends to break up. For example, if you have some sort of wave function coming in, a nice wave packet coming in, moving to the right, and it hits a point structure here, what will it do? It will spread out all over the place like that. The wave function will disintegrate all over the place. If, on the other hand, it hits a very smooth potential of some sort, then it will go through the smooth potential moving more or less according to the classical equations of motion. So we don't expect quantum mechanics to reproduce classical mechanics in every possible circumstance. We expect it to reproduce quantum mechanics in circumstances where it should, where particles are heavy and where potentials are nice and smooth and don't cause the wave function to break up into little pieces or disintegrate or scatter all over the place. Okay. Let's see if there are any other odds and ends that I wanted to. No, not tonight. That's where we don't have time for it. What sort of physical situations would correspond to, quote, bad potentials that break up the wave function? Well, okay. The situation which is bad tends to be when the potential has structures and features in it, which let's say have some size associated with them, some structure which we can call delta x, some size associated with the structure here, and where delta x is significantly smaller than the uncertainty in position of the particle. If the structures, if the, what do we want to call them, features, sharp features of the wave function take place on a scale which is much smaller than the then the size of the incoming wave packet, then it will break up the wave function to a lot of little pieces. Each one will scatter, they'll scatter off in different dimensions, different directions. Did I write this right? Yeah, that's right. Okay. Let's, let's, let's, it's in those cases where in some sense classical mechanics itself is breaking down. Is that right? Where, yes. In other words, you shouldn't expect it to run. No, no, no, no, you shouldn't expect it. No, no, no, no, of course. Of course, right. When they're, yeah, when they're, basically when the features in the potential are shorter than the wavelength of the of the particle, it'll break it up into a, right. Now, if you were to take a bowling ball and you were to ask, what is delta x? Well, let's see. 
Typically, it's true that typically delta p times delta x, the rule is that it's bigger than h bar, but in many reasonable cases it's of order h bar. Now, p, that's about as concentrated as you can get it, but for an ordinary macroscopic object, the uncertainty principle is pretty well saturated. Delta p times delta x is more or less about equal to h bar. Why that's so is a very complicated question, but very complicated. Now, what is delta p? Delta p is the mass times the delta velocity. Okay, so the uncertainty in velocity times the uncertainty in position is h bar over m. Now, if I put a bowling ball down on the ground, you know very well that the uncertainty in its velocity is not very big. The uncertainty in its velocity is in particular, as it gets heavier and heavier, you might expect the uncertainty in the velocity gets smaller and smaller. In any case, there's an m down here. And so, whatever delta v is, as m gets smaller and smaller, delta x will get bigger and bigger, right? Delta x will get bigger and bigger. Right. And in particular, it will tend to get bigger than the features in the potential. So, in the highly quantum mechanical limit where mass is very small and delta x tends to be big, the wave function will move under the influence of a ragged potential, which it sees as being much sharper and much more featured than the wave function itself. That's when it breaks up. On the other hand, as m gets very large, delta x gets small. Delta x will tend to get small as m gets very large. And so, for a large bowling ball, the wave packet might be very, very concentrated. And when it moves through here, it moves through like a tiny wave function, and the tiny wave function thinks that these features are very, very broad. Moving through broad, smooth features doesn't disrupt the wave function and break it up into pieces. So, large mass and smooth potentials is the limit of classical physics. Light mass and abrupt potentials is more quantum mechanical. Yeah. Just to get a feel for what you mean by large and small. If you, what is an electron large enough to behave classically as exploring? It's an interplay between the shape of the potential and the mass. If you take a very, very smooth potential of the kind that you might make, let's say, with a couple of capacitor plates, capacitor plates separated by a meter. Well, if we don't have to take a meter, a centimeter, with a smooth electric field between them, then the electron will move through it as a nice coherent, almost classical particle. On the other hand, if you take the potential associated with the core of the atom, the nucleus, that has a sharp feature in it, and that sharp feature, well, if an electron wave comes and hits it, will scatter it all over the place. So it's an interplay between the mass and the sharpness of the potential. Can you do that experiment with a nucleus and an electron and show that it actually does behave? Absolutely. No. And isn't that what led? It's, of course, you can do it with an electron. No problem with an electron. Very easy. The classic experiment was done in 1911. It wasn't done with electrons. It was done with alpha particles. Alpha particles come in and hit a gold nucleus. The gold nucleus is a small thing, and the result was that the gold nucleus, when the alpha particle comes in, wave, quantum mechanical wave of alpha particles comes in, hits that tiny gold nucleus, and it gets scattered in all directions. 
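Putting some rough numbers to the scaling described a moment ago (the numbers are purely illustrative and not from the lecture): for a nearly saturated wave packet, delta x is about h bar divided by m delta v. For a bowling ball of mass around 7 kg with, say, a velocity uncertainty of 10^-15 m/s,

\[
\Delta x \;\sim\; \frac{\hbar}{m\,\Delta v} \;\approx\; \frac{1.05\times10^{-34}\ \mathrm{J\,s}}{(7\ \mathrm{kg})(10^{-15}\ \mathrm{m/s})} \;\approx\; 1.5\times10^{-20}\ \mathrm{m},
\]

far smaller than any feature of an everyday potential, so the packet moves classically. For an electron confined to an atomic-sized region, delta x of order 10^-10 m, the same relation gives a velocity uncertainty of order 10^6 m/s, and the wave function easily spans the sharp features of the potential near a nucleus.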
That's why Rutherford was a bit surprised to see his alpha particles get scattered right back at him. And what did it tell him? It told him that the nucleus was very small. Right. So it didn't understand quantum mechanics. It didn't. Right. He just thought he had a beam of alpha particles, some of which hit the center, some of which went through here. Right. But in fact, what he really had was a wave. And the wave came in, hit the tiny gold nucleus, and got scattered all over the place. And with electrons, yeah, sure, it's very easy. That's what happens in accelerators when electrons hit small targets. They just get, oh, photons do also. Photons do also. Electromagnetic waves, if they hit structured objects with a structure smaller, smaller than the wavelength, they get scattered all over the place. If they hit very smooth things, they tend to propagate according to the geometric optics. For example, if you have a piece of material with a position varying index of refraction, if that index of refraction is varying very smoothly and very slowly over the wavelength of the photon, then geometric optics is a good approximation. If on the other hand you have a diffraction grading of distance comparable or smaller to the wavelength of the radiation, the diffraction grading will spread it out all over the place. So it's not just electrons, electrons, photons, or anything. Okay, if there aren't all the questions, I'm going to go home and find out if my son had his baby yet. Thank you. For more, please visit us at stanford.edu.
(March 19, 2012) Leonard Susskind concludes the course by wrapping up the major concepts that were covered throughout the quarter and discussing some of the limits of the field of quantum physics. In this course world renowned physicist, Leonard Susskind, dives into the fundamentals of classical mechanics and quantum physics. He discovers the link between the two branches of physics and ultimately shows how quantum mechanics grew out of the classical structure.
10.5446/15007 (DOI)
Stanford University. Alright, what I want to do first, I want to go through electromagnetic waves quickly. Just plane electromagnetic waves, plane meaning P-L-A-N-E, not P-L-A-I-N. Plane electromagnetic waves: how do they fit in with Maxwell's equations, what do they look like? I want to do it quickly because I don't want to spend a lot of time at it, but I want to do it clearly. First thing, let's write down Maxwell's equations and put them on the blackboard here, on the whiteboard, in both the covariant, the relativistic form and the non-relativistic form, and then we'll play with them a little bit. There are two sets of Maxwell equations. The first set are identities that follow from the existence of a vector potential, and they're called the Bianchi identities, and they are del dot B equals zero, and B dot, meaning the time derivative, the time derivative of B. If we want to, I suppose I should really write it as dB by dt, this is a vector quantity, dB by dt is equal to the curl of the electric field. Notice in every one of these equations, incidentally, on all sides of the equation, all terms, the fields always come differentiated. That's important. These are two of the equations, and these two equations are the ones which are identities. In the absence, let's take the theory in the absence of charges and currents. No charges and currents, empty space. Then the two other equations are del dot E equals zero, no charges, and an equation that looks like this, except E and B are interchanged, but not quite. So the other equation is dE by dt is equal to minus del cross the magnetic field. Those are Maxwell's equations. They have a lovely symmetry. They're easy to remember. Del dot B equals zero, del dot E equals zero, and one of them says that the time derivative of the magnetic field is the curl of the electric field, and the other one says the time derivative of the electric field is the curl of the magnetic field. The only thing you have to remember is where the sign goes. And that takes a couple of minutes to work out, but I'll leave it to you. These equations on the right-hand side can be written in a covariant form, which is called the Bianchi identity: d mu of f nu sigma plus d nu of f sigma mu plus, did I get that right? Yes, I did, plus d sigma of f mu nu is equal to zero. This is just a way of rewriting these two equations over here. These equations here, which are not identities, which are equations of motion, they would change if you changed the Lagrangian. We haven't introduced the Lagrangian yet, but we will. These equations have the form d mu f mu nu equals, in the case where there are no charges and currents, equals zero. Now this equation and this equation, both these equations here, all of them, do have a right-hand side that I have not included. This could be the charge density, and on the right-hand side here there would be a current. I've written the equations in empty space. No charges, no currents. There they are. All right, now we want to solve them. I'm going to put this up here, and by solve them, I mean write down a solution. We want to write down a generic form of a plane wave solution. A plane wave simply means a wave proceeding along some axis, and of course we have some freedom. We can choose along what axis we want to send the wave, but we can always rotate our coordinates so that the z-axis is along the direction of propagation of the wave. Let's do that. The wave is moving down the z-axis, along the z-axis, or is it up the z-axis?
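Before going on to the wave itself, here for reference are the equations as written on the board, in units with c = 1 and with no charges or currents. The placement of the overall minus sign in the two curl equations follows the convention used in this lecture; as noted, only keeping track of where the sign goes matters, and other texts put the minus on the other curl equation:

\[
\nabla\cdot\vec B = 0, \qquad \frac{\partial\vec B}{\partial t} = \nabla\times\vec E
\qquad\text{(the identities, from the existence of a vector potential)},
\]
\[
\nabla\cdot\vec E = 0, \qquad \frac{\partial\vec E}{\partial t} = -\,\nabla\times\vec B
\qquad\text{(the equations of motion, empty space)},
\]

and covariantly

\[
\partial_\mu F_{\nu\sigma} + \partial_\nu F_{\sigma\mu} + \partial_\sigma F_{\mu\nu} = 0,
\qquad
\partial_\mu F^{\mu\nu} = 0 .
\]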
Maybe it's up the z-axis, from smaller z to larger z. It has wavelength lambda equals 2pi divided by k, where k is called the wave number, and a plane wave of that form, the typical generic plane wave of that form, generally has the functional form sine of kx minus omega t, where we'll have to figure out in a moment what omega is. k can be anything. Fixing k is fixing the wavelength, and the wavelength can be anything. Later on we will discover that omega must be fixed in terms of k. Omega is determined in terms of k, and each component of the field, electric and magnetic, all six components, you know what, I'm going to change this. I said the wave was going down the z-axis, didn't I? z-axis. This is a wave going down the z-axis with wave number k, frequency omega, but which components of the field are we talking about? Well, we will assume all components of the field that are non-zero, those which are non-zero, have exactly this form. So let's write out a generic form of this type. The x component of the field as a function of z and t, this is the x component, as a function of z and t, we're going to set equal to some number epsilon x, it's called the polarization vector, but it's just a number, times the same sign. Sign kz minus omega t. Incidentally pay no attention to whether I write upper or lower kz, it's the same thing. Ey equals epsilon y, a different number. Sine of kz minus omega t, ez equals epsilon z. The e here means a function. Epsilon just means numbers. Same sign, sign k times z minus omega t. Looking for the magnetic field, bx, we'll assume as the form, a new set of numbers, we'll call them beta, beta x, sine kz minus omega t, I'm getting tired of writing them, but we'll write them anyway, beta y, sine ky minus omega t, and finally bz, and I think I'm going to give up and not write it. You know what to write there. Well, I'll write beta z. Okay. Now first of all, you might ask, how do I know that the magnetic fields and electric fields have both the same kind of dependence? Well I said it's a plane wave, if I say it's a plane wave, I mean that it's a sign, and I mean all components of the fields are some sort of signs, some sort of waves that look like signs, and this is a fairly generic thing to write down, but you could also write down cosine, right? You could write cosine. I could even do something else. I could make the electric field sign and the magnetic fields cosines. What would happen if I did that? If I did that. So we go to the equations up there, the Maxwell equations, and we note that every Maxwell equation, in particular those which relate electric and magnetic fields, think about those, the e dt equals the del cross b, all right? When you write those equations, in each one of them you will be differentiating either with respect to z or with respect to t. Either you get, if you differentiate sign with respect to z or t, you'll get cosine, all right? So the left hand side of any equation of motion will involve a cosine. The right hand side is also going to be differentiated. For example, the curl operation is a derivative, and so if I differentiate the magnetic field, I will also get cosines, and I'll get something consistent. I'll have that the left hand side of the equation is a cosine, the right hand side of the equation is a cosine. Supposing I tried to make the electric field sign and the magnetic field cosine, then I would get into trouble. 
Differentiating for example with respect to time, the electric field would give me cosine, but differentiating the magnetic field would give me sine, I'd be in trouble. So you have to have, if you're going to have sine here, you have to have sine here. Now I've made a number of conventional assumptions. I've said that the wave is moving along the z-axis. There's no loss of generality there. I just rotate my frame of reference until the z-axis is along the direction of the wave. I have also, let's forget the relation between electric and magnetic field. I could put cosine, or I could put sine kz minus omega t plus a constant in here, a phase. A sine is a wave that looks like this. Can't I push the wave forward a little bit? Push it up to here? Well sure I can, I can have a wave that's slightly ahead or behind the wave that I wrote down, but that's a question also of a convention. I will simply, if you want to start with a wave which is slightly ahead at time t equals zero, fine. Just move your z-axis until the z-axis is where the wave happens to be zero. Once you do that, again, the wave gets back to this form here. This is pretty general, but now I have not in any way used any of the equations of motion. Let's use the equation of motion and see what they say. First of all, let's take del dot e equals zero. Del dot e, what is del dot e? Del dot e is partial of ex with respect to x plus partial of ey with respect to y plus partial of ez with respect to z. All right, now, ex does not depend on x. That's gone. Ey does not depend on y. That's gone. I'm supposed to set the sequence zero, incidentally, but ez does depend on z, and so if I differentiate it, I'm going to get something. But I'm not supposed to get anything. I'm supposed to get zero on the right-hand side here. The only way to ensure that is to say that epsilon z, for a wave moving along the z-axis, the z component of the field must be zero. Otherwise, the electric field will have a divergence, and that's not allowed. So this is zero. A plain electromagnetic wave moving along an axis does not have any electric field along that direction. Same is true of the magnetic field. For that, we use del dot b equals zero. Same thing. This is also equal to zero. All right, so we've used up the del dot equations. Now how about the other equations? Let's see what they say. Let's pick one. Let's take the edt. Oh, one more thing. One more thing. Yeah. These epsilon x and epsilon y are constants. They don't vary. They're just numbers. They're vectors in the x, y plane. They're vectors in the x, y plane, which determine, roughly speaking, the ratio or the amount of e x and e y that's there. You can always rotate in the x, y plane. When the wave is going that way, you can always rotate your axes, the x, y axis, until one of these is zero. For example, you can always line up your x axis so that e x is not equal to zero, but e y is zero. These here are a little two-dimensional vector in the x, y plane, and you can always rotate your coordinates so that one of the two components is equal to zero. That's again no loss of generality. Just remember that we've used up whatever rotational freedom we've had to set this also equal to zero. I cannot do it yet for the magnetic field. I don't know what this implies about the magnetic field. We'll find out in a moment. This is my starting point, an electromagnetic wave with an electric field that points along the x axis, which oscillates as it goes downstream or upstream. 
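In symbols, the plane-wave ansatz and what the two divergence equations do to it look like this (a reconstruction of the board work, with the wave taken to move along z):

\[
\vec E = \vec\epsilon\,\sin(kz - \omega t), \qquad
\vec B = \vec\beta\,\sin(kz - \omega t),
\]
\[
\nabla\cdot\vec E = \epsilon_z\,k\cos(kz-\omega t) = 0 \;\Rightarrow\; \epsilon_z = 0,
\qquad
\nabla\cdot\vec B = 0 \;\Rightarrow\; \beta_z = 0,
\]

and the remaining freedom to rotate about the z axis is used to set epsilon_y = 0, leaving the electric field along x.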
Let's now use the edt equals minus del cross b. The edt equals minus del cross b, and there's only one component, well, there's two components of it that we have to worry about. It's a trivial matter to see that the z component of the equation is trivial. Let's look at both the x and y components of this equation. First of all, the x component, the ex by dt is the x component of minus del cross b. What's the x component of del cross b? That's equal to minus dy bz minus dz by. Now we have an equation relating the ex dt to bz and by. But bz is zero. We've already said bz is zero, so we get rid of this and we just get plus. The ex by dt is dy by dz. The first thing it tells us is that by cannot be zero. We set ey to zero, but now we find out that by cannot be equal to zero. In other words, the electric and magnetic field cannot be parallel to each other. You cannot have a situation where by is not equal to zero if ex, sorry, where by, you know what I mean. Okay, so the ex by dt, so let's plug in now. The ex by dt, that's just differentiating that. That gives us epsilon, I'll just call it epsilon since there's only one of them, let's just call it epsilon. And now we want to differentiate with respect to time, that gives us a factor of minus omega with differentiating sine times cosine of omega of kx minus omega t. Everybody follow what I did? I just differentiated this with respect to time. The time derivative gave me a minus omega and that's what we have on the left hand side. Sit again, okay, you're right, thank you, kz. Right, now we're going to differentiate by, here it is, and that's going to equal, must be equal to beta y times k with differentiating with respect to, what do I write here? I keep writing the wrong letter here. Kz, by, they're all kz times k times again cosine. All right, the cosines cancel and what we find is an equation. The by, here it is by, cosine cancels, by is equal to minus epsilon x omega over k. By is equal to minus epsilon, this is epsilon x, omega over k. Second fact, okay, a second fact we can get by applying the same equation to the y component. Let's work out the y component here. So here we have the E y by dt, but that's zero. We've said that E y is equal to zero. What do we get on the right hand side? The right hand side has dxBz minus dzBx. Now do I have the sign right or do I have the sign wrong? It doesn't matter. It's either dxBz or d, you know, this is, let's see, this is the y component of the curl which means, I think I have the sign right here, but it hardly matters since the left hand side is known to be zero. It says the right hand side is zero, but Bz itself is equal to zero. And what is this going to tell me? If I plug in Bx, if I plug in Bx, the only way it can be zero is if beta x is equal to zero. All right, that's the only way that this can be equal to zero. When I differentiate Bx with respect to z, I'll get something non-zero unless beta x is equal to zero. So this equation tells me beta x is equal to zero. And what does this say? This says that if the electric field, let's draw some coordinates now, let's draw a picture of this wave. What we know so far is our x-axis, our y-axis, and our z-axis. And the wave is moving along the z-axis, so it's a sine wave, moving along the z-axis. What this tells, and we've assumed, that the electric field points along the x-axis. OK, so the electric field looks like so. At a particular instant of time, it looks like that, and it moves. 
It moves, of course, with the speed of light, which we've said equal to one. I've assumed we've said equal to one. So that's the electric field, E. And what these equations tell us is that the magnetic field lies along the y-axis. There's no component along the x-axis, there's only a component along the y-axis, so it looks along the y-axis. It's also a sine wave. I'm trying to draw this in a sort of three-dimensional way. Think of the magnetic field as lying along the y-axis, oscillating as you move down the y-axis, and the electric field along the x-axis. So the first thing you learn, well, maybe not the first, but another thing we've learned, is that the electric and magnetic field are perpendicular to each other, and they're also both perpendicular to the direction of motion of the wave. The words for that is that the electric and magnetic fields are transverse. That means they're perpendicular to the direction of motion of the wave. All right, there's one more piece of information. Here we used the edt equals del cross b, minus del cross b equation, and we got a relationship between beta and epsilon. Getting rid of the indices here, there's only beta y is nonzero, and only epsilon x is nonzero, so we can write this just in the form beta is equal to minus epsilon omega over k. That's the equation that comes from the edt equation. There's also the db dt equation. It looks exactly the same, has exactly the same structure, except for a minus sign, and b and e are interchanged. With no further work, I can write down the corresponding equation. Just interchange beta and epsilon and change the sign. All it says is that, if you do it right, you'll find out that the other equation is that epsilon is equal, I think it's also minus beta omega over k. There are two signs, the two signs conspire, I think, to give you the same relationship. What does that tell you about epsilon? What does that tell us? It tells us, do I have it right? One is, what I wanted to say is that omega over k equals one. It's having trouble getting the words out. It says that omega over k is equal to one. It says it's the inverse of itself. It says omega over k is k over omega. The only way omega over k can be k over omega is if omega equals k. That's the last piece of information. No they don't, if c is one. If c is one, they don't have different units. This has, omega has units of inverse time, k has inverse distance, and distance and time are the same. That tells me then that I can simplify this and just write it as sine of kz minus kt, or just sine of k times z minus t, everywhere. The sine of kz minus t. Now where would the speed of light go into this? k times z minus t. Well, it's of course the speed of light would go right where it needs to go in order to make the units correct. So the speed of light would go over here. Maxwell's equations don't have speeds of light in them. I've left them out because when you're doing relativity, you don't want to be bothered by being equal to one. So that's an electromagnetic plane wave. It's transverse. The electric and magnetic fields are equal to each other but orthogonal. The magnitude of the electric magnetic field are equal to each other, epsilon equals beta. They're orthogonal to each other and the whole blooming thing moves down the axis sort of rigidly. As the electric magnetic field always in phase with each other or can be out of phase? In a plane electromagnetic wave, they're always in phase with each other. 
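Putting the component equations together summarizes the wave just described (a sketch following the sign conventions used in this lecture, with c = 1; only the relative signs depend on those conventions):
\[
\frac{\partial E_x}{\partial t} = \frac{\partial B_y}{\partial z}
\;\Rightarrow\; \beta_y = -\,\epsilon_x\,\frac{\omega}{k},
\qquad
\frac{\partial E_y}{\partial t} = 0
\;\Rightarrow\; \frac{\partial B_x}{\partial z} = 0
\;\Rightarrow\; \beta_x = 0,
\]
\[
\beta = -\epsilon\,\frac{\omega}{k},\qquad
\epsilon = -\beta\,\frac{\omega}{k}
\;\Rightarrow\; \Big(\frac{\omega}{k}\Big)^{2} = 1
\;\Rightarrow\; \omega = k ,
\]
\[
\vec E = \epsilon\,\hat x\,\sin\!\big(k(z-t)\big),
\qquad
\vec B = \beta\,\hat y\,\sin\!\big(k(z-t)\big),
\qquad |\beta| = |\epsilon| .
\]
Restoring units, the argument becomes k(z minus ct): the fields are transverse, mutually perpendicular, equal in magnitude, and in phase.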
Circular polarization is a little more tricky but we'll do plane waves. Yes, they're in phase with each other. Yeah, but they're not 90 degrees out of phase. You can't, right. That's right. Okay, that's the electromagnetic fields. Yeah, energy is conserved. The fields go from maximum simultaneous length to zero simultaneous length. Yeah, yeah, right. But the energy that was over here just moves to here. It's not that the energy isn't conserved. So enough, the energy density at a point changes. The energy density is related to the square of the electric and magnetic field. So if you stand at a point, the wave goes by you. First you see some energy and then you see no energy. And then you see some energy and then you see no energy density. But the total amount of energy, energy over here, after a little while is just replaced by energy over here. A little later it's replaced by energy over here. The only other thing you need to know about electromagnetic waves is that you can add them, you can choose epsilon to be any number you want, big or small, beta will follow, and you can add waves or subtract them even in different directions. And you still have a solution to Maxwell's equations. All right, I wanted to go through that because it's kind of a little ridiculous to teach a course in electromagnetism and not write down the electromagnetic plane wave, which I suspect many of you have seen before. So I was pretty quick about it. Now we want to get on. Yeah. You assume that E and B have the same things. No, I don't need to assume they have the same phase. That's the statement that both sides of the equations involve a derivative. So when you differentiate E and set it equal to some other derivative of B, you can't afford to have a phase difference here and here. Right. Yeah. I'm sorry, I missed your argument on why that last equation, epsilon is minus beta omega over k, the second one there, you're looking at it right now. Yeah. The top one I thought we'd drive that because. How did I get to this one? Yeah. Oh, just by noticing that the other equation up there has exactly the same form. Except B into change with E. Right. All right, I'll let you go through that in detail. It's easy. Yeah, it's fun. Lagrangian. As I've emphasized over and over again, all of the ideas of energy conservation, momentum conservation, relationship between those conservation laws and symmetry laws follow only if you start with the principle of least action. You can write down all sorts of differential equations for various kinds of things and there won't be energy conservation. There'll be no energy to be conserved unless those equations are derived from Lagrangian from an action principle. So since we are inclined to deeply believe in energy conservation, we should be looking for a Lagrangian formulation of Maxwell's equations. Okay, so let's go through the principles and see if we can make our best guess about what Lagrangian is and then what we want to do, the first set of equations on the left, those are identities. We don't need to prove them. They just follow from definitions. We want the second set of equations which, let's add the right-hand sides to them. Let's write it this way. This is equal to rho, which is also equal to the time component of the four vector of current. And now let's put this on the left-hand side of the equation and then the right-hand side of the equation is the current vector, the space component of the current vector. This is a vector, this is a vector, that better be a vector. 
And on the right-hand side, in the covariant equations, this just becomes J nu. The time component of this equation, J naught, simply gives you this one and the space components combine and conspire to give you the other Maxwell equation. Okay, how do you get these equations from a Lagrangian? So let's start with some principles. The first principle is locality. Oh, you know, I'm going to take a digression first before... No, we're okay. All right. Locality. Again, that means whatever's going on at one instant of time is only related to a neighboring instant of time and a neighboring position. And that's always guaranteed if the action is an integral, in the case of a particle, an integral along a world line, in the case of a field theory, an integral over four-dimensional space, the x dy dz dt, or d4x, of a Lagrangian density where the Lagrangian density depends on all of the fields, however many fields are in your theory, let's just call them phi for the moment. They're, of course, going to wind up being the vector potential, but for the moment, let's just call them phi. There could be several of them. I won't write several of them. Phi will stand for whatever fields exist. And also, derivatives of phi with respect to space and with respect to time, I'm going to use the notation that the derivative of a thing with respect to x mu, sometimes I'll just write it d mu phi, but other times I'll use an even more condensed notation, phi, comma, mu. That's a standard notation. A comma means derivative, and sub-mu means derivative with respect to the mu-th component of x. This is a standard notation, and I will use it. So locality says the Lagrangian depends on phi and phi mu, same phi, phi, comma, mu. Now this doesn't mean one particular derivative, it means all of the derivatives. Phi mu is a generic symbol here. It means time derivative, space derivative, all the derivatives. And that's the condition basically that the equations of motion are differential equations that relate things locally. Next condition. Next condition is Lorentz invariance. Lorentz invariance, that's simple. The Lagrangian should be a scalar. It should not depend on which frame of reference it's evaluated in. It should be a scalar. So for example, for a single, simple scalar field, let's just go back over it again, for a single, simple scalar field, you can write down lots of different things, but some simple things would be Lagrangian contains, for example, it could contain any function of phi. Phi is itself a scalar, any function of phi is a scalar. So let's see, what did I call it in my notes, either u or v, I don't remember. u or phi, it could contain u or phi, any function. And it can contain, and it must contain, derivatives, otherwise it's not a very interesting Lagrangian. So it could contain things like, it can't contain dx phi by itself. That would be nonsense. That's not a scalar. That's the x component of a vector. You can't put dx phi in and expect a Lagrangian to be, it can be in the Lagrangian, but not just in this simple form. What you can have is d mu phi, d mu phi, or which is the same as phi mu, phi comma mu, phi comma mu. And what's the difference between the upstairs index and the downstairs index? Does it mean anything to put this index upstairs? It's just a change of sign for the time component. So for example, this object here, which is a scalar, is really minus phi dot squared plus dx phi squared plus dy phi squared and so forth. It's just this change of sign of time relative to space. 
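In the condensed notation just introduced, with the metric signature (minus, plus, plus, plus):
\[
\phi_{,\mu} \equiv \partial_\mu\phi \equiv \frac{\partial\phi}{\partial x^\mu},
\qquad
\partial_\mu\phi\,\partial^\mu\phi
= -\dot\phi^{\,2} + \Big(\frac{\partial\phi}{\partial x}\Big)^{2}
+ \Big(\frac{\partial\phi}{\partial y}\Big)^{2}
+ \Big(\frac{\partial\phi}{\partial z}\Big)^{2}.
\]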
So assume you know about that. This could go into the Lagrangian. It's a good scalar. And by convention, this is really more of a convention than anything else. We put a minus one-half there. So Lagrangian for a simple scalar, this is not the only thing you could write. This is just a simple thing. Lagrangian could be minus one-half d mu phi, d mu phi minus any, oops, this should be a minus sign here, minus any u of phi. As I said, that's not the most general thing you can write down. Many, many more things. But this is a simple Lagrangian. Last thing, before we go on to electromagnetism, let's just recall how you go from the Lagrangian to the equations of motion. These are the Euler-Lagrange equation. To each field, now I've only written one field here, but there could be several fields. We might add the Lagrangians for several different fields together. We might even put them in in a more interesting way that mixes them together in various ways. For each independent field, in this case only one of them, we begin with the partial derivative of Lagrangian with respect to phi, mu. Derivative just means this derivative here. And then we differentiate that with respect to x mu. That's the left-hand side of the equation. The right-hand side of the equation is the derivative of l with respect to phi. This is the immediate analog of the Lagrange equation's emotion for a particle, which would read d by dt of partial of l with respect to the time derivative of a coordinate is equal to dL by coordinate. So what does this give? I'll let you work it out. It's very easy. It just gives you, we've done this before, this gives d second phi by dt square. D squared minus d second phi by dx squared. I won't bother writing y and z. Tata tata, y and z is equal to minus du by d phi. Simple wave equation. We've done this before. I wanted to remind you because we're going to do exactly the same thing, exactly the same process with the vector potential. It's more complicated. I'll try to do it in the simplest way that I can. You know, it's got more indices floating around. One more principle, locality, Lorentz invariance, and when it comes to electrodynamics, one more principle, Gage invariance. So let's put Gage invariance here. For the moment, let's set the current equal to zero. No charges, just electromagnetic field. Gage invariance tells us the Lagrangian, or at least the action. The action should not change when you make a Gage transformation. The simplest way to ensure that is to make the Lagrangian itself out of things which are Gage invariant. The things which are Gage invariant are f mu nu. The electric and magnetic fields. The electric and magnetic fields are Gage invariant. They don't change when you add to a. And since the electromagnetic field f doesn't change when you make a Gage transformation, anything that you construct out of f will also not change. One thing you can't put in, is there something you can't put in? For example, you can't put in a mu, a mu. Why not? This is a scalar. It's made up out of the fields. Incidentally, these muses here don't mean derivatives. These are the components of the fields a. If I wanted to differentiate, I've got to put that comma in. The comma indicates derivative. And I'll try to be careful about it. This just means a naught squared minus, or a x squared plus a y squared plus a z squared minus a naught squared. This is a scalar. Perfectly good Lorentz invariant. But it's not Gage invariant. If you shift a in this way, this quantity would change. 
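For the simple scalar Lagrangian above, and for the forbidden A mu A mu term, the bookkeeping works out as follows (a sketch; S is the arbitrary gauge function, with the gauge transformation written as A_\mu \to A_\mu + \partial_\mu S):
\[
\mathcal L = -\tfrac12\,\partial_\mu\phi\,\partial^\mu\phi - U(\phi)
\;\Rightarrow\;
\partial_\mu\!\left(\frac{\partial\mathcal L}{\partial\phi_{,\mu}}\right) = \frac{\partial\mathcal L}{\partial\phi}
\;\Rightarrow\;
\frac{\partial^2\phi}{\partial t^2} - \nabla^2\phi = -\frac{dU}{d\phi},
\]
\[
A_\mu A^\mu \;\to\; \big(A_\mu + \partial_\mu S\big)\big(A^\mu + \partial^\mu S\big)
= A_\mu A^\mu + 2\,A^\mu\partial_\mu S + \partial_\mu S\,\partial^\mu S
\;\neq\; A_\mu A^\mu ,
\]
so A mu A mu is Lorentz invariant but shifts under a gauge transformation.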
So this is no good. Can't put it in. Illegal. Not Gage invariant. You can put in any component of f mu nu in any way that you like as far as Gage invariance goes. But as far as Lorentz invariance, you've got to make a scalar out of it. So how do you make a scalar out of f mu nu? Well, there's a number of ways. The rules are always the same. Contract indices. If you have two lower indices, if you have a tensor, some general kind of tensor, t mu nu. And you want to make a scalar out of it. You raise one index, and then you contract them by setting the indices equal and summing. That's the general rule for making scalars out of tensors. What would happen if you tried that with f? That would equal f mu mu. No. Sorry. What about taking f mu mu? What is that? OK. So what is that? It's zero. The reason is the following. All of the components of f are such that the diagonal components are zero. f 1, 1 is equal to zero. f 2, 2 equals zero. f 3, 3 is equal to zero. Raising this index here, the second index, that doesn't change anything. It just changes the sign. But the sign of zero doesn't change when you change its sign. f 1, 1 is zero and so forth. So taking f mu mu, which means f naught naught plus f 1, 1 plus f 2, 2 plus f 3, 3, that's just plain zero. Nothing there. So that's not a good scalar to put in Lagrangian. This doesn't, it's just zero. What else? Anything else we could put in? I thought I wrote down a couple of things, but the, well, the first non-trivial thing that you can write down, by first I mean the lowest order in f. Nothing linear in f works. You can try things which are quadratic in f. And then you get something. You can have f mu nu, f mu nu. That means something and it's not zero. And let's see what it means. Let's see if we can see what it means. Let's see if we can work out exactly what this quantity is. OK. First of all, there are the mixed components, space-time mixed components. There would be f naught n, f naught n. All right? That's where mu is naught time and nu is space n. What does that equal to? That's almost the square of the electric field. The mixed components of f are the electric fields. The only reason it's not the square of the electric field is because to raise the time component you have to change the sign. So this thing here is equal to minus the electric field squared. So that's one kind of term that appears in here. f naught n, f naught n, that's where mu is a time component. Now, actually this term appears twice. It appears twice because you can have mu be the space component and nu be the time component. Here we had the first index being time, the second index being space, but there's the opposite possibility. The first index being space, the second index being time, and that just gives us twice the same thing. Now first of all, this contains minus twice the electric field squared. But what else? It also contains space components. For example, f one two, meaning xy, f xy times f xy, one two. Using space indices does nothing. What is f one two in terms of electric and magnetic fields? It's b three, z, x, y, z, one, two, three. So there is plus b z squared, but of course all possibilities enter. So you have f one two, which enters twice, and enters because there's f one two, f one two, and also f two one, f two one. So there's twice b squared. And that's what it is. It's twice b squared minus e squared. Again there's a convention. When I say there's a convention, I mean it would affect nothing if I didn't do it. 
It would not affect the equations of motion. But some ways along the line, people started with the definite conventions about the Lagrangian. The convention is such that the Lagrangian begins with plus e squared minus b squared. In other words, there's a minus sign that would make this plus. Let's make the equation correct, first of all. You knew, you knew. OK first of all, conventionally one puts in a minus sign so that the electric field comes in with a positive sign. e squared minus b squared times a factor of two. Again the factor of two only occurs because each contribution appears twice. f one two times f one two and f two one times f two one. They're both the same thing. All right, second as part of the convention is a factor of one quarter minus one quarter. And that makes this a half e squared minus b squared. Again there is no content in it other than to make it consistent with ancient conventions the minus a quarter here. What about f zero zero? All the diagonal elements of f are zero. f one one, f two two. So they're only the off diagonal elements. The off diagonal mixed space time components, those are electric. The space space components are magnetic. All right, so that's the Lagrangian e squared minus b squared. It's Lorentz invariant, it's local. And it is in many ways not only the simplest thing you can write down, it's also the correct thing for electrodynamics. It gives rise to exactly these equations for the moment I have not included J. OK, so let's see if we can see that. Not a little bit of a nuisance but not too bad. I'll try to take you through it reasonably simply. First of all, what are the fields? Let's write down the Euler-Lagrange equation again. Euler Lagrange, here it is. E-L equation. For each field, and write down first the derivative of the Lagrangian with respect to phi comma nu, then differentiate the whole thing with respect to x nu. Set that equal to the derivative of the Lagrangian with respect to phi. For each field, you write a separate equation like this. Now what are the fields that I'm talking about? They are the vector potentials, a naught, a x, a y, and a z. These are distinct fields, sorry, they stick to either x, y, and z, or 1, 2, and 3. Those are the independent fields. Again, these indices here do not correspond to derivatives. What about derivatives? Let's introduce a notation for derivatives of the fields a1, a2, and a3. Well, we can have the mu-th component of the field differentiated with respect to the nu-th coordinate. A mu comma nu literally means the derivative of A mu with respect to x nu. What about the field tensor in this form? The field tensor, where, let's write it down over here, is the field tensor, f mu nu equals the A mu by dx nu minus the A nu by dx mu, which is the same thing as A mu comma nu minus A nu comma mu. OK, you got the notation? It saves writing a bunch of partial derivatives, fraction bars, x's, which are simplified, the comma means derivative. OK, so now we can write down Lagrangian and try to work out the Euler-Lagrange equations of motion. In working out the Euler-Lagrange equations, we take the four components of A to be independent separate fields, so there will be a field equation for each one of them. All right, so here we have it. Minus is L. L is equal to minus one-quarter A mu nu minus A nu mu. Minus f mu nu times the same thing again. Oh, no, we have to put the index upstairs. We have to put the indices upstairs. 
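With the definition of the field tensor used here, F_{\mu\nu} = A_{\mu,\nu} - A_{\nu,\mu} (some texts use the opposite sign, which changes nothing quadratic in F), the contraction described above works out to
\[
F^{\mu}{}_{\mu} = 0,
\qquad
F_{\mu\nu}F^{\mu\nu}
= 2\sum_{n} F_{0n}F^{0n} + 2\big(F_{12}^{\,2}+F_{23}^{\,2}+F_{31}^{\,2}\big)
= -2\,\vec E^{\,2} + 2\,\vec B^{\,2},
\]
\[
\mathcal L = -\tfrac14\,F_{\mu\nu}F^{\mu\nu} = \tfrac12\big(\vec E^{\,2}-\vec B^{\,2}\big),
\]
which is the convention with the quarter and the overall minus sign described above.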
And again, I emphasize over and over the only difference between upstairs and downstairs is the sign change when you move a time index. So that's the Lagrangian minus one-quarter f mu nu f mu nu. Let's see if we can work out the field equation and let's pick a particular component of A. This is all there is. That's it. That's the Lagrangian. And you can write it out in detail if you like in terms of derivatives of A. The things which correspond to phi are the A's and so we should now work out the Euler-Lagrange equations. There will be an equation for each component of A. So pick a component. Do you mind if I pick a little bit? What did I pick here? It won't matter if they all work the same way. Yeah. Let's work out the equation of motion for the component A sub x. A sub x or what's the same thing? A sub one. Let's see if we can work out the equation of motion. The first thing we have to do is we have to compute partial derivative of L with respect to A x mu. That's the analog, where is it? That's the analog of this object over here except where phi is given by A x. And then we have to differentiate it with respect to x mu. But let's do that later. We'll do that in a minute. So what we need to calculate is the derivative of the Lagrangian with respect to specific derivatives of specific components of A. Let's take an example. The derivative of Lagrangian with respect to A x, y. That's fairly generic. This is the derivative of Lagrangian with respect to A x, y. Everybody understand what that means? You look in the Lagrangian, you look for A x, y. That means the derivative of A x with respect to y and you differentiate the Lagrangian. OK, where are we going to find A x, y in here? Basically this one term in the sum of a mu and nu, it's A x, y minus A y, x times A x, y minus A y, x. So let's write down what we get. What we get is minus one-half. I'll tell you why we're one-half in a moment. Derivative of A x with respect to y minus the derivative of A y with respect to x squared. There are lots of other terms in the Lagrangian, but they do not contain A x, y. There are other terms, but they do not contain A x, y. Now why do I have a half here instead of a quarter? Because there are two terms. There are two terms. There's one term in which mu is x and nu is y and there's another term in which mu is y and nu is x. So you basically get two terms for each combination mu and nu. All right, now let's write this out. Let's write this out in some detail. Minus a half, put a big bracket around it. This is A x, y squared. I've squared this. Now there's the cross term minus A x, y, A y, x with commas in the right place. Twice that. I think I better clean that up, not very clear. Let's try it. Equals minus one half derivative of A, let's use the condensed notation, A x, y squared. Just coming from squaring, just binomial theorem. Again there's minus twice the A x, y, A y, x. And the last term is plus A y, x squared. This is the only term in the Lagrangian which contains A x, y. It also contains other things, but it's the only term which contains A x, y. So far, this is the term in the Lagrangian that we're interested in. Now I want to differentiate it with respect to this particular component. It appears here. Let's circle the places where it appears. It appears here, squared, and it appears here multiplied not by itself. A x, y, and A y, x are different objects. This is derivative of A x with respect to y. This is derivative of A y with respect to x. They are different objects. 
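Isolating that one term and expanding the square (the same algebra, written out):
\[
\mathcal L \supset -\tfrac12\big(A_{x,y}-A_{y,x}\big)^{2}
= -\tfrac12\Big(A_{x,y}^{\,2} - 2\,A_{x,y}\,A_{y,x} + A_{y,x}^{\,2}\Big),
\]
and only the first two pieces contain A x comma y.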
We're searching out the dependence on A x, y. So what's the derivative of this Lagrangian with respect to A x, y? Very simple. We just... The derivative of A of L with respect to A x, y from here is minus A x, y. What happened to the factor of half? It's just this is squared. So when you differentiate a square of something, you get a 2. So this is A x, y. And now we also have this one over here which gives us minus. We're differentiating with respect to A x, y. That gives us A y, x. So it gives us plus A y, x. It took a long time to get there, but the answer is very, very simple. The derivative of L with respect to A x, y is just minus F x, y. Derivative of A x with respect to y minus derivative of A y with respect to x is F x, y. So it was a long haul, but let me write down the general formula. Oh, incidentally, this is also equal to minus F x, y. Space components can go up or down for free. So let me write down now the general formula. The derivative of Lagrangian with respect to A mu, nu. That's what we're going to need. That is equal to minus F mu, nu. If we went through the same exercise and did it for every component, we would discover that differentiating Lagrangian with respect to the derivative of A with respect to nu would just give us F mu, nu. What's the next step? The next step is right over here. We differentiate the Lagrangian with respect to phi mu and then differentiate again. So d by dx, nu of dL by dA mu, nu. That's just minus the derivative of F mu, nu with respect to x, nu. That's what's turning up here. Differentiating with respect to x, nu just gives us the left-hand side of Maxwell's equations. Now what else is there? Well, the derivative of the Lagrangian with respect to the fields themselves, that would mean the derivative of the Lagrangian with respect to undifferentiated A. But undifferentiated A doesn't appear in the Lagrangian. So that's it. That's all there is. The Euler-Lagrange equation is that this is equal to zero. That is, and that's it. That's the whole story. It looks much more complicated than it is. You just take F, the squares of all of the elements, and that's the Lagrangian. From it, you work out the Euler-Lagrange equations, and it's just two of the Maxwell equations, the two of the Maxwell equations that happen to have the J's on the right-hand side. We haven't included J for the moment. The other Maxwell equations, we don't have to do anything. They're identities that follow from the definition of A and its relationship to F. Any questions up till now? This is not hard, but you'll have to sit down and do it. You have to sit down and do it, work out the details of it, because you... I'm sure it doesn't help to watch me do it. I have no doubt that it doesn't help to watch me do it. Even if you manage to follow what I did, you will forget it in five minutes, but you won't forget it if you go through the steps yourself. Yeah? The very top equation on the top board of it. Yeah. When you set that equal to minus one-half, what difference does it make? Yeah. Square. Square. What's going on there? You're assuming that the field is in the z-direction only, or... No, no, no, no, no, no, no. Oh, oh, I'm sorry. Yeah. In writing this over here, no, I just pulled out one turn. No, no, no, no, no, plus dot dot dot. Sorry. Plus dot dot dot. Yeah, good. Plus dot dot dot. I just pulled out one term for examination. Yeah. Yeah. It's too special to work this as an example. Yeah. If you used one of the times... Yeah, you would have gotten the same sort of thing. 
You would have found the L by the, let's call this, naught here, and this M would have been F, M naught, which would have been a certain component of the electric field. When I use two space components, I get a magnetic field, when I use one space in one time, I get an electric field. And like to re-assign, it doesn't change sign when it goes upstairs? If you keep track of the signs carefully, you'll find that this is general. You push it two times before it gets cancelled out or something? Yeah. Go through the indices and you'll find out that they cancel out. Question? Yeah. There, it's covered now, but where you wrote the agent variance, you said that you were sending the J equals 0, so what do you say? I did for the moment. For the moment. I don't want to come back to it in a minute. Yeah. For the moment, I, all right. Now in fact, there is, there are no gauge invariant things that I can make up just out of the A's that don't involve F mu nu. And I've given you one example, in fact, that is the example that really does give rise to the real Maxwell equations to take the Lagrangian to be this object over here. The question now is how do you get the right hand side of the equation? Where did that come from? You have to add something to the Lagrangian. You must add something to the Lagrangian. Now something had better involve the current vector, the four component of a current vector. If it doesn't involve the four component of a current vector, it's quite clearly not going to give you what you want. So let's suppose that there's some pre-established electric current. That electric current means a charge density, and as I showed you last time, a flow of charge, space and time components. So let's go back and recall the principles of charge conservation first before we do anything else. We have four components, J mu, which equal the time component, which is charge density, and the space components. And remember what the space components are. The space components are the charge per unit area per unit time passing through a little window oriented along the M axis. The charge flowing through a little window oriented along the M axis, or perpendicular to it, the charge per unit time per unit area, and that's called J. The rule, if you take a little cell and you say the charge in there, of course the charge in there is proportional to rho times the volume of the rho d x dy dz. That's the charge in there. And the rate of change of the charge in there is just the time derivative of rho times the x dy dz. If you say the only source of change of the charge in that region is charges which pass through the walls of the region, then that gives you the continuity equation, and the continuity we worked it out last time is rho dot. That's a change of charge in a little cell plus del dot J equals zero. Del dot J means partial of Jx with respect to x plus partial of Jy with respect to y plus partial of Jz with respect to z. So to read this, this is the change of charge in a little infinitesimal cell. This is the difference between the charges or the charges passing out through the x windows. This is the charge passing out through the y-indit windows and so forth. And this is just the balance of charge inside the box passing through the boundaries of the box. We can also write this in a relativistic notation d mu J mu equals zero. d mu J mu equals zero. That's the equation. It's called the continuity equation, but it represents charge conservation. 
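In symbols, the statement of charge conservation just recalled (the same continuity equation, in three-dimensional and covariant form):
\[
J^\mu = \big(\rho,\;\vec J\,\big),
\qquad
\frac{\partial\rho}{\partial t} + \nabla\cdot\vec J = 0
\;\Longleftrightarrow\;
\partial_\mu J^\mu = 0 .
\]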
It represents the idea that no charge disappears unless it's accounted for as moving out of the region. And in fact, if you want to know where that charge went, you go to the next box and you discover that the current passing through this window over here just adds charge to the next box. So the charge that disappears in this box goes into the neighboring box. And that's charge conservation. All right. Now, given the continuity equation, there's another term, another gauge invariant thing. It's a scalar and we can write it down involving the vector potential and we'll check and see if it's gauge invariant. It doesn't look gauge invariant to begin with, but we'll see that it is. The action associated with the current, as usual, is the integral of d 4 x of some kind of Lagrangian. And I'm going to write down the very, very simplest thing I can imagine, J mu times A mu. This, of course, means J mu of position, space time position, A mu of position. That is a scalar. It's the four vector product of a four vector and another four vector. It's a scalar. It involves the current and it involves the vector potential. But it doesn't look gauge invariant, does it? How can we check if it's gauge invariant? Well, we can check if it's gauge invariant. Incidentally, by convention, there's a minus sign in front of it. And minus sign ultimately is due to Benjamin Franklin and, as I said, it is a convention. Why does this, why is the Lagrangian, the Lagrangian here is, of course, J mu A mu, or this term in Lagrangian is J mu A mu. All right, let's do a gauge transformation. Let's do a gauge transformation. What happens, what happens to this? It goes to minus the integral, same integral, d4x of J mu. And now we're adding to A mu. I'm going to add something to A mu. I'm going to add ds by dx mu. J mu ds, this is the change. All right, what is this? This is the change in action under a gauge transformation, under gauge transformation, gt. That's the change. It doesn't look like zero, does it? If it's not zero, then the action is not gauge invariant, but it is zero. Let's check that it's zero. Let's see if we can see why it's zero. Integral d4x, what does that mean? It means integral dx dy dz dt. Let's write it down. The x dy dz dt. All right. Now we have times J naught ds by dx naught plus J1 ds by dx1, right? Plus dot, dot, dot, dot. And this is integrated over all space. I'm going to make one assumption. I'm going to assume that if you go far enough away, there is no current flowing, that all of the currents in the problem are contained within a big laboratory. So the J goes to zero far away. If not, if there's a current flowing at infinity, you have to take it especially into account. But for any ordinary experiment, in a laboratory, currents move and so forth, charges there, but you can imagine the laboratory being isolated and sealed so that no currents go out of the box. In that case, let's see what this gives. Let's take a particular term. Let's take the integral J1 ds by dx1 as one particular term. And now that has to be integrated. This one means x, dx, ds by dx, dxdy dz dt. Everybody know how to do this integral by parts? You know how to do it? There's an integral here. Forget that dxdy dz, just forget that. We're doing an integral over x of J times ds dx. You know how to do an integration by parts? The rule is you flip the derivative to the other factor and change the sign. We've done integration by parts many times. 
I'll just remind you, if you have an integral over x of a thing times the derivative with respect to x, you can do the integral just by flipping the derivative onto the other factor and changing the sign. So this is minus integral dj1 by dx1 s dx, and then the rest of them dy dz and so forth. Well what happened if I did the same game with the term, with the second term? The second term is j2 ds by dx2. Well what happened is we would get another contribution and the other contribution would be dj2 by dx2. And supposing I did the third term, ds by dx3 times j3, that would give me dj3 by dx3. And finally there's the time component. There's also a time integration here. If you do the whole thing, you'll find out that the change in the action is the integral dj mu by dx mu times s. We started out with j times ds dx. We integrated by parts and that put the derivatives on the j instead of on the x. That's all we did. Did a trick to take the derivatives and put them on the j. That's all we have here. And that's equal to zero because of the continuity equation. So that's equal to zero. So the change in the action, if the current satisfies the continuity equation and only if the current satisfies the continuity equation, then adding in this peculiar looking term, where is it right here, is gauge invariant. Does it satisfy the continuity equation or do you have to have del dot j equal to zero? No, no. You have to have rho dot plus del dot j equals zero. Okay let's go back. That's the first term here. Yeah, that's the first term here. If they have all four of them, or the sum of all four of them is equal to zero, that guarantees current conservation. Okay, that guarantees the gauge invariance of this term. So we can add this to Lagrangian. Now what does this do to the equations of motion? Well we haven't added anything that involves a derivative of A. We haven't added anything, we've just added something that involves A itself. Where's our Euler-Lagrange equations? Our Euler-Lagrange equations have a left hand side, d mu of partial of L with respect to A nu mu. It's the left hand side. And the right hand side is partial of L with respect to A nu. Term by term, component by component, we write down the Euler-Lagrange equation for each component of the field A. On the left hand side, we know what that gave us, that gave us minus partial of F mu nu, I think it was with respect to X nu. That's what we got from the previous terms in Lagrangian. What is this one going to give? Well, we go back. Here's the new term in Lagrangian. What happens if we differentiate it with respect to A? We just say J. Taking this term in Lagrangian and differentiating it with respect to a component of A, you just get the corresponding component of J. So we get this is equal to, I think, minus J nu. The lesson here, there's a couple of lessons, many lessons. First of all, you can write Maxwell's equations in Lagrangian form. That's important. Second of all, Lagrangian has a very simple form, minus a quarter F mu nu squared minus J mu A mu. Third, it's gauge invariant, if and only if the current is conserved. What happens if the current is not conserved? Then the gauge, then the Lagrangian is not gauge invariant, and what you will find out, that's a disaster, because it means that the equations of motion are not gauge invariant. 
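Collecting the argument just completed (a sketch; it assumes J mu falls off at infinity so the boundary terms in the integration by parts can be dropped, and it follows the index conventions of this lecture):
\[
\delta S_{\rm int} = -\int d^4x\; J^\mu\,\partial_\mu S
= +\int d^4x\; \big(\partial_\mu J^\mu\big)\,S = 0
\quad\text{when}\quad \partial_\mu J^\mu = 0,
\]
\[
\mathcal L = -\tfrac14 F_{\mu\nu}F^{\mu\nu} - J^\mu A_\mu,
\qquad
\frac{\partial\mathcal L}{\partial A_{\mu,\nu}} = -F^{\mu\nu}
\;\Rightarrow\;
\partial_\nu F^{\mu\nu} = J^\mu .
\]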
If the equations of motion are not gauge invariant, that's something really strange, because the left-hand side of the equation of motion is certainly gauge invariant, and only involves the electromagnetic field tensors. So that means something very strange is going on, it just can't be the right equation. It's not the right equation. The continuity equation, yeah. Well, that's the conservation of electric charge. You could imagine a world in which charge just disappears. What would happen if charge just disappeared? Well, maybe nothing's so terrible, but Maxwell's equations couldn't be right. Maxwell's equations could not be right. You can derive, it's a little exercise, you can derive the conservation of charge from Maxwell's equations. Let's, I don't know, I'll take a risk and try to do it. Try to do it from, let's see if we can derive the continuity equation from Maxwell's equations. All right. So here's Maxwell's equations, rho equals del dot E, and J equals the EDT plus del cross B. Okay, so let's first take rho dot. What is rho dot? Rho dot is equal to del E dot. Right? You differentiate, time differentiate both sides. Okay. What do you think I ought to do next? Del dot J. Del dot J, yeah. Del dot J. Here we have del dot E dot. Here we have E dot. Let's see what happens if we take the divergence of this. What's the divergence of a curl? Always zero. The divergence of a curl is always zero, so let's put del dot, but now cancel this out. So we have del dot J equals del dot E dot. But that's exactly what we have here. Well this gives us, I hope it doesn't give us del dot J because it should be minus del dot J. What happened here? Did I make a mistake somewhere? Rho equals del dot E and E dot is equal to, is it del cross B? It doesn't matter plus J? Do I have that right? That's not going to make any difference. That's not going to help me. I think it's right. E dot, unless I have a sign error there, but I think it's right. Let's try it out. Let's see. All right. So rho dot is del dot E dot. That's for sure if rho is equal to del dot E and that's equal, oh, and that's equal to nothing yet. All right, that's that. Now we have E dot is this, so let's try del dot E dot. So the curl gives zero and that's going to give us del dot J. I've got a sign error somewhere and I'm not sure where it is. I have rho dot is equal to del dot J. Oh, oh, oh, oh, oh, oh. I have a feeling. I have a feeling there's a sign error here. I suspect there's a sign error here. I don't remember now. I can't do it in real time right here. There's some sign error here because the right equation has to be rho dot plus del dot J is equal to zero. So I'm not sure what the sign error is, but you can see. You can see apart from the sign error, the conservation or the continuity equation can be derived directly from the Maxwell equations. So in other words, if Maxwell's equations are true, the charge has to be conserved. Just look at them up. Yeah, but it should be minus J. It should be minus J? Yeah. All right, so this is one way. Right. Yeah, minus J. Okay. All right, so I don't think that affected anything else we said. I think that's a good question. And right. So you can't have Maxwell's equations and have charge not conserved. That's another lesson. And the final lesson is that the charge conservation is intimately connected somehow with the gauge invariance. If it weren't for the charge conservation, when I say charge conservation, I mean the continuity equation, this one here. 
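The sign puzzle at the board resolves once the current enters with the sign the discussion settles on, namely the dE by dt equation reading as follows (a sketch, in units with c = 1):
\[
\rho = \nabla\cdot\vec E,
\qquad
\frac{\partial\vec E}{\partial t} = \nabla\times\vec B - \vec J
\;\Rightarrow\;
\dot\rho = \nabla\cdot\dot{\vec E}
= \underbrace{\nabla\cdot\big(\nabla\times\vec B\big)}_{=\,0} - \nabla\cdot\vec J
= -\,\nabla\cdot\vec J,
\]
which is exactly rho dot plus del dot J equals zero.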
Without the continuity equation, Maxwell's equations couldn't be right. So those are the things that go together. And that are, yeah. I'm trying to follow a lot. The high order logic. Yeah. So we have to have that until the continuity equation came from Maxwell's equations. But then we're trying to prove Maxwell's equations now, and part of that proves, you're trying to find the Lagrangian for the Maxwell's equations. But to do that, we have to have already assumed Maxwell's equations to have the continuity addition to see what I'm saying. Yes. You could have started with the Maxwell's equations and said they imply the conservation of charge. Knowing that, you could say, aha. Since I know the charge is conserved, I know that there's a term J dot A, which is gauge invariant, and then discover that this is the right term to reproduce Maxwell's equations. And there's no sort of circular area in the beginning of these things. That's you, Maxwell's equations, the setup of the logic and prove Maxwell's equations. Is it the continuity equation more fun than the, it's just from the definition of current charge. Yeah. Yeah. Yeah. And you can derive it from Maxwell's equations. Yeah, you can derive it from Maxwell's equations. But it really just says charges don't disappear. They don't just come and just disappear on you. They only disappear out of a region if they pass through the volume. So these are all connected together. You can start the logic this way and say the basic principle is gauge invariance. That requires J to be conserved and then use this to derive Maxwell's equations. And in fact, then from Maxwell's equations you can see that J is conserved, but there's no contradiction there. There's no contradiction there. Or you can start with Maxwell's equations and say what Lagrangian would give me Maxwell's equations. And one of the things you discover from Maxwell's equations is that J satisfies a continuity equation. And so that tells you, ah. This thing, which I have to put in to the Lagrangian in order to get out the right hand side, happens to be gauge invariant. So it depends on what you think is fundamental. If you think gauge invariance is fundamental, that tells you J has to be conserved in the sense of a continuity equation. If you think the conservation of charge, the charges shouldn't disappear on you is fundamental, then the continuity equation is just an expression of that conservation. It just tells you charges don't disappear and that tells you J is conserved. Or you can think that the Maxwell's equations are more fundamental and again, using the Maxwell's equations you derive the continuity. Things would be bad if they were contradictory. It's good that they, ah. And what is the minimal logic? I would say the minimal logic is the combination of gauge invariance, Lorentz invariance, and Lagrangian formulation lead you to this combination and then you derive Maxwell's equations. But that's a matter of taste. It could have started other places. The fact is things fit. Good. Any other questions? Yeah. Yeah. You showed us gauge invariance mathematically, but it leaves me a little cold. First of all, the word gauge, what is the time limit? It's a historical anachronism that has no, yeah. We're going to, you know. You came from the idea that gauge, that you have a zero on gauge. And if you can usually rotate the gauge to change where the zero is. The zero of what? The zero of what? Say the zero of pressure on a gauge. Or just a mark. It comes from the idea of gauge. 
Yeah, it's just the difference. The pressure that's relative to that is spiritual. It came from a mistake, actually. It came from a mistake which I cannot explain to you now. I can explain to you perhaps, depending on what lecture I give next week, I will try to explain it. But it came from a mistake. The gauge actually had to do with measurements of length. And in fact, gauge invariance has nothing to do with measurements of length. It has to do with measurements of phase, whatever phase is, of the Schrodinger wave function. It's got to do with the mixing in of quantum mechanics. Hermann Weyl made a mistake. He thought it had to do with something that it didn't. He very quickly corrected himself and the term gauge stuck. But you can, you know, what I think was said was correct. But gauge here meant defining a zero of something. A zero of what? Well, sort of a zero of this scalar field S. But don't worry about it. We'll understand gauge invariance in a deeper way perhaps next time. It does have a deeper meaning, incidentally. It does have a much deeper meaning. And we'll come to it next time a little bit. Okay, are there any other questions about what we did tonight, though? Yeah. In the upper left, when you wrote the two families of equations, the one on the left, you said, derives from the Bianchi identities. It is just a Bianchi identity. Right. The family on the right, we got from the equation of motion. Since E and B are so symmetric, it seems surprising that those families don't seem to know about each other. You're asking whether they can be interchanged? Yes. You can invent a different vector potential with the property that these equations over here become the equations of motion. And these equations, with J equal to zero, become the Bianchi identity. So that's not a convention. That had to do with the absence of magnetic monopoles, which nobody believes anymore. So yeah, in some sense it's a convention. Yes. There is this interchange of electric and magnetic. It may be that real physics does have an interchange symmetry between electric and magnetic, with electric charges becoming magnetic charges. Now, it's not really a symmetry, because if it was a symmetry, we would have discovered those magnetic charges already. They would have been similar to electrons. They would have weighed the same thing as electrons if there was a real symmetry between them, and no such magnetic things exist. So it is not a symmetry in the usual sense, but we don't want to go there right now. We're not going to be hired for it. Any other questions? OK, we've actually come a long way. We've come through the Lagrangian formulation of mechanics, the Lagrangian formulation of field theory, the Lagrangian formulation of electrodynamics, gauge invariance, Lorentz invariance. That's a lot of stuff. That's a lot of stuff. As I said, I think the only way to really digest it is to go home with the equations and work them out yourself, starting from step one to step two. So good luck. For more, please visit us at stanford.edu.

(June 11, 2012) Leonard Susskind discusses plane electromagnetic waves as solutions of Maxwell's equations. He then looks for a Lagrangian formulation of Maxwell's equations, so that the conservation laws follow from an action principle. In 1905, while only twenty-six years old, Albert Einstein published "On the Electrodynamics of Moving Bodies" and effectively extended the classical principle of relativity to all laws of physics, even electrodynamics. In this course, Professor Susskind takes a close look at the special theory of relativity and also at classical field theory. Concepts addressed here include four-dimensional space-time, electromagnetic fields, and Maxwell's equations.
10.5446/15005 (DOI)
Stanford University. Well, we're going to talk about electrodynamics tonight. We've talked about scalar theories, scalar fields. We've talked about how particles couple to scalar fields, how scalar fields influence the motion of particles, in particular how the same Lagrangian, the same action which tells the field how to influence the particle, also tells the particle how to influence the field. We're going to do that again tonight for the electromagnetic field, or at least part of it. But before we do, I really want to nail in place the notational ideas. Oh boy, what is that? Looks like quantum mechanics to me. We don't want any quantum mechanics on the blackboard tonight. Just once more briefly, I want to go over the notations of four vectors, indices, how you use them, how you manipulate them. Now, just by way of philosophizing, a good notation can be extraordinarily powerful. Good notations in mathematics, the minus sign, the zero sign, the equal sign for goodness sakes, extremely powerful, more modern vector notation. Again, extremely powerful. What we're going to be talking about tonight a little bit is tensor notation. The trick that I showed you last time of upper indices and lower indices, that actually is due to Einstein, it's completely due to Einstein, upper indices and lower indices, which mean nothing more than just changing the sign of the time component of a vector. A vector with upper indices and a vector with lower indices are really no different except you change the sign of the time component. Of course, that's a special case of something much broader. It's a special case of manipulating vectors together with the metric tensor, but we're not doing the metric tensor yet. So for our purposes now, those things are just conventional conventions and notations, but notations which are quite powerful, as you will see. Let's just go over them again quickly just to remind ourselves what those notations are all about. Oh, the other notation which is truly brilliant, again due to Einstein, is the summation convention. But the summation convention is something that's only to be used in the right way. It's to be used when you have two indices which are the same, one of them upstairs and one of them downstairs. That's the only time you use the summation convention with an upper index and a lower index. When they're the same, you can set them or you set them equal to each other and sum over them. That's the Einstein summation convention. We'll use it all the time, but we'll only use it in the special form in which Einstein invented it. If we have summations to do and they're not of that special form, I'll write summation. Okay, so let's begin with four vectors again. Four vectors have four components, three of which are space components, one of which is a time component. And when we're interested in the four-dimensional geometry, we write the components a mu. But we can also remember that they consist of a time component, which is usually called a naught, and three space components, which are usually called a m. m goes from one to three, mu goes from zero to three. All right, now, I've written my vector with upstairs index. I guess it's called a contravariant index. The contravariant index is the sort of thing that you would attach to dx mu. And the fact that you put it upstairs and not downstairs is purely arbitrary, but you have to put the index someplace. And Einstein chose to put the index associated with a differential displacement like this in the upstairs slot. Okay? 
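As a small illustration of the convention just described (a repeated index, one upstairs and one downstairs, summed from 0 to 3):
\[
a^\mu = \big(a^0,\,a^1,\,a^2,\,a^3\big) = \big(a^0,\,a^m\big),\quad m = 1,2,3,
\qquad
a_\mu b^\mu \equiv \sum_{\mu=0}^{3} a_\mu b^\mu
= a_0 b^0 + a_1 b^1 + a_2 b^2 + a_3 b^3 ,
\]
with dx super mu carrying its index upstairs, as just stated.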
And I guess I don't know who first called that a contravariant index, and I don't know why it's called a contravariant index, but it is. And to go from a contravariant notation to a covariant notation, this is pure definition, a mu is the covariant counterpart of a. It's the same vector. It's just another notation for it or another way of describing it. And that is equal to eta mu nu, a nu. I'll remind you what eta mu nu is. Eta mu nu is a collection of numbers. It forms a matrix, a four by four matrix, because mu and nu go from zero to three. And eta is just the matrix. Its components, eta 11, eta 01, whatever, are just the components minus 1, 0, 0, 0, then 0, 1, 0, 0, then 0, 0, 1, 0, then 0, 0, 0, 1. It's almost a unit matrix. And in a certain sense, with this relativistic geometry with a funny minus sign in front of time, it really does play the role of a kind of identity matrix. But it is what it is. Minus one in one slot, ones in all the other slots, purely diagonal. And what this complicated formula, well, it's kind of neat. It's a nice neat, simple setup. But really all it says is that the time component of the covariant vector is just minus the time component of the contravariant vector. That's this minus one over here. And all the other components are the same. All the other components are the same, so we can write, then: a sub-naught is minus a super-naught. A sub-m is plus a super-m. And that's all this formula means. But it's been written in a kind of neat way. Whether you find it neat or not at this point is not relevant. The point is that as you start doing things with it, you will find that you begin to find it very neat. So let's take it for granted that it's a useful thing to do. All right, so first thing, the formation of scalars from four vectors. We talked about this at least twice. Let me just write it down on the blackboard. If you take two four vectors, they could be the same four vector or two different four vectors. And you take one of them to be covariant and the other to be contravariant. Now, this thing that I've written on the blackboard automatically sums over mu. I do not have to write sum over mu here. By Einstein's convention, this means a sub-1, b-super-1, you know, sorry, nought, one, two, three, and so forth. This is a scalar. In other words, this is a thing which is a quantity which doesn't change from frame to frame. Another example, incidentally, I'll just give you another example, the derivative sign. Let's just put it over here for a minute. The derivative sign d by dx mu. Now, this is a collection of four differential symbols, derivative symbols, derivative with respect to x naught, which means derivative with respect to time, and the derivatives with respect to the other coordinates. It is often just written as d sub mu. Another brilliant notation. Get rid of the sub x and just write d sub mu. This symbol by itself, of course, doesn't mean anything. It's got to act on something. But whatever it acts on, it adds another index. If it acts on a scalar, it creates a thing with an index mu. That index, so the d by dx mu, or the d mu, is a covariant index. In other words, for example, if this were to act on a scalar field, it would give you a collection of derivatives which form a covariant vector. I won't give it a name. It's just, its name is d mu phi. All right, so the derivative symbol, yes, sir? So it's dx super mu, so the mu on the x is contravariant. So why is the derivative covariant? Because it's down, because it's underneath the fraction bar. Why, indeed.
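Written out with the eta matrix above (signature minus, plus, plus, plus), lowering an index and forming the scalar look like this:
\[
\eta_{\mu\nu} = \mathrm{diag}(-1,\,+1,\,+1,\,+1),
\qquad
a_\mu = \eta_{\mu\nu}\,a^\nu
\;\Rightarrow\; a_0 = -a^0,\quad a_m = a^m,
\]
\[
a_\mu b^\mu = -a^0 b^0 + a^1 b^1 + a^2 b^2 + a^3 b^3 ,
\qquad
\partial_\mu \equiv \frac{\partial}{\partial x^\mu},\ \text{with } \partial_\mu\phi \text{ a covariant four-vector.}
\]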
All right, so last time I explained why, and I explained why if you take the phi by dx mu, and you multiply it by dx mu, what is that? That's derivative of phi with respect to time, times d time, and so forth and so on. This is just the change in phi as you go from one point to another by differential displacement dx. This is nothing but if we call this point two and point one, this is just phi of two minus phi of one, divided by a little differential separation, but it is just the difference of two scalars and therefore itself is a scalar. So by a theorem that I didn't prove, but I told you about, if you have a contravariant vector, and you multiply it by a collection of symbols with a script mu, if the result is a scalar, the theorem says that the thing, that the other piece of it here is a covariant vector. So there is a theorem which says that whenever you hit anything, or in particular a scalar, but I'll give you some more examples in a moment, if you hit a scalar with d by dx mu, it gives you a four vector. Everything I'm saying incidentally would also be true about three vectors. The only thing that you have to remember about three vectors is if we're thinking in a purely three-dimensional language, not four-dimensional, everything is the same except wherever you see mu's and nu's put m's and n's, and the eta matrix is just the unit matrix. It is just the components of eta which sit in the spatial components. And for that reason, because eta is the unit matrix, there's no difference between upper and lower components, so you don't have to say, if you're talking about ordinary three-dimensions, it's unnecessary to say whether an index is covariant or contravariant. They're both the same. Okay, another example of forming a scalar out of a vector, well, one example of forming scalars out of vectors is just to take a mu, b mu, and another thing you can do is let's suppose we have a vector quantity which happens to depend on position and also time, x mu, let's not write it that way. It's a four vector field. It depends on space and time, and I'll just indicate that by writing x here. It depends on space-time and therefore can be differentiated. As it stands, it's a four vector. It's a four vector at each point of space, differs from one place to another. You can also differentiate it with respect to x. And for example, you can differentiate it with respect to x and sum over the index mu. This means derivative with respect to time of the time component of b, plus derivative with respect to x of the x component of b, and so forth and so on, this is also a scalar. All right, so that's another example of forming scalars from vectors by what we'll learn to call index contraction. Index contraction is the same as identifying an upper index with a lower index and summing. That's called index contraction. An index contraction in this kind of situation takes you from a vector, well, takes you from a quantity which has all sorts of components, and leads to a scalar. Okay, now scalars and four vectors transform, and that's the defining property. The defining property is the way they transform. Scalars, for example, just transform into themselves, transform under Lorentz transformations, and I'm going to give a broader definition of what I mean by a Lorentz transformation than we've used up till now. Lorentz transformations we've talked about were along the x-axis. We could have, of course, talked about Lorentz transformation along the y-axis, or the z-axis, and so forth. 
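As a worked restatement of the two scalars just described (assuming the sign convention above, and writing the derivative symbol the way the lecture does):

\[
a^\mu b_\mu = a^0 b_0 + a^1 b_1 + a^2 b_2 + a^3 b_3 = -\,a^0 b^0 + \vec a\cdot\vec b,
\qquad
\partial_\mu \equiv \frac{\partial}{\partial x^\mu},
\]
\[
\partial_\mu \phi \;\text{ is a covariant four vector,}
\qquad
\partial_\mu b^\mu = \frac{\partial b^0}{\partial t} + \vec\nabla\cdot\vec b \;\text{ is a scalar.}
\]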
But there's another class of transformations which are also considered to be part of the collection of Lorentz transformations, and it's just rotations of space. Rotations of space are part of the collection of Lorentz transformations. Now, once you say that, then you can say that a Lorentz transformation along the y-axis is simply a rotation of the Lorentz transformation along the x-axis. You can compound rotations together with Lorentz transformations to make Lorentz transformations in any direction, rotations about any axes, and just the general set of transformations which physics is invariant under. I'm not going to show you how to do that. That's not important to us right now. The important thing is just to keep in mind that physics is to be invariant not only for Lorentz transformations along the x-axis, or the y-axis, or the z-axis, but more complicated things where you rotate space, then transform, then maybe rotate back again, and there's compound rotations with that. So, when you just say rotate space, then time is not affected at all? Right. When I say rotation, I mean time not affected at all, exactly. All right? So, what's a nice notation for the Lorentz transformations? Well, let's take the transformation of the contravariant vectors. For example, just x mu. How does x mu transform? x mu prime, in particular, x naught prime, that's the time component. That's equal to x naught minus v x1 divided by square root of one minus v squared, and so forth and so on, just the good old Lorentz transformations, except that I've called time x naught, and x I've called x1. All right? We can always write those transformations in the form of a matrix acting on the components of the vector. I'll show you what I mean. We write a mu prime, that's the components of a certain four vector in my frame of reference, in terms of the components in your frame of reference, and they're given by some kind of matrix. The matrix has entries which you'll recognize in a minute. There's an upper index here. I'm going to put a lower index down here. It has two indices; that makes it a matrix. It has a value for every mu and every nu. It's a four by four matrix, so it multiplies a nu. Is this a properly formed equation over here? Yeah. The left-hand side has an index mu, which can be any one of zero to three. The right-hand side has an index mu, an index nu, but the index nu is summed over. When an index nu is summed over, it's not an explicit variable in the equation. Yes, this is a properly formed equation, summation convention. Let me give you an example of the matrix L, just for the simplest Lorentz transformation that we've written down. I'll give you a couple of them. For the Lorentz transformation, let's say along the x-axis, let's just write down what we would put in here. This is the time x, y, z. Time in the first column in the first row, x, y, z. So there's a one over square root of one minus v squared here. There's a minus, I think I better make this matrix a little bit bigger. Okay, let's make it bigger. In the upper corner here, we have one over root one minus v squared, and then we have minus v over root one minus v squared. How about the next one? Anybody want to guess? Zero. Zero, what about down here? Minus v over square root of one minus v squared, and in the next place here, one over square root of one minus v squared, zero, zero. Zero, zero, zero, zero. But what down here? One and one. One, zero, zero, one.
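Collecting the entries just read off, the boost along the x-axis can be written as follows (a sketch; the shorthand gamma is introduced here only for compactness — the lecture writes the square roots out explicitly):

\[
a'^{\mu} = L^{\mu}{}_{\nu}\, a^{\nu}, \qquad
L =
\begin{pmatrix}
\gamma & -v\gamma & 0 & 0\\
-v\gamma & \gamma & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{pmatrix},
\qquad \gamma \equiv \frac{1}{\sqrt{1-v^2}} .
\]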
This is a standard Lorentz transformation along the x-axis, and if we write a column vector t, x, y, z, which I could also write x naught, x1, x2, x3, but let's write it this way, t, x, y, z. And we write that this is equal to t prime, x prime, y prime, let's just see that that's correct. This is a column vector here. This is a column vector. What does it say? It says t prime is equal to one over blah, blah, blah times t minus v over blah, blah, blah times x. Well, that's the right transformation law for t prime. X prime is minus v over square root times t plus x, one over square root times x. That's the standard Lorentz transformation on x. And then y and z, they do not mix with t and x. Y and z, the unit matrix down here tells you that y prime is equal to y, and z prime is equal to z. Good. So this is an example of a Lorentz matrix. A Lorentz matrix along the x-axis. If we wanted to transform along the y-axis, we would just shuffle these around a little bit. I'll leave it to you to figure out what our Lorentz transformation along the y-axis looks like in this matrix notation. But let's consider instead a different operation, a rotation in the y, z plane in which t and x are completely left alone. This is one of these rotations, which is somebody said, don't involve t at all. Let's see what that would look like. This is a different object now, different transformation. We would put ones here, 0, 1. That tells t and x do nothing. But what would you put down here in this lower block down here? Anybody got a suggestion? I want to rotate by angle theta in the y, z plane. Now, you all know the answer. Cosine theta, sine theta, minus sine theta, cosine theta. And this would just say y prime is cosine theta times y plus sine theta times z. And z prime is minus sine theta times y plus cosine theta times z. Now, you can take these matrices and start multiplying them and combining them to make much more complicated transformations, which, for example, are partly rotations along some axis, partly Lorentz transformations along some other axis. But this is the basic building starting point and a formula like this. All right, so this is the transformation property of a four vector with a contravariant index. I'm going to leave it to you to compute the transformation property of a four vector with a lower index. I'm going to tell you the answer and I'll let you work it out. If you have a covariant vector and you want to know what it looks like in my frame, given what it looks like in your frame, then there is another matrix. This matrix has to have a lower index mu, because we have to come out with a lower index here, an upper index nu, and an a sub mu. I'm going to tell you right now what the matrix m, oh. Thank you, nu. If the left-hand side has a mu, the right-hand side must also have an unsummed over mu, so that's correct. OK, now we're talking about the same Lorentz transformation. L and m represent the same physical transformation between coordinate frames, and so m and l must be connected. There must be a connection between m and l, and that's very simple. m is just given by eta l eta. I'll let you work that out. I'll let you prove that. Incidentally, eta is its own inverse, just like the unit matrix is its own inverse. Sorry, eta to the minus one is eta. It's its own inverse, and that's because the entries here are such that they are their own inverses. The inverse of one is one, the inverse of minus one is minus one. 
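For comparison, the rotation in the y, z plane just described, and the stated relation between the matrices acting on covariant and contravariant components (the relation left as an exercise), look like this in the same notation:

\[
L_{\rm rot} =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & \cos\theta & \sin\theta\\
0 & 0 & -\sin\theta & \cos\theta
\end{pmatrix},
\qquad
a'_{\mu} = M_{\mu}{}^{\nu}\, a_{\nu}, \qquad M = \eta\, L\, \eta .
\]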
All right, so you can prove this that for a given Lorentz transformation, m and l are connected. I'm not going to use this very much. The main point is what a four vector is, co-variantly or contra-variantly. What it stands for is an object which transforms from one frame to another in a particular and special way. And a special way is in parallel with the way the coordinates themselves transform. All right, now let's come to tensors. We're going to make heavy use of tensor notation. What is a tensor? A tensor is simply a thing with more indices, actually a scalar and a vector of special cases of tensors. A scalar would be called a tensor of rank zero, which means it has no index. The vector is a tensor of rank one, and a tensor of rank two would be a thing with two indices. So let me give you the simplest example of a tensor. Take two vectors. In fact, it would be enough to have one vector, but let's take two vectors, a and b. Now, we can make a product of a and b by, and that's a scalar. But now I want to consider a more general kind of product. The more general kind of product has two indices, mu and nu. Let's begin with the contra-variant version of it. Let's just take, put next to each other, a mu and b nu. Now, this is an object. How many components does this thing have? 16. 16, four times four. This a naught, b naught, a naught, b one, a naught, b two, a naught, b three, a one, b naught, and so forth and so on. There are 16. This is a symbol here, stands for a complex of 16 different objects, different numbers. It's just a set of numbers you get by multiplying any component of a with any component of b. It's got two indices and it's called a tensor. I'm going to label it just generically. I'm going to write t for tensor. It's a particular tensor, t mu nu. Not all tensors are of this form. Not all tensors are simply constructed out of two vectors this way, but two vectors define a tensor in that way. How does such an object transform? How does this object transform? t mu nu. Well, if we know how a transforms and we know how b transforms, which is the same way, we can immediately figure out what, let's call it a prime, put a bracket around here, mu b prime nu is. All we have to do is transform the a and the b, but we know how the a and the b transform. Let's rewrite this using the transformation here. I'll tell you what. I'm going to change the symbol here instead of calling this nu mu nu sigma. Remember, it doesn't matter what you call a summation index as long as you're consistent. You're summing over it and so it's just a thing that is a sort of dummy index. OK, and let's do it here also. Mu sigma sigma. Let's plug in here. This is just equal to l mu sigma. Put it over here, a sigma. But now we also have to put b prime and b prime is the same sort of thing, l nu and let's call it tau, a tau. l mu sigma times a sigma, that's a prime. l times this a, sorry, excuse me, b sigma, b sigma. b prime. Nu sigma sigma, right? Good. Tau tau. Good, now we're cooking. Sigma goes with a, tau goes with b, and now the equation is consistent. So here we have a new rule about a new kind of object with two indices which tells us how it transforms. It transforms with the action of the Lorentz matrix on each one of the indices. So now we can abstract from that and say more generally the way a tensor transforms, let's call it the tensor, the primed tensor, the quantity, the set of quantities that you, that I see are related to the set of quantities that you see by l mu sigma, l nu tau times t sigma tau. 
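The transformation rule just worked out, restated compactly:

\[
a'^{\mu} b'^{\nu} = L^{\mu}{}_{\sigma}\, L^{\nu}{}_{\tau}\, a^{\sigma} b^{\tau},
\qquad\text{and in general}\qquad
T'^{\mu\nu} = L^{\mu}{}_{\sigma}\, L^{\nu}{}_{\tau}\, T^{\sigma\tau}.
\]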
a times b, that's t sigma tau. So this is the rule, for example, for the transformation property of a simple tensor with two indices. Tensors can have, yeah? Would this be predicated on the idea that the l's, the a's and the b's can commute? Because you see, one l goes together with the a and the other l goes with the b. We're not doing quantum mechanics, so everything commutes. And there's only one matrix here, l. Every matrix commutes with itself. So there's no issue of commutation here. OK. You can invent more complicated kinds of tensors. Tensors with three indices, mu, nu, I don't know, lambda. How would this transform? And you can think of it by just thinking of it as the product of three vectors with index mu, nu and lambda. I won't write it down. But the way that it transforms is straightforward. For each index, there's a transformation matrix: l mu sigma, l nu tau, and now we need one more, l lambda kappa, times t sigma tau kappa. That would be the general transformation property. You can generalize the hell out of this, any number of indices, and that's basically the definition of a tensor. The definition of a tensor is a thing which transforms like this. Now, as I said, not every tensor is formed from the product of two vectors. For example, supposing there were two other vectors, supposing there were two other vectors, c and d, we could take a mu, b nu, and add it to c mu, d nu. Adding tensors, by assumption now, adding tensors gives other tensors. So if I take the tensor a times b with mu index and nu index and add to it c times d, that's something which cannot in general be written as the product of two vectors, but it's a tensor. It's defined by its transformation properties, not by the fact that it may or may not be associated with just a pair of vectors. OK, so now we know what a tensor is from its transformation properties. How exactly does multiplying a matrix by a tensor work? Well, how does, oh, it works exactly this way. This is, sorry, this way right here. Here's a matrix L and it acts on t. Let's write out, write out what this means in detail. There's a lot of components, but I'll give you a hint, for example, t three one prime. OK, what is that? That's equal to L, now this is three sigma, L one tau, t sigma tau, and now you start going through it. Yeah, sigma could be zero. The tau could be zero. There's 16 possibilities. Zero, zero, you match that with t zero, zero. So one term would be L three zero, L one zero, t zero zero. Then there would be another term L three zero, L one one, t zero one, and so forth and so on. There would be 16 such terms, one for each index here and each index here, and you add them all up. So that's the idea of the transformation property of a tensor. And the thing about tensors, vectors, scalars, tensors, the thing about tensors is if they are equal in one frame, they are equal in every frame. That's easy to prove. But to say that they're equal means all components equal. If all components of a tensor are equal to all the components of some other tensor, then of course they're equal as tensors; but another way of saying it is, shift everything to the left side: if all the components of a tensor are zero, they are zero in every reference frame. So to say that a tensor is zero is an invariant statement. As I said, it's not enough to look at some component and say that component is zero; it's the whole thing, the whole tensor, that has to be zero, and then it's zero in every frame.
No, if all the components are zero, there's zero in every frame, and that's the power of tensors. It allows you to make statements and allows you to write down equations which, if they're true in one frame, will be true in another frame after transformation. That's the basic power of it. All right, now I've told you how to transform a tensor with all of its indices upstairs. I could start writing down the rules for tensors with some indices upstairs, some indices downstairs, but I think I won't and instead I will just tell you that once you know how a tensor transforms, you can immediately deduce how its other variants transform. The other variants, for example, the other versions of the same tensor, same geometric quantity, but with some indices upstairs and downstairs. For example, a mu b nu. That would be some tensor with one index upstairs and one index downstairs. How does it transform? Never mind. You don't need to worry about it because I will tell you immediately that this tensor here is given by T mu sigma, which you understand and know about how to transform it, times eta nu sigma. To lower an index, to take a tensor and to take an index from contra variant to covariant, you multiply, you do it exactly the same way that you do it for a vector. You take that index and you lower it by the operation of eta. So, again, this is a well-formed equation, a well-formed symbol. Sigma is summed over and this here object is T mu nu. All right, but there's another way to think about it, an easy way to think about it. Given a tensor with all of its indices upstairs, what do you do to pull some of the indices downstairs? And the answer is very simple. If the index that you're pulling downstairs is a time index, you multiply by minus one. If it's a space index, you don't multiply at all. That's what eta does. And for example, here's the tensor T naught, naught, which is exactly the same as T naught, naught because I've lowered two indices. Lowering two indices is two minus signs. It's like the relation between A naught, B naught, and A sub naught, B sub naught. There are two minus signs in going from A super naught to A sub naught and going from B super naught to B sub naught. Each one has a minus sign. This is equal to this. But what about this one? A naught, B one. And how does that compare with A naught, B one? Sorry, A naught, B, let's put one down. Let's put both of them downstairs. B super one and B sub one are the same. But A super naught and A sub naught differ by a minus sign. So T naught one would be minus T naught one because only one time component was lowered. Every time you lower or raise a time component, you get a minus sign. And that's all there is to it. But having a neat matrix notation is a fast summary of that fact that the relation between covariant and contravariant is just every time a time index goes from upstairs to downstairs, it's a minus sign. Why is it useful to have these two forms contravariant? Just so you can write things like that. How else would we write A naught, B naught minus A one, B one minus A two, B two? That's it. Sorry, not this one, but yeah. This complicated thing which requires four terms, let's put the minus sign here, plus, plus. One minus sign, three plus signs, and four terms all together is just. That's all. That's the reason. There is. There is more. But the more to it is best described when we go to general relativity. For us right now it's just a neat tool for manipulating indices. 
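A compact restatement of the index-lowering rule for tensors and the resulting signs, together with the four-term contraction mentioned at the end (all following the same eta as above):

\[
T^{\mu}{}_{\nu} = \eta_{\nu\sigma}\, T^{\mu\sigma},
\qquad T_{00} = T^{00}, \qquad T_{01} = -\,T^{01},
\]
\[
a^\mu b_\mu = -\,a^0 b^0 + a^1 b^1 + a^2 b^2 + a^3 b^3
\quad\text{(one minus sign, three plus signs).}
\]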
It's a neat tool for manipulating indices and minimizing the number of indices we need. What you asked earlier how many terms in A mu, B mu, I said mu, mu, you said 16. That's 16 and A mu, yeah, in A mu, B mu, there are 16. So a tensor is always four, or is it just in the context of relativity? Right, that's, yeah. In the context of relativity, because there are four coordinates, the indices run over four. So this will be general for any indices, but in our class this is four? Yeah, yeah, but I mean, no, it's worthwhile keeping in mind that it would work, yeah, that's right, that's correct. And in fact, the analogous version of it for three dimensional space is essentially the same but slightly simpler in that you don't ever have to distinguish upper and lower indices. That's it, okay. Can you give a more general description of covariance and contravariance? I certainly can, but not tonight. Right, it has a geometric significance, but you know, the geometric significance is something everybody figures out for themselves and then promptly forgets because you just really, you never really think about it. You just learn to manipulate the indices and it's so quick and so efficient and so fast that, but yes, it does have a, another, it does have a geometric meaning. The geometric meaning is, well okay, I don't want to get into it now, I wanted to, tonight's lecture is really about electrodynamics and we ain't going to get there. All right, now let's talk about the different kinds of tensors. Now, I'm talking about tensors with two indices. Let's talk about two indices. Here's a tensor with two indices. Now, this is not necessarily the same — and in fact, it is not the same in general — as T nu mu. Changing the order of the indices in general counts. For example, A mu B nu is not the same in general as A nu B mu. This could be A naught B one and that's not the same as A one B naught. It's not the same. So in general, tensors are not invariant under interchanging the order of the indices. But there is a special case called symmetric tensors. Symmetric tensors are ones which do have this property that T mu nu is equal to T nu mu. Let me construct one for you. A mu B nu plus A nu B mu. If you interchange mu and nu, this doesn't change. What happens is this becomes this and this becomes this and it's the same quantity. So you can construct out of any tensor a symmetric tensor by adding together the tensor and the same tensor with its indices interchanged. Some tensors are symmetric, others aren't. A symmetric tensor has a special place in general relativity. Not so important in special relativity. Well, it'll come up. But more important for us tonight is the anti-symmetric tensor. Let's call it F mu nu. A tensor which, when you interchange nu and mu, changes sign. We could construct such a tensor by putting a minus sign here. Putting a minus sign there would construct a tensor which would change sign when you interchange mu and nu. If you interchange mu and nu, this gets swapped with this, but there's another minus sign. So there are symmetric tensors and anti-symmetric tensors. Anti-symmetric tensors have fewer components than symmetric tensors. And the reason is their diagonal components vanish. For example, one of the things this equation says is that F naught, naught is equal to minus F naught, naught. Set mu and nu equal to time. F naught, naught is minus F naught, naught. The only solution to that is zero.
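The symmetric and antisymmetric constructions just described, written out as a restatement:

\[
S^{\mu\nu} = a^\mu b^\nu + a^\nu b^\mu = S^{\nu\mu},
\qquad
F^{\mu\nu} = a^\mu b^\nu - a^\nu b^\mu = -\,F^{\nu\mu}
\;\;\Rightarrow\;\;
F^{00} = F^{11} = F^{22} = F^{33} = 0 .
\]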
So if you think of a tensor as an array, a matrix if you like, if you think of it as an array, then an anti-symmetric tensor, this is the one we're going to be interested in, is completely zero on the diagonal. And the off-diagonal elements, I'm going to give them names now. I'm going to give them names. I'm going to call this one minus E sub 1, this one minus E sub 2, and this one minus E sub 3. And these names are, of course, chosen for future purposes, but I'm just going to write them down now. And they're just labels for now. Minus B3, B2, and minus B1. Now what do I put down here? I haven't written all the elements of it. But I'm assuming that this F tensor is anti-symmetric. If it's anti-symmetric, it means that as a matrix it's anti-symmetric. It means down here we put plus E1, plus E2, plus E3. Let's see. There's an element down here which must be plus B3, minus B2, and plus B1. It's anti-symmetric, which means you just flip the sign when you reflect it around the diagonal. That's the notion of an anti-symmetric tensor, and it plays a key role in electromagnetism where E's stand, of course, for electric field and B's stand for magnetic field. The electric and magnetic field, as we will see, we're not there yet, but as we'll see the electric and magnetic field combine together to form an anti-symmetric tensor in relativity. The electric and magnetic field are not two independent things, and in particular what that means is that when a Lorentz transformation is performed, the electric field can become magnetic field just like X can get mixed with T, E can get mixed with B. So what you see is a pure electric field I might see as having some magnetic component to it. All right, but we're not there yet. That was just by way of notation, strictly notation. Let's now begin with the physics part of tonight's lecture. This was all notation and abstract, rather dry. Well, it all go crazy if I had to write 0, 0, 1, 1, 2, 2, 3, 3 every time I wrote an equation, and so it's good to have this notation. Okay, now, we could begin either studying the dynamics of the electric and magnetic field or we could begin by studying the motion of a particle in an electric and magnetic field. The dynamics of the electromagnetic field would be the equations of motion that electromagnetic field satisfy, the Maxwell equations to put it briefly, whereas the equations of a particle moving in an electromagnetic field are the Lorentz force law. Let me remind you what the Lorentz force law is, and then we're going to derive it from something else. We're going to derive it from a combination of relativistic ideas and the action principle. All right, I'll just remind you mass times acceleration. This is a low-velocity version of it. Mass times acceleration, where we don't have to worry about relativity too much. Mass times acceleration is equal. On the right-hand side, you'll have the electric charge of a charged particle times the electric field. Mass times acceleration is electric field times electric charge, and the other term is the magnetic charge, the magnetic, sorry, not the magnetic charge, the magnetic force, which also involves an electric charge. Plus, I think it is velocity of the particle cross product with the magnetic field. Cross product, I assume you know how to think about cross products. We'll use them over and over again. I actually have a little section in here on cross products, but I'm not going to do it tonight because I think we've done it enough times. There are some speeds of light in there. 
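Assembling the entries read off the blackboard into an array (the labels follow the lecture; whether one attaches upper or lower indices to this particular array is a sign convention the lecture has not fixed yet), and restating the non-relativistic Lorentz force law in vector form with the speed of light set to one, as the lecture does next:

\[
F =
\begin{pmatrix}
0 & -E_1 & -E_2 & -E_3\\
E_1 & 0 & -B_3 & B_2\\
E_2 & B_3 & 0 & -B_1\\
E_3 & -B_2 & B_1 & 0
\end{pmatrix},
\qquad
m\,\ddot{\vec x} = e\,\vec E + e\,\vec v\times\vec B .
\]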
I'm going to set them all equal to one. This is usually taken to be V over C, V divided by C. That's the only, I think that's the only speed of light in that formula. Little e is just the electric charge measured in appropriate units, measured in the appropriate units. That's the non-relativistic version of the Lorentz force law, of which we're going to derive the relativistic version tonight, and the full-blown relativity. We're going to find that these two terms are really part of the same term. They're really part of the same thing, written in a way where all reference frames will get the same answer. All right, so to ensure that our answers respect the rule that the law of motion of a charged particle is the same in every reference frame, we want to write an action principle where the action is invariant under Lorentz transformation. That's the key. Action, invariant under Lorentz transformation. For a single particle, we had a formula. Let's go back. I don't need this now. We'll come back to it. For a particle without any field, we just wrote down an action which was the integral along the trajectory from one to two, one and two being the starting point and the end point of the trajectory, minus the mass times the proper time interval. Proper time from one point to the next, minus the mass times the total proper time from one to two. And we rewrote that as the square root of one minus x dot squared, where x dot, let's remember what x dot squared is, x dot m is the mth component of velocity. Here m is just a three vector symbol. x m dot and x dot squared means x one dot squared plus x two dot squared plus x three dot squared and so forth. In other words, the total square of the velocity. That's what x dot squared means here, summed over one, two and three. All right, so that was the action integral dt. And that identifies minus m times the square root as the Lagrangian of the particle. We worked that out. We found out what the momentum was, what the energy was, and we'll just leave that. What do we add in an electromagnetic field? Well, I haven't told you much about electromagnetic fields yet, but the basic structure which enters an electromagnetic field is a four vector. A four vector, the electric and magnetic fields themselves are derived quantities. The basic underlying quantity is a four vector a mu. And it's a function of position. It's the, and time. It's the field describing electromagnetic waves, if you like, or electromagnetic fields in general. And it's a four vector. The electric and magnetic fields are derived things which we're going to derive. I think we're going to derive them tonight. I'm going to try to derive them tonight. But the basic starting point is a four vector. A four vector is called the vector potential, or the four component vector potential. And we'll learn what it has to do with electric and magnetic fields in a moment. But what do you do with it to construct an action for the particle in the electromagnetic field? And the answer is very simple. It's in many respects simpler than the thing that we did with the scalar field. A mu is a field with a lower index. The natural thing to do with it along a trajectory is to take a little segment of the trajectory described by dx mu, a little four vector from here to here. And what can you do with such a four vector if you have another four vector, in particular a four vector with covariant indices to make a little scalar quantity associated with that little gap there? 
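The free-particle action and Lagrangian recalled here, restated:

\[
S_{\rm free} = -\,m\int_{1}^{2} d\tau
= -\,m\int_{1}^{2}\sqrt{1-\dot{\vec x}^{\,2}}\;dt,
\qquad
L_{\rm free} = -\,m\sqrt{1-\dot{\vec x}^{\,2}} .
\]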
Well, you take the two vectors and you multiply them together with one upper index and one lower index. I'll write in here that a mu is a function of x, which means x and t. And then you sum them all up. You integrate from one end to the other from one to two. And this is just another way of saying for each little gap there, take the differential distance along that segment and multiply it by a mu to form a scalar. The x mu times a mu is a scalar. Everybody agrees about its value on each one of these little segments. And we add them together. You can add them as well as I can add them. And we will get the same answer for the action because what I wrote here was a scalar. I added them all up. Adding quantities that we agree about will continue to agree about them. What more can I do with this? Not much. The only other thing that I'm going to do to it is I'm going to multiply it by a constant. And we're going to call that constant e. It is the electric charge. And because of a notation that was first put in place by Benjamin Franklin, there's a minus sign here. That's arbitrary. It depends on our definition of electric charge and had the proton been, well, and, um, yeah, okay, so there's a minus sign. So here's the other term in the action of particle moving in an electromagnetic field. Let's write it up here. Minus e times an integral from the same point from one to two dx mu a mu of x. Now what do we want to do? We want to write down Lagrange's equations. We want to write down Lagrange's equations for the motion of the particle. And really my goal is to write Lagrange's equations and show that they're just Lorentz force law. But I would like to do it in a way which is relativistically invariant. M a mass times acceleration is a very non-relativistic formula. What, for example, is relativistic acceleration? What does acceleration mean? Does it mean the second time derivative of the position with respect to time? Or does it mean the second time derivative of any one of the four coordinates with respect to time? Or does it mean the second time derivative with respect to proper time? Well, I think we probably all can all guess that the invariant or the best definition of acceleration and relativity is to differentiate the coordinates twice with respect to proper time along the trajectory. That's a guess. It is the right definition. And in that form, acceleration is a four vector. We want to write four vector equations. But we're going to begin with a thinking about velocity being derivative with respect to time. And later, when we calculate the equation, we'll go back and make it Lorentz invariant. Now, we won't change it. We'll rewrite it in a way which you'll recognize as an equation among four vectors. OK, so let's leave it this way for the moment. This quantity in here would be the Lagrangian. So this would be the Lagrangian for the free particle. But there's another contribution to the action over here. So let's write this in a form which would make it look like a Lagrangian. This is very easy. We just divide by dt and multiply by dt. All right, so let's see what we have here. There's dx naught by dt. What is dx naught by dt? X naught is t. So, dx naught by dt is just one. And the first thing we get here is a naught dt. And a naught is a function of x and t. So that's the first term, dt by dt, that's trivial. Now, the next term is minus e integral. Let's call this one now x. So it's dx by dt, which is x dot times a sub x dt. 
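The interaction term just written down, in the same notation (with the lecture's Benjamin Franklin minus sign in front of the charge):

\[
S_{\rm int} = -\,e\int_{1}^{2} a_\mu(x)\; dx^\mu .
\]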
Likewise for y and z, and so the rest of this is simply xm dot am, or xm dot. The mth component of velocity dotted into dot product, mth component of velocity dotted with the mth component of the vector potential. That's what this is. And the reason I wrote it this way with a dt here is to make it have the familiar look of a Lagrangian action, an action written as an integral over dt. Okay, so let's write it down now, dt by dt, integral a naught plus xm dot am dt. And now we can identify the Lagrangian. Let's calculate, it's here. The Lagrangian is here. It's just everything inside the integral here. The Lagrangian for this system, for this particle in an electromagnetic field is just minus m square root of 1 minus x dot squared, which means x dot squared plus y dot squared plus z dot squared, minus electric charge a naught. That's this one. Minus electric charge x dot m am of x and t. The a's are just functions of x and t. They're fields. Just like we did with the scalar field, I'm going to imagine that somebody told us what a of x and t is. Somebody told us it's just a known function. Each one is a known function of x and t. Now we're just exploring the motion of the particle in that known field. And here's the action for it. What do we want to do with it? We want to write the Euler-Lagrangian equations. And we want to show that they look like the Lorentz force law. So that's our goal. Take this Lagrangian and see if we can derive the Lorentz force law and see if we can derive it. Mass times acceleration is equal to electric field plus electric and magnetic Lorentz forces. All right, so where do we begin? Let's consider the equation of motion for x m. So we start with partial of l with respect to x m dot. That's the starting point. There's obviously a contribution from the first term here. And we've already done one of that. We know what that is. It's m, incidentally, this m is not the same as this m. This m is an index. This m is a mass. Let's not get confused. I apologize for that, but I'm too late to do anything about it now. This one is mass. This one is the component m. m times x m dot divided by square root of 1 minus x dot squared. That's the derivative of this Lagrangian here with respect to the mth component of velocity. That's easy. We've done that before. Incidentally, this happens to be m times the x m. The x m by d tau, it's the x by dt, but this extra factor here just makes it the four velocity, the four vector of velocity. This is m times the four vector of velocity, the x m by d tau. But we'll come back to that for the moment. Let's just leave it this way. And then what do we do with that? We take the derivative with respect to time. So we get m d by d time x m dot divided by square root of 1 minus x dot squared. That's the left hand, well, sorry, that's one term. Oh, sorry. This is not right, is it? No. What's the y isn't it right? Yeah, there's more stuff there. Right over here. Let's put it back in. What is it? It's minus e times a sub m from here. Derivative of the Lagrangian with respect to x dot gives us another term which is just the vector potential. Electric charge times the vector potential. And so what we have to do is be careful, first of all, this is m times this minus e a m. That's the left hand side. That's the full left hand side of the Euler-Lagrangian equation. How about the right hand side? The right hand side should just be dL by dx m, right? Did we switch between co and counter-packs? Yes, I did. But remember that this, we can put it back. 
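Putting the pieces together, the Lagrangian written out in this passage and the velocity derivative just computed are, as a restatement:

\[
L = -\,m\sqrt{1-\dot{\vec x}^{\,2}} \;-\; e\,a_0(x,t) \;-\; e\,\dot x^m a_m(x,t),
\qquad
\frac{\partial L}{\partial \dot x^m}
= \frac{m\,\dot x^m}{\sqrt{1-\dot{\vec x}^{\,2}}} \;-\; e\,a_m .
\]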
Yeah, I did. I did. But remember that when you switch the space components, it doesn't matter. Does that require a switch in the a? It doesn't matter for a either. Right. All three-dimensional components, it doesn't matter. Question, are we summing up Latin indices? Only sum over repeated indices. And don't count the mass as an index. But I mean Latin and Greek doesn't matter. Well, let's see, where might have I done that? Yes, over here I did. Yeah, over here I did. And so, with Latin indices, it doesn't matter whether they're upstairs or downstairs. And if they're repeated, you sum over them. But you only sum from one to three, not from zero to three. OK? So that's, yes, that's part of the notational trick. All right, so here we have the left-hand side of the Euler-Lagrangian equation. What do we have on the right-hand side? On the right-hand side, we have the derivative of the Lagrangian with respect to xm. All right, so where does it depend on xm? a naught depends on x. It's just thought of now as a fixed known function of the positions of the particles. And so on the right-hand side, we will have minus E times the derivative of a naught with respect to xm. Now, does that look familiar in any way? On the left-hand side, something that's sort of like mass times acceleration. And on the right-hand side, something which is minus the derivative of something. This is clearly, E times a naught is clearly the electrostatic, is the electrome, is the potential energy. Is the thing which in mechanics we would ordinarily call v or u or whatever we call potential energy. So this here is just familiar. It's here, left-hand side here has something like mass times acceleration. We didn't have this thing downstairs here, which is, after all, is very close to one for slow motion. If the motion is slow, this literally is mass times acceleration. This term we'll worry about later. And on the right-hand side, we just have minus the electric charge times the derivative of the electrostatic potential. So this is potential energy over here or the gradient of potential energy, which is essentially electric field. But we'll come back to that. All right, but now there's more. These terms also depend on x, and I have to differentiate them with respect to x. These terms are mixed. They depend on both velocity and position. We've already taken into account the fact that they depend on velocity. That was on the left side of the equation. On the right-hand side of the equation, we have to write minus e. Let's call this here n, n, n. It's a summation index. xn dot times the derivative of an with respect to xm. Now, where did all this come from? It doesn't matter whether I call this n or m. It's summed over. But by the time I got down to here, the m index was not summed over. So I don't want to call the summation index m. I called it n. I switched it. Why am I differentiating with respect to xm? Because the right-hand side of the Euler Lagrange equation is the derivative of Lagrangian with respect to xm. So we have xn dot derivative of an with respect to xm. So look at this. Look at it carefully. The left-hand side has an index m which is unsummed over. The right-hand side has an index m which is unsummed over, not summed over, but also has an index n which is summed over. This is the Euler Lagrange equation for each component of the position, x1, x2, x3, and so forth. It doesn't look like anything you might recognize yet. Let's see. 
Let's — and unfortunately, there's no simple way to deal with it except to deal with it. Okay. The side over here, that's easy. That's just, we're not going to change that: d by dt of — take m on the outside — x dot m, upper or lower, it doesn't matter, over square root of 1 minus x dot squared. That's this term over here. Now, how do we differentiate this thing with respect to time? It may depend explicitly on time, but even if it didn't depend explicitly on time, it still wouldn't be constant. Why? Because the position of the particle is not constant. The position of the particle is moving, and so even if a sub m only depended on position and not on time, a sub m, as it tracks the particle, as it follows the particle, would depend on time. So, there are two terms when you differentiate a sub m. Let's see what they are. Minus e, and the first term is just the explicit derivative of a m with respect to t, a m of x and t. But then there's another term, and the other term is minus e times the change in a when x changes. Now, let's see: d a m by d x n, times xn dot. Now, if you haven't gotten to sleep yet, let's just see what we have. We have the time derivative of a. It's over here. A depends on time explicitly if the field is changing with time, but it also depends on time implicitly from the fact that the position of the particle is varying with respect to time. So, we differentiate a with respect to position and multiply by velocity. This is a dummy index. It's summed over. This one is the index that appears on the left side of the equation. So, what do we have? That's this side of the equation, and on the right-hand side, we're still not finished. On the right-hand side, a bit of a mess, minus e times d a naught by d x m. That's this term. That's the potential energy, and then there's this term over here: minus e xn dot d an by dxm. That's a lot of stuff. It's more stuff than I usually like to do on the blackboard, but no way around it. So, let's take this and call this ma. Mass times acceleration. I don't know if it's literally acceleration, but let's just call it, let's think of it as mass times acceleration. Leave it on the left-hand side. M d by dt x dot m over square root. And put everything else on the right-hand side, and then group things together. We're going to group them together into two kinds of terms. One kind of term is not proportional to a velocity. This has no velocity multiplying it, and here is another term which has no velocity multiplying it. The second term here on the left-hand side has a velocity, and the last term on the right-hand side has a velocity. So, we're going to group things by whether they have a velocity or don't have a velocity. Now, just remember where we're going. We're going to electric field times electric charge, which does not have a velocity, and then v cross b, which does have a velocity. That's where we're going. And we're trying to identify now those two terms. Alright, so the left-hand side is, let's just call it, mass times acceleration, but put some quotes around it to indicate that it's not the literal second-time derivative of x with respect to t. Just the left-hand side. On the right-hand side now, we have the two terms which don't involve velocities. We have e times d a m by dx naught — d a m by dx naught, I've now reverted to calling time x naught — minus, same e, d a naught by dx m. Notice the nice symmetry here.
It has the am by dx naught, and it has the am naught by dx m with a minus sign. Obviously, this thing wants to be electric field. It has no velocity in this term over here. So if there's any sense to all of this, this must mean mass times acceleration is electric charge times electric field, and indeed it is the electric field. The electric field has two terms. One of them is the gradient of the time component, and the other term is the time derivative of the space component. So that's one thing. We will later see that this falls nicely into place as electric field. Now, what about the terms involving velocity? Let's get them. We bring this one over on the right-hand side, so it's plus e. We have both of them have x in dot. So let's put x in dot out here, and one of them has, let's see, where is it? Bracket. One has the am by dx n, and the other one has the am by dx m. Notice the pattern. Each term here, oh, yeah, the pattern apart from the x dot is, each of these terms has a derivative of an a with respect to a coordinate, anti-symmetrized. Notice that anti-symmetry there, that it's am by x naught minus an naught by x m. Am by x n minus an by x m, interchange of n and m over here, and interchange of m and naught over here. Let me do something over here to make this term a little more parallel to this term. I'm going to write something stupid here. I'm going to write t dot here. What on earth is t dot? It's dt by dt, it's just one. Yeah. All right. Or I could also write it as x naught dot. It didn't do anything. I just threw in for free the t by dt, which is just one. But now it starts to look kind of symmetric with respect to space and time a little bit. Here we have an anti-symmetrized thing with an index m and n, and here we have an anti-symmetrized thing with a naught and m. This is x m dot here. Sorry. Yeah, that's right. I got it right. That's right. Okay. So this is the form of the equation of motion. This is electric field. This, of course, is the magnetic field. Is the magnetic field. To identify and detail how the components of magnetic field, what is this operation over here? What mathematical operation have I done on the components of the vector potential? The curl, right? It's the mathematical curl, but we'll come to it. I didn't want to do that tonight. This is, these are the components of the curl of a, and the curl of a is the magnetic field. This here contains one term, which is just the gradient of the time component, plus another term, and this constitutes the electric field. Good. All right. Everybody happy with this? It's a bit much, but I tell you now, go through this. This is something, you know, if you really care about this, here's your Lagrangian. Where is it? This is, sorry. Top board. There's your Lagrangian. Go through the Euler-Lagrange equations. Collect together the terms one side. You'll call ma. The other terms have terms with velocities and terms without velocity. This, of course, didn't have a velocity. It was just one. And group them together and show that this is the way they group together. You will learn a lot, both about field theory, about a little bit of calculus, and especially about electrodynamics. Okay. Yeah. This is, of course, just the action on the particle, of the particle motion, right? Yeah. Why did you write the integral of a over, you had a four vector form with the thing on the right. You have your now. This? That's the thing. This integral is, minus e times the integral of a mu dx mu. Why did you write that? 
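Grouping the terms as described, the equation of motion takes the form below (a restatement of the blackboard result; the first bracket is what the lecture identifies as the electric field, and the second, contracted with the velocity, is the curl of a — the magnetic term):

\[
m\frac{d}{dt}\!\left(\frac{\dot x^m}{\sqrt{1-\dot{\vec x}^{\,2}}}\right)
= e\left(\frac{\partial a_m}{\partial x^0}-\frac{\partial a_0}{\partial x^m}\right)\dot x^0
+ e\left(\frac{\partial a_m}{\partial x^n}-\frac{\partial a_n}{\partial x^m}\right)\dot x^n,
\qquad \dot x^0 = 1 .
\]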
That's really my question. There's hundreds of years of experiment. But, but, what constrains it? I think the right answer is not why did I write down exactly this. That's partly experiment. Okay. But what is it that this integral has? What properties does it have, which make it a good candidate? First of all, the action is an integral along the trajectory. That's a rule that up till now we've respected, that the action is to be thought of as little incremental pieces. So it's a bunch of incremental pieces. And the other rule is simply that it should be a scalar. That it should be composed out of scalars. Given a dx mu, there are basically two things you can do with it to make a scalar. One is to multiply it by itself. And what is that? That's just d tau squared. That's already in the Lagrangian. Here is the tau is just the square root of that. That's in the Lagrangian already. It doesn't involve any field. What's the other thing you can do? You can dot the four vector of a lot. You can dot the dx, or not dot, but you can multiply dx by a and contract the indices. If you can find another way to make a scalar, you're welcome to play with it. Okay. And there are other ways. We have not got yet to the concept of gauge invariance. But I prefer to work with it at this level first, and then to study its gauge invariance after we've gone this far. So this is basically the simplest thing we could have written down. You could write that out. You could multiply it by a nu, a nu. You could take a scalar and multiply it by another scalar. You can write down a lot of junk. But this is very much the simplest, so let's explore it. Let's do it in that frame of mind, an exploration of a simple action. What happens? You start grinding it out. You get a set of equations of motion, which are the Lorentz force law. You know you've made the right choice, once that comes out of the... Yeah, you know you've made the right choice, right? But we haven't used all the principles yet, but we haven't identified all the principles yet. Can you derive Maxwell's equations from the principle of least action? Oh, yes, we're going to do it. Oh, we will do exactly that. But we will need another term in the action. The other term in the action will be the field part of the action, not the particle moving in the field, but just the part of the action that governs the field by itself. That's what we did for the scalar last time. Okay, so let's keep going. In the last equation there, can you... This one? Yeah. Can you just show me explicitly where the summations are occurring if I write an notation? Okay, there's no summation here. It's the mth component of the acceleration. That's it. It's the mth component of the acceleration, and there's no summation over here. There's just an m index, and that's all. There's an n index over here that is combined together with an n index over here. There's a repeated index, Xn d by dxn times xn dot, and dAn times xn dot. N is the summation index. This is sum over n, if we wanted to put the summation sign, sum over n. But the rule is repeated indices get summed over. Non-repeated indices are explicit, and they're on both the left-hand side and the right-hand side. Let's see if we can write this formula in a way which is manifestly, that means obviously, Lorentz invariant. The moment it is not, and the reason is that the left-hand side and the right-hand side are not themselves four vectors. They're not four vectors, but we can convert them to four vectors. So let's do that. Yeah. 
Could you say something about why that term is the electric field to partial of a m with respect to x a lot of times partial of a n with respect to x a? Well, what would we write if it was electric? What do we want to write? We want to write e- Yes. We want to write e times the mth component of the electric field. What's special about this is it doesn't involve a velocity. It just depends on an electric field which depends on space and time, but this term does not contain a velocity. It's not a velocity-dependent force. Force doesn't depend on velocity, it only depends on where the particle is, and so here we have a term which is like that. Over here, we have all the terms which multiply velocity, and so it better be that that corresponds to the magnetic field, but I think what you're saying is right. I mean, if we started from this principle, if we started from the principle of least action, and we wrote down that action and we eventually got this equation, we might look at it and say, ah, let's take this whole thing here. It doesn't matter whether it's part this and part that. Let's just call that e, and we would have invented or discovered the electric field. So I think that's the, that in fact really is the natural logic. Natural logic is to identify this and call it the electric field, and in a like manner, we will identify this with components of the magnetic field, v times something. OK, but not yet because we, I don't want to do that yet. All right, let's see if we can write this in a form, first of all, which is obviously Lorentz invariant. Obviously invariant means that you should write it as the left-hand side and the right-hand side being tensors of the same kind. Well, this obviously wants to be a vector type thing. It doesn't want to be, it has a component. The component is a spatial component. It sounds like it's part of some kind of four complex which you might identify as a four vector, but not quite. And let's see what we can do with it. OK, so what was it really? What it really was, M, what it was was M times d by dt of the x mu by the, x M by dt, or x M dot divided by square root of one minus, let's just call it v squared. I called it x dot squared before. This quantity, x dot over square root of one minus v squared, x dot means the x by dt. This is the x by dt. This happens to be exactly the x M by d tau. Do you remember what we call that variable, that quantity, the x M by d tau? We called it the four velocity, and there was a letter that I identified with it, u. Of course, there's also a fourth component. There is a fourth component. The fourth component would be dx naught by d tau. That would be u naught. For slow particles, this is very close to one, but nevertheless it's there. But at the moment what we have here is literally d by dt of dx M by d tau. In other words, the time derivative of the four velocity. That's equal to the right-hand side. The right-hand side will also manipulate in a little while. But I don't want to have a d by dt here. I want to have a d by d tau there. Why do I want to d by d tau? Because tau is the invariant quantity. I don't want to have this kind of mixed thing with a tau and a t. d by d tau, that's Lorentz invariant. Tau doesn't change from one frame to another. But t does. So what do I do with this? I multiply, I'm going to multiply whatever's here, all the stuff here, all the stuff that comes from the right-hand side of the equation, whatever it is, I'm going to multiply both sides of the equation by dt by d tau. dt d tau. 
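The four velocity referred to here, restated:

\[
u^\mu \equiv \frac{dx^\mu}{d\tau},
\qquad
u^m = \frac{\dot x^m}{\sqrt{1-v^2}},
\qquad
u^0 = \frac{dt}{d\tau} = \frac{1}{\sqrt{1-v^2}} .
\]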
What do I get on the left side? d by dt, times dt by d tau — what's that? That's just d by d tau. The dt's cancel, and this is just the rate of change with respect to tau. That's after I multiply by dt by d tau. And this is a nice expression. This is just a mass times a second derivative of the component with respect to tau squared. This is part of a four vector. If I take the first three components, x, y, and z, and add to that a fourth component, which is the second derivative of time with respect to tau squared, that is a four vector. It's the four vector m d second x mu with respect to tau squared. It's a kind of acceleration: just as the Lorentz invariant concept of a velocity is to differentiate the four components with respect to the proper time, the Lorentz invariant concept of an acceleration is to take the second derivative of all four components of position with respect to the proper time. So we can do that on the left side. That gives us what is standardly called the proper acceleration. That's called the proper acceleration on the left side. For slow-moving particles, it's close to the ordinary acceleration. Let's look what's on the right-hand side. Let's go to the right-hand side here and now multiply it by dt by d tau. Let's multiply it by dt by d tau. So here we have dx naught by dt, and dx naught by dt times dt by d tau is the same thing as dx naught by d tau. But what's over here? This is dx m by dt, and now I'm multiplying by dt by d tau. So what does that make that term? It just makes it dx m by d tau. Now look at these two terms. They have exactly the same form. The same form with a different index. Here the indices are m and naught; here the indices are m and n. This can all be summarized into a single equation, or a single term, and the single term is e times d a m by dx mu minus d a mu by dx m, times dx mu by d tau. On the left-hand side it's m times d second x m by d tau squared — this is x m, not x mu. Wouldn't it be nice if I could change this m to a mu? This is an equation — this is three equations — for the space components of the proper acceleration. On the right-hand side, we still have space components here. Wouldn't it be nice if there was a fourth equation? This is three equations, one for each m: one, two, and three. Wouldn't it be nice if there was a fourth equation, which was just the time component of this? Same equation except with m being the time component over here and the time component over here. If that fourth equation was correct, then we could summarize the entire set of equations by basically one equation: mass times proper acceleration, all four components, d second x mu by d tau squared, equals e times the derivative of a mu with respect to x nu minus the derivative of a nu with respect to x mu, times dx nu by d tau — the summed index should be nu here, not mu. All right. Let me write that equation down again. Your homework is to go home and derive the following equations. How do I know that the fourth equation is right? Let's just rewrite it; here it is. This is a complex of four equations, one for each mu: naught, one, two, and three. The last three of them — one, two, and three — are just the equations that were written here when m is one, two, and three. I've added an equation. The equation I've added is for the second time derivative of x naught. How do I know that it's true? Why should it be true?
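The covariant equation arrived at here, written out; and, as a labeled assumption, if one names the antisymmetric combination in parentheses F mu nu (the sign convention for F has not been fixed in the lecture), the same equation can be abbreviated using it:

\[
m\,\frac{d^2 x_\mu}{d\tau^2}
= e\left(\frac{\partial a_\mu}{\partial x^\nu}-\frac{\partial a_\nu}{\partial x^\mu}\right)\frac{dx^\nu}{d\tau}
\;\equiv\; e\,F_{\mu\nu}\,\frac{dx^\nu}{d\tau} .
\]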
Because the first time derivative would be constant and the second time derivative would be zero? Whoa. No. No. The left-hand side is literally a four vector. So is the right-hand side. This is a tensor. This is a tensor with a mu index and a mu index. If you take a tensor and you combine it with a vector, you get a vector. This equation is a tensor equation in the sense that it transforms properly. So because we took care from the beginning to make sure that the equations of motion were Lorentz invariant, and we did that by making sure that the action was a scalar. This is the logic. Here's the logic, and it pervades everything in field theory and particle physics and modern physics: make sure that your Lagrangian respects the symmetries of the problem. If the symmetry of the problem is Lorentz symmetry, make sure your Lagrangian is Lorentz invariant. Once you do that, you don't have to worry again about whether the equations are invariant. You don't have to worry about whether it looks the same in every reference frame. Once you know that the laws of physics are Lorentz invariant and that the first three components of a certain four vector are equal to the first three components of some other four vector, then you know automatically that the fourth components will also match. The only way a system of equations can be Lorentz invariant, if the first three components are zero, let's say, is for the fourth component also to be zero. So we can automatically conclude from the fact that we built a Lorentz invariant theory that the fourth equation must be true if the other three equations are true. They'll just form a complex of equations which include all four components. Another way to say it is if you took the equations for the first three components and then transformed them, you would simply pick up the fourth equation that way. But you don't need to do that. All you need to do is say, I know my theory is Lorentz invariant and therefore if the first three components of a vector equation are true, so must be the fourth component. Yeah. Yes. That's great. That's all there is. Yeah. Mm-hmm. I'll tell you what it is. It's energy conservation. The first three equations, they're not energy conservation; I'll tell you what they are. The first three equations are of the form dP dt is equal to force. You know, dP dt is MA, right? And P is MV and dP dt is MA. So they're roughly speaking, no, not roughly speaking, they're exactly the equations that the time derivative of the momentum is equal to the force, the first three equations. And the force being everything that's on the right-hand side of the equations. They tell you, not momentum conservation, but they tell you how the change of momentum with time is a response to the existence of a force. The fourth equation tells you how the kinetic energy changes with time. OK, how the kinetic energy changes with time. How does the kinetic energy change with time? The derivative of the kinetic energy with respect to t, let's call it dK by dt. Well, that's called work, right? And work is force times velocity. That's what the fourth equation is. The fourth equation is the equation that tells you that the energy changes in a way which is consistent with the amount of work being done. And that's not too surprising; remember that the momentum and the kinetic energy form a four vector.
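Rendered in symbols, the split just described (my notation for the force and kinetic energy, with arrows marking ordinary three-vectors) is

\[
\frac{d\vec{p}}{dt} = \vec{F}
\quad\text{(the three space equations)},
\qquad
\frac{dK}{dt} = \vec{F}\cdot\vec{v}
\quad\text{(the fourth equation, the work-energy balance)}.
\]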
OK, so it's not surprising that the four component equations are related, if not to energy and momentum conservation, at least to the way in which energy and momentum change in response to the existence of a force. So that's what the fourth equation is. It's the energy balance. It tells you how the kinetic energy changes when you do work on the system, the work being done by the field. That's what this is about. But the sort of exciting thing is that you can represent the equations in this very neat Lorentz invariant form, tensor form, where the left-hand side is a tensor, namely a four vector. And the right-hand side is the product of a tensor with two indices times a four vector again. We contract the indices and the contraction of the indices produces a four vector on the right-hand side, matching a four vector on the left-hand side. That's the exciting thing about this, that we used symmetry from the beginning. That told us the kinds of Lagrangians we could write down that would manifest the symmetry. And then we plowed through it and discovered, in fact, that the equations of motion really do reflect the symmetry of the problem, the symmetry being Lorentz invariance. Now, there is another symmetry. The other symmetry is called gauge invariance, and we haven't gotten to it yet. We'll get to it next time. But I'll tell you what it is next time. But let me say that there are three really fundamental principles that come up time and time again, which we see in this same example. Three principles. The first principle is called locality. What locality says is that the way a thing changes only depends on what's going on nearby the thing that you're thinking about. A field over here doesn't change due to a field over there having some value. It changes because the field nearby has some value. The motion of a particle only depends on the values of the fields nearby. It also doesn't depend on the values of the fields at a later time or an earlier time. The equations of motion are equations relating things in a small vicinity of a point of space and time to other things in the same small vicinity of time and space. How do we implement that? We implement that, that's called locality. We implement that by making the action, the action should be an integral over all space and time, dx dt, of some kind of Lagrange density which only depends on the values of things at a particular point in the neighborhood, let's call it x and t, and, well, sorry, depends on things and derivatives. Let's say fields and their derivatives. In other words, it depends on data which has to do with local nearby concepts. And you add up the action point by point in space. And what comes out is differential equations. Differential equations are equations which relate how things change from one point to another to neighboring things. You could imagine the action not having this form. You could imagine the action has in it things which involve products of fields over here and products of fields far away. But then your equations, when you worked out the Euler-Lagrange equations, they would have the form that the way things change over here depends on what's going on far away over there. No, that's not the way the action principle works. The action depends on fields and neighboring fields. Neighboring fields now means derivatives. So the way the field changes at one point is only determined by the value of the field at that point and neighboring points. So that phi sub mu is the four components of the derivative?
Yes, four components, phi sub mu represents the four derivatives of phi. And as I said, it says that fields change in a way which depends only on what's going on nearby in space and time. That's the principle of locality. And it's no more or no less than saying that the action is an integral. Yeah? How do you quantify nearby? Well, by making derivatives, by saying it depends on fields and their first derivatives. That's all. Yeah, fields and their first derivatives. Now, you could try putting in second derivatives and there's an interesting subject there, but let's not get into it too much. The action principle together with the idea that the action is built up out of incremental pieces, each of which depends only on a field in a neighborhood and its local neighbors. What we've been doing here just depends on x and dx. Yeah, when we're talking about the particle, yes, when we're talking about the particle motion, then we can substitute the statement that the Lagrangian along the world line of a particle only depends on the positions of the particles and neighboring positions along the world line. But also, it depends on the values of the fields at the position of the particle. The acceleration of a particle over here does not depend on the value of the field over there. It depends on the values of the fields in the neighboring region. That's the principle of locality, that the response of either fields or particles depends only on what's going on infinitesimally close by. Second, Lorentz invariance. In both cases, for both the particle and the field, the rule is: build the Lagrangian out of scalars. Build the Lagrangian, make sure that the Lagrangian itself is a scalar. That's another way of saying that we'll all agree about the action within a certain volume of space and time. Lorentz invariance, L equals scalar. Those two principles are very pervasive. There is one more principle which we haven't worked out yet. I have it in my notes and so I'll write it down, but it's going to be the subject of next week. Gauge invariance. And what that means, we'll find out next week. These three principles are extremely pervasive; every theory that we know about, whether it's general relativity, quantum electrodynamics, the standard model of particle physics, or Yang-Mills theory, all conform to these three principles. And basically I would say anything else on top of that that you need is sort of incidental, not in the sense that it's not important, but incidental, or what's the right word, that it varies from example to example in ways that don't really have to do with such deep principles. The fact, for example, that the standard model has three species of quarks, well that's not contained in here. That's a sort of random addition, but the basic deep principles are these. So next time we'll learn what gauge invariance is. As I said, I advise you to go through these equations and figure out where the various things came from, because I certainly don't expect that you got it all in one sitting.
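As an aid in going back through those equations, the locality principle described above amounts to saying that the action is a single integral over space and time of a Lagrange density built only from the fields and their first derivatives at each point. In LaTeX, with phi standing for a generic field (the specific field content is whatever the theory supplies),

\[
S \;=\; \int d^{3}x\, dt \;\; \mathcal{L}\big(\phi(x,t),\, \partial_{\mu}\phi(x,t)\big),
\]

and the second principle, Lorentz invariance, is the requirement that this Lagrange density be a scalar.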
(May 14, 2012) Leonard Susskind dives into topics of electromagnetism and how it relates to quantum mechanics. In 1905, while only twenty-six years old, Albert Einstein published "On the Electrodynamics of Moving Bodies" and effectively extended classical laws of relativity to all laws of physics, even electrodynamics. In this course, Professor Susskind takes a close look at the special theory of relativity and also at classical field theory. Concepts addressed here include space-time and four-dimensional space-time, electromagnetic fields and their application to Maxwell's equations.
10.5446/15004 (DOI)
Stanford University. All right. This evening, I want to get started on talking about particle mechanics and the special theory of relativity. How particles move, notions of momentum, energy. We will get to E equals mc squared. And that sort of thing. The next time, or maybe even tonight a little bit, we might even move past that if we get finished with it and start to talk about classical field theory. Let me just do one thing on the blackboard that we will need. We will use it recurrently, over and over, for various purposes. Mostly for purposes of showing that the relativistic formulas that we write down have as their limit the non-relativistic formulas that we may have written down, that we did write down, two quarters ago in classical mechanics. We are going to want to show that when objects move slowly compared to the speed of light, the new-fangled relativistic theory of particle motion and particle mechanics reduces to the old-fangled Newtonian one. Okay, for that we are going to need to make approximations which become better and better for slower and slower velocities. And in particular, there is one thing that we are going to have to approximate. Now I know you all know this but I am going to do it just for completeness on the blackboard. We have this quantity which recurs over and over, the square root of 1 minus v squared. It really means the square root of 1 minus v squared over c squared. If we measure the velocity in units of the speed of light then it is just the square root of 1 minus v squared. But we remember that v represents the fraction of the speed of light of the object which is moving with velocity v. So typically it is very small and we just want to write down how we approximate this for small velocities. In particular how we approximate it to order of velocity squared. This is just an example of the binomial theorem. I'll write it down once and for all; we'll use it a couple of times tonight and I assume everybody knows it. The binomial theorem says that 1 plus a small number epsilon, doesn't matter if it's small, but 1 plus a small number epsilon to a power p, where p can be anything as a matter of fact. It can be positive, negative, imaginary, complex, doesn't matter. That this can be expanded in powers of epsilon, a Taylor series if you like. We just expand it in powers of epsilon. For epsilon equals zero it's just given by 1, and then the next term is p times epsilon. Now I'm not going to work, I'm not going to prove this. This is something, this is the standard binomial theorem. And the next term, everybody remember the next term? p choose 2, which is the same as p times p minus 1 over 2 factorial, 2 factorial incidentally is just 2, times epsilon squared. And then it keeps going like that. I'll write the next one and you can write all the ones after it. It's a homework assignment to write out all the infinite number of them. What? Yeah, p times p minus 1 times p minus 2. Over 3 factorial, epsilon cubed, and so forth. If epsilon is not too big then this series will converge. And the meaning of saying it converges is that each term is significantly smaller than the previous, in such a way that it's a good approximation to terminate the series after an appropriate number of terms. Where appropriate depends on just how much precision you want. But in particular, if epsilon is really very small, it's usually the case that the first term is the most important one. Of course, it's the zeroth term which of course is the most important one. But the zeroth term is awfully boring here.
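For reference, the expansion being written on the board is the standard binomial (Taylor) series; in LaTeX,

\[
(1+\epsilon)^{p} \;=\; 1 \;+\; p\,\epsilon \;+\; \frac{p(p-1)}{2!}\,\epsilon^{2} \;+\; \frac{p(p-1)(p-2)}{3!}\,\epsilon^{3} \;+\;\cdots
\]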
We're going to find for the various applications that the zeroth order term, the 1, is just too boring to be relevant to us. And it's the first order term here which is important. So let's terminate the series at that point. Of course, it does break down if epsilon is not small. But that's the... So supposing we want to expand the square root of 1 minus v squared. We do that using epsilon equals minus v squared. In other words, 1 plus epsilon is the same as 1 minus v squared. And the power p, in this case, is equal to one-half. The square root of a quantity is that quantity to the one-half power. And so this becomes 1 plus epsilon which is one... Sorry, 1 plus p epsilon, which is 1 plus one-half, that's the p, times epsilon, which is minus v squared, which is the same as 1 minus v squared over 2. So that's a pretty damn good approximation for small velocities, in particular for non-relativistic velocities. So the square root of 1 minus v squared is just 1 minus v squared over 2. That's where a lot of 2's in classical physics come from if thought about as the limit of relativistic physics. Most of the 2's, and I'll show you some examples, come from here. I'll show you some examples as we go along. The other example that we will need will be 1 divided by the same thing. This case is the same as 1 minus v squared to the power minus a half. The only difference relative to what we just did above is that p is minus a half. That has the effect of changing the sign here so that this becomes 1 plus v squared over 2. Two signs have conspired to cancel each other. The sign from the half up in the exponent and the minus v squared here give us 1 plus v squared over 2. So that is just by way of setting up a little piece of calculational mathematics that we'll use in various situations. Now we're going to be interested in particles. Now a particle can be anything that holds itself together. It doesn't have to be an elementary particle. It could be the sun. It could be a box of chocolate. It could be a cookie. It could be anything. And when you think about the location of whatever it happens to be, it could be a donut. Let's take the case of the donut. What we mean by the position of the object is generally the center of mass position. When we talk about the mechanics of motion of particles, we're speaking really about the mechanics of the center of mass of an object. In the case of a donut, the center of mass is where there is nothing, but that's okay. We know where the center of mass is. And motion according to the center of mass is governed, for example, by Newtonian mechanics: F equals ma. Momentum is mass times velocity. Does it matter whether it's an elementary particle or a donut or a cookie? No. Its momentum is its total mass times its velocity. Its kinetic energy is 1 half mv squared in Newtonian physics. That's the energy of motion. Does it have any other energy? If it's a donut, does it have any other energy besides 1 half mv squared? Calories. Yeah, calories, exactly. It has all the internal chemical energy and that sort of stuff that's inside the donut. That part of the energy does not depend on its overall motion, on its state of motion. It's a constant. It's a constant with respect to how fast the object is moving. So you can say that the energy of a moving donut is 1 half mv squared plus the internal energy, and the internal energy doesn't depend on the state of motion. Okay, so those are just some reminders about classical mechanics.
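Written out, the two approximations derived at the start of this discussion, with v measured in units of the speed of light, are

\[
\sqrt{1-v^{2}} \;\approx\; 1-\frac{v^{2}}{2},
\qquad\qquad
\frac{1}{\sqrt{1-v^{2}}} \;\approx\; 1+\frac{v^{2}}{2},
\]

both good to order v squared for velocities small compared with the speed of light.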
Another reminder about classical mechanics is to go back and read the classical mechanics book, where you will find things like the principle of least action, which we will use tonight, Lagrangians, canonical momenta, and all that stuff, and we're going to use them. So very quickly review in your head now the previous quarter of classical mechanics. The important idea about the principle of least action was the idea of a trajectory in spacetime. Not just a trajectory in space, but a trajectory in spacetime, that if you plot position of a system, it could stand, x could stand for a one-dimensional coordinate, it could stand for a three-dimensional coordinate. In fact, it could even stand for the coordinates of a large number of particles, and we can just call it space, or coordinate space. Vertically, the plot time, and the trajectory of a system between two instance is a curve that goes from one point to another curve, and we can call it a world line. If it's a particle, if it's a single particle, we call it a world line. We call it the world line of the particle. Now this concept is just as good a concept in non-relativistic physics as it is in relativistic physics, but it takes a certain primacy in relativity because of the connection between space and time, that space and time can morph into each other by Lorentz transformation. We tend to focus more on the idea of spacetime. The trajectory of a particle is time-like. Now what do I mean by time-like? Let's define the notion of time-like. Begin with just two points. Begin with any pair of points in spacetime, and think of the separation of the coordinates, the difference between the coordinates of the two points, the difference of the coordinates, now I'm speaking about space and time coordinates, we can call them delta t and delta x. I discussed for a couple of times already the notion of the invariant proper time between the two points, and we wrote that as delta t squared minus delta x squared is equal to delta tau squared, where delta tau itself, the square root of this quantity, is the proper time connecting those two points. Now this quantity which I've written delta tau squared can clearly be positive or negative. It is positive if the separation between these points is such that the t interval is larger than the spatial distance between them. In other words, if the interval is like so, if the spatial distance is less than the time distance, in that case delta tau squared is positive, and the interval between these two points is called time-like. It's got more time in it than it's got space in it. When an interval is time-like, you can always find a Lorentz frame of reference. You can always transform to a frame of reference where the two points are at the same position where it hasn't moved. In fact, you simply go to the frame of reference where this line is at rest. There always exists such a frame of reference. And so if two points in space-time are time-like separated relative to each other, you can always move in such a way that they wind up being at the same point in space, that they haven't moved relative to each other. Okay, that's one case. The other case is where delta x is bigger than delta t. In that case, the separation has more space interval than time interval, like this, let's say. It's at less than 45 degrees relative to the horizontal. 
Incidentally, I should say right now, when I'm thinking about four-dimensional space-time, three coordinates of space, one coordinate of time, what delta x stands for is the three components of the vector delta x, and delta x squared means the sums of the squares of those three components, delta x squared plus delta y squared plus delta z squared. I suppose we could if we liked, if we wanted to be fancy, we would put a little arrow over here to indicate that it's a three vector. The delta x is a three vector. All right, now if the three vector delta x is bigger than delta t, then the interval becomes space-like. Space-like intervals are, as I said, are more space than they are time, and in this case, you cannot find a reference frame in which the two points are at the same position in space. What you can do is you can find a reference frame in which they are at the same time. You can always find, so that means that there's no invariant significance to one end being later than the other. You can always find the frame of reference in which the two points are at exactly the same time, and even worse, I've drawn this so that point B, let's call it B and A, point B appears to be later than point A, but there exists a reference frame. This you can work out, it's very easy to work out. You can find a reference frame in which point B is earlier than point A. So there's no invariant significance for points which are separated by space-like interval. There is no invariant significance to which one occurred first, one being later than the other, or being simultaneous. It's a relativity of simultaneity. Okay. Particles move on time-like trajectories. Every where is along the motion of a particle. If we break the trajectory up into little intervals, every little piece of that trajectory is time-like. That's just a statement that particles can't, nothing can move faster than the speed of light. You've already seen how frustrating it can be to try to get something to move faster than the speed of light. You try to compile up a whole bunch of velocities, each of which is 9 tenths the speed of light. A stationary observer shoots out something with 9 tenths the speed of light. The moving thing shoots out something else with velocity 9 tenths the speed of light. The something else shoots out a third thing, 9 tenths the speed of light. You never get anywhere. It's like Zeno's paradox. We're closer to the speed of light without ever getting there. And so it seems like a reasonable hypothesis that nothing ever goes faster than the speed of light. And that's a good thing because it would get very confusing if we said a particle goes from A to B, but we can't make sense out of which one is earlier, the starting point of the particle or the endpoint of the particle. We get us very confused. We get into all sorts of questions of time travel and so forth, but the physical fact is nothing, including a photon, can move faster than the speed of light. And I would generalize that and say that information, information is carried by physical systems, signals. Signals that signal information from one place to another can also not propagate faster than the speed of light. We can take that as a postulate, but if it were otherwise, we'd get into some very paradoxical and inconsistent situations. All right, so that's the notion of time-like and space-like, the notion of the world line of a system, and the world line of a system is made up out of small time-like intervals. Now let's focus on one of those time-like intervals. 
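Summarizing the classification just set up, in units where c = 1 and with delta x standing for the three-vector separation,

\[
\Delta\tau^{2} \;=\; \Delta t^{2}-\Delta\vec{x}^{\,2}
\qquad\Longrightarrow\qquad
\begin{cases}
\Delta\tau^{2} > 0, & \text{time-like: a frame exists where the two events are at the same place},\\[4pt]
\Delta\tau^{2} < 0, & \text{space-like: a frame exists where the two events are simultaneous},
\end{cases}
\]

and particle world lines are built entirely out of small time-like intervals.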
Let's redraw it over here. Here's one of the time-like intervals. Imagine in your head that it's a small interval, because eventually we're going to let that interval get smaller and smaller. I don't think we talked about, did we talk about relativistic four-velocity? We didn't do it. We didn't do it in any detail. All right, so along the trajectory, the particle moves from one point to another point with a separation delta x, but delta x is a four-dimensional delta x. It consists of a delta t and a delta of the three other coordinates, which I'll of course call delta with an arrow above them. That's delta x mu. Sometimes I will call these spatial delta x's. Sometimes we'll call them delta x i. When you see an i, it just runs over space. When you see a mu, it runs over all four coordinates of space. This is an interval along the trajectory here. It has four components to it, and it defines a delta tau between the two endpoints of the interval. Delta tau is equal to the square root of delta t squared minus delta x squared. Incidentally, just once or twice every so often, let's put it in the speed of light just to remember where it goes, t has units of time, x has units of space. To make this thing, and since this is called proper time, let's assume that it has units of time. This one's okay. It has the same unit as the left-hand side, but this one is not okay. It has units of length. This one has units of time. Where do we put the speed of light? Divide by c squared. I'll do it now and then every so often. Do it once more over here. Then stop doing it because it gets annoying. Now we have the concept of velocity. Now, basic ordinary velocity is the derivative of x with respect to t. But there's a notion of four-dimensional velocity. It is a four vector. It transforms as a four vector. Delta x mu transforms as a four vector. Delta tau is an invariant. It doesn't transform every observer, no matter how they're moving, ascribes the same delta tau, different delta x mu's to this interval. The velocity, ordinary velocity, ordinary velocity, let's call it v with a vector here. The ordinary velocity is, its components are simply the xi by dt or the limits. In the limit that the delta x's get small, it just becomes the limit of delta xi divided by delta t. That's a three-dimensional notion of velocity. There is, and of course, this also has, this should be written this way, or v i is equal to dx i by dt. All right, that's one notion of velocity. A more invariant relativistic notion of velocity is instead of using t down here, refer the separation of coordinates to the proper time, instead of to the coordinate, instead of the frame-dependent ordinary time. That gives rise to something called four-velocity. It doesn't have three components. It has four components. So it has a mu up here. The mu stands for the four components. Incidentally, mu runs from zero to three. Zero is the time component. This is equal to dx mu by not the tau, but not the t, but the tau. So there are four of them, and they constitute what is called the four-velocity, the four-dimensional analog of the velocity. First question is, what is the connection between the four-velocity and the ordinary velocity? So let's work that out. Nonrelativistically, of course, there are only three components of the velocity, so that must mean something funny happens to one of the components here, and we'll see what happens. All right, let's start with dx sub i by dt. Let's start with v sub i. 
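As a reference for the computation that follows, here are the two velocities side by side as just defined; the placement of the factor of c follows the remark about dividing by c squared, and afterwards c is set back to 1:

\[
v^{i} \;=\; \frac{dx^{i}}{dt},
\qquad
u^{\mu} \;=\; \frac{dx^{\mu}}{d\tau},
\qquad
d\tau^{2} \;=\; dt^{2}-\frac{d\vec{x}^{\,2}}{c^{2}}.
\]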
That's equal to dx i by dt, but let's write it as dx i by dtau, dtau divided by dt. That's a triviality. I've just divided and multiplied by dtau, but it's useful. Sorry, yes, this is correct. All right, now, dx i by dtau, what is that? That's u sub i. u sub i, where now I'm concentrating in particular on the space components of u, times dtau by dt. We'll come back to dtau by dt in a moment. In fact, we're going to come back to it right now, but let's consider the... Yeah, before we go ahead, let's consider exactly this quantity u naught. What is u naught? u naught is the fourth component of this relativistic velocity, and what it is is it's dt by dtau. That's exactly what it is. dt by dtau, the fourth component or the zeroth component, I should say, of x is just t, where is it? t, and u naught is just dt by dtau. Dtau by dt is just the inverse of this. So we're interested in figuring out what dtau by dt is. Let's figure out what dtau by dt is. Dtau is the square root of one... Nope, it's the square root of dt squared minus dx squared. It's just rewriting this thing here, but with differential notation. dt squared minus dx squared, and this means all three components, the sums of the squares of all three components. Let's take dt out of the square root. Let's take it outside the square root, and let's factor it off this stuff, and then we get in here 1 minus dx squared by dt squared, or just dt times the square root of 1 minus v squared. So what do we know? We know that dtau by dt is equal to the square root of 1 minus v squared. That's what this square root of 1 minus v squared is. It's the rate of change of proper time relative to ordinary time. If v is close, if v is small by comparison with the speed of light, if it's close to zero, then this is close to one. In other words, in the non-relativistic limit, dt by dtau is just one. There's no difference between tau and t. If the object is moving with close to the speed of light, there can be a large difference. Let's see, where were we? Ah. All right, so what we want from here is this equation over here. I've crowded too many equations together. I was working on this. Here's where I was working. I'm making a little bit of a mess out of this, the mess of the blackboard in any case. All right, first of all, this is equal to dt by dtau. What is dt by dtau? Dtau by dt is the square root of 1 minus v squared. What is dt by dtau? It's 1 over that, right? So dt by dtau, whatever it is, is 1 divided by the square root of 1 minus v squared. It's a thing which non-relativistically just goes to 1. In the non-relativistic limit where v is much smaller than the speed of light, this is just 1. That's why we never think about it. We never think about dt by dtau because tau is t and dtau by dt is just 1. Good, all right. On the other hand, relativistically we worry about it. What about here? v sub i is u sub i dtau by dt. Dtau by dt is again just, which way is it? Dtau by dt is the square root of 1 minus v squared. Divided by, let's see. Yeah. v sub i is dx by dtau times dtau by dt. Looks like it's multiplied. Does that surprise me? No, it doesn't surprise me. That's correct. That's correct. So now we can assemble together what is the meaning of the four velocity in terms of the normal velocity. First of all, u naught, the time component of the four velocity, is just 1 over the square root of 1 minus v squared. What about the space component?
u sub i, that we read off from here, that's equal to v sub i divided by that same square root of 1 minus v squared. So this is the basic computation you would do to find the components of four velocity. Again, if we're talking about the nonrelativistic limit where v is very close to zero, then the square root is trivial, and u sub i and v sub i, the two notions of velocity are the same. As you get up closer and closer to the speed of light, it looks like u sub i is much bigger than v sub i. Is that right? Yeah, u sub i is much bigger than v sub i. Well, v sub i is much smaller than u sub i. Okay, this is the notion of four velocity. This is the notion of four velocity. And everywhere along the trajectory of a particle, the particle is characterized by a position. You can add in the time at which you're characterizing it. So that's a four vector of position and also a four vector of velocity. Now, the four vector of velocity doesn't really have four independent components. It only has three independent components. Let's go through why that's true. Let's take the... Okay, I'll explain very quickly why it doesn't have four independent components. There is a relationship between the three components of spatial velocity of u sub i and u sub naught. And it's just a relationship that u sub i, let's call it u squared, which happens to be v squared over v squared over 1 minus v squared. That's the space component squared, is v squared divided by 1 minus v squared. If I... It seems like you can just see it from the definition of u zero. Yes, we can. Yes, we can, but I'm halfway through it, so let's finish it. Let's take u naught squared minus this thing here. I'd select it, so minus sign here plus 1 over squared of 1 minus v squared. I'm making this more complicated than it is. The answer is that the right... Not squared, just 1 minus v squared. Ah, boy. What's the right hand side? 1. 1 minus v squared over 1 minus v squared. 1. So all four velocities, no matter how fast the particle is moving or not moving, the square of the... The square, or the difference of the squares of components, time component and space component, is always equal to 1. It's always equal to 1, and that's why there are not four independent components of the four velocity. Only three independent components of the four velocity. They're related by a single relationship like this. Otherwise, they're independent. Can you also say that you can conceptualize it by saying the particular differential in the respect to one of the components? Say it again? Can you also... Can you alternatively conceptualize it independently by saying that it's the particular differential in the respect to one of the components? Yeah, yeah, sure. There's an easy way to... I said this badly. Let's go through a very simple version of it. Delta tau squared is equal to delta t squared minus delta x squared. Okay? Boy, did I make a mess over there. Let's just divide both sides of this by delta tau squared. We get 1 is equal to delta t squared over delta tau squared minus delta x squared over delta tau squared. Delta tau, delta t by delta tau, that's u naught. So the first term here is just u naught squared, and the other one is u vector squared. Okay? So that's all that went on here. A complicated way to say this. But let's go back now to the world line. It is a world line that's characterized by a 4-vector and a 4-vector, only one redundancy, and that's that the four components of u are not completely independent. 
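Collecting the results of this computation before moving on (units with c = 1, spatial index i = 1, 2, 3):

\[
\frac{d\tau}{dt} = \sqrt{1-v^{2}},
\qquad
u^{0} = \frac{dt}{d\tau} = \frac{1}{\sqrt{1-v^{2}}},
\qquad
u^{i} = \frac{v^{i}}{\sqrt{1-v^{2}}},
\qquad
\big(u^{0}\big)^{2}-\vec{u}^{\,2} = 1 .
\]

The last relation is the single constraint that leaves only three independent components of the four velocity.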
We want to know what law governs the motion. Let's take a free particle, one with no forces on it. What law governs its motion? And the law that we're going to abstract from a quarter's worth of classical mechanics is the principle of least action. The principle of least action, the only thing new is that the action is going to have, we want our laws of physics to be the same in every reference frame. That means we should try to cast them in terms of quantities which are the same in every reference frame. If we have a principle that says that in going from here to here, the particle goes along a trajectory which minimizes or maximizes, minimizes some quantity called the action, we are doing good business if we make sure that that action doesn't depend on which frame of reference it's evaluated in. We would like the action to be something which is the same in every reference frame. We know of only one, and that's one fact, the action should be invariant. And the other thing that we've learned from experience is the action should be built up incrementally as a sum over the trajectory of little infinitesimal pieces for each segment of the trajectory. That we can abstract from classical mechanics, action being a sum or an integral better yet, an integral over the trajectory, and the thing of which it is an integral should be a quantity which is the same in every reference frame. There's really only one thing that is invariant in going from a small, from a position to a displaced position and it is the proper time along the trajectory. The proper time from here to here is an invariant, meaning to say that all observers, no matter what velocity they're moving with, will not agree on the delta X's, but they will agree on the delta tau. So a good guess, and it's the right guess, the right guess for the action is the sum, we'll convert that to an integral in a moment, of all the little delta tau's from one end of the trajectory to another end of the trajectory. All the delta tau's, that is the action. Not quite, not quite. And that action has to be minimized holding the endpoints fixed, remember your classical mechanics, the action principle you hold the initial and the final configuration fixed, and you search for the trajectory that minimizes or makes stationary the action between the two endpoints of the trajectory. So you wiggle around the path until you find the path of least action. Since the action is composed out of invariance, everybody will agree that a given path minimizes the action. So that's our principle, add them up. So what does that mean? That means we have a freedom, incidentally. We have a freedom, we could multiply the action by a number, any number, it won't make any difference. Why not? Because if a thing is stationary with respect to changes of the trajectory, if I multiply it by 10 or 7 or minus 15, it will still be stationary. Because if I multiply it by a negative number, I will have, I will turn a maximum to a minimum, but I won't change the fact that that trajectory makes the action stationary. Yeah. What happens to energy? Why are you getting ahead of the game? We start with action. Energy we derive, we will do that. We will do that. Before the evening is over, we will discuss why we didn't start with energy. But the answer is it's not invariant. It's not even invariant in ordinary Newtonian physics. The energy of a particle is not independent of its state of motion. 
So if you see a particle standing still, and I see the particle moving, we ascribe a different kinetic energy to it. So it's not an invariant concept. It's a good idea to start our discussion of particle motion by talking about invariance that everybody agrees upon. And in that way, make sure that our laws of physics are the same in every reference frame. Okay, so yeah. Could you state again why a relativistic invariant is a good choice for action or a good guess for action? It's a good guess simply because it's the same for every observer, no matter how they're moving. That's what proper time was. Proper time was the particular combination of delta X's and delta T's between two points that everybody will agree on the value of it. It's the thing which is invariant on the Lorentz transformations. If we want our laws of physics to be invariant, it would be a good idea to base them on quantities which all observers have the same value for. So for example, if I just say, you know, it's analogous to the following. This is entirely analogous to the following idea. Supposing I am interested in the shortest distance between two points on the blackboard, what do I do? If I'm interested in the shortest distance between two points, obviously I search for the shortest distance, but I search for the curve which has the shortest distance between the two points. Now, might we find that we get different answers in different coordinate systems, that coordinate system or that coordinate system or whatever? No, of course we won't. And the reason is distance in the Euclidean plane is invariant with respect to rotations of coordinates. In particular, if we think of the distance from one point to another along the curve to be a sum of a lot of little incremental distances and everybody agrees on the incremental distances, then the curve that we get will not depend on the coordinate system that we use to evaluate it. Well, it's an invariant concept, the shortest distance between two points. What we do then is we add up all the little Euclidean distances here, and we're doing exactly the same kind of thing, adding up all the little proper times along the trajectory. In fact, the formulas are almost identical, apart from a sign. All right, now we have a freedom, we could multiply this action by a number. Yeah? You may have said this, but I don't remember. We arrived at proper time at the delt p squared minus delt x squared. Square root of that, square root of that. All squared. Was there a reason, is there a reason why the invariant choose the positive square root and the negative square root? Oh, we will always choose the positive square root. Yeah, proper time is by its very definition, a positive quantity. It's the time read along the clock from the past to the future. And we could use the negative root. If we used the negative root everywhere, it would simply change the sign of the action. That's okay. What would happen if you change the sign of the action? It would not change the equations of motion at all. Instead of saying we were looking for the minimum of the action, we might say we were looking for the maximum of the action. But remember, to minimize a function or to maximize it, exactly the same equation, you just say it's stationary. So it doesn't matter whether you take the action and multiply it by a number, including the possibility of a negative number. It won't change the trajectories. If the rule is the trajectories should make stationary the action. 
Stationary means it could be a minimum or a maximum. And it's also if you had a coordinate system in which the x-squared didn't change, then if you chose the positive square root, then the proper time would be the same as the time. Yes, that's true. That is true. All right, and I'm going to put in one more little change. I'm going to put a minus sign in here. Now, as I said, multiplying by minus m changes nothing. It's a convention. It's a complete convention, but the convention was established not by Einstein when he was doing relativistic mechanics, but long before Einstein by people who had no idea what the special theory of relativity was, but who had some prejudices about the way classical Newtonian mechanics work. So we're going to come back why this minus m is there. We'll come back. It's for the purpose of matching on to non-relativistic formulas. It would not change anything if we didn't put it there, but in order to match with non-relativistic formulas, we'll put a minus m there for the time being. Okay, so let's take that to be the action. And now what we do is we've, the interval, the trajectory has already been imagined to be broken up into intervals. And now we move the points around in space and time and look for the trajectory which minimizes, or I'll say minimize, which minimizes the action, and that will be the true trajectory. That's the principle of the least action. But before we do that, let's convert this to an integral. Let's convert this by taking the limit in which the differential and which these finite differences here are replaced by differentials. So this will become then minus m times the integral of the tau along the trajectory. The integral, and I'll explain exactly what this means in a moment, the integral is sum of little infinitesimal details from one place to another. There's a reason why the integral is written by this curly symbol here. And what the curly symbol means? Sum. All right? And delta tau just becomes d tau from one end of the trajectory to the other, from the initial end to the other end. And now we can write down a formula d tau is equal to the square root of 1 minus, no, dt squared minus dx squared. And dx squared means dx squared plus dy squared plus dz squared. Now this is a funny thing, an integral with a d tau and underneath a thing and a square root there. I bet you've never seen an integral like that. Maybe you have where the differential things are underneath the square root. It's easy to fix, it's easy to make it look more standard. Factor out a dt. Factor out a dt and put the dt on the outside. That means under the square root we have to divide by dt squared and this just becomes 1. And what about dx by dt? That's the good old fashioned velocity squared. So this becomes 1 minus velocity squared or 1 minus xi dot squared. Xi dot squared means x dot squared plus y dot squared plus z dot squared. All right? I won't bother writing the sum and I won't bother writing x, y and z. It's just x sub i dot and dot now means derivative with respect to ordinary time. So by factoring out a dt, I converted this to something more recognizable than it might have been before. It's an integral of something that depends on the velocity. Remember about the ordinary principle of least action. In the ordinary principle of least action for an element, for a particle, for example, the action is an integral over a quantity called what? The Lagrangian. And the Lagrangian depends on what? Positions and velocities. 
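In symbols, the action arrived at here, for a trajectory running between endpoints A and B, is

\[
S \;=\; -\,m\int_{A}^{B} d\tau
   \;=\; -\,m\int_{t_{A}}^{t_{B}} dt\,\sqrt{1-\dot{x}^{2}-\dot{y}^{2}-\dot{z}^{2}},
\]

so the quantity multiplying dt under the integral is the Lagrangian.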
What about for the case where there are no forces? Then it only depends on the velocity, not the positions. And here's what we got. We have something like that here. We have that the action is minus m times the integral of 1 minus the square of the velocity from the initial starting point, it's called A to B. So we have a Lagrangian now. And now we can just take off and do all the things that we would do with a Lagrangian. That's right what the Lagrangian is. The Lagrangian is minus m, which I say is a convention, minus m over here, times the square root of 1 minus, now I'll write it, x dot square plus y dot squared plus z dot squared. It's not as simple as the usual Lagrangian that Lagrange first wrote down. The Lagrangian that Lagrange, what did the Lagrange write down? He just wrote down for the Lagrangian the kinetic energy. He didn't even know, I don't even know if he knew it was energy. He wrote down for the Lagrangian just 1 half m x dot squared. That's nice and simple. Lagrangian of Lagrange, El Sabell, that was just m x dot squared over 2, 1 half m v squared, where x dot squared does mean x dot squared plus y dot squared plus z dot squared. This is the Lagrangian Lagrange would have written down. This is the Lagrangian that a relativistic version of Lagrange would have written down. What's the connection between them? Well, the first connection between them and the most important for our purposes right now. Oh, before I say that, what is the value of this Lagrangian? The value of this Lagrangian is that it encapsulates the motion of a free particle in a way that is independent or that is the same for every reference frame. It is guaranteed because this quantity of action is independent of reference frame, making it stationary, you'll get the same answer in every frame. So this is the translation of that to Lagrangian language. Okay, first let's ask what this is approximately like, not approximately, but what it's like in the limit where the velocities are much smaller than 1, 1 being the speed of light. So this is 1 and minus m square root of 1 minus v squared, velocity squared. For that we go back to the good old binomial theorem. The binomial theorem tells us that the squared of 1 minus v squared is approximately equal, this is approximately equal to minus m times 1 plus v squared over 2. So first of all, there is a contribution which is just minus m. It doesn't depend on anything. It doesn't depend on position, it doesn't depend on velocity, it's just a number. What does adding a number to the Lagrangian do? Nothing. Adding a number to the Lagrangian does nothing because the action of the trajectory, comparing different trajectories, the constant piece here doesn't depend on the trajectory, it doesn't depend on the x, it doesn't depend on the velocity, it plays no role at all, so it's not important to us. The important piece, oh, do I have, I'm sorry, minus, right, 1 minus v squared over 2. All right, so apart from the constant piece, the piece with the juice in it, the piece with juice in it is plus mv squared over 2, minus m plus mv squared over 2. In other words, it's just the Lagrangian of Lagrange. In this way we can be certain that the laws of motion that we derive in this way will in the limit of small velocity be the same as good old Laplace, Lagrangian, Newton, and so forth. So far we haven't put in anything that resembles a potential energy or a force law, so far we're simply working with a free particle. So that's a good thing. 
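The free-particle Lagrangian just identified, and its small-velocity expansion using the earlier binomial approximation, are

\[
L \;=\; -\,m\sqrt{1-\dot{x}^{2}-\dot{y}^{2}-\dot{z}^{2}}
\;\;\approx\;\; -\,m \;+\; \frac{1}{2}\,m\,v^{2},
\]

where the constant minus m drops out of the equations of motion and the remaining piece is the Newtonian kinetic-energy Lagrangian.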
Now, next concept. I'm not going to work out the equations of motion. But I'll tell you right now, the equations of motion have the brilliant ability to simply make this particle move on a straight line with constant velocity. We'll see that, but the equations of motion are not the interesting things for this evening's lecture. The interesting thing is to identify the notions of momentum and energy. That's where we want to go. We want to understand where the relativistic formulas for momentum and energy come from. They're not quite the same as the non-relativistic formulas. So let's go through them. Momentum and energy are important because they're conserved. If we account for all the particles in the universe, or at least all the particles in a closed system, maybe interacting with forces and this and that, the net momentum is conserved. What was that a consequence of? Do you remember what that's a consequence of? Translation invariance. So translation invariance, and translation invariance is true here. We're actually in a special case of Lagrangian mechanics. Everything we said about Lagrangian mechanics, Lagrangians, Hamiltonians, conservation laws, all applies here. It's just that the Lagrangian is a little bit strange. It has a square root of 1 minus the quantity x dot squared plus y dot squared plus z dot squared. But other than that, the rules of the game are the rules of Lagrangian, Hamiltonian, whatever we derived previously, and we can use it. All right, so the first question then is what is the meaning of momentum? It will be conserved. It will be conserved because this Lagrangian is translation invariant. It only depends on the velocities. It does not depend on position. It's translation invariant. The momentum will be conserved. That's a consequence of everything we learned the first quarter. Okay, so I would like to identify, for example, p sub x, the x component of momentum. So we go back, we open up our book on classical mechanics, and we go back to the definition of momentum, which began by saying momentum is equal to mv, but the definition which came from the Lagrangian point of view, and that was, anybody remember the definition of momentum? In terms of the Lagrangian, anybody have it at their fingertips? It was partial of L by partial q dot. The derivative of the Lagrangian with respect to the component of velocity. The derivative of the Lagrangian with respect to x dot. I won't bother redoing classical mechanics in front of your eyes here. It's a little bit too late for that. At this point, the thing to do between now and next week is to review this. I should have told you last week. p sub x is partial of L with respect to x dot. Same thing for p sub y, p sub z, and so forth. So let's apply that. The Lagrangian is minus m times the square root of this thing here. All right, so this is going to be, first of all, a minus m. Then there's a derivative of this square root with respect to x dot. All right, what do you get when you take the derivative of a square root? You get one over a square root. One over a square root of everything inside the square root, which happens to be one minus v squared. Yes, there's a half. Thank you. There's a half there. Now we have to take the derivative of the thing inside the square root with respect to x dot. That's going to give us a minus from here. So there's another minus sign, plus, and then the derivative of this thing with respect to x dot, which is twice x dot. That will eat up the two in the denominator, and it will put an x dot here.
Or m v sub x divided by the square root of one minus the total velocity squared. Well, we've seen this before. But, right here, v sub i, in this case v sub x, divided by square root of one minus v squared, is just u sub x. So what do we get? We get p sub x. Here it is, p sub x. There's nothing but the mass, not times the ordinary velocity, but by the ordinary velocity divided by square root of one minus v squared, which is u sub x. The relativistic for velocity times the mass is equal to the x component of momentum. Likewise, for the y component and so forth and so on, the y component, p sub y, is equal to m u sub y. And likewise for z. M is rest mass. M is the rest mass. M is mass. Okay, let's talk about a convention, the use of the term mass, rest mass, and all that stuff. There's an old convention and a new convention. A new convention is older than I am. More or less. Nobody that I know who does physics uses the term rest mass anymore. Rest mass is an anachronism. That's the word anachronism, in other words, right. It's an anachronism. The only place where it's ever used is in undergraduate textbooks. Undergraduate textbooks cannot seem to get their head around the idea that the idea that the mass of a particle is a tag that goes with a particle which characterizes the particle and not the motion of the particle. The mass of an electron, if you look up mass of an electron, you won't get something that's different if the electron is moving the stationary. You will simply get whatever it happens to be. I don't remember the mass of the electron in kilograms, but that's some small number. So the way that mass is used in modern language is it's a number that goes with the particle. But it is exactly what used to be called the rest mass. What used to be called the rest mass is now just called mass. What used to be called mass is what? Energy. Energy divided by the speed of light squared or something. We use the term energy for the thing which characterizes the moving object and the energy at rest we just call the mass. Now there's the speed of light factor, that's just a conversion factor and we'll come to it. So when we talk about the mass of a particle, we mean what used to be called the rest mass. From now on I will never use the term rest mass, I will always use the term mass meaning the energy at rest. Okay, good. That was an important distinction. I normally don't even think in the language of rest mass, so I didn't mention it, but it was good that Warren brought it up so we don't get confused. Okay, now let's talk for a moment about four vectors. Four vectors are the things like delta x and delta t. Supposing I have a four vector, in this case it would simply represent a little interval. That's the four vector of displacement, I guess you would call it, the four vector of displacement that has a delta t and a delta x. Now supposing I told you that delta x as opposed to delta t, that delta x was equal to zero, could that be an invariant characterization of this interval? What happens to delta x when your Lorentz transform? It changes, right? In particular, it will pick up a little bit of delta t. So it cannot be an invariant description of this vector. A vector with delta x equals zero would be a vertical vector. But in some other frame of reference that were moving past the stationary axis, the same vector would be tilted. In other words, if you were moving past it, the beginning and the end point would be separated by some delta x. 
So the statement that delta x is equal to zero, that might have some significance to a particular frame of reference, but it can't possibly be an invariant distinction. What about the statement that delta x and delta t are both equal to zero, or all four of them, not just delta x and delta t, but delta y and delta z? Is that an invariant statement? All components of this four-dimensional vector are equal to zero. Well, if you think about the way things transform, a delta x, when it transforms, will pick up a little bit of delta t. A little delta t will pick up a little bit of delta x. But if delta t and delta x is zero in one reference frame, then it will be zero in every reference frame. So saying that a vector, a four vector in particular, is zero, the whole thing, all components of it, that's an invariant statement. To say some specific component, or even a specific subset of components, less than all of them, is equal to zero, that is not an invariant statement. The same is true of the four velocity. The four velocity here are its components. Is it an invariant statement to say that u sub i is equal to zero, i meaning just the space components? No, it isn't, because what it means to say that u sub i is equal to zero is just that you're in a frame of reference where the particle happens to be stationary. Change frame of reference, u sub i will not be zero. Okay? So in general, if you want to express invariant statements, you want those statements to be about all components of a four vector. A four vector means a thing like delta x or u. You want it to be a statement which is true for all the components. You want it to be a full-fledged four-vector statement. Let's come to the concept of momentum conservation. Momentum conservation. We so far identified three components of momentum. Only three components. Three components of momentum happen to be proportional to the components of four velocity. They happen to become proportional to the components of four velocity, but that's incidental for a moment. The important thing is that momentum is conserved. If I have a collection of particles and we add them all up, the initial momentum must be equal to the final momentum. Now that's a vector statement. That's a vector statement that the three components of momentum, and let's label them this way, initial must equal the total initial momentum, must equal the total final momentum, or we can subtract and just write the law of momentum conservation says that this is equal to zero. Initial momentum minus final momentum is equal to zero. This in itself is a vector equation. Is it an invariant vector equation? If it's true in one frame, is it necessarily true in another frame? Not unless we can make an equivalent assertion about some fourth component. If there is a fourth component that turns the momentum into a four vector, and then we say all four components are conserved, then it becomes an invariant statement. Then it becomes an invariant statement. That's only an invariant statement if we can identify a fourth component and say that the fourth component is also conserved. A fourth component such that the three components together with the fourth component add up to a four vector. Well, it's pretty darn clear what the fourth component of this thing has to be, huh? What must the fourth component be? It must be m times u0. 
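Putting the pieces together in symbols: the three canonical momenta computed from this Lagrangian, the proposed fourth component, and the conservation statement that only makes invariant sense for all four components at once, are

\[
p^{i} \;=\; \frac{\partial L}{\partial \dot{x}^{i}} \;=\; \frac{m\,v^{i}}{\sqrt{1-v^{2}}} \;=\; m\,u^{i},
\qquad
p^{0} \;=\; m\,u^{0},
\qquad
\sum p^{\mu}_{\text{initial}} \;=\; \sum p^{\mu}_{\text{final}} .
\]

The index placement here (upper versus lower) is my own choice of notation, not something the blackboard discussion fixes.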
We have three components, if we didn't have the m there, we would have three components of the four velocity, and the fourth component of the fourth velocity would just be the thing which fills out the full four vector. So, question. Is there a significance to m u sub naught, whatever it is, we can give it a name. We can call it p sub naught. We can give it a name. Is there a significance to it, and is it something that we already recognize as something which ought to be conserved? And of course the answer is yes. There is a fourth thing which is conserved for particle mechanics. It's energy. So it's natural to ask the question, is this the energy, is this fourth component the four momentum, nothing but the energy of the particle? Well, how do we decide that? How do we decide what the energy is? Anybody got a clue about how to decide what the energy is? Ah, look at the Hamiltonian. Right. And that's the only way. That's the only systematic procedure. Look at the Hamiltonian. So, the next step is to figure out the Hamiltonian. We're going to erase some of the blackboard here and go to the Hamiltonian. In a sense, from the point of view of relativity, all the work that we did in the same quarter was just to set ourselves up for this kind of problem. So let's go to the Hamiltonian. I know that not all of us remember what the Hamiltonian is, so I will remind you. The Hamiltonian is given in terms of Lagrangian. I don't expect you to remember it, but I do expect you to be able to look it up and remind yourself where it came from, why this was an important quantity. It's a conserved quantity, it's an important quantity. I will write it down for you. It's the sum of all the coordinates, sum of all the coordinates of a system of the velocity associated with that coordinate. I sometimes put the indices upstairs and sometimes I put them downstairs, pay no attention to where I put them at the moment. And what does it multiply? Remember? The momentum associated with that same direction. I'll put that one upstairs just for variety. But that's not finished. What else is there? Minus? Minus the Lagrangian. That's the Hamiltonian. As I said, if you want a systematic way to think about mechanics, we don't just make up stuff as we go along, we fall back on the basic principles of mechanics, all of Lagrange, Hamilton, and so forth. Okay, so now we can write it down. We now know what it is, let me write the Lagrangian over here and the P's. What is P sub i? P sub i is equal to Mx sub i dot divided by square root of 1 minus V squared. That is the same as Mu sub i, but I'm not going to use it that way, I'm just going to use it the way it is here. P sub i is that. And the Lagrangian is minus M square root of 1 minus V squared, where that means Vx squared plus Vy squared plus Vc squared. That's the Lagrangian. We have everything we need now to calculate the Hamiltonian or what is essentially the energy. Okay, so we have then summation over i. X sub i dot times P sub i, that's Mx sub i dot squared over, I think, squared of 1 minus V squared, is that right? That's the first term here. The X sub i dot squared picks up 1 X sub i dot from here, 1 X sub i dot from here, the mass over here. And now minus the Lagrangian, so minus the Lagrangian is plus M square root of 1 minus V squared. Looks like a bit of a mess, but it is not. It is simple. First of all, X sub i dot squared, that's just the total velocity squared. X dot squared plus Y dot squared plus Z dot squared, that's what I call V squared. 
So the first term here is just not even a sum anymore, it's just MV squared over square root of 1 minus V squared. Then we have to add plus M times the square root of the numerator. How are we going to add a thing with a square root of the numerator to a thing with a square root of the denominator? Well, we'll put everything in the denominator. So we have to multiply and divide by square root of 1 minus V squared, that gives us a numerator, 1 minus V squared, and then square root of 1 minus V squared in the denominator. There we are. Now it doesn't look so fearsome anymore because we notice immediately that MV squared over here cancels MV squared over here, and the whole thing is M upon square root of 1 minus V squared. All right, that's the Hamiltonian. So let's add to the momentum a fourth component here, which, sorry, P naught, which is M. And do you recognize what this thing is? This is U naught. So indeed we find that, apart from a possible, yeah. No, no, no, no, no. This formula for the Hamiltonian is extremely general. All systems that we've ever studied, this is the formula. That was the general formula for any Lagrangian, any action principle, we derived this, the conservation of H in very, very general grounds. Depending on what you put in for the Lagrangian, it might be non-relativistic physics, it could be any number of things. Good. So now we're in business, the conservation of momentum becomes the conservation of four momentum, conservation of X momentum, Y momentum, Z momentum, and the conservation of energy. So now we go on, we should figure out what this new concept of energy has to do with the old concept of energy. How different is it than the energy that we've already established? Okay, so let's... The one thing that we're going to have to be careful about is the factors of speed of light. They are conventions, largely. They're conventions, if the energy defined in one way is conserved, it won't make any difference to its conservation if we, in order to make it match with units, if we multiply it by the speed of light or something. Conservation is conservation, and we will probably have to do that. Okay, let's look at Mu naught, that is, M over the square root of 1 minus V squared. That's what we're calling the energy of the particle. Now, of course, if we want to restore some... Well, before we restore, yeah, before we restore units, let's use the... binomial theorem. Let's use the binomial theorem to approximate this. Here it is. Here's the binomial theorem for 1 divided by square root of 1 minus V squared, and it gives us 1 plus V squared over 2. All right, so this becomes M plus M V squared over 2. This, of course, is completely recognizable again. It is just the ordinary Newtonian kinetic energy. Had Newton known about energy, this would have been what he would have written down for kinetic energy. It is what kinetic wrote down. I have no idea who first thought of this as energy, but somebody did. What about this term over here? Well, first of all, this equation is not dimensionally consistent. They have an M and here you have an M times the velocity squared. To make it dimensionally consistent, all we have to do, we can do it in a number of ways, to make it dimensionally consistent, but to give it units of energy, we're thinking that it's the energy, to give it units of energy, this already has units of energy. It's just a good old kinetic energy. In order to give this term units of energy, we have to multiply it by some power of the speed of light. 
What power of the speed of light? Well, you already know this thing is missing two powers of a velocity compared to this one. So what's it going to be? It's going to be MC squared. All right, I will just write that this is the energy. So here we see, coming out of the basic principles of mechanics, the conservation of a quantity which, when it's evaluated for a system at rest, is just MC squared. For a system at rest, its energy is related to its inertial mass, to its mass, by MC squared. How do we know that this is the inertial mass? Well, it's the inertial mass because it enters into the energy the same old way that inertial mass entered into the old energy. It enters into momentum, at least for slow velocities, in the same way that the inertial mass entered in here. And so this must be the inertial mass times C squared. That's the origin of... Einstein had another way to think about it. He asked himself, in a collision of objects of various kinds, given... he worked a little bit differently. He didn't know about Noether's theorem. He had to work from the ground up, so to speak. He talked about collisions of objects, objects come together, break up and scatter and go flying off. And he said, look, I know that the momentum is conserved. What do I have to think of as the energy in order that the energy and momentum together would form a complex that would be conserved in any frame of reference? We've short-circuited that and found the formula for the energy by standard methods. Okay, so that's the origin of E equals MC squared. It doesn't apply only to a particle. It applies to any object that can be identified as a closed object. So we know its energy when the object is at rest. Now what does the object being at rest mean? Inside here, there are lots of molecules moving around like crazy. So in that sense, this object is not at rest. On the other hand, its center of mass is at rest, if I hold it at rest. What we mean by the velocity here is the velocity of the center of mass, and what's left over when you set the center of mass velocity to zero is the rest energy. The rest energy is related to the mass by a factor of the speed of light squared. Another way to say that is that the speed of light is just a conversion factor from one way of defining energy to another. It's just a conversion factor. I'm sorry, but one thing about the M C squared term. Couldn't you have had M C squared over ten, or M C squared times some other number? No, no, no, no, no, no. It has to reduce to this when C is one. Right, good point. Good point, that's right. On purely dimensional grounds, we never could have deduced that this would be one here. But on the grounds that it has to reduce to this when C is equal to one, there's only one thing we could do with it. So that one in the binomial expansion is important? Yes, absolutely. There are higher order terms missing, right? Yes, there are higher order terms, and they are simply very, very small when the velocity is small. Do we care about these things? Well, I mean, for some purposes we might care about them, but for slow velocities the leading piece is this. We could evaluate them, and we could see how small they are for a baseball moving at 90 miles an hour. I think they would be really, really small. So, is there a physical way to think of those higher order terms, corrections to the kinetic... Yeah, well, they're corrections to everything, everything gets corrections. Those corrections are quite significant when the object is moving close to the speed of light. In fact, I mean, they become huge.
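For reference, here is the expansion being discussed with the factors of c restored, carried to one more order (a sketch of the arithmetic only, using the same binomial theorem quoted above):

\[ E \;=\; \frac{m c^2}{\sqrt{1 - v^2/c^2}} \;=\; m c^2 \;+\; \tfrac{1}{2} m v^2 \;+\; \tfrac{3}{8}\,\frac{m v^4}{c^2} \;+\; \cdots \]

The correction term is smaller than the ordinary kinetic energy by a factor of order \(v^2/c^2\), which is why it is negligible for a baseball and enormous as v approaches c.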
Let's just think about it for a minute. If I were to plug in the speed of light here, well, this would get pretty darn big, but it wouldn't get any bigger than this. It's perfectly finite. On the other hand, if I plug it into this formula, V goes to 1, the kinetic energy of an object moving close to the speed of light becomes infinite. That's another way to see that you can never get up to the speed of light, because it would take an infinite energy to get there. Also an infinite momentum — the momentum is m times V divided by square root of 1 minus V squared. So, as you push the particle faster and faster, its momentum and its energy increase and explode, diverge, become infinite, when the velocity becomes the speed of light. That means it would take an infinite force to accelerate a particle to the speed of light, and infinite forces are simply not part of nature. I was wondering here, what is the thing that now becomes invariant once we also add on that fourth component? The conservation law. Right. Of the four momentum. Conservation of four momentum. That's exactly right. Okay. Yeah. Is there a real statement that the energy is determined by the other variables because of the square of the mass? Yes, we're going to come back to that in a minute. Right. That's just a statement that energy is determined in terms of momentum. If you have two particles with opposite momenta, is the mass of the combined system greater than the sum of the masses of the two particles? Typically, yes, yes, of course. If you wanted the mass of the combined system — by that I mean the energy of the combined system when the combined system is at rest. Now, what does it mean, the combined system is at rest? It means exactly what you said, that the two objects might be moving in opposite directions. Then the energy is definitely larger than the sum of their masses. In this convention, the rest mass is greater than... The rest mass... All right, now we have to be careful about definitions. Some things are simply purely definitions. Some people might react at this point, well, what I mean by the rest mass is simply the sums of the rest masses of the two particles. That is not a very good invariant concept at all. Another person might mean, no, I don't mean that. What I mean is, think of the thing as a unit and discuss its energy in the frame of reference where its total momentum is equal to zero. That would be what we would call the center of mass frame or the rest frame of the combined system. Now, since all ordinary systems are made up out of particles that are moving hither and thither all over the place, an invariant concept is the energy of a system in a reference frame where its momentum is zero. If we call that the mass, then the mass of the composite system, in your particular case, would be greater than the sums of the two masses. If there were forces, particularly attractive forces, it might be less than that, but that's another case. What does the mass term do when it's included in the phase of a wave packet? It gives an e to the i m c squared times time. So, good one. I have trouble forming this into a question, but the way we got this expression was a lot of abstract mathematics. Let's not call it abstract. Let's call it systematic. Whatever. I mean, the m c squared term is very significant. I mean, it describes things like the atomic bomb, for example. I mean, it's... Yeah, the thing that's conserved is the total energy, not the individual masses of particles. So, let's talk about an example.
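A worked version of the two-particle question just asked, using only the energy formula already on the board (c equal to one): two particles of mass m moving with equal and opposite velocities have total momentum zero, so that frame is already the rest frame of the composite, and

\[ M_{\text{composite}} \;=\; E_{\text{total}} \;=\; \frac{m}{\sqrt{1-v^2}} + \frac{m}{\sqrt{1-v^2}} \;=\; \frac{2m}{\sqrt{1-v^2}} \;\ge\; 2m, \]

with equality only when both particles are at rest — the composite mass exceeds the sum of the constituent masses, as stated above.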
Before I talk about the example, let me first take a little digression into massless particles. Massless particles are a little bit strange. Here's the energy over here. m divided by one minus v squared. Now, what's the velocity of a massless particle? One. Okay, we're in trouble. Well, maybe the trouble's not too bad. The numerator is zero and the denominator is zero. It doesn't tell us what the answer is, but at least it tells us at worst it's zero over zero. That's better than one over zero. Okay. Right? Maybe not a lot, but it's a little better than one over zero. All right. It doesn't seem like such a good idea to try to think of massless particles in terms of... the energy of massless particles in terms of their velocity, since all massless particles move with exactly the same velocity. Can they have different energy if they all move with the same velocity? Well, yes, they can. And the reason is because zero over zero is not determined. Thinking about massless particles is not good to think about them or to distinguish them by their velocity. All their velocities are the same. Okay? We have to think about it differently. The way to think about massless particles, and in fact, we can think about any particle this way, but to think about massless particles in particular, is to think of the energy as a function of the momentum instead of the velocity. We could even do that quite often in ordinary mechanics when we write the energy of a particle, kinetic energy. It is one-half mv squared. Okay, that's the only way of writing it. But another way to write it is momentum squared over twice the mass. This is, I'm talking now not about relativistic mechanics, just old-fashioned mechanics. We write the energy in terms of the... sorry, we write the energy in terms of the momentum. That's another way, an alternative way to express the energy. What happens if we do the same thing in relativity? We find the expression for the energy of a particle in terms of its momentum. Well, there's a very simple trick. The simple trick is just to use the fact that u naught squared minus u x squared. Let's add them all in. Minus u y squared minus u z squared. That's right, you said it. U of z squared is equal to one. We worked out that before. And now let's... I'm going to set c... For the moment, I'm going to set c equal to one again. c equal one. All right. If I multiply, let's see, the... Yeah. The components of momentum, all four of them, are the same as the components of four velocity except for a factor of mass. So what I want to do is multiply this equation by m squared. m squared, m squared, m squared, m squared. And then this becomes... This is the square of p naught squared, but p naught is just the energy. So this becomes the square of the energy minus... And each one of these is the square of the corresponding component of momentum. E squared minus p squared. I forgot to multiply this side by m squared. E squared minus p squared equals m squared. In terms of the four velocity, it's u naught squared minus u vector squared equals one. In terms of the corresponding components of the four momentum, energy and momentum, energy squared minus p squared equals m squared. Or I can write that as e squared or e equals square root of p squared plus m squared. I just moved the p squared over to the right-hand side and took the square root. Let's put back the speeds of light. I'm going to leave this as an exercise for you. I'll tell you what the answer is. 
This is the square root of p squared becomes p squared c squared, and m squared becomes m squared c to the fourth. Check that out. It's just an exercise in getting your unit straight. And there it is. Energy in terms of momentum, square of momentum, and the mass. Now, from this formula, we can immediately take the limit that the mass goes to zero. We had trouble thinking about a photon in terms of velocity, but we have no trouble in thinking about photons in terms of this formula. What does it say? It says, hmm? Yeah, right from zero. That's right. Energy becomes equal to the square root of the square of the momentum times the speed of light, which is just the speed of light times the magnitude of the momentum. Let's call it magnitude of momentum. Square root of p squared is just the length of the momentum vector. Energy for a massless particle is simply the magnitude of the momentum vector, but in order to keep the units correct, you have to multiply by the speed of light. This is true for photons. It's approximately true for neutrinos. Neutrinos have a tiny little bit of mass. It's true for gravitons. It is not true for particles which move significantly slower than the speed of light. All right, now that we know how to express the energy of a massless particle, we could solve the following problem. Let's take a particle. I'm going to give this particle a name. It's called positronium. It happens to be an electron and a positron in orbit around each other, but it's not important what we call it. It's not important what it is. It's important what we call it. We'll call it positronium. It's an electrically neutral particle and it has a mass. Its mass is approximately the mass of two electrons. But I don't care whether you call it two electrons or not. It's a little bit, is it more than two electrons or less than two electrons? Less than. And why is it less than? It's bound. Yeah, it's got a little bit of kinetic energy, so that makes it a little bit more than two electrons, but it has even more negative potential energy. So whatever the positronium particle is, it doesn't matter what it's made of. It's simply an electrically neutral particle with a mass of... I'm not going to ask you to look up the mass of the electron. Yes, let me ask you to look up the mass of the electron. What's that? Yeah, it's.51 MeV, but it's something in joules. I was hoping somebody would give it to me in joules. I'm just telling you how to study it. What is it? I'm telling you how to study the mass of 31. Is that true? I don't remember. I don't know. I have an idea. I mean, I know it's a... 9.1 times 10, that's the electron mass. All right, so 9.1, that's 1.8, 9.1, blah, blah, blah, blah, blah, blah. Some number of joules. That's positronium. So there's positronium, and if you leave positronium around for a while, positronium will decay. the two photons will just go off. In other words, you've made electromagnetic energy out of them. You've made just plain old electromagnetic energy out of the positronium. The positronium disappears and becomes in its place two photons going off separating from each other. Can you calculate what the mass, moment, not the mass, what the energy and momentum of those two photons are? So let's go through the exercise. This is something which would not make any sense at all in non-relativistic physics. In non-relativistic physics, the sums of the masses of particles are always unchanged. Chemical reactions happen. Some chemicals turn into other chemicals and so forth. 
But if you weigh the system, if you weigh the mass of the system, the masses, the sums of the ordinary masses never change. That's non-relativistic physics. Here the sums of the ordinary masses is changing. This has a mass of whatever it was in joules. This has a mass of whatever it is in kilograms. And each of these is massless. The photons has no mass. So the sum, the numerical sums of the masses of particles is not conserved. Not if this process happens and this process does happen. But the right rule is not that the sums of masses is conserved. It's that the energy and the momentum is conserved. So the first question is what is the momentum? Let's do momentum conservation first. Let's assume that the positronium atom, the positronium particle, is at rest in our frame of reference. If it's not at rest in your frame of reference, just move your butt and get into the frame in which it is at rest. Get into the frame in which it is at rest. So it's at rest. What's its momentum if it's at rest? Zero. So the total momentum of the system to begin with is zero. Now it decays to two photons. The first conclusion is the photons must go off back to back in opposite directions. If they don't go off in opposite directions, it's quite clear the total momentum will not be zero. So they have to go off in opposite directions. By this, yeah, they will go off in opposite directions and they must have equal, exactly equal momentum, back to back, equal and opposite momentum. Why? The initial thing had zero momentum. The final thing has zero momentum. All right, so that means that this particle goes off with a momentum p and this particle goes off with a momentum minus p. Good. Now let's use energy conservation. The energy of the initial molecule, not molecule, positronium atom is its mass, whatever its mass is, times the speed of light squared. This is not the mass of the electron, not exactly, very, very close to it, but not exactly, a little bit less than twice the mass of an electron, but it is the mass of the positronium atom. So you get out your spring balance or whatever it is, your scale, you put a positronium atom on it and you weigh it and that's the mass of the positronium atom. And when it's at rest, in the rest frame, it has an energy equal to mc squared. That has to equal the energy of the resulting two photons going off. So this has to equal the energy of the two photons. The energy of the two photons is the same because they have the same magnitude for the momentum and it's just equal to twice the speed of light times the momentum, times the absolute value of the momentum, p. So that tells us what the absolute value of the momentum has to be. It's mc over two, that's the momentum of each one of the photons. That's the simplest example of calculating a relativity decay and calculating what happens. This is the momentum of the photons that go out. We could convert this using a little bit of quantum reasoning. We could convert this to a wavelength for the photons going out. If we use the connection between wavelength and momentum that we learned last quarter, we could convert this to a statement about the wavelength of the photons. It's very small, it's much, much smaller than optical. So this is the mechanism by which a mass turns into energy. 
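To collect the positronium exercise in one place — this is only a restatement of the steps just carried out, with the quantum connection at the end written using the de Broglie relation and Planck's constant h:

\[ \vec p_{\text{total}} = 0 \;\Rightarrow\; \vec p_1 = -\vec p_2 \quad (\text{back to back}), \]
\[ M c^2 = E_1 + E_2 = 2\,|\vec p|\,c \;\Rightarrow\; |\vec p| = \frac{M c}{2}, \qquad E_\gamma = \frac{M c^2}{2} \ \text{each}, \]
\[ \lambda = \frac{h}{|\vec p|} = \frac{2h}{M c}, \]

which, for a mass of roughly two electron masses, is indeed far smaller than an optical wavelength.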
Of course the mass was always energy, but it was in this frozen form of rest energy of the positronium atom, and when the positronium atom decays, it results in two photons going out. The two photons collide with the atmosphere, with everything else, and heat it up. They can be absorbed by electrons, generate electrical currents, and so forth and so on. So the energy gets converted from this frozen form of rest mass into a dynamical, useful form of electromagnetic energy. Yeah. How is spin conserved? Well, if the atom to begin with has zero angular momentum and these guys are going off back to back, they better have opposite angular momentum. So if the electron and positron spins are lined up together, it won't decay? No, no, it will decay. It will always decay. It will always decay. You have enough freedom in the photon angular momentum to make it go. Can you explain why massless particles have to go at the speed of light? Or is that outside the scope of this part? Well, let's put it this way. In order to have a finite energy, in order to have a finite, well-defined energy, if v goes to one, the denominator goes to zero, so the mass in the numerator better also be zero. That's all. You said it'd be the same problem with the particles if the resulting particles were gravitons. What would the initial particle be? Oh. Some fraction of the time, the positronium atom, for example, will decay into a pair of gravitons. Now, that percentage of the time is ten to the minus, I don't know, some large number. Some percentage of the time, the electrically neutral positronium atom will decay to just a pair of photons — ah, gravitons. As I said, don't look for it in the laboratory. Yeah. That decay has to be instantaneous because of time evolution also. Instantaneous means what? What does it mean, that the decay... You have, let's say, the two photons emitted when the positronium decays, and they're linked to it by the time evolution. They're necessarily going at the speed of light. That means that their internal clock is frozen. That means that it's not supposed to be a continuous process. Well, let's see. So what experimental question would you ask? You could detect the photons in some sort of chamber that leaves tracks. You could trace the two photons back and you would discover that they both come together at exactly the place where the positronium was. So you could say, look, if the decay was peculiar and the positronium decayed first producing one photon that went out to here and then later creating the other photon which only went out to here, they're both moving with the speed of light, and when you trace them back, they would appear to come from some ways over here. That doesn't happen. If you trace back the trajectories of the photons as they leave tracks in a chamber or something like that, they will come back and intersect at exactly where the initial positronium atom was. I'm not sure if that's the question you're asking, but I think it's the only thing I can make out of it. You know, before that, there's an equation I want to refer to. What's that? Oh. Yeah. I understand you concluded that E squared minus P squared equals M squared from that equation up above. Right. And it looks like that equation is saying that U naught squared minus U x squared minus U y squared minus U z squared is one. Yes. Why is that? Oh, we went through that, but I'll go through it again. We'll go through it again for you.
We already had two versions of it, but one of them was garbled. And I guess an ungarbled one just cancels out a garbled one and leaves no explanation. All right. So do you remember this formula? Delta tau squared is equal to delta t squared minus delta x squared. Okay. Oh, here it is right over here. Right. Here it is. Okay. Thank you. Good. Yes, Kevin? So U naught is dTv tau. That's correct. Delta t delta tau. Right. What does that mean about energy? Does that tell us anything about energy? It doesn't strike any deep chord in me. You know, I know it's true. Does it really ring something powerful? Only in a sense. Well, let's see. You want to dT d tau. Nothing, nothing really visceral in that. Because the rest dT d tau is one. Motion, it's not one. And it's telling you how the energy changes as a function of velocity. But nothing more than that. Yes. Yes. Not if momentum is conserved. Well, in a laboratory situation, in any laboratory situation, of course, there are contaminating effects. Even if nothing more, the presence of some tiny stray field, magnetic or electric field or something, could corrupt the thing and lead to a slightly different answer. Remember, the role is momentum is conserved for a truly isolated system. A truly isolated system means that you've accounted for everything in the system and nothing else is interacting with it. Then the momentum of that system is conserved. Now, if your laboratory is sufficiently isolated and the region of space is cold enough that there are no particles bombarding and so forth and so on, the positronium together with the photons constitute an almost closed system. But it's really not completely closed. If nothing else, there's gravitational forces between the Earth and the particles. They're completely negligible, but they're not zero. So, yeah, there are contaminating influences. I think you're right. I mean, like the uncertainty principle that you can have in the final. That doesn't affect it. Momentum is conserved. Then where it is? The better you know the momentum, and in particular, the better you know the relationship between the momentum, it may be that you have less information about the relative positions. But that's a small effect. And for our purposes tonight, I want to separate quantum mechanics from classical, but it's a real question. Right. Your formula for energy was just an approximation. Well, one, there was an approximation. This was an approximation. Well, this is not an approximation. Well, let's see, where is it? I lost it. The equation E equals m plus empty squared over 2. That was an approximation. The non-approxification is E equals m over the square root of 1 minus v squared. That's right. This is the C in that equation. Oh, OK. Let's do that. OK. Ah, over here. All right. So we begin with E equals, what do we want to start? M divided by square root of 1 minus v squared. So we look at the units and in a minute, well, right now, right now, one and a v squared have different units. The only way to make v squared have the same units as 1 is divided by c squared. There's no other way. Now, the 1 minus v squared over c squared is dimensionally consistent. However, the whole formula is not. The left-hand side is energy, the right-hand side now has units of a mass. Mass and energy don't quite have the same units. They're related by the square of a velocity. We know that the units of energy, the units of energy are equal to the units of mass times the units of a velocity, let's put v squared. 
An example would be kinetic energy is one half m v squared. We know that's consistent. All right. So it tells us that we have to provide the square of a velocity here, and that tells us that the c squared has to go over here. Right, let it be said d equals 0. That's right. Then you get just plain mc squared, right? What is the positronium experiment? That's the decay of the two photons and then what? Simultaneously in offerings of reference. Yeah, yeah. So this is the electron and the positron. Well, that's a hard thing to check. I mean, if you had the positronium at rest, now remember, because of quantum mechanics, there's going to be some uncertainty in its position. And so there is a region in which you may have known that the positronium was in. Then the positronium decays. And if you trace it back along the world line of the photon that comes out, the two photons, they will intersect. You could draw a space-time picture of this. Here's the space-time picture. Here's the positronium. Time goes up. This is the world line of the positronium at rest. Then the positronium decays. And a photon goes out this way. Whoops, that's not very good. Photon goes out this way and a photon goes out this way. If you trace back along the world line of the photon, you will find out that they will intersect, first of all, at the spatial location where the positronium was, but they will intersect at exactly the same time. Or at least at the same time if we ignore quantum uncertainties. What determines the axes along which the two photons come out of the positronium? Random. Sometimes that way, sometimes that way, sometimes that way. That's the randomness of quantum mechanics. That's like taking a spin, orienting it this way, and then measuring it along the other axis. Sometimes this way, sometimes that way. So it comes out with some angular distribution, but randomly in different directions. And at random times to some extent. When you first defined L earlier to see it, Lagrangian? Right. I think you picked it because it was the one thing we had available that would be invariant. The action principle being a statement about minimizing something or other about a trajectory will be invariant if you choose the action to be invariant. Right, so we... Is there a more bottom up way to see that? No. No. No. So when we did the classical mechanics, I mean, I understood how this is right, that we basically chose the action so that when you get the Euler equations for that path that they provide the equations of motion. Newton equations. Newton equations. Yes, indeed. So it didn't seem like there was anything in the algebra that's going on here. Well, no. The only reason, we haven't put anything concept of force. Now, all right. What we could do is we could take this Lagrangian and work out the equations of motion and see what the equations of motion say. But we already know what they say. They say that the energy and momentum and therefore the four velocity is constant. And that's all they say. Just as Newton's equations, when there's no forces around, just say... Well, I say that mass times acceleration is zero. That's the same as saying the time rate of change of momentum is zero or that the momentum is constant. So this will say that the momentum is constant, nothing more, or that the four velocity is constant. Well, the four velocity being constant, that means not only in magnitude but in direction, it says that the particle moves along a straight line in space time. 
That the motion of the particle is a straight line through space time. And that says it moves along a line in space with constant velocity. That's the whole content of the Euler-Lagrange equations for this particular case. Now, we could try putting force... We could try asking how does this get modified if we try to put various kinds of forces in. And we will do that. One of the things we will do is allow the particle to have a charge and to couple it to an electric and magnetic field and see how it moves. But so far, that's a little more complicated than anything we've done up till now. Up till now, the whole content of the Euler-Lagrange equations is just momentum and energy conservation. Or straight line motion, but not just straight line motion. Two of Newton's laws become the same law. What are Newton's laws? A particle moves along a straight line and the other one says the particle moves with constant velocity. Here it becomes a particle moves in a straight line through space time, which entails both straight line motion and constant velocity. One more question. Now we've got to go. Can you talk about how in the limit where velocity gets very low, that relativity, the laws approach the laws of the limit? Or at least the laws of conservation as they would have been derived from Newton. I'm just trying to make a connection. I'm not really sure here, but in a previous quarter, when we were talking about quantum mechanics, we were talking about this other limit where you increase and ask if you mean the laws of quantum mechanics also approach the other laws. But it's an entirely different limit. One is velocity and Newton is like the mass of a number of particles. A combination of various things, but let's say the mass of the object. The important thing is the mass of the object and the smoothness of the forces that act on it. They shouldn't be abrupt and sudden. But that's another, we've got to separate things. This has got to do with the speed of light. Quantum mechanics has to do with Planck's constant. You can have a situation which is highly quantum mechanical, but non-relativistic. You can have a situation which is highly relativistic and not quantum mechanical. You can have both, relativistic and quantum mechanical. The two independent limits. Is this what they say that there's a compatibility between the two? I don't know where I put that. There's no incompatibility. I think you're thinking about gravity. Gravity are puzzles about, but we haven't gotten there yet. So far we're not dealing with gravity at all. We'll come to it. I think it's time to go home.
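For reference, the content of those closing questions in equations, using the Lagrangian quoted earlier in the lecture with c equal to one (a sketch only, nothing beyond what was said):

\[ L = -\,m\sqrt{1-\dot{\vec x}^{\,2}}, \qquad p_i = \frac{\partial L}{\partial \dot x_i} = \frac{m\,\dot x_i}{\sqrt{1-\dot{\vec x}^{\,2}}}, \qquad \frac{d p_i}{dt} = 0, \]

so for a free particle the Euler-Lagrange equations say only that the momentum, and with it the four velocity, is constant: straight-line motion at constant velocity through space-time.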
(April 23, 2012) Leonard Susskind begins to discuss particle mechanics and the role that they play in the special theory of relativity. This includes how particles move, the idea of momentum, and some topics on energy. In 1905, while only twenty-six years old, Albert Einstein published "On the Electrodynamics of Moving Bodies" and effectively extended classical laws of relativity to all laws of physics, even electrodynamics. In this course, Professor Susskind takes a close look at the special theory of relativity and also at classical field theory. Concepts addressed here include space-time and four-dimensional space-time, electromagnetic fields and their application to Maxwell's equations.
10.5446/15002 (DOI)
Stanford University. Alright, we were, last time we worked out basic Lorentz transformations relating to frames of reference. Whether or not I said it, let me say it now, we were mainly dealing with problems in which all of the motion, and in particular the relative motion of different observers, is along one particular axis. We didn't try to think fully three-dimensionally, and the picture you could have in your head is that we have a long railroad track, one-dimensional. Some set of observers are sitting still at the station. We could call those the stationary observers, and other observers are in the train, and they're moving relative to each other along the one-dimensional axis. We didn't worry very much about the other axes. I said one or two things about it, and we talked about how you relate the coordinates of one observer relative to the other. In particular, we wrote down Lorentz transformations based on the basic hypothesis of Einstein that all reference frames see the speed of light exactly the same. In fact, we said with the units that we used, all observers see the speed of light being one. That's, of course, a choice of units. We work in years and light years, or seconds and light seconds, whatever choice of units. If we choose them correctly, we can make the speed of light one, and that simplifies equations. Of course, if we really want to plug in to the real observational physics, we might want to use the fact that the speed of light in common units, in the common units that an experimental physicist would ordinarily use, is not one. That's three times ten to the eighth in some units, and so we would put back the speed of light, and there's always a unique way to do that. The unique way to do it is to make sure, simply stated, that you modify the equations by appropriate factors of the speed of light, C, so that the equations are dimensionally consistent. I will go back and forth. Mostly I will use the speed of light set equal to one, but every now and then, just to illustrate a point, I will stick the speeds of light in, and you can go through the equations and do that yourselves. Okay, so two observers, one moving down the axis with a velocity v relative to the stationary, moving down the tracks with velocity v relative to the stationary observer. The stationary observer sees the moving observer moving the units of space per unit time, v being velocity. And of course, by symmetry, just by the symmetry of the problem, if we believe that all coordinate frames are equally valid, the same relationships, the same kind of Lorentz transformations, will relate the stationary observer's coordinates to the moving coordinates. To the two-way street, the stationary observer ascribes stationary coordinates, the moving observer ascribes his coordinates, and they can each be related to each other reciprocally. And the reciprocal relation is truly reciprocal, exactly the same relations, except that whereas if I were moving to the right, you're right, if I were moving to your right, you would say my velocity is positive, you would be moving to the left. As far as I was concerned, I would say your velocity was negative, and so in relating the two frames of reference, the only thing we have to remember is there's a sign change of velocity when you go back and forth. Okay, so for example, if x prime is the coordinates as seen in the train, and x prime is the train over here, let's draw a train. That's the train. 
Okay, there's an observer in the train, and there's an observer in the tracks, or an observer in the station. Here's the observer in the station. Here's the observer in the train. And the observer in the train is moving with velocity v v. The observer in the train has meter sticks, and the meter sticks can be laid out on the floor here to form a grid. The observer also has a timepiece, a clock, likewise the observer at rest, well, rest with respect to whom? Well, rest with respect to the station. The observer at rest with respect to the station also has his meter sticks laid out, and also has his timepiece. And they make various comparisons. You know how all of this works. An event which takes place. An event which takes place, an event means an event happening at a point of space, and a point of time. In other words, it's a point of space time. I like to think of it as a flash bulb exploding someplace, going off someplace. It doesn't matter whether it's in the train or outside the train. Let's say in the train for convenience, a flash bulb goes off over here. At a time that the stationary observer reckons to be t time t at position x. Now, position x means coordinate x in the stationary reference frame, x right over here, at a time t, according to the timepiece of the stationary observer. So the stationary observer ascribes to it coordinates x and t, and the moving observer ascribes to the same event, coordinates x prime and t prime. And we worked out the last time the relationships that are necessary between x t and x prime and t prime, such that everybody will always agree that the speed of light is one. Let me write them down quickly. x prime, these are the lunch transformations. x prime is equal to x minus vt. Now, Newton would recognize that, but what he wouldn't have recognized was the square root of one minus v squared downstairs. And if you want to put back the speed of light, it goes right over here. I'm going to put it in and then take it out, v squared over c squared. And of course, if the velocity is small by comparison with the speed of light, this is a terribly tiny correction. Let's just call it one minus v squared. And t prime is equal to t minus vx divided by that same squared of one minus v squared. If we wanted to add the other two directions, in particular the directions out of the board and vertical, in other words, the directions perpendicular to the tracks, we could add them in very simply. Perpendicular directions don't change under a change of velocity along a given axis. If the change of the two frames relative velocities are along the x axis, then the y and z coordinates are unchanged. y prime equals y and z prime equals z. I won't write it. If we need it, we'll use it. We can invert these relations. This is simply a matter of solving for x and t in terms of x prime and t prime. Let me remind you what the result would be. It would be x equals x prime plus vt prime divided by that same square root and t equals t prime plus vx prime divided by square root of one minus v squared. The only difference, the only asymmetry, is where you sort of velocity over here. You change the sign of the velocity just to account for the fact that the relative velocities are in opposite directions. You can also read off, if somebody gave you this form for the relationship between the coordinates, you could easily read off what the relative velocity between the two observers is. 
You look at the x equation here and you say the moving observer's coordinate is x prime equals zero right at the position of the origin of coordinates inside the train. x prime is equal to zero. x prime equals zero corresponds to x equals vt. You just look at this and you say the moving observer, the prime observer, let's not call them moving, the prime observer is at rest or the position x prime is at rest or is equal to zero, excuse me, x prime is equal to zero. The origin of the prime coordinates corresponds to x equals vt. You don't have to know about this denominator there, you just look at x equals vt, that specifies x prime equals zero, and that tells you x equals vt tells you that the relative velocity between the two of them is v. All right, now we want to do another exercise. The other exercise is to assume there's a third observer. The third observer is moving relative to the railroad car, relative to the train. He's got a little kiddie car inside the train. And he's moving relative to the passenger, let's call him the passenger, he's moving relative to the passenger with velocity u. The passenger sees the little kiddie car, the little kid in the kiddie car pedaling down the aisle of the train with velocity u relative to himself. Question, what does the stationary observer ascribe to the car, what velocity does the stationary observer see? And the way to solve this is just to use, I know of no way to guess the answer. The answer is some velocity, we can give it a name, we can call that velocity w. You could use v1, v2 and v3, but I hate subscripts. And so I prefer to say the velocity of the train is v relative to the stationary observer, the velocity of the car relative to the train is u and the velocity of the car relative to the stationary observer I will call w. All right, so the stationary observer sees the car move with velocity w. And the question is, what is w in terms of u and v? The answer is just to use a bit of logic. The bit of logic is that the relationship, first of all we should give the coordinates, excuse me, there are also coordinates in the car. Moving with the car, this could be a set of meter sticks laid out on the floor of the car and also a timepiece that the driver of the car has. So there's three sets of coordinates. In this case, x and t are the coordinates that the stationary observer uses. x prime and t prime are the passengers' coordinates. And a third set of coordinates which let's give them the name x double prime and t double prime. x double prime and t double prime are the coordinates that the kid in the kiddie car uses to describe things relative to the position, relative to his own frame of reference. All right, so now it just takes a little bit of logic to say we know what the relationship is between the double prime coordinates and the single prime coordinates. Those are related by velocity u. u is the velocity of the double primed relative to the prime coordinates. So we can write down those relationships straightforwardly. Let's, we don't need this over here. Let's write them down here. x double prime is equal same exact kind of relationship. Same exact kind of thing except we'll put in primes here and instead of v, we will use the relative velocity u. All right, so this is going to be x prime minus u t prime divided by square root of 1 minus u squared. All right, so this is the Lorentz transformation between the double prime frame and the single prime frame, t prime minus u x prime over root of 1 minus u squared. 
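Collecting the two transformations now in play, written in γ shorthand with c equal to one — these are just the formulas above, nothing new:

\[ x' = \gamma_v\,(x - v t), \qquad t' = \gamma_v\,(t - v x), \qquad \gamma_v = \frac{1}{\sqrt{1-v^2}}, \]
\[ x'' = \gamma_u\,(x' - u t'), \qquad t'' = \gamma_u\,(t' - u x'), \qquad \gamma_u = \frac{1}{\sqrt{1-u^2}}. \]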
So now how do we find the connection between the double primed coordinates in the car and the unprimed coordinates at rest in the railroad station? And that's simple. All we do is let's take this equation over here. Let's focus on this one. The time equation also works out very nicely, but let's focus on the space equation here. We know what x prime is in terms of x and t and we know what t prime is in terms of x and t. So all we have to do is to plug in and let's do so. Oh, good. Nice, good pin. All right, so x prime is x minus v t divided by square root of 1 minus v squared. Now that's just this x prime here. So far, that's just x prime minus u t prime. And let's put in t prime. t prime is t minus v x and the whole thing again divided by another square root of 1 minus v squared. Now so far, I have only written this and this. I have to put in the denominator. To put in the denominator, I just put in another factor of 1 minus u squared, 1 minus u squared, square root of 1 minus u squared. So that is the relationship between x double prime and x. We can write this a little more simply. Let's focus on what's some big denominator. The denominator involves the product of the two square roots. All right, let's put it in. Square root, square root. I'm not going to write out what's inside the square root. It's just the product of these two square roots. But I'm interested in the numerator really. The numerator has an x and it has a plus u v x. X here and plus u v x. Why is it plus u v x? Because there's a minus sign here and a minus sign here. X times 1 plus u v. 1 plus u v x. And what about t? T will multiply minus v plus u times t. Minus v and minus u times t. And this is the answer, but it's not very transparent what it means. But all we need to do to figure out the relative velocity of the double prime frame relative to the unprime frame is just to do exactly what we did over here. If we want to find out what x prime equals 0 means, just set x minus vt equal to 0. And that tells you that x is equal to vt. It tells you how the fellow at rest here, who is at x prime equals 0, how he's moving relative to the unprime frame. So we do exactly the same thing. We say x double prime will be 0. X double prime, of course, being 0 means the position of the kid inside the kiddie car here. X double prime equals 0 is the same thing as the numerator here being 0. Denominator, who cares about the denominator? It's not, it's there. But we can set the numerator equal to 0 here, and that will tell us under what circumstances x double prime is equal to 0. So what does it say? It says x double prime is equal to 0 when 1 plus u v x is 0. x is equal to u plus vt. I've set this equal to this to make the numerator 0. Or if I divide by 1 plus u v, it tells me x is equal to u plus v over 1 plus u v t. What is the relative velocity? Sorry, yes. What is the relative velocity? This tells us the x and t trajectory of the kid in the kiddie car here. The trajectory of the child or whoever it is in the kiddie car is x is equal to u plus v over 1 plus u v times t. That corresponds to x double prime equals 0. All right, but another way to say it is just that the stationary observer sees the kiddie car, sees the kiddie car moving along with velocity u plus v over 1 plus u v. So we now know what w is. That's exactly what w is. W, we've worked it out. We can now, we can write it in the form, well let's write it in the form, x double prime is equal to t minus w x divided by square root of 1 minus w squared. 
And t double prime is, did I write that right? No, I didn't. x double prime is x minus wt, and t double prime is t minus wx, over the same square root. I don't have room for the square root there. All right, and what do we find? W is? W is just this. That's how fast the kiddie car is moving as seen from the stationary frame. So yes? Isn't that dimensionally incorrect? Because we set c equal to 1. Right, we're going to put them back. We're going to put them back in a minute. Yeah, we're going to put them back in a minute. Yeah, you're a step ahead of me. OK. All right, so w, the speed w, is equal to u plus v divided by 1 plus uv. And now if I want to restore the units, just to answer your question again, if we set the speed of light equal to 1, of course we're working in units in which velocities are dimensionless. But if we want to restore the dimensions, we simply look at this equation and we say, look, w equals u plus v. That's dimensionally fine. It's adding 1 to uv, which is peculiar. But we restore the dimensions by putting in c squared here. 1 plus uv over c squared. uv over c squared is dimensionless. All right, so this is the equation with c being restored. What is the Newtonian equation? The corresponding pre-Einsteinian equation, well, stationary person sees passenger moving with velocity u. Passenger sees kiddie car moving — sorry, stationary observer sees passenger moving with velocity v. Passenger sees kiddie car moving with velocity u. The answer naively would just be u plus v. But it's not u plus v. It's u plus v divided by something which, as long as u and v are significantly smaller than the speed of light, uv over c squared will be very small. We'll put in some numbers in a minute just to test that out. But as long as u and v are small compared with the speed of light, this will be negligible. It's the product of two ordinary velocities, 100 meters per second or whatever it is you want to put in, divided by 3 times 10 to the 8th meters per second all squared. This is a very, very small number and it's a very small correction. On the other hand, when u and v get up near the speed of light, it can get very significant. So let's see what happens. First, let's do the case where u and v are small velocities compared to the speed of light. We'll remember that u and v mean velocities measured in units of the speed of light. So if u and v are measured in units of the speed of light, and let's say for example, supposing u is equal to 0.01, 1% of the speed of light, and let's say v is also 0.01, they're both 1% of the speed of light, and this turns out to be 0.02, that's what Newton would recognize, divided by 1 plus, and now u v is 0.0001. Did I do that right? Yeah. And 0.0001 is a very small correction. Now, this is a very sizable velocity, incidentally. U being 1% of the speed of light is damn fast. It's not 3 times 10 to the 8th, it's 3 times 10 to the 6 meters per second. So this is pretty fast, but the correction is small. 1 plus something. Notice the answer is a little bit smaller than what Newton would have estimated. This number here is a little bit bigger than 1, and so with it in the denominator the answer is a little bit smaller. Let's go to the other extreme. Let's suppose u and v are 90% of the speed of light — I'm not so good at arithmetic — u and v are 0.9. v equals 0.9. Newton would have said the kiddie car was moving faster than the speed of light relative to the stationary observer. In fact, he probably would have said 1.8 times the speed of light.
He would have added these two numbers, but Einstein would have put in the denominator here, and we'll see what we get. We get 0.9 plus 0.9, that's 1.8, divided by 1 plus 0.9 squared. 1 plus 0.81, that's 1.81. Slightly bigger than 1.8. The result is that the net velocity is slightly less than 1. In other words, we have not succeeded in making the kiddie car go faster than the speed of light, even though blah, blah, blah, you know the rest of the story. So this is the answer to the question, what happens if an observer is moving faster than the speed of light? Well, you could ask that question, but I think we are pretty well protected against people moving faster than the speed of light if the way they are made to move is relative to some previous frame of reference moving slower than the speed of light. In other words, there's no way, by putting another observer inside the kiddie car here, making him go 90% of the speed of light, et cetera, et cetera, that we're ever going to get faster than the speed of light. So it's a consistent thing to say all observers move slower than the speed of light; even though they can move arbitrarily close to the speed of light relative to each other in any combination, the net result will still be slower than the speed of light. If u and v are c, then what does w end up being? If u and v are c, that means 1. Let's suppose u and v are 1, c and 1 are the same thing. Then u plus v is 2, and 1 plus uv is 2, so the net result is the speed of light again. If each one is moving very, very close to the speed of light — in other words, if the kiddie car is moving relative to the passenger very close to the speed of light — then the result will be that the kiddie car is moving relative to the station even closer to the speed of light, but not in excess of it. OK, that's the… You see, the person in there sees it through the eyes of the x-prime frame. What if the train had glass walls and the observer in the station was looking at both of them? If the train… I assume the train did have glass walls. I don't see how that makes any… We're not talking… we're not talking about how appearances look. We're talking about how measurements of phenomena, made with meter sticks and with well-designed clocks, correlate with each other. What somebody sees is much more complicated, for the simple reason that when an event happens, light has to come from the event, and it can be much more complicated. You visually see… we're not talking about what you visually see. We're talking about correlating the locations and times of events in frames of reference which are defined by meter sticks at rest relative to observers and time pieces which are also at rest relative to them. And it doesn't matter what kind of walls the car has, the transformation laws are universal. Okay. Now, the next thing we talked about last time was the notion of proper time or proper distance or proper interval. Let me just remind you about that very quickly. If we have an event taking place at point x and t, we found out last time that there's an invariant notion of separation, a space-time difference or space-time distance between them — proper time, or the proper… let's put this up higher over here. The proper interval between them is called tau, and it's defined by tau squared is equal to t squared minus x squared. And the interesting thing about this, the important thing about it, is it's the same in every reference frame.
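That invariance can be checked in one line from the transformation written at the start of the lecture (c equal to one):

\[ t'^2 - x'^2 \;=\; \frac{(t - v x)^2 - (x - v t)^2}{1-v^2} \;=\; \frac{(1-v^2)\,t^2 - (1-v^2)\,x^2}{1-v^2} \;=\; t^2 - x^2. \]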
If the reference frame is moving, then we would use prime coordinates but the same quantity here is t prime squared minus x prime squared. It's also equal to t double prime squared minus x double prime squared. It's an invariant. All observers agree on the value of this interval between here and here. They don't agree about the coordinates themselves but they agree about this notion of proper time. The proper time is also the time read by a clock moving between these two points. If the clock is set equal to, let's say, 12 noon at this point and it moves along this trajectory, then the time that it reads at the end of the trajectory is the proper time. And we worked out the time dilation last time. So what I did want to say about this, I want to now put back the other two coordinates, y and z. Y and z and let's put them back into the game for a moment. I'll put them back in. There is another kind of transformation that we can do, not just a transformation between two moving coordinates moving along the x-axis, but we can also consider rotations of coordinates. Let's for the moment not even think very much about relativity. Let's just talk about two different coordinate systems related by an ordinary rotation with respect to each other. So the stationary observer might have two different coordinate systems, one oriented along the x-y-axis and another one oriented along some x-prime, y-prime axis. Not the same primes. Some other set of axes rotated or at an angle relative to the original ones. What happens, now we can forget to prompt the prime coordinates for the moment. What about this x squared here? This x squared is really the distance from the place where the clock started to the place where it ended up in the unprimed coordinates. Now supposing we take into account the other directions, let's call this y for example, then this becomes a plane here. This point might not be located directly over the x-axis. It might be a point in space-time which is not at the same value of x as the origin here. What then is the interval between here and here? What is the invariant quantity? Well this is actually fairly simple. As long as the event is located on the x-axis, it's t squared minus x squared. If it's not located on the x-axis and we make a rotation, then what was originally x squared becomes x squared plus y squared plus z squared, becomes the spatial distance between the origin and this point over here. This really becomes minus x squared minus y squared minus z squared, etc. So if we're not working strictly along a one-dimensional axis, the invariant proper time between a start of a clock and the place where the clock gets to is given by t squared minus the square of the spatial distance which is x squared, this is Pythagoras' theorem. It's just Pythagoras' theorem applied to x, y and z and that's the notion of proper interval. And that's the one we'll work with. If we make any combination of Lorentz transformations and rotations of coordinates, we will always find any two inertial frames which agree at this point over here, we will find out that tau squared is invariant. It's the same in all inertial reference frames and it's very, very similar to the idea that an ordinary Euclidean geometry, different coordinate axes will ascribe different coordinates to a point in space but they will always agree about the distance of a point from the origin. Here it's this funny kind of distance with a relative different sign between the space components and the time components. 
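A one-line check, added here, that the interval really is unchanged by the Lorentz transformation quoted earlier (c = 1; for a boost along x the y and z coordinates are untouched, so the same cancellation goes through with the minus y squared and minus z squared terms included):

```latex
t'^2 - x'^2
  = \frac{(t - vx)^2 - (x - vt)^2}{1 - v^2}
  = \frac{(1 - v^2)\,t^2 - (1 - v^2)\,x^2}{1 - v^2}
  = t^2 - x^2 .
```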
And that is probably the most central fact about relativity, that this combination is invariant. That's really what it's all about. Okay, so let's keep that idea in mind and introduce a little bit of notation. We get tired of writing x, y and z and we try to condense the notation. So let's condense the notation to standard relativistic notation. Yeah? Just going back to the train there for a second, if you assume that the speed v, let's say it was 0.9 like you said, it was very close to the speed of light, and the speed u, let's say that instead of a car that guy has a flashlight in his hand so he's actually sending a light beam, and that's that. Why don't you try it? And I'm kind of wondering how do these two people measure the speed of light, whether they get the same number? Why don't you try it? Take one. Here it is. What if c is the speed of light? That means it's one. Suppose v is one. Oh, I don't know. Which one do you want to make one? Make u one. Make u one. Okay. Make u one. With u equal to one, that's one plus v divided by one plus v. So the answer is one. The speed of light is the speed of light. That light ray moves with the speed of light. In both frames. In both frames. Well, last week I was confused about something and you pointed out that generally it comes down to simultaneity, definitions of simultaneity. The question I asked a minute ago, I realized that with the glass walls I was trying to look at both of them and trying to reintroduce simultaneity into the… Okay. If you understand it, good. Let's talk about light rays. How light rays move. Light rays, let's go back to the one-dimensional case then. One space and one time dimension. Let's go back. Light rays move, for example, along 45 degree axes like this. That means they will move from the origin to the point x, t, but only if x is equal to t. That's just saying the light moves with velocity one. In a certain time t, the distance it moves is equal to that time. So it moves to the point t, t, the same value for both coordinates. That means that t squared minus x squared is equal to zero, or that the space-time interval, I'll call it space-time interval, proper time, it goes by any number of different names, is zero. That's different than ordinary Euclidean distance. Euclidean distance, if two points have genuinely zero distance between them, they're sitting on top of each other. In space-time, if two points have space-time distance, or proper time, equal to zero between them, that simply means they are related by the possibility of a light ray going from one to the other. Now if we introduce the additional coordinates y and z, then what was originally x squared, the square of the distance that the light beam traveled along the x-axis, will obviously become x squared plus y squared plus z squared, the square of the distance, and the interval will be equal to zero for a light ray. So the motion of a light ray is t squared minus x squared minus y squared minus z squared equal to zero, or again, tau squared is equal to zero. So that's one concept of how a light ray moves. It moves along trajectories such that the proper time along the trajectory is equal to zero. Photons move that way. We could draw a picture for this. A light ray moving to the right is a 45 degree line to the right. A light ray moving to the left moves exactly the same way except in the backward direction. What about a light ray moving outward? 
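An added remark tying the invariant interval to the flashlight question above: a light ray has zero interval, and a zero interval is zero in every frame, so every observer assigns the ray the same speed,

```latex
x = t \;\Rightarrow\; \tau^2 = t^2 - x^2 = 0
\;\Rightarrow\; t'^2 - x'^2 = 0
\;\Rightarrow\; x' = \pm\, t' ,
```

that is, the ray moves with speed 1 (the speed of light) in the primed frame as well.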
Well that moves the same way except at a 45 degree angle in the outward direction. More generally, we would draw a kind of cone. Now I can't draw the full three dimensions plus time. It's too many dimensions for me to draw on the blackboard. But if we had only two dimensions, x squared plus y squared instead of, let's forget the z squared here, we would find that the motion of light rays is such that in space time they move along the cone created by 45 degree light rays coming out of the origin there. That's called a light cone. A light cone is the set of points that a light ray can arrive at if it starts at the origin. That's the notion of a light cone. And this would be called the future light cone. The future light cone is all of the places that light can get to starting at the origin. There's also a thing called the past light cone. A past light cone is all of the places that can send a light ray to the origin. So the future light cone is all the places that the origin can send a light ray to, and The past light cone is all of the places that can send a light rate to the origin. But the cone turned over on its nose, the past light cone and the future light cone. And this is terminology. Terminology is often very helpful. But that's all it is, is terminology. Yeah? You said the fundamental invariance for these systems is t. Everybody agrees about the time interval between two events. The time interval is just t. That's a good point. All right, we're getting close to. Let's talk about the concept of four vector. The most basic example of a vector in ordinary three dimensional space, and I am not talking about the kind of vectors we talked about last quarter. We're not talking about state vectors and quantum mechanics. We're talking about vectors in space. All right, the most basic notion, the example of a vector, is the interval between two points in space. Given two points as a vector which connects them, that vector could measure the, could have to do with how far somebody walked in the, or whatever. That's a vector. It has a direction. It has a magnitude. And if we wanted to, we could think of it as a vector beginning at the origin and ending up someplace else. It doesn't matter where it begins, but the vector, this is the vector. We can move it around. It's the same vector. But we can think of it as being an excursion starting at the origin and ending at some point x. And it has coordinates, in this case x, y, and z, the location of the final point. Or we could call them xi, i being one, two, or three, representing x, y, and z. Three coordinates, xi, would really stand for x, y, and z. Or x1, x2, and x3. Now we have another added component to worry about. Not only do we want to know where an event is, but we want to know at what time that event is. Let's suppose that we're measuring space and time relative to some origin. That means we have to add in another coordinate, t. In other words, the vector becomes a four dimensional object with a time component and space components. The normal notation for it is to represent it two different ways. I'll tell you the two different ways. We can represent x, y, and z by calling them x mu. Mu goes over the four possibilities. Usually it's normally one arranges them as t, x, y, and z. x mu. And what does mu run over? What are the values of mu? Why did you say zero? Because everybody says zero. Yeah. Did you hear me say it? I didn't know I said it. OK. Right. For whatever reason, historically, t was not considered the first coordinate. 
x was considered the first coordinate. y was considered the second, and z the third. You might have thought that time should be the fourth component. For whatever historical reason, time was thought of as the zero component. So this stands for x zero, which is time. x one, which is x. x two, which is y. And x three, which is z. What about the spacetime distance between these points, the proper time? That's t squared minus x squared, the square of it. The square of it. Tau squared is t squared minus x squared minus y squared minus z squared. But we can also write it as x naught squared minus x one squared minus x two squared and so forth. x two squared, blah, blah, blah. So that's just notation. It's just a notation. Whenever you see a mu, that means the index runs over the four possibilities of space and time. Whenever you see an i, that means the index runs over only space. OK. i stands for space, and mu stands for space and time. x, just as xi can be thought of as a very primitive version of a vector, not primitive, a very basic version of a vector in space, x mu with four components becomes the notion of a four vector. Just as vectors transform when you rotate coordinates, four vectors transform when you Lorentz transform the space. When you go from a moving coordinate system to another moving coordinate system, then the x's transform exactly the way Lorentz transformation tells you they transform. We could rewrite this as x one is x one minus v x naught. x prime naught, x naught prime is equal to x naught minus v x one, and so forth. There's no content to this. It's just a way of organizing the components of a four vector by calling them all by the same name and giving them an index mu. So we'll use that. We'll use that just to make formulas nice and simple. Yes, they transform linearly. So they could set them up and operate them with a matrix? You'll take it absolutely. Absolutely. You can read off from here a matrix. The matrix would be one minus v minus v one. Absolutely. You can think of Lorentz transformations having associated with them matrices. And you could write that x prime is equal to this times x. Matrix times column vector. The components of the column vector would be x's and t's. x's and t's. Yes, you certainly can use matrices. And I advise you to do so, because it's a good thing to do. All right, now let's talk about some other examples of four vectors in particular. Instead of talking about the components relative to an origin, let's just take a little interval. Let's just take a little interval. It could be an interval along a trajectory. Could be having a trajectory. And we might want to consider along that trajectory a small interval. Now when you hear small, think calculus, eventually we're going to be talking about a little differential displacement along here. For the moment, let's just call it delta instead of d. So this differential element here corresponds to, or not quite differential yet, some discrete distance, corresponds to a dx mu, or delta x mu. Delta x mu means the change in the coordinates, the change in the four coordinates in going from the tail of the vector to the beginning of the vector. And it's composed out of delta t and delta x, delta y, and delta z, delta x mu. What I want to do now is to introduce a notion of four velocity, four dimensional velocity, which is a little different than the normal notion of velocity. Velocity, in this case, this could be the trajectory of a particle. 
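Following the exchange above about writing the Lorentz transformation as a matrix, here is a small numerical sketch (an added illustration, with c = 1; it includes the 1/sqrt(1 - v^2) factor that the blackboard shorthand above suppresses):

```python
import numpy as np

def lorentz_boost(v):
    """Boost along x acting on four-vectors ordered (t, x, y, z), with c = 1."""
    g = 1.0 / np.sqrt(1.0 - v**2)        # gamma factor
    L = np.eye(4)
    L[0, 0] = L[1, 1] = g
    L[0, 1] = L[1, 0] = -g * v
    return L

eta = np.diag([1.0, -1.0, -1.0, -1.0])    # Minkowski metric, signature (+, -, -, -)

x = np.array([2.0, 1.0, 0.5, -0.3])       # some event (t, x, y, z)
xp = lorentz_boost(0.6) @ x               # the same event in the moving frame

interval  = x  @ eta @ x                  # t^2 - x^2 - y^2 - z^2
intervalp = xp @ eta @ xp
print(interval, intervalp)                # both ~2.66: the proper interval is invariant
```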
Let's take this to be the trajectory of a particle from here to here. And I'm interested in a notion of velocity at a particular instant over here. What do I do? If I were doing ordinary velocity, I would take a little delta x and divide it by a delta t and then take the limit, and that would define from the ordinary velocity. That velocity has three components, the x component of velocity, the y component of velocity, and the z component of velocity. There is no fourth component of that ordinary velocity. We're going to introduce now a notion of four dimensional velocity, and we're going to do that by taking the delta x mu, and instead of dividing it by delta t, we're going to divide it by the invariant distance between these two points. Let's call it delta tau. Delta tau is defined so that its square is equal to delta t squared minus delta xi delta xi minus the sums of the squares of the delta x's. Delta x squared, delta y squared, and delta z squared. In other words, it's the invariant space time distance between this point and that point. We take the square root of this, and it gives us delta tau, and that is called the four velocity. It's labeled by a u instead of a v, and it has an index mu, so it runs from 0 to 3, four components. Now, how does this thing relate to the ordinary velocity? We should probably go if we want to get seats, but we'll continue next time. I'll tell you where we're going. We're going toward a theory of the motion of particles. To have a theory of the motion of particles, we have to have notions such as velocity, position, of course, momentum, energy, kinetic energy, whatever. We're moving toward a, first of all, just a motion of particles, and then toward a dynamics of how particles move. The generalization, if you like, of f equals ma, will have a notion of acceleration, all the things that Newton had except the relativistic generalizations of them. And they will be in terms of four vectors. OK, let's see if we can grab a, let's see. I have 10 to 8, so who's right? That clock's right. We better get yourself a good seat. For more, please visit us at stanford.edu.
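The lecture breaks off just as the four-velocity is defined. As a forward-looking sketch (added; this anticipates the standard result for uniform motion, which is not derived above): with a displacement Delta x = v Delta t along x, Delta tau = Delta t sqrt(1 - v^2), so U^mu = Delta x^mu / Delta tau comes out to gamma times (1, v, 0, 0), with gamma = 1/sqrt(1 - v^2).

```python
import numpy as np

def four_velocity(v, dt=1e-3):
    """U^mu = dx^mu / dtau for uniform motion along x with speed v (c = 1)."""
    dx = np.array([dt, v * dt, 0.0, 0.0])          # the step (dt, dx, dy, dz)
    dtau = np.sqrt(dt**2 - np.sum(dx[1:]**2))      # invariant interval of the step
    return dx / dtau

u = four_velocity(0.6)
print(u)                              # ~[1.25, 0.75, 0, 0] = gamma * (1, v, 0, 0) for v = 0.6
print(u[0]**2 - np.sum(u[1:]**2))     # ~1.0: the four-velocity has unit invariant "length"
```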
(April 16, 2012) Leonard Susskind starts with a brief review of what was discussed in the first lecture -- specifically the use of vectors and spin in three dimensional space and in relation to special relativity. In 1905, while only twenty-six years old, Albert Einstein published "On the Electrodynamics of Moving Bodies" and effectively extended classical laws of relativity to all laws of physics, even electrodynamics. In this course, Professor Susskind takes a close look at the special theory of relativity and also at classical field theory. Concepts addressed here include space-time and four-dimensional space-time, electromagnetic fields and their application to Maxwell's equations.
10.5446/14998 (DOI)
Stanford University. Okay, tonight we're going to do a number of things. We're going to talk about uncertainty. We're going to talk more about the Schrodinger equation, the various versions of the Schrodinger equation. We're going to talk about how things evolve with time, we've talked about that some already. Let's see what else we have here. Yeah, we're going to apply the Schrodinger equation idea for the evolution of systems to our favorite little system of one spin, but then if we have time, and I hope we will, I would like to move past the single spin. We have a general theory of quantum mechanics for which I announced what the basic rules were and the basic principles. Hermitian operators are observables, vectors are states, and so forth and so on. Orthogonal vectors mean distinguishable, a very, very general set of principles, and so far we've applied it to exactly one simple system, the simplest possible system in the world. We're not finished with it yet, but hopefully tonight or next time we will move on to the next simplest. Well, no, it's the next to next simplest system. There's a simpler one in between, but two spins. Two spins is the next to next simplest system. Why do you think that two spins are the next to next simplest system, as opposed to the next to simplest? A spin has two states. How many states do two spins have? Two orthogonal states, four. So what happened to the state with three states? The system with three states? You can imagine a system with only three states. That's the next to simplest. So two spins is the next to next to simplest, but let's get on. First of all, the concept of uncertainty. I'm not going to do the mathematical uncertainty principle tonight, but we should just talk a little bit about the fact that there are observables, pairs of observables, which cannot simultaneously be measured. The basic principle is that if you measure something, then by the definition of a good measurement, you leave the system with the same value that you measured. Otherwise, you couldn't confirm what you had measured. So it's important that you leave the system, at least momentarily or instantaneously, having the same value that you measured. The measurable quantities of a system are the eigenvalues of the observable operators, the operators that represent observables, and the states in which the observables are definite, in other words, in which there's no ambiguity about them, the thing that you get after you've done a measurement of a certain quantity is an eigenvector or the eigenvectors of the observables. Now, supposing we had two observables which could be simultaneously measured, if you measure, you can measure one, and it will leave it in a state with that same value, but you can also measure two things simultaneously. It doesn't matter, it could be one after the other, but let's just call it simultaneously. You make two measurements simultaneously of two different things, and you leave both of them in eigenvectors of the quantities that were measured, two distinct quantities. That must mean that the states that you leave things in are simultaneous eigenvectors of the two distinct things. If you can measure things simultaneously, it means basically that they have the same eigenvectors, or that there exists a complete set of basis. 
Incidentally, somebody pointed out to me that the term basis that I've been misusing it a little bit, and that's probably true, I use the term basis to mean a complete set of orthonormal vectors, meaning to say they're all orthogonal. Quantum physicists tend to use it that way. I think in mathematics, the notion of a basis does not require the vectors to be orthogonal. It just requires them to be linearly independent. That's not a big deal, I just pointed out to you. I will use the term basis to mean a complete set of orthogonal normalised vectors. The eigenvectors of an observable can be made to be a complete orthonormal basis. When I say they can be made to be, I mean, you do have the ambiguity what happens if there are two eigenvalues with exactly the same value. Then you have an ambiguity as to how to choose the eigenvectors, but you can choose them perpendicular. If you have two distinct operators, which can be simultaneously measured, and therefore simultaneously left in eigenvectors of those quantities, it must mean that there is a basis, a basis of vectors, which are simultaneous eigenvectors of both quantities. In other words, there exists a basis if, let's call the two observables L and M. These are Hermitian operators L and M. It means that if L and M are to be simultaneously measurable, it must mean that there's a complete basis, basis of states, which are both simultaneously eigenvectors of L and eigenvectors of M. That must mean that L on I equals, let's use the notation for eigenvalues of L. Let's use the notation little L, little L sub I, instead of lambda, L sub I times I. That's the condition that I be an eigenvector of L with eigenvalue L sub I. And at the same time, the same set of states must be eigenvectors of the M observable. So this must be true for a complete set of states. Now let's apply L and M in sequence. First L, then M, and then try it the other way and see what we get. So first let's start with L on I. Here we have it. But now let's take M, multiply it by L, and then hit the eigenvector I. Well, from the first relationship here, this is equal to M times L sub I times I. Well, I've done this plug in this first equation. But now L sub I, that's just number. That's a number. That's not even a matrix. It's just a good old number, an eigenvalue. And that means we can write it as L sub I M times I. But we now know that M times I is just little M. So what does this give? This gives L sub I M sub I times I, the product. Okay, that's not so surprising. I is the thing which can be measured, sorry, M and L are things which can both be measured simultaneously. And if we multiply the two of them together as matrices or as operators, they become another observable whose eigenvalues are just the products. Good. But if we did it exactly in the opposite order, L M on I, we would get exactly the same thing. So in order, these are just numbers, and so that the order that you write them down doesn't matter. L sub I times M sub I is the same as M sub I times L sub I. And so this is equal to the same thing. Same thing. Now supposing you have two operators which have exactly the same action on every member of a basis. That means they will have the same action on any superposition of vectors formed out of a superposition of the basis vectors. In other words, they will have the same action on any vector. M times L and L times M have exactly the same action on any vector, and therefore M times L is equal to L times M as operators. 
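A small numerical illustration of the direction just argued (added, not from the lecture): two Hermitian matrices built to have the same complete set of eigenvectors necessarily commute.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random orthonormal basis: the columns of Q, obtained from a QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))

# Two observables with that same eigenbasis but different eigenvalues.
L = Q @ np.diag([1.0, 2.0, 3.0]) @ Q.T
M = Q @ np.diag([5.0, -1.0, 0.5]) @ Q.T

commutator = L @ M - M @ L
print(np.allclose(commutator, 0))   # True: simultaneous eigenvectors imply L and M commute
```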
So we've proved the following theorem, that if two things are simultaneously measurable, the two observables or the two operators representing those observables commute. L times M and M times L are the same, or L times M minus M times L, which is the commutator, L commutator M must be equal to zero. You can prove a stronger form of this theorem, and it just goes the other way. If you have two operators, I'll leave this to you. If you have two operators which commute, remember, if you have two operators which commute, it means that you can find the complete basis of states in which both of them simultaneously, in which the complete basis of states, are simultaneous eigenvectors of the two operators. It goes both ways. Given that you have two things which can simultaneously be measured, it follows that they commute, and if two things commute, it follows that they can be simultaneously measured. They have a complete set of common eigenvectors. So the condition then for things to be simultaneously measurable is that they commute. What if they don't commute? In fact, we already have some examples of things that don't commute. Sigma X and sigma Y and sigma Z, no two of them commute. No two of them commute with each other, and therefore they do not have simultaneous eigenvectors, and that's no surprise. We know that the eigenvectors of sigma Z are up and down, the eigenvectors of sigma X are left and right, and they're not the same thing. They're different. They don't lie, they're not proportional to each other. So there are examples of operators which don't commute. In fact, we're going to work out the commutation relations for them. They don't commute. In fact, they don't commute, and that's an indication that you can't measure them simultaneously. That's the idea of uncertainty. The idea of uncertainty is when you have two objects, two observables, each of which you could decide to measure and get an answer. You may not know what the answer is going to be. It's not going to be statistical, but you can measure them. After you've measured them, be certain you can repeat the measurement and find consistency, each of them, but not both of them. In classical physics, there's no such thing. If you can measure A and you can measure B, then you can measure both A and B. And you can be certain about both of them if you can be certain about either of them. All right, so that's just a... We'll come back to that theme of uncertainty and quantify it. There may be states where although you're not certain... Well, all right, we'll come back to it. We want a quantitative measure of the necessary degree of uncertainty in measuring two things simultaneously, and that is the uncertainty principle, the moment we have the idea or an explanation of why there are uncertainty between observables in the first place. Yeah? Student speaking Well, if you measure them and you get answers for both of them, you have to leave the system in a state where there's an eigenvector of either one of them, both of them. That's the rule. When you measure something, it leaves it in a state of definite vatness, whatever that happens to be, which means an eigenvector. If you measure two things and you... If you can measure two things simultaneously, it must mean the state that you leave it in is an eigenvector of both of them. Okay. All right, let's come back now to the evolution of systems. 
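And the opposite situation mentioned above: the spin operators sigma x, sigma y, sigma z do not commute, so no two of them can be measured simultaneously. A quick check (added illustration):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

comm = sx @ sy - sy @ sx
print(comm)                          # equals 2i * sigma_z, which is not zero
print(np.allclose(comm, 2j * sz))    # True
```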
As I said last time, I went back and looked at the classical mechanics notes, and I realized that I said, okay, I have something like about 50 pages of quantum notes. The equivalent, classically, was how many pages in the classical mechanics? It was about a quarter of a page, I think. Just a statement that states are sets of things and that you can write on a blackboard points on the... So it took 50 pages to get to where in classical mechanics we were after about 30 seconds. And that's just the way it is. The next thing we did when we talked about classical mechanics and we talked about how states change with time, that took another page, took another page, arrows going from one to the next and so forth. Here, it's not so bad. It's not going to take us another 50 pages, fortunately. In fact, we've already laid it out. It took basically, I would say, something like about five pages or something in my notes, I think four or five pages, maybe a little more, maybe a little more. So we're beginning to compress and things are... Once you've got the idea of states, then the next is a little bit easier. Okay, so let's remind ourselves what we said about the time evolution, the Schrödinger equation. The Schrödinger equation is the equation that governs the time evolution of systems. We began with the idea of the conservation of distinctions and the conservation of distinguishability of states. The implication was that states which are orthogonal remain orthogonal. It was easy to prove from that, and I hope you've proved it. You should be able to prove it. That the U operator... I should have brought my own pens. The U operator, remember what the U operator is. If you want to know what a state is at a later time from its value at time zero, this stands for time zero, then you apply a certain operation. It's more than just an operation, it's a linear operator. Okay, a linear operator U, and you get psi at time t. U, of course, is a function of t. The later or the earlier that you want to project a vector forward, you put in different t's here, and that was the basic equation. But then we required another ingredient, and the other ingredient is that orthogonal states stay orthogonal. So, for example, if we have two orthogonal states, let's call them for the moment i and j instead of psi. i and j are orthogonal. In fact, i and j might be members of an orthonormal basis. Let's take them to be members of an orthonormal basis, and then you allow i to evolve for a time, and you allow j to evolve for a time. The ket vector i becomes, let's say, u of t times i. If you start with i, after a time it's u of t on i, and the bra vector j evolves with the Hermitian conjugate of u. And the assumption is that if i and j were orthonormal, u times i and u dagger times j are also orthonormal. So, the way to say this is if j i is equal to a Kronecker delta, then this is also equal to a Kronecker delta. In other words, putting u and u dagger, sandwiching them between orthonormal bases here, does not change the inner product, and from that you can conclude, since this is true for any members of the basis, that u dagger u is just a unit operator. We did this last time, I know, the unit operator. It does nothing. u dagger times u. Then we went another step. We said, all right, let's consider the evolution from time zero to an infinitesimally short time later. So, we're interested in, let's call it, u of epsilon. Well, epsilon is a small time, and we argued, well, first of all, if epsilon is zero, nothing happens. 
To say that nothing happens means that u is the unit operator. All right, so u of zero is just a unit operator. It does nothing, just leaves the state the same. We added one more thing, and that was a kind of continuity. I may not have spelled it out, a kind of continuity, and that was that u of a small time is close to the identity. In other words, in a very small period of time, the state doesn't change much. So, that's the same as saying that u is equal to the unit operator plus something small of order epsilon. And I used my freedom of definitions, just choosing definitions, to write this as minus i epsilon times h. There was no content in that minus i epsilon. I can just absorb it into h. I put it there. In fact, had I really wanted to keep track of all the interesting constants, I would have put an h bar over here. But this is all a matter of convention. Whether or not we include an h bar, whether or not we include an i, whether or not we include an epsilon, is simply a definition of h. We could have called this whole thing k, and then said k is equal to i epsilon over h. You know what I mean. I don't have to say it more. All right, let's take away the h bar. I'm going to forget it anyway. It's units, simply units. And then we said this, oh, let's write the other one, u dagger of epsilon is equal to 1 plus i epsilon h dagger, dagger meaning Hermitian conjugate. Then we fed these two into the unitarity condition, and what did we find? We found that h minus h dagger must be equal to zero. That's what happens if you feed these two into the unitarity condition, or another way to say it is h is equal to h dagger, or h is Hermitian. Now, that's big news. Why is that big news? It means that h is an observable. It's something that we can measure, and we should be interested in it. What is it? And of course the answer is, it's the Hamiltonian, not of course, but as I said last time, it is essentially, it is, apart from a factor of Planck's constant, it is the Hamiltonian. What's it got to do with the classical Hamiltonian? We talked about it a little bit. I'll come back to it. But for the moment, it's just called the quantum mechanical Hamiltonian. It's a thing which in some way or another, well in a very clear way, generates time evolution. Incidentally, you might think that this, alright, well let's go on. We went a little bit, another step. It's okay, let's write the equation. Psi of epsilon, this is pure review, psi of epsilon is equal to psi of zero minus i epsilon h times psi of zero. I just plugged in for what u is. And then transposed this to the left-hand side and divided by epsilon. Transposing to the left-hand side makes this minus and makes, puts an equal sign here and then divide by epsilon. This is the obvious thing to do. And what do we get on the left? We get the time derivative of the state vector of the system. The time derivative of the state vector of the system, d psi by dt, is equal to minus i h psi. So what h is, is it's a rule for how you update the state of a system. If you know it in an instant of time, then at the next instant of time, you make an incremental change in the state of the system proportional to minus i h times psi. It's a little machine for telling you how to update the state of the system. That sounds awfully deterministic. It sounds awfully much like classical determinism. Classical determinism is also a set of rules for updating the state of a system. 
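A short sketch (added) tying the last two steps together: to first order in epsilon, unitarity of U(epsilon) = 1 - i epsilon H forces H = H dagger, and the finite-time U = e^(-iHt) built from a Hermitian H is exactly unitary. The 2 by 2 H below is just a random example.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (A + A.conj().T) / 2                 # a random Hermitian "Hamiltonian"

eps = 1e-6
U_small = np.eye(2) - 1j * eps * H       # U(eps) = 1 - i*eps*H
# U^dagger U = 1 + i*eps*(H^dagger - H) + O(eps^2); with H Hermitian the linear term vanishes.
print(np.linalg.norm(U_small.conj().T @ U_small - np.eye(2)))   # tiny, of order eps^2

U = expm(-1j * H * 0.7)                  # exact evolution operator for some time t = 0.7
print(np.allclose(U.conj().T @ U, np.eye(2)))                   # True: exactly unitary
```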
So in what sense are things not deterministic if you know how to change the state for more instant of time to another? Well, they're not deterministic in the sense that even if you know the state of a system, you simply don't know the results of every measurement. The knowledge of the state of a system in quantum mechanics is not equivalent to the knowledge of the outcome of every experiment. What does it tell you, the state of the system? It tells you probabilities. So this is basically a rule for updating probabilities. It's not a rule for updating knowledge, complete knowledge of the state of the, of the set of experiments that you could do. It's not. But in some sense or another, the state vector evolves. I don't want to call it deterministically because deterministically carries baggage with it. But it evolves in a known and definite way. Now this is true, but only if you don't interfere with the system. In classical mechanics, you can interfere with the system arbitrarily gently and not cause the, cause any changes in the way it's evolving. In quantum mechanics, when you make a measurement, you necessarily disturb the system. So this is the evolution of the system of an undisturbed system that is not in the process of being measured, yeah. What if the system happens to be in an eigenstate to measure what you're going to make? It's all right. So what? And you would not disturb the system. Well, that's a special case. Yeah, you're right. You're right. That's right. If you're in a special case of an eigenstate of the system and you measure it, it doesn't disturb it. So, right. So that is an exception to what I said. But generally speaking, the generic situation is when you measure something, you disturb something else about the system. Okay. This equation is the Schrodinger equation. Now, it's sometimes called the generalized Schrodinger equation. The actual Schrodinger equation, a Schrodinger wrote it down, was for a particular kind of system, the motion of a particle. But it has exactly this form. This first became known as the generalized Schrodinger equation, and then people dropped the generalized. We now think of this as the form of the Schrodinger equation. And what it is, is it's an equation which tells us how states change with time. It can be applied to the simplest system of a single spin. It can be applied to the second simplest and the third simplest system, and to systems of arbitrary complexity. But this is the form that it has. All right. So that's where we, more or less, were last time. Okay. So let's erase everything but the Schrodinger equation. Actually, this is called the time-dependent Schrodinger equation. The time-dependent Schrodinger equation is a equation for how things change with time. There's also another equation which is called the time-independent Schrodinger equation. H is an observable. It corresponds to the Hamiltonian, but Hamiltonian is nothing but energy, at least in classical mechanics. So one might expect that H is the operator that's representing the observable energy of a system. Okay. Now, we're going to make a measurement of the energy of some system. How do we do it? Let's not worry about that. There's one way we could do it. We use E equals mc squared, and then we weigh the system carefully, really, really carefully, and in that way determine its mass and its energy. Okay. That's not a very efficient way to do it because the change over the energy of a system when you heat it up or something is so negligible that... but that's one way. 
There are many ways to measure energy. But what are the possible outcomes of the measurement of the energy of a system that, of course, depends on the system? It's not something you can answer in general, but depending on the system. For example, if you measure the energies of an atom, you get one of a denumerable family of possible answers, namely the energy levels of the atom. If you measure the energies of a harmonic oscillator, again, you get a denumerable collection of possible answers. If you measure the energy of a particle moving in a non-closed orbit, then, in fact, you get a continuous possible family of possible answers. So the question of what the energies are depends on the system, and it depends on the choice. Of course, it's not a choice. The systems have Hamiltonians. It's not up to you to dictate to a system what its Hamiltonian is. It may be up to you to try to discover what its Hamiltonian is. But let's use the terminology, you get to choose the Hamiltonian, meaning to say you get to study different possibilities. And depending on the choice of operator that you put here, that you call h, you can get different energy levels. How do you calculate and determine what the possible outcomes of the energy measurement are? You use the statement that the values that the energy can take on are the eigenvalues of the Hamiltonian, like any other observable. So you write another equation, h on. Let's call it the i-th. Now, i now represents an eigenstate of the energy. It doesn't represent any old basis. It represents, in particular, the, in fact, maybe we should indicate that, by writing in here the energy, the i-th possible energy level, h on e sub i is equal to the eigenvalue e sub i times the eigenvector associated with that energy. The notation of putting e sub i inside the ket vector just means nothing more than this is the ket vector, which happens to be the eigenvector of h with eigenvalue e sub i. And how many such e sub i's are there? Well, there's a complete basis of them. Whatever the dimensionality of the system is, whatever the dimensionality of the space of states is, there's that many eigenvectors of the energy. And you're a job as a physicist to try to find them. So that's the time independent Schrodinger equation. Next thing we did, again, a lot of review tonight. Next thing we did was talk about the evolution, time evolution of expectation values, average values. I complained bitterly about the terminology expectation values, but many, many years ago I gave up on this, and so it's become expectation value, but it really should be average. It's okay, I'm okay with expectation value. I'll use expecta- but it really should be average. Okay, and here's what we found. Well, what we said is the thing that changes with time is the state vector. So we're interested in calculating psi of t, remember the expectation value, I'll just remind you, the expectation value you get by sandwiching the observable in a sandwich between the bra version of the state vector and the KED version of it. We proved this. We proved that this is the same as the average, the statistical average, and I won't go back over that now. This is the average value of L, the observable L, and we can ask how it changes with time. It changes with time because psi changes with time, and here we know how psi changes with time. There's two factors of psi here. 
So when we go to differentiate this with respect to time, there will be two terms, one coming from differentiating this and one coming from differentiating that. If you remember that when you flip from bra to KED, you have to complex conjugate, and a complex conjugating changes the sign of I, you will come to the conclusion that this is equal to. Now, I always get the sign wrong. Let me try to get the sign right. I think it's minus I times LH minus HL, average value of that, psi, psi. We worked that out. We used that L on psi gives you H on psi. That's this piece over here, and we used that L when it acts to the left gives you a term with H acting to the left with a minus sign. Okay, so here's what we find then. We find that the time derivative of the average of L, let's just write it this way, T of the average of L, I'll use that notation for the average of L, as a function of time is equal to minus I times the average of the commutator of L with the Hamiltonian. If you know what the Hamiltonian is as an operator or as a matrix, and you know what L is as an operator or a matrix, you can compute the expected commutator. And when you've computed the commutator, you can compute its average value. Just use this rule, the sandwich rule, let's call it the sandwich rule, and it tells you the left hand, the right hand side, tells you what the left hand side, how it changes with time. Do you find it odd that there's an I there? Well, I did the first time I saw this. First time I saw this, I said, wait a minute, wait a minute, the time derivative of a real thing can't be imaginary. We're going to find out that commutators of real things are always imaginary, so the imaginary will go away. We're going to give you some examples, and you'll see this is not hard to prove, but we'll do it by example. Sometimes this is written in a shorthand. The shorthand is just that dL dt is equal to minus i times the commutator of L with H. But at least at this stage of our knowledge, what this means is that it should always be sandwiched and thought of as an equation for the average values of things. So there we have what are called the Heisenberg equations of motion of a system. That brings us to the study of commutators. The study of commutators, and I want to go through that again a little bit. Commutators have some rules associated with them. The rules are very, very reminiscent of the rules for Poisson brackets. So go back and read about Poisson brackets. I'll write down what the rules are, general rules, algebraic rules of Poisson brackets and commutators, and then we'll just check. Are they really true for both of them? Well, the rules for Poisson brackets we did last quarter. Poisson brackets are written this way, a, b. And I'm not going to write out how you express them in terms of the p's and q's of classical mechanics, but there was a few simple rules associated with them. First of all, there were some linearity rules that if you add two operators, sorry, if you add two things and then take the Poisson bracket with a third, they add and so forth, this is certainly true of commutators. But there was a couple of non-trivial rules. Well, first of all, the Poisson bracket is odd, which means that when you interchange the two entries, it changes sign. Minus b, a. That was one property. It was certainly shared by the commutator. If you interchange l and h, you interchange these two terms and you change the sign of the commutator. So that's true of the commutator. 
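Spelled out, the two-term computation described above (added here for clarity, with h bar set to 1) is

```latex
\frac{d}{dt}\langle\psi|L|\psi\rangle
  = \langle\dot\psi|L|\psi\rangle + \langle\psi|L|\dot\psi\rangle
  = i\,\langle\psi|HL|\psi\rangle - i\,\langle\psi|LH|\psi\rangle
  = -\,i\,\langle\psi|\,[L,H]\,|\psi\rangle ,
```

using the Schrodinger equation for the ket, d/dt of psi equals minus i H psi, and its conjugate for the bra, which picks up a plus i and an H on the left; this is where the Hermiticity of H enters.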
Also, if a and b are now quantum variables, this is also equal to minus b times a. There's really only other one important relationship, and it's the product rule. The product of a and b Poisson'd with c, the operation of Poisson'ing. If you remember, the Poisson bracket was a combination of derivatives. Derivatives of various things. If you go and use that definition of the Poisson bracket, use the rule for derivatives of products. What you'll find out is that this is equal to, this is the Poisson bracket side, it's equal to a times the Poisson bracket of b with c plus the Poisson bracket of b with c times, sorry, Poisson bracket of a with c times b. Now, the order that you write down a and the Poisson bracket, that's not important. These are just numbers. I mean, these are just ordinary functions or numbers, and it doesn't matter which order you multiply them. But I wrote them in an order which I know that the commutator will also satisfy. Operators, the order does matter. So I wrote these in a specific order, namely I put a on the left over here and then I put b on the right over here. Maybe it's not too surprising. a appears on the left in this factor and b appears on the right, so I kept the order intact. Okay, let's see if this is true for commutators. What would it say? It would say, well, let's write it out, a times commutator bc, sorry, go back. It would say that commutator of ab with c is equal to, we'll put a question mark here. I don't know if it's equal to yet, a times the commutator of b with c plus a with c. Yeah, a with c times b. Let's check if that's true. This is very easy to do. We don't have to do any derivatives. All we have to do is write out the meaning of this. Commutator of ab with c is the same as ab times c minus c times ab. Everybody see why? First you put ab on the left times c and then you put c on the left and ab on the right and you take the difference. Definition of the commutator. Let's see what we get on the right-hand side. On the right-hand side, we get a times bc minus cb. That's this term. And then from here, we get plus ac minus ca times b. These are operators now. And in this calculation, they're operators. The order matters. Okay, let's see what we have. We have abc. Here's abc. That's good. All right, so this is a friend. We have minus cab. That's this one over here, cab. That's also a friend. Our enemies are acb, but they occur both the same way. No, no, acb. That's a bad one. But here we have also acb. One with a minus sign, one with a plus sign. So it's acb minus acb. And we're finished. The commutator satisfies the same algebraic rule as the plus-home bracket. Is this an accident? No, of course it's not an accident, but we'll get to a deeper reason for it eventually. Which comes first and which is more basic? Certainly the commutator is more basic. Quantum mechanics is more basic than classical mechanics. Classical mechanics is an approximation to quantum mechanics. So it stands to reason there ought to be a classical approximation to the notion of commutator. And there is. It's a, that leads us to speculate if you like. But I'll take it as not a speculation. I'll take it as something that we'll discover, or we'll be able to prove later. That when we go from quantum mechanics to classical mechanics, the Poisson bracket, let's see, the Poisson bracket becomes, I think, minus i. Now this equality doesn't quite make sense. This is something classical on the right-hand side. Maybe we should put arrows there. Commutator A with B. 
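The product rule just verified term by term can also be spot-checked numerically with arbitrary matrices (an added sketch):

```python
import numpy as np

def comm(X, Y):
    return X @ Y - Y @ X

rng = np.random.default_rng(2)
A, B, C = (rng.normal(size=(4, 4)) for _ in range(3))

lhs = comm(A @ B, C)
rhs = A @ comm(B, C) + comm(A, C) @ B    # note the ordering: A stays on the left, B on the right
print(np.allclose(lhs, rhs))             # True
```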
I have a question about the last equation that you wrote there. It seems like the right side is twice what the left side is. Well, I don't think so. A, B, C, A, B, C. C, A, B, C, A, B with a minus sign. And then we have a couple of extra terms. The extra terms are minus A, C, B, plus A, C, B. Yeah, OK, good. You're allowed to make mistakes. It's good. Slow me down. Yeah, yeah. Now, nothing in what I've done so far tells you that this should be identified exactly with that. It might be identified with this times a numerical multiple. The Poisson bracket might be seven times a commutator, or a quarter of a commutator, and they would all still make sense. In fact, there's a dimensional difference between these. Poisson brackets are derivatives of A with respect to X, derivatives of B with respect to P. They have units which are different than just A, B. The difference of units is actually soaked up in Planck's constant. If we work with Planck's constant equal to one, then Poisson brackets are just minus I times commutators. But for dimensional reasons, there's an H bar there. Let's write it another way. Let's put the H bar on the left-hand side. Certainly, classically, we think of commutators as very small. In fact, we think of them as essentially zero, A, B minus B, A. So in the classical limit, the commutator is something which is just zero. How does that happen? Well, it's the H bar here. The H bar is very, very small, in units in which H bar is very, very small, which are just ordinary units. I mean, units like meters, seconds, kilograms. H bar is an incredibly small number, some 10 to the minus 30-something, I forget. It's a very small number. So in the classical approximation, commutators are negligibly small. But when keeping track of real quantum mechanics, the commutator is not zero. So that's the meaning or the notion of commutator. And the nice property now is that this relationship here is simply the quantum analog of an equation that we studied last quarter, dL by dt is equal to the Poisson bracket of L with the Hamiltonian, which matches quite well with this. That is the essential reason at this stage for identifying H with the Hamiltonian. Classical mechanics was packaged in this fishy way called the Poisson formulation. The fishy Poisson formulation encapsulates, you know, organizes everything into relations involving Poisson brackets, and in particular it says that dL by dt, L could be anything. Incidentally, L here is not the angular momentum, it's anything. dL by dt is equal to the Poisson bracket of L with H, and in quantum mechanics, in the sense of average value, we found that dL by dt is minus i times the commutator. So that's the essence of the connection between the Hamiltonian of classical mechanics and quantum mechanics. Yeah? Of what? L is any observable. I used L for linear operator, but of course it should be Hermitian linear operator, so I might have used H, but then it would have gotten confused with the Hamiltonian. Oh, H is Hermitian, and so is L. But I used L for a generic observable. So I assume that L is Hermitian. Originally I used it for linear, but now I'm using it to mean linear and Hermitian. Alright, L could be H. We could study the equation of motion of how H changes with time, or better yet, how the average of H changes with time. Let's do so. Here we are. dH by dt is equal to minus i times the commutator of H with H. But the commutator of anything with itself is zero. And so incidentally is the Poisson bracket of anything with itself. 
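The dictionary being set up in this passage, written out compactly (an added summary; the double arrow means the classical expression on the left goes over to the quantum one on the right):

```latex
\{A,B\}_{\text{PB}} \;\longleftrightarrow\; \frac{1}{i\hbar}\,[A,B],
\qquad\text{so}\qquad
\frac{dL}{dt} = \{L,H\}_{\text{PB}}
\;\longleftrightarrow\;
\frac{d\langle L\rangle}{dt} = \frac{1}{i\hbar}\,\bigl\langle [L,H] \bigr\rangle .
```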
Because Poisson brackets are odd, they change sign when you interchange the two objects. That means the Poisson bracket of a thing with itself is zero. So from a classical point of view, we could have derived the conservation of energy from the principle that the time derivative of the energy is the Poisson bracket of the energy with itself. What we find in quantum mechanics is a very simple thing, it's a similar thing. The time derivative of the, again, the Hamiltonian, is the Poisson bracket, is the commutator of the Hamiltonian with itself, and is therefore zero. Energy is conserved in quantum mechanics, or at least the average of the energy is conserved. Now it's better than that. There's a very definite sense in which the energy is exactly conserved. But not for tonight. In the introductory example there we have two operators to operate. The physical example of that is the two particles with physics. Two what? Two object particles with physics. If we have two spins, yeah, absolutely, absolutely. In fact, whenever we build a system out of two systems, in general the observables for one of the systems and the observable for the other systems commute with each other. That's why you can measure the position of me and Sanjay simultaneously because we're too, because our positions commute. And so do our spins, but I think that's the question. That is the question you were asking? Yeah, okay, right. Yeah, but we haven't come yet. That's where I was going to go later, but I don't know if we'll get to it, to the question of composing systems, taking systems and making bigger systems out of composing them. That would bring us to the issue of entanglement. I have a feeling we probably won't quite get there, but maybe we'll at least partly get there. All right, the next thing we want to do is solve Schrodinger's equation. I don't mean the one for the eigenvalues and the eigenvectors. That we solve by standard methods of, if we happen to know what the Hamiltonian is, in particular if we happen to know it in some matrix form, then we're just calculating the eigenvectors and eigenvalues of some matrix. Let's suppose that's done. We have the eigenvectors, we have the eigenvalues. Let's now find the general solution of Schrodinger's equation. In other words, how things evolve with time. I realize that all of this is abstract, that the only system that we have to apply this all to is the world's simplest system again, and that's not very much. At this stage, you're probably saying to yourself, when are we going to be thinking about real things, objects, systems? We will. We will, by all means. But I think having all of this apparatus, this abstract formulation of quantum mechanics, will make it very easy and quick to start applying it to lots of things. Yeah. Do you know what H is for this spin system yet? Well, OK. So let me answer that right now. That depends on what we do with this spin. If we have a spin in free space, far from anything else, then its energy or its Hamiltonian is just basically zero. Incidentally, adding a constant to a Hamiltonian doesn't do anything. And the reason is because a constant commutes with everything. So apart from a possible additive constant in the energy, a spin which is not around anything else, it's Hamiltonian zero, and that means that it doesn't change with time. 
If you put it in a magnetic field, if it happens to be a real spin of an electron, you put it in a magnetic field, the magnetic field induces a Hamiltonian, which I hope we'll get to tonight, and it causes the spin to do things, to have a time dependence. And then what the Hamiltonian would be would have to do with which direction the magnetic field was in. So Hamiltonians can be changed. I mean, you can change a Hamiltonian by changing the direction of a magnetic field. But we'll come, I hope we get to this example tonight. But let's just first write down the general solution of the Schrodinger equation. To do that, we start by saying, let there be a basis of eigenvectors, I guess we called it, this is the same as E sub i. If I don't want to write E sub i, I will just write the i-th eigenvector, but it means the eigenvector with energy E sub i. All right, we can take any state. Since i or E sub i are a basis of states, we can take any state, let's call it A. Or should we call it psi? Let's see, what did I call it in my notes? No, I think I called it psi. I did. I called it psi. Psi. And let's think about psi at time zero. Well, no, let's leave the time arbitrary. Psi is a sum over all of the eigenstates, or all the eigenvalues, of some set of coefficients, alpha sub i, I've used the term alpha in the past to denote the coefficients of states of basis vectors. So any state whatever can be written as a superposition of eigenvectors of the energy, since the eigenvectors of a Hermitian operator are a complete basis. So this is psi, and let's say at time t equals zero. Now, how does the time t, how does the state change with time? What's changing here? Well, the states i are just a fixed set of states. It's the alphas which change, the coefficients here which change with time. All right, so the coefficients in the expansion of an arbitrary state are typically time-dependent coefficients. That's what gives rise to a time dependence in expectation values and other things. Now let's write the Schrodinger equation. So we have i psi dot, sorry, we have psi dot, which we can now write. Let's take psi dot, d by dt of psi, the psi dt. That's equal from here, sum on i, the alpha sub i of t with respect to t times i. To differentiate the state, all we do is differentiate the individual coefficients. Now, Schrodinger's equation tells us to set that equal to minus i. Oh, different i. Holy smoke. Okay, j, j. Set that equal to minus i times h acting on this. But h can be brought in through the summation here, and this can just be written as summation over i, the coefficient numbers, alpha j, summation over j, j. Alpha j of t, h on j. Now what is h on j? Somebody said zero? No. What about that? i and e sub i are the same vectors. Go up to the top equation. h on the i-th eigenvector is the energy e sub i on the i-th eigenvector. Okay, so this just becomes e sub j. Here we have on the left-hand side a sum of basis vectors with coefficients alpha j. Sorry, where are we? Did I make a mistake here? Oh, sorry, this should have gone down here. This is wrong. Here. This should have gone down here. This is sum on j d by dt of alpha j. I'm lost. This is correct up to here. Let's go back to here. Now let's calculate the time derivative of this. What do we do? We put in minus i times h on j. Here is where I use h on j is equal to e j on j. Okay, I think that's what I had a moment ago anyway, isn't it? I think that's exactly what I had. Minus i, take out the minus i. 
Alright, this thing and this thing are the same, so this sum must be exactly the same as this sum. If you have two sums of basis vectors, the coefficients must be the same. If you have two sets of basis vectors, sorry, if you have two linear superpositions of the same set of basis vectors, and they're equal to each other, then the individual coefficients have to be equal to each other. So it follows that d alpha j by dt is equal to minus i times e j alpha j. All I've done is set d alpha j by dt equal to the coefficient in this equation over here. And that's the whole upshot. It tells you how the alphas change with time. For each j, this is a differential equation. Think of this as just a differential equation for alpha. What's the solution of this differential equation? We've done this kind of equation over and over again. The time derivative of something is proportional to the thing itself. Exponential solution, right? So the solution of this equation is alpha j equals alpha j at time 0 times e to the minus i e j t. If we differentiate alpha j with respect to t, it will bring down this factor minus i e sub j times the same thing. So we can now write the general solution of the Schrodinger equation. We've solved it for each individual eigenvector. This is the way the individual coefficients alpha change with time. But now we can go back, let's go back to the original equation that the state vector is a sum over the eigenvectors. And let's plug in what we now know about alpha. We now have solved for alpha, and here we can write the general state vector at time t, if you know it at time 0 — let's suppose we know it at time 0. At time 0, it was just alpha j of time 0 times j. All that happens is each alpha j picks up a time dependence, and the time dependence is just governed by the energy of the state. Earthquake? All right, so all you do to find the general solution is you take the state, you break it up into eigenvectors of the Hamiltonian, and then whatever it was at time 0, the only thing that happens is the phase of each entry here changes with time with an e to the minus i times the energy of the state. Where would the h bar go in this formula? Anybody know? Underneath the i or underneath the energy or underneath the time, it doesn't matter, but it goes into the denominator here. This is the connection between energy and oscillations. These are the oscillations of the wave function, not oscillations of real things, but oscillations of the coefficients of the state vectors. While we're at it, let's do one more thing. Let's use our general solution of the Schrodinger equation to calculate how the average of L changes with time. We have an equation for it, but now we actually know how psi changes with time, and this is just a little bit of, not gymnastics, just a little bit of fiddling around with equations. Some of you may recognize the equation we're about to get; whether you recognize it or not, it's just explaining how the symbols are used, if you like. Okay, let's calculate, given that the state changes like so, let's ask how the expectation value of an arbitrary observable changes. So what do you have to know to know how the average or the expectation value of L changes? L is some operator. I haven't told you what operator. Of course, you need to know what operator in order to say how it changes. Of course you do, but still, we can go a little ways. Let's plug in this formula for psi of t: sum over j, alpha sub j of zero, e to the minus i e j t, times j. 
Now we have L, put L over here, and then we're going to do the same thing to this bra vector over here, but what happens when we do a bra vector? Let's do the bra vector. The psi of t is summation. We better use a different summation variable, k. And then we will have alpha star sub k of zero. When you go from bras to kets, you have to complex conjugate. We'll also have to change this to e to the plus i, e sub kt times the bra vector k. I've changed the summation index here so that I don't confuse it with this one. I've complex conjugated here and flipped from kets to bras. So I can put this over here. So the summation of a k, k, alpha star k, alpha star k at time zero. This is at time zero. Times e to the plus i, e kt. So it's a double sum. It's a double sum over k and j. That's not terribly illuminating, but let's just for fun, let's work it out. Now what's the inner product of k with j? We see here the inner, oh no, not, sorry, it's not the inner product of k with j. What we see here is L sandwiched between j and k. L sandwiched between j and k. The operator L acts on the state j and then you project it onto k. Let's give that a name. Let's call that Lkj. That's exactly the kind of matrix elements we computed when we talked about the matrix representation of observables. The inner product of alpha, sorry, of L on j with k. That's what we called Lkj. Just the matrix representation of the observable. And then we have alpha star k of zero, alpha j of zero. We'll talk about that in a moment. And then we have a phase e to the i, e sub k minus e sub j times t, summed over j and k. Now the main reason I did this was just to illustrate the machinery. I did not do this because this is a formula of any importance right now, but some of you may nevertheless recognize it. It is Heisenberg's matrix formulation of quantum mechanics. Heisenberg's formulation of the way averages change with time. If you go back to Heisenberg's first paper on quantum mechanics, he guessed this formula. Basically just guessed this formula. Every observable has two frequencies associated with it. That was weird. It was common for people to understand that an observable might be an oscillating observable. It might be something that oscillates and would have a frequency associated with it. But Heisenberg said there are two frequencies associated with every matrix element of an observable. This is where it came from. No, it came out of Heisenberg's head. But this is the way Dirac explained what, this is the way Dirac understood what Heisenberg was saying through this machinery. First just the pieces. The pieces, this represents the observable, whatever the observable is, it's its matrix representation. These things have to do with the initial state, the state of time t equals zero. And these things here have to do with the way things change with time. Everything changes with time with two separate frequencies, and then you sum it all together, and that tells you how L changes with time, how the average of L changes with time. So we're putting together a bunch of machinery that we'll use. Are there any questions? Yeah. In my reading this correctly, is it the case that the magnitude of the alpha stays the same over time? The alphas are fixed. The alphas are just the alphas at time t equals zero, period. Yeah, yes. No, I think you're going back to here. Yeah, yes, that's correct. The magnitudes of the alphas stay fixed. It's the relative phases between them. So all of the time evolution has to do with these relative phases. 
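To collect what was just derived in one place, here is a compact summary in symbols — a sketch added for reference, not part of the spoken lecture. Here ħ is set to one, as in the lecture, |j⟩ denotes the energy eigenvector with eigenvalue E_j, and L_kj are the matrix elements of the observable L:

```latex
% Schrodinger equation and the energy eigenvectors (hbar = 1)
i\,\frac{d}{dt}\,|\psi(t)\rangle = H\,|\psi(t)\rangle, \qquad H\,|j\rangle = E_j\,|j\rangle
% Expanding |psi(t)> = sum_j alpha_j(t) |j> gives one equation per coefficient:
\frac{d\alpha_j}{dt} = -\,i\,E_j\,\alpha_j(t)
\;\;\Longrightarrow\;\;
\alpha_j(t) = \alpha_j(0)\,e^{-iE_j t}
% so the general solution is
|\psi(t)\rangle = \sum_j \alpha_j(0)\, e^{-iE_j t}\, |j\rangle
% and for any observable L, with matrix elements L_{kj} = <k|L|j>,
\langle L\rangle(t) = \sum_{j,k} \alpha_k^{*}(0)\,\alpha_j(0)\, L_{kj}\, e^{\,i(E_k - E_j)t}
```

Note, as in the answer just above, that the magnitudes of the alphas never change; only the relative phases do, and it is those relative phases that drive the time dependence of the averages.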
Yeah. So in the Schrodinger equation, is that a partial derivative with respect to time? Oh, that actually doesn't matter. This just means the whole derivative with respect to time. It's usually written as a partial derivative, but it could just as well be written as a total derivative. Same thing. Psi only depends on time. But psi will also be a function of position? Well, yeah, that's right. We will eventually come to think of psi as a function of time and position, and that's for later. For the moment, it doesn't matter whether we use partial derivative or total derivative. It just means the change in psi in a small time, and it doesn't matter whether it's a partial derivative. When you derive the time dependent Schrodinger equation, you use the fact that the states stay normalized in time. Is that an empirical fact? Well, it is an empirical fact, but in fact, I think it's much more than that. The evidence that it's much more than that is also an empirical fact. It's a curious empirical fact. People have tried to change quantum mechanics, and it seems to be as hard to change quantum mechanics as it is to change arithmetic or logic. If you start fooling around with it and try to change the rules, you very quickly find that something really bad happens. Probability is not conserved. Probability is not positive. Terrible non-localities happen where you can communicate over infinite distances. And so quantum mechanics is not a thing which is easily changed. In fact, there are no known changes that can be made in it. Of course, there is one known change that can be made. You can just replace it by classical mechanics. That's a consistent theory. The reason it's a consistent theory is because it's a good limit of quantum mechanics. But there's no way to deform it a little bit. My friend David Finkelstein, who is a very wise man, a great scientist, once explained to me that there are two kinds of revolutions in physics. He called one a plastic flow and the other a brittle cracking. What he meant by that: he considered, for example, general relativity to be a plastic flow. You can change it a little bit. You can deform it a little bit. You can change Einstein's equations by changing variables. It's not something that you cannot change at all. And you can change it in little ways and deform it. Special relativity is very hard to change. Once you change special relativity, it's very brittle, and the whole structure of it goes down the drain. But nevertheless, you could change it a little bit. It's harder to change than general relativity. Most things go wrong, but quantum mechanics is absolutely brittle. And by that, he meant not that the theory is in danger of being broken by anything, but that if you try to change it, you will get broken. So quantum mechanics is very hard to change. And I think the evidence for that is just all of the years over which people have tried to change it. Yeah? So the E sub-j and E sub-k, those are both real numbers? Yes. We derived that from unitarity. Unitarity is this condition that U dagger U is equal to 1, or the identity — that's called unitarity. We derived that the Hamiltonian is Hermitian. And from that, we can derive that its eigenvalues are real. Yeah? A question on syntax? On what? Syntax. Syntax. So up at the top, we have H acting on the ket vector E sub-i equals E sub-i times the ket. Now, first, the E sub-i on the right-hand side of the equation is an eigenvalue? No — well, OK. This whole thing is an eigenvalue times an eigenvector. 
But it's the eigenvector with eigenvalue E sub-i. So you're labeling the eigenvectors by their eigenvalues. Right. Here, it's just the eigenvalue. That's what I meant. Right. And incidentally, this is a case where repeated indices do not get summed over. This is a case of the non-summation convention. Let's apply some of this apparatus to the problem of a single spin, a spin in a magnetic field. Let's see if we can calculate, in particular, how the average of the components of the spins change. As we do this, depending on your memory for classical physics, you may start to find some of it familiar. If not, I will remind you. It's not as bad as fingers on the old line. No, no, it's not. No. No, it is not. That wasn't terribly bad. It just reminds me of my cat in the middle of the night. OK. We have a single spin. Let's put the spin into a magnetic field. If we put the spin into a magnetic field along, let's take the z-axis. Let's be simple. Let's take the z-axis, put the spin in, put the magnetic field along the z-axis. A spin, in particular, the spin of a charged particle, is a little electromagnet. A charged particle spins around, and as it spins around, there are currents that flow, and it creates a little magnet. Put a magnet into a magnetic field, and you get an energy. And basically, the energy is proportional to the misalignment of the magnetic field with the spin, with the axis of the spin. And the simple formula is that the energy is proportional to the dot product of the spin with the magnetic field. That's where we're going to start. We're going to start with an energy which is proportional to the magnetic field, and the magnetic field is along the z-axis. So what is the dot product of the spin with the magnetic field that's going to be proportional to the z-component of spin? The dot product of the spin with the magnetic field will be the component of the spin along the axis of the magnetic field. That's a guess. Take that as a guess, which we could then study and then ask if the experiments on the spin agree with it. So we have a magnetic field along the z-axis. Let's call it B sub z. And we have a spin, and the spin has components, sigma x, sigma y, and sigma z. People ask me, incidentally, why in the book that I'm writing about quantum mechanics, there aren't more figures. There are lots of figures in the classical mechanics book. The reason is very simple. I mean, it's a real reason. You can't draw quantum mechanical systems. Quantum mechanical systems are, by the very nature, not things that you can draw. If you try to draw them, you will always be trying to draw things which cannot simultaneously make sense. In particular, if I try to draw the components of the spin, I'll be trying to draw components of the spin which don't commute with each other, and it doesn't make sense to draw them. Nevertheless, there it is, magnetic field in the spin. And we're just going to write now. We're going to take this as a guess or as a postulate. The Hamiltonian is just going to be proportional to the component of spin along the z-axis times something proportional to the magnetic field. I'm not going to write out. It contains the electric charge. It contains the magnetic moment. It contains the magnetic field. But let's just call the coefficient here. Let's call it omega over 2. That's the definition of omega, if you like. Whatever the right combination of magnetic field, charge, magnetic moment, in particular, magnetic moment and magnetic field will lump them up and call it sigma over 2. 
Sorry, and call it omega over 2. And the reason for the omega will become clear shortly. Sorry, the reason for the 2, for both the omega and the 2 will become clear shortly. OK, now what I want to calculate, what I want to calculate, I want to calculate the time derivative of the averages of the all the components of the spin. Here we have the z component and the Hamiltonian. But I want to find out how all the components, sigma x, sigma y, and sigma z, how they vary with time. So in particular, I want to calculate the time derivative, dot for time derivative, of sigma x and sigma y and sigma z. Well, what is the rule? I'm afraid the rule has been erased, but I think you probably remember it. And by definition now, this means the average of sigma x. I'm not going to bother writing the bracket, the angular brackets there. We are now talking about the averages of sigma x, sigma y, and sigma z and the time derivatives of the averages. All right, so if you remember, this is equal to minus i times the commutator of sigma x with the Hamiltonian, right? So the Hamiltonian is sigma z, and we get another factor of omega over 2. Oh, yeah, that's correct. Likewise here, sigma y with sigma z times omega over 2, and likewise here, sigma z, sigma z. Now one of these is very easy. Which is the easy one? The bottom one. Everything commutes with itself, right? Everything commutes with itself. So this is zero, and what does it tell us? It tells us that the component of the spin along the magnetic field, the average of it does not change with time. That also tells us, well, if we were doing classical physics, which we're not, if we were doing classical physics, it would tell us that the angle of a little vector here with the magnetic field stays the same. But we're not. We'll just take this for what it says. It says that the z component doesn't change with time, the average. So that's the first thing we discover. Now to go beyond this, we have to know what the commutation relations are of the sigmas. So let's work them out. I think we have enough time. Let's work out the commutation relations of the sigmas, all pairs of them, and see what we get, and then plug it into here. I'm going to start with sigma x, sigma z, but instead I'm going to do it the opposite way. I'm going to calculate sigma z, sigma x. It's the same thing except with a minus sign. If you interchange the two of them, so we'll have to remember. I'm calculating sigma z, sigma x. We'll have to remember to change the sign. OK, so what is this? This is the product. Now, everybody remember what sigma z is as a matrix? We're going to do it now by matrices. I'm going to say the way to calculate this is to remember that sigma z and sigma x are matrices, or at least they have a matrix representation, and we just have to multiply the matrices. Sigma z, what was that? That was 1, minus 1. And what about sigma x? That's 0, 1, 1, 0. That's the first term in the commutator, sigma z times sigma x. And then the other term is just to put them in the opposite order. 0, 1, 1, 0, times 1, 0, 0, minus 1. These operations were not really answered. No, we're just taking it. Just the commutators of the matrices is what it comes down to. We could do it without the matrices just by applying the operators to the vectors, the vectors up and down, and we know enough to actually do that without actually using the matrices. But the matrices are a very convenient tool for manipulating operators. Operator product is the same as matrix product. 
So, does everybody know how to do a matrix product? I hope so. Let's do this one over here. Yeah. The upper left-hand corner is the product of this row with this column: 1 times 0 plus 0 times 1, that's 0. The upper right corner is the product of this row with this column: 1 times 1 plus 0 times 0, that's 1. The lower left corner is the product of this row with this column, and that's minus 1. And then the last one is this row times this column: 0 times 1 plus minus 1 times 0 — this one's still 0. Okay, now let's do the other one over here. 0 times 1 plus 1 times 0 is 0. This one times this one gives minus 1. This one times this one gives 1. And this last one is 0. I think I missed a sign somewhere. Yeah, no, no, it's right, it's right. This one is plus, this one is minus. Okay, so they just gave the same thing with opposite signs. Where's the minus sign? Sorry — this minus this gave two terms which add. So the commutator itself is just equal to twice the matrix 0, 1, minus 1, 0. Now, do you recognize this thing over here? If I put an i over here and an i over here, you might recognize it as being related to sigma y. Let me write down sigma y. Sigma y is equal to 0, minus i, i, 0. And what happens if I multiply sigma y by i? i times minus i is 1, and i times i is minus 1. So what this is, is just twice i sigma y. 2i sigma y. That's kind of neat. It's closing up on itself: commutators of sigmas with sigmas are sigmas. That's good, because it's going to allow me to write equations for the time derivatives of sigmas, which are just going to be sigmas again. Okay, so let's see what we have. We have: the commutator of sigma z with sigma x is equal to 2i sigma y. And now I'll write down the rest of them. Incidentally, as I said, if you want to get the opposite order, you just change the sign. The commutator of sigma x with sigma y is equal to 2i sigma z. And the commutator of sigma y with sigma z is equal to 2i sigma x. These are the three commutation relations. Does this remind you of anything? Do you remember the Poisson brackets of angular momentum? The Poisson brackets of angular momentum were: Lz with Lx equals Ly, Lx with Ly equals Lz, and Ly with Lz equals Lx. So apart from a factor of 2i — remember there is an i in the connection between Poisson brackets and commutators — these are essentially, apart from a factor of 2 which we can absorb, the angular momentum Poisson brackets. There's an important clue there for the meaning of angular momentum. But we're not going to dwell on it for the moment. These are the commutation relations of the components of the spin, and they just give back the components of the spin again with this funny cyclic ordering: z x y, x y z, y z x. They're just permuted cyclically. Okay, so let's go over to here now. This now becomes minus i omega over 2. And now what's the commutator of sigma x with sigma z? It's minus 2i sigma y. So we get a minus 2i sigma y. The 2s cancel. That was the reason that I put a 2 in the denominator here in the first place, because I knew it was going to cancel. So let's cancel the 2. We have minus i times minus i, which is minus 1. So what we get is minus omega sigma y. Notice that the i went away. The i went away — I was worried about it about a half an hour ago, that oh my goodness, time derivatives of real things are going to get imaginary. But they didn't, because the commutators themselves had imaginary coefficients. And that's general. That's general. Okay, so there we are. We have sigma x dot is equal to minus omega sigma y. 
And sigma y dot — here we have the relationship for sigma y, and it involves the commutator of sigma y with sigma z, which is 2i sigma x. The correct relationship, if we work it out, is that this is equal to plus omega sigma x. Okay, so sigma z dot is zero. But what's happening to sigma x and sigma y? Can you see what's happening to sigma x and sigma y? Do you recognize these equations? These are the equations for a thing moving in circular motion, right? What's happening is simple. In the sense of expectation values — expectation values are real numbers — in the sense of expectation values, the spin vector is precessing around the z-axis. It's precessing around the z-axis. Sigma x dot is minus omega sigma y. Sigma y dot is plus omega sigma x. This is the same kind of formula that we discovered in classical mechanics when we talked about a rotor. Do you remember the rotor? And we also put the rotor in a magnetic field. The angular momentum executed exactly the same motion. And it's also like gyroscope precession. It's basically gyroscopic precession. A quantum mechanical spin — in the sense of expectation values; it doesn't make any sense to say that the quantum mechanical operators themselves precess, that doesn't mean anything. Quantum mechanical operators are just these matrices. They don't do anything. But because the state vectors change with time, the expectation values, the average values of the spin, execute a motion which is basically the same as a classical rotor precessing in a magnetic field. So we have now solved the problem of a spin in a magnetic field. We've solved the hardest possible problem associated with a single spin. There is no Hamiltonian which is more complicated than this for the single spin. It's the most general. Of course, you could put the magnetic field in the x direction. There's a little more — yes, you can solve a harder problem. You could put the magnetic field along some peculiar axis. But what you would find after you worked the whole thing out is that it would just precess around the peculiar axis of the magnetic field. You could just rotate it so that the magnetic field is pointing in a different direction. What's that? Any hints on why that's the correct value for the Hamiltonian? There really aren't very many operators. OK, so here's a general fact — you can work this out yourself. This is not hard to prove. The most general Hermitian operator in two dimensions is a linear combination of the unit operator, sigma x, sigma y, and sigma z. The most general Hermitian operator is a linear combination of these. Putting a unit operator into a Hamiltonian, that's like adding a constant to the energy. Unit operators commute with everything. So putting a unit operator in is not interesting. It does nothing. So we get rid of this. The next possibility is a general linear combination — add them up — of a coefficient times sigma x, another coefficient times sigma y, and another coefficient times sigma z. That just corresponds to putting the magnetic field at some angle. The simple case where we only took one of these is the magnetic field along the z-axis. If we took a general linear combination, let's say n sub x sigma x plus n sub y sigma y plus n sub z sigma z, this would just correspond to the component of the spin along the n-axis. And you would do exactly the same calculation, except you might work instead in the basis of eigenvectors of sigma dot n. So it really is the most general Hamiltonian that you can write down. And that's just because a two-dimensional space is so simple — there just aren't very many independent operators in it. 
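Before going on, here is a short numerical check of the algebra above — a sketch of mine, not something used in the lecture. It uses Python with numpy (my choice), sets h-bar to one as in the lecture, and picks omega = 1 arbitrarily. It verifies the three cyclic commutation relations and then evolves a spin prepared along x under H = (omega/2) sigma_z, confirming that the averages precess exactly as derived: sigma-x-bar = cos(omega t), sigma-y-bar = sin(omega t), sigma-z-bar constant.

```python
import numpy as np

# Pauli matrices, exactly as written on the board
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def comm(a, b):
    """Commutator of two matrices."""
    return a @ b - b @ a

# The three cyclic relations: [sz,sx] = 2i sy, [sx,sy] = 2i sz, [sy,sz] = 2i sx
assert np.allclose(comm(sz, sx), 2j * sy)
assert np.allclose(comm(sx, sy), 2j * sz)
assert np.allclose(comm(sy, sz), 2j * sx)

# Spin in a magnetic field along z: H = (omega/2) sigma_z, with hbar = 1
omega = 1.0
H = 0.5 * omega * sz

# Start with the spin pointing along +x: eigenvector of sigma_x with eigenvalue +1
psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)

for t in np.linspace(0.0, 2.0 * np.pi, 5):
    # H is diagonal here, so e^{-iHt} can be written down explicitly
    U = np.diag(np.exp(-1j * np.diag(H) * t))
    psi = U @ psi0
    avg = lambda op: (psi.conj() @ op @ psi).real
    # The averages precess about z at frequency omega, and <sigma_z> stays put
    assert np.isclose(avg(sx), np.cos(omega * t))
    assert np.isclose(avg(sy), np.sin(omega * t))
    assert np.isclose(avg(sz), 0.0)
```

The signs agree with the equations of motion above: the x average decreases at the rate minus omega times the y average, and the y average grows at plus omega times the x average.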
There just is nothing else. Can you comment on the relation between the two-dimensional space and the space of states? We're sort of thinking of three dimensions here from the precession. Right. But we worked — yeah. But remember, for every direction that you can think of, there is a state for which that component of the spin is definite. But that's secondary here. That didn't play very much of a role here. We just follow the rules. But yes, I mean, there are two things going on here, and this is a good place to focus on those two things. There are the three-vectors, the vectors in real space — those are the components of the averages of sigma. And there is the two-dimensional space of states, and they're quite different things, although they are related. Yeah. Does omega represent the property of the magnetic field that's important? Omega is basically the product of the magnetic field times the magnetic moment of the little spin. Right. Maybe there's a two, I have to remember — no, I don't think there is. Yeah, there is a two. Omega over two is the product of the magnetic moment with the magnetic field. The original technique for measuring sigma z was to put it in a magnetic field and have a photon go off. That was last year. Right. You're right. Right. I mean, if we pursued this and coupled it to a field, we would see that thing. Yeah. Right. If we coupled the spin — now forget the precession. If we coupled the spin to the magnetic field, then there will be two energy states. For example, if the magnetic field is along the z-axis, then the energy of the up state and the down state will be different. They'll differ by an amount omega. Up will have energy omega over two, down will have energy minus omega over two, and there will be an energy difference of omega. All right. So one of them has higher energy, one of them has lower energy. What happens to a real spin if you wait long enough, if you put it in a magnetic field? The answer is eventually the higher energy state will emit a photon, and what will be the energy of the photon? Omega. Just the difference between the upper energy and the lower energy. And in the process of emitting the photon, it will flip. So these things are all certainly connected. All right. We've solved a real genuine quantum mechanics problem. Yeah. So why is the B not treated as an operator? What's that? Why is the magnetic field not treated as an operator, its value just given at every moment? You could think of the magnet as part of the quantum mechanical system. But whenever you're facing the problem of a real quantum system which is in interaction with a heavy system — a system like a heavy magnet or some other kind of detector, an almost classical system — you don't bother trying to think of the heavy system quantum mechanically. You just say it has some... now, that has to be justified. This is not obvious to justify. Why don't we have to think of the magnet as part of the quantum mechanical system? That's another way of asking the same question. And the answer is that it's so big and it's so heavy that it doesn't fluctuate much. And eventually we'll want to address that problem. For the moment, we just think of the magnet as part of a heavy apparatus, which is sort of fixed and it's there, and nothing that that electron can do is going to cause that magnetic field to change. It's just stuck there. And we get to think of it deterministically. 
When do we get to think about a thing deterministically? We get to think about a thing deterministically when it's so heavy, and has so many degrees of freedom, that when we calculate the quantum mechanics of it, we find that the fluctuations are really small. But that has not been established yet. So far, we've divided the world into the quantum systems and the heavy apparatuses, which we don't have to deal with quantum mechanically. That, of course, has to be a temporary thing. Real apparatuses are made of the same thing as the systems we study. And so at some point we have to come back and say, what was the justification for saying the apparatus was allowed to be considered just classically — you just point it in some direction, and that's the way it is — and the spin dealt with quantum mechanically. We will have to come back and answer that. But my usual statement at this point is, not tonight. Yes — what's that? Will the apparatuses occasionally fail, in an unpredictable manner? Yes, the answer is yes. They will occasionally fail. But that has little to do with quantum mechanics. Well, it has to do with quantum mechanics, but it has to do with the use of statistics and probability. Why does probability work? We make a basic postulate, and that's that if you do an experiment enough times, the average value that you get in the laboratory will agree to within small precision with the mathematical averages that you calculate from a probability distribution. I am not going to try to justify that. The reason is I don't know why it's true. And why does probability work? Why, if I tell you, you flip a coin a thousand times, does it come up heads 500 times? No. There's no such prediction. Certainly, I mean, it might come up 500 times heads, but there's no prediction that says it does. Will it come up 500 times plus or minus the square root of a thousand, plus or minus 30 — that's the margin of error? Not every time, no. So there's no such prediction. What can you say? You can say it will probably come up in that range — and you see, we get into a circle. How about if I do it a million times? Will it certainly come up 500,000 plus or minus the square root of n? No. No matter how many times you do it, there is no hard and fast prediction from probability that any particular thing will happen. Why does it work? Jesus, it beats me. But it does. Many people — you know, I'm not the only one who's been puzzled by this. I mean, you know, I used to like the joke that this is probably the reason that Einstein didn't like quantum mechanics. Okay. Again, I could have done the same calculation by, instead of putting sigma z here, putting sigma dot n. Sigma dot n is a matrix in the notation that we've been using up until now, where the sigma matrices are the things I wrote down and n is a unit vector. As a matrix it has n sub z and minus n sub z on the diagonal, and n x minus i n y and n x plus i n y off the diagonal. It's just taking the components of n and multiplying them with the corresponding components of sigma. And then you could start with this and say — I'll put an omega over 2 in front — take this to be the Hamiltonian, and then recalculate what happens to the different components of spin. Well, the component of spin along the n-axis — what would we do then? We would have to calculate commutators of the various components of the spin with sigma dot n. What about the component along the n-axis? What happens to it? Commutator zero, right? Sigma dot n commutes with sigma dot n. 
So we would find out that the component of the spin along the n-axis would be the thing that doesn't change. How about the components of spin perpendicular to that axis? We could work that out. And we would discover that the spin processes about the n-axis. That's a good exercise to try to do, aren't you? I was asking about the omega reflected in the orientation. You moved it from the omega to the dot n. If you want to say it again? I was asking whether the omega of the value of omega reflected in the orientation of the spin. You moved it from the omega into the dot n. No, the omega is just a number. Because if you put the dot n there, if you hadn't put the dot n there, the omega would have protected the orientation in your region. By orientation, you mean plus or minus one? Or you mean direction? No, no, no, no, no, no, no, no, no, no, no, no, no, no. Omega doesn't have to do with the direction. Omega is just the magnitude of the magnetic field, magnitude of the magnetic field, times the magnitude of the magnetic moment. No, not dotted. It's just numbers. They're numbers. The dot product is between sigma and n. Omega is just a spectator. It's just a number. It's just a number. Oh, you can if you like. You could take this omega and put it with the n if you wanted to. What depends on the orientation of the spin? What depends on the orientation of the spin? One of us is confused. I'm not sure which. The confusion is that when you had a single z up there, that implies the magnetic field is in the z direction. Here the magnetic field was in the z direction. And that's the omega z you had before, right? Omega z. Where did I have omega z? I had h equals omega over 2. Sorry, sigma z. Sigma z, yes, sigma z. Where is the spatial vector of the spin, right? Omega dot n means dot product. I can't remember which book I read it in, but they said whenever you're talking spin, you're really not talking one half, you're talking one over two parts. Yeah, I think I read that. I read that too. I think it was Newt Gingrich's last book. I don't know. Sigma dot n stands for sigma x and x plus sigma y and y plus sigma z and z exactly as if sigma was a classical vector. But of course it's not. It's just a collection of three matrices. So this stands for a matrix. This stands for a single matrix, which happens to be the sum of three particular matrices. That's the definition of sigma dot n, if you like. These are the components of n, and they determine the direction of the magnetic field. We multiply that by omega over 2, which is just a number. It's just a number. Nothing more complicated than a number. And this is what sigma dot n means. If the magnetic field is along the z-axis, then the only component of n, which is non-zero, would be n sub z. If the magnetic field was along the z-axis, then we would say only n sub z. If you like, would you like me to write down the relationship between n and the magnetic field? The relationship between n and the magnetic field is that the magnetic field vector is equal to the magnitude of the magnetic field times the unit vector n. The magnetic field as a vector is the magnitude of the magnetic field times the vector n. And that stands for a magnetic field that's pointing along the n-axis with magnitude b. Once we know that, then we can basically identify the magnetic field within apart from a numerical coefficient. A numerical coefficient is buried in omega here. Once we know that, we can write this as the spin dotted into b. So I'm not sure what the questions are. 
There's something which is bothering some of you. I'm not sure what it is. When you look at it, you get a frequency. Is that frequency always the same, or does it depend on the spin or the field? It depends on the magnitude of the magnetic field. Nothing more. Does it depend on the direction of n? You're talking about the frequency of the photon? Yeah. Yeah, the frequency of the photon is always just the difference of the energy of the upper energy level and the lower energy level, and it is always just omega. It doesn't depend on the direction. If you take sigma dot n and you ask what its eigenvalues are, the eigenvalues don't depend on n. They're plus and minus one in any direction. Hm? No, just the eigenvalues — here, let's write out sigma dot n. All right, here's an exercise for you; it should be done. Sigma dot n is n sub z times sigma sub z — that's n z and minus n z on the diagonal — plus n x times sigma x plus n y times sigma y. That's n dot sigma. Question: what are the eigenvalues of this matrix? Somebody's worked them out. I know that somebody in this room has worked them out. Who was it? Hm? The eigenvalues of this matrix here, for any n's, as long as n is a unit vector — in other words, n x squared plus n y squared plus n z squared is equal to one — you will find that the eigenvalues of this matrix are always plus or minus one. This is simply the statement that no matter what direction you point your detector, your apparatus, when the apparatus acts, it gives you plus or minus one. So whatever direction you take here, the eigenvalues of this matrix are plus and minus one. If you know a little bit about matrices — for those who know a little bit about matrices, let's prove it. The trace of a matrix is the sum of its eigenvalues. The trace means the sum of the diagonal elements. First step: there are two eigenvalues. Let's call them lambda one and lambda two, and the trace of the matrix is zero — this one plus this one — and that tells us that lambda one plus lambda two is zero, or lambda one is equal to minus lambda two. So there are two eigenvalues, and they're equal and opposite. What about the determinant of the matrix? It's the product of the eigenvalues. So we now know they come in equal and opposite pairs. What's the product? What's the determinant? The determinant of this matrix is equal to minus n z squared — this times this — minus this times this, and that's minus n x squared minus n y squared. N x plus i n y times n x minus i n y is n x squared plus n y squared. Here it is. And what is n x squared plus n y squared plus n z squared? One. So the determinant, which is the product of the eigenvalues, is equal to minus one. There's only one solution. Lambda one plus lambda two is equal to zero — that says lambda one is equal to minus lambda two — and the product is equal to minus one. There's only one such pair. It's plus one and minus one. So all such matrices of this form have eigenvalues which are plus or minus one, and therefore the energy levels are always plus or minus omega over two. One of them plus omega over two, the other minus omega over two, and when you flip from one to the other, you get omega worth of energy. So it doesn't matter which way things are pointing. Yeah, I think it's time to go, but... If an electron is prepared, let's say in the x direction, and you put a magnetic field in the z direction — so it precesses for a while, and then suddenly it aligns into the z direction? Yeah, yeah, yeah. That's right. 
Yes — and I should have said, in studying the precession here, we've ignored the possibility that it can radiate a photon. Now, the probability to radiate a photon per unit time can be pretty small. It can be very, very small. In particular, if the energy differences are very small, as they would be in an ordinary magnetic field, it will precess many, many times before it emits a photon. So... My question is, if it does emit a photon — since it's not exactly flipping, it's only going 90 degrees... Going from up to down. Remember, the two eigenvectors — the two eigenvectors up and down, or left and right, or in and out — correspond to opposite orientations of the spin. Even though the state vectors are orthogonal, the directions in space are 180 degrees apart. So is the expectation that the electron is not going to suddenly do something other than what it was doing, or the opposite of it? It won't suddenly do something other than... There's a probability for the electron to decay and emit a photon. And that probability per unit time is typically small. So it could be unlucky, and after a quarter of a revolution, it might emit its photon. But that's extremely unlikely. You can estimate how many... Am I answering your question? I'm not sure. No, I'm not sure. It's either in the direction it was going or flipped opposite. It won't suddenly go off at some weird angle between x, y, and z. I think the question is — it feels like the electron, the way it's been prepared, is perpendicular to the magnetic field. It only goes halfway. No. It's in a superposition of being up and down. It's one of the two of those. It's only the average which is sort of in a catawampus direction. It's in a superposition of being up and down. If it's down, and down is the lower energy state, it won't radiate. If it's up, it will radiate. So if you put the electron in some other direction, there will be some probability for it to radiate and a probability for it not to radiate. The probability for it to radiate is the probability that it was up. No — very confusing. All right. Okay, now I see what was bothering you. If the magnetic field is that way, and the spin is aligned that way, you might have thought that when it jumps, you only emit half the energy of a photon — half the energy. No. What's correct is that the probability for it to emit a photon is one-half. Why? Because there's half a probability that this is actually down, and half a probability that it's up, if you were to measure the upness and downness of it. So that means the average energy that it emits is half, but it either emits omega, or it doesn't emit at all. Good. For more, please visit us at stanford.edu.
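As a small supplement to the eigenvalue argument near the end of the lecture (trace zero, determinant minus one, therefore eigenvalues plus and minus one), here is a sketch of the same check done numerically for a randomly chosen unit vector n. Python with numpy is my choice here, not something used in the lecture:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(0)
n = rng.normal(size=3)
n /= np.linalg.norm(n)            # a random unit vector

sigma_dot_n = n[0] * sx + n[1] * sy + n[2] * sz

# Trace is zero and determinant is minus one, exactly as argued in the lecture...
assert np.isclose(np.trace(sigma_dot_n), 0)
assert np.isclose(np.linalg.det(sigma_dot_n), -1)

# ...so the eigenvalues are +1 and -1, no matter which way n points.
eigenvalues = np.sort(np.linalg.eigvalsh(sigma_dot_n))
assert np.allclose(eigenvalues, [-1, 1])
```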
(February 6, 2012) Leonard Susskind discusses an array of topics including uncertainty, the Schroedinger equation, and how things evolve with time.
10.5446/14943 (DOI)
Stanford University. Alright, let's come back to the second law for a minute, a little bit, and talk about Poincare recurrences. Poincare recurrences are, I'm not sure whether Poincare was really the first one to think about it, I have a feeling that Boltzmann again probably understood this. Well, Boltzmann was before Poincare, I think so. I think so. So here's a question, here's a question you might ask. Let's suppose you start out with the air in this room all on one side of the room. This is not an impossible thing to do. You put a wall in, you evacuate the air out of one side of the room, stick it into the other side of the room, and start with an initial condition with all the air on the left side of the room. And now you let it go. Okay, what happens? Comes the thermal equilibrium, fills the room pretty much uniformly, entropy increases. But what if you sit there and you wait and you wait and you wait and you wait. Sooner or later, sooner or later, the unlikely will happen. The unlikely event will happen whereby accident or by just waiting long enough in time, all of the air will reappear in the left side of the room. Or on the top half of the room or whatever, but let's say the left half of the room. That's called a Poincaré recurrence. And it's really no different than saying that if I flip a coin enough times, I will get a million heads in a row. Very unlikely, but if I do it enough, some fluctuation will happen. Okay, so the question is, roughly speaking, how long do you have to wait for the air to all appear in one side of the room? Is it a year? Is it ten years? Is it a hundred years? Is it the age of the universe? So let's see if we can get a handle on that. You start out with the idea of phase space. And phase space is the space of the coordinates of the molecules and the momenta of the molecules. And of course, it's very high dimensional, six n dimensional, six n because each particle has three coordinates, each particle has three momentum components. If there are n particles, there are six n coordinates, so this is a very high dimensional space. Okay, now as far as the momentum space goes, that's kind of bounded. It's bounded because if any particle has an enormous momentum, it will have a very large energy. And there's a certain amount of energy that you put in the box, no more than that. So pretty much we can say the momentum dimension in this box is bounded and let's just bound it by saying the momentum is definitely within some range here. And it doesn't matter how many particles we have, it doesn't matter how big the system is, it doesn't matter what the temperature is, the higher the temperature, the more uncertain the momentum is. But if the temperature is reasonably low, then the momentum direction here is pretty much bounded. And the x-axis, that runs from the left part of the room to the right part of the room, sort of, and say that we started out in one half of the room meant that the phase point was in here. In other words, the system started in phase space somewhere is in there, let's not be detailed about where it is in there, it's somewhere is in there, so the probability distribution is spread out over here. And now we wait for a while and what happens? The phase point starts to move. Now, it moves chaotically. 
Chaotically for our purposes just means pretty unpredictably and not unpredictably because the laws of physics are unpredictable in principle, but because trajectories, like the example of the billiard balls, errors or slight differences tend to magnify themselves after a while. And so even if we started out two very, very similar trajectories, they would very quickly depart. And you can pretty much imagine that this means that this phase point moves around in here in very, very complicated ways and pretty much fills up the phase space. Fills up the phase space in the sense that if you coarse-grain it and fuzz your eyes a little bit and wear somebody else's glasses, it will look like it's pretty much filled up the phase space. Okay, so what percentage of the time would you expect that the phase point resides such that the particles are all in one half of the room if this were the picture? It looks like half the time. That's crazy. We don't expect half the time the air molecules to be in the left half of the room or the right half of the room. And the mistake we're making is we're drawing a picture in just two dimensions. Alright, in two dimensions if we divide the X space in half and say we're to the right, we're talking about half the volume of the phase space. What happens though if we have in coordinates and we're not talking about the part of the phase space where one particle is on the right hand side but where all of them are on the right hand side. So let's just say there are two particles. Let's forget the momentum for a minute and just draw the two coordinates, X1 and X2 or X and Y. X and Y are the first X and the second X. And to say that both of them are on one half of the room is to say that the phase point is somewhere in the quarter of the square. So suppose there were only two particles moving in one dimension and we start somewhere and the system moves around randomly. Two particles scatter off each other. They do very random things. What percentage of the time are both particles in the left hand side of the room? A quarter. What if there are three particles? And what if there are n particles? One over two to the n. One over two to the n is right. One over two to the n. Yeah. That depends on how we're constraining it. If we said all the particles are to the left but not the, we just said to the left, left or that. Didn't care where they were up or down. Then I think it would be one over two to the n. All right, there's another way to think about it and that's to say let's take the phase space and identify a subregion of it as the subregion which we're interested in. Interested means an interesting configuration that is very unlikely. All right, so here's a little region of the phase space. The phase space is much bigger and the volume, let's say the whole volume of the phase space, let's forget momentum. Momentum is not important in this. What is the volume of the phase space if there are n particles? It's the volume of the box raised to the nth or the 3 nth power depending on, right. So the volume of the whole phase space, because it's an n dimensional phase space or the 3n or whatever, let's forget the 3, the volume of the whole phase space contains a volume to the nth. That's the same volume to the nth that we discovered when we calculated the partition function and we integrated over position, volume to the nth. Let's suppose this region of phase space over here has a much smaller size. All the particles have to be in there. 
Then the volume of this region of phase space with all the particles in here will be some little v to the nth. Big V is the volume of the box, incidentally — that's not the volume of the phase space; the volume of the phase space is big V to the n, with big V the volume of the box. If we're asking about all the particles being in some small volume — it could be half, I mean it could be half of the big volume — okay, this is some smaller region, and the volume of this region of phase space is little v to the n. What percentage of the time would you expect the phase point to be in here? Little v to the n over big V to the n. Now, little v to the n could be some small number, whatever it is, but what about big V to the n? Big V to the n is related to the entropy of the whole system: the entropy of a region of phase space is the logarithm of the volume of that region. If all you knew is that the system was in some region of phase space, the logarithm of the volume of that region of phase space is the entropy. Yeah, this might seem picky, but is it the volume of configuration space? Yeah, we're not worrying about the momentum now. Why not the momentum? The momentum is, roughly speaking, set by whatever the temperature is. So let's not worry about that. Okay, so roughly speaking then, the logarithm of this here, little v to the n, is the entropy that a gas would have if it was in this little region, if all the particles were in that little region. Let's take the little region to be pretty small. So that's just some number. But what about big V to the n? That's the entropy of the whole gas in thermal equilibrium — or sorry, it's the exponential of the entropy. The logarithm of V to the n is the entropy of the gas in this region, and V to the n is the exponential of the entropy. So what this is telling us is that the likelihood of finding the system in a tiny volume of phase space here is always proportional to e to the minus — because it's in the denominator — e to the minus the entropy of the thermal equilibrium state. The meaning of that is that it's very improbable. e to the minus the entropy is a very, very small number. What's the entropy of the molecules in this room? Roughly speaking, proportional to the number of molecules. It's pretty close to just being the number of molecules. Ten to the thirtieth, ten to the thirtieth — I'm not sure exactly what, something like that. So the logarithm of V to the n, the entropy, is ten to the thirtieth. The probability of finding yourself in a tiny little volume of phase space like this is not e to the entropy — it's, yeah, sorry, it is e to the minus the entropy. e to the minus the entropy is the probability of finding yourself in a small region. It's the same as this one over two to the n here. It's the same as this one over two to the n. In the case that we studied over here, little v over big V was a half. And a half is not a terribly small number. But when it's raised to the power ten to the thirtieth, that is a very small number. Okay, so the likelihood at a random draw of a point from the phase space that you find yourself in this tiny volume of phase space here is, in this case here, one over two raised not to the thirtieth power, but to the ten-to-the-thirtieth power. That's a pretty small number. How long do you have to wait on the average to find yourself in that region? Well, about a time of order two to the ten to the thirtieth. 
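Collecting the estimate in symbols — a sketch added for reference, not part of the spoken lecture. Here S_eq and S_constrained are the entropies of the equilibrium and all-particles-in-the-small-region configurations, and tau stands for some microscopic time over which the phase point rattles around; these labels are mine:

```latex
% Fraction of time the phase point spends with all N particles inside the small volume v
P \;\sim\; \left(\frac{v}{V}\right)^{N}
  \;=\; e^{\,N\ln(v/V)}
  \;=\; e^{-\,(S_{\mathrm{eq}} - S_{\mathrm{constrained}})}
  \;\approx\; e^{-S_{\mathrm{eq}}}
% so the average waiting time for such a fluctuation is of order
T_{\mathrm{recur}} \;\sim\; \frac{\tau}{P} \;\sim\; \tau\, e^{\,S_{\mathrm{eq}}}
```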
If the fraction of time that you spend in that odd region is one over two to the ten to the thirtieth, then how long do you have to wait till you find yourself in that region? Two to the ten to the thirtieth. In what units? Years, seconds? It doesn't matter. What is the objective of doing this? Oh — what is the objective, why is this interesting, why do we do this exercise? We do this exercise just to understand it. Just to understand it. To understand in what sense systems are reversible. The answer is: if you wait long enough, they will reverse themselves. And if you really have a sealed room here and you let it evolve, let's say starting from the odd state, it would come to thermal equilibrium — or what looks like thermal equilibrium — it would spend a long, long time there, but every so often, every two to the ten to the thirtieth years or whatever, you would find the molecules in half the room. You wait long enough again, it equilibrates again, it looks conventional, and then all of a sudden you find the molecules in the other half of the room or that corner of the room. And if you integrate it, or study it over sufficiently long times, you will discover that the entropy goes down — that the oddness goes up and down and up and down and up and down — in a completely time symmetric way. In a completely time symmetric way. What's not time symmetric is if you knowingly start in a very odd configuration — in other words, you knowingly start in a tiny volume of phase space — most likely the next thing is to find yourself out of that volume. So if you start in an odd situation with all the molecules in the corner of the room, you expect the next thing to find is the molecules spread out. In fact, you'll find the next thing and the next thing and the next thing is pretty much to spread out uniformly, and that sounds like it violates the reversibility of the physical laws. But in fact, if you waited long enough, you would find it reversing itself and doing everything imaginable for a closed system. A question, please. You said that the time units didn't matter — seconds, who cares? But let's say the unit is ten to the minus 30 or ten to the minus 40 of a second. What happens then? I mean, it seems it should matter somehow. Some, yes, of course, but let's just see. Okay, so we have a number, two to the ten to the 30th, okay? Now, I'm going to change units. This is in units of seconds. Supposing I change the units to hours, what does this number become? Two to the ten to the 30th divided by about three thousand six hundred — call it ten to the three or so, which is roughly two to the tenth. Okay, so that's the same as two to the ten-to-the-30th minus ten or so. It doesn't matter. Ten to the 30th minus ten is still ten to the 30th. Does it mean that if we were to pick a tiny time unit, ten to the minus third of a second or whatever, going the opposite way — will it, for that infinitesimal fraction of the time, actually be back in that volume? Yeah, yeah, yeah. An infinitesimal fraction of the time. How long it actually spends in the corner, that depends on how fast the molecules are moving and so forth. Are there quantum limitations, I assume? There is a quantum version of the Poincare recurrence theorem, but I don't want to get into it now. All right, so that's just — it's more than an interesting point, it's a deep conceptual point that helped Boltzmann resolve the puzzle of the one-wayness, the apparent one-wayness of time, and the two-wayness of the laws of motion. 
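The point about units can be checked with a few lines of arithmetic. The sketch below (Python, my choice; it works with logarithms because the numbers are far too large to hold directly) shows that converting the recurrence-time estimate from seconds to hours, or even to ages of the universe (roughly 4.4e17 seconds, a figure I'm supplying for illustration), shifts the exponent by an utterly negligible amount:

```python
import math

# Recurrence-time estimate from the lecture: about 2**(10**30) in whatever unit you like.
# The number itself cannot be stored, so work with log10 of it instead.
log10_T = 1e30 * math.log10(2)               # log10 of the waiting time, ~3.0e29

# Changing units from seconds to hours divides by 3600, i.e. subtracts log10(3600) ~ 3.6
shift_hours = math.log10(3600)
# Switching to the age of the universe (~4.4e17 seconds) as the unit barely matters either
shift_age_of_universe = math.log10(4.4e17)

print(log10_T)                               # about 3.0e29
print(shift_hours, shift_age_of_universe)    # about 3.6 and 17.6
print(shift_hours / log10_T)                 # about 1e-29: a completely negligible correction
```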
Of course, what it required to make sense out of it is that we would eventually have to understand why the universe started in a little corner of phase space. That's a separate issue. Boltzmann knew that. He knew that and he said so. I was thinking, could this have application that caused that? Absolutely. And it's still an open question that's constantly being addressed. Why did the universe start in a small corner of phase space? I was thinking after the universe spreads out, could it be waited long enough? This is a good question and it is one which will work. Okay. Question or question? I think the answer is the universe must not be a closed system. If it's a closed system, it will just recur and recur and recur and that doesn't make for a good statistical explanation. I didn't understand when you said, I think you said that time was symmetric. If you watched the closed system long enough, you would find that, let's call it some measure of the localization of the particles would decrease if you started off in a corner and sit there pretty delocalized for a long time and then pop up and then go back down and pop up and go back down. But the time scale to discover this reversibility that what goes up must come down, so to speak, is this 2 to the 10 to the 30th. You're not saying that it takes as long for it to move out as it does to come back to one corner? Oh, it does. Yeah, everything is, everything would be symmetric. Every now and then you would find the molecules in exactly the right configuration to swoosh into the corner pretty much the same way they swooshed out of the corner. Oh, I see you're comparing two exact same states. Yeah. Okay. So you'll see everything happen. You'll see everything happen in both directions. Excuse me. Yeah. So this doesn't really have anything to do with the fact that they started in a small area. You could start with a room just like this, right? And then eventually all the air would be in a small area. Yes, yes, yes. But if we're trying to understand why the world looks the way it is, you see, the problem, and I think Boltzmann knew this, it's a more recurrent problem in recent years, is if the universe really were a closed box and you were to ask what's the most likely configuration to find a planet with people on it? What's the most likely possibility? You would discover that the most likely possibility is to have uniform gas everywhere except the smallest possible amount of gas necessary to make up a planet having condensed into the planet. The chances that you would see two planets, the chances that you see one planet are very, very, very small. The chances that you see two planets are vastly more negligible than that. So if you were asking the question, what should astronomers expect to see in a world given that there are astronomers, given that there are astronomers, a conditional probability, the conditional probability that there is a planet and on that planet there are astronomers and they do astronomical observations, what's the most likely thing they will see? The most likely thing they will see is they will look out and see nothing. Or they may see some gas out there, but they will not see it condensed into another planet. By far the most likely thing to see is one planet if you know that you have one and not two. What's the probability that you'll see the universe filled with stars? 
Absolutely negligible unless you know that in the fairly recent past, in the fairly recent past that you started with some very exceptional and unusual starting point. And then the flow out from that starting point is likely to have certain kinds of structure that a random fluctuation would not have. Anyway, that's called the problem of Boltzmann brains. It's called the problem of Boltzmann brains because people went a little bit excessive on it and said the most probable astronomer would be a single astronomer's head disconnected from anything else. But it is a problem, it's a problem of using statistics to understand the world the way it is. And it's always a conditional question. A conditional question is always given that we're here, etc., all of various things, what's the probability that we see X? And the probability in a closed universe that we see X would be much higher to see only X and not Y. Y meaning some other planets and things like that. So it's not a good theory to think we are just a result of a random fluctuation. If we were the result of a random fluctuation where things just assembled themselves sort of accidentally or what is apparently accidentally into a planet with people on it, we would have no explanation of the coherence of history. Why history looks like it had a coherent past and why there is a consistency to historical evidence if the universe is materialized by random fluctuation, not the universe but the planet, just materialized by random fluctuation with us sitting here today. It seems like if you're starting out with a voice, you can talk about the probability of finding one planet like this. The probability of finding one planet is much bigger than finding two. But you can talk about the conditional probability that given that there is one planet, what's the probability of finding another one? That seems like it would be the same. No, the same as just finding one? No. Given that you know what exists, the probability of conditional probability, what's the probability of the second one? Extremely small. But I mean it seems like the same as when you don't know what exists, it's the same as the probability of... If what we're relying on is random fluctuations, and here we have a situation where randomly in this room, a collection of molecules randomly materialized and formed the Boltzmann's head. Form Boltzmann's head. That has a very small probability. Wait, wait, wait. Maybe I didn't, but let's just explain it anyway. The probability that we discover a Boltzmann's head is going to be a very, very small number. Given Boltzmann's head has formed, Boltzmann looks around and he says, I wonder what the probability is that my wife is here also. Very, very, very tiny, much, much tinier than this just Boltzmann. Okay, now that's not what you were asking me. Well, this is the way I'm thinking of it. Imagine, talk about playing the lottery. And you've got two people who play the lottery the same number of times, and one of them has one once and the other is one zero. Now, what's the probability, the next time those two play, what are the two probabilities of them winning? Again, they're equal. You see what I'm saying? It doesn't matter that the other guy won once in that sense because they're really different. That's right. Yeah. That's right. The probability that there's a discovery of a Boltzmann's wife in the room is very, very tiny. It's equally tiny if Boltzmann happens to have been discovered in the room. It's equal. Right. 
Equally improbable, equally improbable, whether, that's right, but very improbable. So in other words, Boltzmann ought to be very, very surprised that his wife is there. He, most of all, is surprised that he's there. Well, maybe he doesn't know the theory very well. He doesn't know too much about the, he knows a little about the theory. And he discovers he's there. He says, oh, what a wonderful accident. How happy I am. I wonder if my wife is here. Nah, that would be too much of a good thing. He looks around. He finds her. He says, what is his conclusion? His conclusion is, you know, I think my world is probably not a world of random statistics. It's probably not a world of a closed volume of molecules which just randomly assembled me. He couldn't say that before. He couldn't say that before. What he would say before is, look, I'm here. I know that if we wait long enough, I will be here. When will I be here? Is it a fantastic piece of luck that I happen to be here right now? No. At some point in time I'm going to be here. This happens to be the time I'm here. He says, if that's the right theory, that I'm here just because if the gas in the room eventually assembled into me, and when else could I be here except when I'm here? It's when I'm here. But I think the best prediction I can make is that if I look around, I will find the rest of the room pretty much in thermal equilibrium with no fancy structures in it. In particular, my wife won't be here. If he discovers his wife, he will say, that's extraordinarily unlikely, even much more unlikely. So my theory is probably wrong that I'm the result of a random fluctuation. This is good. Can you define it? We couldn't explain it. One textbook says George Washington chopped down a cherry tree and then you go and look at another textbook. It says the same thing. But if the world was just a fluctuation, it was just made by fluctuation. And one textbook happened to say George Washington chopped down a cherry tree. There'd be no reason to expect the other textbook to say the same thing. He chopped the tree. Why would it? Why did they not vote? No, no, he didn't chop the tree. The world just materialized accidentally in a configuration in which the textbook said, he chopped the tree. Don't worry about that. This is not the right theory of nature. Does this just say that we live in a world as a closed system that's extremely long, long time constant to reach equilibrium? Yes, but it would still be true. You could say that. But then you would nevertheless say, if you wait long enough, there will be many, many, many replicas of you in the future. And almost all of them will not see a coherent history. So if you make your best guess, you find yourself here today and you ask, how did I get here? The overwhelming majority of people who wake up in the morning and ask that question will be ones who came out of one of these random fluctuations. Now is this a serious concern? Should we worry about that? I can tell you, cosmologists do worry about it. Serious theoretical cosmologists. But I'm not going to try to sell you anything. It is something that is of concern that if we want to use statistics and ask what's the most probable thing we should see given that we're here, we have to take into account all the ways we could have gotten here. Most of the ways we could have gotten here would be by random fluctuation and history would not be coherent for them. Okay, that's the problem with Boltzmann brains. Question. 
Is there anything that can be said about the fact that life seems to decrease entropy? No, of course it doesn't. Life does not. Yes, but it's always at the cost of something else increasing its entropy. The second law does not say that some subsystem of the world can't decrease its entropy, but it will always be at the cost of some other subsystem increasing its entropy even more. But it seems like it sticks and it goes for a very long time. It's not just something that comes together and goes away. It seems to keep maintaining itself. It seems kind of a little strange. It's the flow of energy from the sun. The earth is not in equilibrium. It's in a stationary means it stays the same, but there's a flow. If you have a system which has a flow moving through it, it can create interesting structures. For example, a flow of water through a pipe can create vortices that spin off and spin off. Those vortices have a structure, you know, little eddy currents and so forth. Eddy currents have a structure. The water flowing through the pipe creates them and you can imagine that little eddy currents could have enough structure to have some interesting properties. But if you stop the flow by, you know, sealing off the ends of the pipe, then what happens? The eddy currents disappear and it just returns to a quiescent, dull, boring equilibrium. So what is the flow in the case of life that allows this kind of apparent violation of the principle? Oh, and certainly, of course, even in that flow situation, the total entropy of everything is increasing, even though you're making it sort of a pump and a sink of the water's coming in one end, going out the other end, it comes out warmer at the other end, then it came in this end. So altogether, the second law is not being violated. The same thing is true on Earth. The flow is the flow of energy from the sun. If we sealed up the Earth, didn't let sunlight in, didn't let sunlight out, everything would eventually come to thermal equilibrium, it would be dull, there would be no life, and we would just have featureless thermal equilibrium. Yeah, so life is a kind of eddy current, so little vortices that appear in a moving fluid, the fluid being energy from the sun. Okay, well, we spent an hour talking about interesting things. Now we can get back to some dull things. Magnets. Magnets are, when we talk about magnets, incidentally, in statistical mechanics, we're usually not talking about pieces of iron. We're usually talking about mathematical models of a certain kind of system that has certain features that resemble magnetism. So first of all, what is a magnet? Whatever a magnet is, an ordinary magnet, it's made up of lots of little magnets. Little magnets could even be as small as a single atom, or they could be little crystal grains, but whatever a magnet is, it's made up of little magnets. And typically, at room temperature, at some ordinary temperature, particularly at room temperature being rather high in this context, could be rather high, but certainly at very high temperature, a thousand degrees or whatever, those little magnets are randomly oriented in such a way that the net sample doesn't have a net orientation. The orientation is random, and not just is the orientation of the whole thing random, but the relative orientation of the parts of it are random, and so there's no net magnetization. You don't see a magnetic, a macroscopic magnetic field from it. 
If you cool it down, and if the energy stored in pairs of these little magnets is such that the magnets like to line up in the same direction, as you start to cool it down, you find out that lumps, groups of magnets, groups of little magnets tend to be in alignment, but other little groups of magnets will also tend to be in alignment but in other directions, and you'll find sort of domains, domains which are magnetized, which means they tend to point in the same direction, but these domains are still fairly small. If you cool it down, now these are experimental facts, okay, and not completely hard to understand, but as you cool it down more and more, energy, the energy consideration is that things like to be in the same direction; like means that the energy is lower if the magnets are parallel. If the energy is lower if the magnets are parallel, then as you suck energy out of the system, more and more of them will want to come into alignment, and these domains will start to grow, and eventually you may or may not hit a point at finite temperature, not at zero temperature, you may or may not hit a point at which all of a sudden these domains become infinitely big so that the magnets tend to be somewhat lined up everywhere in the same direction. That's called a ferromagnetic transition, and it's a phase transition, and it's basically the simplest kind of phase transition. Certainly at zero temperature, you'll expect them to be all lined up. Why is that? Because at zero temperature, the only state of importance in the Boltzmann distribution is the lowest energy state. When the temperature is zero, only the lowest energy state matters, and in the lowest energy state, all the microscopic atoms or the microscopic magnets line up. Yeah? Question. If you have two, I'm thinking about microscopic magnets, wouldn't they prefer to be anti-aligned? It depends on the details. In a piece of iron, they like to align. I know what you're thinking. You're thinking the North Pole wants to grab the South Pole. It's just a little more complicated. That's partly why there aren't that many magnetic materials. There's a tendency for them to want to anti-align, but there are also competing things going on. Do they have to have an external magnetic field or a pull to line up? No. No. But which direction they line up in, may be random? That in itself is called spontaneous symmetry breaking. We're going to be talking about simple magnetic systems, the tendency toward order as you cool them. Order means parallelness in this case. And the – okay, so let me make a remark about what you just asked. I think you just asked it. They could all line up this way, or they could all line up this way, or they could all line up that way. And which way do they line up? Which way do they wind up lining up? And that itself might be defined or determined by the tiniest little stray magnetic field. Just one molecule, just one little elementary atom being in a magnetic field, which tends to line it up a little bit, may govern the whole thing about the way the whole system lines up. There's a symmetry. The symmetry is which way things point. If they wind up pointing in a direction, that symmetry is broken. That's called breaking the symmetry. There is no more symmetry, or at least it looks like there's no more symmetry. But it's spontaneous. There's no magnetic field pushing everything in that direction. It just had to pick a direction, it picked a direction. It may be because of a tiny, tiny, tiny little stray magnetic field. 
But we're going to talk about it. These are the things we're going to talk about. And the point at which the symmetry is broken, the point at which the magnets tend to line themselves up in some direction, that's a phase transition. And that phase transition is called the magnetic phase transition. So first of all, don't think about literal magnets, because the model systems that studied often are quite unrealistic as theories of ferromagnetic chunks of iron. What makes them interesting is, of course, that they resemble a lot of other things in nature, and that they're mathematically simple enough to study and interesting enough to exhibit features like phase transitions. That's what makes them interesting. Okay, so let's start with the very, very simplest magnet. The very, and as I say, this, don't think it was a real magnet. This kind of magnet either points up or it points down. It doesn't get the point in random directions. You could think of it as heads and tails, if you like, but this very simple mathematical magnet either points up or it points down. So, and we'll, it doesn't matter how they're laid out on the blackboard, but let's lay them out in a line. Some of them are up, some of them are down, and we want to make a statistical mechanics of this and ask such questions of what's the relative percentage of ups and downs, what's the energy of the magnet, and so forth. All right, so before we begin, if we're going to be talking about statistical mechanics and the Boltzmann distribution, we have to have an energy function. Remember, e to the minus beta times the energy, we have to know what the energy is. So, yeah. When you say them, try to get these as particles or... You can think of them as atoms in a crystal, for example. Yeah. You can think of them as atoms in a crystal. The atoms have electrical currents or maybe they have spins, electrical currents make little electromagnets, and so each atom is a magnet with a north pole and a south pole. But for the simplest model that's ever studied, which we're going to begin with, the atoms point up or point down, and they can't point any other way. And again, the purpose of this is to be simple. Okay, so there's a lot of them. How many of them? Capital N. And what is the energy? We're going to start with a very, very simple version. In the very simple version, there's no interaction between the magnets at all, but there is a magnetic field. There's a magnetic field either pointing up or pointing down. I'm not sure which way my notes actually correspond to. There's a magnetic field. Each atom has a magnetic moment. That's just a little number attached to it, which tells you how strongly it interacts with the magnetic field. It has a magnetic moment called mu. It has to do with the strength of the magnet, basically. There's a magnetic field. The magnetic field is either pointing up or down, and I can't remember which way I chose it, but let's not worry about it. But the energy of one of these magnets is different if it's up or if it's down. In particular, if the magnet is up, I think we give it a plus energy, and if the magnet is down, we give it a minus energy. So let's invent a variable for each magnet. Let's give it a name. Let's call it sigma. This is the sigma for the first atom, and this is sigma for the second atom, blah, blah, blah, blah, blah, blah, and sigma is either plus or minus one. It's just a label or a variable which is plus or minus one. So if the first spin is up, that means sigma one is up is plus. 
If the second spin is down, it means sigma two is minus and so forth. Okay, what is the energy of this system? If the spin is up, then the energy is positive, and it's just equal to mu times h. What if the spin is down? What's the energy then? Okay, supposing there are little n, little n spins up, and little m spins down. What's the energy? The energy is equal to little n minus little m times mu times h. Mu times h is the energy of one spin if it's up, and minus mu times h is the energy if it's down. Little n equals the number of ups, and little m is the number of downs. What's little n plus little m? Big N. So little n plus little m is equal to capital N. Okay, we're good to go now. We can write down the Boltzmann distribution and we can calculate anything we want using statistical mechanics. So let's do that. What are the dimensions, please? Excuse me, of h and E. What are the dimensions? Is h a number or a magnetic field? For us, it's a number. It's the strength of the magnetic field. It's an external magnetic field imposed on the magnet from outside. So for our purposes, it's a number. And mu is also a number. And we might as well put mu and h together and just call the whole thing a number. That's often done. Sometimes it's called little h, but I thought I would just expose the various pieces of it. Would you prefer I call big h times mu little h and never see mu and the big h again? We could do that. It doesn't matter. Okay. All right. Now. Excuse me, what is E? Oh, sorry, E equals. This is the energy and it equals. Good. Thank you. Energy equals that. Okay. Now, how many configurations are there? How many configurations are there with little n ups and little m downs without asking which ones are which? How many configurations are there? We have capital N things and we want to group them into two groups. One group of little n and one group of little m. How many such configurations are there? How many such arrangements are there? That's a combinatoric problem. Yeah. Yeah, but let's write it down. The number of configurations with this value of energy, the number of configurations with this value of energy is capital N factorial. This is the number of states for a given n minus m: it's capital N factorial over little n factorial, little m factorial, remembering that little n and little m add up to big N. That's the number of such configurations. Let's take one of those configurations and ask what the Boltzmann weight is for that. The Boltzmann weight means e to the minus beta times the energy. In fact, what we're going to be doing is working out the partition function. The partition function is the sum over all configurations. That means it's the sum over n and m such that little n plus little m is equal to big N. I won't bother writing that but keep that in mind. Times e to the minus beta times the energy, which is mu h times n minus m. So we just take this thing and we add them all up. Now, for each n minus m there's going to be a certain number of configurations and that number of configurations is this combinatoric coefficient here. So we can write this. Yeah. n and m are not two variables. Yeah, they are. The number of ups and the number of downs. Well, I'm just saying it's not two variables that are independent of one another. They have to add up to capital N. That's all. So, but I'm just saying if you have, you can't use both of them as indices. No, no, no, they're not. That's what I'm saying. You sum over n and m making sure that n plus m is big N, yeah, okay, we can write it another way. 
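Collecting the energy and the counting from this passage in one place, in the lecture's own notation:

```latex
E(n,m) = (n-m)\,\mu h, \qquad n+m = N, \qquad
\Omega(n) = \frac{N!}{n!\,m!} = \binom{N}{n},
\qquad
Z = \sum_{n+m=N} \frac{N!}{n!\,m!}\; e^{-\beta\mu h\,(n-m)} .
```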
Well, m is equal to big N minus little n, so it's just... Yeah. Yeah, so let's just leave it this way to keep the notation simple, but... You need a combinatorial factor inside the sum there, before the... No, each individual configuration gives this, and the number of configurations with a given energy is this. All right, so let me, you'll understand, you'll understand when I write the formula. All right. It's a sum over just n, little n, of capital N factorial over little n factorial times big N minus little n factorial. This is capital N factorial, little n factorial, big N minus little n factorial, right, times e to the minus beta. And let's leave it this way for a minute. In fact, let's not leave it this way. Let's write, let's write the following. e to the minus beta mu h, let's call that, let's call that x. And let's call e to the plus beta mu h, I could call it one over x but I'm going to call it y for a minute. Let's take these two numbers here, call e to the minus beta mu h x and the other one y. Okay, so what's e to the minus beta mu h raised to the power n? That's x to the power n. Do you see that? Can you see that? That's x to the power n. And what about the other factor here? That's y to the power m. I'm just using x and y because maybe it'll stir some memories from high school. x plus y, this is the binomial expansion. This is the binomial expansion and this whole thing is just equal to x plus y to the capital N. Alright, that's the binomial expansion. And so we've solved it. We've figured out what z is. Let's write it down. z is just, this is z is equal to x, which is e to the minus beta mu h, plus y, which is e to the plus beta mu h, all raised to the capital N power. That's it. That's z. That was easy. This function, does that function have a name? Well, let's multiply and divide it by two. Now does it have a name? It's the hyperbolic cosine. So let's call it that. We might as well. So the answer then is two to the N. Now two to the N is not going to be interesting. It's a number. A multiplicative factor in the partition function usually doesn't do anything but we'll leave it there. And then hyperbolic cosine of mu h to the power N. That's the partition function. Oh, sorry, beta mu h. My mistake. Beta is awfully important. It's the inverse temperature. Without it, we can't differentiate with respect to it. All right. So that's our partition function. Now, supposing we're interested in the question, what is the relative percentage of up spins and down spins? What's the relative percentage? That quantity has a name. It's called the magnetization. The magnetization is zero if there are as many up spins as down spins. The magnetization is plus if there are more up spins than down spins. And the magnetization is minus in the opposite situation. So let's define, first of all, let's define magnetization. Little n minus little m is the difference between up spins and down spins. It's sort of the magnetization, but it's usual to divide it by capital N so that it becomes the magnetization per magnet, if you know what I mean. So let's say the magnetization M is equal to little n minus little m times mu h divided by capital N. That's a definition. The magnetization, and what is it? It's the bias for each particle, whether it's up or down. If the magnetization is positive, it's sort of the average upness or downness of each, oh, I take that back. The magnetization is just this. It doesn't have the mu h there. And that's the definition. That's the definition. The magnetization is clearly related to the energy. 
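A minimal numerical sanity check of the binomial collapse just described; this is my own sketch, not part of the lecture, and the parameter values are arbitrary:

```python
# Check that the brute-force sum over all 2^N configurations reproduces
# Z = [2 cosh(beta*mu*h)]^N for the non-interacting magnets, with E = mu*h*sum(sigma_i).
import itertools, math

beta, mu, h, N = 0.7, 1.0, 0.3, 8   # illustrative values only

Z_brute = sum(
    math.exp(-beta * mu * h * sum(sigmas))          # Boltzmann weight of one configuration
    for sigmas in itertools.product([+1, -1], repeat=N)
)
Z_closed = (2 * math.cosh(beta * mu * h)) ** N

print(Z_brute, Z_closed)   # the two agree to floating-point precision
```

The agreement is exact because the sum over configurations factorizes into one two-term sum per spin, which is the binomial expansion in disguise.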
Let's just write a few equations here and then we'll be able to use the partition function. The energy is equal to N times the magnetization times mu h. All I've done here is say that little n minus little m is the magnetization times big N, that's this, times mu h. So we'll use this. We'll come back to it. Too many definitions, but magnetization is an important one. It is roughly speaking the probability of being up minus the probability of being down for a given spin. Okay, how can we calculate the magnetization? Well, one easy way is to calculate the energy of the system. If we know the energy of the system and we know the number of particles and we know mu and we know h, we can calculate the magnetization. So the first thing we will calculate using the partition function is the average energy. From that we can read off the magnetization. But the magnetization is for a particular configuration. Are you talking about some average magnetization? No, we're talking about the average magnetization. We're talking about the average. Absolutely right. Absolutely right. This is a particular configuration and we should say that the magnetization is the average of that. You're absolutely right. It's the average of it. It's the average over the statistical distribution, the Boltzmann distribution. And of course this is also the average energy. Okay, what do I do? There are two things. Alright, so what do we do to calculate the average energy? We calculate... Is that obvious that that is true? Which is true? That the average energy equals the average magnetization. This is the energy. Right, but that's a particular case of it. No, that's the energy. Given n and given m, that's the average of it. That is the energy... Sorry. This is the energy for a given configuration. The average energy is the average value of n minus m. So... Probability is taken into account. With the probability it's taken. Yeah, with the probability, yeah. Yeah. And all I'm asking is, is it obvious that that average energy is equal to that equation involving the average magnetization. It's not... I mean, it's probably true, but it's not obvious to me. I mean, call that... Say it's a v of a. Here's an equation that configuration by configuration... Alright, let's... It is obvious. You think about it. It is obvious. It is obvious. For every configuration, the energy is proportional to n minus m. If you average both sides, the average energy will be proportional to the average of n minus m. So the average energy... We can put averages around all of these. If something is equal to something else, configuration by configuration, then it will also be equal in the average. So the average magnetization also involves a probability. Yeah, absolutely. Yes, yes, yes. All of these... Yeah, everything in statistical mechanics is average. If you write that last equation with all the brackets you need, you only need brackets on the right-hand side around M. No, actually... No. Around capital N? No. Around M. Okay. Bottom equation. Yes, yes, of course. Around this N here, big N? Around E and then around big M. Big M. Big M. Not big N. Big N is a number. Big M. That's right. Yes, alright. So that's right. Alright, so let's write it the way you want it. The average energy is equal to N mu h, all of which are fixed numbers, times the average magnetization, let's call it. Now, strictly speaking, with the usual definitions, we don't have to put an average here because the definition of magnetization is the average. 
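The step being questioned in this exchange is just linearity of averaging: a relation that holds configuration by configuration survives the Boltzmann average. Schematically (the probability notation is mine):

```latex
E_c = N\mu h\, M_c \ \text{ for every configuration } c
\;\Longrightarrow\;
\langle E\rangle = \sum_c P_c\,E_c = N\mu h \sum_c P_c\,M_c = N\mu h\,\langle M\rangle,
\qquad P_c = \frac{e^{-\beta E_c}}{Z}.
```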
It's also true that in statistical mechanics... No, average over the probability distribution. Average over the same exact thing we did with the ideal gas, we have a probability distribution and we calculate averages. We calculate averages from that probability distribution. How do you get them all to line north-south without biasing up or down? They are biased. Sorry, what? You get them not to be east-west, but... No, no, no. This is a model in which, by definition, these things can only point up or down. This is a mathematical model. They're not biased up or down, for the moment. They may be biased up versus down, but they're not biased, but there's no such thing as east and west. Well, that partition function is totally unbiased. Oh, it's very biased. The energy prefers the molecules to be down. Remember, the energy is plus if they're up, minus if they're down. Systems like to have lower energy, meaning to say that the Boltzmann distribution favors lower energy. This is most definitely biased by the presence of the magnetic field. There's no symmetry here. This is a problem that has no symmetry. It's biased for the atoms to point down and it costs the energy to tip them up. Which way are they likely to be? Okay, let's see if we can make some guesses. Which way will they be at zero temperature? Right? No, definitely not right. Down. Right, so at zero temperature, what do we expect the magnetization to be? We expect everybody to be down and that means the magnetization will be minus one. What do we expect at infinite temperature? Yeah. Well, a lot of it, they're all 50. 50-50. At infinite temperature, everything is just maximally random. All states are equally probable. And so at infinite temperature, we expect the magnetization to be zero. So it goes from one at zero temperature to zero at infinite temperature. This is what we expect. This is right. This is correct. Sorry, it goes minus one to zero. And no point will it be positive because the average magnetization will not be positive because of the bias down. It's that the infinite temperature will defeat the bias. Infinite temperature is just so random that a little bit of magnetic energy is unimportant and so it will be random. But at no point will the average magnetization be up. It won't be positive. Okay, I hope I'm right. Switch the magnetic field the other way. Make the magnetic field negative. Right. Right. Switch the magnetic field. Okay, so where are we? Instead of calculating the magnetization, I'm going to calculate the average energy. We know how to calculate the average energy from a partition function. Remember, the average energy, and I'm just going to write E, no averages, is equal to minus the derivative of the logarithm of Z with respect to beta. So there's a little bit of algebra to do here. We might as well do it. I know it tends to put people asleep to watch me do algebra. The logarithm of Z has a constant from here. That's going to go away when we differentiate, so let's not even bother writing it, is equal to N log of the hyperbolic cosine, this is a terrible function, of beta times muh. N times the logarithm. Notice, first of all, that it's proportional to N. That's a good thing because typically energies when we differentiate will be proportional to N, and that's natural. Let's differentiate this with respect to Z, sorry, with respect to beta. The derivative of log Z with respect to beta, first of all, we'll have an N. 
Now, the derivative of the logarithm of a thing is one over that thing, so that will give us in the denominator, hyperbolic cosine of beta mu h. Then in the numerator, we have to differentiate cosh beta mu h with respect to beta. What happens when you differentiate cosh? What is the derivative of cosh? Sinh. That's sinh beta mu h. Then you have to differentiate this thing with respect to beta, so that gives you another mu h outside. Now, is that the energy? Not quite. Minus sign. This is the energy with a minus sign. We have the energy, and we want the magnetization. What we want to do with it is divide it by mu h and divide it by N. The magnetization is equal to minus, and as I said in the first place, it comes out minus, with dividing by N and with dividing by mu h. It's just exactly sinh beta mu h over cosh. That function also has a name. It's equal to the tanh of beta mu h. With a minus sign. Yes, with a minus sign. Now all we have to do to understand the system is understand what the tanh function looks like. Incidentally, beta is one over the temperature. We just want to plot this function as a function of the temperature. Mu times h, that's just a number. It's not so interesting. We could absorb it into beta here. We could plot the thing as a function of beta mu h. They come in together. Okay, so the question is, what does a tanh function look like? You can work out what the tanh function looks like by yourself. I will show you what it looks like. Well, first of all, sinh and cosh for very large values of the argument become equal to each other. They're basically both exponentials of beta mu h. Let's write them down. Cosh of x is equal to e to the x plus e to the minus x over 2. Sinh of x equals e to the x minus e to the minus x over 2. When x gets large, let's go to large x, when x gets large, what happens to e to the minus x? It just goes away. For large x, they both are equal to e to the x and their ratio is 1. Very far away, sinh over cosh, I'm not including the minus sign now, just the sinh over cosh, the tanh function, goes to 1. Incidentally, for negative x, it goes to minus 1. But let's not worry about that. x is going to be positive in this problem. Now, what does it do near the origin? Near the origin, cosh is equal to 1. x equals 0 near the origin. e to the x is 1. This is 1 plus 1 is 2, over 2, which is 1. What about this one? That's 0. But what about the linear, what about the correction to it if we expand e to the x as 1 plus x? The e to the x will be 1 plus x, minus e to the minus x is minus 1 plus x, divided by 2, and the answer is just x. The first derivative here is 1. In other words, it starts out just looking like x, and it very quickly just bends over. It's a very boring function. It starts out linear, and then it gets tired quickly, and it just flattens out. That's the tanh function. This horizontal axis is beta times mu h. Now, keeping in mind that beta is 1 over the temperature, what is the magnetization when the temperature is small? That's when beta is large. When beta is large, we're way out here, and the tanh function is 1. So the magnetization is minus 1. At zero temperature all spins align themselves down. What about infinite temperature? Infinite temperature is beta equals 0. Beta equals 0, well, first of all, the magnetization is 0. Beta equals 0, the magnetization is 0 as expected. And this just fills in the details for us. This just fills in the exact details for this problem of how the magnetization goes from minus one at low temperatures and goes to zero at high temperatures. 
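The algebra of this passage in one line, together with the two limits read off the tanh curve:

```latex
\ln Z = N\ln 2 + N\ln\cosh(\beta\mu h), \qquad
\langle E\rangle = -\frac{\partial \ln Z}{\partial \beta} = -N\mu h\,\tanh(\beta\mu h), \qquad
\langle M\rangle = \frac{\langle E\rangle}{N\mu h} = -\tanh(\beta\mu h),
```

```latex
\langle M\rangle \to -1 \quad (\beta\to\infty,\ T\to 0), \qquad
\langle M\rangle \to 0 \quad (\beta\to 0,\ T\to\infty).
```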
You asked me how can you get the magnetization to go in the opposite direction. Well, the answer is to allow h to go negative. If h goes negative, then it looks like that. So if the magnetic field flips sign, everything just reverses. May I have a question, please? This looks like a continuous, but you said there is a temperature with that. No, not in this system. Not in this system. This system does not have a phase transition. This is too simple. The first interesting system that has a phase transition is the two-dimensional Ising model. But first we're going to do the one-dimensional Ising model. Ising was not a very good student. He was a student of Lenz, LENZ, who was famous for a number of things. For one of the things he was famous for, or he was not famous for, was inventing the Ising model. He gave his student one problem to determine whether there was a phase transition in the Ising model, in the one-dimensional Ising model. And his student got the wrong answer. He said there was a phase transition. There was not. That's all as far as I know that Ising ever did. So why it's called the Ising model is just to end. And Ising is the most famous name in all of statistical mechanics. So here it goes. When some you lose some, you lose some, you lose some, you lose some, you lose some. Okay. What is the Ising model? Now the interesting thing about the Ising model is it is symmetric between up and down. So therefore, if there is any magnetization, it's because somehow the system has spontaneously broken the symmetry. In the one-dimensional Ising model, it does not happen. In the two-dimensional Ising model, it does happen. So I will define all of these Ising models for you right now. They work the following way. The energy is not stored particle by particle. We have no external field. So if the particles didn't interact with each other, if the little magnets didn't interact with each other, there would be no energy. And if there's no energy, all configurations are equally likely. In this case, the magnetic field that each spin sees is due to its neighbors. If its neighbors are up, it feels a magnetic field up. If one neighbor, if both neighbors are down, it feels a magnetic field down. And if one is up and one down, it feels no magnetic field. So what we're saying is that the energy is associated with pairs, with pairs of neighboring spins. And if the pairs are in the same direction, let's take that, let's see, let's take that to be lower energy. Just we have to make a choice now. Do we want the interactions to favor alignment or anti-alignment? That's anti-alignment. This is alignment. This is alignment and this is also alignment. The energy is going to be equal for this configuration as it is for that configuration. And unequal to this configuration of that configuration, you get it? All right, good. Good. So we come back to these variables sigma. And we say if sigma is aligned, if the two neighboring sigmas are aligned, just focus on two spins. If they're aligned, then the energy is lower. If they're unaligned, the energy is larger. So let's take the energy to be some number which is usually called J. I don't know what J stands for. It's usually just called J. It's just a number. It has an energy scale. It's an energy scale for the problem. J times sigma of particle one times sigma of particle two. Only two particles for a moment. These are two neighboring particles on a lattice. Now later on, we'll allow the lattice. Now the lattice is just a line. 
Later on, we can have the lattice be a two-dimensional lattice or a three-dimensional lattice. One and two are neighboring sites on the lattice. And this is the energy of the one, two pair. Now, this energy is going to be lower if they're anti-aligned because if they're anti-aligned, sigma one times sigma two is negative. I want the energy to be lower if they're aligned. So I'm going to put a minus sign here. With this energy, the energy is low if the spins are aligned with each other. And it's higher if they're anti-aligned. Now supposing we have a line of them and each one is interacting with its neighbor, then we can write that the energy is equal to a sum, minus J of sigma of n times sigma of n plus one. Does everybody understand what that means? The product of the spin, what I call a spin, the product of the magnetic moment at each site times its neighboring site. Each one, each pair, each neighboring pair, counted once. Okay, so this is our expression for the energy. Now, let's think about it for a moment. What do you expect to happen at infinite temperature? The general rule is that infinite temperature is just random chaos. Everything is equally likely. Zero magnetization. We'll worry about the energy, but yeah, right. Everything just random, and so in particular zero magnetization. Every product will be zero too because one is one and one is zero. Every product will be on the average zero. That's right. Why does that bother you? Well, don't forget the energy if they're all parallel is negative. So you're starting with a negative bias. You're starting with the ground state having negative energy. So the zero of energy, so to speak, is a big negative number. So having zero energy is effectively having a lot of energy relative to the ground state. Why are the products on the average zero? I thought sigma was either plus or minus one. It is. But on the average, if the neighbors are randomly distributed. Yeah, the average. The average. If you have random chaos, that means they're as likely to be found parallel as anti-parallel. So the average energy will be zero, which is a lot of energy. Okay. But what about zero temperature? What would you guess for zero temperature? It'll be aligned, right? They want to be aligned. But which way are they going to be? This way? All of them. Everybody aligned this way or everybody aligned that way. You can't tell offhand. There are two ground states. Ground states mean states of minimum energy. And they will both come in with equal probability. They'll both come in with equal probability. But now let me add one more thing. Let me suppose that there's a magnetic field, an external magnetic field, but it's only acting on one particle. One out of 10 to the 23rd has a little stray magnetic field. And let's say that magnetic field is along one axis. Then what is the ground state? The ground state has a definite orientation. Even if that magnetic field is very small, the ground state still has a definite orientation. And at zero temperature, strictly zero temperature, the Boltzmann distribution always favors infinitely strongly the lowest energy state. So that means that even the tiniest little magnetic field, stray magnetic field, well, the Boltzmann distribution will favor all of the spins pointing along one axis. If you were to apply that tiny magnetic field, let the system come to equilibrium at zero temperature and then remove the magnetic field, the system will remember it. Everybody's holding everybody else in place. 
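Writing out the chain energy and the two ground states described above (the open-chain link count of N minus 1 is my convention; the lecture does not specify boundary conditions):

```latex
E = -J\sum_{n} \sigma_n\,\sigma_{n+1}, \qquad \sigma_n=\pm 1,
\qquad
E_{\min} = -J\,(N-1) \ \text{ for } \ \sigma_n \equiv +1 \ \text{ or } \ \sigma_n \equiv -1 .
```

The two fully aligned states are exactly degenerate; an arbitrarily small stray field acting on even a single spin splits them, and at strictly zero temperature the Boltzmann distribution then picks out one of the two.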
And the possibility of them all simultaneously jumping to the opposite state is remote if there's enough of them. So that's called spontaneous symmetry breaking. That is what spontaneous symmetry breaking is. In this case, it's very simple. And this example has a symmetry. We can actually say quantitatively or mathematically what that symmetry is. A symmetry is usually represented by a mathematical operation on the degrees of freedom. What mathematical operation would you do on sigma to change from up to down? Multiply it by minus one. Let's go back to the earlier case over here where the energy, in this case we could say the energy was proportional to just sigma itself, not sigma times a neighboring sigma, but just sigma by itself. Does that have a symmetry? No, the energy itself changes sign when you change sigma. And that's not a symmetry. Symmetries are actions that you can do that don't change the energy. Okay? What about this system? Supposing you change the sign of one spin, one of them, sigma one and not sigma two. Is that a symmetry? No, the energy changes. But what if you change both? If you change both, then the energy doesn't change sign. So if you take, go from two up to two down, that's a symmetry. Now we have this whole vast array of them. And what if we change all of the sigmas simultaneously? We write formally the equation sigma of i for all i goes to minus sigma of i. We replace every sigma by minus its value. Then the energy doesn't change. Then the energy doesn't change. And that's what it means to have a symmetry. An operation that you can do on the coordinates of a system that doesn't change the energy. No matter what state, for every state, whatever the state is, there is another corresponding state which has the same energy but in which the spins are all reoriented opposite to what you started with. And that ensures that in some sense there's no bias to up or down. If the system is going to flop itself all simultaneously into up at zero temperature, that means it could have also flopped itself into down. There's no way to predict in advance unless you know that tiny little stray magnetic field. There's no way to predict in advance which way it's going to go, but it's going to go one way or the other because the Boltzmann distribution says you've got to be in the ground state for zero temperature. And as I said, the little tiny stray magnetic field will determine which one it is, but it will be one of them. All right, so clearly the next thing we want to do is to solve the one-dimensional Ising model. Here's what we want to do. We want to calculate exactly the same, not the same, but we want to calculate the partition function for this system. So we'll do that next time. I think we'll quit for tonight. We'll do that next time. We'll work it out and we will see that there is not, it does not have a phase transition at a finite temperature. Nothing funny happens at finite temperature. Contrary to what Ising thought, it took a few more years for a couple of physicists named Kramers and Wannier to prove that if it's a two-dimensional lattice, there is a phase transition. And that's a beautiful story and we'll try to do it. For more, please visit us at stanford.edu.
Leonard Susskind continues the discussion of reversibility by calculating the small but finite probability that all molecules of a gas collect in one half of a room. He then introduces the statistical mechanics of magnetism.
10.5446/14942 (DOI)
Let me go back and remind you what we're going to do today is we're going to study the Ising model. I was terribly unfair to Ising last time. I was. I looked it up and I was somewhat off base. I said that Ising failed to solve the model correctly or whatever. No, I think he solved it correctly and then thought there was a phase transition in the one dimensional Ising model. That would be totally inexcusable. It's not exactly what happened. It wasn't that bad. He solved the one dimensional Ising model correctly. It had been given to him by his thesis advisor as a problem, Lenz, the famous physicist Lenz. He solved it correctly. He realized it didn't have a phase transition, but on the basis of that he believed and wrote in his thesis that the Ising model in any dimension does not have a phase transition. That was wrong. That was incorrect. We're going to talk about that today. We're not going to solve the higher dimensional Ising models. They're too hard for us. They are very hard. Well, we will use an approximation method that's physically very intuitive. We'll see that in higher dimensions, sufficiently high dimensions, there is a phase transition. We'll talk about what that phase transition means. Phase transition means a sudden change in the properties of the system as you vary, for example, the temperature. Let's go slow. Let's take it in steps. Let me go back to the simple example that we studied last time. We studied a problem in which there were a collection of little magnets. Each magnet could be up or down, and there was an energy function, and the energy function differed between up and down. Let me write it again, the energy of a magnet when it's up is different than the energy of a magnet when it's down. We introduced a little variable called sigma. Sigma could be plus one or minus one, plus one up, minus one down, and we wrote an energy function. I wrote it last time as a magnetic moment mu times a magnetic field B times the thing which tells you whether the spin is up or down. I'm going to change the notation. This is too complicated. Mu times B, they just come together, and they'll always come together, so we might as well just call the product one thing, and I'm going to call the product J. Turns out that is a standard notation for what we'll be doing later. So we'll just call it J; what J stands for, I do not know. I'm also going to make it minus. It doesn't make any difference to the physics because it really is just a redefinition of what you mean by up and down. If the original energy favored down, then if I change the sign, it favors up, but other than the interchange of up and down, there really is no difference. So let's see, the way I have it now, it will favor up. It will favor sigma being positive because if sigma is positive, the energy is negative, and of course lowering the energy is favorable. Lower energy is favorable, especially at low temperatures. All right, so then what we did is we studied a whole group of them, and we did some combinatorics to try to calculate how many states there are with a little n up and a little m down, but we could have done something much simpler. I did it that way just to do it in some careful detail, but we can actually do something else. We've already learned that you can always think of a system as being a small subsystem plus a heat bath. 
And in that case, the rest of the heat bath is just what, it just provides the heat bath, is what brings one of these systems to equilibrium, and we could just focus on one of them. We could just focus on one spin, what's a spin? I call it a spin, one of these little magnets, and just assume that it is in thermal equilibrium with its environment, its environment being the rest of the magnet. All right, in that case, then we would just focus on one of these little guys, and we would write the partition function for one of them as being summation over the configurations. How many configurations are there? Two. It's going to be easy. Two configurations, e to the minus beta times j, sorry, it's going to be plus beta j times sigma, y plus, because we're supposed to write e to the minus beta times the energy. And that's what we want to calculate. What is the sigma doing there? The sigma is just telling us whether it's up or down. So there's really just two terms. One term, sigma is up, that gives us e to the beta j, and the other term, it's down. So it's e to the minus beta j. That's the partition function for just one spin. One of them is minus. One of them is minus, e to the beta j, e to the minus beta j. When we calculated it, thinking of the whole system as one system, remember the answer we got? We just got this to the nth power. In other words, we got a factor like this for each spin. So we can really short circuit all of that stuff about combinatorics and just focus on one spin at a time. And since they're independent of each other, whenever you have independent systems, you can simply take a factor, partition function factors into a factor for each system. So for each little spin, we have a factor like that. Let's concentrate on that spin. That's z. And that happens to be twice the hyperbolic cosine of beta j. When we did it last time, we got twice the hyperbolic cosine of beta j raised to the number of powers of each spin. When we took the logarithm of that, which is the interesting thing, it simply becomes the sum. For example, the energy, which is just the derivative of the logarithm, is nothing but the sum of the energies of the individual ones. Here we're concentrating on one spin at a time and one magnet at a time. And we'll calculate exactly the same energy for it. So let's calculate the energy. The energy can be calculated as 1 over z with a minus sign, derivative of z with respect to beta. This is the same as the derivative of the logarithm. Derivative of the logarithm of z is just 1 over z times the derivative of z. OK, so this is not so hard. The derivative of z with respect to beta has a two. And then what's the derivative of hyperbolic cosine? It's hyperbolic sine. So we will get a hyperbolic sine of beta times j. But then we have to differentiate the argument of the cosine, the cosine, the cosh with respect to beta, and that gives us another j. And that's it. No, not yet. We still have to divide by z. That's going to get rid of this 2 here. I look at it here and I say that's just a numerical multiple in the partition function. I know numerical multiples don't matter, but now we see exactly why. When I divide by z, the 2 will just cancel and I will get j sinh divided by cosh, which is called j times the hyperbolic tangent of beta j. This is the, did I? There was a negative sign. Yeah, there was a negative. There is a negative sign. Yeah, minus 1 over z at the left. Yes, I know. I know, I know, I know, I know. Yeah, that's true. 
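Collecting the one-spin calculation of this paragraph in one line:

```latex
Z_1 = e^{+\beta J} + e^{-\beta J} = 2\cosh(\beta J), \qquad
\langle E\rangle = -\frac{1}{Z_1}\frac{\partial Z_1}{\partial\beta}
              = -\frac{2J\sinh(\beta J)}{2\cosh(\beta J)} = -J\tanh(\beta J).
```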
So that's the average energy of one of these spins. And you can also ask, what is the probability that the spin is up versus down? What is the average? Sigma is a thing which can take on the value plus 1 and minus 1, but what is the average sigma? Well, the energy is minus j times sigma. This is the average energy. It's pretty clear that the average sigma is just the same thing without the j. So let's write that. Well, let's first of all write that the energy, and this is the average energy, and it's now the average energy of one particle. It's not the average energy of the whole system. That's the way we're going to focus on it now. And that's equal to minus j, hyperbolic tangent of beta j. I'm just rewriting the same thing. And we can also talk about the average spin and the average spin, or the average, I keep calling it spin. I should call it the magnetization, the average value of the upness or downness of the magnet. That is just equal to tangent, hyperbolic tangent of beta j. Okay, this is, we're going to need this. We're going to need it, but let me just draw a picture of it. Hyperbolic tangent is a function which looks like this. Now this is a hyperbolic tangent of x, and x is beta j. So this is a hyperbolic tangent of beta j. j is a number. We could have said j equal to one. I don't even know why I bothered keeping the j. I could have left it equal to one. It's tanh of something proportional to the inverse temperature. The slope of the hyperbolic tangent over here is one. It just has a unit slope over here. And then it asymptotically becomes flat and goes to a constant, namely one over here, and minus one over here. Okay, so let's think about what that says. If beta is very, very large, that means low temperature, and let's suppose j is positive, then beta j is very large and positive. That's way out here. And what it's telling us is that the spin will point up with probability very, very close to one. The value here is one, and the average of the spin is very close to one. The only way it can be close to one, if the only values it can take are minus one and one, the only way the average can be really close to one is if the probability is almost exclusively for it to be up. And that's what's going on out here. On the other hand, we can also look at the negative axis. Beta, we don't want beta to be negative. Beta is the inverse temperature, but j can be negative. That just corresponds to having put in the other sign, and all it does is flip up and down. So as you might expect, what goes on way out here when beta j is very negative is the spin wants to point in the opposite direction, the magnet wants to point in the opposite direction, and that's all that's happening here. Otherwise, the two sides are completely symmetric. Notice that at the origin, now where is the origin here? The origin means beta j equals zero. That means very high temperature. Very high temperature, beta is the inverse temperature. Very high temperature, the temperature is just so high that it's constantly being kicked around and it overcomes, the temperature just overcomes any bias for it to be up or down, and so on the average it's just plain zero. So this was a very, very simple system. Now we want to come to the next most complicated magnet, and it's the one that we talked about last time, but let's go back to it. The Ising model, the one-dimensional Ising model, in which at each point along a one-dimensional array, there's a sigma. Sigma is either up or down. 
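A small numerical sketch, not from the lecture, that evaluates the average spin directly from the two Boltzmann weights and checks it against tanh of beta J; the values of J and T below are arbitrary:

```python
# Single magnet with energy E = -J*sigma: compute <sigma> from the Boltzmann weights
# and compare with tanh(beta*J) at a few temperatures.
import math

def average_sigma(beta, J):
    w_up, w_down = math.exp(beta * J), math.exp(-beta * J)   # weights for sigma = +1, -1
    return (w_up - w_down) / (w_up + w_down)

J = 1.0
for T in (0.05, 1.0, 100.0):          # low, moderate, and high temperature
    beta = 1.0 / T
    print(T, average_sigma(beta, J), math.tanh(beta * J))
# At low T the average approaches +1 (the favored direction); at high T it approaches 0.
```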
New thing is that the energy depends on the relationship between neighboring ones. The energy is now the sum. Now this is not the one particle energy, this is the total energy. We better keep in mind that this thing over here is the energy of just one spin, not the whole thing. Okay, now I'm actually studying the whole shebang. The energy is the sum of all of them of the product of neighbors, let's call it sigma i times sigma i plus one. That means you go to each site and you multiply the spin by the spin or the value of its neighbor. We'll also put a j there. The j is just a numerical constant, which I'm sort of sorry I introduced, but let's leave it there. Since I started. I'm also going to put a minus sign there, okay, that's the energy. The minus sign means that it favors them being parallel. You want to lower the energy as much as possible and you lower the energy by having them all parallel. If they're all parallel, the product of the spins is all equal to plus one and with the minus sign that I included here, the way to lower the energy is to make them all parallel. What would happen if I put plus j in here? Oh, first of all, let's imagine now the ground state of the system. What is the ground state of the system like? The ground state will have, the lowest energy will have them all lined up. But in which direction? Up or down? There's two ground states and they're the same. They have the same energy. All ups are all downs. Now what would happen if we change the sign of j? In other words, make it plus j sigma i, sigma i plus one. Is it really a different system? No, it's really the same system and all you have to do to see it's the same system is to redefine every other spin by changing its sign. On every other spin what you're called up, you now call down. You redefine the variable and then what you find out is the ground state wants to have them all anti-parallel. Up down, up down, up down, up down. But it's really mathematically an identical system and again there are two states. This particle could be up, in which case this one will be down, in which case this will be up, in which case this will be down or the opposite. So it's mathematically identical. Yeah? Thanks, Mike, go ahead. Now you're missing the point of this. This is in place of my coffee. Okay. Good. And what we want to calculate is the usual thing. We want to calculate our best friend, the partition function. And what is that? That's e to the minus sum. Now what are we summing over? We're going to be summing over all possible configurations. Any one of these could be up or down and that's a huge sum. But we're going to do it. E sum e to the minus j summation sigma i, sigma i plus 1. It looks terrible. It looks very hard. Oops, I left out something. Beta. The temperature, the inverse temperature. So there's a beta there. Did you say that there's no external field? No one. Did you say there's no external magnet? No external magnet for the moment. Yeah, no external magnet. We could do it with an external magnet too and we will discuss that at some point. For the moment, no external magnet and this is the problem we want to do. OK, let me first organize a question that we can ask. I would like to know the answer to the following question. Supposing I know that the magnet over here is up, then I could ask knowing that the magnet over here is up, then what is the probability that n links down the chain here, it's also up? That's called a correlation function. 
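For reference, the objects being set up here, written out, together with the every-other-spin relabeling mentioned for the opposite sign of J:

```latex
Z = \sum_{\{\sigma_i=\pm1\}} \exp\!\Big(\beta J \sum_i \sigma_i\,\sigma_{i+1}\Big),
\qquad
G(n) = \big\langle\, \sigma_i\,\sigma_{i+n} \,\big\rangle,
\qquad
\sigma_i' \equiv (-1)^i \sigma_i \;\Rightarrow\;
+J\sum_i\sigma_i\,\sigma_{i+1} = -J\sum_i\sigma_i'\,\sigma_{i+1}' .
```

The last identity is the statement that the anti-aligning sign of J is the same model with up and down relabeled on every other site, so nothing new is gained by changing the sign.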
It's the conditional probability that if we know that a spin over here is up, what's the probability that at some other place it's also up? You could ask it a different way, and it's the same question really: what is the average of the product of spins at two different locations? Now you might think, and in this case you'd be right, you might think that if you go far enough down the chain, a spin being up over here would have very little effect farther down the chain, and you might expect then that the average of the product would be zero, because whatever this spin is up here, this one has equal likelihood to be up or down. You might guess that, but you might be wrong too. There might be an effect of just having one spin up over here that would propagate all the way through the system and tell you that there's a net bias throughout the whole sample. You could diagnose that by looking at the average value of the spin over here, let's call it the spin at point i, times the spin at i plus n, where n is the number of units down the chain. If there does exist this kind of memory, where if this spin is up it biases the system all the way out to infinity, then this average would not go to zero at large distances. If on the other hand that bias does go to zero, then this one being up would not bias this one, and on the average the product would be zero. So this is an interesting diagnostic test of the effectiveness of one spin being up on its neighbors and how far that propagates down the chain. So let's see if we can guess what the answer is for this case. Think of this not as a spin chain or a magnet; think of it as a game of telephone. You know the game of telephone, where you whisper to your neighbor. So we have a long chain of people and a message is going to start at one end somewhere. It doesn't have to be at the end, it could be over here. And the message is going to be a very simple message, either zero or one. We're not going to make fancy messages like the kids who play this game, my dog had a heart attack and therefore I'm off dog food for the rest of my life, I don't know what. No, just zero or one. And people hear pretty well and people talk pretty well among this group of people, but the fidelity is not absolutely perfect. It's pretty good, but it's not absolutely perfect. The question is how far down the chain does the signal propagate before it gets lost, before it just becomes equally likely, far down the chain, that the person agrees with the starting point or disagrees. The answer is that no matter how good the fidelity, as long as it's not absolutely perfect, you'll lose memory sufficiently far down. If there's a probability for an error of 1% at each step, then the probability for no error is 99%. But then to go two units down the chain, it's 99% of 99%, and that's .9801, right? 99 squared is 100 squared minus twice 100 plus 1, that's just the binomial theorem, which gives 9801, so .9801. I'm not going to try to do the next one. But each time it goes down by a factor of .99, and if you go far enough down the chain, that product will get arbitrarily small. So the probability 100 units down the chain that the signal is remembered with good fidelity is zero; well, not zero, but 100 units down the chain is about where it begins to fail seriously. 10,000 units down the chain, basically no memory. The same is true here. It's the same thing here.
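For the record, the telephone-game arithmetic above carried a bit further, a trivial sketch with f = 0.99 standing in for the 1% error rate per step:

f = 0.99
for n in (1, 2, 100, 10_000):
    print(n, f ** n)   # probability that no error at all occurs in n steps
# n = 2 gives 0.9801, n = 100 gives about 0.37, n = 10,000 gives about 2e-44: essentially no memory left.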
If this spin is up, there's no signal now, but there is a bias, and the bias is the energy bias here. The system prefers a lower energy. At zero temperature, zero temperature is like the perfectly accurate signal, infinite fidelity, just like the situation where there's no loss whatever, and then everybody just lines up. If this gentleman says up, then this lady down here will say up. If this gentleman says down, this lady will say down. So it's perfect correlation. But if there's any slight infidelity, not quite the word I was looking for, but you know what I mean, it will fade. And that's what will happen here. That's what will happen here. And Ising got that right. He got that right. Okay, so let's see if we can calculate it. What's the trick? There is a trick. The trick is a marvelous trick. I imagine it was invented by Ising, I don't know. And the trick was to focus, instead of on the spins, on the links between the spins. So here is what I imagine Ising said to himself. He said, look, there are two possible values for the first spin. We're going to imagine for the moment that the chain is finite. Let's imagine the chain is finite, and then we'll let it get very big. But let the chain be finite. And the first spin can either be up or down. Let's start by assuming it's up, and later on we'll put back the configurations where it's down. So we're going to write the partition function as a sum of two terms. In the first term, the first one is up. And in the second term, the first one is down. But let's concentrate on the one where this spin is up to begin with. This one at the beginning of the chain is up. All right, then as my information about this next spin here, I can either use the value of the spin or I can talk about the product of the spin times this one. For example, let's take sigma one times sigma two. Since sigma one is known to be up, sigma one times sigma two will tell you everything you want to know about sigma two. So let's call that mu one. And what is it? It's a variable that has to do with the relationship between the first spin and the second spin. But if you know it, then you know what the second spin is. What about the next one? Let's introduce sigma two times sigma three and call that mu two. That's telling you, if you like, the state of the link in between. Instead of thinking of the spins, think of the links. The links have two possibilities, either parallel or anti-parallel. That would tell you nothing about any individual spin unless you knew the first one was up. If you know the first one is up and you know mu one, then you know sigma two. Supposing you also know mu one and mu two, then you know what sigma three is. In fact, sigma three will just be mu one times mu two. Why is that? Mu one times mu two is sigma one times sigma two, times sigma two times sigma three. Sigma two is either plus one or minus one, so multiplying it by itself gives you one, and sigma one is known to be up. You find out that sigma three is mu one times mu two. If you know mu one and you know mu two, you know the first three spins. Obviously, if you know the mu's for all of the links in between, you know all the spins. There's no redundancy. There's no double counting. As long as you know that that first spin is up, then there's no double counting. It is equally good to know the mu's, which live, so to speak, on the bonds between the spins, as it is to know the spins themselves. But that's very useful. Why?
Because the energy is just made up out of these bond variables, bond meaning the relationship between neighbors. So here's what we can write. We can write that this energy is just a sum over the bonds, the neighboring bonds: sum over the bonds, first bond, second bond. Notice there's one fewer bond than there are particles. It's a sum over the bonds of mu i, in other words a sum over i of mu i, the first bond variable, the second bond variable, and so forth, times minus j. We're not even multiplying anything anymore. It's just the individual bonds. And the individual bonds are all independent. There are no relationships between them. As long as you know that that first spin is up, there are no relationships among them. And it is as good to know the mu's as it is to know the sigmas. So you can substitute. You can substitute for the sum over spins the sum over the values of the bonds. What are the possible values of the bonds? Plus one and minus one. Now it's plus one if they're aligned and minus one if they're anti-aligned. So we can write z, instead of as the sum over the spins, as a sum over the bond variables of e to the minus beta times the energy of the i-th bond. Now in fact, I'm going to tell you that it's twice this answer. Why is it twice that answer? You can start with the other one; we could have started with the first spin down. But a factor of two is no big deal in a partition function. The other thing to remember, if the number of magnets is very large it doesn't matter much, but we should remember that there's one fewer bond than spins. Other than that, this has now reduced to exactly the problem up there, except that what was called sigma up there is here called mu. It's a sum over uncoupled individual bonds. It has exactly the same form as the partition function for the simpler magnet. But remember now, a mu does not stand for the value of a magnet. It stands for the relationship between two magnets, between two neighbors. But still, we know the partition function. There it is. Same thing, twice cosh beta j. So let's write it in full. It's twice this, so we're going to have an extra factor of two on the outside of it, then twice cosh beta j. And now let me put in the thing which I didn't put here, which is that if I have a lot of spins, this gets raised to a power. So this is now raised to the number of bonds, which is the same as the number of magnets minus one. That minus one is not going to... Okay, so are you with me? Everybody with me? I have a question, please. Yeah? In the first expression of z where you had sigma i sigma i plus one, there was a sum. The sum is gone in the lower one. Sorry, where are you? The last line and the one before... Oh, sorry. No, there's certainly a sum here. Excuse me. There is a sum here. Is this what you're talking about? Yeah. Minus a sum over i of that. All right, but the point is, of course... Does that make the argument of the cosh also a sum? No, no. That's a sum. It's raised to the nth. Remember that a sum in an exponent just means a product. Yeah, yeah. Right. So since this is just a sum over each individual bond, the thing just factorizes into separate sums for each one of the bonds. So all it is, is that for each one of the bonds the energy is a sum, and that means the Boltzmann factor here is a product, and summing over each bond individually just factorizes.
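A sketch checking the closed form just written down, Z = 2*(2*cosh(beta*J))**(N-1), against the brute-force sum over all configurations of an open chain; my own check, with arbitrary N, J and temperature:

from itertools import product
import math

N, J, beta = 8, 1.0, 0.7

Z_brute = 0.0
for config in product((+1, -1), repeat=N):
    E = -J * sum(config[i] * config[i + 1] for i in range(N - 1))
    Z_brute += math.exp(-beta * E)

Z_formula = 2 * (2 * math.cosh(beta * J)) ** (N - 1)
print(Z_brute, Z_formula)   # the two numbers agree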
If you have trouble seeing exactly why it factorizes, the thing to do is to write it down for three spins with two bonds. Okay? Just write it down with three spins, two bonds, write down the full expression, and you'll see very quickly why it factorizes. So that's what ultimately yields the n in the exponent. That's what yields the n in the exponent. Yeah, that's right. And we actually did it both ways. We did it last time for the single spin, for the problem on the top. We did it both ways. We did it by thinking of the whole system and counting all the configurations, and then later we just did it for one spin and then took it to the nth power. And that's a legitimate thing to do. All right, so we've reduced it to exactly the same problem as before. The only difference is that the physical meaning of the degrees of freedom here is a little bit different. Okay. Now what is the average? Let's not ask about the average of the spin. Let's ask about the average of mu, the average of the product of neighboring spins. The average of the product of neighboring spins anywhere along the chain is going to be exactly the same calculation as this. So let's write it down. The average, we could either write it as any given mu or we can write it as the average of sigma i, sigma i plus one. That's going to be tanh beta j. I think with my conventions it's tanh beta j. Tanh beta j, that's correct. So it's not zero. The fact that it's not zero and positive, we're taking j to be positive, the fact that it's not zero and positive tells us that there's a net tendency for spin i to line up in the same direction as the i plus first one. That's the same thing as saying there's a bias for mu to be positive. There's a bias. If the first spin is found up, the next one has a better than even chance of also being found up. And what's the better than even chance? Basically tanh beta j. Okay, so this tells us the correlation between neighbors. And there is a correlation between neighbors. But let's now go far down the line. That's exactly the question which I asked when we started: what's the correlation between the first spin, the i-th spin over here, and one n units down the chain? How are we going to get at that? Well, the answer is very simple. You just write, let's see, let's use this blackboard over here. Here's what we want to calculate: the average of sigma i times sigma i plus n. I'm leaving a gap in there because I want to write some things. What I'm going to put in here is sigma i plus one, sigma i plus one. What's sigma i plus one times sigma i plus one? I didn't do anything. And then I'm going to put in sigma i plus two, sigma i plus two, and so forth until I get to the last one here. But now I can write this as the product mu one, mu two, mu three, dot, dot, dot, down to the last mu. Now how many is it? How many do I have in going down there? One for each bond between the two spins, so n of them, where mu one means this thing over here, the next mu two is this thing over here, mu three is the next one here, until we get down to the end. All right, so now all we want to know is the average of the product of these. But all of the mu's are independent of each other. The energy is the sum of the energies, the problem completely factorizes. The average of any one of the mu's is tanh beta j. So what's the answer? The answer is tanh beta j to the power n, one factor for each of the n steps between the first spin and the last one.
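Here is a brute-force check of that correlation function on a short open chain, a sketch of my own with arbitrary parameters; there are n bonds between sigma_i and sigma_{i+n}, so the prediction is tanh(beta*J) to the power n:

from itertools import product
import math

N, J, beta, i = 10, 1.0, 0.6, 2   # chain length, coupling, inverse temperature, left spin index

Z = 0.0
corr = {n: 0.0 for n in (1, 2, 3, 4)}
for config in product((+1, -1), repeat=N):
    w = math.exp(beta * J * sum(config[k] * config[k + 1] for k in range(N - 1)))
    Z += w
    for n in corr:
        corr[n] += config[i] * config[i + n] * w

for n in corr:
    print(n, corr[n] / Z, math.tanh(beta * J) ** n)   # the two columns agree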
This formula expresses exactly the idea of the game of telephone. Tanh beta j is, first of all, always less than one, except when beta is infinite. That's absolute zero temperature. Zero temperature, perfect fidelity: this is one, and no matter how far down the line you go, the answer will be one, which is the statement that at zero temperature, if you know that this one is lined up, every one of them is lined up. Only two possibilities, everybody lined up this way or everybody lined up that way. In either case, the product is one. But if there's any loss of fidelity at all, which means in this case the temperature is not absolutely zero, then this tanh beta j is less than one, and each time you go another step you lose a factor of tanh beta j. That's like losing this 99%, you know, 99% of 99%. So whether you like it or not, when you go sufficiently far down the chain, this correlation function will become arbitrarily small. There will be no memory, or rather a negligible, exponentially small memory. This is an exponential function of n, a number less than one raised to the nth power, so the correlation falls exponentially with distance. Yeah. I thought the correlation function was just sigma i times sigma i plus n. The average of it. The average of it. Right, right. But I mean it doesn't have all these ones in between that you have here, right? I did a trick. I put ones in there. Why? Because I wanted to re-express it in terms of the things which I know, which are the mu's. So it's a marvelous trick. You concentrate on the bonds instead of the spins themselves. I don't know if this was invented by Ising or not. If it was, he deserves some applause. Okay. This incidentally is a pattern. This incidentally is called a duality. In modern physics, this could be thought of as the first duality, an equivalence of different systems. We found an equivalence between a theory of spins which are coupled to their nearest neighbors and another theory up there of just spins which are uncoupled to each other, with a change of variables, a clever change of variables which basically interchanged sites and bonds. Sites are the points of the lattice; bonds are the links between them, we would say. That's the first example of a duality between different statistical mechanical systems. Yeah, edges and vertices. It's not always edges and vertices, but in this case it is. You said there was a bias to anti-alignment because that's a lower energy. That's always minus one. No, lower energy is when they're aligned. I think. That's the part I missed. Yeah, okay. If they're aligned, the energy is negative. That's why I put the minus sign here. So the minus sign means this is a lower energy state than this. It's not going to matter. That's what a ferromagnet does. In ferromagnets, the magnets line up. It's a matter of very intricate detail whether the little elementary magnets in a material prefer to align or anti-align. Most of the time they prefer to anti-align, but in iron they prefer to align. That's a matter of detail. Those are details we're not going to go into. Okay. Okay, so we found out some things about the one-dimensional Ising model. We found out that it's never magnetized, except at zero temperature. Magnetization can be translated into the statement that there's this long-range memory: if one spin is up, all the others will be biased up. That's called magnetization.
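The exponential fall-off can be packaged as a correlation length. In the usual notation, and this is my own rewriting rather than anything written on the board, with xi standing for the correlation length:

\[
\langle \sigma_i \, \sigma_{i+n} \rangle \;=\; (\tanh \beta J)^{\,n} \;=\; e^{-n/\xi},
\qquad
\xi \;=\; \frac{1}{\ln \coth \beta J}.
\]

As beta goes to infinity (zero temperature), tanh beta J goes to one and xi diverges, which is the perfect-fidelity case; at any finite temperature xi is finite, and as beta goes to zero it shrinks to nothing, so not even nearest neighbors are correlated.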
And it's equivalent, in fact, to the statement that if you put a tiny, tiny magnetic field on the system to bias it, it will cause everything to line up. We'll see that. We'll get that in a little calculation in a little while. Okay, this is a boring system. Has no phase transition. It does exactly what you might have expected it to do. Correlations fade as you go down the line. And the reason is simple. It's the same reason as in the game of telephone. There's a statistical probability that you'll make an error. Once you make an error, you start over again with a new message and you wait until you make another error, and then you start over again, and that's the way it goes. Nevertheless, there will be some clumping of a tendency for clumps of a thing, just like in the game of telephone, there will be long stretches of agreement, long stretches of agreement, and then statistically on the average, every so often a switch, but then long chain of agreement again and then a switch. Okay, so yeah. At high temperatures, then that's like a fidelity being very bad. At temperatures like the situation where then you just can't hear your neighbor. You get no information from your neighbor, so you wind up whatever your neighbor says making a random guess. In that case, let's see. That's the case, beta equals zero. Tanch of zero is zero. So that's a statement that does not even correlation between the nearest neighbors. Excuse me. Okay, boring system. How do you make it more interesting? What's wrong is the dimension is too low. In a one-dimensional chain, once you make a mistake, that next fellow doesn't have any support telling you what the right answer is because he just gets his message from the previous person. Okay, what about higher dimensions? So now instead of playing telephone, let's play a different game, Kevin over here starts a message and he sends the message correctly to the fellow on back of him, the fellow to the left of him. And then they each send the message to all their neighbors. In particular, they will wind up sending a message to the person that Kevin has already sent the message to. He's going to use his judgment. In particular, as this spreads throughout here, people are getting messages from different sides. They're going to be able to make their judgments. In fact, everybody will get basically four messages from all the people around them. If one of those messages happens to be wrong, or three of them are right, what will he do? He'll do what computer scientists call error correction. Error correction just means he will take the majority vote. This works much, much better. And in fact, if the fidelity is good, the bias of the initial message will spread throughout and off to infinity. Same thing happens in two-dimensionalizing model. In fact, the bias of any one spin will bias the rest of the, and in fact, we could just say, putting a little magnetic field on one spin in two dimensions will bias the whole sample. This is not completely obvious. When I thought about it a little bit, I realized I can't prove that. I actually can prove it, but I can't prove it in an easy way. The way I prove it is by going through the analysis of the two-dimensionalizing model, which I know how to do. But the fact that Ising got it wrong, it wasn't, there were worse things in the world. He got that wrong. And in fact, he thought it was wrong in every dimension. But if you think about it for a minute, you realize that it's very dimension dependent. Why is it dimension dependent? 
Well, in one dimension, each person has two neighbors. That's not very many. He doesn't have much of a support system. He gets the message really only from one direction. How many neighbors does a two-dimensional system have? Four, if it's a square lattice. So that means you're getting messages from four people. You have a pretty good chance of getting the right answer, even if the fidelity is not so good, if you use some weighting procedure such as taking the majority. You do pretty well. You do much, much better than if you were just getting the message from one person. How about in three dimensions? You have six neighbors. How about 10,000 dimensions? Whatever it is, 20,000 neighbors. I used to have this friend, Artie Harris. Artie Harris was the first black Mr. America. Anything you ask him, anything you tell him, he would say, 100 men can't be wrong. I don't know what he had in mind. But in high dimensions, it's true. 100 neighbors won't be wrong very often. Fluctuations among large numbers of variables tend to be very small in comparison with the net value; you know, if you have 100 plus or minus ones, then the average fluctuation will be much smaller than the net magnitude, square root of n kind of statistics. So you have a much better shot at being able to propagate the information of one spin throughout the entire lattice the higher the dimensionality. So that leads us to ask the question, what will the Ising model be like in very high dimensions, some limiting number of dimensions, 10,000 or whatever it is, and what can we say about it? Does this correspond to a physical phenomenon, higher than three dimensions? Turns out two is high enough. It turns out that two behaves like 10,000. Curious. Two is high enough. To answer the question of where the number of dimensions d becomes large enough to be able to trust the large-dimension argument is hard. Three is very large in this game. But that's a matter of computation, having solved it or solved it numerically or whatever. We know that three is a very large dimension. So the answer, which is not obvious, is that only the one-dimensional Ising model does not have a transition, does not have this tendency. But that's not something we could have predicted easily. What we could have predicted easily, not too hard, and we're going to do it tonight, is that in sufficiently high dimensions there's an awfully good argument to say that there's a transition: at high temperatures everything's random, and as you lower the temperature, everybody wants to line up, and that lining up propagates throughout the system to infinity. OK, the trick that I'm going to use, there are many tricks to do this with, I'm going to use something called the mean field approximation. I'll tell you what it is. Each site is surrounded by how many neighbors? We're going to do d dimensions; d stands for dimension. Each site is surrounded by 2d neighbors. We're going to imagine d is very large. And because d is very large, you can imagine that all of the neighbors define a field, let's call it a field, whose fluctuation is much smaller than its average value. If you have a large number of variables and they're biased, so that they all add up to something of order the number of variables, the fluctuations around that will typically be much smaller than the bias. OK, so that's what we're going to do. Let's focus on that spin. And all the others we're going to make an approximation about.
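Before doing the mean field calculation, the remark that 100 men can't be wrong can be made quantitative with a little toy computation, entirely my own illustration: if each of k independent reports is right with probability f, the majority vote is wrong with a probability that collapses as k grows.

from math import comb

def majority_wrong(k, f):
    # wrong when more than half of the k independent reports are wrong
    return sum(comb(k, m) * (1 - f) ** m * f ** (k - m) for m in range(k // 2 + 1, k + 1))

for k in (1, 3, 5, 7, 21, 101):
    print(k, majority_wrong(k, 0.9))   # f = 0.9 is an arbitrary choice; k is kept odd to avoid ties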
Let's focus on this spin and write its energy. Just the energy of this one spin, the energy of this one spin is minus j times sigma times the sum of the neighboring spins all around it. So this is the sum of the neighboring spins, sum of the neighbors, any i, g, h, b, or r. Now let's suppose that there is a bit of bias and that the average of the spins is not zero. Of course, we're going to check that. That's all we're going to try to check. Let's suppose that the average of the, I'm going to write a formula. The formula would allow the spin to be zero too, but let's suppose throughout this lattice that the spin has an average called sigma. Maybe I can make it simpler. Let's call it sigma bar. This is the average spin and we don't know what it is. It might be zero, it might be plus, it might be minus, we don't know what it is. But I'm going to replace the sum over neighbors here by simply twice d times the average. Now that's a pretty good approximation if the number of neighbors is large. And the larger the number of neighbors, the better the approximation. We have large numbers as a typical rule we're allowed to average and the fluctuations are small. So we can replace this by minus j, 2d, because there are 2d neighbors, minus 2dj, and not sigma times its neighbors, but the particular spin at this point times the average spin. That's the energy of one particular spin sitting in the bath or in the field of all the others. This is called mean field approximation, mean not being in the sense of nasty, but mean in the sense of average and field now being the field experienced by one spin in the field of all the others. So this is the energy of this particular spin over here and now what we can do with it is do the partition function of that one spin. In fact, we don't even need to do the partition function. We know how to do the partition function. It's exactly the same calculation we did over here except instead of writing j, we will write 2d times j. Exactly the same calculation as here except every place we saw j in the upper calculation we will have 2d times j. So I can immediately write down what the average spin is. I can immediately write down that the average of this spin, this one over here, let's call it sigma double bar for a minute. The double bar indicates that I'm talking about this one. Its average spin is just going to be tanh. Now it's not going to be beta j. It's going to be beta times 2dj times sigma bar. What did I say? Did I say every place that we saw j we should stick to dj? No. Every place we see j we should stick this whole thing. 2dj times sigma bar. The spin over here is moving in the background of all these others and others all constitute a constant field that we just called sigma bar. So the answer is going to be the average of this spin is hyperbolic tangent of 2dj sigma bar. Yeah, thank you. Two beta dj sigma bar. Let me write it over again. Let's go to this blackboard over here and write it nice and clear. One double bar, the average of this particular spin is going to be hyperbolic tangent of 2 beta dj times sigma bar. Now this mean field approximation is also sometimes called self-consistent field approximation. Why self-consistent? Well self-consistent because, hopefully self-consistent, because this spin over here is no different than all the others. It's just one of the many spins in the system. 
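A quick sketch of why replacing the sum of the neighbors by 2d times the average is safe when d is large, my own illustration with sigma bar and the sample size chosen arbitrarily: the neighbor sum has a mean that grows like 2d times sigma bar, while its fluctuation only grows like the square root of 2d.

import random, statistics

random.seed(1)
sigma_bar = 0.4
p_up = (1 + sigma_bar) / 2   # each neighbor is +1 with this probability, so its average is sigma_bar

for d in (1, 3, 10, 100):
    sums = [sum(1 if random.random() < p_up else -1 for _ in range(2 * d))
            for _ in range(20000)]
    mean, std = statistics.mean(sums), statistics.pstdev(sums)
    print(d, round(mean, 2), round(std, 2), round(std / mean, 3))
# the last column, the relative fluctuation, falls off roughly like one over the square root of d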
And if all of these have an average equal to sigma bar, the physical intuition would require that you would say that the one over here also has the same spin, same average. That's got to do with translation invariance. It's got to do with that everyone is really the same as every other one for a big sample. And so the self-consistent field theory is to say that sigma double bar is the same as sigma bar. We have an equation. We now have an equation for what sigma bar is. It's an equation. Well, it could tell us what sigma bar is. No? Sigma bar is proportional. Yeah, yeah, yeah, yeah, it's inside. I put the bracket around here to indicate that this was a constant. It's a constant with proportional to the inverse temperature, but yes, but this is inside the bracket. Sigma bar is inside the bracket. So this is an implicit equation for sigma bar. We want to solve it. This is for all time. This equation would apply to all temperatures. When the variable is almost equal to the tangent, doesn't that mean very small variable? No. No. Tangent can go anywhere from zero to one. Tangent can go actually, tangent can go anywhere from minus one to one. So the question is how do we solve this and how do we get an intuition? Question? Yeah. So before, I would have expected you had written angle bracket sigma equal tangent. Yeah. Is the sigma bar the same as? Yeah, I just didn't feel like writing angle brackets. You didn't hear me when I said it. Okay, I'm sorry. No. I decided to use the bar notation instead of the bracket notation. Another question. I'm a little confused about your D. I can see how you could have three or four, but if you talk about greater than four, are you talking about nearest neighbor, next nearest neighbor? No, nearest neighbor. These are all nearest neighbors? Yeah. How many nearest neighbors does the lattice and D dimensions have? Two. And one dimension, two. And two dimensions, four. And three dimensions, six. How many of four dimensions? Eight. Eight. Eight. If you include it next nearest neighbor, it's going to be equivalent to getting a higher dimension. Yeah. Yeah. Yeah. Yes, next nearest neighbors would go in the same direction as making a higher dimension. Absolutely. But you have a scale factor there, wouldn't you? Would you have a scale factor for next nearest neighbor and next next nearest neighbor? Would you have some geology? Yeah, sure. Sure. So we picked a particular situation and studied it as a function of dimension. We could study it as a function of the nature of second nearest neighbor couplings, third nearest neighbor couplings. We could do all these things. This is just one way of making a spin have a lot of nearest neighbors. The physics of high dimensions is not something we realize in the laboratory. We do this all the time. We want to prove something about a system. It's a little too hard. We can't get our hands on it, but we can go to some limit where it becomes much easier. We prove it in the limit. We haven't proved it for the physical case of interest, but at least we can prove that somewhere between, in this case, one dimension and 100,000 dimensions, a certain new kind of behavior happens that wasn't present in one dimension. Once we see how that works, we can then ask, where does the change happen? And in this case, it happens between one and two. But this is a nice, simple example where you can see what happens physically without having to solve a difficult mathematical problem when D is very large. So let's try to solve this equation. 
The first thing to do is to change variables. I know you hate to change variables, but there's a good reason. Tanch of a complicated thing. I don't want a complicated thing in tanch. That's a nuisance. So I'm going to say two beta dj. This is a number. Two beta dj. They're all numbers for our purposes right now. This is just a number. I'm going to change variables so that two beta dj times sigma bar is itself a variable. And I'm going to call it y. Why? Because. Tanch. On the left side. What about the right side? The right side is just y divided by two beta dj. Okay? Now, to solve this equation, I'm just going to graph both sides and see where they intersect. The crudest way to solve an equation is to just draw a graph of both sides and see where they intersect. This is y. This, the vertical axis is just tanch. Tanch of y. Okay? And here's the tanch function. Now the left-hand side is just a linear function of y. Something else in here, let's, instead of calling this beta, let's take one over it and put the temperature in the numerator. It's the temperature. Okay. So let's plot the right, the left-hand side here for different values of the temperature, which is another color, and plot it for different values of the temperature. First of all, let's go to very high temperature. At very high temperature, this is a very steep function. Of course, d is a large number, so how big large temperature is may depend on d, but eventually you get to a large enough temperature that the slope of this side is large. So we go to some very large temperature and the slope is high, and there's no intersection except the intersection at y equals zero. The answer, the only possible answer to the self-consistent field theory is that y equals zero. But what is y? y is proportional to the average of sigma. So we learned, first of all, that at very high temperature that the average of sigma must be zero, as expected. That's not too surprising. Okay? But now let's start lowering the temperature. As you lower the temperature, the slope of this curve decreases. There is a point where the slope is actually equal to one, and the red line here is tangent to the tanch, tangent to the tanch. What a lovely idea. It's tangent to the tanch here. Where does that happen? That happens when the slope, that happens when the temperature, slide it down, when the temperature over twice dj is equal to one, that's when the slope, or when the temperature is twice dj. Something happens. Temperature is equal to twice dj. When you go beyond that, there's now some new solutions. Soon as you get past one, you could be over here. Just as this line rotates, soon as it gets past slope one, there's now solutions along here. There's also a solution over here, and down here, but let's assume that j is positive. Just to be clear, let's assume j is positive. So we're only interested in the right-hand side here. Now there are new solutions, and for the new solutions, y is not equal to zero, and therefore the average of sigma is not equal to zero. In other words, the system has an overall global average of spin, pointing all in the same direction, and that's the phenomenon of magnetization. What happens here is called a phase transition. At that point, there's a net magnetization, there's an average magnetization, an average field that just permeates the whole system forever. Now wait, wait a minute, you say, hold on, what about this solution? How do I know that the right solution wasn't the one at the origin? 
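Before answering that, here is the graphical argument done numerically, a sketch of my own with d = 3, J = 1 and an arbitrary iteration count: iterate sigma = tanh(2*d*J*sigma/T) from a starting guess of one and watch where the nonzero solution disappears. The transition should sit at T = 2*d*J = 6.

import math

d, J = 3, 1.0
for T in (2.0, 4.0, 5.9, 6.1, 8.0, 12.0):
    sigma = 1.0
    for _ in range(10000):
        sigma = math.tanh(2 * d * J * sigma / T)
    print(T, round(sigma, 4))
# below T = 6 the iteration settles on a nonzero magnetization; above it, only sigma = 0 survives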
For any value of the constants, the red curve does intersect the black curve at the origin. So how do I know which one is right? How do I know which one is right? Well, we're going to do a little calculation, I don't think we'll do it tonight, maybe next time, where we're going to add to this problem a teeny little magnetic field that biases the system. And we're going to discover that the teeniest little magnetic field will tell us that we should be on this branch of the solution and not here. But we can see it another way. Let's go to zero temperature. Zero temperature is way out here, and we know what to expect at zero temperature. At zero temperature, we expect alignment. We expect everybody to align, and so we expect these things to lock into a parallel configuration. That's way out here, everybody aligned. And as you raise the temperature, you slide in along this curve until you get to the transition point here, and beyond the transition, at higher temperature than the transition point, the only solution is at the origin. So that's the nature of a phase transition, which we'll talk about a little more, but I think I won't try to do it tonight. Next time, we will put, actually it's pretty easy, it's pretty easy from this point to put a little magnetic field in. Is everybody up to a little magnetic field? Yes, what happens if you put D equal one, why does it stop working? It's just too low. The only guarantee is that at sufficiently high dimension it works. We know it doesn't work for one. And I thought there was something mathematically beautiful, something happens when you make D equal one. Something blows up. It does. It does. Let's see what it is. Let's go to absolute zero temperature first. Absolute zero temperature, everybody wants to align. Now let's imagine writing the partition function as a sum over all the configurations. How can we enumerate the configurations starting with this configuration? Well, we can enumerate them by the number of spins that are flipped relative to this configuration. So we can start with writing the partition function. In writing the partition function, we would start with this configuration, e to the minus beta times the energy of this configuration. Starting at low temperatures, this would be the dominant configuration. Then we raise the temperature a little tiny bit, and some new configurations start to become important. For example, one spin flipped. Let's flip one spin. In fact, let's do the same thing over here, the same thing over here in two dimensions. Here we have plus, plus, plus, plus, plus, plus, plus, et cetera, as our starting point. Everybody's pointing in the same direction. Now let's ask what the next state is, next in energy. This is the lowest energy state. The energy is stored in the bonds. That's sigma dot sigma, which is the energy of a pair of neighbors. What happens if you flip one spin? Instead of being plus, let's make this one negative. How much energy do we get? How many bonds have we broken? A broken bond means that instead of being parallel, we have anti-parallel. How many bonds have you broken? Four. Right. And how many places are there that you can do that? Basically on each site. Each site. The number of places you can do that is the total number of sites. That means there's a number of configurations that you can add in with four extra units of energy. Now supposing we want to flip two spins, the next configuration, what's the next configuration? We flip two spins.
What's the energy of that configuration? Can you tell? How many broken bonds are there? Okay. Let's see which bonds are broken. The minus-minus bond is not broken. They're parallel. This one's broken. This one's broken. That one's broken. This one's broken. This one's broken. And this one's broken. One, two, three, four, five, six. So I've increased the energy by six units of each broken bond cost a certain amount of energy. I've now increased the energy by six broken bonds. What would have happened if I would have put the minus sign way out here, far away? Then I get eight. So each time I increase the number of flipped spins, it costs me something in the Boltzmann fact, the e to the minus beta times the energy, the energy goes up. Each time I flip another spin, the energy goes up. Okay. Now let's compare that with one-dimensional case. How much energy does it cost to flip one spin? How many bonds get broken? Two, right? How about to flip two spins? If they're adjacent, it's still just two. How about three spins? You can flip any number of them and still only cost two units of energy. So yes, I picked them adjacent. That means there's a lot of configurations all with the same energy, much, much more here. Here, in this case, the number of configurations with four extra units of energy would be just proportional to the number of sites. How about here? The number of configurations with two extra units of energy, that's proportional to the square of the number of sites. It's proportional to the square of the number of sites because you can pick any two bonds and flip all the spins in between them. So just a lot of configurations where a lot of spins are flipped. That's why it's unstable with respect to flipping lots and lots of spins. And as soon as the temperature is turned on, it costs very little energy to flip whole big loads of them, whereas here, the flip one costs you some energy. Okay, that's the same as there. But the flip two costs you more energy than one. That's the basic mathematical thing that's going on. If you just simply look at this equation that you got from this mean field thing, you don't see any difference between d equals 1 and d equals 2. Well, except in the factor of d there. Well, there is, you see the difference in the coefficient here. But I mean, you still have the space transition phenomenon. No, no, no, exactly. You don't see the difference between 1 and 2, that's right. Why are you confused? This only made sense if d was large. Over here does, but I'm trying to relate it back to this. The formula only makes sense if d is large. The formula only makes sense, the whole physics, the whole argument only makes sense. So how do you turn on the little magnetic field? What's that? Okay. How do you turn on the little magnetic field? All right, so now let's turn on a tiny magnetic field. To do that, where's our energy function? Here's our energy function over here. I'm still concentrating on one spin. But now in addition, we imagine the whole system has an external magnetic field. That means that each spin has an extra energy not related to its neighbors, but just whether it's up or down. Let's call the magnetic field b times sigma. With b times sigma, it's favoring down. If b is positive, it's favoring down. But we're going to imagine eventually that b is very small. This is the extra energy not related to its neighbors. Each one of them, each one of them, each of the spins has this extra term here. And that's the whole difference. 
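Before going on with the magnetic field, the bond counting that was just done is easy to check mechanically. A sketch of my own, with open boundaries and arbitrary lattice sizes; a broken bond means an anti-parallel pair of neighbors, starting from everything up:

def broken_bonds_1d(N, flipped):
    s = [+1] * N
    for i in flipped:
        s[i] = -1
    return sum(1 for i in range(N - 1) if s[i] != s[i + 1])

def broken_bonds_2d(L, flipped):
    s = [[+1] * L for _ in range(L)]
    for (i, j) in flipped:
        s[i][j] = -1
    broken = 0
    for i in range(L):
        for j in range(L):
            if j + 1 < L and s[i][j] != s[i][j + 1]:
                broken += 1
            if i + 1 < L and s[i][j] != s[i + 1][j]:
                broken += 1
    return broken

print(broken_bonds_1d(20, [10]))                  # 2 broken bonds for one flipped spin
print(broken_bonds_1d(20, [8, 9, 10, 11, 12]))    # still 2: flipping a whole block costs no more
print(broken_bonds_2d(10, [(5, 5)]))              # 4, as in the lecture
print(broken_bonds_2d(10, [(5, 5), (5, 6)]))      # 6 for two adjacent flips
print(broken_bonds_2d(10, [(5, 5), (2, 2)]))      # 8 for two far-apart flips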
The whole difference is instead of tanch being tanch of 2dj sigma bar, that's going to be tanch of 2dj sigma bar plus b. Sigma here, sigma factors out. Sigma, everything, put some brackets around here. It's exactly the same problem except we've replaced 2dj sigma bar, sigma bar being a number. We've replaced it by adding a b. What that does is it adds to the variable y, it adds plus b times beta, I believe. We want e times, which is the sign, which sign do I want? I think I want a favor. I guess I want a favor upspin. So let's put a minus there. Let's put a minus, that favor is upspin. Yeah okay, this favor is upspin. Then I guess it looks like this. So this is a new equation. The new equation, let's compare it with the old one. Let's draw the picture. We're just going to plot again the right hand. Is everybody going to see what I did? Yeah, we just added a constant to the argument here. That's all. It's very simple. Okay, what does that do to the tanch function? The left hand side, no, the left hand side was the calculation of the average spin in the background field. Here's the background field now. We calculate the average spin in that background field. We call it? Well it should be tanch to beta dj sigma bar plus. Exactly right. What we did was we took this new energy function, calculated the average of sigma, but we do it exactly the way we did before. The thing inside this blue bracket is just a number again. It is just a number. So we calculate the average spin given this expression for the energy, and then we set that equal to the average. We do the calculation and we get this number here. Tanch y plus beta b. That's gotten by saying the spin over here feels it's neighbors. I'm just saying when you define y, it has to be equal to sigma over 2B sigma. Plus beta b. Right there, that's it. Y is still the same as it was before. I have not changed the definition of y. It's still 2Bdj sigma bar. So this is tanch of y plus beta b. No, no, it doesn't affect the left hand side. The left hand side. Yep, y is exactly the same thing as it was before, and it is the average spin. We calculated the average spin and set it equal to the average spin. On the right hand side, we did a statistical mechanics calculation of the average spin, assuming that the average spin had a certain value. On the left hand side, we said, well, okay, then that must be the average spin. That's why it's called a self-consistent approximation. Yeah, you're going to shift the curve. Let's shift the curve. Let's shift that. That's exactly right. We're going to shift the curve. So let's see. If beta b is positive, that shifts the curve backward a little bit. Okay, that's the new curve. And now let's draw the red curves. The red curves are, the left side is exactly the same as it was before. The right side, the curve has just been shifted to the left. By how much? By beta b. By beta b. So this gap here is beta b. Beta b here. We start at the origin, same curves. But now there's no ambiguity about which solution we should take. There is no solution at the origin. No solution at the origin anymore. Only these points over here, for positive j. Let's say for positive j. What happens at infinite temperature though? At infinite temperature, okay, so let's see what happens at infinite temperature. At infinite temperature, this gets very close to vertical, right? It's going to, no, no, no, no, no. Zero temperature is close to horizontal. Yeah. Infinite temperature is, so here's our answer. 
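Numerically, the effect of the small field looks like this. A sketch of my own, solving sigma = tanh((2*d*J*sigma + b)/T) by fixed-point iteration with a tiny b favoring up; d, J, b and the temperatures are arbitrary choices.

import math

d, J, b = 3, 1.0, 1e-6
for T in (4.0, 6.1, 8.0):
    sigma = 0.0   # start from the unmagnetized guess
    for _ in range(5000):
        sigma = math.tanh((2 * d * J * sigma + b) / T)
    print(T, sigma)
# below T_c = 2*d*J = 6 the tiny field pushes the answer all the way up to the spontaneous value;
# above T_c the response is itself tiny, proportional to b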
Here's where our intersection is right over here. And at that point, y is equal to zero. Beta b goes to the right. Because it was straight, beta b goes to zero. Beta b does go to, no. Yeah, zero. No, we're going to zero temperature. Sorry, we're going to large temperature or small temperature? Yeah, zero and the curve is shifting back to where it just... Yeah. Yeah. Yeah. Doesn't matter. No, no. What do you want to do? You want to go to large temperature or low temperature? Very large. Very large temperature. Beta then goes to zero. Okay, so beta then goes to zero. Yes, the curve shrinks back. Okay, zero. So you're either going to come to the only structure of the temperature, right? So you're going to go to the very, very large temperature. You have to shift your hand. Yeah. Your hand. Yes, yes, yes. Right. So that means that even at very large temperature, even a much bigger... You see what's going to happen. You're right. What's going to happen is this point is going to shrink down to here. It's going to move to here and... Yes, the curve is down. The beta B just goes to zero. Right. So then... So you have no more magnetic arrangement at infinite temperature? No, you have... No, you have... Certainly not at infinite temperature. At infinite temperature, it's random. So the solution... The curve looks like this. The solution is down near here. As you let beta get smaller and smaller, you find no magnetization, as you should for very, very large temperature. All right? But now, as you move away from infinite temperature, you see the only solutions lie on here. No matter how small... Let's go to small temperature now. Small temperature, you move this way back. Okay? So you're certainly on here. And even as you go to higher and higher temperatures, you stay on this curve, you stay on this branch of the curve, although the whole thing shrinks down to here. There's no question of being stuck at this point for all temperatures. Certainly for a randomly temperature which is not infinite, there's only the solution up here. So no matter how small you make the temperature, as long as it's finite... Sorry. No matter... No matter how large you make the temperature, as long as... As large or small, the only solution is up here. Is it clear that as your temperature gets very small, so you're moving out along the curve, but the curve is moving to the left very fast also? Not very fast. It only can get to here. That's as far as it can get. No, no, no. It's going to the left faster. Oh, is it letting the temperature go to zero? Yeah. Yeah, okay. So is it clear that your intersection is going to be getting closer and closer to a height at one? Yeah. So, I mean, the... Could it just move? I know. But it just moves this horizontal line here, just dominates everything. Well, that line is horizontal. It's going to a horizontal... Okay. But the question is... I know. But if you take a curve that goes to horizontal here and you shift it a lot, it's very close to horizontal. The line slope goes to zero. You got some kind of limit thing yet to prove. No. It's... The extended fumes do this. It can add infinity to the slope of the... You're not adding infinity. You go towards infinity. Do you just... I'm just saying the slope of the line is... The slope of the line, the y line, is going down. Okay? So it's getting towards zero. So it's going to intersect one at a long distance. No, it's not going to intersect. It's... The question is where does it intersect? Curve. That curve. 
That curve is always just bounded by... Also, the slope at the place where it crosses zero, the hyperbolic tangent, it's getting smaller and smaller. So it's pushing out this way. Yeah. Two things are happening. The curve... I just think that there's a little bit more that has to be shown. And you've got a curve that's going up 41. And if you move that curve... If I bring this to my car mechanic and I say, Joe, what is this going to do? My car mechanic will tell me every single time what it does. Why? Because he knows, God damn it, how cars work. And this is enough for him. Come on. What are you kidding me? Let me put it this way. At some point, at every point along that curve, at some distance, epsilon, but for y equals one, right? Yeah. And so if you keep moving that thing over fast enough, then even though the curve is intersecting further out... Yeah, but the curve is moving this way. It's moving this way. So all of it... Both things are going in the same direction. Both things are going in the same direction, right? Yeah. Okay. All right, so we see just adding the tiniest little bit of magnetic field will leave only the solution up here, the magnetized solution. Yeah. The external field is too present, right? Which? External field. Yes, but the external field can be made as small as you like. Right. But it's too present, so the magnet is too heavy again. Yes. Yes, but for arbitrary... Think about it this way. For arbitrarily small magnetic field, which means bringing this intercept here to be arbitrarily small, tiny, tiny magnetic field, much smaller than the magnetic field of the Earth, you still have only the solution up here for fixed temperature. Fixed temperature means for fixed slope here. You let the magnetic field go to zero, and you'll be on this curve. Yeah, but the magnet is still in orientation then. Right. Right. This is an instability, the tiniest, tiniest magnetic field. The tiniest doesn't matter how small it is, it will still bias it up to here. That's called spontaneous symmetry breaking. Magnetic field, which is far too small to measure. In fact, if you look at the upper curve, which is without the magnetic field, and you assume the center red point, the zero, the origin, making an infinitesimal move to the left, you see that the curve will intersect infinitesimally up on the... Here? Yeah. And that shows that there is a bias immediately. No. That proves your point. Even if it's just shifted infinitesimally to the left, immediately it cuts the y-axis. The point is that this solution is unstable. Yeah. But is it possible? It's possible, but it's unstable. Okay. You have the tiniest magnetic field, and we'll flip to... I understand. Yeah. If we absorb the d-beta into the y, so that we're comparing tanning, high ball tanning of y, to a left side that has all the betas of expression, then that left side becomes a set of lines that have y-intercept of minus b. So basically the graph... These lines will intersect to the right. Well, if we look at the graph on the top board... Mm-hmm. Yeah? Yeah. Keep the high ball tanning in the same place, and move all the red lines down by the b. Yeah, that's what I said. That's what I said. A little to the left. Then you have only the one... Then we have only one dependent on b. You want to keep the hyperbolic tangent fixed, right? Yes. Shift it a little to the left. Right. So which way does it shift? Left. All the red lines shift down. Right. Right. No, no, you can shift anywhere. The hyperbolic tangent stays... And the red lines... 
All the red lines shift down by b. Yeah, shift down, which means it shifts to the right. Yeah. Yeah. Same thing. Which way you do it? But it's clear that there's no way of orienting that line so that it intersects the tangent at two points. Yeah. Right. So that's the argument. The argument is that the solution where all the spins are equally likely to be up and down is unstable. And what that solution is, is it's sort of 50-50 probability of everybody being up and everybody being down. The whole world being filled with plus spins, slightly... Plus the whole world being with minus spins, that has an average of zero. But if you turn on the tiniest magnetic field, the difference of energies will be enormous. Why? Because you have a billion light-years worth of spins all pointing up and you put in a tiny little stray magnetic field, the energy of up will be much lower than down. Just because you have this great, great, big number of spins. So the solution over here is just exactly what I said. It's all... Basically, all spins pointing up are not all of them, but on the average, everything pointing up, equal probability with, average, everything pointing down. But that's an unstable situation. Just a tiny perturbation in a magnetic field will bias it. And of course, that's what really happens with a magnet, right? I mean, you could imagine that you have a state of a ferromagnet, a real magnet, which has equal probability of pointing everywhere. And then you bring it into the field of the Earth, a tiny magnet that's not that small. But you bring it into the field of the Earth and pretty quickly that magnet knows that the Earth's magnetic field is there and it orients itself in that direction. So this configuration where it's equally likely to be in every direction, that's unstable. And just a small stray field will... The stronger the magnet, the more it's unstable, the stronger or the bigger the magnet, either way, the stronger or the bigger the magnet, the more unstable it is. So what causes the Earth to split its magnetic field? That's a question. I don't know. Does anybody know? That's a question. Why does the Earth's magnetic field flip? I think somebody is. I'm not sure. I don't know. I think it has more current-flip direction over here. Of course the current-flip. Why? Why does the iron core is going to one direction and generating current flips the other way around? Why does it flip? There's some instability. I don't know why. I don't know why. I don't know why. I don't know why other than to say there's a symmetry. It could be going either way, but why it flips it? It's very high temperature, right? The center of the Earth? Yeah. Pretty high. That may have something to do with it. I don't know why. Somebody look it up. I'm not sure anybody knows why. Maybe because there's a time lag because the Earth's field comes from the, I believe, motion of the core. So there may be a time lag between the field and the motion. Why would that make a flip? Well, a time lag can generate instability. I don't know how. No, it's some instability, clearly. Yeah. It's some instability, and I don't know why. If somebody gets a chance to look it up, or maybe I'll get a chance to look it up, I don't know. I know that it flips. That's all I know. Okay. Good. Yeah, we'll talk a little bit more about magnets. Maybe metals. Maybe metals, I'm not sure. Will you know next time what the fall will be? Yeah. Yeah. For more, please visit us at stanford.edu.
Leonard Susskind develops the Ising model of ferromagnetism to explain the mathematics of phase transitions. The one-dimensional Ising model does not exhibit a phase transition, but higher-dimensional models do.
10.5446/14934 (DOI)
Okay, if there are no more questions, tonight we begin the study of statistical mechanics. Now statistical mechanics is not really modern physics. It's pre-modern physics. It's modern physics. And I assure you it will be post-modern physics. It's probably, the second law of thermodynamics will probably outlast anything that comes up time and time again. The second law of thermodynamics has been sort of our guidepost, our guiding light, if you like, to know what we're talking about and to make sure we're making sense. Statistical mechanics and thermodynamics may not be as sexy as the Higgs boson, but I assure you it is at least as deep. And it's a lot deeper. My particle physics friends shouldn't disown me. It's a lot deeper. It's a lot more general. And it covers a lot more ground than explaining the world as we know it. And in fact, without statistical mechanics, we probably would not know about the Higgs boson. All right, so with that little starting point, what are statistical mechanics about? Well, let's go back a step. The laws of physics, the basic laws of physics, Newton's laws, principles of classical physics, quantum mechanics, the things that were in the classical mechanics course, quantum mechanics, and so forth, those things are all about predictability, perfect predictability. Now you say, well, in quantum mechanics, you can't predict perfectly. And that's true, but there are some things you can predict perfectly. And those things are the predictables of quantum mechanics. Again, as in classical mechanics, you can make your predictions with maximal, let's call it maximal precision or maximal, whatever it is, predictability, if you know two things. If you know the starting point, which is what we call initial conditions, and if you know the laws of evolution of a system. If you can measure the, or if you know for whatever reason, the initial starting point of a closed system, a closed system means one which is either everything or it is sufficiently isolated from everything else that the other things in the system don't influence it. If you have a closed system, if you know the initial conditions exactly, or at least with whatever precision is necessary, and you know the laws of evolution of the system, you have complete predictability and that's all there is to say. Now of course, in many cases that complete predictability would be totally useless, having a list of the position and velocities of every particle in this room would not be very useful to us. The list would be too long and subject to rather quick change as a matter of fact. So you can see while the basic laws of physics are very, very powerful in their predictability, they also in many cases can be totally useless for actually analyzing what's really going on. Statistical mechanics is what you use, basically probability theory. Statistical mechanics, let me say first of all, it is just basic probability theory. Statistical as applied to physical systems. When is it applicable? It's applicable when you don't know the initial conditions with complete perfection. It's applicable when you, it may even be applicable if you don't know the laws of motion with infinite precision. And it's applicable when the system you're investigating is not a closed system, whether it's interacting with other things on the outside. In other words, in just those situations where ideal predictability is impossible, then what do you resort to? You resort to probabilities. 
But because the number of molecules in this room is so large, and probabilities tend to become very, very precise predictors when the laws of large numbers are applicable, statistical mechanics itself can be highly predictable, but not for everything. As an illustration, you have a box of gas. The box of gas might even be an isolated closed box of gas. It has some energy in it. The particles rattle around. If you know some things about that box of gas, you can predict other things with great precision. If you know the temperature, you can predict the energy in the box of gas. You can predict the pressure. These things are highly predictable, but there are some things you can't predict. You can't predict the position of every molecule. You can't predict when there might be a fluctuation. A fluctuation, which, you know, fluctuations are things which happen which don't really violate probability theory. They're the sort of tails of the probability distribution, things which are unlikely but not impossible. Fluctuations happen from time to time in the sealed room. Every so often, an extra large group, an extra large density of molecules will appear in some small region bigger than the average; someplace else, molecules will be less dense. And fluctuations like that are hard to predict. You can predict the probability for a fluctuation, but you can't predict when a fluctuation is going to happen. It's exactly the same sort of thing, flipping coins. Flipping coins is a good example, probably our favorite example for thinking about probabilities. If I flip a coin a billion times, you can bet that approximately half of them will come up heads and half will come up tails within some margin of error. But there will also be fluctuations. Every now and then, if you do it enough times, a thousand heads in a row will come up. Can you predict when a thousand heads will come up? No. But can you predict how often a thousand heads will come up? Yes. Not very often. So that's what statistical mechanics is for. It's for making statistical probabilistic predictions about systems which are either too small, contain elements which are too small to see, too numerous to keep track of, usually both, too small to see by eye. I mean, you know, it's true. You can see some pretty small things, molecules, but so they may not be too small to see. But there are too many of them. There are too many of them to keep track of. And that's when we use probability theory or statistical mechanics. We're going to go through some of the basic statistical mechanics applications, not just applications, the theory, the laws of thermodynamics, the laws of statistical mechanics, and then how they apply to gases, liquids, solids, whether we will get to quantum mechanical systems or not. I don't know. But just the basic ideas. Okay. And, incidentally, another thing which is very striking is that generally speaking over the history of, certainly over my history in physics, and I'm sure this goes back to the middle of the 19th century sometime, all great physicists, all of them were masters of statistical mechanics. It may not have been the sexiest thing in the world, but they were all masters of it. Why? First of all, because it was useful, but second of all, because it is truly beautiful. It is a truly beautiful subject of physics and mathematics. And it's hard not to get caught up in it. Not to fall in love with it. The reason I teach it is not for you. It's for me. I love teaching it. I love teaching it.
I teach it over and over and over again. And in a sense, my life has consisted of learning and forgetting and learning and forgetting and learning and forgetting statistical mechanics. So here's my opportunity to learn it again. Okay. Let's begin with what I usually call a mathematical interlude. In this case, it's not an interlude. It's a starting point. And I'm just going to make some extremely brief remarks which you all know. At least I think you all know them about probability. Just to have, you know, just to level the ground, what are we talking about? And what I am not going to explain, because I don't think anybody can explain it, is why probability works. Why does it work? If you ask why it works, the first answer will be it doesn't always work. You may have a probability for something and you test it out. And sometimes it doesn't work. Those are called the exceptions. So the answer to the question is why does it work? Well, it doesn't always work. It mostly works, except when it doesn't. When doesn't it? Rarely. How rarely? Every so often. But there is a calculus of probability, a mathematical theory of probabilities. And we'll talk about it a little bit. Okay. So we'll take probability to be a primitive concept, basically primitive concept. And we'll suppose that there is a space of some sort, a space of possibilities. The space of possibilities could be the space of outcomes of experiments, or it could actually be the space of states of a system. The state of a system could be the outcome of an experiment. If the experiment consists of determining the state of the system, then the state of the system is the outcome. So we have a space, and let's call that space, let's label the elements of that space with a little i. For example, if we were flipping coins, i would be either heads or tails. If we were flipping dice, you know, a single die, i would run from one to six. If there were two dice, then we would have enough indices to keep track of two dice and so forth. So i is the space of possibilities of outcomes, or the space of possible states of a system. And if we are ignorant, statistics always has to do with ignorance. You don't know everything, and so you assign probabilities to outcomes. And so we assign a probability p of i to the ith outcome to the answer to our question. Okay? What are the rules for p of i? What does p of i have to satisfy? And let's, for the beginning, at least in the beginning, let's imagine that i enumerates some discrete finite collection of possibilities. Later on, we can have an infinite number of possibilities, or even a continuously infinite number of possibilities. But for the time being, i might run from one to n, n possibilities. And the rules are, first of all, p sub i has to be greater than or equal to zero. Negative probabilities, we don't like them. Don't know what they mean. Okay? Next, the summation over i of p sub i, p of i, should be one. That means that the total probability, when you add everything up, all possibilities should be one. You certainly should get some result. Okay? Next, now this is a kind of hypothesis.
This is the law of large numbers that if you either make many replicas of the same system or do the same experiment over and over a very, very large number of times, and take all of the outcomes which gave you all of the experiments which gave you the i-th outcome, that's some number, let's call it n of i, that's the number of times that the experiment turned up the i-th possibility, and you divide it by the total number of trials. Total number of trials means the sum of all i, or just the total number of trials, that the limit of this, this is a physical hypothesis. It's a physical hypothesis, it can go wrong if n is not large enough, but in the limit of large n, n goes to infinity, and the limit of very, very, of course n never goes to infinity, you never get to do an infinite number of experiments. But nevertheless we're kind of idealizing, we're assuming we can do so many experiments that the limit n goes to infinity is effectively been reached, then that is p of i. So p of i controls by assumption the ratio of the n of i's. Okay, everybody happy with that? You use this all the time I think. Well, sometimes we use it. Okay, now let's suppose that there is a quantity, let's call it f of i. It's some quantity that's associated with the i-th state. We can assign it, we can make it up. For example, if our system is heads and tails and nothing but heads and tails, we could assign f of heads and call it plus one, and f of tails and call it minus one. If our system has many, many more states, we may want to assign a much larger number of possible f's, but f is some function of the state. It's also a thing that we imagine measuring. It could be the energy of a state, or it could be the momentum of a state. Given a state of some system, it has an energy. It would be called in that case perhaps e of i, or it could be the momentum, or it could be something else. It could be whatever you happen to like to think about. Then an important quantity is the average of f of i. The average of f of i, I will use the quantum mechanical notation for it, even though we're not doing quantum mechanics. It's a nice notation. Physicists tend to use it all over the place. Mathematicians hate it. Just put a pair of brackets around it. It means the average. The average value of the quantity averaged over the probability distribution. It has a definition. Its definition is that it's the sum of i of f of i weighted with the probability. For example, and incidentally, the average of f of i does not have to be any of the possible values that f can take on. For example, in this case, where f of heads is plus one and f of tails is minus one, and you flip a million times, and the probability is a half of heads and a half of tails, the average of f will be zero. So it's not a possible outcome to the experiment. There's no rule why the average should be one of the possible experimental outputs, but it is the average. This is its definition. Each value of f is weighted with the probability for that value of f. You can write it another way. You can write it as a sum over i of f of i times the number of times that you measure i divided by the total number of measurements. That's what p of i is in the limit that there are a large number of measurements. That's defined to be the average. That's our mathematical preliminary for today. That's all I wanted to level the playing field by making sure everybody knows what the probability is and what an average is. We'll use it over and over. Okay, let's start with coin flips. 
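(For reference, and in my own notation rather than anything written on the board, the preliminary just stated amounts to the following, where n sub i is the number of trials, out of n, that gave outcome i:)

$$ p_i \;\ge\; 0, \qquad \sum_i p_i \;=\; 1, \qquad p_i \;=\; \lim_{n\to\infty}\frac{n_i}{n}, \qquad \langle f \rangle \;=\; \sum_i f(i)\,p_i. $$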
I always start with coin flips. I start every single class with coin flips, even when I'm teaching about the Higgs boson. Okay. If I flip a coin a lot of times, or whether I flip a coin a lot of times or not, the probability for heads is usually deemed to be one-half and the probability for tails is usually also deemed to be one-half. Why do we do that? Why is it a half and a half? What's the logic there? What logic tells us that? In this case, it's symmetry. It's the symmetry of the coin. Of course, no coin is perfectly symmetric and even making a little mark on it to distinguish the heads and tails biases it a little bit, but apart from that tiny, tiny bias of marking the coin with maybe just a tiny little scratch, the coin is symmetric. Heads and tails are symmetric with respect to each other and therefore there is no reason, no rationale for when you flip a coin for it to turn up heads more often than tails. It's symmetry quite often. I might even say always in some deeper sense, but at least in many cases, symmetry is the thing which dictates probabilities. Probabilities are usually taken to be equal for configurations which are related to each other by some symmetry. Symmetry means if you act with a symmetry, you reflect everything, you turn everything over, that the system behaves the same way. Okay, another example besides coin flipping would be dice flipping. Dice flipping instead of having two states has six states, one die, and we can imagine coloring them. We color the faces, red, yellow, blue, and then on the back green, purple, and orange. Okay, that's our die and it's been colored. We don't have to keep track of numbers, we can keep track of colors. And what is the probability that when we flip the die, flip it into the air, it hits the ground, what's the probability that it turns up red? Now, it's one-sixth, right? There are six possibilities. They're all symmetric with respect to each other. We use the principle of symmetry to tell us that the P of each i, they're all equal and they're all equal to one-sixth. But what if there is no symmetry? What if really the die is not symmetric? For example, what if it's weighted in some unfair way? Or what if it's been cut with faces that are not nice and parallel cubes? Then what's the answer? The answer is symmetry won't tell you. You may be able to use some deeper underlying theory and to use some concept of symmetry from the deeper underlying theory, but in the absence of something else, there is no answer. The answer is experiment. Do this experiment a billion times, keep track of the numbers, assume that things have converged and that way you measure the probabilities. You measure the probabilities and thereafter you can use them. You can use them if you keep a table of them and then you can use them in the next round of experiments. Or you may have some theory, some deep underlying theory which tells you well, like quantum mechanics or statistical mechanics. Statistical mechanics tends to rely mostly on symmetry, as we'll see. So if there's no symmetry to guide you or to guide your implementation of probabilities, then it's experiment. Now there's another answer. There's another possible answer. This answer is frequently invoked and it's a correct answer under other circumstances. It can have to do with the evolution of a system, the way a system changes with time. So let me give you some examples of what it might have to do. Let's take our six sided cube and assume that our six sided cube is not symmetric.
It's not symmetric but we know a rule. We know that if we put that cube down on the table, it's not a cube, when we put that die down on the table and we stand back, this thing has this habit of jumping to another state and jumping to another state and jumping to another state. It's called the law of motion of the system. The law of motion of the system is that whatever it is at one instant, at some next instant, it will be something else according to a definite rule. The instance could be seconds, it could be microseconds or whatever, but imagine a discrete sequence and let's suppose there's a law, a genuine law that tells us how this cube moves around. For example, if it's red, now we've done this over and over many times in different contexts but it is so important that I feel a need to emphasize it again. This is what a law of motion is. It's a rule telling you what the next configuration will be given, it's a rule of updating, of updating configurations. Red goes to blue, blue goes to yellow, yellow goes to green, green goes to orange, orange goes to purple and purple goes back to red. Given the configuration at any time, you know what it will be next and you know what it will continue to do. Of course, you may not know the law. Maybe all you know is that there is a law of this type. You know what I'm going to do next, what am I going to do next? I'm going to draw this law as a diagram. You've all seen me do this in other contexts. Let's do it. We have red, too hard to draw squares. Red, blue, green, orange, what happened? Yellow, yellow, orange, purple. A law like this can be just represented by a set of lines connecting a set of arrows. Red goes to blue, blue goes to green, green goes to yellow, yellow goes to orange, orange goes to purple, purple goes back to red. Given the assumption now that there's a discrete time interval between such events, I am not assuming that the cube has any symmetry to it anymore. The cube may not be symmetric at all. It may have points, you know, one edge, one face, maybe tiny, another face, but if this is the rule to go from one configuration to another and each step takes, let's say, a microsecond, I might have no idea where I begin, but I can still tell you if I, let's say it's a microsecond, a microsecond, and my job is to catch it at a particular instant and ask what the color is. I don't know where it started, okay? But I can still tell you the probability for each one of these is one-sixth. It doesn't have to do with symmetry. Well, maybe it does have to do with some symmetry, but in this case, it wouldn't be the symmetry of the structure of a die. It would just be the fact that as it passes through these sequence of states, it spends one-sixth of its time red, one-sixth of its time blue, one-sixth of a time green, and if I don't know where it starts and I just take a flash, you know, a flash shot of it, my probability will be one-sixth. Now that one-sixth did not really depend on knowing the detailed law. For example, the law could have been different. Let's make up a new law. Red goes to green, green goes to orange, orange goes to yellow, yellow goes to purple, purple goes to blue, and blue goes back to red. This shares with the previous law that there's a closed cycle of events in which you pass through each color once before you cycle around. You may not know which law of nature is for this system, but you can tell me again that the probability will be one-sixth for each one of them. 
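A quick way to see that one-sixth concretely is to just run such a law. The sketch below is mine, not part of the lecture; it uses the first cyclic law described above (red to blue, blue to yellow, yellow to green, green to orange, orange to purple, purple back to red), and the starting color and number of steps are arbitrary choices.

```python
from collections import Counter

# One allowed law of motion: a single cycle through all six colors,
# exactly as described above. Any other single six-cycle works the same way.
law = {"red": "blue", "blue": "yellow", "yellow": "green",
       "green": "orange", "orange": "purple", "purple": "red"}

def occupation_fractions(law, start, steps=60000):
    """Follow the deterministic update rule and count how often
    each color is occupied along the way."""
    counts = Counter()
    state = start
    for _ in range(steps):
        counts[state] += 1
        state = law[state]
    return {color: n / steps for color, n in counts.items()}

print(occupation_fractions(law, start="green"))
# Every color comes out at about 1/6, no matter which color you start from.
```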
So this prediction of one-sixth doesn't depend on knowing the starting point and doesn't depend on knowing the law of physics. It's just important to know that there is a particular kind of law. Are there possible laws for the system which will not give you one-sixth? Yes. Let's write another law. Red, blue, green, yellow, orange, purple. This rule says that if you start with red, you go to blue. If you start with blue, you go to green, and if you get to green, you go back to red. Or if you start with purple, you go to yellow, yellow to orange, orange back to purple. Notice in this case, if you're on one of these two cycles, you stay there forever. If you knew you were on the upper cycle, if you knew where you started, it doesn't matter where you start, but if you knew that you started on the upper cycle somewhere, then you would know that there was a one-third probability to be red, a one-third probability to be blue, and a one-third probability to be green, and zero probability to be purple, yellow, or orange. On the other hand, you could have started with the second cycle. You could have started with purple. Might not have known where you started, but you knew that you started in the lower triangle and the lower cycle here. Then you would know the probabilities of one-third for each of these and zero for each of these. Now, what about a more general case? The more general case might be that you know with some probability that you start on the upper triangle here and with some other probability on the lower triangle. In fact, let's give these triangles names. Let's call this triangle the plus one triangle, and this one the minus one triangle. Just giving them names, attaching to them a number, a numerical value. If you're here, something or other is called plus one; if you're here, something or other is called minus one. All right, now you'll have to append, you'll have to start with something you've got to get from someplace else. It doesn't follow from symmetry, and it doesn't follow from cycling through the system, some probability that you're either on cycle plus one or cycle minus one. Where might that come from? Flipping somebody else's coin over here, flipping a coin over here might decide which of these two. It might be a biased coin, so you will have a probability to be plus one and a probability to be minus one. These two probabilities are not probabilities for individual colors, they're probabilities for individual cycles. Okay, now what's the probability for blue? The probability for blue begins with the probability that you're on the first cycle, times the probability that if you're on the first cycle, you get blue. That's one third. So the probability for blue, red, or green is one third the probability that you're on the first cycle, and likewise the probability that you're at yellow will be, this is the probability for red, blue, or green in this case, and this times one third will be the probability for purple, yellow, or orange. Okay, so in this case you need to supply another probability that you've got to get from somewhere else. This case here is what we call having a conservation law. In this case, the conservation law would be just the conservation of this number. For red, blue, and green, we've assigned the value plus one. That plus one could be the energy, or it could be something else. I tend to call it the zilch for some reason. I call everything a zilch if there is no name for it. So anyway, let's think of it as the energy to keep things familiar.
The energy of these three configurations might all be plus one, and the energy of these three configurations might all be minus one. And the point is that because the rule keeps you always on the same cycle, that quantity, energy, zilch, whatever we call it, is conserved. It doesn't change. That's what a conservation law is. A conservation law is that the configuration space, the space of possibilities, divides up into cycles like this. Now, the cycles don't have to have equal size. Here's another case. One, two, three, four. You go around this way, and then the two guys over here go into each other. So red goes to blue, goes to green, goes to purple. That's the upper cycle here, and the lower cycle is yellow goes to orange, goes to yellow goes to orange. Still, we have a conservation law here. It's just the number of states with one value of the conserved quantities, not the same as the number of states or the other value, but still, it's a conservation law. And again, somebody would have to supply for you some idea of the relative probabilities of these two. Where that comes from is part of the study of statistical mechanics. And the other part of the study has to do with saying, if I know I'm one of these tracks, how much time do I spend with each particular configuration? That's what determines probabilities of statistical mechanics. Some a priori probability from somewheres that tells you the probabilities for different conserved quantities and cycling through the system. Yeah, question? No, okay. So, so far, you're assuming that within any conservation arena, you will, the probabilities of all the states are the same? The time spent in each state is the same. Right. So, it's completely deterministic. Laws are completely deterministic. This would be classical physics. Laws completely deterministic. No real ambiguity of what the state is, except you're kind of lazy. You didn't determine the initial condition. Your timing wasn't very good. Each state only lasts for a microsecond. You're a lazy guy and you only have a resolution of a millisecond. But nevertheless, you're able to take a very quick flash picture and pick out one of the states. That's the circumstance that we're talking about. Yeah. If we take two pictures, is it reasonable to then assume that if the first picture indicated that we are in one cycle, the later one should indicate the same cycle since it couldn't get out of it? Yes, that's a good assumption. Yes. Right. So, once you determine the value of some conserved quantities, then you know it. And then you can reset the probabilities for it. Unless, all right, so let's talk about honest energy for a minute. Yes, if we have a closed system, to represent the closed system, I will just draw a box. Closed now, closed means that it's not in interaction with anything else and therefore can be thought of as a whole universe unto itself. Okay. It has an energy. The energy is some function of the state of the system, whatever determines the state of the system. Now let's suppose we have another closed system which is built out of two identical or not the identical versions of the same thing. Now, if they're both closed systems, there will be two conserved quantities. The energy of this system and the energy of this system and they'll both be separately conserved. Why? Because they don't talk to each other. They don't interact with each other. The two energies are conserved and you could have probabilities for each of those individuals. But now supposing they're connected. 
They're connected by a little tiny tube which allows energy to flow back and forth. Then there's only one conserved quantity, the total energy, and it's sort of split between the two of them. You can then ask, what is the probability given a total amount of energy? You could ask, what's the probability that the energy of one subsystem is one thing and the energy of the other subsystem is the other? If the two boxes are equal, you would expect on the average they have equal energy, but you can still ask, what's the probability for a given energy in this box given some overall piece of information? That's a circumstance where it may be that giving the probability for which cycle you're on, now which cycle you're on, I'm talking about the cycle of one of these systems here, may be determined by thinking about the system as part of a bigger system. And we're going to do that. That's important. But in general, you need some other ingredient besides just cycling around through the system here to tell you the relative probabilities of conserved quantities. Okay. So we're often flying with statistical mechanics. There are bad laws. By bad laws, I don't, not in the sense of DOMA or any of those kind of laws, but in the sense that the rules of physics don't allow them. You all know what they are. The laws that violate the conservation of information. The most primitive and basic rule of physics, the conservation of information. Conservation of information is not a standard conservation law like this. It's the rule that you keep, that you can keep track. You can keep track both going forward and backward. So let's just mention that again. It's all work, it's all described in the classical mechanics book. I'm just reviewing it now. But let's take a bad law. It's a possible law. By bad, I mean one, two, three, four, one, two, three, four, five, six. So these are the faces of a die again. But the rule is wherever you are, this is red, wherever you are, you go to red. Even if you are red, you go to red. Okay. We'll discuss in a moment what's wrong with this law. But this law has one of the features that it has, is it's not reversible. It's not reversible in the sense that you can go from blue to red, but you cannot go from red back to blue. So in that sense, it's not reversible. You can predict the future wherever you are. The future is very simple for this particular law, wherever you are, you'll next be at red. You can make it more complicated. You could make a few, you can make it more complicated. But this law always winds up with red. It's a bad law because it loses track of where you started. Whereas these laws don't lose track. If you know that you've gone through 56, 56 and a half cycles, then you know that if you started at red, you'll come back and you can tell exactly where you'll be. And you can also tell where you came from. You can tell not only where you'll be, but exactly where you came from. Well, this law, you can't say where you came from. This is a law that loses information. And it's exactly the kind of thing that classical physics does not allow. Classical physics also doesn't allow the quantum mechanical version of it. So the rule that this type of rule, that this type of law is unallowed, I give a name to. It's as I said many times, there is no name for it because it's just so basically primitive that everybody always forgets about it. It's so basic. I call it the minus first law of physics. And I wish it would catch on. People should start using it. 
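A small aside of my own, not from the lecture: on a finite set of states this distinction between allowed and forbidden laws can be tested mechanically, because an allowed deterministic law is just a bijection of the state space, one arrow out of and one arrow into every state.

```python
def conserves_information(law):
    """A deterministic update rule on a finite state space conserves
    information exactly when it is a bijection: every state appears
    exactly once among the outputs (one arrow in, one arrow out)."""
    return sorted(law.values()) == sorted(law)

cycle_law = {"red": "blue", "blue": "yellow", "yellow": "green",
             "green": "orange", "orange": "purple", "purple": "red"}
bad_law = {color: "red" for color in cycle_law}  # everything goes to red

print(conserves_information(cycle_law))  # True:  reversible, an allowed law
print(conserves_information(bad_law))    # False: it forgets where you came from
```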
I mean, it is really the most basic law of physics that information is never lost, that distinctions or differences between states propagate with time and you never lose track. In principle, if you have the capacity to follow the system, because you may be too lazy to follow the system, that's your problem. But nature doesn't have that problem. Nature allows, in principle, that you can reconstruct where you came from. All right, so that's a bad law. How do you tell the good laws from the bad laws? Just by diagrammatics here, it's very simple. Good laws, every state has one incoming arrow and one outgoing arrow. An arrow to tell you where you came from and an arrow to tell you where you're going. So those are good laws. In classical mechanics, continuum classical mechanics, there is a version of this same law. Anybody know the name of that version? Of the theorem that goes with the conservation of information? It's called Liouville's theorem. We studied it in classical mechanics. But let me give you a counter example to Liouville's theorem. Friction is an apparent contradiction. Wherever you start, you come to rest. It's sort of like saying wherever you start, you come to red. Wherever you start, you come to rest. Well, you may not know exactly where you are, but you always come to rest. That seems like a violation of the laws that tell you that distinctions have to be preserved. But of course, it's not really true. What's really going on is that when you run the eraser through here, it's heating up the surface here. And if you could keep track of every molecule, you would find out that the distinctions between starting points are recorded. But let's imagine now that there was a fundamental law of physics. By fundamental law, I mean a rock bottom fundamental law for a series of particles, for a collection of particles, and the equations of motion for the particles were this. d second x, that's the position of the particle, by dt squared, that's called acceleration. We could put a mass in, but the mass is not doing anything. There's a lot of particles, so I'll label them i. Oh, we've used i to label states. I should not do that. Let's call it n, little n. The nth particle, and what is that equal to? It's equal to minus some number gamma, we've seen that number before in another context, times dxn by dt. Anybody remember what this formula represents? Friction. Viscous drag. Again, it has the property that if you start with a moving particle, it will very quickly come almost to rest. It'll exponentially come to rest pretty quickly. And so if all particles in a gas, for example, satisfied this law of physics, it's perfectly deterministic. It tells you what happens next, but it has the unfortunate consequence that every particle just comes to rest. That sounds odd. It sounds like no matter what temperature you start the room, it will quickly come to zero temperature. That doesn't happen. This is a perfectly good differential equation, but there's something wrong with it from the point of view of conservation of energy. There's something wrong with it from the point of view of thermodynamics. If you start a closed system and you start it running, you start with a lot of kinetic energy, temperature we usually call it. It doesn't run to zero temperature. That's not what happens. In fact, this is not only a violation of energy conservation. It looks like a violation of the second law of thermodynamics. It says things get simpler.
You start with a random bunch of particles moving in random directions, and you let it run and they all come to rest. What you end up with is simpler and requires less information to describe than what you started with. That's very, very much like everything going to red. Among other things, it violates the second law of thermodynamics, which generally says things get worse. Things get more complicated, not less complicated. Okay. But there's another way to say, another important way to say this rule, that every state has to have one arrow in and one arrow out. The thing that I called either the minus first law or the conservation of information. Supposing we have a collection of states and we assign to them probabilities. P of state one, P of state two, P of state three, and so forth. For some subset of the states, not all of them, some subset of them. All the others, we say have probability zero. Okay. So, for example, we can take our die and assign red, yellow, and blue probability a third, and green, orange, and pink, or whatever it was, probability zero. Where we got that from, doesn't matter. We got it from somewhere. Somebody secretly told us in our ear, it's either red, yellow, or blue, and I'm not going to tell you which. All right. And now you follow the system. You follow it as it evolves. Whatever kind of law of physics, as long as it's an allowable law of physics, after a while, and you're following it in detail, you're not constrained by your laziness in this case. You are capable of following in detail. And what is the probability, what are the probabilities at a later time? Well, if you don't know which the laws of physics are, you can't say, of course. But you can say one thing. You can say there are three states with probability one-third and three states with probability zero. They may get reshuffled, which ones were probable and which ones were improbable, but after a certain time, there will be those same three, not the same three states, but there will continue to be three states which have probability and the rest don't. So in general, you could characterize these information-conserving theories by saying, supposing you assign some subset of the states, let's say, N out of N states, let's say there are N states altogether, that's the total number of states, and now we look at some M where M is less than N, and we say for those M states, the probability for those M states is one over M for these states and zero for all the others. You understand why I say one over M? If there are M states equally probable, then each one has probability one over M and all the remaining have probability zero. Then the number of states which have non-zero probability will remain constant and the probabilities will remain equal to one over M. Is that clear? Is that obvious? That should be obvious. The states may reshuffle, but the number with non-zero probability will remain fixed. That's a characterization, a different characterization of the information-conserving laws. For the information-non-conserving laws, everybody goes to red. You may start with a probability distribution that's one over five for red, green, purple, orange, and yellow, and then a little bit later, there's only one state that has a probability and that's red. This is another way to describe information conservation. We can quantify that. 
We can quantify that by saying let M be the number of states which all have equal, under the assumption that they all have equal probability, let M be the number, let's give it a name, occupied states, states which have non-zero probability with equal probability, and then M, what is M characterizing? M is characterizing your ignorance. The bigger M is, if M is equal to N, that means equal probability for everything. Maximum ignorance. If M is equal to one-half N, that means you know that the system is in one out of half the states. You're still pretty ignorant, but you're not that ignorant. You're less ignorant. What's the maximum, what's the minimum amount of ignorance you can have? That you know precisely what state it's in, in which case M is what? M is one. You know that it's in one particular state. All right, so M is a measure of your ignorance. Really M in relation to N is a measure of your ignorance. And associated with it is the concept of entropy. Now we come to the concept of entropy, notice entropy is coming before anything else. Entropy is coming before temperature, it's even coming before energy. Entropy is more fundamental in a certain sense than any of them, although in a certain sense it's, we'll discuss entropy in a minute, but S is the logarithm of M. Logarithm of the number of states that have an appreciable probability, more or less all equal, for the specific circumstance that I talked about. That entropy is conserved. All that happens is the states which are occupied reshuffle, but there will always be M of them with probability one over M. Okay, so that's where we are. And that's the conservation of entropy if we can follow the system in detail. Now, of course in reality we may be again lazy, lose track of the system, and we might have after a point lost track of the equations, and lost track of our timing device, and so forth and so on. Now we may wind up, we may have started with some, a lot of knowledge, and wound up with very little knowledge. That's because, again, not because the equations cause information to be lost, but because we just weren't careful. Perhaps we can't be careful, perhaps there are too many degrees of freedom to keep track of. So when that happens, the entropy increases, but it simply increases because our ignorance has gone up, not because anything has really happened in the system which has, if we could follow it, we would find that the entropy is conserved. Okay, that's the concept of entropy in a nutshell. We're going to expand on it. We're going to expand on it a lot. We're going to redefine it with a more careful definition. But what does it measure? It measures approximately the number of states that have a non-zero probability. Okay. The bigger it is, the less you know. What's the maximum value of s? Log N, log N. Now of course, N could be infinite. You might have an infinite number of states, and if you do, then there's no upper bound to the amount of ignorance you can have. But you know, in a world with only N states, your ignorance is bounded. So the notion of maximum entropy is a measure of how many states there are altogether. Now I said that entropy is deep and fundamental, and so it is, but there's also an aspect to it which makes it in a certain sense less fundamental. It's not just a property of a system. It's a property of a system and your state of knowledge of the system. It depends on two things.
It depends on characteristics of the system, and it also depends on your state of knowledge of the system. So keep that in mind. Okay. Now let's talk about continuous mechanics. Mechanics of particles moving around with continuous positions, continuous velocities. How do we describe that? How do we describe the space of states of a mechanical system, you know, a real mechanical system, particles and so forth? We describe it as points in phase space. We learned about phase space. Phase space consists of positions and momenta. Momenta in simple context, momentum is mass times velocity, so roughly speaking, it's the space of positions and velocities. Let's draw it. P is momentum. It goes that way. And this axis is a stand-in for all of the momentum degrees of freedom. If there are 10 to the 23rd particles, there are 10 to the 23rd P's, but I can't draw more than one of them. Well, I could draw two of them, but then I wouldn't have any room for the Q's, for the X's. And horizontally, the positions of the particles, which we can call X. X or Q, doesn't matter. All right. A point here is a possible state of the system. If you know a point here, you know a position and a velocity, and you can predict from that, if you know the forces. Okay, let's start with the analog of a probability distribution, which is zero for some set of states and constant or the same for some other set, for some smaller set. Well, some fraction of the states all have the same probability, and the other states have zero probability. We can represent that by drawing a patch in here, a subregion in the phase space, and say in that subregion, there's equal probability that the system is at any point in here and zero probability outside. This is sort of a situation where you may know something, where you may know something about the particles, that they're in some subregion here. For example, you know that all the particles in this room are in the room, right? So that puts some boundaries on what the X's are. You may know that all the particles have momenta which are within some range; that confines them this way. So a typical bit of knowledge about the room might be represented at least approximately by saying that there's zero probability to be outside this region and a probability equal, I won't say one, but equal probability to be in there. Okay, now what happens as the system evolves? As the system evolves, X and P change. The equations of motion say that X and P change, if you start over here, you might go to here. If you start nearby, you'll go to some nearby point and so forth. And the motion of the system with time is almost like a fluid flowing in the phase space. If you think of the points of the phase space as fluid points and let time go, the phase space moves like a fluid. In particular, this patch over here, let's call it the occupied patch, the occupied patch becomes some other patch. That other patch, after a certain amount of time, the system now is known to be in here. After a certain amount of time, we now know that the system is in here, not in here anymore, and that it has equal, in some sense, equal probability to be anywhere in there. Okay? There's a theorem that goes with this. The theorem is called Liouville's theorem. And what it says is that the volume in phase space, the amount of volume of this region in the XP space, and keep in mind, the XP space may be high dimensional. Not just two, if it were two dimensional, we would think of it as the area.
Phase space is never three dimensional. It's always even dimensional. It has a P for every X. So the next more complicated system would be four dimensional. When I speak of the volume in phase space, I mean the volume in whatever dimensionality the phase space is. If you follow the phase space in this manner here, Liouville's theorem, you can go back to Liouville's theorem. It's in the classical mechanics lecture notes. It occupies, I think, a whole lecture. All right, you follow? And it tells you whatever this evolves into, it evolves into something of the same volume. In other words, roughly speaking, the same number of states. It's the immediate analog of the discrete situation where if you start with M states and you follow the system according to the equations of motion, you will occupy the same number of states afterwards as you started with. There'll be different states, but you will preserve the number of them, and the probabilities will remain equal. So the rule is, not the rule, the theorem says that the volume of this occupied region will stay the same. And a little bit better, it says that if you start with a uniform probability distribution in here, it will be uniform in here. So there's a very, very close analog between the discrete case and the continuous case. And this is what prevents this kind of fundamental equation, this kind of equation where everything comes to rest; that can't happen. Okay, why not? Let's see why it can't happen. Let's just look on this blackboard and see why. Imagine that no matter where you started, you ended up with P equals zero. That would mean every point on here got mapped to the x-axis. It would mean that this entire region here would get mapped to a one-dimensional region, and a one-dimensional region has zero area. So Liouville's theorem prevents that. What it says, in fact, is if the blob squeezes in one direction, it must expand in the other direction. The situation for the moving eraser is that if the phase space of the eraser gets shrunk, it means somebody else's, some other components in the phase space, the probability distribution is spread out. What are the other components in this case? It's the P's and X's of all the molecules that are in the table. So for the case of the eraser, there's really a very high-dimensional phase space, and as the eraser may come to rest, almost rest, so that the phase space squeezes this way, it spreads out in the other directions, the other directions having to do with the other hidden microscopic degrees of freedom. Okay, so there we are with information conservation, the minus first law of physics, and let's pass... Let's not go to the zeroth law. Let's jump the zeroth law. We'll come back to the zeroth law. You know what the zeroth law says? Well, I'll tell you what it says. We haven't defined what thermal equilibrium is, okay? But it says whatever the hell thermal equilibrium is, if you have several systems, and system A is in thermal equilibrium with B, and B is in thermal equilibrium with C, then A is in thermal equilibrium with C. We will come back to that. Just put it out of your mind for the time being, because we haven't described what thermal equilibrium means. But we can now jump to the first law, minus one, zero, and first law. And the first law is simply energy conservation. It is simply energy conservation, nothing more. It's really simple to write down. That simplicity belies its power.
It is the statement that, first of all, there is a conserved quantity, and the fact that we call that conserved quantity energy will play for the moment not such a big role right now, but let's just say there's energy conservation. What does that say? That simply says dE by dt is equal to zero. Now this is the law of energy conservation for a closed system. If a system consists of more than one part in interaction with each other, then of course any one of the parts can have a changing energy, but the sum total of all of the parts will conserve energy. So if a system is composed, as I drew before, of two parts with a link between them, and this is called one and this is two, then this reads that dE1 by dt is equal to minus dE2 by dt. I've really written dE1 by dt plus dE2 by dt is equal to zero, but then I transposed one of them to the right hand side just to indicate, just to make graphic, that if you lose energy on one side, you gain it on the other. So that's the first law of thermodynamics, and that's all the first law of thermodynamics says, it says energy conservation. Now in this context here, there's a slightly hidden assumption. We've assumed that if a system is composed of two parts, that the energy is the sum of the two parts. That's really not generally true. If you have two systems and they interact with each other, there may be, for example, forces between the two parts, so there might be a potential energy that's a function of both of the coordinates. For example, the energy of the solar system, being very naive, I'm thinking of the solar system as two orbiting Newtonian particles. The energy consists of the kinetic energy of one particle plus the kinetic energy of the other particle plus a term which doesn't belong to either particle. It belongs to both of them in a sense, and it's the potential energy of interaction between them. In that context, you really can't say that the energy is the sum of the energy of one thing plus the energy of the other thing. Energy conservation is still true, but you can't divide the system into two parts this way. On the other hand, there are many, many contexts where the interaction energies between systems are negligible compared to the energy that the systems themselves have. If we were to divide this table top up into blocks, let's think about it, divide the table top up into blocks, how much energy is in each block? Well, the amount of energy that's in each block is more or less proportional to the volume of each block. How much energy of interaction is there between the blocks? The energy of interaction is a surface effect. They interact with each other because their surfaces touch, and typically, surface area is small by comparison with volume. So many, many, we'll come back to that. We'll come back to that. In many, many contexts, the energy of interaction between two systems is negligible compared to the energy of either of them. When that happens, you can say to a good approximation, the energy can just be represented as the sum of two energies of the two parts of the system plus a teeny little thing which has to do with their interactions. Under those circumstances, the first law of thermodynamics, the top is always true. The second has that little caveat that we're talking about systems where energy is strictly additive, where you add energies. Does everybody understand why I say you don't always add energies, that sometimes energies are not additive?
Yeah, actually, I was thinking that we have the same possible problem with the probabilities. We assume that the outcomes were mutually exclusive. Otherwise, the sum law doesn't work. Yeah. Yeah. Okay. So in all the contexts which we've talked about, the die, if it's yellow, it can't be red. You say orange. You say orange, orange is both yellow and red, well, we don't count that way. Yeah. So that's correct. No, that's absolutely correct. We made the assumption that what I called states, what I called states are mutually exclusive. Absolutely. Okay, let's come back to entropy. We're not finished with entropy. We've done entropy. We've done energy. We haven't gotten the temperature yet. Just the temperature comes in behind entropy, and even energy comes in behind entropy. But temperature is a highly derived quantity. By highly derived, I mean it's a, despite the fact that it's the thing you feel with your body, so it makes it really feel like it's something intuitive, it is a mathematically derived concept, less primitive and less fundamental than either energy or entropy, but we'll come to it. Let's come back to entropy. We defined entropy, but only for certain special probability distributions. Let's lay out on the horizontal axis, just to be schematic. On the horizontal axis, we will put down all the different states. Here's i equals one, here's i equals two, here's i equals three. This axis just labels the various states, and of course, vertically, let me plot probability. Okay. Well, the probability, of course, is only defined on the integers here. That's not very good. It's some probability. But let's, I don't want to have to draw such a complicated thing every time I want to draw a probability distribution, let's just draw a graph. Some probability distribution, or some probability for each position. Now what we did was we defined entropy for a very special case, the special case being where some subset have equal probabilities and the rest have zero probability. For example, if our subset consists of this group over here of m, the whole group being n, then all of these have the same probability, and their probabilities have to add up to one, so the probability is one over m. We just draw that by drawing a box like that. Then we define the entropy to be the logarithm of the number of states in here. Generally speaking, we don't have probability distributions like this. Generally speaking, we have probability distributions which are more complicated. In fact, they can be anything as long as they're positive and all add up to one. So the question is how do we define entropy in a more general context where the probability distribution looks like this? I'm going to write down the formula, and then we're going to check that it really gives us this answer when it should give us this answer. For today, I'm just going to write it down and tell you this is the definition. You'll get familiar with it, and you'll start to see why it's a good definition. It's representing something about the probability distribution, and what it's representing is in some average sense, the average number of states which are importantly contained inside the probability distribution. The narrower the probability distribution, the smaller the entropy will be. The broader the probability distribution, the bigger the entropy will be. I'll write it down for you now. We'll write it down and then explore it just a little bit tonight. S for a general probability distribution is, first of all, minus.
That's funny because this is positive, but nevertheless the formula begins with minus, a sum, and it's a sum over all of the states, all of the possibilities. So it's a sum over i. There's a contribution for each place here. The probability of i times the logarithm of the probability of i. Do you remember? All right, let's write something else then. Remember that the average of f is equal to the summation over i, f of i times p of i. This is actually the average of log p sub i. It's the average of log p sub i. All right, let's work this out. Let's see what this gives. In the special case where the probability distribution is 1 over m for m states altogether. It has width m, and because it has width m, it must have height 1 over m because all the probabilities have to add up to 1. All right, so let's work this out. Let's take the contribution for all the unoccupied states. All the unoccupied states, p sub i is 0. You get nothing. But log of p of i is minus infinity. Yeah, that's right. Now what's the, all right, good. So let's consider the limit of log p over p. Or let's just say the limit of, no, that's not right, the limit of p log p as p goes to 0. You know how to calculate that? Okay, I'm going to leave it to you. It's a little calculus exercise to calculate the limit as p goes to 0 of p log p. It's 0. The point is that p goes to 0 a lot faster than log p goes to infinity. Log p as p goes to 0 goes to, is very slow. p goes to 0 fast. So this goes to 0. You're absolutely right though. That has to be, that has to be shown. But p log p in the limit that p goes to 0 is 0. So with that piece of knowledge, the contribution from states with very, very small probabilities, probability will be very, very small. And as the probability for those states goes to 0, this quantity, the contribution will go to 0. But what about the ones here which have significant probability? They all have the same p sub i. And they all have the same log p sub i. The log p sub i is the logarithm of 1 over m for all of them. The p sub i is also 1 over m. Not log 1 over m, but 1 over m. So each contribution is 1 over m times log 1 over m. How many contributions are there like that? All right, so we multiply by m and get rid of the 1 over m there. There's a minus sign here, I'll carry it along. All times m because there are m such terms. So the 1 over m cancels and we're just left with log 1 over m. What's log 1 over m? Minus log m, right? So that's why the minus sign was put there in the first place. There's no miracle. The minus sign was put there because probabilities are less than 1. And so the logarithms of them are always negative. So you soak up that negative with an overall negative sign and entropy is positive. But this is exactly the same answer as the original definition, just s equals log m. Logarithm of the number of states. But this is a definition now that makes sense even when you have a more complicated probability distribution. And it is a good and effective definition. It's the average of log p. For the special case where the probability distribution is constant like this, then all of the probabilities in here are 1 over m and calculating the average of log 1 over m just gives you this. All right. So this is the general definition of the entropy that's associated with a probability distribution. And notice, entropy is associated with a probability distribution. It's not a thing like energy which is a property of a system. It's not a thing like momentum. It's a thing which has to do with a specific probability distribution, a probability distribution on the space of possible states.
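As a sanity check on the reduction just carried out, here is a short numerical sketch (mine, not the lecture's): the general formula, with the convention that a p log p term vanishes when p is zero, gives back log m for a flat distribution over m occupied states.

```python
import math

def entropy(probs):
    """S = -sum_i p_i * log(p_i), natural logs, with the convention that
    a state with p_i = 0 contributes nothing (p log p -> 0 as p -> 0)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

m, n = 6, 24
flat = [1.0 / m] * m + [0.0] * (n - m)  # m equally probable states out of n

print(entropy(flat))  # equals log m ...
print(math.log(m))    # ... the original definition, S = log M
```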
So that's why it's a little bit of a more obscure quantity from the point of view of, you know, intuitive definition. As I said, its definition has to do with both the system and your state of knowledge of the system. Let's do some examples. Let's calculate some entropy for a couple of simple systems. The first system is just going to be not a single coin but a lot of coins. So we have a bunch of coins, and each one can be heads or tails, et cetera, and so on. As a matter of fact, we have no idea what the state of the system is. We know nothing. The probability distribution, in other words, is the same for all states. Good ignorance, absolute ignorance. What is the entropy associated with such a configuration? All the probabilities are equal. Under the circumstance where all of the probabilities are equal, we just get to use the logarithm of the number of possible states. The answer here is the logarithm of the total number of states. How many states are there altogether? Two to the n. Right. Two to the n. Oh, let's, I'm sorry, I'm going to change definitions for a minute for a reason that you'll see in a minute. I'm going to call this little n. The number of coins is little n. Over here, big N stood for the total number of states. So if I match terminologies, big N, the total number of states, is e to the 2n, no, 2 to the n. Sorry, I'm getting tired. 2 to the n. 2 to the n states altogether. Two states for the first coin, two states for the second coin, and so forth. 2 to the n altogether. And the total number of states is capital N. What's the entropy given that we know nothing? Log of 2 to the n. Log of 2 to the n, which is n log 2. S is equal to n log 2. That's the logarithm of 2 to the n. All right, so here we see an example of the fact that entropy is kind of additive over the system. It's proportional to the number of degrees of freedom in this case. n times log 2. And we also discover a unit of entropy. The unit of entropy is called a bit. That's what a bit is in information theory. It's the basic unit of entropy for a system which has only two states, up or down, heads or tails or whatever. The entropy is proportional to the number of bits, or in this case, the number of coins, times the logarithm of 2. So log 2 plays a fundamental role in information theory as the unit of entropy. It does not mean that in general entropy is an integer multiple of log 2. We'll see in a second, it doesn't mean that. Okay, so that's this case over here. Let's try another case. Let's try another state of knowledge. Here our state of knowledge was zilch. We knew nothing. This is the maximum entropy. Logarithm of capital N, or logarithm of 2 to the little n, which is n log 2. One bit of entropy for each coin, if you like. Okay, let's try something else. Let's try a state in which what we know is, oh, something else. Supposing we know the state completely. In other words, that's the case where m is equal to 1. That would be the case where we know that the probability is only nonzero for one state. Then m is 1 and s is 0, the logarithm of 1. So absolute knowledge, perfect knowledge, complete knowledge corresponds to zero entropy. The more you know, the smaller the entropy, excuse me. Okay, let's take an interesting case. Let's take our heads and tails again. And here's what we know. We know that all of them are heads.
Let's begin with that, all of them are heads, what's the entropy then? Zero. Except for one of them, which is tails. Now, supposing we know which one is tails, what's the entropy? Zero. But suppose we don't know which one is tails. Equal probability for all states which contain one tail and n minus one heads. What's the entropy? Indeed. Okay, so why is it log n? How many states are there with nonzero probability? The answer is little n. It could be this one, it could be this one, it could be this one. In other words, for this particular situation, capital M, the number of states that have nonzero probability, is just little n. All the states have the same probability. So we're in exactly this situation except that M is just equal to little n. N possible states: this one could be tails, this one could be tails, this one could be tails. They all have equal probability. So capital M is n, and the entropy now, for this situation, is equal to the logarithm of little n. Notice that that's not an integer multiple of log two in general. So in general, entropy is not an integer multiple of log two. Nevertheless, log two is a good unit, it's a basic unit of entropy, it's called a bit. S in this case is equal to log n. Yes? The two in the log two comes from the fact that you only have the possibility of heads or tails, right? Yeah. So if you had three, it'd be log three. Absolutely. Absolutely. Now, computer scientists of course like to think in terms of two for a variety of reasons. First of all, the mathematics of it is nice, but two is the smallest number which is not one. It's the smallest integer. Yeah, it's the smallest integer not equal to one. But it's also true that the physics that goes on inside your computer is connected with switches which are either on or off. So counting in units of log two is very useful. Excuse me. Yeah? It's probably not important, but what is the base of the logarithm? Yeah, this is a definition, of course. The definition, okay, good. It depends on who you are. If you're an information theorist, a computer scientist often, or a disciple of Shannon, then you like log to the base two. In which case, this log two is just one, and the entropy here is just n, measured in units of bits. If you're a physicist, then you usually work in base e. Okay? But the relationship is just multiplicative, you know, log to the base e and log to the base two are just related by a numerical factor that's always the same. Okay. So, when I write log, I mean log to the base e, but very little would change if we used some other base for the logs. Okay. So there we are. Let me just tell you what the definition is of the entropy in phase space. If we're not talking about these finite discrete systems like this, we're talking about continuously infinite systems, phase space. Now let's begin supposing the probability distribution is just some blob where the probabilities are equal inside the blob and zero outside the blob. In other words, the simple situation. Then the definition of the entropy is simply, well, you could say the logarithm of the number of states, but how many states are there in here? Clearly a continuous infinity of them. So instead, you just say it's the log of the volume of the probability distribution in phase space. S for a continuous system is just defined to be the logarithm of the volume in phase space. Now if I wrote V, you might get the sense that I'm talking about volume in space.
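The one-tail example, and the question about bases, in symbols:

\[ M = n \quad\Rightarrow\quad S = \log n \qquad (\text{not an integer multiple of } \log 2 \text{ in general}), \]

\[ \log_2 x = \frac{\ln x}{\ln 2}, \qquad S_{\text{in bits}} = \frac{S_{\text{in natural log units}}}{\ln 2}. \]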
No, I'm talking about the volume in phase space. The phase space is high dimensional. Whatever the dimensionality of the phase space, the volume is measured in units of momentum times position, to the power of the number of coordinates in the system. All right, but S is equal to the logarithm of the phase space volume. Let's call it V phase space. That's just the volume of the region which is occupied and has a nonzero probability distribution. This is the closest analog that we can think of to log M, where M represents the number of equally probable states of the discrete system. More generally, if we have some arbitrary probability distribution P, what would P, the probability, be a function of? It would be a function of all of the coordinates and all of the momenta. All of the momenta and all of the coordinates. P's and Q's or P's and X's, whichever you like, all of them. The probability for the system to be located at the point of phase space P and X. Incidentally, when you have continuous variables like that, do you write that the sum of the probabilities is equal to one? That wouldn't make sense. You can't sum over a continuously infinite set of variables. It becomes an integral. We'll come back to this, but let's just spell it out right now: if you have a probability distribution on a phase space, the rule is that the integral of it is equal to one. P is really a probability density on phase space. It's a probability per unit cell in phase space. What would you expect the entropy to be? I'll give you a hint: it starts out with minus. Where's the formula that we had here before? Let's rewrite the formula that we had before. S is equal to minus the summation over i of p sub i log of p sub i. To go to the continuum, you simply replace the sum by an integral. There's an integral over the phase space. That's like the sum over i. Then the probability of P and Q times the logarithm of the probability. In first approximation, and I don't mean first approximation numerically, I mean first conceptual approximation, it's measuring the logarithm of the volume of the probability blob in phase space. Okay, oh, in case I misspoke: in classical mechanics, X's and Q's are coordinates and P's are momenta. You know that. Okay, so we've now defined entropy, which, as we've seen, depends on a probability distribution. I'm going to go one more step tonight and define temperature. We could stop here. I think that's probably enough for one night. Next time we'll do temperature and then discuss the Boltzmann distribution, which is the probability distribution for thermal equilibrium. We haven't quite defined thermal equilibrium yet, but we will. You can ask some questions. I don't mind some questions now. I just have the feeling that I've probably done enough for one night. Oh. Well, I haven't made any such assumption. I haven't made any such assumption. My only assumption in calculating the various entropies was either that you know nothing or I told you what you know. How you got to know that, and what the reasoning was, and whether it had to do with knowing something about the dynamics and the independence and so forth, may come into your calculation of what P is. But in saying you know nothing, the implication was that all states are equally probable, without asking how you knew that. To say that all states are equally probable is closely related to saying that there are no correlations.
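The continuum version being described, written out (here rho is just the probability density that the lecture calls P):

\[ \int dp\, dq\; \rho(p,q) = 1, \qquad S = -\int dp\, dq\; \rho(p,q)\,\log \rho(p,q), \]

and for a blob of uniform probability occupying phase-space volume V_phase this reduces to

\[ S = \log V_{\text{phase}} . \]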
It does say that... all right, let's, good. Let's talk about correlations for a moment. To say that you know nothing means you know nothing. So in particular, if you begin knowing nothing and you measure one of the coins, what do you know about the other ones? Nothing. Nothing. You started knowing nothing about them. You measured one of them. You still know nothing about the other ones. Of course, you know about the one that you've measured. Now let's take the other case that we studied. We know that all coins are heads except for one which is tails. And we now measure one of them and we find that it's tails. What can we say about some other one? It's surely heads. What if we measure that that one is heads? What do we know about the other ones? It changes the probability. It changes the probability. There's a new probability that one of the other ones is tails. Right. It's one over n minus one instead of one over n. So that's correlation. That's correlation, where when you measure something, you learn something new, or the probability distribution for the other things is modified by measuring something. That's called correlation. For complete ignorance, there is no correlation. For any other kind of configuration in general, there's very likely to be some, not necessarily, but there's very likely to be some correlation. Correlation, as I said, means you learn something about the probability distribution of other things by measuring the first one. Or you modify the probability distribution. Good. Okay. I don't know if that's what you asked about or not, but yeah. Okay. All right. Well, in that original system there, we had two parts. Once we measured one thing, we had a conserved quantity. What's that again? We had a conserved quantity there, so that once we did one measurement, we knew which side we were on. Yeah. Yeah. Incidentally, entropy is additive. It's additive. It's the sum of the entropies of all the individuals. It's proportional to the number of things. It's additive whenever there is no correlation. When there's no correlation, it's additive. Now, so uncorrelated systems have additive entropies. We'll come back to that; it's a theme that we'll come back to. Here's an interesting question. Here's what you know: the coins are laid out in a row, and you know that if you measure a coin and it's up, then with three-quarters probability its neighbors are down. That's what you're given. That's all you know. All you know is that if one of them is up, if any one of them is up, its neighbors are three-quarters likely to be down. It's an interesting thing to try to calculate the entropy of such a distribution. That is correlated, of course. That is correlated because when you measure one, you immediately know something about its neighbor. Well, make up your own example like that. Make up your own example like that and compute the entropy. You can learn something from it. Any other questions before we go home? Not particularly. The formula here is due to Boltzmann. I think it's just the one that's on Boltzmann's tomb; I'm not sure exactly what's on Boltzmann's tomb, but he meant this. Well, he did write this. This is Boltzmann's final formula for entropy. The only difference between the Shannon entropy and the Boltzmann entropy is that Shannon used log two. Now, of course, Shannon discovered this entirely by himself.
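The two points in this exchange, in symbols: the correlation in the one-tail example,

\[ P(\text{coin } j \text{ is tails}) = \frac{1}{n} \quad\longrightarrow\quad P(\text{coin } j \text{ is tails} \mid \text{coin } k \text{ is heads}) = \frac{1}{n-1}, \]

and the additivity for uncorrelated subsystems: if \( p_{ij} = p_i\, q_j \), then

\[ S_{AB} = -\sum_{i,j} p_i q_j \left( \log p_i + \log q_j \right) = S_A + S_B . \]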
He didn't know Boltzmann's work; he came at it from an entirely different direction, from information theory rather than thermodynamics, but none of it would have surprised Boltzmann. Nor do I think Boltzmann's definition would have surprised Shannon. So they're really the same thing. There's no real point in comparing them, because they are the same. There's no real difference. Shannon may have, I don't know whether he did or not, put the minus sign in here. If you don't put the minus sign in there, it's called information. If you put the minus sign, it's called lack of information, or entropy. So I don't know which Shannon wrote down. Anyway. Shannon wrote down entropy. He did. Yeah. Is there a simple way to relate this to Heisenberg's uncertainty principle? No, no, this is a separate issue. This doesn't have to do with quantum mechanical uncertainty. It has to do with the uncertainty implicit in mixed states, not the uncertainty implicit in pure states. Okay. Oh, oh boy. Yeah, there's a conversion factor. You know, c equals h-bar equals G equals Boltzmann's constant equals one. Very natural. Boltzmann's constant was a conversion factor from temperature to energy. The natural unit for temperature is really energy. But the energy of a molecule, for example, is approximately equal to its temperature in certain units. Those units contain a conversion factor, k Boltzmann. Remind me to talk about k Boltzmann. For more, please visit us at stanford.edu.
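For reference, the two conventions and the conversion factor being alluded to, with k_B restored (in units where k_B = 1 the two entropies differ only by the base of the logarithm, as the lecture says):

\[ H = -\sum_i p_i \log_2 p_i \quad (\text{Shannon, in bits}), \qquad S = -k_B \sum_i p_i \ln p_i \quad (\text{thermodynamic units}), \]

\[ E \sim k_B T . \]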
Leonard Susskind introduces statistical mechanics as one of the most universal disciplines in modern physics. He begins with a brief review of probability theory, and then presents the concepts of entropy and conservation of information.
10.5446/14933 (DOI)
And the next part, of course, is the assessment part, which is going to be given by Sophie Kirschal, who is standing here and is going to introduce it in a minute. So I'm basically acting as Sophie's sort of assistant for this, so she's in charge and will tell you what to do, but I'll be around as well to help and make sure everything's going okay. But that's the general aspect I'd like to explain. Yeah, that's fine. Thanks. Hi everyone, so I'm Sophie Kirschal. [The remainder of this opening passage is unintelligible in the recording.]
[unintelligible] ...with what you are delivering. So we will actually look at how that is developing over the next few slides. Let's just take a quick look at reproducibility. This is science as a whole, not specifically computational science, as you can see. You might want to take a look at this paper. This was a commentary by Begley and Ellis that appeared in Nature in March of last year. What they were effectively discussing was the issue of lack of reproducibility of scientific results, specifically Amgen's attempts to reproduce a set of landmark preclinical cancer cell line studies. [The remainder of this passage is unintelligible in the recording.]
[unintelligible] ...whether I'm allowed to reuse that work, which I can tell from the licence attached to it.
So that's just an example of how the licence on a paper can help you understand whether or not you can actually use that research, and in what ways it is appropriate for you to do so. However, actually, what we are coming to is open... [unintelligible] So you don't need to worry about it too much now, but share-alike effectively means that if that open paper has obviously got a licence with it, and it's a share-alike licence, and you go on to use that paper, it would mean that you'd have to license any derivative works from that paper under the same licence that the source paper originally held. We'll be discussing all of that tomorrow, so don't worry too much about that for now. [The remainder of this passage is unintelligible in the recording.]
[This passage is unintelligible in the recording.]
[unintelligible] ...take findings from the data mining of the patient records and translate that into experiments in the lab on particular targets. And so that's a really fantastic way of actually driving science by using this large scale automated approach. It was actually having access to that patient data which made it possible, which is an interesting example of that. For this kind of research on patient data, there's understandably very specific permission that you need. That's a really, really good piece that you might like to look at. And then also here we have Chaz Bailger, who's here in Oxford at the Structural Genomics Consortium. So the SGC is based up in Oxford, over towards Headington. And as you can see here, they aim to accelerate identification of candidate targets for drug discovery by generating freely available novel reagents. Now it's the fact that they actually are providing these freely available reagents that is a huge factor in their success. Because in opening up their research, you get a lot of sort of confidence from other researchers feeling, well, this group is being very open about their research, they're open to discussing all elements of their research, and they're actually providing the free reagents. And so there are about 200 scientists in the consortium. They actually collaborate with something like 250 different institutions and companies worldwide, including a lot of very high level pharmaceutical companies. And that's actually borne out in the success of this. They crystallise 5 to 10 new structures a month, compared to many other labs that may only achieve one to two a year. A lot of that is actually off the back of openness allowing them to generate an extended and strong scientific research network for collaboration. And as you can see at the bottom, finally, the SGC is actually responsible for 25 to 50% of all structures deposited into the protein data bank relating to the two main areas of interest of the SGC, namely human parasites and what are termed proteins of biomedical importance or interest. So that just gives you a bit of a snapshot of ways, sort of maybe slightly non-standard ways, in which you can actually approach research and get some really interesting new developments. But then finally, the reason why this affects you as well is because this idea of openness and reproducibility, and a change moving away from that heavily paper driven metric of actually assessing researchers to something that takes in all of the different components of your research output, is actually starting to be borne out by a lot of the research councils. So you can see many of them are actually mandating that if they fund your research on a grant, you actually have to deposit your papers in a particular online repository or make them freely available after some embargo period or otherwise. And for example, we have, from Research Councils UK, that free and open access to the outputs of publicly funded research offers significant social and economic benefits. So one of the reasons why we're delivering this assessment in the way that I'm about to tell you is because you're actually going to be part of this rapidly changing research world, and you need to know from this point onwards how to deal with that and what the different aspects of things like licensing and data release actually mean.
So I know that there's a lot of you who, even if you've enjoyed learning aspects of programming in the MATLAB course, might be thinking, oh well, I know that programming and computational work isn't for me, I'm definitely going to end up in a lab for my research career. That's absolutely fine, but what I will actually say to you is that the licensing details and the open access sort of discussion that we're going to have over the course of these little mini lectures that we'll be delivering over the next week is completely independent of computational science and will actually apply potentially to your work no matter which discipline you actually end up in. So hopefully the stuff you're about to learn over the next week will actually prove useful to you in your research career. So just if you wanted some nice reading, you might want the popular science book by Michael Nielsen called Reinventing Discovery. It's a really, really good read. He basically takes you through all sorts of different crowd-sourced or large-scale networked projects and looks at how those have succeeded and why, and actually what the implications are for scientific research as a whole. So you might want to take a look at that one. Okay, so this is the bit that you probably came here for, the assessment. Right, so having said all of that, you are all going to study cancer modelling and infectious disease modelling. Oh, sorry, is that a question? Do you want to ask the question now? Just two things about it. One is that some of the best research still will not be open, and two, by making it open, how do you see that that could maintain its quality? Because, you know, the internet is open to all sorts of rubbish. Yes, so sort of, I can see what you mean in the sense that just releasing it isn't enough. Because a paper being out there and open does not mean that it hasn't been through peer review. So what you're actually finding, which we'll actually be discussing I think on Tuesday of next week when we look at open access publishing, we'll be looking at sort of what the current publishing model is at the moment, with a sort of very traditional submit to a journal, subject to peer review, and that sort of process. But then I'll go on to actually explain the two open access models that are actually being mandated now by the research councils, which actually do involve peer review. So just because a paper is open, if it's from what you believe to be a reputable journal, it will have been through a peer review process. As you say, things like open lab books do not necessarily have the same opportunities for peer review. That's not something I'm necessarily advocating right now over the course of your research. But as people are actually beginning to get the idea of having your research in the open, open for comment and that sort of thing, there's an idea of people being able to decide for themselves. But in particular with the open access journals, there is a peer review system embedded in a lot of those journals. And so just because something, for example, is open access doesn't mean it hasn't been through rigorous peer review, but you're quite right to actually question that, because a lot of the time when we're actually trying to look at these research outputs, there is that issue of, well, when I find a piece of research, how do I actually know that that research is good research? It's a conclusion you have to come to by yourself.
But obviously having a piece of work peer reviewed is going to assist with that. But we will actually be covering a lot of those issues in the publications talk, which will be on Tuesday. Any other questions before we move on? In terms of reproducibility, to what extent might you incorporate some manner of reproduction of results into peer review? Sorry, I can't quite hear the end of your question. In the context of peer review. It's one of the big sticking points at the moment, because obviously it's certainly not mandatory across the board for computational papers to request code. But what we are finding is that at the moment a lot of papers are actually requesting that authors submit code and raw data as a part of actually submitting their paper for review. Of course, the sticking point is that we're moving from one very traditional model over to something where the financial benefits of that model and the financial clarity and transparency of that model are much better. The idea of actually having the code out there is great. It means that people can make their own decisions. But obviously the peer review process for something like code is something that would be in its very, very early days at the moment. And that's something that actually I think is going to become quite a hot topic in the next couple of years as this new system of publication actually develops. You will actually find in computational journals, a lot of them are now requesting as standard that if you actually want to publish with them, you have to submit your original code so that it can actually be inspected either by the reviewers or by the readers. In terms of openness, how do journals source their money if they are open? Exactly. Well, actually, it's probably best if I defer most of that question to Tuesday. But what is traditionally happening at the moment is we have a system whereby, for example, the Bodleian Library will have a significant amount of funding to pay for subscriptions. Of course, we're very lucky here in Oxford. The Bodleian has a lot of money to actually pay for those subscriptions, but you will find that other institutions that maybe don't have the benefit of quite such large resources won't be able to afford a lot of journals. I don't know if any of you have ever found, you know, sometimes friends at other institutions might contact you and say, hi, I've found this particular paper, I can't access it because it's behind the paywall. So you're right. If you're actually going to move over to a system where it's free, you actually need to make sure that there's money coming in in a different way. At the moment in sort of open publications, you will typically find that there's an article processing charge that's levied on the author at the point you submit. But because, of course, the research councils are mandating this idea of open access, they are typically putting allowances into research grants to enable the costs of publishing open access and paying that article processing charge to actually be covered. And then it means that you've got much more transparency up front about what it is actually costing per article, and it means that there's a bit more of a level playing field across the board, and you're not finding that your research, which often has been at least part funded by the taxpayer, is going to be hidden behind a paywall where, you know, readers might be charged an incredible amount per article. But you're quite right.
If you're going to move from one model to another, you also need to think about what funding model you're going to adopt. That's not an international model, though, because you're expecting governments to pay, and not all governments will give the funds. Well, quite a few of those issues; there's going to be a whole lecture on sort of the publication process and sort of green versus gold open access and how both of those work. And as I say, you need to actually think about it in a global sense as well as a sort of local government sense, but we will try and touch on those on Tuesday. Anybody else? Right. Okay. So you've probably had this slide up for a few minutes now, so you've actually seen what's on it. But basically, you're going to all be studying cancer modelling and infectious disease modelling at some point over the next sort of week and a half. So the outputs you deliver need to be open and reproducible. Now, I know from this lecture you don't necessarily have all of the resources and the skills yet to deliver, for example, research that is appropriately licensed. But don't worry, there's going to be a series of mini lectures over the course of the next few days that will actually equip you with the skills and the knowledge that you actually need to bring that into your assessed work. Okay. So just a quick word about how we've structured it. We've had to split you into eight groups for this, A through H. So for the first week, groups A to D are going to look at cancer modelling and groups E to H are going to look at infectious disease modelling, which is typically going to be either HIV or influenza. Okay. And so you will be expected to take that as you would a research question. We've got some sort of handouts here with some papers on. So for each of these topics, the cancer modelling and infectious disease, you'll be expected to go away in your groups after this lecture, examine the papers in the area that you've been assigned to, and pick one of them that you're going to be focusing on for phase one of the assessment, which is going to run from today all the way through to the end of Monday. As part of that, you will probably want to start by extracting a particular section or subsection of that paper, examining some of the results, and building a sort of suite of MATLAB routines that will enable you to try and reproduce that work as a starting point. Once you've actually gone from there, you can be working with me and the demonstrators over the course to effectively build on that foundation and start developing the whole piece that you're working on as a mini research project. So by the end of Monday, you're going to have to submit a written report on your first project, and you're also going to need to submit the data that you produced, the code in MATLAB that you produced, possibly any figures, and also a data management plan. So again, we'll be discussing these in the lectures, so don't worry too much about the details now, we will get to them in plenty of time. But in terms of actually releasing it online, you're going to be using the version control provided by Git. So you're going to need to register online with GitHub, and the lecture this afternoon will actually take you through that procedure; we'll explain a bit about version control and get you set up with GitHub. The idea is that you're going to use a GitHub account online, that everybody in your group can access, and that I can access, to actually put all of those materials online.
So you won't need to worry at the end of each phase about emailing in all these items to a particular email address; you just need to make sure that they're on GitHub, in your little folder online, and then you can all work on them, and I can actually pull down all of that stuff at 5.30 on Monday and sort of assess you for phase one. Then, after the initiator phase, we're going to actually have what we call the successor phase. So I spoke earlier about needing to see yourself as both a research producer and a research user. In that first phase, you will initially think of yourself as more of a research producer, because you're just trying to sort of look at a short project for a few days, see what code and what sort of little research outputs you can actually produce. But you're actually at the same time going to have to think of the work that you produce from the perspective of a research user as well. Because once you actually come in on Tuesday morning, we're going to rotate the groups. So you won't be aware of which project you're going to get, but effectively, we'll swap over. So groups A to D will work on infectious disease, and groups E to H will work on cancer. So, one rule about the way that this is being done. Obviously, the test comes in the successor phase, when you've actually got to take work created by your peers, and you're going to have to understand what they've done, and you're going to have to build on that and continue it as a research question in whatever way you choose, again in collaboration and discussion with myself and the demonstrators, over the course of the next three days until you hit the end of Thursday. So the rule is you're not allowed to discuss between groups what you're doing, because what we want to do is try and create a little isolated research environment for all of you in that first phase, where you're trying to work on your research project and produce these research outputs. But you are going to have to make sure that not only are you producing good quality code and producing what you believe to be a good report, you've got to be able to look at that stuff that you're handing in on Monday and say, if I were completely new to this and I were trying to examine these results, understand where they've come from, assess their validity before I do anything else with them, would I be able to? So for this reason, if you could please not discuss what you're doing with other groups. Then we'll swap you over, and once you hit the successor phase, you'll inherit a different project and you'll have to move forward with that. So you will get the chance to work on cancer and on infectious disease at some point. Do we get the initial paper they worked from? Yes, so you're allowed papers and sort of stuff that's available online, but in terms of actually working out what your predecessors have actually done to achieve the results that they claim they've got, you've actually got to be able to, for example, understand their code and how it's structured. You say continue in a new way, so is it tangential to what the original paper did, because in the first phase you're just trying to copy the original? Well, some of you might find that once you've reproduced stuff, you may have time to develop it further. But ultimately, you're actually going to have to continue using the inherited code, for example.
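Just to make that concrete, here is a minimal sketch of the kind of small, self-contained MATLAB routine phase one might produce and a successor group would then inherit. The model, parameter values and file names are purely illustrative and are not taken from any of the assessment papers:

% reproduce_growth.m -- hypothetical example of a small reproduction script
r  = 0.5;        % growth rate per day (illustrative value)
K  = 1e6;        % carrying capacity (illustrative value)
N0 = 1e3;        % initial population (illustrative value)
tspan = [0 30];  % simulate 30 days

% Logistic growth model: dN/dt = r*N*(1 - N/K)
odefun = @(t, N) r .* N .* (1 - N ./ K);

[t, N] = ode45(odefun, tspan, N0);   % standard MATLAB ODE solver

figure;
plot(t, N, 'LineWidth', 1.5);
xlabel('time (days)');
ylabel('population size');
title('Logistic growth (illustrative parameters)');
saveas(gcf, 'growth_curve.png');     % keep the figure alongside the code and data

Having the parameters declared once at the top, every step commented, and the figure saved alongside the script is the sort of structure that makes inherited code quick to pick up.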
So if the people that have handed their stuff over to you haven't particularly structured the code well, you might end up wasting a lot of time in the successor phase just trying to work out what's going on. So the successor phase will carry on from there, and it is going to finish at 10.30am on Friday the 18th of January. So at this point, you'll be expected to submit another report, data, code and figures, again via GitHub. So don't worry too much about the idea of swapping over projects. We'll actually handle all of that on Tuesday morning when we actually assign you to a new investigation. But then on Friday the 18th of January, each one of the groups will have a 30-minute period in here where you can present. Now you don't all have to present; your group will have five or six people in it. You may actually find that you prefer to just nominate one or two people to deliver that presentation. The presentation should include a description of what you did for phase one, a discussion of what you did for phase two, but it should also actually discuss how easy you found your predecessors' work to understand and how easy it was to use. To what extent do you feel your predecessors delivered reproducible research? Okay, so I want you to actually start providing some analysis and feedback for each other as to maybe where you could actually go with this. Okay, and what you actually need to do to improve the potential impact and reproducibility of your own research. Okay, so a quick assessment overview for you, which I've really already gone through. So from today you're going to be working on phase one, then you'll sort of... whoops, that shouldn't say Friday 18th, deadline for phase... yeah, actually. Okay, yeah. So you've got your first deadline Monday, you'll arrive on Tuesday morning, so Monday night you should have quite a nice time because you'll just be able to disappear off and not think about work at all. Once you come in on Tuesday, we'll rotate the groups, and you'll have another three working days to focus on that until you hand in on Friday morning. Okay, so I've got some handouts here that will explain all of that. In terms of what we're actually assessing you on, as with a lot of things in the MATLAB course, we'll be actually assessing you on the quality of your code and data and other submitted materials; that will give us some idea of how you've grasped the techniques and approaches that you've actually met already in this course. We'll be assessing you on the quality of the written report. Now we've requested the report should be no longer than three pages per group member. You don't have to fill that quota. If you'd rather produce a report for each stage that's shorter and to the point, then that's great. You know, if you can communicate your research well, that's fine. And also, though, we'll be marking you on the openness of the project. So we'll be able to see some of that when I, for example, pull down all of your work from GitHub; I can actually look at all of those files to see whether you've adequately licensed these files in the right way for them to be considered open. Is your code structured in such a way that it's easy to pick up? And also, we'll glean some information from how we see your successor group deal with the project that you've handed to them. Okay? So there's three main things there: the report, the code and data and other materials, and how openly you can actually do this. Okay?
So there's a sort of schedule to assist you with this idea of learning about licensing, learning about openness. There's a schedule of very short lectures. The idea is that most of these are going to be 20 to 30 minutes at most. We don't want to be taking too much time away from that phase where you're actually just wanting to get down to work and produce stuff. If you're back in here for 12.30 today, we'll take you through managing your code using GitHub. Then we'll go on to data and content licensing tomorrow. Monday morning, we'll be discussing data management plans. We've also got a guest speaker in from Zoology that day who will be discussing a bit about scientific workflows and how those help to actually unite these different bits and pieces of research output. Then Alex Fletcher will be in on Monday at 2 with a brief talk on written mathematics in the context of a report and how you should be approaching that to make your mathematics readable and understandable. Then on Tuesday, we'll actually be discussing the issues surrounding open access and how you're actually publishing work, and, as we say, the changing face of publication: what sort of models we are moving from and what direction this is headed in. Wednesday, Tom Dunton from Computer Science is going to be coming in, looking at how you can actually link MATLAB to other codes and actually use MATLAB to interface with Chaste, which is a sort of cell modelling framework written in C++. On Thursday, we've got another guest lecturer. Jenny Mulroy from the Open Knowledge Foundation is going to be visiting. She's going to be demoing different projects that the Open Knowledge Foundation actually deliver, and it'll give you some idea of the scope of different projects in science that are actually being developed at the current time. So attendance at these is actually compulsory as part of your assessment, and actually if you were to miss a lot of these, you won't actually know what it is you're meant to be handing in. So we do need to have all of you at each of these. And so the final slide here is your assessment groups. So I'll just hand out... there's a few handouts to get us started here. So just to make your job easier in terms of thinking, by God, it sounds like we've got a million and one things to think about handing in to GitHub, there's a checklist here that will help all of you work out when you've got everything you should have. All right, so once those handouts have gone around, I'd just like you to get into groups. So I've actually got two different handouts according to which group you're in. So as we said, A to D are on cancer modelling and E to H are on infectious disease modelling. Okay, so while we actually... sorry, to clarify: the work is delivered as a group. The report is meant to be a group report. So you'll have to be working on that report together. We don't want you to submit anything individually. The whole assessment is actually done on the basis of your work as a group. So you've got two handouts here, one for the cancer modelling people and one for the people actually working on infectious disease. So what I actually want you to do: there's a list of papers at the bottom of here.
Over the next sort of hour or so, you need to actually go away in your groups, look through these different papers, have a look at the different models that they actually involve and the different results they come up with, and you're going to, in your group, have to decide on one paper that you're going to base the first bit of your assessment on. So you'll want to ideally select some portion of the working from that which you actually want to attempt to explore in MATLAB and attempt to reproduce. As we said, from there you'll actually go into treating it as a proper research question, and as a group you will choose how you're going to take it forward. So it's probably best if you get into groups. I don't know, is it easier if I just pass these around and you take the cancer modelling one if you're doing cancer modelling? Yeah, okay. Right. Let's hand the cancer ones out first. Right, so once you've all got the handouts, you're free to disappear off downstairs and I will kind of call the groups in, in turn, in about an hour. So good luck, and the papers aren't on WebLearn yet; I will be getting all the papers involved in this up on WebLearn in the next five or ten minutes. So yeah, feel free to look for them there. Thank you.
The following video is an original recording of the opening lecture to the OSTI pilot initiative, hosted by the Doctoral Training Centres in Systems Biology, Life Sciences and the Industrial Doctorate at the University of Oxford. Entitled "Reproducibility and Open Science", the seminar identifies some of the current issues facing scientific research and introduces the theme of open science as a possible solution. Prospective OSTI course leaders may be interested in the end of the lecture, which provides an explanation of the Rotation Based Learning (RBL) implementation.
10.5446/14932 (DOI)
[unintelligible] ...so that that license will actually dictate what people are allowed to do with the work of yours that they choose to download and use. Ultimately, in the context of your research career, you can actually enhance your research impact an awful lot, because, as we saw in particular in one of the examples yesterday, the work of the Structural Genomics Consortium, they and a lot of other groups and organisations like that have found that actually, by having a much more open approach to research and inviting a lot of collaboration, people can actually see your work, they can actually look at it for themselves, often with the raw data, and in a way that actually allows them to assess the integrity of the work you're doing. You will often find that licensing your work and actually putting it out there for people to use will really help strengthen the research network that you can actually build on. Yesterday I gave you some statistics from the Structural Genomics Consortium's website, where they claim they've got 200 scientists and they collaborate with around 250 partner organisations worldwide. Those partner organisations span a wide range of commercial and industrial companies and really high level academic institutions. You can see there are benefits to your own research network from taking up an approach like this. Roughly, what do you need to do in licensing your code? Well, for starters, I know it seems fairly obvious, but you need to actually identify what items of your work you are actually seeking to open up.
If you're seeking to open those items up, you need to think about what kind of licence you're going to have to apply. So, as we said, the licence is what delivers what we call a sort of legal openness. It's providing that legal framework that will actually sit on your work and tell people what they're actually allowed to use it for. You also need to deliver some kind of release. There's no point having all of your nice work, all with a nice licence on it, if it's just going to sit on your hard drive and no one can access it. So if you're actually going through the process of releasing your data, make sure that you've actually got a means of transmitting it. For the purposes of this project, you don't need to worry about that, because the way that you're all going to be releasing is via your GitHub repositories. But in an academic context, you might actually want to end up promoting the fact that you are working in a group that is licensing their stuff and putting it out for release for other people to use. So now we'll actually get a bit more into the nitty-gritty. The basic three steps of licensing are all here. I will remind you at this point: if you go up onto WebLearn later, there is a folder that should have appeared in the last half hour, and it's called something like OSTI Day 2. If you go into that, there's actually a copy of something called the Data Licensing Handbook, which goes through a lot of these steps in much more detail. But bear in mind that it is specifically just for data licensing, not for some of the other materials that I'll talk about here. But to start with: ensure you have permission from the rights holders. So, for example, the main bit I wanted to warn you about here is, if you're putting your project reports together in the next day or so, you might find that there's an image from a fully copyrighted journal that you've actually been working from. If you're going to start trying to license that report before you release it to me, you cannot stick a licence on the image that you've dragged down from that particular paper. So you need to make sure that, for all of the content that you are choosing to license, you actually have permission from the rights holders to do that. So in your groups, if you're putting together a report, you will have all written the text, or at least you should all have done. So in that sense, you've actually produced that work. So as long as there is a consensus in your group, for the written content of that document, that you're all happy going with the licence that you've selected, then you've got permission from all the rights holders. But what you won't be able to do, for the purposes of your assessment, will be to copy and paste an image that you've taken from a different paper that you've been working from, stick that into your report, and then license your report, because it will be a fully copyrighted image. So just beware of that. This bit I'm going to go into in a lot of detail next. There are lots of different licences out there, and I think one of the things that often impedes uptake, of people choosing to practise licensing of their research outputs, is that they feel a little bit fazed by all of the different licences that are there.
So in a couple of slides' time, I'll show you a couple of places where you can actually find particular licences, and I'll explain a little about some of the terms that you'll see in those. And once you've actually chosen your licence, you need to make sure that the declaration of that licence is associated with your file in some way. So if you imagine this is, for example, your reports: as you'll see, reports count as a content licence. And so once you've selected the licence that is going to cover your written report for this assessment, you need to actually declare that fact at the top of the report, or somewhere that's very prominent, so it is clear that that is the licence you have chosen. Often you tend to find, and this isn't something I'm requiring you to do, but any of you that know how to mess around with the meta tags of the file: for example, in a lot of office applications, when it comes to saving the file, there's often a little box somewhere on the save menu that will allow you to embed meta tags. And that's the kind of place where you could also add some digital information, attached to the file, that lets people know what the licence is. In some cases, machine-readable licences are also available, but again, that's not something I want you to worry about over the course of this assessment. Often, particularly if you're working online, it can actually help to state the web address and provide a hyperlink to the online listing of the licence. This will all make a lot more sense once you've actually seen the stuff later on in this talk. And if, as part of your licence, you are demanding that anybody that uses your work gives you a citation, that they actually attribute that work to you, then ideally what you should be doing is telling all of those people how you would like to be cited. So, typically... that was a bit of an example, actually, of content licensing. If you look at the front of these slides, there's a licence at the bottom. It's actually as easy as that. I'll be explaining later what this term means. This is actually the name of the licence that I've used for these presentation slides. So I've offered them, you know what the licence is, and there's a link here to the actual listing for the licence. And the specific licence I've chosen is what we call a Creative Commons Attribution licence. I'll go on to explain what that is, but this little BY stands for Attribution. So it means that, hypothetically, if any of you actually wanted to take these slides, and they are up on WebLearn in the same folder that I mentioned, so you can actually drag these slides down later and look at all of them, if you wanted to actually chop and change this presentation, add in some of your own stuff and then give it as a separate presentation to your own group, you wouldn't even have to come to me in person or email me or anything to ask me. You'd be able to just drag these slides down, look at this and say, oh, all she wants is to be attributed. So as long as you mentioned, at the time that you actually gave your new presentation, that the slides were based on my work originally, then that's fine under the terms of that licence. Okay? So feel free to chip in and ask questions as you go, because I know I'm actually going to be throwing quite a lot of bits of information at you in today's talk. Right, so just remember to state the form of citation.
Now in that sort of slide couple back, I talked about making sure that the license you choose is appropriate for what you're trying to license. Of course, when you're actually dealing with a legal situation, it's not as though you've just got one license that is suitable for any kind of content that you could possibly deliver. So this is purely partly for legal and partly for historic reasons. The licenses that you will meet fall into three main categories. Licenses for code, now they often sit separately because of course the open source coding community has been around for an awful long time. So code is just naturally has its own group of licenses that apply to code. And then there are these two, data and content. Now initially you might think it's a bit difficult to distinguish between those. For the purposes of this, you should largely think of data as big numerical data sets or not necessarily all numerical, but the kind of data sets that you was potentially putting a database or analysing some numerical way, that would be data. Content is a really broad term. Basically content would cover if you wanted to license your report for this project. Licensing a report or written report would come under content. Similarly photographs, videos, images, anything like that just comes under the window of content. So there's three different types of license and you need to be aware of that fact and make sure that whichever license you end up choosing falls into the right category. Otherwise there's no point taking a piece of code for example and trying to put a legal license on it that's only suitable for licensing data. It's not going to make any sense. So actually these are the two links that you can follow and if you follow these two they'll actually give you a little listing for suitable licenses that you might like to use for your materials. So as I say you can go through these later on by a learn and follow all of those links and hopefully see what's up there. So yeah coming back to number two still, select a license that is appropriate for your material. We might say well clearly you've told us this three different categories of licenses and there's lots of legal people around the globe that are writing all these licenses. So kind of well ultimately where are they all coming from? You will get people that have highly legally trained who might choose to write their own licenses. Technically that's fine but remember at the beginning we actually spoke about wanting to give your users confidence in the fact that there's a particular license that they know, understand and recognise and therefore there's that understanding that's there as to what they're allowed to do with your work. So what people would largely recommend is that rather than writing your own license or using a very little known license, you want to often follow those links and go to the main groups of people worldwide that are producing the most highly used licenses because people are familiar with them. So the ones that I'm going to show you to begin with are from a group over in the States. They've been around 10 years, they just celebrated their 10th anniversary in December 2012. Creative Commons, so they're a non-profit organisation, it's a huge worldwide organisation now, they employ all sorts of people, obviously a lot of legal experts, but also a lot of people who've worked extensively in academia and industry. So they understand the constraints of the people that are under. 
And basically they have big teams that develop licenses that are designed for you to use when you're licensing your data, your content and things like that. So they're one of the best starting points if you want to learn a bit more about what licenses are, how to use them and that sort of thing. So I'm going to split the next bit into three little chunks just so you can see what you need to do for each component that you're pushing up to GitHub. So the first of these I'm going to look at content licensing. Now content licensing, as we've said, if you want to put any figures up that you've produced, that you've produced yourselves, if you want to license those before you put them up to GitHub, they come under a content license. If you have, you know, you've written reports, again, they will also come under content licensing. So these are all flight of content licensing. Now I mentioned creative commons. You can recognise creative commons licenses because they'll all start with CC usually. You noticed that when I showed you on the cover slide of this presentation said CCBY because it's a creative commons license. And the really nice thing that creative commons have done, they've thought about the different concerns that you might have about people reusing your work. And they've actually sectioned those out into sort of several major categories. And it's a really flexible licensing system so that you can include, you can sort of pick and mix with all of these different areas and work out how do you want people to use your work. So we started with this one. I already mentioned this. If you see a BY in a lot of licenses, in creative commons licenses, as in bys, the work is by so and so, you might want to be attributed. So if you put a BY license on or anything that sort of says attribution in the title often, it will mean that that license is dictating anybody using that work, has to actually cite you, have to say this is the person that created this work and I've derived stuff from it. You might decide that you don't want anybody to profit commercially from the work that you're releasing. If you see that sign, that often means no commercial use. You can actually see, I should probably mention now, these all sort of nice circles with images in are a bit of a riff on, you'd be used to seeing the standard copyright symbol. You'll always see this. You're used to that as a copyright symbol. What that actually means is all rights reserved. As you're aware, if you see that all rights reserved symbol, it means that somebody is asserting complete control over every single thing that happens with that work. So if you see that one, that means no commercial use. This one here, ND, is no derivative works. So a derivative work would be anything like, for example, any of you taking this set of slides and wanting to chop and change them and put your own stuff in and maybe cut some stuff out. If you did that, it would be a derivative work. So if I'd actually, if instead of using, can you actually all see if I write here? So you remember at the very beginning of this talk, I'd actually used something called a creative commons attribution licence to licence these slides. That is why you'd be allowed to take them and use them for whatever you liked and chop and change them and remix them. If instead on the main intro to the slide deck, you'd seen something like that, as soon as you see that ND, it's no derivative works. 
So that would mean you'd be more than welcome to take the slides and reuse them elsewhere, but you wouldn't be able to start chopping, changing them around and remixing them because I would have prohibited that under the terms of that licence. The other one, this one, the first slide here is share a like. Now share a like is quite a specific term. What it actually means is that you're welcome to take my work and chop it around and change it and mix it with things and use it often. Obviously this inclusion notwithstanding, but it means that you have to share the work that you produce from my stuff under the same licence that I've actually used. So this one can restrict things a little, but it basically means that if you've released something into the open and you want to make sure that it stays that way, you can actually ensure that by applying a share a like licence. So if I'd actually licenced my slides like that and you wanted to use them for something, you'd be more than welcome to, but you'd have to give me an attribution and you'd also have to make sure that anything else you produced was also under the terms of the licence. How do you define commercial uses? Does it have to be money gained by someone else or is it just image or something like that? Generally it would be. Obviously with images, it will often be quite clearly stated on an image. When you say images though, in terms of... For example, if somebody uses it working at some company and whether you will not actually get the company more money. You're actually talking in terms of like a product identity sort of sense, aren't you? What does that mean? No commercial use for... What you should think of it as is financial terms. Most people mean by commercial use, it's covering financial derivatives. Obviously if at any point you feel that something you're doing with your work might infringe a sort of brand identity of something, then you're in that sort of grey area where I'd probably advise that you seek a bit of legal advice for that. By and large though it's very, very clear what is meant by commercial use. If for example some of you didn't bother with a commercial use clause in phase one, this is a really wild imagination. But if in phase two then one of the groups decided to take that work and in three days managed to sort of create a highly successful functioning spin-off company from that stuff and made loads of money. You'd be fine with that if you didn't have a no commercial use clause in. So just be aware of how these things function. So generally people mean financial commercial use. You're quite right, it's actually worth being aware of the idea of infringing identity, but that's not something that's really on the side of licensing. You're going into quite deeper financials and legal waters by that point I think. Right, so if we go back to the slide I put up yesterday though as to what is openness. Just looking at those four different constraints that can be put on a licence gives you some idea of what you're thinking about letting people do with your work. But there is actually an official definition of openness and as you've seen some of those constraints we just talked about are going to be more restrictive than others. According to the open definition a piece of the content or data is actually open if anyone is free to use, reuse and redistribute it, subject only at most to the requirement to attribute and or share alike. 
So what this isn't saying is not saying that you're a bad horrible evil person if you don't want your work to be used for commercial purposes by other people. But what it is saying is that inevitably you are going to restrict the kind of use that you'll see of your work, so just be aware of that. So to go back to this we've got those four main areas under the terms of if you're actually trying to stick as close as possible to the open definition you would actually find that if you really want to be open they generally discourage the no-commercial use licenses and given the whole point of actually being an academic and producing new scientific results as you saw in that Creative Commons, Science Commons video I showed you in the first lecture yesterday, they describe scientists as the ultimate remixes. As part of your scientific research the very nature of it involves you taking other people's ideas and hypotheses and research and chopping it around and remixing it. And so if you're actually going to be truly open you want to avoid anything that's going to prohibit derivative works. So this leaves us with attribution and share alike type things. Now bear all of those four things in mind as we move on to the sections on data licensing and code licensing. You will find when you start looking through these lists of licences they don't all have those nice little letter tags on them but you will often see terms in them that say things like attribution or non-commercial license or share alike license. And as long as you understand the general concept of what's going on and you're following the major links I showed you for licences that are very open and are good to use then it's all fine, you should understand what's going on really. And the only one that I really needed to mention is this one, public domain. Now this is basically the most open of all the licences you can have and what you're actually doing if you put your work in the public domain you are waiving all rights to the work in as much as that is possible under local legal jurisdiction. So we have to actually put this in where it says public domain dedication, anyone can use it for anything. In a lot of areas there will, the one sticking point with this is that in some local jurisdictions they will not allow you to waive the attribution side of things because that comes under what is often referred to as the moral rights of the work. So as the author of that work you can assert the moral right to be attributed. But if you actually slap a CC0 license on it what you're actually saying is I don't even care that you tell people that the work was originally mine, you can chop and change and reuse it and I don't mind. Like I say some local areas will say oh well will allow you to relinquish all rights but we will not allow you to relinquish your moral rights to the work should you choose to assert them. The most open, inevitably, if you're not really putting any ties on people it is the most open of all the licensing arrangements and nobody owns the rights to materials then. You will often find for example in certain things in the music or publishing industries you will be aware that certain pieces of music for example will be granted copyright for a certain period after the lifetime of the artist or the composer and after a certain period of considerable number of years they often revert to being in the public domain. So that's the sort of thing that's happening here. So now on to data licensing. 
So what I will actually do now is follow this link, just for data and content. Many of the same ideas apply, so all of that stuff I discussed about attribution, share-alike and non-commercial, those ideas still apply to data licences. You will just apply a slightly different actual legal licence, because it's a licence for data, not a licence for content. So if I actually just follow this link and show you: this is the page supplied by the Open Definition. You can actually find out a lot more information here about openness, open definitions, open knowledge and also about licensing. So in light of you wanting to be as open as possible with your research for the purposes of this exercise over the next week, these are conformant licences that are compatible with the Open Definition. So you can actually see here we've got some names down the side of content licences you can use. You notice here the Creative Commons Attribution-ShareAlike, so these terms are starting to look a bit more familiar to you, and there's the BY for Attribution and the ShareAlike. That column is just telling you whether or not you have to attribute or share alike with those particular licences. So that gives you a way of just browsing through some of the content licences available. And then just below it we have something fairly similar. Now, what's known as a PDDL licence is that sort of really, really liberal public domain declaration. Is there a question? Is there any licence that you can have that's share-alike but not attribution? Because Creative Commons used to have one, but they deprecated it. Yeah, I think that one's now fallen out of common use. I'd actually have to check; I mean, it's possible, but I don't think so. Might be worth a look, though, on the Creative Commons side. And as you can see at the bottom, there are actually recently deprecated licences. Again, for data you can have attribution and all sorts of things. So then this is the CC0 licence, so you'll often see a CC with a little zero after it, or actually 'zero' in words. So again, that's a sort of public domain one; this is public domain. So again, what you'd be thinking about with your data: if you're producing a file that's got the numeric data in, you ideally want to attach one of those licences to it as you see fit, embedding that maybe into the meta tags of the file, and at the very least making a very open declaration somewhere in the file as to what the licence of that data is. So at the bottom here as well, there are actually other ways of sharing your data. None of them have become completely mainstream yet, but I don't know if anybody here has heard of anything called the data paper? So, in the way that you actually publish an academic paper: a lot of people were realising, before the sort of Creative Commons type licences really spread, that they want their data to be out there, but they want to make sure that people use that data in the right way. And so the data paper was effectively a way of publishing a data set in the way that you would publish an academic paper, so that it is then a citable resource. So that is something that is out there; it's not really that widespread, but if you do come across it, you'll know about it now. And then there are also websites like Figshare, so Figshare is this huge site for releasing your content and for releasing your data.
And they actually have the rule that, as far as I'm aware, it is actually free for you to sign up and host your data and the content there, but you actually have to abide by their choices of licence and so on. On Figshare, I think, all data that you put up onto Figshare has to sit under a Creative Commons Zero licence, so that public domain waiver, and all content on Figshare I think has to sit under a Creative Commons Attribution licence. So just be aware of that if you're using sites like that. But one of the great things about something like Figshare is that if you actually put your data up there, you can get a DOI, a digital object identifier, for it, which would help if you wanted people to be able to link to your data set from elsewhere on the web or by providing a link within a written resource. This is a set of principles known as the Panton Principles. They weren't written specifically just by the Open Knowledge Foundation, but the Open Knowledge Foundation over in Cambridge has had quite a considerable hand in releasing these, and these have been around since 2010. And basically they provide four main guidelines to assist you with what might be good practice if you're a scientist trying to release your scientific data. And so you can see here some of the pointers we've already covered: the idea of clarity, with you actually stating what licence you're putting on things, and making sure that if you're releasing your data you use a licence that is actually appropriate for data, rather than one that's only going to be suitable for code or other content. Now, these are just sort of guidelines. Again, in terms of wanting to progress scientific research as quickly as possible and as thoroughly as possible, they actually recommend where possible that you try and use one of those public domain licences. But of course it might not always be possible for you to do that, or it might not always be the route that you want to take. But just be aware that, obviously, the more open your data is, the more useful it is to people. And again, there's this issue of trying to avoid, where possible, the non-commercial or other restrictive clauses. I do actually have a hard copy of these on little postcards, which we'll have with us on Monday if you want to take away a copy then. And then the final bit. I'm not going to rest on this too much, because you've probably heard quite a bit now about all the different bits of licensing. But code, again, is a very separate area, and as we said, open source coding has been such a big thing for so long that computer scientists are way ahead of the rest of us on it, really. Just have a look at things like Linux and all sorts of things that have spun off from large-scale projects where people are just free to take source code, modify it as they see fit and build amazing things with it. So for code licensing, again, rather than going to the Open Definition page, you follow this link to the open source one instead. So yeah, if you actually follow that link later, it will take you to this page. And again, keep those same ideas in your head about attribution and reuse and sharing. These are the sort of open-source-approved licences. And as for the way that you would actually implement this in your code, for example if you had your MATLAB file and you wanted to put it under one of these: I'd probably recommend, for what you're doing, that something like the General Public License will probably work.
It's an attribution-style licence, so if you actually click on that one, and click on the most recent version, which is this one, you can see here that it's quite long and wordy. It will tell you everything about the licence and all of its terms and conditions. If you scroll right to the bottom, there's a little bit of text that you can actually copy out, just here; you'll see this stuff. So you'd have to just put in a little line to give the program's name and an idea of what it does, and then you actually put the licence in at the top, to say: I'm actually licensing this specific piece of code under these conditions. And the General Public License then requires that, once you've actually put this in your code, if I were to take your MATLAB code and actually use it for something, I would have to leave this licensing at the top of the file, even if I modified your file quite considerably. It's basically the equivalent of a sort of share-alike licence: if I'm using your code, I would have to keep that in and would have to share it under the same terms. Okay. Importantly, under the terms of this General Public License, I would have to make sure that your name was still associated with that licence, because you started off the project originally. All right. Right, so a little bit of pointers for this. We've covered most of them now. Just make sure you're using an appropriate licence, one for code, not one for data or content. Pop the licence wording at the top of every single file that you upload to GitHub, because, of course, GitHub, by the way that it works, does not actually assert any ownership or licensing rights over your code. It does not select what licence you have. So if you've got code that's up there on GitHub, it will be unlicensed at present. And it's not good enough for you to just skip that legal jargon bit and say, oh, I don't like the legal jargon, so I'll just write 'this is open source' at the top of my code. Because if you're just writing that, it means there's no legal framework officially supporting your code. So you need to make sure that something like that is actually in there. So, is that a question? Does it go in every file, or can you just do a readme or something? Can you just do a readme in the folder? No, it actually needs to be at the top of each file. So, I know it sounds like a lot, but actually, when you think about it, it is just a basic copy and paste once you've got it set up. And if you work on very large projects (I don't know if any of you here have worked on Chaste at all; you might meet that on Wednesday or Thursday next week when Tom Dunton comes to talk), any big coding project, even if it's got thousands of files in it, if it's covered under a particular licence, there will be that declaration at the top of every single file, so that you know where you stand. So, yeah, thanks for asking that. You are free to choose from that list of licences if you think a different one would suit, but as a starting point, certainly the GPL should be appropriate for what you're wanting, and you can look at that site for more info. This has also been endorsed on quite a large scale by the scientific community, with this site called the Science Code Manifesto. And if you actually take a little look at that, you'll notice that it covers a lot of those concerns and directions in science that we've mentioned over the last day or two: the ideas of copyright, citation and actually curating your source code.
So it helps you sort of keep your project alive. So if you write a MATLAB function in a separate file, would that go in the help section, or do you put it above the function line? Yeah, so what you need to do with your MATLAB ones is put it right at the top. I don't think MATLAB will mind if you've actually got commented-out lines above the function line; it will just pick up the first non-commented line of code. So actually I'd say put your licensing right at the top, so it's very apparent that it's there. Right, and finally. So I think you can see now how you need to go about this. At some point over the course of this afternoon or Monday, ideally this afternoon so you've had enough time to think about it, in your groups discuss the different licensing options available to you. And I want you to, using those links, select appropriate licences for all your content and start applying them as you go. So, implement. But bear in mind, once we've switched over in phase two, the people that you pass your stuff on to ideally need to be able to use your work, and they need to be able to actually remix it in different ways. So if you recall from the slide yesterday, we did say we're marking you on openness of the project. Now, that doesn't necessarily mean you've got to slap a CC0 licence on stuff. But shall we just say that if you implement a no-derivatives clause in any of your work, it might be a bit problematic on Tuesday. So just bear that in mind. Just some other useful things; most of these are things we visited over the course of the talk. Let me actually just take you quickly to the Creative Commons website. They've actually got a selection tool on the website, and it really, really boils it down to very, very easy stuff. So, for example, if I wanted to allow modifications of my work, as long as everybody shares it under the same terms, and I'll allow commercial use, you can see it's actually suggested that I use an attribution licence of this form. So often you'll see at the bottom of websites, if you've got a website and you notice that somebody's put this kind of button on it, it's actually also telling you that the content of that website is something that you're free to grab and use in particular ways, and you can actually implement that on your own website if you want. And we also talked about actually linking to the licence. If there's a copy of the licence that you're using online, it's great to be able to link to it, and if it's a Creative Commons licence, they have these really nice listings online that will, for all of your users, make it really easy for them to understand what's going on. Because Creative Commons produce three layers for each licence that they actually release: there's a nice sort of non-legal-jargon, readable version of it; you can also click lower down the page and actually get the full legal text of the licence; and they also produce machine-readable licences, so you can really automate a lot of processes. It's great, it's fantastic. If you're linking to sites, Creative Commons have a lot of licences that you might like to look at.
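Coming back to the code side for a moment: here is a minimal sketch of what the top of one of your MATLAB files might look like once you've pasted in the kind of notice described above. The function name, the one-line description and the author name are hypothetical, and the notice text is only paraphrased from memory of the GPL's standard "how to apply" appendix, so copy the exact wording from the licence page itself rather than from this sketch.

```matlab
% simulate_model.m - one line giving the program's name and a brief idea
% of what it does (name and description are hypothetical here).
% Copyright (C) 2013  A. Student
%
% This program is free software: you can redistribute it and/or modify
% it under the terms of the GNU General Public License as published by
% the Free Software Foundation, either version 3 of the License, or
% (at your option) any later version.
%
% This program is distributed in the hope that it will be useful,
% but WITHOUT ANY WARRANTY; without even the implied warranty of
% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
% GNU General Public License for more details.
%
% You should have received a copy of the GNU General Public License
% along with this program.  If not, see <http://www.gnu.org/licenses/>.

function y = simulate_model(params)
% The licence block sits above the function line, as discussed in the talk;
% MATLAB ignores comment lines when looking for the function declaration.
y = params;   % placeholder body for the purposes of this sketch
end
```

If your group settles on a different licence from the open-source list, the same pattern applies: paste that licence's own recommended notice at the top of each file instead.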
And then finally, I'll just say, and I'm happy to let you go in the next couple of minutes, but now that you know all the stuff that you do about licensing, certain things on particular sites that you visit might become a bit more understandable to you; stuff that you maybe always noticed was there. You might notice that on YouTube, for some videos, you're actually allowed to go into the YouTube video remixer and start chopping and changing them around. That is usually because whoever uploaded them has selected a Creative Commons licence at time of upload that enables derivative works. So actually, in little ways like that, you can see a lot of licensing is sitting right on top of a lot of the content that you see out there; it's about being able to understand it and recognise it when it's there. So, a lot of you will be familiar with the site Flickr, for photos. What kind of photos do you want us to look for? Okay. What? Kittens. Kittens, okay. So if we actually look at the kittens. Aw, how lovely. Yes, there we go, some nice kittens on Flickr. These are all photographs people have taken, but if you actually look at some of these... aw, isn't that lovely? Right. But if you have a look here, you can see this is all rights reserved, so you wouldn't be able to pull that down and use it. But the great thing about Flickr is that they've actually got a more advanced search; if we go back to the search results page, I should be able to run an advanced search. And this is actually, I will say, really, really handy if you're giving a presentation and you just need a sort of generic photograph to use for it, and you think, I can't get one anywhere that isn't copyrighted, that I don't have permission to use. If you actually say, well, just look for photos only, and down, there you go, just photos, right at the bottom is a little Creative Commons bit, and we'll ask it to just search within that. And if we imagine it was some sort of document we actually wanted to sell in some form, we'll just run a search. And what it will actually do then is only return the photographs that you are allowed to use in a certain way. So now if we look... now, is that cute enough, that one? The second one? It looks angry. That one looks like it wants to kill someone. Okay, let's have a look at the angry kitten now. Okay? So. It's not an angry kitten. There you go. It needs a nap. But if we actually look now, it will say here, instead of that all-rights-reserved sort of copyright symbol, you'll actually see 'some rights reserved'. If we click on that, it actually takes us to the Creative Commons licence they've selected. So, just a question to you: looking at this licence, what would you be required to do if you wanted to use that photograph? So you'd have to attribute it. So even if you wanted to use it commercially, as long as you actually state the name of the person you got it from, then you're actually fine. So, yeah, hopefully just seeing that stuff on Flickr means you suddenly realise what is happening on other sites. There's a lot of content out there that you can use, as long as you can recognise the licence and know how you're actually meant to use it. Okay? Are there any other questions? I realise it's run a little bit longer than I was intending, but you will need to know all of this. So, any other questions? Okay. I was going to say a little bit about the report, but I think what I will do instead is let you go for lunch. I will email all of this out this afternoon.
I will just put in a couple of reminders about how you structure your report, based on various questions I've had from people, and then I will also add in a couple of points about the GitHub stuff that might be problematic.
The following video is an original recording from the OSTI pilot initiative. Entitled "Data, Code & Content Licensing", the seminar introduces the theme of licensing, outlines the advantages of this approach, and takes students through the main steps of implementation.
10.5446/14928 (DOI)
[Transcription largely unintelligible.] ...another route is the Open Knowledge Foundation, to work with other open academics... the OKFN site. I won't linger on the other part because we've already seen those; there are resources and information about the principles on the website. The biggest thing that happened last year, and is becoming an annual thing, is the Open Knowledge Festival; OKFN has a big hand in organising it and it is a fantastic combination. [Remainder of this passage unintelligible.]
[Transcription unintelligible] ...if you wanted to. So it's a community data hub, powered by CKAN. You can upload pretty much any data sets; again, they're constantly developing the capabilities of this. If I just go to... so, for instance, we uploaded some BioMed Central CSV files, just some supplementary data from a paper. So you can upload this, you add a bit of metadata, and then you can use quite a lot of functionality for visualising the data direct from the CSV files. So you can create graphs of the data on the fly; you can select what sort of graph you want, and which of the columns in the CSV you want to actually compare. But this one means nothing; there shouldn't really be lines on there, but you can see the idea. Essentially, there is software that's been developed to allow you to visualise data dynamically from just CSV files uploaded here, which offers quite a lot more functionality than you'll find on most publisher websites, where they just have supplementary data in CSVs that you would then have to download, open in Excel, and try and work around. So things like this can be really helpful.
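As a rough point of comparison with that download-it-and-open-it-in-Excel workflow, here is a minimal sketch of the do-it-yourself equivalent in MATLAB, which is what the project groups are already writing: pull two columns out of a downloaded CSV and compare them. The file name, the single header row and the column positions are all assumptions made purely for illustration; a real supplementary-data file would need its own column choices.

```matlab
% Minimal sketch: plot one column of a downloaded CSV against another,
% roughly what the Data Hub's in-browser graphing does for you.
% 'supplementary_data.csv' and the column numbers are hypothetical.

data = csvread('supplementary_data.csv', 1, 0);  % skip an assumed single header row

x = data(:, 1);   % assumed: first column holds the independent variable
y = data(:, 2);   % assumed: second column holds the values to compare against it

figure;
plot(x, y, 'o-');                      % simple line-and-marker comparison of the two columns
xlabel('Column 1 (assumed label)');
ylabel('Column 2 (assumed label)');
title('Quick look at downloaded supplementary data');
```

The point is not that you should work this way instead; it is that the in-browser graphing on the Data Hub saves exactly this kind of manual step.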
And the visualisation module behind it is actually available separately as a JavaScript library, Recline. I forget, Jenny, in terms of the data uploads to the Data Hub, is there any... so with Figshare, obviously, they place requirements on how you have to license your data, because it's all CC0 on Figshare. Are there any... I forget, are there any...? So on the Data Hub, you select your licence. It's encouraged that the data on there is openly licensed, but you can choose: there's a drop-down menu, and you can select which one you want. So there's no requirement that it's definitely CC0. And in fact, when you go onto the Data Hub, if you look at the list of data sets, if you've licensed one under a non-open licence, it'll just have a little padlock next to it and say 'not open' or 'closed data', and if it's open data, it'll have a little open data symbol. So it's quite easy to tell which data sets are which. But obviously there are many, many data repositories out there; this is just another example of one which is cross-disciplinary and also really easy to modify for your own needs as well. So this is just a visualisation of some US national foreclosure statistics; essentially, again, you can just take a file and make whatever graph you like. Also pretty cool is on-the-fly mapping: if you have some geo coordinates in your data set, you can just automatically put it on as balloons on a map, just online in the browser. And as I said, you can access the data and you can build applications based on that data as well. And you can build extensions to CKAN or whatever you like; it's all entirely open source. And all of the visualisation is done using Recline.js, which again has been built by the Open Knowledge Foundation to enable dynamic graphing, something that you can plug into data repositories and data libraries. So, we mentioned that the Open Knowledge Foundation is fairly involved in open government data, so we just thought we'd illustrate a couple of their projects to you. One of these is OpenSpending, which looks at budgetary data across the world, although as you can see there's a little bit of a focus at the moment on Europe and South America; these are the countries that have got on board with OpenSpending so far. But essentially, they collect financial information from all of these places, and it's primarily a resource, really, for journalists. So the Open Knowledge Foundation has several projects based around data-driven journalism. We run training courses and workshops for journalists to enable them to use data in their stories and also do some investigative journalism around financial records, and also just learn how to work with the data. We have a data journalism handbook, which is online and also available in a print copy from O'Reilly. And that's all about getting people skilled in using this sort of information to actually find out what their government is spending on. For those who don't want to pull down the information and look at it themselves, there are various apps like Where Does My Money Go, which is updated from UK government data to show exactly what your taxes are spent on. So it's just trying to communicate to people where government money goes, which a lot of people probably wouldn't guess; they might completely misestimate the percentage of their tax that goes on certain areas. But you can see here, you can just click a region and then you can click through.
So if you clicked on 'helping others', you could see much more of a breakdown as to where that tax money is going. So you can do things like putting in how much you're earning, and it will calculate exactly how many pence you're spending every day on education, for instance. It's just a way of breaking it down, putting it into figures that people can actually comprehend, as opposed to 'the government spends so many billion on this particular area'. So that's open government. Around the open science area: I'm not sure if you've done much on bibliographic data or bibliographic metadata, but for people working in academia, this kind of sounds boring, yet it's actually very important in terms of citing work and getting a good overview of the field of academia that you're interested in. So all of the bibliographic data attached to papers, like DOIs, the author, the institution, the title of the paper, all of this kind of boring stuff is actually essential for things like text mining and data mining and finding stuff in the literature, but there are very few ways to share that openly at the moment. So, for instance, in a paper, while the actual metadata for the paper itself is often available openly if you look for it, things like reference collections within papers are often copyrighted to the publisher, which you maybe wouldn't think would be the case. But essentially, you have to pay, or the university has to pay, for a service to access references out of other papers, and if you think you're trying to create a map of where research is heading, who's doing what research, all of this kind of meta-science research is really hard to do without good open bibliographic metadata. So the Open Knowledge Foundation was funded by JISC, the Joint Information Systems Committee, to create tools to help that happen. And so BibServer just hosts open bibliographic metadata from people's collections, from larger institutions, from libraries that have uploaded all of their bibliographic metadata for the articles and books that they hold, and you can do some calls against that. So I'll let Sophie take over now for a bit about citizen science. Right, so you may remember that way back in my first lecture last week, I discussed the idea of crowdsourcing. I think there was one of you that stuck their hand up when I mentioned Galaxy Zoo? Has anyone here heard about Galaxy Zoo? Yeah, so some of you have, certainly. So the idea of crowdsourcing is that there are certain problems we might work on in academia, and in particular in science, where just having one or two people working on something isn't enough. What we really want to do, to optimise how efficient we are with delivering some results on that problem, is to be able to tender out certain little fragments of that research process to many, many, many people. And often you will find, as in the case of Galaxy Zoo, the problem is that they have thousands of different photographs of all sorts of galaxies, and of course galaxies take on different shapes: you can have spiral galaxies in a particular form and all sorts of other galaxy shapes. It's very difficult to automate a program so that it can read in a photograph and tell you quickly what shape that galaxy is. You can certainly have image analysis tools, but it's actually much, much quicker and more efficient if you just get a human to look at it and say, right, what shape is that galaxy?
And so the idea of Galaxy Zoo was to actually crowdsource this project out to all sorts of people just in the general population who had a lot of interest in wanting to participate in a scientific project. And they could actually then, for example, sit at their computer terminal and they could be given a section of those images and actually just ask to classify them. And because it's not necessarily what you would call a really, really high level scientific task, it's something that actually lends itself well to being broken down into smaller components and given to people that don't necessarily have highly advanced scientific knowledge, but it actually will end up contributing quite a lot to the volume of knowledge that we have on that subject. And so Galaxy Zoo was one such application. The idea of Pybossa is to actually facilitate you developing a crowdsourcing tool. So Pybossa in itself is not a crowdsourcing application. What it is, it's a platform, effectively a framework that would enable you, if you wanted to, to build a crowdsourcing application in whatever you'd like to do. So I'll just, there should be a tab in my browser that's actually, if you clicked on the sort of Pybossa site, it would actually take you to this crowdcrafting website, which is associated with the Pybossa project. The idea is that you would actually create your crowdsourcing app, usually in a sort of like JavaScript, HTML, and then you interface with that with all of the files that are provided. I think they're actually up on GitHub, which you should be familiar with by now, but that would enable you to build a particular tool that you could then, if it were appropriate and you were allowed to, actually integrate with your research. So let me see, I think if I follow applications of having told some of you this morning not to do live demos as part of your talk, I'm just going to have to hope now that whatever it is I'm about to do is going to work and we'll try this one. So you can see this, there's this project here that's actually already called Feynman's Flowers. Let me just quickly do a little bit of a demo. So you can see, we'll actually just check the tutorial and say, oh, you can see this is your typical crowdsourcing problem, but we've got an absolute wealth of images and we want to actually analyse them for particular data and could outsource them to all sorts of people to be able to do this. So let's just see what it's doing. So the idea is these are molecular photographs. The people that have produced these images want to know something about particular angles within those molecules. This is a tool that's been developed, part of the Pybossa project, using all of that, to actually crowdsource this as an application. So you can see it's going to show us a photo from a molecule and what you're going to have to do is help find the coordinates of the centre of the molecule and the angle that the actual axes of the molecule have. So this is the sort of process where the application that's created will effectively train the users, will educate them in what it is that's expected of them in the tasks that they're going to do and then show them how to go about it. So you can see it will mark a target molecule and then you've got to use your mouse to mark what you believe to be the centre of the molecule. You click there and it will produce this little crosshair that tells you where you're going. Of course, there's going to be a variety of molecules so you've got to be prepared for that. 
And then once you've actually marked where the centre is, you will take those crosshairs and you're expected to rotate them to work out the actual orientation of the molecule. So let's give it a go. I'm not going to do loads of these examples, but I'll just do one, just to give you some idea of the kind of analysis that you can perform with crowdsourcing. Okay, so we've got this one. Now I'm going to get you guys to tell me when to stop. Where do you think I need to be? Down a bit, down a bit. Up. Would you say? Up. Do you reckon that's about the centre? You've got a bigger version to look at, whereas I've got this little screen; it's probably easier for you, though. Okay, so you click there, and then it's going to tell us to align the blue crosshair with the molecule axes. You can actually see, looking at the kind of image this is, you can imagine trying to program a computer to find this successfully. It's actually going to be much more subtle and much more difficult, and actually less successful in the long run, than just getting loads of people who are interested in helping out with science to do this in components. You can see roughly where we're sitting. So you can see then that you've got the option to save the coordinates. It'll actually tell you here what the molecular coordinates are, and then you can actually start aligning things. Of course, once you've done that, you've actually started contributing. And so what you could do then, once you've completed this tutorial and you've had your training in what you need to do with this app, is, if you just have five minutes, get through a few images, and you're contributing a little chunk to a scientific project. Okay, so that gives you some idea of what crowdsourcing can achieve, and that's what they were doing with Galaxy Zoo, that sort of thing. Now obviously it might not always be appropriate in the context of your research to crowdsource. If you wanted to crowdsource, for example, image analysis or another task, it would be up to you to check with your supervisors and anybody else involved with your research that it is appropriate for you to do so. But certainly, if you think there's a massive wealth of images you need to analyse and an approach like this could work, it's a very good way of actually achieving it. So that's PyBossa. I think at this point I'm going to hand back to Jenny to just take you through the last couple of... sorry, is there a question on that? I suppose the question is how you would sort of say it was produced as a result of the PyBossa project. So you will often find on some scientific papers, even if you're not necessarily citing somebody's work... I mean, usually, as you say, citation is the regular way to go, but often there will be a little subsection right at the end of the paper which will say, we would like to thank such-and-such people, organisation or project for the assistance provided in actually delivering this piece of research. Yeah, it would be an acknowledgement to the community rather than kind of... Well, it depends what you're doing. So Galaxy Zoo have, I think, cited individuals where that individual is... well, perhaps not Galaxy Zoo, but certainly things like Planet Hunters, where people were looking for signs of planets in outer space; I think people who found a planet did get on the paper, or at least were individually acknowledged. And there is of course a game called EteRNA.
I suppose 'EteRNA' is how you pronounce it, but essentially it's RNA folding, which again you can't really... there's no good algorithm to do this, or at least humans are much better. And in that case users are actually encouraged to play the game, work out patterns in RNA folding, and submit the hypotheses that they come up with, as to, you know, well, I think if these two bases are kind of this far apart you get a loop of this structure. And then the researchers who run the game go through these hypotheses every month and pick a few to test in the lab. If it turns out that's correct then they will invite the person to be on the paper with them as an author, which is a really nice way of kind of getting people properly involved in the whole publication process. It's a different one. That's protein folding. Yeah, protein folding. And my understanding of Foldit, is that a game or does it run on your computer? It's a game. It's a game. Okay so yeah, it's a similar one. Science games. Well, the Zooniverse, which is Galaxy Zoo plus Planet Hunters plus a bunch of other stuff, is probably the best known example, but Foldit is another. So these are cognitive tasks. There's also crowdsourcing where you crowdsource basically CPU time on other people's computers. I suppose you've probably heard of SETI@home and various other projects. Really you are getting a lot of help from citizens there, but just not in the kind of... So all of this stuff is run... A really good website to check out would be the Citizen Cyberscience Centre at CERN, which is really the CCCC. If you look that up, so they run PyBossa and Crowdcrafting and quite a few other projects, and they're really into kind of where you can get citizens involved in science. And actually I was speaking to Francois, who runs the CCCC, this week, and he was saying that the European Union are really keen to push citizen science as part of the new Horizon 2020 package of science funding, so it looks like there's going to be some kind of push towards doing this. PyBossa, so, Galaxy Zoo generally looks at massive data sets, like Hubble Space Telescope style data sets. PyBossa and Crowdcrafting are really aimed at the kind of long tail of science, where you're a group, you've got some images. I mean Feynman's Flowers, those people, that is a real group at UCL, I think UCL. They don't have, you know, terabytes and terabytes and terabytes of these images, but they have more than they can handle, so you know that's the kind of level of content that PyBossa's aimed at, whereas if you've got the LHC or a space telescope then the Zooniverse is probably more your kind of group to go to. What about Wikipedia? I don't know, because it seems like that's the kind of thing that they would go after, citizen science stuff. They're less of a service to science as such, although certainly they do play quite a, well, an increasingly active role I think in participating in a lot of open discussions, given the whole, obviously the whole actual approach of Wikipedia is something that sits very well with this idea of, like, networking stuff out in the digital age. So yeah, they're less involved. It's fairly prominent but not necessarily involved with specific crowdsourcing stuff. Not yet, but it could be.
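To make the PyBossa workflow described in the demo a bit more concrete: a project typically has an HTML/JavaScript task presenter on the front end plus a small script that loads tasks through the server's REST API. The snippet below is only an illustrative sketch of that loading step. The server URL, API key, project id, endpoint path and field names are assumptions that vary by PyBossa version and installation, so check the PyBossa documentation on GitHub before relying on them.

```python
# Illustrative sketch only: push image-marking tasks to a PyBossa server over its REST API.
# The endpoint path ("/api/task"), the field names ("project_id", "info") and the
# api_key query parameter are assumptions that depend on the PyBossa version you run.
import requests

PYBOSSA_URL = "http://crowdcrafting.org"    # assumed server; point this at your own instance
API_KEY = "your-api-key-here"               # from your PyBossa account profile
PROJECT_ID = 123                            # assumed id of a project you have already created

image_urls = [
    "http://example.org/molecule_001.png",  # placeholder images
    "http://example.org/molecule_002.png",
]

for url in image_urls:
    task = {
        "project_id": PROJECT_ID,
        # "info" is free-form JSON that the JavaScript task presenter reads to know
        # which image to show and what question to ask the volunteer.
        "info": {"image_url": url,
                 "question": "Mark the centre and the axis orientation of the molecule"},
    }
    response = requests.post(
        f"{PYBOSSA_URL}/api/task",
        params={"api_key": API_KEY},
        json=task,
        timeout=30,
    )
    response.raise_for_status()
    print("created task", response.json().get("id"))
```

On the front end, the task presenter then reads each task's info, draws the crosshair interaction, and posts the marked coordinates back as the answer, which is roughly what the Feynman's Flowers app shown above is doing.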
It's a really good way, as Jenny was saying, to drive community engagement, because often as scientists we tend to find that what we get criticised for is people often see us as kind of sitting off up in our ivory towers and not really engaging with the public, and actually there are certain tasks that the public can really get on board with and feel quite involved, and feel as though they've got a little bit of an insight into the scientific process and the kind of stuff that's going on. But there are obviously inherent problems in what exactly the task is that you're asking them to do and whether anyone will be bothered to do it for no payment. So some of them don't work so well. Having to find a planet is quite cool, because the images are very pretty and it's not hard, but there are certain ones... So there is actually a project on the Zooniverse called Old Weather where people are transcribing old ships' charts and weather notes, which you would think sounds like it takes a long time. It's not just clicking. We're talking typing and kind of actually transcribing handwriting, but people do it because they love maritime history. They've plugged into the maritime history community in the UK, and these people are so... I mean they've got thousands and thousands of pages transcribed. And so you kind of have to pick your community really well for some of these tasks, because trust me, I personally wouldn't sit down and transcribe really poor handwriting from the 1880s, but you know, these people really love it. Okay, so content and courses, well, you're kind of on one of those, so that's excellent. So Sophie, as I'm sure she's mentioned, is the Panton Fellow funded by OSF and OKFN. So yes, this is one of, you know, her work. This is hopefully going to, well, this is the pilot, so you guys are the guinea pigs, but we're hoping that will expand and really be able to teach more people about open science. In terms of content we do have, so, there are Spending Stories, based on the OpenSpending stuff I showed you earlier, which is one type of content website. The Public Domain Review is quite a cool site.
It's worth taking an hour to play around and have a look. So this is trying to kind of point out the interesting stuff that is in the public domain, which unfortunately in America has just been pushed back by at least 20 extra years and all this stuff, but there is loads of cool stuff, books, music, films, that is actually completely available for anyone to use, reuse, remix, because it's entirely in the public domain. And so the Public Domain Review does what it says really: it gets quite eminent academics and really interesting people to write articles about some of these public domain works to just raise awareness of the fact that they exist and they're yours and you can use them and they're part of our cultural commons and we should know about them. And there's some very odd articles. I know one of my favourites is about the animal trials, so where animals have been tried for various crimes. It's quite interesting, they've had pigs tried for murder and mice tried for stealing grain and all sorts of things. So a lot of it's historical and not to do with science, but some of them are really cool old illustrations of kind of anatomical drawings and all bits and pieces, and it's definitely worth a look. The School of Data is training people to work with data, so not so much from a science perspective, although some of the modules would probably be of interest to you as scientists, but more just anyone who has to work with data for whatever reason, so journalists would be a main focus. And this is joint with Peer 2 Peer University, who are an online university. They don't offer degree courses, but they offer lots of module courses that you can do with a group of peers. The idea is that you help each other through; there's less kind of formal tutoring, but certainly you join in a cohort and you kind of work together through the modules. And this is on the go at the moment, I think the alpha version, there's currently a testing cohort going through the courses, but keep an eye on the site if you're interested. It goes from everything from 'what is data' as a first module, for people who really are just at the beginning, through to, I think there's five levels of courses, and so there's some quite advanced ones in data visualisation and more into the coding side of things. But this is something that's spreading, so there's also going to be a School of Open, which, I'm not sure exactly what stage of development that's in, is essentially trying to teach people about all these concepts that you've been learning about the last few weeks, like licensing and what do we mean by open content and open knowledge. There's also talk of a School of Citizen Science, so teaching people how to set up citizen science projects, what works, what doesn't, that kind of stuff, via the CCCC which I mentioned before. So we do do a lot of education work, and we're always, you know, all of the projects run by the Open Knowledge Foundation are open to volunteer contributors, so if there's anything that piques your interest that you think you could contribute to, we have a bunch of mailing lists that you can find on the website, so do just have a look through and introduce yourself. We do a lot of drafting handbooks to help people kind of do stuff openly, so the Open Data Handbook was one of the initial ones, which is essentially going through the kind of legal, social and technical aspects of open data in a very general manner, so you've had
the same kinds of content delivered by Sophie, but probably with more of a science focus, over the last few weeks. We've actually just this week announced an event, I think this is the last slide so I might just go online quickly and find it, to draft an Open Research Data Handbook, which, I think, if we go to the blog here... Just while we're on the subject of the Open Data Handbook, I did very, very briefly mention that one in our licensing lecture last week, so if you go on to WebLearn, in the folder that's got all of the stuff from me in it there will be a copy of, I think, the most recent version from November of last year of the Open Data Handbook, so you might find that useful. I'll just bring up a few things as a kind of what you can do in the near future to get involved with stuff. So if any of you were interested in helping with this sprint, or just coming along out of interest, we're holding it at the Open Data Institute, which is the government's, well, it's kind of in the name really, the Open Data Institute: they look at all aspects of data and they have a focus on how open data can kind of boost the economy in the UK, as we are now sort of having more of a digital economy. But they have kindly hosted us for an Open Research Data Handbook sprint, and basically we want to make a practical guide, aimed mostly at researchers of kind of graduate level and postdocs, for how do they open up their data. We have got a focus on data for this, but as Sophie's been explaining there are different issues surrounding code and other content. So if anyone's interested you're very welcome to come along; you can sign up on Eventbrite. So, as I mentioned, Oxford Open Science. I'm aware we're slightly over time now, so I will quickly just show you the wiki. But yeah, so if anyone wants to talk more about this sort of stuff you're very welcome to come to the Oxford Open Science meetings. I started them last year. I haven't updated the wiki for this year's programme yet because I haven't quite finished inviting all the speakers, but for an example of the kind of things that we did last year, I don't know why that's not loading, but, so, our final meeting of the year in November was on sharing knowledge in medicine and open clinical trial data, which was really interesting. I don't know if anyone's read Ben Goldacre's latest book called Bad Pharma, but it was essentially about why clinical trial data really, really needs to be made available, because otherwise doctors and patients are kind of left in the dark about side effects of drugs and all sorts of other things that they really should know about. And so we had a range of things: publishing models of the future, open bibliographic data, and Sophie did a session on training a new generation of open scientists, which was exploring this course, citizen science, and the Bodleian came and presented what the university is doing around these areas. And there's more to come this year, so do keep an eye on the wiki. We're at Oxford Open Science on Twitter and there's also a Facebook group called Oxford Open Science which you can join, so do get involved with that if you'd like to. I'm just trying to think, is there anything else in terms of getting-involved stuff? I think we've probably covered most of it. I think that's most of it, yeah. Basically, one of the great things actually about your sort of level of research is getting involved in open stuff right now. Very typically over the years it's always been that science evolves from the
top down; often it's usually the profs and the sort of, you know, major researchers that have been in research for decades and decades that are always the ones calling the shots. What we're actually finding with the open community is there's people from all stages of the academic ladder getting involved, so to be honest this is one of the best times to be a young researcher, in the sense that you've got more power than ever before to actually influence how things are done. And one of the things that we kind of generally feel pretty strongly about: a lot of you will, you know, go off and join all sorts of different research groups, and some of those research groups will have a very fast-moving, up-to-the-minute ethos that is really embracing the way that science is going, whereas other groups might be very, very traditionalist and kind of locked in, which may cause problems further down the line given what the research councils are now mandating. So it's up to you to take this knowledge forward when you join your research group and, you know, maintain your awareness of that and keep up with what's happening, because there's going to be a bit of a variation in what you experience as you sort of move on, so just be aware of that. Okay, well, thank you again Jenny for coming in, it's been really, really... Yeah, it's been really good. So if you have any other... I mean, I realised most of it was questions during, so did anybody have anything you wanted to ask now, or have we exhausted all the questions? So, yeah, just feel free to disappear off now, and the three groups I didn't see this morning I will see this afternoon, so just be around for two o'clock and we'll chat then. Okay, thanks, thank you.
A friendly overview of the Open Knowledge Foundation, co-presented by Sophie Kay and Jenny Molloy.
10.5446/12996 (DOI)
James Bullock and we're very proud to have him. He's a man who is asking those big questions. Has the universe always existed? Did time have a beginning? What makes us think that the earth goes around the sun? What is a black hole? How fast is light? So we're about to hear from someone who asks the big, big questions. Would you please give a warm, inside-edge welcome to James Bullock. Thanks very much. It's a pleasure to be here. I thought, so what I'm going to do today is talk about some of the things we're trying to understand in the field of cosmology. I actually work in that building right over there. That's the physics and astronomy department. We have a center for cosmology there that was started about four years ago. And the center is all about trying to answer the kinds of big questions that we were just hearing about. And that's what we're doing. And I thought what I would do today is just sort of give you a little bit of an overview of what we're trying to do, some of the big questions that we have. And I'm going to do my best to leave some time at the end for questions. Normally when I give these kinds of talks, I like to keep it very free-form and open. So if anyone has any comments or questions at any time, I'm happy to address them. Now, I understand that since we're recording this, that might make it a little bit logistically difficult to have people running around with microphones. So what I'm going to do then is do my best to make sure I don't talk too much and leave some time at the end just for general questions. If you have an urgent question at any time and you just need to ask it, we can do that. So let's just keep it open there. So I just thought I would start with this. And one of the things I like to do when I talk about cosmology is to show this picture. This is a picture that was taken by a photographer named Art Roche. And I thought it was really beautiful. But one of the things that I find very attractive about it is he was telling me when he took this picture, he was in this canyon in the southwest. And he noticed that there were petroglyphs, sort of ancient petroglyphs left on the canyon walls. And this reminds me, I think, of really what we're trying to do in cosmology, which is it's a scientific exploration of the kinds of questions that you ask when you look up at the night sky. So you can imagine the people who made these petroglyphs, these ancient peoples, who were drawing into the canyon walls. And when they looked up at the night sky, they probably had similar questions to the kind of questions that we have when we look up at the night sky. So how old is the universe? Was it always there? What's the universe made of? How did structure like us come to be? So these are the kind of questions that a lot of us has asked ourselves. And in many ways, cosmology is the oldest science, because you could imagine people have always asked themselves these kinds of questions. What we're trying to do is to answer these questions in a scientific context. So we're trying to make testable predictions and see if they come true, and use that to build up a theory for how the universe began and emerged. And for the first time, trying to build one that's in a scientifically tested context. And that's what we're trying to do. So what I'd like to do is tell you a little bit about what we believe about the universe, so how we think the universe came to be and some basic facts. And then I'm going to go on and tell a story of how we began to shape these ideas. 
So this is kind of a cartoon picture of the overview of modern scientific cosmology, and our present understanding of what the universe is and how it began. So the first order picture here is that time and space, time and space itself, began in something that we call the Big Bang, which is just a word, about 14 billion years ago. The universe was very, very hot at early times. The universe has been expanding. So today the universe is quite big, and in early times, the universe was smaller and smaller. It's expanding. So if you go back in time like a movie, it's hot at very early times. And in fact, it was so hot at early times that, you know, as you crank up the temperature, things start to melt. And if you keep cranking up the temperature, even molecules can't hang together anymore. If you keep cranking up the temperature, even atoms, it becomes so hot that even atoms can't hang together anymore. Electrons will fly off of protons. In a very, very early universe, it was so hot that even elementary particles couldn't exist, and we were down to the most fundamental constituents of nature. So in that sense, it was a very smooth and, in some sense, simple beginning. And from this elegant beginning, this very, very simple beginning, emerged as the universe cools and cools over time, more complicated structures can begin to form. And then you can start making planets, et cetera. And we have a situation where we have really the real primordial soup, okay? The early universe's primordial soup. And then from this, structure grew. And eventually, we end up with galaxies like ours, the galaxy that we live in, the Milky Way. And in these galaxies, there are stars. In our galaxy, there are billions of stars. And around one of these stars orbits a planet that's very low mass, that's mostly rock, and that's the Earth. So what we'd like to understand is how this picture emerges. Another thing, oh, one thing I did want to mention is this, you know, numbers like 14 billion don't seem that big anymore. We're hearing about, you know, the national debt and the... But 14 billion is a big word, is still a big number. So 14 billion is a very long time. If you took the entire age of the universe and scrunched it down into one year, the scale of reference for that is that Shakespeare, okay, living in the 17th century, wrote his plays about a second ago. So you take the whole expanse of time and scrunch it into one year, Shakespeare was walking around about a second ago. So that's, these are the time frames we're talking about. So true cosmological time frames. Now, another thing we would like to understand is in addition to sort of the size of the universe, the age of the universe, we'd like to understand what the fundamental constituents of the universe are, okay? And this represents a pie chart of our current understanding of the composition of the cosmos. And I'll talk a little bit about these different pieces here, it's just words, but I thought I would just give you an overview now. The thing you notice here is this area right here, this little yellow sliver in this pie chart, is supposed to be representative of heavy elements. And that heavy elements just means any kind of atom that's heavier than hydrogen or helium. So basically us and the Earth. And in terms of a global sort of composition of the universe, that represents only about 0.03% of the composition. So a tiny, tiny sliver of what we believe is out there. 
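That "Shakespeare about a second ago" comparison is easy to check with a couple of lines of arithmetic. The sketch below uses the round numbers from the talk (roughly 14 billion years for the age of the universe and roughly 400 years back to Shakespeare); the exact inputs are assumptions, so treat the output as a sanity check rather than a precise figure.

```python
# Compress the ~14-billion-year age of the universe into one calendar year and see
# where Shakespeare, roughly 400 years ago, lands on that compressed timeline.
AGE_OF_UNIVERSE_YEARS = 14e9            # round figure used in the talk
SECONDS_PER_YEAR = 365.25 * 24 * 3600

scale = 1.0 / AGE_OF_UNIVERSE_YEARS     # one real year maps to this fraction of the model year
shakespeare_years_ago = 400             # rough figure for the early 1600s

compressed_seconds = shakespeare_years_ago * scale * SECONDS_PER_YEAR
print(f"Shakespeare falls about {compressed_seconds:.1f} seconds before the end of the year")
# prints roughly 0.9 seconds, i.e. "about a second ago"
```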
About, only about 5% of the universe is made out of things that we have a really good understanding of what they are. Okay, so the entire periodic table of elements, okay? Everything you learn in chemistry class. In fact, everything that they study in every department on this campus, except for ours, is in this piece of the pie right here, okay? I put neutrinos there purposefully, this is a kind of elementary particle that was discovered by Fred Reines, who's our Nobel Laureate over there. So I have to put that on the chart, even though it's 0.3%. But the rest of this chart is all stuff that we really don't understand. And I'll talk a bit about this. About 25% of the universe consists of something we call dark matter. Dark matter is some weird stuff that I'll talk about later, and there's some even weirder stuff called dark energy, and that makes up 70% of the universe. So this is really a statement of our ignorance, this pie chart. We have a pretty good idea of what we don't understand, but as we all know, that's how you start. You have to understand what you don't understand first. Okay, so this is our ignorance chart here. These are the big questions we're trying to answer. Now, sorry. So as I mentioned, cosmology is in many ways the oldest science. And as far as I know, every culture on Earth has had their own story of cosmology. So how does the Earth begin? How old is the Earth? How did we emerge? One of the oldest pictures, one of the oldest cosmological models was actually that of Aristotle, and then later on refined by Ptolemy. And in Aristotle's model, the Earth sits at the center of the universe. And all the planets and the sun orbit the Earth, and the stars extend out in a celestial sphere. And this was the picture they had. And, you know, Aristotle was not a dumb guy, right? When you look up out the sky, it looks like the sun is going around the Earth, okay? So, and in fact, this is the longest lasting scientific cosmological model in history. It lasted 1400 years. But eventually, this idea was broken by Copernicus and Galileo and Newton and people like that. What Copernicus said is he said, well, you know, if you actually put the sun at the center and let the planets go around the sun, I can also explain all the observations. And to me, that seems just prettier. It just seems a little bit more elegant. But that was kind of the end of it, okay? And there was sort of this, it wasn't clear whether that was really true or what. The thing that really changed the way people saw the universe was when Galileo turned a telescope to the heavens and he actually started testing some of these ideas. And he showed that there were moons going around Jupiter and that the moon itself was corrupt. That is, it had mountains and things and it wasn't this perfect celestial sphere kind of thing that Aristotle thought was going on. And from this piece of technology and the application of this technology and direct observation, it sort of shattered this idea that people had for a very long time. A lot of very smart people had for a very long time. So it was really the tools that Galileo had that allowed us to sort of extend this idea. So this began, this began sort of the sort of modern theory of cosmology where, you know, at that time cosmology was the solar system. This is everything. 
Now, one of the things that's remarkable about this is not only does this move the position of the Earth relative to the sun, but it transforms sort of how it sort of, it's representative of how bizarre the universe had to be if this was true. It wasn't that no one had ever conceived of the idea that the Earth might be going around the sun before. There were ancient Greeks who proposed that idea. But people decided that was crazy and the reason why they decided that was crazy is if you have something that's going around the sun, let's say us, we're going around the sun, okay? We're moving a big distance from one time of the year to the next. So if that's true and you're moving a lot and you look at something and you're moving like this, the position of that thing on the horizon will shift a bit, right? When you're in a train, you see stuff go by and you have to turn your head. The only way you don't have to turn your head on a train is when you're looking at, say, a mountain peak that's really, really far away. So what this meant is if this is true, this meant the universe, the stars that were really far away, which we never see shift from one time of the year to the next, must be really, really far away. And so by putting the sun at the center and having the Earth go around the sun, it was actually bizarre in many ways because it meant the universe was much, much bigger than anyone had ever imagined before. So one of the things that's kind of interesting about scientific cosmology is I think almost every step of the way the data tell us something that's much crazier than anyone had ever thought of before. And I think it's true every step of the way. If you look how cosmology proceeds, it's a crazier universe than anyone had ever imagined. Even very creative people. So today, I mean, cosmology at the end of the 20th century, it progressed significantly. And people had finally figured out how to measure the tiny wiggles and distances to stars on the horizon, and that allowed them to figure out how far away the stars were. And it was realized that the sun, our sun, was just one among billions of stars in this galaxy. And the galaxy is huge. Okay, the galaxy, it takes light 100,000 years to cross our galaxy. So just to explain what I'm trying to say here is that if I have a flashlight and I turn the flashlight on, light travels from my flashlight at 186,000 miles a second. So if I turn a flashlight on, a light beam could go around the Earth 10 times in one second. It moves really fast, but it's moving at finite speed. Light that leaves the sun takes eight minutes to get here. So, which isn't that long a time, but it's still, you know, it's a delay. Light takes 100,000 years to cross the galaxy to give you an idea of size. Okay, it's big, very big. And in fact, if you take the sun, okay, the sun is big, we really don't have a concept of how big it is, but the Earth, we have some rough idea of how big it is, even though it's hard to conceptualize. You can place 100 Earths across the face of the sun. The sun is big. If you took the sun and shrank it down to the size of a grain of sand, okay, our galaxy would be the size of the Earth. So the size of our sun compared to the size of the galaxy is like the size of a grain of sand to the Earth. So these are the distances that we're trying to deal with, okay. 
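The light-travel figures in that passage can be checked just as quickly. The constants below (speed of light, Earth's circumference, Earth-Sun distance) are standard round values, and the results land in the same ballpark as the round numbers used in the talk.

```python
# Quick checks of the light-travel-time claims: laps around the Earth per second,
# and minutes for sunlight to reach the Earth.
SPEED_OF_LIGHT_KM_S = 299_792          # kilometres per second
EARTH_CIRCUMFERENCE_KM = 40_075        # around the equator
EARTH_SUN_DISTANCE_KM = 149_600_000    # one astronomical unit

laps_per_second = SPEED_OF_LIGHT_KM_S / EARTH_CIRCUMFERENCE_KM
sun_to_earth_minutes = EARTH_SUN_DISTANCE_KM / SPEED_OF_LIGHT_KM_S / 60

print(f"Light circles the Earth about {laps_per_second:.1f} times per second")    # ~7.5
print(f"Sunlight takes about {sun_to_earth_minutes:.1f} minutes to reach Earth")  # ~8.3
```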
Now the thing is, at the beginning of the 20th century, pretty soon, Edwin Hubble would realize... Edwin Hubble, looking through his telescope, would soon prove that actually there are blobs of stuff in the sky that we call nebulae, that at the time people didn't know what they were, and he would eventually figure out that these were actually distant galaxies of their own. So the universe is actually filled with billions of galaxies like ours. So yes, billions and billions, just like Carl Sagan, he was right. Okay. So this is our current picture of the Milky Way. The Milky Way, our galaxy, is a disk of stars. It's got a bright thing in the middle we call the bulge. It's about 100,000 light-years across, and we live kind of at the edge here, sort of in the middle, kind of out, sort of the outer outskirts. Here's the sort of blow-up of the sun and its circling planets going around it. And like I said, it takes eight minutes for light to get from the sun to the earth, but it takes 100,000 years for light to go across the galaxy. So it's a pretty big place. But this is nothing. 100,000 years of travel time for light is small potatoes compared to the size of the universe. This is the nearest big galaxy to us, called the Andromeda galaxy. You can see this with your eye sometimes when it's dark. It takes light two and a half million years to reach us from this galaxy. And so just think about this. This picture is what Andromeda looked like two and a half million years ago. So before there were Homo sapiens to look up at the sky, that's when this light left. Okay, so we are looking back in time. And the farther things are away, the further back in time you look. And that's one of the techniques that we try to use when we try to understand how the universe is assembled over time. Because as we look at stuff that's further away, we're looking at what the universe was like long ago. And from this, you can imagine trying to build a movie, sort of how stuff is assembled over time. So you use this fact that light can't travel infinitely fast to your advantage. And you just have to actually look at what the universe looked like in the past. So we can make maps of what our local environment looks like. And a lot of astronomers like to do this. Here is our galaxy, the Milky Way. And here is Andromeda, the one I was just showing you, two and a half million light years away. And there's lots of other little galaxies all around. We are the two big boys on the block, but there are these little things that float around our own galaxy and orbit us. They're satellites. They're galaxies that are satellites of our galaxy, just like there are planets that are satellites of the sun orbiting around us. And gravity is doing all this attraction. And you can zoom out again. So this is three million light years, this arrow. And if we zoom out again, this is 30 million light years. And we can make maps there, too. And they're still naming things, but eventually you stop naming things. There's so many things. But all these guys have names, you know. They're named after the constellations. A lot of them are named after the constellations. You have to look through to see them, right? You have to look through the Fornax constellation to see the Fornax galaxy or whatever. And that's where the naming convention comes from. So this is 30 million light years. And we can stop at this point. Every dot on this picture is a galaxy. And we can keep zooming out. So this is 300 million light years. This is where we kind of stop naming stuff.
It gets kind of ridiculous. It keeps going, okay? It's sort of arbitrarily cut off in a circle. But it keeps going. So galaxies like to live around other galaxies. They cluster together in clusters of galaxies and what we call superclusters. And there's giant structures in the sky. And one of the things we like to understand is why, how many, these kinds of things. And in fact, so all of these things I had just shown you were cartoon pictures. But this is actually a picture of real data. This is the largest continuous map of the universe that's ever been made by a survey called the Sloan Digital Sky Survey. You've heard of the Alfred P. Sloan Foundation. They funded this sky survey to take a huge section of the sky and just look very deeply and try to map continuously all the galaxies. This is a beautiful survey. The total amount of data is 15 terabytes, which is equivalent to about the Library of Congress. So that's the kind of data that they've taken for this one, this one picture. So these are the kind of maps that we're trying to make. We want to know what the universe looks like. Now there's another technique. Rather than looking at sort of broadly at everything, we could point at one point on the sky and go very, very deep. And the way you do that is the same way you'd open up a shutter on a camera and you could take a really faint image. You do that with a telescope. Let me back up. That's what I'm going to show you. I just want to explain it first. So you know when you overexpose your camera, it's not good, but you can take pictures of things that are very, very faint by keeping your shutter open for a long time. That's what we can do with the telescope. So with the telescope, the lens is sort of eight meters across. These are the biggest telescopes in the world, and you can point them at one point on the sky and leave the shutter open. So you can see stuff that's really far away. The Hubble Space Telescope is a big telescope like this. It's not quite that big, but you've heard of the Hubble Space Telescope. So one thing that the Hubble folks did is they picked a place on the sky that was completely dark, nothing there. And they pointed the Hubble Space Telescope there for 12 days straight. And so the point they pointed is somewhere in here. Here's a constellation you might be familiar with. And what I'm going to show you now is a movie that zooms in to this point. Notice that it's complete blackness that they're going to zoom into. And what they're doing is they're showing you progressively deeper and deeper images of the same part of the sky. And what we're looking at now is we're seeing nothing but galaxies. We're way past the stars. They're going deeper and deeper into sort of a deep pencil beam into the universe. And this is it. This is called the Hubble Ultra Deep Field. It's a deep region of space that Hubble just sat on for 12 days. And the light from these galaxies left these galaxies about 12 billion years ago. So this is actually a movie, a snapshot of what the universe looked like, something like 12 billion years ago. And notice that the galaxies look kind of messed up. They look like they're not fully formed yet. So these are the kind of movies that you try to make and then interpret. We want to understand how from this kind of stuff, emerge the galaxies like we see. So we like to ask, where did all this stuff come from? How did we get here? So one of the things you do when you have a question, what things sometimes people do is you ask the smartest person you know. 
And so the smartest person I know is Albert Einstein. So how do we begin to sort of take measure of these kinds of observations? It turns out, especially in the context of a theory that seems to be... we seem to have a universe that's expanding, you want to understand the nature of space and time. It's sort of all grounded there. And really the beginning of modern cosmology was really with this guy here, when he had a short haircut. In 1905, he proposed this theory that's called the special theory of relativity. And as part of this theory, you know, he had this famous equation, E equals MC squared. But lying at the backbone of the derivation of this equation was that there was some kind of interrelationship between space and time. What Einstein realized is, the actual way in which time ticks along is linked in some way with space and how fast people move through it. This was an earth-shattering conclusion that he made. And what it meant was, our previous concepts of space and time were wrong. And that, in fact, the ideas that Newton had about how gravity operates on Earth, and why apples fall from trees, had to be revised. Now he realized this in 1905, and then for the next 10 years, he said, oh, I need to fix gravity. I need to figure out how gravity really works, because E equals MC squared doesn't work with Newton's gravity, to first order. So now we're talking about 10 years of Einstein time. Okay? I don't know how many years of my time that would be. So it's like 600 years of my time. Then in 1915, he comes upon what's called the general theory of relativity. And this theory, by the way, is largely regarded by most professional physicists as the single greatest achievement ever by anyone, period. I mean, this is a very impressive thing. What he realized is that you can understand gravity as actually a warping of space. So that if you took, I mean, a way of sort of characterizing it, is if you took a bowling ball and set it on a mattress, and there's like a dip in the mattress, you can imagine taking a marble and flicking it on the mattress, and the marble would roll and bend as it approached that dip in the mattress. It's in a similar way that Einstein understood how planets go around the sun: there's a dip in space time around the sun, and the planets go around on circles, because that seems to them to be a straight line in that curved space. He described all gravity that way. Now the sort of follow-on ramification of this is that space is not something that's fixed that you just travel through. It can warp and evolve and twist. So if you think about this a little bit, it's sort of, you're now kind of in this weird position where you can have space itself not being static, not always being the same. So soon after this paper came out, a guy named Friedmann read it, and he realized that, hey, you know, if I look at your equations, I realize that the universe should be expanding according to your equations of gravity. Now Einstein said, uh-oh, that sounds crazy. Even to me, that's crazy. Okay, even the great creative genius that Einstein was couldn't accept that, even though it was his own theory. And so he changed his equations. He changed his theory to account for it. And he added basically a fudge factor that's called, that we call, lambda, which is now called the cosmological constant. It's sort of like he put a wedge in the door and kept the door from closing. He just stuck it in. He could do it, it was his theory, he did it.
But of course, this is widely regarded now, and it's claimed that he said it was his greatest blunder. You know, he could have predicted that the universe is expanding, but he didn't. But then in 1929, the same guy, Edwin Hubble, using the 100-inch Hooker telescope, which is not too far from here, saw that as he looked at galaxies that were more distant from us, they were moving away from us faster. And this implies that the universe is expanding. So on some level, it fit in with the theory that Einstein had proposed, but he didn't quite have the guts to stick with the prediction. Now, what this gives rise to is the idea that we're living in an... And then a lot of data came after that. People studied this for a long time. And now we're up to this idea where, because the universe is expanding, if you just run the movie backwards, it was smaller in the past. And if you take something and you scrunch it down, smaller and smaller and smaller, the atoms in it or the gas in it will get hotter. You might notice this sometime if you're holding a bicycle pump and pumping it, it'll get hot. If you scrunch air together, it tends to get hotter as it's compressed. Same kind of thing with the universe. The universe is scrunched, it's smaller, it's hotter. Earlier than that, it was smaller and even hotter. So it's now about 14 billion years since the Big Bang, and now it's very cool. It's about three degrees above absolute zero in empty space. But the universe at early times was very, very hot. And it turns out it was so hot in the early universe, there's radiation that's left over from that. And it was realized that that should be observable. And in fact, in 1965, Penzias and Wilson discovered the glow from the Big Bang, which is called the cosmic microwave background radiation. And from that, they were given a Nobel Prize. And it sort of solidified this idea that in the past, the universe was hot. So what we have now is a picture of the universe where we not only have this map, but everything is moving away. So I thought I would mention just a couple more words, and then I'll stop and open it up for questions. But when you look at one of these galaxies in particular, it turns out now we don't have to just, we can't just, we don't just sort of take pictures of them and see where they are. We can actually study them in detail and understand things like, how fast are they spinning around on edge? Because it's a disk. It turns out that us, in our own galaxy, we're spinning around, or orbiting around the center, spinning like a disk. And in fact, if you look at the stars in the galaxies, you expect them to spin around, but as you go out to the edge of the galaxy, you expect it to go around more slowly because there's less gravity out there. And in fact, Pluto, if you want to call it a planet, is going around the sun more slowly than the earth. Because it's further away, there's less gravity from the sun. And this is expected, this is well understood physics. And so when people looked at these galaxies, what they expected to see is that they rotate really fast in the center, and then the speed rolls off quickly at large distances. In the 70s, there's a woman named Vera Rubin who went out to actually measure how fast they were spinning around. And what Vera discovered is, that's not what's happening. In fact, they're still going really fast way out here. And the way we eventually came to understand this is that there are big extended distributions of matter all around these galaxies.
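The step from "the rotation speed stays flat" to "there must be extra matter out there" can be made concrete with the simple circular-orbit estimate M(r) ≈ v²r/G: if v stays roughly constant while r grows, the mass enclosed inside r has to keep growing too, even where the starlight has run out. Here is a minimal sketch with illustrative numbers (a flat 220 km/s curve, roughly Milky Way-like; the radii are arbitrary choices, not data).

```python
# If the rotation speed v stays roughly constant with radius r, the mass enclosed
# inside r, estimated from the circular-orbit relation M(r) ~ v^2 r / G, keeps
# growing with r; that growth is the basic case for extended dark matter.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
KPC_IN_M = 3.086e19      # one kiloparsec in metres
SOLAR_MASS_KG = 1.989e30

v = 220e3                # m/s, a flat rotation speed roughly like the Milky Way's

for r_kpc in (5, 10, 20, 40):   # illustrative radii in kiloparsecs
    r = r_kpc * KPC_IN_M
    enclosed_mass = v ** 2 * r / G
    print(f"r = {r_kpc:2d} kpc -> enclosed mass ~ {enclosed_mass / SOLAR_MASS_KG:.1e} solar masses")
```

The point of the print-out is simply that doubling the radius doubles the implied enclosed mass, far more than the visible stars and gas can account for at large radii.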
And this matter is called dark matter, because we can't see it. That's why it's dark. Okay? Vera Rubin is credited with this observation, although there are a lot of other people who found evidence for it, even before Vera Rubin. Just an interesting tidbit. Her son is a professor of mathematics in the math department here. There's a lot of more evidence for dark matter by studying galaxy clusters. Clusters of these galaxies, we can measure how fast stuff's moving in those galaxy clusters. And there's a lot of evidence that it's there. It's not just this rotation group. There's tremendous amounts of evidence. Another question that people had, and they were specifically going after this issue in the 80s, was, is the universe ever going to stop expanding and re-collapse and have a big crunch? In order for us to know what's going to happen, it kind of goes something like this. Imagine that I took this pointer and I threw it up in the air. Eventually it's going to come back down. It's going to come back down because gravity's tugging on it. But if I lived on a planet that was really, really light, that didn't have much gravity, the force of gravity would be less, be like on the moon. And if I, you can imagine me throwing this up and this escaping the moon and never coming back down. So the question with the universe is, it's going up now, will it come back down? And the crucial question is, how much mass is there? If there's a lot of mass, it will slow down but never come back down. Sorry, if there's not much mass, it might slow down but not come back down. If there's a lot of mass, it will slow down and then re-collapse. So people in the 80s were looking to see if this was what's going to happen. So imagine this is distance and this is time and you throw a ball up, you might expect it to come back down, or it might keep going up, depending on how much mass the planet has or how much mass the universe has. So a very heavy universe, we expect this. In a very light universe, we just expect that ball to just keep going up. The universe is expanding. So a bunch of astronomers, one part of the team was led by a team at Berkeley, was looking at this in the late, actually turned into late 90s when they actually got the result. So this is a big crunch picture and this is a not big crunch picture. The universe keeps expanding and this is what they wanted to know. So they did the observation and this is what they found. So what they found is, you throw the ball up and rather than it coming back down or slowing down, it speeds up as it goes up. That's weird. It's that weird. It's exactly that weird. That's called dark energy. So there's some force in the universe that's making the universe expand at accelerated rate. We do not understand what this is. The thing that's kind of fun about it is, our ideas for this, well, let me tell you a little bit about our ideas. So this is the dark energy piece. This is the thing that makes the universe accelerate. This is the dark matter piece. This is the thing that makes galaxies spin around fast. And this is the piece that we understand. Chemistry, that's easy, right? No. It's actually much harder than this. Our questions are so simple that it's actually much easier than chemistry. So here's the big questions. What's the dark matter? What's the dark energy? How did galaxies like ours emerge from this early universe? What was going on in the early universe anyway? So these are the kind of questions we're trying to answer. What is the dark matter? 
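The "how much mass is there?" question in that ball-throwing analogy has a standard quantitative form: compare the average density of the universe to the critical density, rho_c = 3H²/(8πG), where H is the expansion rate. Below is a minimal sketch using a round, assumed value of the Hubble constant; and, as the supernova result above shows, dark energy means the fate of the expansion is not decided by this density bookkeeping alone.

```python
# Critical density rho_c = 3 H^2 / (8 pi G): for ordinary matter alone, the dividing
# line between a universe that eventually recollapses and one that expands forever.
import math

G = 6.674e-11                 # m^3 kg^-1 s^-2
MPC_IN_M = 3.086e22           # one megaparsec in metres
PROTON_MASS_KG = 1.673e-27

H0 = 70e3 / MPC_IN_M          # an assumed Hubble constant of ~70 km/s per Mpc, in 1/s

rho_crit = 3 * H0 ** 2 / (8 * math.pi * G)
print(f"critical density ~ {rho_crit:.1e} kg per cubic metre")                           # ~9e-27
print(f"equivalent to ~ {rho_crit / PROTON_MASS_KG:.1f} proton masses per cubic metre")  # ~5.5
```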
I'll tell you what we know. It's dark. It doesn't shine. What that really means is, it doesn't interact with light. It doesn't interact with things electromagnetically. So if I had a ball of dark matter, and it was traveling through the wall, it would go right through the wall. Because the only reason balls bounce off walls is because the electrons hit each other and make it bounce. It's electrical repulsion. There is no electrical repulsion with dark matter. It doesn't interact with electricity. It doesn't interact with light. But it does attract to other matter via gravity. And most cosmologists believe that the dark matter is a fundamental particle of nature that we just haven't discovered yet. And there are big programs to look for it. There's a big particle accelerator in Switzerland called the Large Hadron Collider. Where they're taking particles and zooming them around a 17-mile loop. They're going to slam them together, recreate the hot temperatures of the universe, and try to make dark matter. And try to observe it. That's going on now. There are also satellite projects. They're looking for glows of high-energy gamma rays, which might be evidence of this dark matter. And also, we're looking for it directly. We think there are dark matter particles streaming through the earth all the time. We don't feel them. But we're trying to build detectors that will feel them. And by detecting them, it would be really interesting. So this is the ways we're looking for it. The dark energy is much harder. It's dark. It doesn't shine. It's energy. It causes the universe to accelerate. But it does not attract stuff together like normal matter. In fact, it's a lot like the cosmological constant that Einstein proposed. In fact, it's possible that the dark energy is this cosmological constant, that Einstein thought was his greatest mistake. It might have been his greatest prediction. Guy's always right. Quintessence. It's another idea. So there are other ideas, and these are just words. Another possibility is that Einstein's theory of gravity is wrong. And our whole interpretation of why the universe is slowing down or speeding up is just wrong because it's based on Einstein gravity. So all the young physicists hope that this is true because they want to be the one to know what the right number, the right answer. Let me just flash on the future, and then I'll take a few questions. So how are we going to try to answer some of these questions? UC Irvine is involved in a survey called the Large Synoptic Survey Telescope. This is that little Sloan diagram, which is the largest continuous map of the universe that's ever been made, shown on the scale of what we want to do. So we want to make, we want to map about a quarter of the visible universe with this project. This is going to be a telescope in Chile. We're also involved in something called the 30-meter telescope. The 30-meter telescope will be the largest optical telescope that's ever been built. And there's something else that's coming online called the Hubble Space Telescope, sorry, called the James Webb Space Telescope, which is the successor of the Hubble Space Telescope. It's 10 times more powerful than Hubble. So these things are coming online. Let me show you, just to give you some perspective about what we're getting ready to do. The Hooker Telescope was 100 inches across, which is big. This is the telescope that Edwin Hubble used to discover that the universe was expanding. At the time, he didn't know he was going to discover that. 
And the Palomar 200-inch telescope, this was the biggest telescope in the world in 1949. It was this telescope... of course, when it was built, no one had any idea that eventually it would be used to measure galaxies spinning around and discover dark matter. It was the Keck Telescope, which was built in 1993 and which is currently the largest telescope in the world, that we used to discover that the universe is accelerating its expansion. But when this telescope was built, if you said that to somebody, they would have thought you were crazy. Okay? So now, this is the 30-meter telescope. So who knows what we're going to find? Okay. Now, I will close then with saying, this is the real future. Okay? We're over there across the way in that building over there, and the people who really do the work are the graduate students and all these young people. And so, I, you know, I was asked if I had anything to sell. I don't have anything to sell. But what I do kind of have to sell is this center and really these people. There are people over there who are doing really great work. And if you're interested in that work, you can contact me. If you're interested in supporting that work, you can contact me. Specifically, supporting maybe one of these people to go to one of these telescopes and do some observations. That would really help us. But anyway, so let's leave, let's leave with this picture. That's it. I can take a couple questions. I'm sorry I went kind of long. This hand was up first. Yeah. Thank you very much. Is this on? Yeah. Okay. If I can, if we can look into the far reaches in order to see back in time and see a very primitive part of the, of the universe. Yes. If I were able to stand in that primitive part at that far away time, Yes. Would I eventually see back to where we are now and did that exist? Okay. So if you were in that far away place of the universe and you looked towards us, Yeah. You would see us as we were 10 billion years ago. So there would be no sun. Was there a Milky Way? The Milky Way was, yes. So that's a good question. The Milky Way we think basically came into existence about 11 to 12 billion years ago. And the sun was born in a sort of second generation of stars about five billion years ago. Yes. Sure. As you know, the, in the subatomic world, gravity has no effect. Yes. And so that's why I actually respect Niels Bohr and those guys a lot more than Einstein. But that's me. Einstein contributed to quantum mechanics too. The point I'm making is have you, have you guys gleaned into the subatomic world to try to figure out what happened? Yes. That's a great question. So one thing I skipped over really fast is in fact that's a lot of what particle physics tries to do. So in the very early universe, it was very, very hot. So that there really was nothing but subatomic particles. There was nothing on the big scale. The universe was tiny and very, very hot at this time. And what particle accelerators do, what particle physicists do is they take particles and slam them together at very high energies to create very high temperatures and see what flies out to figure out what the subatomic world is made of. Okay, Richard Feynman said it's like taking a Swiss watch, throwing it against the wall and watching what flies out to figure out what it's made of. Okay. That's what, that's what's done. And that has deep ties to cosmology because cosmology unites these fields. Subatomic physics governs what's going on in the very early universe, which sets the stage for all that emerges.
And in fact, all the structure in the universe, we believe, originated in tiny fluctuations that were originally quantum mechanical processes that were developed some 14 billion years ago in a process known as cosmic inflation that eventually grew to be the galaxies we are today. So without quantum mechanics, there would be nothing, according to that. Earlier we were talking about Aspen and the wonderful research institute there. Last summer I heard a lecture again on the universe and they were talking about, what I want to say is energy polarization when you have a galaxy that is flat and there are these energy spears, if you will, one going up and one going down, if you will. Yes, yes. And how that creates the energy kind of like the ying and yang. And I was wondering if you could fold that into what you're talking about. A little bit. So the jets you're talking about in galaxies, a lot of galaxies are observed to have polar jets of material streaming out of them like this. What that actually is fueled by is a black hole at their center, a very massive black hole that creates a very strong gravitational field and that heats up matter, makes it very hot and at the same time it creates a magnetic field that twists and that's what drives those columns out. And so that's a very interesting subject and that's what people are using that to study black holes. And black holes are really fundamental objects because they sort of sit at the cusp, the threshold of us understanding gravity in the extreme. So another way we're trying to test our theories of gravity is by studying these black holes that create these kind of jets. I have a rather pedestrian question. I was noticing that when you're showing a galaxy that it all seems to be rotating on one plane rather than all these others. Now within that plane, does the planets around the sun rotate on that same plane? That's a great question. And how about the moon around the earth? Yes. So comment on that. What they do and why. Yes. So the planets do not orbit the sun in the same plane that the galaxy is going around. It's tilted by about 60 degrees and it turns out that the stars are so far away from each other that they don't really, that kind of effect is small. If our sun was the size of a basketball, the nearest star would be in Hawaii. So the size scales are very big. They don't really interact with each other very strongly. And so there's really not a strong correlation. It has to do with smaller scale processes, almost like weather we believe in the early galaxy. Told the planet what plane to align on basically. And the moon similarly. The moon is sort of orbiting at a slightly different plane as well. That's why we have phases of the moon because it's not exactly aligned up with the earth and the sun. One more. Thank you. Could you tell us about string theory and how does that relate to any of these, please? I know it's the last question you have. I have 30 seconds to explain to you string theory. Okay. So the basic idea of string theory is that there's this idea in particle physics that all the elementary things are single points. Okay. An electron is a single point and you model it that way. String theory says no, no, maybe they're strings. And it turns out if they're strings, a lot of ugliness goes away. There are a lot of divergences that are a lot of infinities that pop up everywhere. Remember when you're doing basic math and there are infinities everywhere and that's bad, does not exist. 
It turns out particle physicists get that all the time. But if you do string theory, a lot of that stuff goes away. String theory is an idea that the subatomic world is really based on these one-dimensional loops. And that in fact, on very, very small scales, the universe is multi-dimensional, has many more dimensions than just four. But the fundamental reason for that is if you don't do that, the theories break. Trying to unify gravity and quantum mechanics, this issue that was brought up here, cannot happen otherwise. So you can't understand gravity in a modern sense unless you do something like string theory. We really don't know where it's going. There are no testable predictions of string theory right now, but a lot of extremely smart people work on the problem. So, we'll see. Okay. Thanks.
A lecture delivered by UCI Professor James Bullock on February 11, 2009. James Bullock, Associate Professor of Physics and Astronomy at UC Irvine, is part of a team of scientists who believe they have discovered the minimum mass for galaxies in the universe -- 10 million times the mass of the sun. This mass could be the smallest known "building block" of the mysterious, invisible substance called dark matter. Stars that form within these building blocks clump together and turn into galaxies. Dark matter governs the growth of structure in the universe; without it, galaxies like our own Milky Way would not exist. Dark matter's gravity attracts normal matter and causes small galaxies to form and then merge into larger ones.
10.5446/361 (DOI)
Alright, then welcome to our last lecture about information retrieval and web search engines. Last week we talked about the PageRank algorithm, that is, how links can be used to rank pages on the web and to estimate their prestige, and we stopped right at the beginning of this detour, which shows the Google Toolbar. Does any one of you know the Google Toolbar? Yeah, I hate toolbars too, I also hate Google's toolbar, but you can use it to find out the PageRank of a given page. So I'm going to install it now; hopefully it won't transfer all my private data to Google. Basically the PageRank is hidden somewhere deep inside Google's ranking algorithm, but using the toolbar you can see it for every page you visit. Years ago this toolbar used to be really simple, just showing the PageRank of the page and a search form, which nowadays is simply integrated into modern browsers. So Google decided to include a lot of helpful enhanced features into its toolbar: automatic translation, all the things you never need. But if you are visiting a site, here you can see the PageRank of that site. I don't know whether you can see it, but I hope so: the PageRank of the IfIS institute page is 5 of 10. They use a rather rough 10- or 11-point scale, starting at zero and ending at 10, and IfIS is a site of medium prestige. But microsoft.com, I bet, would have a really high PageRank: a PageRank of eight. Yeah, that's not too impressive. So what about Adobe, for example, also very popular because everybody links to Adobe because of the Adobe Reader? They have a PageRank of nine. Then let's try probably the most linked page on the whole web. Here we are, PageRank 10 of 10, one of the most prestigious web pages on the whole web: the download page of Adobe's Reader. So it's just nice to see how Google rates these pages. If you are interested in how your own pages are perceived by the people out there, what the prestige of your pages is, just use the Google Toolbar and check it yourself. On the slide there's another link, so let's open it. Someone, I don't know who, took the effort to surf through a lot of pages, find the PageRank that Google shows in the toolbar for each, and then list all pages having a PageRank of 10. So this is a list of the web's most prestigious pages. I don't know how current it is, but various pages from adobe.com are listed; as I already showed, Adobe is very often linked. Apple, with very many in-links; Froogle, Google Groups, Google Catalog Search, all kinds of Google stuff is of course very prestigious, and I don't know whether Google tweaks it a bit, but indeed the job opportunities page at Google also has a very high PageRank. The NSF, a government agency, the National Science Foundation. And many more. I think he also did it for other PageRank levels; no, he didn't, never mind. So if you're interested in what is really important on the web, check it out yourself. All right, so this is how the PageRank is used on the web, but the PageRank can also be used for other purposes; one purpose would be crawling. Two weeks ago
we heard about focused crawling, where you crawl pages belonging to just a certain topic, but you can also use the PageRank for some kind of prestige-focused crawling, if you want to call it that. For example, you could decide to crawl deep into sites having a very large PageRank at their home page, or you could decide to update pages with a high PageRank very often, because they seem to be important to many people. So the PageRank can also be used to steer your crawling process and focus on those pages that are generally more important or more prestigious than all the other pages. Ideas similar to the PageRank can also be used in other fields of application. Last week we already saw some examples from social networks, how people are linked, and co-citation and all this stuff in scientific literature. There are many ways to exploit the idea of PageRank, and generally you can use the PageRank whenever you have some kind of directed graph and links mean some kind of recommendation, in the sense that some resource is perceived as good by other people, or some kind of flow or direction. Then you can use the PageRank to find out where all the recommendations in your network go. Of course you can also use the PageRank to estimate the so-called impact factor of scientific journals. Scientists usually publish their results at conferences or in scientific journals, and there are a lot of journals out there, so it is important to measure which journals usually publish articles of good quality and which do not, because there are thousands of journals in computer science alone. Finding out which are the most important ones matters to researchers, because you want to publish your good results in prestigious journals. Here are some sites, which I won't open right now, you can take a look yourself, that analyze the citations in these journals, find out which journals' articles are very often cited, and, using the PageRank algorithm, calculate some kind of prestige score for these journals. In the end you get a list of journals, and the ones on top are probably the most important in the field. But what's more interesting, in my opinion, is a ranking of doctoral programs. This is a research project conducted by some student at Harvard who wanted to find out which university you should go to if you want to do your doctoral degree. In the US it is very important at which university you get your degree, particularly your doctoral or PhD degree. In Germany that's usually not too important, because all universities are basically at the same level, or at least very similar, but in the United States there are rankings published by some agencies who try to find out which are the best universities out there. So take a look at this ranking: he used the PageRank to find out which are the best universities for PhD studies.
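Before looking at how he built that graph, here is a minimal sketch of the kind of computation such a ranking boils down to: a PageRank power iteration with a damping factor on a small directed graph, where an edge means "this node recommends that node". The node names, edges and parameters below are invented purely for illustration.

```python
# Minimal PageRank power iteration on a small, invented directed graph.
# An edge u -> v means "u recommends v" (here: a student moved from
# university u to university v for their PhD).
import numpy as np

edges = [
    ("Braunschweig", "Hannover"),
    ("Braunschweig", "Harvard"),
    ("Hannover", "Harvard"),
    ("Stanford", "Harvard"),
    ("Harvard", "Stanford"),
]

nodes = sorted({n for e in edges for n in e})
index = {n: i for i, n in enumerate(nodes)}
n = len(nodes)

# Column-stochastic transition matrix of the random surfer.
M = np.zeros((n, n))
for src, dst in edges:
    M[index[dst], index[src]] = 1.0
out_degree = M.sum(axis=0)
for j in range(n):
    if out_degree[j] > 0:
        M[:, j] /= out_degree[j]
    else:
        M[:, j] = 1.0 / n        # dangling node: jump anywhere

d = 0.85                          # damping factor: follow a link with prob. d
rank = np.full(n, 1.0 / n)
for _ in range(100):              # power iteration
    rank = d * M @ rank + (1.0 - d) / n

for name, score in sorted(zip(nodes, rank), key=lambda x: -x[1]):
    print(f"{name:12s} {score:.3f}")
```

The damping term is just the random-jump part of the random surfer model that came up in last week's lecture.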
How did he do it? He asked people currently participating in some PhD or doctoral program at some university which university they came from, that is, where they did their master's or bachelor's degree, and then drew a graph of people's movements. So if you studied in Braunschweig and go to Hannover to do your PhD, there would be one single link from Braunschweig to Hannover. Of course, universities having a good reputation for doctoral or PhD studies would receive many in-links from many other universities, and those with a bad reputation don't get many links. Then you can use the PageRank algorithm on this graph structure to find out, blah blah blah, some fancy math, we already did this last week, but here is the most important part, the ranking. As I said, he conducted a survey of people who received their PhDs in the 90s and between 2000 and 2004, created this graph, applied the PageRank algorithm, normalized it to get scores, and this is the ranking he got: Harvard is the best university to do your PhD at. I think this was limited to some discipline, I'm not sure which one, maybe political science or something like that, so it's not some kind of overall score, and you probably wouldn't want to go to Harvard for every PhD. But if you were a political scientist, Harvard tops the list, and then come Stanford, Michigan, Rochester, Chicago, the University of California at Berkeley, and so on, MIT as well. It fits intuition, because the famous universities are on top. And if we scroll down the list: the University of Florida, pretty warm and nice, but not too recommendable for doing your PhD studies, or the famous University of Kentucky, and some more, Texas Tech, Temple, the University of Nebraska. So at least in this discipline, not too good. So this is one way to use the PageRank, and as I said, as long as you have some kind of graph structure and your links mean something like a recommendation, you can always apply PageRank and find out many interesting things. All right, then let's go on with the HITS algorithm. This is the second big algorithm for analyzing link structure on the web. As Professor Balke is not here right now, but I hope he will be soon, I will just begin explaining the topic, and he will dive in when he arrives. Okay, the HITS algorithm. What does HITS mean? HITS is Hyperlink-Induced Topic Search, so obviously it has something to do with hyperlinks. It was invented by Jon Kleinberg.
We already discussed him briefly: originally a physicist, I believe, but currently in computer science, doing a lot of this network analysis. In the middle of the 90s, right in parallel to the work done by Brin and Page on PageRank, he also worked on this kind of network analysis, and his problem setting was a bit different. He didn't want to compute an overall prestige score for each page; he recognized that usually, when there is an information flow or a social network, you can distinguish two types of nodes in the network, or two types of people in a social network. One type are authorities: authorities are people or pages who know something about a given topic, who are the experts on the topic and can provide all the information you're looking for. On the other hand, there are hubs: hubs are people who are usually not experts in the field but know a lot of other people, so if you ask a hub, you can be quite sure that the hub will give you the right pointers to experts in the field. So there are experts who know everything, and hubs who know the experts, concentrate the knowledge of the experts, and can refer you further. The same is true for web pages: there are web pages providing important content, and there are web pages providing link collections about a certain topic. And of course every page can be, to a certain degree, both an authority and a hub, so usually you don't have a clear distinction between authority pages and hub pages; it can be mixed as well. So this was the basic problem setting he recognized, and his problem simply became: given a web query, and given the web crawl you have in your database and index, estimate the degree of authority and hubness of each web page in your index, that is, estimate a score that tries to determine to what degree a page is an authority or a hub. And of course this is again related to how links are distributed in the network. Here is an important difference to the PageRank algorithm: in PageRank we assumed that every page has a single global PageRank that does not depend on a given query. In HITS you have a certain web query, for example the topic "information retrieval", and the scores estimated by the HITS algorithm are only relative to the topic chosen. So if the query is "information retrieval", we are looking for the hubness and authority scores with respect to this specific topic, this specific query. A definite consequence is that you have to calculate all your scores at query time. With PageRank you can pre-compute it, once a month for example, and when a user asks a web query you can just use the PageRank in your ranking algorithm; with HITS you have to compute the scores for each query, or you have to pre-compute them for many, many different topics. So HITS is a bit more personalized, I would say. Okay, what is the idea of HITS? We are given some query, of course. Here is the user with some information need, who asks a query to the system, and in the first step we send the query to some standard IR system, one of those
we have already seen, one which does not use any kind of link analysis, for example a simple vector space model, just some IR algorithm. This IR system returns a set of pages that the system deems relevant, and from this set of pages, which is called the root set (so big R is the root set), we create a set of pages called the base set, which includes all pages contained in the set R and all pages that are connected to R in some way by a direct link. So the base set contains R, plus all pages that link to a page in R, plus all pages that are linked by some page in R. We are trying to boost our set R by including its neighborhood, because that is deemed to be the area connected to the topic we are looking for: the IR system returns the core of the topic we want to retrieve information about, and by extending this set to the base set we are also trying to include all the hub pages linking to these topic pages, and so on. So this is the basic idea. Okay, what can we do now with our base set? Of course we now want to compute the hub and authority scores on this base set, and then the problem is how to define the hubness and the authority of a page. Jon Kleinberg decided to do it in the following way, and it's very similar to prestige. Again, let A be our base set's adjacency matrix, which contains all the links; if page i links to page j, there is a one at position (i, j), as you might remember. We define two vectors: the hub scores of all pages form a vector h, and the authority scores form a vector a. The vector h has as many components as there are web pages, and so does the vector a, and the i-th entry of each vector is the hub score or authority score of the i-th page in your base set. Okay, now here is the recursive definition of hubness and authority, and the idea is similar to the PageRank idea: those pages get a high authority that are linked by many hub pages. If you remember, in PageRank we just had "prestige equals some constant times the adjacency matrix times the prestige vector"; now we split the PageRank up into hubness and authority and make two definitions that relate both scores to each other. Conversely, hubness is defined in terms of authority: those pages are deemed good hubs that link to many authorities, that is, to pages that have a high authority score. So now we have two equations and we want to solve them, we want to find the correct h and the correct a. As I said, the authority score of a page is proportional to the sum of the hub scores of the pages linking to it, a = alpha * A^T * h, and the hub score of a page is proportional to the sum of the authority scores of the pages to which it links, h = beta * A * a; alpha and beta are proportionality constants that are used very similarly to the constant in PageRank.
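As a concrete illustration (not part of the original lecture), here is a minimal sketch of that whole pipeline: a made-up root set is expanded to the base set, and then the two update rules are iterated, with a normalization after each step standing in for the constants alpha and beta. All page names and links are invented.

```python
# Sketch of HITS: expand a root set to the base set, then iterate
# hub/authority updates.  The link graph and the root set are made up.
import numpy as np

links = {                       # page -> pages it links to
    "portal":   ["school-a", "school-b", "school-c"],
    "blog":     ["school-a", "school-c"],
    "school-a": ["ministry"],
    "school-b": [],
    "school-c": ["school-a"],
    "ministry": [],
    "unrelated": ["blog"],
}
root = {"school-a", "school-b"}          # pages returned by the plain IR system

# Base set = root set + pages linked from it + pages linking to it.
base = set(root)
for p in root:
    base.update(links.get(p, []))
for p, outs in links.items():
    if any(q in root for q in outs):
        base.add(p)

pages = sorted(base)
idx = {p: i for i, p in enumerate(pages)}
A = np.zeros((len(pages), len(pages)))   # A[i, j] = 1 iff page i links to page j
for p in pages:
    for q in links.get(p, []):
        if q in base:
            A[idx[p], idx[q]] = 1.0

h = np.ones(len(pages))                  # hub scores
a = np.ones(len(pages))                  # authority scores
for _ in range(50):
    a = A.T @ h                          # authority: sum of hub scores of in-links
    h = A @ a                            # hub: sum of authority scores of out-links
    a /= np.linalg.norm(a)               # normalization plays the role of alpha/beta
    h /= np.linalg.norm(h)

for p in pages:
    print(f"{p:10s} hub={h[idx[p]]:.2f} auth={a[idx[p]]:.2f}")
```

Normalizing after every step is exactly what the power iteration mentioned in a moment does, so the actual values of alpha and beta never matter, only the direction of the two vectors.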
Now we can combine these two equations, simply putting one into the other, and we arrive at a recursive definition of a and of h on their own, and we can solve these equations by means of linear algebra. Again we have the form we know, this eigenvector thing: vector equals constant times matrix times vector. So the authority vector a is an eigenvector of the product A^T * A, and the other way around, the hub vector is an eigenvector of the matrix A * A^T. It's a bit more complicated than PageRank, because there we just had the matrix A and not these products, but again we can apply results from linear algebra to compute these vectors efficiently. Things are not quite as easy as with the PageRank algorithm, though. In PageRank we had the fact, due to the Perron-Frobenius theorem, that there always is an eigenvector for the eigenvalue one, plus some other nice properties; this is not so easy in HITS. So Kleinberg had to make some decisions about which eigenvectors to take, and he decided to simply take the principal eigenvectors in these equations. Principal eigenvector simply means the eigenvector corresponding to the eigenvalue with the highest absolute value. Again, these equations can have several solutions, the matrix can have several eigenvalues and corresponding eigenvectors, but Kleinberg decided to take only the eigenvector having the largest eigenvalue. It's just an arbitrary definition which seems to work in practice. The computation is again very similar to PageRank: we have the power iteration, we simply multiply the matrix with the vector over and over, and we finally arrive at the eigenvector. We won't do that in detail here; better to skip to an example. So here is an example of how the HITS algorithm works and what results it produces; this is one of Kleinberg's own examples, I think. The query is "Japan elementary schools", so we are looking for elementary schools in Japan. Now we see some pages on this side ordered by hub score, and on this side ordered by authority score. Good hubs are pages that link to many pages relevant to the topic. We can only guess what's in them, but here is a link collection of school home pages, this also seems to be a page linking to many schools, and "education" could also be some kind of link collection. So it looks quite good; it looks more like link collections than like real content, which is exactly what hub pages are intended to be. On the other hand, the authorities should be pages that are elementary schools in Japan or that carry very important information about elementary schools in Japan, and indeed here is the American School in Japan (I hope it's an elementary school). The link page doesn't seem too authoritative to me, so it's maybe more of a hub page, but these seem to be pages of different schools, a primary school, elementary schools. So basically you see that this division into hubs and authorities seems to work quite well.
It's not perfect, and I can't really judge how convincing this example is because I don't know Japan, but in general it seems to work. At least we can now distinguish between link pages, or link collections, relevant to a topic and pages that are themselves relevant to the topic. So this is basically the HITS algorithm. And, as it's done in the US, if you have invented something novel and important that could bring you a lot of money, you are going to patent it, and of course Jon Kleinberg did. It's patented under this number, and the title of the patent is "Method and system for identifying authoritative information resources in an environment with content-based links between information resources"; quite a title. The thing is basically that he had to patent the system components, because usually you cannot simply patent a single algorithm, you always have to patent a system, and this is what he did. So now IBM is the proud owner of this technology. I have no idea whether they use it in some way, but I strongly believe that many search engines use ideas very similar to HITS when computing relevance scores. Okay. There is also some connection to LSI and the singular value decomposition in the HITS algorithm. Remember, in the HITS algorithm we need to do an eigendecomposition of these two matrix products, and essentially that is the same as doing a singular value decomposition of the original matrix; just a thing that is quite nice to know if you are really working with these techniques and have to decide which algorithm to use. A short recap from the lecture on LSI, where we had the result that when we decompose our term-document matrix into the product U * S * V^T, for example using the singular value decomposition, then the columns of U are the eigenvectors of the matrix times its transpose, the matrix S squared contains the corresponding eigenvalues, and similarly the columns of V are the eigenvectors of the transpose times the matrix. Knowing this, you could simply compute all the hubness and authority scores that you need in HITS by running a singular value decomposition of the adjacency matrix of our base set; by looking at U and V we directly have the vectors we are looking for in the HITS algorithm. So there are some connections; everything is connected in information retrieval. Maybe nice to know if you need it sometime.
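A small sketch of that shortcut (the adjacency matrix here is made up): for a matrix A with A[i, j] = 1 when page i links to page j, the principal left singular vector plays the role of the hub vector and the principal right singular vector plays the role of the authority vector.

```python
# HITS via SVD: hubs come from the first left singular vector
# (eigenvector of A @ A.T), authorities from the first right singular
# vector (eigenvector of A.T @ A).  A is an invented example.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

U, s, Vt = np.linalg.svd(A)
hubs        = np.abs(U[:, 0])   # abs: the sign of singular vectors is arbitrary
authorities = np.abs(Vt[0, :])
print(hubs, authorities)
```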
There are also many extensions to the HITS algorithm. For the PageRank algorithm we had some extensions, for example the topic-sensitive PageRank, where we limit ourselves to given topics, and there are many versions of doing the PageRank; the same is of course true for the HITS algorithm. One idea here is to separate link communities. A query can be ambiguous: "Java" could refer to the programming language or to the island, "jaguar" could be a car or an animal. Queries can also be polarized in some way, for example "cold fusion" or "nuclear power". There is usually a large group of people who really like the topic, real enthusiasts about nuclear power, maybe some of the large companies, and they have their link network; and there will be some enemies of this technology, and they also have their link network. So usually it looks like this: many pages link to each other in some way, all about the topic of nuclear power, but the enemies won't link very often to the people who really like nuclear power, or the other way around, so these groups are really separated in the link structure. But you can use the HITS algorithm to find these different components. The idea, as you remember, was that HITS just takes the principal eigenvector of the link matrix, and this corresponds to the largest link community: if this is a link community having many pages and many links, with a related structure of authorities and hubs, then the HITS algorithm computes the hub and authority scores basically only for this network, and all the pages in the other group receive rather low authority and hub scores, because they are not connected very well to the large community. But if we take not only the principal eigenvector, the one with the largest eigenvalue, but maybe also the second or third largest, then we come up with hub and authority scores with respect to these other communities, and then we find the first community of people who like nuclear power, here the second community of people who hate nuclear power, and maybe some more communities. This way you can really analyze topics that are polarized in some way, where there are different opinions, where there are people who usually do not talk to each other, or at least do not link to each other. So, a nice application of the HITS algorithm. Finally, let's compare the HITS algorithm and the PageRank algorithm. As I already indicated, PageRank can be pre-computed, because PageRank does not depend on the query; there is usually only one PageRank per page, and it depends on nothing except the link structure as such. HITS, in contrast, has to be computed at query time: if a user asks a query, then you have to find the root set and the base set, do all this expensive analysis, find the scores you need, and only then can you use these scores for ranking your pages, which isn't very easy. So HITS is very expensive when done at query time. Of course you can also try to pre-compute HITS, in a way similar to the topic-sensitive PageRank, by pre-defining some large topics and computing the HITS scores for them; that could be an alternative, but HITS in its original form is meant to be done at query time. A second difference, of course, is the choice of what exactly to model. HITS models hubs and authorities separately; PageRank is about prestige in general, and prestige is usually a mixture of both. If a page has high prestige, it could be a hub, it could also be an authority; the PageRank score really tries to combine both ideas, and in HITS they are kept separate.
Another difference is that HITS only works on a subset of the web graph, the root set and then the base set, which is usually much, much smaller than the large web graph. So it is possible, at least in theory, to compute the HITS scores really at query time, because the amount of data to be analyzed is not as large as in the PageRank algorithm: in PageRank we have to do these calculations on the whole web graph, and in HITS it is usually much smaller. When trying to compare HITS and PageRank, one finding is that on web pages hubness and authority are usually correlated. There is, as I said, no clear separation into hub pages and authority pages; it's usually a mix of both. For example, the Wikipedia page for some topic is definitely an authority, but Wikipedia pages usually also provide a lot of links to relevant pages about the topic, they try to cite their sources, so the Wikipedia page is also a good hub. And this is true for most other web pages: there is no clear separation into link pages and content pages. So, at least on the web, PageRank and HITS behave very similarly, and depending on what you are trying to do or analyze, and what kind of network data you have, you can either use HITS or PageRank, or a version of either, or a mix of both; it depends on what you are going to do. At least you now know what you can do: what prestige is, what hubness is, what authority is, and then you can try to use it for good applications. So in the next lecture, which will be today, we talk about some miscellaneous things that remain to be done. We will talk about spam detection, how to do meta search, that is, how to combine the results of different search engines, and finally about some privacy issues related to web data. All right. The professor is right on time for the lecture, but first we are going to discuss the homework, very briefly. What is a social network? Okay, I will skip that, it's too easy, I think. Prestige in social networks was the basic idea; who can explain it in two sentences, what is the key idea of prestige? "I know many other people are not familiar with this, so I think they are more familiar with the..." Yes, I think you mean that links from pages with higher prestige are better for your own prestige than links from pages having low prestige. That's basically the idea, and then you can use this for ranking. Okay, what is the co-citation graph, and what is an Erdős number? Anyone? Yeah, co-citation usually means that two pages or two items are linked if they appear together in the same context. So if some paper has a list of references, then all these pairs of references are connected to each other, because they seem to be related, as they have been used in the same context. And this can also be done for authors: people who write a paper together can be linked. Which brings us to the Erdős numbers. So what is an Erdős number? Yeah, Erdős, a famous mathematician, worked with a lot of people, and for fun mathematicians compute how far away they are from the famous Erdős. So, next question: what is a Bacon number? Did any one of you take a look at it? Yeah, it's the distance to the famous Kevin Bacon. And what is Paul Erdős's Bacon number? No, that's the Erdős-Bacon number, that's the sum. But what is the Bacon number of Paul Erdős?
He has none? Actually, he does have one, because he played himself in a documentary about his life, and there are other people in that film, and those people have been connected to Kevin Bacon. I don't know what exactly his number is, but Paul Erdős has a Bacon number. Yeah, usually in the Oracle of Bacon the search is limited to non-documentaries, but if you include documentaries, then even the famous Paul Erdős has a connection to the famous Kevin Bacon. So all things are connected; six degrees of Kevin Bacon. Finally, the random surfer model: what is it, and is it a reasonable model of how people surf the web? The random surfer, what does he do? He follows links at random and from time to time jumps to a random page. Yeah, really nice model. So is it plausible? Do people surf the web like that? Then why do we use this stupid model for PageRank? Yeah, it's easy to model, and it works. I think the main idea is that it simply corresponds to the recursive definition of prestige, simply the idea that the prestige of a page is the sum of the prestige of the pages linking to it, and this directly corresponds to the random surfer model. So it isn't a very good way to model the actual behavior of people, but you don't want to model the behavior of people, you just want to model this prestige transfer in the network. Yeah, and it makes a good matrix, and you can apply the Perron-Frobenius theorem; an easy thing. Oh, another question: what do you need the topic-sensitive PageRank for? Yeah, but for such a query I would then use the HITS algorithm, wouldn't I? Ah, so pre-computation is the key here: just define some hot topics you want to cover, then you can pre-compute your PageRanks, and you can integrate this knowledge into your answer; that's always a good thing. Okay. Today we talk about many different things, and this will be done by Professor Balke, I guess. Yes, so hello also from my side. We have a bit of work today, so I will hop right into the middle. The interesting thing that everybody knows about the web is, (a), 90% of the web is pornography, and (b), 90% of what you get from the web is spam. So the question is how you deal with that, because it has become quite a commonplace problem. And of course, we have discussed search engines. Search engines are kind of a dangerous thing,
aren't they, because they somehow restrict your view to what they show you. I mean, in the early days of the web you had this network of web servers where you could navigate through the network and everything was okay. But now we have seen that, more often than not, people directly use Yahoo, Google, Ask, whatever, as an entry page, as a portal page to the web. So first they will only see what the search engine shows, and then, at least from that entry point, they can start navigating; but they will usually not start navigating directly unless they have bookmarks or something like that. And we have seen that even for well-known pages like Facebook, more often than not people type "facebook" into Google to get to the entry point, which saves them from remembering whether it is .org or .com. So in a way, search engines have restricted our view to what they show us, and of course that has been exploited: by people who either want to show off their results or present their pages in the best possible light, but of course also by malicious people who want to sell us Viagra, because they know the web is about porn, so you could do with a bit of Viagra. And that is often called spamdexing. So what you want to do is employ web search engines to promote your own site, and it usually has nothing to do with what you are actually querying or where you want to go; as a spamdexer, I think everybody should have Viagra, not just the people who type in "viagra". Okay, so the idea is: I modify the web to get high rankings for pages that don't deserve their rank, and very often this is also called, by the little bit more optimistic term, search engine optimization: you optimize your page to be indexed by the search engine correctly. We have seen some white-hat techniques already, with robots.txt and how you can help search engines. If you type the term "search engine optimization" into Google, you get a lot of information about how you do it and what you do, and you get a lot of people offering to do it for you, for consulting contracts or something like that. How do they do it? Well, nobody exactly knows how Google works or what the exact algorithm behind Yahoo is, but you can do a bit of reverse engineering. You know it has something to do with the PageRank, you know it has something to do with how often terms occur on the page, or what the name of the page is, or whether there are images captioned with some topic, and stuff like that. So we know a little bit about that, and it can be exploited by people using these loopholes. Of course, if the search engine creator or maintainer finds out that people have started using one, they will immediately cut it off and say: well, you may not do this.
This is a black-hat technique now. But then the spammer finds something new, so it's like a race between the search engine providers and the spammers, and one tries to outperform the other; it's kind of interesting. Basically, there are two classes. One is content spam, where you alter a page's contents such that it gets higher ranks in terms of IR techniques, and the other is link spam, where you alter the link structure between pages, or deliberately create a favorable link structure between pages, such that PageRank and HITS and similar algorithms are tricked. One of the very early examples of content spam: you just add some terms that are interesting for the query in white color on a white background. A very easy technique: the page looks like a Viagra advertisement but has some words in it that the users can't see and that the engine is going to index. Very obvious, and it was shut down at a very early stage. A typical example of link spam are so-called link farms, where you have a couple of sites that interlink each other and thus transfer prestige from one to the other, and having a large enough link farm you can do a lot of things. So for content spam you basically exploit the textual, information retrieval ideas, which means term frequency on one hand: you repeatedly place keywords that people search for into the text, into the head of the page, into the captions of images, into the link text, and also on other pages, in the anchor text of pages that link to your page. So if you have a link saying "oh, this is wonderful information about how to build a house", the meaning of that link text is transferred to the site it points to. If that is a Viagra site, tough luck, you still get it. What is also very often done is mixing content. There's nothing wrong with a little bit of advertising on a page; we know it from Google, we know it from sponsored pages, every time there's a small bar where an advert is placed. But what if I copy a correct, good page that people like to visit and put only my own advertisements, my Viagra postings, in the sidebar? Oh, nobody will notice. And this is also done. As a countermeasure, the search engine providers usually assume that at the stage where we look for duplicate content, where we check whether the structure of a page is correct, a classification can also be done. So as soon as the word "viagra" occurs, and it's not the Wikipedia article on Viagra or the manufacturer's page about Viagra, but rather something like "sell", "buy", whatever, then you can deduce that this will be spam. So very often you have Bayesian classifiers that somehow learn to find out what is spam and what is not. As far as the features that describe a page go, the ones that can be exploited here, that's a little bit tricky. As I said, the similarity to other pages that were already classified as spam can serve as a training set, and then you train a naive Bayes classifier, or whatever it is, a support vector machine, that tries to figure out whether a page is spam; but also the degree of term repetition is a useful signal.
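As a toy illustration (not from the lecture), here is a made-up sketch of one such signal: the share of a page taken up by its single most frequent term. The threshold is arbitrary; a real system would feed a signal like this, together with many other features, into a trained classifier rather than applying a hard cut-off.

```python
# Toy spam signal: the share of the page taken up by its most frequent term.
from collections import Counter

def max_term_share(text: str) -> float:
    terms = text.lower().split()
    if not terms:
        return 0.0
    return Counter(terms).most_common(1)[0][1] / len(terms)

normal = "tropical fish need clean warm water and a varied diet of flakes and live food"
stuffed = "viagra cheap viagra buy viagra online viagra viagra best viagra deals viagra"

for page in (normal, stuffed):
    share = max_term_share(page)
    # 0.3 is an arbitrary cut-off chosen only for this example
    print(f"{share:.2f}", "suspicious" if share > 0.3 else "looks ok")
```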
I mean, a high term frequency is normally a wonderful thing, but if a page has 90 percent of its terms being "viagra", it cannot be a sensible page; a term frequency that is too high is a sign of spam. You can train that as well, and work with natural language models, where you look at texts, how often words occur, and what the normal distribution in a text is. Staying with the example of Viagra: it may occur in medical texts, but for medical texts you usually know what the typical frequency of drug names or illnesses is, and then you can use that information to find out whether something is spam or not. One of the best examples of content spam is so-called Google bombing. Google bombing exploited one of the loopholes of Google that was actually very clever in the design: Google said, well, if somebody links to a page, they will usually describe the contents of the page together with the link. "More information about tropical fishes can be found here", or "there was this guy Carl Friedrich Gauss" and then the link; so from the anchor text you know what the page you link to is about, at least to some degree. That has been exploited by Google bombing, where people who were annoyed about American politics in the era of George W. Bush started to build pages where the link text "miserable failure" pointed directly to the White House page with George W. Bush's CV. At some point Google indexed the whole thing, and indeed, typing in the query "miserable failure" led, as the first result, to the White House pages with George W. Bush's biography. And everybody knew he was a miserable failure. Fun. Obviously we took this screenshot when this was already widely known, so the newspapers were reporting about the thing; but those were real pages talking about a miserable failure.
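Here is a toy sketch (an invented example, not the lecture's) of why that worked: in the index, the terms of a link's anchor text are credited to the target page, so enough independent links with the same anchor text make the target rank for that phrase.

```python
# Toy anchor-text index: anchor terms are credited to the link *target*.
# Pages and URL paths are invented for illustration.
from collections import defaultdict

links = [  # (source page, anchor text, target URL)
    ("blog-1", "miserable failure", "http://whitehouse.example/president/bio"),
    ("blog-2", "miserable failure", "http://whitehouse.example/president/bio"),
    ("forum",  "miserable failure", "http://whitehouse.example/president/bio"),
    ("news",   "tropical fish care", "http://fish.example.org/"),
]

anchor_index = defaultdict(lambda: defaultdict(int))  # term -> target -> count
for _, anchor, target in links:
    for term in anchor.lower().split():
        anchor_index[term][target] += 1

query = "miserable failure"
scores = defaultdict(int)
for term in query.split():
    for target, count in anchor_index[term].items():
        scores[target] += count

for target, score in sorted(scores.items(), key=lambda x: -x[1]):
    print(score, target)
```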
People described something as a miserable failure in their link text, while the words "miserable failure" obviously do not occur in the biography of George W. Bush, because he wrote it himself. Okay, this is what is very often called Google bombing, and Google actually had very big problems getting rid of this phenomenon, because if enough people do this independently of each other, you basically have a very hard time deciding whether this really is many people talking about tropical fishes, or whether it is an attack of some kind, because you don't see the coordination. Attacks are usually coordinated somehow: if I have one server that starts spamming, I immediately know what's happening, but if several servers start doing it, the thing becomes more complex, and that is what happens here. There is also a further point for content spam, because the user, the reader of a web page, is not very happy about seeing the actual spam content. If I have the words "building a house" or "miserable failure" 500 times on the page and then comes the Viagra ad, I'm not staying long on this page, let alone thinking about getting Viagra. So the page must look like what it actually is, a Viagra advertisement, and it should not look like I'm being tricked, because spam is not really trustworthy at its best, but if I see that I'm obviously being tricked by this page, then I get even more angry about it. So, to keep up the chances of actually selling Viagra, spammers try a lot of interesting techniques. For example: placing the text that you don't want users to see behind images on the web page; writing the text in the background color; font sizes of zero; putting the text only in scripts and deleting it afterwards; or delivering different web pages to web crawlers than are shown to the users, which is very often called cloaking. So if somebody comes to your web page and says "I'm an indexer, I'm the Googlebot", then you show him a totally different site, and if somebody says "I'm not the Googlebot", then you show him your Viagra advertisement. Just an if-then statement, nothing more, very simple to do. Or so-called doorway pages: you just immediately redirect the user, so the user won't have time to see the actual site that was indexed. Good. Most of these techniques are obviously easy to detect. Things like font size zero cannot be on a sensible page; why should somebody write about tropical fishes with a font size of zero? It does not happen, so you immediately exclude such pages, and of course the spammers became aware of that. It's a little bit more difficult to deal with the last two, cloaking and doorway pages, but you can still detect them to some degree.
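One simple check, which is exactly what is described next, is to fetch the same URL twice, once claiming to be a crawler and once as an ordinary browser, and compare what comes back. This is only a rough sketch: it assumes the requests library is available, the URL and user-agent strings are placeholders, and real cloakers often key on IP ranges, which a naive check like this cannot catch.

```python
# Rough sketch of cloaking detection: fetch the same URL twice with
# different user agents and compare the two responses.
import requests

def looks_cloaked(url: str) -> bool:
    as_bot = requests.get(url, headers={"User-Agent": "Googlebot/2.1"}, timeout=10).text
    as_user = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=10).text
    bot_words, user_words = set(as_bot.split()), set(as_user.split())
    overlap = len(bot_words & user_words) / max(len(bot_words | user_words), 1)
    return overlap < 0.5            # crude similarity threshold, chosen arbitrarily

print(looks_cloaked("http://example.com/"))
```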
I mean, just pretend you are the Googlebot, and then fetch the site a second time pretending you're not the Googlebot (but ha, you are), and then you will see whether this is cloaked. On the other hand, you have to access every page twice, or you have to train a classifier to look out for it. Both cost time, time in indexing, time that you need to recrawl the web to keep a good index. So the more time and effort you invest in actually finding out whether something is spam or not, the better your index will be, but the staler your content will be; a trade-off for the web search engine provider as well. Good. As I said, cloaking typically works like this: a request is sent by a crawler, you check for the IP address or the user agent, and if it's a crawler you send the cheat page, if it's not a crawler you send the Viagra ad. Very easy to do. Doorway pages: you create a page that is a wonderful answer for some query and immediately redirect to your Viagra ad, and if you do that with enough topics, you get a lot of Viagra pages for a lot of different queries. This is what you want. Actually, the interesting part is that Google has discovered this technique of doorway pages being used even by big companies that try to do a little bit of search engine optimization. For example, BMW and Ricoh, so cars and cameras and copiers, have been banned by Google for some time because they used doorway pages. They were totally annoyed about that, and Google just said: well, in our FAQs on how to build your site to get a good ranking in Google, we said not to use doorway pages; if you don't adhere to our policies, we can kick you out of our index, and nobody will find you ever again. BMW actually considered suing, but there is just no way you can sue yourself back into the index. They are in the index again by now, but I think they have learned the lesson. So sometimes the search engines can teach the search engine optimizers a lesson, but more often than not spam is very hard to avoid. So, talking about link spam: the idea is that the more in-links your page gets, the better your PageRank will be, the better ranked your page will be, and if the in-links come from good pages, from pages with high authority or high prestige, your PageRank will start to rise. What is very often done is so-called comment spamming. You take high-quality sites, some news archives, for example the New York Times, or some newsgroups, some wikis that are highly ranked, and you insert into these pages, as a comment, a link to your page. Since Google will not distinguish between comments and the original content of the page, but just sees the structure and the words on the page, the New York Times will appear to link to your Viagra site, so it must be important. Or the newsgroup that everybody in the Unix community reads will link to your Viagra page. It is a very difficult problem to get rid of that. You can even automate it to some degree by writing bots that put comments into open wikis. We very often have the problem here at the chair, at the institute, that robots put spam links to various pages into our wikis just to improve their PageRank, and we actually disabled the commenting function, because it's just not valuable enough to justify dealing with the spam. One of the countermeasures you can use against bots is CAPTCHAs.
Who knows the concept of a CAPTCHA? Okay, well, basically you show a word, or a certain number, in some distorted way, so it's not easy to OCR it, not easy to recover automatically, but you as a human can see it, and then you have to type in this word before you can comment on something. That excludes bots from posting comments. Good idea. CAPTCHAs, as I said, well, sometimes you can read them and sometimes you cannot, but the idea is really: it's an image, you can see it, but a robot cannot read it. And even if you give it to some automated OCR engine; I mean, I have problems reading that one myself, so I very much doubt it can be OCRed properly. And as soon as your CAPTCHA is wrong, you just don't get to comment. CAPTCHA is actually an acronym for "Completely Automated Public Turing test to tell Computers and Humans Apart", because there are the bad bots that spam your comment feature, and there are the comments that are very valuable for your page and that you really want to have, and the Turing test between them is recognizing what is written in the CAPTCHA. However, the spammers have already developed countermeasures against that, because if you know how the CAPTCHAs are generated, that I take the word and then distort it in this and that way, then you can reverse-engineer the generation process, undo the distortion, and then it's normal again and an easy OCR task to recognize it. It's also interesting that the intelligence of the crowd, the Mechanical Turk and things like that, have been used for solving CAPTCHAs: you just employ the Mechanical Turk workers and say, well, people, please solve CAPTCHAs for me, every CAPTCHA you solve gets you one cent. So I pay a thousand euros and I can spam a hundred thousand pages where the commenting function is protected. That can be worthwhile, depending on how big your business is. But of course spam sells; somebody is buying the Viagra, otherwise people wouldn't be able to make money with it, otherwise people wouldn't do it. Another method are so-called link farms. The idea behind them is that you have a large group of pages that link to each other in some kind of cycle, so that when one page transports its authority to another, it comes back around and you get into this recursive cycle; every page just benefits from every other page that links to it and its respective prestige. What has also been done are so-called link exchange programs. As I said, it's probably easy to recognize a link farm if it's one server, very local, with pages that just link to each other; that is obviously easy to detect. But what if I ask other people, on other continents, to give me some links? What if I get Wikipedia to give me some links? Interesting, isn't it?
Because then I can do a lot of things, and Google has no way of finding out whether these links were deliberately placed to point to a Viagra page or to some sensible page. Difficult. In any case, you have to create link patterns that look normal, because obviously the web search engines will look for abnormal structures, and once they find one, they will immediately classify it as spam and you are out. If they use the HITS algorithm, for example, you can also try to act as a hub: well, so this is a Viagra spam page, but the links that I give are real, they point to pages that are really important, that are really interesting. So I place the ad and act as a portal, which is well received by people; they are reasonably likely to buy my Viagra, and Google has no way of finding out, because you just benefit from the pages you link to, which are popular and have high prestige. Very often this is done by cloning directories, or cloning Wikipedia, or things like that. On the other hand, if you have a high hub score, the other pages you link to will get a higher authority score; also beneficial for them, and very hard to detect. One of the methods that uses this principle is the so-called honeypot. A honeypot means you put onto the page something that is sensible, that people like to read, that helps people, and usually at the side of the page, or wherever, you put your spam, your Viagra ad. For example, you have the Wikipedia pages that everybody loves: you just make a copy of Wikipedia, you can crawl it, it's easy, the content quality is very high, and then you sprinkle it with a couple of Viagra ads or whatever you are trying to sell. For the user it is usually not too important which of the copies they use, and if you get one, two, three people buying your Viagra, it has been worthwhile, because obviously you don't pay for the content; it's open content, you can do that. And if people find it useful, that can be done. On the other hand, you don't even have to show your Viagra ad on the page: you can also hide links to your Viagra page on it, because then you have a high-authority page which is used by many people, since they don't care whether they use Wikipedia or the Wikipedia copy you just made, and placing the links there is a good way of promoting the PageRank of your spam pages. So this is also a boost. The idea is basically: you have this Wikipedia copy, or whatever it is, people link to it because it's very nice and has wonderful content (the content is ripped off, but who cares), and then you add some links to your Viagra site; this copy gets a high prestige, and thus your Viagra site gets a high prestige, because it is transferred by the PageRank. Okay, well, that can be done. What is also fun is to look for expired domains.
There are some pages that are popular for some time and often linked to, and at some point the owner of the page, the maintainer, may decide not to prolong the domain, not to use it anymore: it has been hosted at some provider and they don't want to pay for it anymore, or they move to some other site, or they switch providers, or whatever. Still, the domain that just expired may still have a high PageRank. So look for expired domains, buy them, put links to your Viagra page onto them, and benefit from their high PageRank, from their prestige, before everybody sees they are outdated. You have benefited just by buying a domain, and that doesn't cost much. So that's also a fun idea. Well, link spam is even harder to detect than the usual textual tricks, because web pages and websites, that is, clusters of pages, can show some regular patterns but can also be arbitrarily different, depending on what you want to achieve, how creative you are, and how the site has been created. And the point is: if you create a text chaotically, nobody is going to understand it, so it reflects on the quality; if you create the navigation structure of a site chaotically, it doesn't matter very much, because people can still navigate it. So while creating text in an unusual fashion really lowers the value of the text, and as a web search provider you can easily say, well, this cannot be sensible because it doesn't correspond to what is normally understandable by humans, it is much harder to say, well, this page's link structure doesn't correspond to the usual navigability for humans. That is very difficult to see. In general you have some heuristics. You can say: if the in-links of a lot of pages look the same, then this is probably some Google bombing taking place, for example the miserable failure thing. If, during your shingling, you find copies of high-quality content, look very closely at what the original is, and don't count the links from pages that you define as not being the original; that kind of detects the honeypots. And of course it's always good to add some manual work, by creating a whitelist of pages that you know are good, that are not spam, and trying to figure out the link distance from every other page to these good ones. I mean, we know that everything is connected over something like six links, but nevertheless, if something is very close to one of these good sites, it's probably not a spam page; if something is far away from the good sites, it might be a spam page. These are all heuristics, and they all just help you to find something. More often than not you will find something, more often than not you will also delete good pages, and more often than not spam pages will not be detected, but it's better than just accepting everything at face value.
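As a minimal sketch of that last, whitelist-based heuristic (everything here, the link graph, the page names and the cut-off distance, is invented): a breadth-first search from the trusted pages flags everything that lies far away from them, or cannot be reached at all.

```python
# Whitelist heuristic: BFS from trusted pages over a made-up link graph;
# pages that are far away or unreachable get flagged as suspicious.
from collections import deque

links = {
    "trusted-portal": ["news", "wiki"],
    "news": ["blog"],
    "wiki": ["blog", "shop"],
    "blog": [],
    "shop": ["spam-farm-1"],
    "spam-farm-1": ["spam-farm-2"],
    "spam-farm-2": ["spam-farm-1"],
}
whitelist = {"trusted-portal"}

dist = {p: 0 for p in whitelist}
queue = deque(whitelist)
while queue:
    page = queue.popleft()
    for nxt in links.get(page, []):
        if nxt not in dist:
            dist[nxt] = dist[page] + 1
            queue.append(nxt)

for page in links:
    d = dist.get(page, float("inf"))
    # the cut-off of 2 is arbitrary, only chosen for this toy graph
    print(f"{page:15s} distance={d}  {'suspicious' if d > 2 else 'ok'}")
```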
that's that's kind of kind of the idea Um the costs of cheating cheating search engines are usually higher than than the benefits of it Um, I I have no idea what is written BMW for example, uh that are kind of uh, uh the leader of some market segment To try doorway pages to increase their page rank. I mean people looking for a BMW The more often than not end up on BMW DE or BMW com Good so create high quality content a link should be a recommendation. So adhere to that too Build a crawler friendly website also with a good robots txt And you can use whitehead techniques. So last time we were kind of very briefly discussing uh google adwords where you can actually Uh pay for your your your page being ranked higher if it is a commercial page and and and then you will not be banished from the ranking altogether Pong so here we go and look at some of the um, uh, uh Well investigations of how search engine optimization works Yeah, actually, uh, there's a way to have a lot of a lot of fun with search engines so called seo contest search engine engine optimization contests Uh, and they are started either by private people or by by some magazines or companies who Just want to find out how search engines work. So this is a Contest started by the german computer magazine ct Some years ago. They just started a contest and the uh, and they invented A new animal so homing berger geparden forelle. So what's geparden forelle in english? Oh, it's a cheetah It's a cheetah and a trout a cheetah trout. So looks looks like this So the idea is to to use some kind of term that never has been used in the web before So if you would have entered this term into google when they started the contest Then no pages would have been returned by google and the task was within a half a year or so Create pages about the homing berger geparden forelle and then after half a year the People at ct magazine would Make some queries at major search engines. So mostly google and and and bing and so on and then look at which pages Are returned at the top of the list and the people who created the Page get some kind of price by ct So and a lot of people participate in this contest and try to build really nice pages about the homing berger Geparden forelle created nice pictures and here you can see the url where this picture is from Homing berger geparden forellen dot de and they they try to find out How a page must look like that search engines Think that these pages are really rich in content and have a really high prestige and And so on So we can take a look at the Wikipedia entry which which tries to summarize the contest A little bit So it's actually it has been done from april to december 2005 The contest so actually that's only in german because It has been in german context and there in a contest and there's no english page But what's what's quite interesting is to see the Number of pages That have been returned by different search engines over time So at its peak in october 2005 there have been five million pages in the web about the homing berger Geparden forelle So a lot of people had a lot of fun During this stuff and created pages and Of course building four million pages is not a manual task, but they created scripts that automatically Build link farms and all stuff about these This crazy animal And yeah, I can see how many pages they have been Yeah, take a look at the article That's pretty nice. 
So this have have been the winner According to google Let's have a look at that and it really looks like a lot of pages And it really looks nice. So seems to be in an article explaining what the homing berger Geparden forelle is Where it lives Uh, how much it costs when you try to buy it at the market Nice nice recipes Links and and yeah, really really a nice a nice looking private website with all information about this topic by forellen such. Yeah, so This is how a page must look like if you wanted to have ranked high in google. So Very interesting to see Yeah Yeah So So Yeah, another contest has been schnitzel mit katoffelsalat schnitzel with potato salad started by some Or in some german youth group There hasn't been some kind of prize money. It has been just just for fun. Take a look at it Really nice nice thing and yeah Good to see so um, there was also this this english content. Maybe we should also that what what was the search word Negratude ultramarine. Yes, something like this. So I found much information about it. So but Yeah, no no no stop it So that's a term for it Uh, yeah, they have been different different things server from proud old duck Uh Let's growl blues thing sky whatever so ideas to just just create some new new words Something well, this is not only a german and german uh, like ultramarine. It was what? I see, okay Yeah, yeah So you will find it if you're if you're diligent enough And there was a lot of attention for that Actually mostly mostly fun and not really scientific but a good a good way to Try out different different ways to build websites and see how search engines react to that Yeah Give some give some new insights. So also google has some hints for webmasters how to build good websites And essentially uh, yeah, these are some yeah quite intuitive hints Yeah, make a side with a clear hierarchy and text links Every page should be reachable from at least one static text link and all these keep the links on a page to Reasonably number so a lot of a lot of hints google Collected for for creators of websites. So basically all these hints sum up to build really a built Websites that have good content that are made for humans And then google will have no reason to hurt you Because google obviously is designed or the ranking in google is designed to detect exactly those pages Built by humans with good content for other humans and if you follow the rules of building these websites Then you will get a high ranking by google. So no tricks necessary usually Focus on creating good content That's usually the best thing you can do and you don't have to worry about anything All right, um, yeah, let's have a short break For a moment and then we continue with how google Builds his data build its data servers data centers So then let's continue with our next detour it's about what hardware you can use for large scale Website for example as google does it so um some most Most surprising thing is when you when you try to find out how google builds their data servers or data centers Is that they do not use some fancy modern large supercomputer as you read it from time to time in some some List that that hp or ibm or craig just created a big new machine for computing weather data or Do doing nuclear testing on a theoretical level or doing simulations Now google does a does a Pretty different approach. They do not use a single large machine or even a small number of large machines. 
They Basically use really crappy hardware crappy hardware means computers as you can buy it Buy them at every store. So very very cheap and very unreliable hardware They plug and switch them together in some way and it just works. So People are not really sure how how google does it But that's google google's idea how a data center should look like use very very simple machines Plug them together in a in a systematic way in a fault tolerant way and then you get very high performance for very little money so What's exactly done at google has been has been Was there was a lot large secret for many many years, but from time to time you get some information for example in 2007 and 2009 There have been some Yeah presentations about google's hardware's hardware and since then people know Yeah Approximately what google is doing. So as I said google uses only custom build servers. So they buy standard hardware and build Standard service from it and and connect them via network So of course as they do not have very large machines, but very small ones They need a lot of them. So actually although Google does not sell its service google in fact is the world fourth largest server producer so part on Slightly on par with hp and other big companies who build who build servers for living but google is building so many servers only for themselves so In 2007 there has been some estimation that google then operated About a million servers. So i'm pretty sure four years later in 2011 It's it's many many more much much more servers. So these servers are distributed all over the world So in 2007 there have been 34 major Minor data centers all over the world. So of course if you have some users in asia Who are who want to query google you do not want to transfer all the query over to To the u.s. And send the answer back. You want you want to be able to process The query results directly directly in asia or in south america. So Google servers are distributed all over the world and connected to a large to a large Yeah, large index a large networks with many data replicated all over the world for reasons of performance so um as i said they are connected by by yeah, some data connections here massive massive fiber lines Usually and the funny thing is that about seven percent of all internet traffic is generated by google alone So basically google's traffic is a large amount of all what's happening in the internet And google owns a lot of a lot of lines by themselves. So 60 percent of the traffic Uh that uh that google is is having with their customers is going completely over google's own lines So of course this is much cheaper than than having to to rent lines from some some global providers So google has many lines and is spending a lot of money in web infrastructure Uh and again here if google was an internet service provider Such as dodger telecom Google would be the third largest global carrier. So they are doing it all for themselves But they are really really really large And by doing it themselves they can provide high performance high quality for a low price And a full control over their own network So here's some here's some some facts about about how the google data centers look like. 
So again in 2007 um, they created four new data centers Costs we're about six hundred million dollars So really really expensive doing this, but we have seen some weeks ago when we talked about I talked about other words Uh that google can easily earn this money back by the infrastructure So the cost of operating their hardware and software Have been estimated to 2.5 billion dollars in 2007 again. It is really really expensive It's also expensive in terms of energy consumption. So uh Each data center has an energy consumption of 40 50 megawatts So to to comparison the whole region of brownschweig. So all people living here and all all industry Being here Has a has an energy consumption of 225 megawatts and so this is basically The double amount of the largest data center of google in oregon. So google's data centers are By by by means of energy consumption like small cities And of course they need their own power plants and power stations That's also what's what's google doing though. They're basically trying to to do everything what they can by themselves They don't want to rely on on third parties to provide energy to provide Uh lines to provide hardware. They want to have full control over their own business Okay, some some more facts about google's servers. So they built their servers in large racks and each racks usually Each rack usually contains 40 to 80s. Yeah commodity class bc servers So some standard hardware with some custom designed linux. So they also have Haven't any expensive licenses license fees to pay. They use their own linux And they use a specialized network file system Where large numbers of servers are connected and it looks from the outside As they use a big single file system and data is automatically transferred from machine to machine and Yeah, the hardware is slightly outdated That's because slightly outdated hardware usually is a lot more cheap than than modern hardware So there's usually this this kind of curve. You might know from from cpu These are the very modern ones really really expensive and if you wait a year or two then prices are a bit are more Yeah, I'm more reasonable and more correspond to to the To the power you really get the performance you really get so they try to Not use the most the most up-to-date stuff But try to optimize their cost value But try to optimize their cost balance Okay Each of these each of their servers here some pictures below has some 12 volt battery Connected to them because google found out that power supplies are usually unstable in in a way and just using a Battery as additional power source can can counter some some fluctuations in supply Yeah, so they they don't need any any specialized Cases for their for their hardware In fact, they use standard shipping containers if you if you would use them In trucks or in harbors on chips. They just are Fitted with all this hardware and can be deployed anywhere in the world just put power in and Large network network cables and then things really really work. So these the things are customary made at some at the google headquarters and then shipped to all over the world Where they can just be used Okay, um, of course if they use very cheap hardware Google servers tend to be really really unstable But have a lot of performance for very little cost. 
So this is a that's basically idea of google high bank for the buck ratio So and here are some typical events that occur Uh in the first year of a new data cluster in a new new data data shipping container So that there will be there will be, uh um There will be an overheating every two years Approximately so that this will power down most machines in the new cluster for about Five minutes and then you need one or two days to recover to Uh repair your hardware so this can happen then your power distribution units will failure and 500 to 1000 machines are suddenly not there anymore because they they failed So these are events that happen and usually in some company infrastructure if if such a large number of machines Just breaks down then your whole your whole infrastructure is isn't working anymore. But but google's Clusters are designed that they can easily handle those events. So even if thousands of machines suddenly suddenly disappear Uh google is still working. You can still post and you can still post queries and queries get answered very efficiently So um sometimes wreck an entire wrecks get get gets moved again thousands of machines sit down You need to revire your networks your network by by um then removing some machines one after another Over many days wrecks go go go crazy Uh in a year 20 times about um Again hundreds of machines disappear. You need some hours to get back to the normal state uh You need to do network maintenance. Uh your your routers have to be reconfigured Uh your routers do not work anymore And so there are many many things that happen and there is almost no day without a major incident in such a data center, but as I said the Fascinating thing is that google keeps on working even under these conditions So if you just remove an entire data server the google infrastructure as such won't notice So of course if you standard hard drives, they will they will fail Many many times and essentially you have some people running around in your data servers Doing the whole day nothing but switching hard drives nothing but switching switching your your your power supplies and your machines doing maintenance operations Because uh, yeah, there are so many problems in standard hardware that just occur and need to be fixed So but it can be done So the main challenges that google has in the infrastructure is the need to deal with all these hardware failures that you won't have with large Large enterprise scale servers, but you will definitely have with all these crappy and cheap hardware At the same time you need to avoid any data loss So you won't uh, you want your index to disappear or disintegrate completely because then you can do cannot do business anymore So you must be highly tolerant to all these kind of faults. You need to guarantee global uptime So as we said, there are many many hundreds and thousands of queries to google every second If google is down for a minute then there will be many thousands of customers who are really pissed up pissed off And will go to another search engine So you will you want to decrease your maintenance costs to minimum So you don't want to have very expensive repairs on your machines. 
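To get a feeling for why replication and cheap, swappable parts are the answer to this constant stream of failures, here is a back-of-the-envelope sketch. All numbers below are assumptions made up for illustration, not Google's actual figures.

```python
# Back-of-the-envelope sketch of why replication is unavoidable at this scale.
# Every number here is an assumption for illustration only.

machines = 100_000            # machines in one data centre (assumed)
annual_failure_rate = 0.04    # 4% of machines fail per year (assumed)

expected_failures_per_day = machines * annual_failure_rate / 365
print(f"expected machine failures per day: {expected_failures_per_day:.1f}")

# Chance that one particular data chunk becomes unavailable if each of its
# replicas sits on an independently failing machine during a repair window.
p_fail_during_window = 0.001  # assumed probability per machine per window
for replicas in (1, 2, 3):
    p_chunk_lost = p_fail_during_window ** replicas
    print(f"{replicas} replica(s): P(chunk unavailable) = {p_chunk_lost:.0e}")
```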
You will you just want to Switch a hard drive by pulling the old one out and pushing the new one in and then things should work You don't want to have any expensive reconfiguration Uh, you want to be able to extend your data centers to just by building a new data center Plugging the network cable in and the power then the whole the whole system should be able to reconfigure itself And use the new hardware and and new possibilities that are there So at least no manual configuration necessary. Everything should be done automatically by your infrastructure Okay, what the solution to all these requirements? Use the cloud So cloud technologies will be very flexible be highly distributed Distributed and have a high performance So as I already indicated this is done by the google file system or the google big table data system some Kind of file systems that are optimized for distributed environments and that really look like a single file system As you know it from your home pc So how it can be done and how this really works and why these two guys are smiling always Uh, this can be learned in the lecture in next semester distributed databases, so I can definitely recommend that that will be held by christoph lovhi in the next semester, right? Yes, we will come to that later again But if you want to know what's what google's Hardware and software secret is then this is a lecture you definitely want to attend All right next topic is meta search So um Meta search is one of the points where you say well maybe google kinds of has a special way of delivering their results or calculating their results And yahoo also has a special way And bing has a very special way So why do I trust one of these engines? And not the others And why can't I connect their strength And kind of wipe out their individual weaknesses This is basically the idea that uh, if you have different ways of ranking the content And each has individual strength and weaknesses Then the combined results of them the combined effort of them should be better than each individual So it's it's a little bit like like what we did with the wisdom of the crowds If I have thousand people stating something and I take the average Then everything is wonderful Because I know how many beans are in the pot or how big the ox is or whatever And this is exactly the same same same idea. So I have the query and I uh, put post the query to the so-called meta search engine that distributes the query to different search engine collects the results Reranks it somehow and gives it back So this is kind of kind of the idea um, and uh, one problem that the meta search engine faces is It has to rely on the underlying engines because obviously google is not allowing any meta search engine to grab into the index and and and say well Um, let's see what kind of page this is and how high the tech term frequency was or how high the the the uh, the ranking factor uh, transported by the by the page rank was or something, yeah But it just gets the pure numbers of this is the best match. This is second best match and and and so on They used to be with google some some of these numbers um, uh 0.8 or 0.875 whatever, you know Uh, that that kind of how gradually uh, uh, ranked it by assigning some some value But uh, almost all search engines have given up on that because these numbers mean nothing beyond the pure ranking And so it's not an important information for the customers. So why showing it? 
So you are not able to exploit the internal information of a search engine. Thus the problem can be defined quite mathematically: we have a set of ordered lists that are returned by the underlying search engines, we have to integrate them into one ranked list, and then we return this one list. How do we integrate rankings? I mean, each of the search engines has a best candidate; which of them is the best one? Any ideas? Take it from the best search engine, so we always ask Google? But why do meta search then? Yes? Maybe you cross-validate the results by looking at where they appear in the other search engines? Yes. Maybe you can have some topic-driven preference for taking one engine's answer over the other? Yeah, it is difficult, isn't it? It sounds like an easy problem, but it is difficult to actually do it. And actually the problem is not too novel: social choice theory and voting theory have been around since the Middle Ages, and the problem of building a fair voting system is as old as democracy. So we should have some idea of what a good system should grant us, and I guess there are a couple of things we can agree on. For example, so-called Pareto efficiency: if we have two pages, and the one page is ranked higher by all our engines than the other page, there should be no way that it ends up ranked lower in the aggregate ranking, right? Sounds good. I mean, what reason could there be? Choosing one search engine does not change it, because that engine ranks it higher; counting where it is in the other search engines does not change it, because they all rank it higher; even if you go topic-driven or something, it is still ranked higher. Interesting, huh? Pareto optimality, so we can kind of agree on that. Non-dictatorship: always choosing Google sounds like a good idea, but that is not meta search. Non-dictatorship means we should not always let one single engine dictate the result; the other engines should at least have some effect. Can we agree on that? Sounds good, doesn't it? The last one I want to talk about is the independence of irrelevant alternatives. Say I decided on some ranking between page a and page b; now I add page c into the picture. Then page c can be ranked higher than a, or lower than b, or in between them; we have several possibilities of assigning a rank to c. But the mere existence of c should not change our previously derived ranking of a and b: either a is better than b or it is not, and that does not somehow depend on c, does it? That is what is called the independence of irrelevant alternatives. So, can we build an algorithm for re-ranking our pages that takes all these things into account? Can anybody think of one? Any ideas? Nobody can think of an algorithm. Why is that? Because it is not only complicated, it is impossible, even with these few small requirements that all sound so familiar and all sound so sensible. It cannot be done, and there is actually a mathematical result about that.
It is Arrow's impossibility theorem, and he is smiling like that because he told all the voting people that they could go and do something different: it is simply impossible to design any ranking scheme, based on different input sources, that realizes Pareto efficiency, non-dictatorship and the independence of irrelevant alternatives at the same time. You will always violate one of these properties, and that can be proven. Interesting, isn't it? So whatever we do, it will have weaknesses; we have to live with that. On the other hand, this is not rocket science. There will be no nuclear explosion if our ranking is slightly off the mark, or if there is a little bit of dictatorship in it, or if it is not quite as Pareto efficient as we would want. So we can relax some of the requirements, and I will show two basic ways of doing it that are used very often: one is the majority rule, the other is the Borda count. Both are actually quite old; Borda proposed his method in the seventeen hundreds, somewhere around that time, so these are very old voting systems. I will show their strengths and their weaknesses, and then we will see. Let us assume for the minute that we have, I don't know, three or five search engines, and that every search engine ranks every page. That is obviously not true, because they use different crawlers and may crawl different parts of the web, but let us just assume it for the minute, okay? Then the first scheme, the majority rule, just says: if I have a pair of pages, say a and b, then I ask every search engine whether a is better than b or b is better than a. And since every search engine provides a total ranking, every search engine has an answer to that. If I take an odd number of search engines, the answer is even unique, because the vote obviously cannot be split half and half. So assume engine one ranks a higher than b, engine two ranks a higher than b, and engine three may rank b higher than a; it does not really matter, because we have two engines ranking a higher than b and only one engine ranking b higher than a, thus a should be higher than b in the aggregate ranking. Okay, let us do the same for a and c: all engines agree that a is better than c. So a is better than b and a is better than c. We still have b and c: two engines say b is better than c and only one engine says c is better than b, so again two to one, b is better than c. That's it, a nice aggregate ranking. However, things are not that easy; I would not have shown you Arrow's impossibility theorem before if it were enough to just take a majority vote. The big problem of majority votes are cycles, and to show you what happens, I will do it for you once again. These are the three search engines, okay? Let us first focus on a and b: a is better than b, a is better than b, and two against one, a is better than b.
Okay. Good, let us look at b and c: b is better than c, b is better than c, two to one, b is better than c. Let us look at the last one, c and a: c is better than a, c is better than a, two to one, c is better than a. Okay: a is better than b, which is better than c, which is better than a. Well, we have to live with it. Actually the literature lists a lot of methods for how to deal with these cycles, whether we can break them somehow and nevertheless decide for some ranking, and so on. But it is a fault, it happens, and you have to figure out ways to deal with it. Let us look at the other one, the Borda count. The Borda count is actually acyclic, it avoids cycles, and the idea of the Borda count is that you allow ties in the final ranking, by letting each search engine cast a vote that is a score correlated to the position in its ranking. So for each search engine, the first-ranked result gets the highest score; the highest score if I just take the number of documents that I have as the possible scores, so every document gets a different score. I will just assign a three to the first-ranked document, the second-ranked gets a two, the third-ranked gets a one, and I have just three documents here. If I were using Google, I would start with 60 billion and count down to one. I do the same for the other search engines: again one, two, three, and again two, three, one. Good. Now, to get the ranking for every page, I just look at its scores and sum them up. So a has a three, a three and a two, which gives a total of eight. Yes? Well, in the beginning we assumed that every search engine ranks every page; then it would be the same set everywhere. This assumption is obviously not valid; for example, we know that Yahoo ranks fewer pages than Google. Dealing with that could be done by saying that we only consider the first hundred results of each engine. After all, how many results are you going to put into your final ranking? Probably just a couple of pages; if we assume every result page has ten entries and you finally want to create ten pages, then we need about a hundred documents from each search engine. Then we start counting back from one hundred down to one, and everything that is below rank one hundred in a search engine gets a zero. That can be done. Of course you then have to normalize the search engines against each other, otherwise you will have dictatorship: just assume that you have one search engine that ranks ten times the number of pages of any other engine. If you score by the number of ranked pages, you would need ten other engines to be able to contradict this one engine, and that is not sensible. Okay, but you could do it like I said. You do the same for all the other pages: b has a two, a one and a three, and two and one is three, plus three is six, that's it. And finally you have c: c has a one, a two and a one, which gives it a four. Okay? The advantages of the Borda count: it is obviously very easy to compute, just take the rankings, assign the rank numbers, add them up, that's it. And it can also handle pages that have not been ranked by all the engines: you can just assume that such a page sits somewhere very deep in that engine's ranking, so it will get a zero as its score.
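Before we come to unranked pages and weighting, here is a minimal sketch of both aggregation schemes on a three-engine example. The three rankings are invented so that the pairwise majority vote runs into exactly the kind of cycle shown above, while the Borda count resolves the same input into a tied scoring; the sketch assumes an odd number of engines and that every engine ranks every page.

```python
from itertools import combinations

# Minimal sketch of the two aggregation schemes discussed here: pairwise
# majority voting and the Borda count. The three example rankings are invented
# so that the majority rule produces a cycle.

rankings = [              # each list is one engine's ranking, best first
    ["a", "b", "c"],
    ["b", "c", "a"],
    ["c", "a", "b"],
]

def majority_pairs(rankings):
    """For every pair, report which page wins the pairwise majority vote
    (assumes an odd number of engines, as in the lecture)."""
    pages = rankings[0]
    result = {}
    for x, y in combinations(pages, 2):
        votes_x = sum(r.index(x) < r.index(y) for r in rankings)
        result[(x, y)] = x if votes_x * 2 > len(rankings) else y
    return result

def borda(rankings):
    """Sum up rank-based scores: best of n pages gets n points, next n-1, ..."""
    scores = {p: 0 for p in rankings[0]}
    n = len(rankings[0])
    for r in rankings:
        for position, page in enumerate(r):
            scores[page] += n - position
    return scores

print(majority_pairs(rankings))   # a beats b, b beats c, c beats a -> a cycle
print(borda(rankings))            # all pages tied at 6: Borda allows ties, no cycle
```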
Or you could say: well, maybe the page simply was not indexed by that engine, so I will assign it some medium rank. If I take the first hundred documents of each search engine, I could say that if a page is not among the first hundred, then I give it a fifty, or, I don't know, a twenty or something like that, something that makes sure the page is not dropped just because it is not indexed there, but that also does not promote it too high. That can be done. You can have ties in the aggregate ranking, so you do not have the cycle problem, which is good. And you can also weight the individual engines: if you say you trust Google more than the other engines, then you can add some ten percent or so to Google's scores, to Google's assessment. That is still not a dictatorship, but it folds into the common result set. Okay, can be done. The disadvantage: of course, if you just hand out these rank numbers, you assume that the degradation of the ranking is uniform. Assume that one engine has a total favourite and says this is the best page ever and all the other pages are rubbish; then this page should be promoted over the second-ranked page much more strongly than if the engine says: actually, I cannot distinguish between these ten pages, they are on the first result page anyway, so I will just use a random order. There should be a difference, but with the Borda count you cannot know. As I said, it is rather difficult for all these voting schemes and ranking schemes and re-ranking schemes to actually work. So Borda count, majority vote: very difficult to say, and, as I said, if you have different search engines, the results of the Borda count and the majority vote may also differ. For example, here I take three search engines and their results. For the Borda count we add the four, the three, the two and the one for each engine; then for a, for example, I add up the four plus the three plus the two, which makes nine, and so on. Now let us look at the majority vote: for example for a and b, we have a before b here, a before b here, and b before a there, whatever; two to one, a is better than b. Do the same with the Borda count, and it happens that they just disagree: the majority vote and the Borda count give different answers here, and there is nothing you can do about it. And, as I said, cycles are kind of a problem that does not occur in the Borda count. However, the scores are often pretty close in the Borda count anyway. So just shift one document one place more or less, or add a couple of documents and put the c deeper in this search engine so it becomes worse: that will not change the majority vote at all, at least not this part of the majority vote, but it will change the Borda count drastically. The same here: shifting the b just one step, just exchanging it with the d, introduces a different order in the Borda count and does not change the majority vote at all. They both have their stabilities, they both have their faults, their drawbacks. Decide for either of them; there is nothing more you can do, because Mr.
Kenneth Arrow wanted to be clever and proved the impossibility theorem; now you know. Good. So sometimes, to see whether some engine performs better than some other engine, it is very helpful to find out whether they agree on something. If you have a gold standard and say, well, let us take Google as a good ranking, and I want to see whether my engine is better than Google, or similar to Google, or whatever, then you need the degree of agreement between them as a measure. And that measure has actually been built; it is called Kendall's tau. It is very popular and very often used to compare rankings that are created by different engines, or rankings that are compared against some gold standard, where you say the gold standard is perfectly correct, and whatever my engine does, it should not be too far from the gold standard, it should be very similar to the gold-standard ranking. The idea of Kendall's tau is that for each pair of pages that is ranked by both engines, you determine whether the two engines agree on the order or not. If one engine says a is better than b and the other says yes, I agree, a is better than b, then you count an agreement. If one engine says a is better than b and the other says no, b is better than a, then you count a disagreement. You do that for all the pairs and in the end normalize by the number of pairs. So if you have perfect agreement, if all the pairs you tested agree, then you have a Kendall's tau of one: perfect. The higher the Kendall's tau, the closer it is to one, the more your lists agree. So it is basically the ratio of agreeing pairs compared to all pairs that are ranked by both engines. We can do it a little more formally: if m is the number of pages ranked by both engines, you count the agreements and the disagreements, and then you take the agreements minus the disagreements, divided by the number of possibilities to draw two pages out of all the ranked pages. Yes, exactly, this is basically given by the binomial coefficient, and m times m minus one divided by two is just this coefficient, and that is what comes out of it. So, yes? Oh, that's right: written this way it can become negative, so the Kendall's tau is minus one if the two rankings perfectly disagree. Right, so it is correlation and anti-correlation, basically: if one engine always says the opposite of the other engine, then you get a Kendall's tau of minus one, you are perfectly right. And if you get zero, then it is just random: if for every pair you just guess bigger or smaller, then you have a fifty percent chance of guessing correctly, and this will result in a Kendall's tau of zero. So two rankings are either correlated, anti-correlated, or totally independent, totally random. Yes? Yes, that's right. So, as I said, it is usually always done against a gold standard, because otherwise, if I compare my engine to Google, for example, there are different possibilities of making errors: either Google was wrong, or I was wrong, and just assuming that I was wrong is not a good idea. So Kendall's tau does not make sense in a case where Google is often wrong. But if I have a gold standard where I say this is correct, like we did in the precision-recall analysis: there we had a manual classification of whether something was relevant or not.
And then we compared our engine, our IR engine, to the manual classification and said: well, this is the precision of my result. And it is the same idea here: I have a perfect ranking that is induced somehow, and then I compare my ranking against this perfect ranking. And so I just need to compare two engines pairwise. Good. So if I have two engines here: on a versus b this one agrees; on b versus c it disagrees; and on a versus c it again agrees. Then it has two agreements and one disagreement, and the number of possible pairs is three, so Kendall's tau will be one third. I have more agreements than disagreements, thus there is a positive correlation; but since I do not have many more agreements than disagreements, it is only a small correlation. Yes? Good. So, meta search is not used very often these days. And why is that? Because, well, there are all the ranking problems involved, and questions such as: do the engines actually rank the same documents, or do I have to find out who ranked what and where it occurs in the other ranking? That is kind of hard. But there is one area where meta search is the killer application for web search engines, and that is the so-called maximum-recall searches, where you want to have everything that is on the web, and it does not really matter which engine I have to ask, I want to have it. Can somebody think of an application for that? Why would I want to have a maximum-recall search? Any ideas, any applications? Okay, that would be one application, yes? Oh, yes, but that is kind of similar to estimating the size of the web. No, one application that immediately springs to mind is patent search. You have to be sure, when inventing something or when filing for a patent, that nobody has done it before, so you cannot afford to overlook anything. The same goes for scientific results: you did something wonderful and then somebody pops up and says, well, that has been done in 1965 by Kenneth Arrow. Well, that is my impossibility theorem. No, it is not. So better make sure in literature search, or something like that, that you have a high recall: rather look at something that does not hurt you than ignore something that does hurt you. That is the typical idea. Well, for most other types of queries, not the maximum-recall queries, meta search usually fails to increase the result quality. And, as I just said, meta search really works well only if the engines are completely independent. But they all use PageRank, they all use term frequencies, so how can the rankings be completely independent? The errors are usually systematic, to some degree at least. Also, the engines used should be of similarly high quality; but we know that there is a competitive advantage for some engines that crawl bigger parts of the web or that use better technology. So this assumption does not really hold either. So it is kind of difficult, and meta search is not the killer app that it was originally thought to be. When there were just a couple of search engines, meta search seemed to be the application, the wonderful way to solve it all by piggybacking on a lot of different search engines; it has not really delivered what we hoped for. So this brings us to our second-to-last detour for today, but actually I think we should skip that one: MetaGer is a typical German meta search engine, and whoever wants to have a look at it, just follow the links and try it out. It is not too impressive; that is all I want to say here.
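For completeness, here is a small sketch of the Kendall's tau computation discussed above. The two example rankings are invented so that they reproduce the case of two agreements and one disagreement, giving a tau of one third.

```python
from itertools import combinations

# Sketch of Kendall's tau as described above: count agreeing and disagreeing
# pairs between two rankings of the same m items and normalise by m*(m-1)/2.
# The example rankings are invented to reproduce the tau = 1/3 case.

def kendall_tau(ranking_a, ranking_b):
    items = ranking_a                       # assumes both rankings contain the same items
    agree = disagree = 0
    for x, y in combinations(items, 2):
        same_order = (ranking_a.index(x) < ranking_a.index(y)) == \
                     (ranking_b.index(x) < ranking_b.index(y))
        if same_order:
            agree += 1
        else:
            disagree += 1
    m = len(items)
    return (agree - disagree) / (m * (m - 1) / 2)

gold = ["a", "b", "c"]          # e.g. a gold-standard ranking
mine = ["a", "c", "b"]          # my engine swaps b and c
print(kendall_tau(gold, mine))                  # (2 - 1) / 3 = 0.333...
print(kendall_tau(gold, gold))                  # perfect agreement -> 1.0
print(kendall_tau(gold, list(reversed(gold))))  # perfect disagreement -> -1.0
```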
The last part of the lecture today is privacy issues on the web and this will be done very briefly by Joachim. So actually it's just to give an impression of what problems could occur when dealing with user data on the web. A very popular and very devastating example has been the so-called AOL query log. So when doing research, of course, scientists want to have access to real data. For example, real queries sent to some search engines. And so AOL, large internet provider at the time, which had an own search engine, decided to make their query logs publicly available to scientists. So basically a list of all queries sent to AOL in the previous year, in five and something like this. So of course they did not ask their users, but they thought, hey, by just publishing the list of queries, nobody would ever be able to find out who were the people who asked the queries. So what did they publish? 20 million queries from 650,000 AOL users. And all these queries were within a three months period. And some technical data regarding these queries that could be used by scientists to make better search engines. So yeah, user names in AOL have been replaced by random user IDs. So there have been no way to find out who searched for what. So unfortunately, here's the query log of some user. Obviously, he drives Skyen XB, whatever kind of car this might be, had some problems with brake pads, lifts in Florida, also had some specific things he wanted to repair, is somehow related to the Florida Department of Law Enforcement and wants to get revenge on an ex-girlfriend. So with all these information, it might be able to find out who he is. So if you have able to some database on car repairs in Florida, you know the time when he asked these queries, then you will definitely be able to find out that he has some trouble with his ex-girlfriend. So another thing is here some user has some problem with cocaine, obviously. Again, again comes from Florida, is interested in marriage in some way. So legal problems in Florida currently is in New York because he is worried about whether the New York authorities will extradite him to Florida. Looking for cooking jobs in the French Quarter in New Orleans. Again, with some more information, you will be able to find out what he does, what he likes and what problem he has and these are not the things you want to have published. Here's someone who has a job interview with Comcast, drives a Ford Focus, is somehow related to the city of Joliet in Illinois, has a criminal past, cheating spouse. Again, Illinois, again truly some problems. These are this kind of information you don't want to have published. Very personal. Also very nice how to kill your wife, wife killer, pictures of dead people, murder photo, well, steak and cheese, car crashes. Maybe if there is some more information about this user, for example, he or she googled her or AOLed his or her own name, then it's truly a problem. Yeah, and definitely the problem also is if you find out who this person is, should you report him or her to the police? Are you allowed to do that? Interesting question. Yeah, first check survive. All right, next one, this user. This user has been tracked down by the New York Times. So they looked at the queries. This nice lady here, searched for the city she lived in and obviously has some very strange problems with her dog, which was urinated on her sofa and made a lot of trouble. And this person is Selma Arnold, a 62-year-old widow living in Lilburn, Georgia. 
So also did research on her friends, on their medical history. And just by looking at her queries, she posted to AOL. The journalist has been able to find her and have been able to find out many details about her personal life and the personal life of her friends. So the journalist approached her and she immediately said, yeah, well, these are my queries. Interesting, how did you find me? So it definitely proves it is possible to find these people and there's no way to provide sufficient anonymization on this kind of data. So, general, be very, very careful if you have access to this kind of data. So, of course, AOL realized that there has been a problem just after a day of the release of the data. They removed the data again and they apologized. Yeah, this was a screw up and then they claimed this was just the work of some single, single guy working at AOL and nobody knew about it. It was all a big mistake and they are very sorry. But yeah, they probably always will be sorry because once you published such data on the web, it's always out there. So this is a address of someone who just collected where the data can be downloaded. Yeah, it's easy to find. So no problem here by publishing this address. So this is a problem AOL always has and never will be able to fix. And of course, the AOL users who can be found at the data set will have this problem for all of their life. So very, very big issue here. Similar problem on Netflix, a large DVD rental service faced a very similar problem. They also released a data set some time ago about what DVDs have been rented by each of their users. Of course, they anonymize the data by replacing the users by random user IDs. So yeah, again, should be they sort of thought that it's really difficult to find out who rented what, which person is which. But some team of researchers just took a look at IMDB, the Internet Movie Database where people can post textual reviews about movies and also post star ratings about movies. And then, then, then they try to find out which users wrote IMDB ratings about movies. Approximately at the same time, they rented a movie at Netflix and gave a similar rating to these movies. Yeah. And of course, they found some people, identified some people in Netflix data set, asked them, sent some mail and asked them, hey, did you rent these movies at Netflix at this time? And they found out that they guessed correctly. And of course, this is a big problem because there definitely are movies you rent from a DVD rental service you do don't want to find all people find out about. Yeah, there are some movies people don't want to talk about. And they have their reasons. So again, this is a privacy issue and Netflix also have decided not to publish a data set again due to legal reasons. So this makes research quite a big problem. But yeah, there's all you all you can do when you have this kind of data. Keep it for yourself. Last example, our services such as 123 people. You just enter a name. And these service, these services are designed to find all information about this person available on the web. So they're looking through Facebook, they are looking through address books, they are looking through picture galleries. And yeah, here we are all your Achim Selkes, more or less. So these four definitely are some addresses of people with this name very convenient and domains with his name, email addresses, web pages. So very, very well ordered here, the Tech Cloud about what I'm doing. So yeah. So very interesting to see. 
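The mechanism behind both the AOL and the Netflix incidents can be illustrated with a toy example: a handful of externally observable facts is often enough to match exactly one pseudonymous user. Everything below, users, items and weeks, is invented.

```python
# Toy illustration of why the AOL and Netflix releases failed: a few
# observable facts (here: an item plus the week it was rated) often match
# exactly one pseudonymous user. All data below is invented.

anonymized_log = {                        # user_id -> set of (item, week) pairs
    "u1": {("MovieA", 12), ("MovieB", 13), ("MovieC", 15)},
    "u2": {("MovieA", 12), ("MovieD", 14), ("MovieE", 20)},
    "u3": {("MovieB", 13), ("MovieD", 14), ("MovieF", 21)},
}

# Facts an attacker scraped from a public source (e.g. reviews with dates).
observed = {("MovieA", 12), ("MovieB", 13)}

candidates = [uid for uid, events in anonymized_log.items()
              if observed <= events]      # users consistent with the observation
print(candidates)                         # ['u1'] -> re-identified despite the random ID
```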
Then you don't even need an evil company publishing your private data. There's a lot of data already out there. And so if some, yeah, some service that I don't think is a too big company is able to find all this data, then Google all some large search engine definitely is able to find out much, much more about your private life than you would want to. So even if you particularly if you have a private profile at Google, then yeah, you just can trust them. And they promise to don't be to do don't be evil. But yeah, who knows? All right, yeah. It's been our last detail for this semester. These are the lectures we will offer in the next semester as soon as I can start up the thing here, but I don't have to. So the first one is distributed database and peer to peer data management. We talked about that how to build distributed systems, how to store data in a reliable and efficient way and how peer to peer networking does work. So you have no central storage anymore, but all all service in your networks have the same have the same same rights and have to cooperate. So the next one is data warehousing and data mining techniques. So data mining, we discussed this in parts of classification, for example, the data mining techniques. Data warehousing is more about analyzing business data and companies. Also very interesting. Again, next next semester, we will have. Yeah. Here's the money. Remember that. The next one is spatial databases. Yeah, it's about how to deal with geographic information is also getting more and more important also on the web location based services. So if you're standing in the middle of a city and you want to go dining, just get your smartphone and see what's what's near what the star rating given by other users to all the places you have. But of course, for this, you need maps, you need to need to need localized services. And this is a topic in this lecture. And finally, we will have digital libraries, which is, yeah, there's some some overlap to this lecture here. Digital libraries have a different focus. Of course, digital libraries are about storing and retrieving information and retrieving textual information mainly. But here an important aspect is longtime preservation, for example, providing providing a good classification. Yeah, so it's a different different aspect, a different perspective, different focus. And I think it's a good addition to this lecture also. So if you have been interested in how libraries deal with their information, this could be a good follow up here. All right. So if you do not have. Yeah, so here's here's the money here. Here are the maps here are the books. And here is the cloud. All right. Any more questions? All right, then thank you very much and maybe see you next semester.
This lecture provides an introduction to the fields of information retrieval and web search. We will discuss how relevant information can be found in very large and mostly unstructured data collections; this is particularly interesting in cases where users cannot provide a clear formulation of their current information need. Web search engines like Google are a typical application of the techniques covered by this course.
So, as always, it's my pleasure to welcome you to the new installment of our web search engines and information retrieval lecture. And we ended a little bit suddenly last time when we were just discussing the quality measures of retrieval systems. And we're basically focusing on two measures, which is precision and recall. And I will briefly reintroduce the measures so that we can start off at the beginning and then tell you how to actually evaluate those measures. So precision is a typical measure to get a feeling of how precise, of how correct the results retrieved by the systems are. So how many of the results that the system returns are actually correct? And how many mistakes do we have in the result? But of course, this is only one side of the metal because you could easily get towards a high precision by returning only those items about which you are very sure and you feel very secure. But then you would miss something. You might miss a large portion of the results or of correct results because you would rather focus on the precision and say, well, if I'm not sure about something, I just don't retrieve it. And this is why there's a second measure that has been built, which is called recall, where you say, well, basically, of all the relevant items that are in the entire data set, how many of those did I actually return? So what is kind of like the number of objects that have been withholded from the user that the user might never know about? And of course, this is very difficult to compute because you have to go through the entire data set to find out how many correct things are actually part of the data set. And if you know that, then you can compute the recall. And for example, if you have something like patent search, missing a patent is a very bad thing. So you need a good recall. However, what you would like more than anything else is kind of like perfect recall with a perfect precision. So you want the relevant items and all the relevant items, but nothing that is incorrect in the data set or nothing that is incorrect in the result set. But you don't want to miss anything that is correct that is missing from the result set either. So this is basically what you do. And it's a trade-out, a trade-off. So what you basically do is you have a curve for the recall where you say, well, at what levels of recall do I reach what precision? And then you get a curve here, for example, where you say, well, at a recall of 10%, I do have a precision of one. So the first 10% of my results that are perfect, like they retrieve 10% of the relevant items and don't add anything that is wrong. If I go to 20% of the relevant items, I may have a precision of 0.8. So 80% of what I'm retrieving is correct. 20% are false drops, are incorrect items. But in total, I retrieve 20% of all correct items. I'm missing 80% of the correct items. And this gives you a feeling on how these systems work. And then you get a curve like this over here that is kind of characteristic for the system. And what I can easily say is, if I have a system that does it like that, the system in red is definitely better than the system in blue. Because on all levels of recall, it maintains a higher precision. So this makes it better, obviously. However, usually this will not be the case, but usually we won't have such perfect systems. But there will be a trade-off, for example, here, where you say, well, for the higher recall values, I maintain a higher precision, whereas for the low levels, I maintain a lower precision. 
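As a small illustration of the two measures just defined, here is a minimal Python sketch for a single query; the retrieved list and the set of relevant documents are invented.

```python
# Hedged sketch of precision and recall for one query.
# The retrieved list and the relevance judgements are made up.

retrieved = ["d3", "d7", "d1", "d9", "d4"]          # what the system returned
relevant = {"d1", "d3", "d5", "d8"}                 # ground truth for this query

true_positives = sum(1 for d in retrieved if d in relevant)
precision = true_positives / len(retrieved)         # how much of the result is correct
recall = true_positives / len(relevant)             # how much of the truth we found

print(f"precision = {precision:.2f}, recall = {recall:.2f}")   # 0.40, 0.50
```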
So what is more important for me? If I am a patent clerk, I can have to find everything that's in the system. Definitely the higher recall areas need high precision. So I don't have to look through too many items. On the other hand, if I do a web search, you rarely look behind the first page of Google to result sets. Maybe the second page. But that's it. You don't want the first 50 pages, though there may be something that is relevant. So you're very much interested in high precision. So then the low recall values, but the high precision values would be very interesting. That's the trade-off of what kind of system you're designing. So the question, what is best, is always difficult to maintain as soon as the different curves intersect at some point. So this is basically what we're doing. And there have been several attempts to say, well, a curve is kind of a graphical representation of something that could be average, that could be somehow computed into one score number. And this score number, then, is kind of representative. And I can just order my systems by this score number. And the most renowned for that is so-called F-measure. F-measure is just a harmonic mean, so an average, between precision and recall values. So you put the precision and the recall values into one formula, and you have some characteristic parameters to switch between them, to build more precise or more higher recall values. So if I use an alpha of 0.5, then it's balanced. Take a balanced mean and just say, if you suck in higher recall values, it's exactly if you're better in high precision values. And it has to be about the same amount. And of course, if you're more interested in recall, you can shift the alpha towards 0. And if you're more interested in precision, you can shift the alpha towards 1. And by shifting it, either this part or this part is pronounced more, and then it gets an imbalanced measure. Why don't we just use the average, the mean, but a harmonic mean? Well, the problem is that if you use the arithmetic mean, then you basically would have a baseline of 0.5 for the F-measure. Because if you just return all the elements in the set, then you have a recall of 1, definitely. It doesn't really matter how high your precision is. You will definitely have an average of more than 0.5. And that's a bad thing, of course. You would rather assume that any system that really sucks in either one should be punished. And this is basically what the harmonic mean does. So what is also very important is not only if you retrieve something or don't retrieve something, but also in what position do you retrieve something. So you could, for example, have a system that has a recall value at 20%. Where you go, like 20% of the relevant items are returned. We have a certain precision of, say, 80%, meaning that every fifth document is wrong, is incorrect. Which system would you prefer? The system that first dumps all the incorrect items on you on the beginning of the list and then starts with all the correct items? Or one that kind of mixes the items in an arbitrary fashion or some system that first gives you all the correct items and then 20% of incorrect items? Well, obviously, the correct answer is that you would assume them to be mixed somehow. And the more good items occur in early stages of the list, are returned first. The better the system is. This is especially true for web search engines where really hardly anybody looks behind the first page. And yeah, so that has been basically done for top K retrieval. 
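Before turning to top-k evaluation, here is a sketch of the weighted harmonic mean from a moment ago, written so that alpha close to one emphasizes precision and alpha close to zero emphasizes recall, consistent with the description above; all numbers are invented. It also shows why the harmonic mean punishes a system that simply returns everything.

```python
# Sketch of the weighted harmonic mean (F-measure) discussed above.
# All precision/recall values are invented.

def f_measure(precision, recall, alpha=0.5):
    if precision == 0 or recall == 0:
        return 0.0
    return 1.0 / (alpha / precision + (1.0 - alpha) / recall)

# Balanced alpha: a system that just returns everything (recall 1.0, tiny
# precision) is punished, unlike with the arithmetic mean.
print(f_measure(0.05, 1.0))            # ~0.095; the arithmetic mean would give 0.525
print(f_measure(0.8, 0.5))             # balanced F, about 0.615
print(f_measure(0.8, 0.5, alpha=0.9))  # precision-heavy weighting, about 0.755
```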
So you have some K, which is basically a position in the result list. So first returned document, second returned document, third returned document, and so on. And then you compute the precision and recall values for the set of the top K items. So basically, if you start with a relevant item at the first rank, you have a precision of 100%. If you start with an irrelevant item at the first position, you have a precision of 0%. And the more documents you see, the less binary it will be, because there's a good chance that you have returned at least something relevant and something irrelevant by then. So basically, the precision at K, and the same concept for recall at K, is computed from the relative precision and recall values over the documents you returned until you reach rank K. So for the precision at four and the recall at four, I take those four documents into account and compute the relative number of which have been right and which have been wrong. So that is basically the idea behind it. Clear? Precision at K? Recall at K? Very important, for example, for web search engines. Good. If you then plot the precision at K and the recall at K, you get a precision-recall curve. And it basically starts from, well, if you hit the first values, immediately you start at a precision of 100%. And then you slightly go down, so every wrong result that you return will kind of lower your precision. And that's basically what's happening. So you get these typical curves, and you also get this typical sawtooth shape, because the precision may drop without adding anything to the recall as soon as you start giving out a couple of incorrect items. So for example, let's assume the first item has been correct, precision of 100%. Let's then assume that the next five items are wrong. Do they add anything to the recall? Well, no, obviously, because you did not retrieve any more correct items. So you stay basically on the same line in the recall, but your precision drops. And then at some point you start handing out a correct item again. So you're increasing your precision again, and at the same time you're increasing the recall, because you did hand out more correct items. So you're moving into that direction for the improved recall, and you're moving into that direction for the increased precision, which basically makes this typical sawtooth shape. And since, of course, you don't want that, I mean, it looks kind of strange, you usually smooth that curve to make it a little bit softer. So you can use the interpolated precision at a certain recall level instead, which is basically the highest precision found for any recall level that is bigger. So what you do is you don't dump down there and go up again, but you would just kind of do it like this. OK, which is the interpolated form of this curve. It looks a little bit smoother. It's kind of the same thing. OK? Good. Well, if you're comparing your result set or your algorithms to other algorithms, then precision-recall is the way to go. The higher the precision at higher recall values, the better your system is. And for example, we were talking about the TREC conference, the Text REtrieval Conference, last week, I think. And this is one of the venues where different algorithms are compared for different retrieval tasks, so there's entity search or person search or chemical documents, or you name it. You have a lot of so-called tracks, and tracks are specific problems that you might hit upon if you do information retrieval.
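Here is a small sketch of precision at K, recall at K, and the interpolated precision that smooths the sawtooth; the ranking and the relevance judgments are invented for illustration, not taken from the lecture.

```python
# Toy ranked result list (best rank first) and relevance judgments.
ranking = ["d1", "d2", "d3", "d4", "d5", "d6"]
relevant = {"d1", "d4", "d6", "d9"}   # d9 is relevant but never retrieved

def precision_at(k):
    hits = sum(1 for d in ranking[:k] if d in relevant)
    return hits / k

def recall_at(k):
    hits = sum(1 for d in ranking[:k] if d in relevant)
    return hits / len(relevant)

# The raw (sawtooth) precision-recall points, one per rank K.
points = [(recall_at(k), precision_at(k)) for k in range(1, len(ranking) + 1)]
print(points)

# Interpolated precision at recall level r: the highest precision reached
# at any recall level greater than or equal to r.
def interpolated_precision(r):
    return max((p for rec, p in points if rec >= r), default=0.0)

print(interpolated_precision(0.5))
```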
And TREC uses an 11-point interpolated average precision. So basically, you take the different recall levels and compute the precision at a certain set of points, 11 points, 11 samples of these recall values. So you get basically 11 points here. And now you don't do any sawtooth or step function, but you just interpolate the curve through these points. OK? And of course, you average the precision values over many different queries, so you get the average precision of your system. And then whenever you have a system that maintains a higher precision at the same recall levels, then it's a better system. And if you have something that intersects your curve at some point, then you have to decide what you actually want. Are you interested in high precision? Then the blue system is better. Are you interested in high recall? Then the black system is better. And this intersection point kind of tells you where the break-even is. OK? At the break-even, it obviously doesn't matter which of the two algorithms or which of the two retrieval systems you use. Good? So this is basically what is used for finding out how good your system actually is. And the last measure I want to do today is the so-called mean average precision, which is a single value for assessing the basic quality of your system. And the idea behind mean average precision is that you compute the precision at K for any K such that there is a relevant document at this position in the result list. OK? And then you compute the arithmetic mean of all these precision values. And if you do that over many different queries, then you get the mean average precision. So what is the usual precision at those ranks where relevant documents are returned? And it has become quite popular to use mean average precision because it helps you to discriminate between different systems, and it has been shown that it has good stability. So basically, if you try different queries, then you get different curves for the system that you have. And how high your curve is shows how good or bad your system is. So you could consider the area under the curve as a measure for how high or how low the curve is. And this is basically what mean average precision expresses. OK? And the higher up your curve is, so if you have the blue system over here that has a very high curve, then this area will be huge. It has a high mean average precision. If you have the red system, the mean average precision is a little bit lower. And this is kind of what the mean average precision measure says. OK? Clear? Questions? The what? The recall. This is a precision-recall curve. So you have the recall here and you have the precision here, because you do precision at K, and K is the rank that you have. Good? OK. So in the next lecture, or actually in this lecture, we will do clustering. We will move to a slightly different area and talk about how documents can be clustered. And this is what we are doing now. So let's skip to the next lecture's slides. And today's topic is document clustering. And we will of course start with your homework again. Who of you did the homework? One, two... two. OK. So let's discuss. All right. And these are typical exam questions. OK. Last week we have seen how language models work.
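Since mean average precision is the one number people usually report, here is a minimal sketch of it (the rankings and judgments below are made up; dividing by the number of relevant documents follows the usual convention):

```python
# Average precision for one query, and mean average precision over queries.

def average_precision(ranking, relevant):
    hits, precisions = 0, []
    for k, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / k)   # precision@k at every rank holding a relevant doc
    return sum(precisions) / len(relevant) if relevant else 0.0

queries = [
    (["d1", "d2", "d3", "d4"], {"d1", "d4"}),   # toy query 1
    (["d5", "d6", "d7"], {"d6"}),               # toy query 2
]
mean_ap = sum(average_precision(r, rel) for r, rel in queries) / len(queries)
print(mean_ap)   # 0.625 for this toy example
```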
So basically we try to estimate some word probabilities. But as we have seen, when we just use the relative term frequencies as they appear in each document, we get some kind of biased estimates. So what exactly is the problem and what can we do about it? Yeah. OK. So one problem definitely are zero estimates, because they force all products to be zero. And obviously, if a word doesn't appear in a particular document, it still could be that the document's topic is related to this term. So it's a good idea to use non-zero estimates for those terms. What about the terms we can estimate by the relative frequency, those that do appear in the document? What problems can be seen here? Yeah. So, all right. There could be some kind of overestimation. Our probability estimates could be too large. And so the general idea is to use smoothing. For example, we could use the general collection frequency of every term to score terms that do appear in a document a little down and terms that do not appear in the document a little up, and then we get a quite good estimate. All right. Next one. Using an example of your own, explain the difference between relevance and pertinence. What are your ideas on that? Yes, please. OK, so the point, as you say, is that pertinence also takes the user's state of knowledge into account: a result can match the topic but still not tell the user anything new. Yeah, that's exactly right. So the main point of this distinction is that pertinence only counts those documents as good results that still contribute something to the user's knowledge. So here's an example for that, for this distinction. Usually when one talks about relevance, it's this topic or subject relevance we discussed last week, and pertinence, this cognitive relevance, refers, as you said, to the state of knowledge of the user. Here's an example. For example, some computer scientist, so someone who has some technical background, is looking for how the PageRank algorithm of Google works. So PageRank is a famous algorithm invented by Google. We will talk about it in a few weeks. And the query then could be PageRank algorithm. So the computer scientist expects some technical content explaining the math behind this algorithm and how to implement it. But one result could be Google's own overview of this technology, which is held in generally understandable terms. So there are no technical terms involved; it just tries to explain the idea, which presumably is already known to a computer scientist. So this definitely would be a relevant result, because it fits the query PageRank algorithm in a topical sense, but as the computer scientist already knows what it is and just wants to know how it works, this result is relevant but not pertinent. So this is the main distinction here. Okay, last week we've also seen how to evaluate different retrieval algorithms. I showed in detail an example of how this is usually done in the TREC collection. There we had an example of an information need about endangered species, which has been described using a list containing three points: a title of the information need, a description, and a narrative. So your task was to think of an information need of your own and describe it using this scheme. Did you do that? Okay, what is your example?
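As a small sketch of the smoothing idea from the homework discussion, here is one common way of mixing the document's relative term frequency with the general collection frequency (a Jelinek-Mercer style interpolation; the lecture does not name a specific method, and the toy texts and the lambda value below are my own choices):

```python
# Smoothed term probability: interpolate the document model with the
# collection model, so unseen terms no longer get a zero estimate.

from collections import Counter

def smoothed_prob(term, doc_tokens, collection_tokens, lam=0.5):
    doc_counts, coll_counts = Counter(doc_tokens), Counter(collection_tokens)
    p_doc = doc_counts[term] / len(doc_tokens)             # relative frequency in the document
    p_coll = coll_counts[term] / len(collection_tokens)    # relative frequency in the collection
    return lam * p_doc + (1 - lam) * p_coll

doc = "apple pie recipe apple".split()
collection = "apple pie recipe apple tree fruit market price".split()
print(smoothed_prob("apple", doc, collection))   # seen term, scored slightly down
print(smoothed_prob("tree", doc, collection))    # unseen term, now non-zero
```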
Okay, yeah. All right, very good. So the most important point in the narrative is to make clear what is relevant and what is not relevant. You made this very clear. So someone else could easily decide whether some given document is relevant for this information need or not. So this is the most important part of the information need description, because, yeah, you really need this to evaluate relevance in a consistent way. So I also brought an example from some time ago. There has been some incident in Iraq where an Iraqi journalist threw a shoe at then President George Bush. And the title of this information need could be: reactions to the Bush shoe incident. So here's a description of what happened: in December 2008 an Iraqi journalist threw his shoes at US President George W. Bush, who was giving a press conference. And the question is, how did the international media comment on this event? So here is the most important part. Media commentaries themselves are relevant, as well as documents summarizing what the media said about this incident. But in contrast, articles that only report about the event itself but do not comment on it are not relevant. So this is a clear distinction and should work for most documents. So this is how it's usually done when you evaluate information retrieval systems. Okay, next one. What is the pooling method? Maybe one of the other two students. Yeah, then you again. Yeah. Okay, why do we do this? What's the benefit of doing this? No, the pooling method itself is not about judging the quality of individual systems. It has to do with evaluating recall. Any ideas on that? So what is the problem of evaluating the recall of a retrieval system? All right, so if you wanted to evaluate the recall, you would have to go manually through the whole collection, which could be huge. And with the pooling method, you are now able to focus on a small subset of the whole collection, and you just assume that only those documents returned by any of the systems are candidates to be judged relevant. And you assume that all other documents, those that have not been returned by any of the systems, are not relevant. So this is a, yeah, brave assumption. But if you use many different systems, and they have been designed and developed independently and rest on independent techniques, you can be quite sure that you have a large coverage of all relevant documents. Of course, you won't be able to be sure, but that's the best you can do. Otherwise, you have to go through the whole collection, which definitely is not an option for large collections. All right. Okay, the last one, we just had an explanation of that. What is more important for an information retrieval system: precision or recall? Yeah, it's always a trade-off. So for example, if you're doing patent search or legal research, then high recall usually is what you want to have. And sometimes you even use Boolean search; we talked about this in detail at the beginning of the course. And high precision is typically required in tasks such as web search, where you simply want to look at the first page and find something relevant without scrolling through hundreds of results. So of course, it also depends on the topic you're doing research on. If it's very, very specialized, then high recall might be what you need.
And if you're willing to spend some time looking through the results, because it is important to know the answer to your information need. But on the other hand, if you just want to know some information about a very popular topic, then high precision is the most important thing. All right, thank you very much. Then we continue with clustering. Very well. So today we want to talk a little bit about how to cluster documents in such a way that similar documents are considered to be of the same kind, to be assigned to the same cluster, and on the other hand, that you can say, if I move to a different cluster, I get entirely different documents. And of course, this is only valid, or only interesting, for retrieval systems if you conjecture that, well, the documents that are relevant with respect to some information need are similar to each other. They will look similar. They will feel similar. They will probably use similar words, you know. And if that is true — and I mean, as I said, it's a conjecture, it's a hypothesis — then clustering the documents might actually help you to guide users to some relevant cluster. And by exploring the cluster, he or she gets a good overview of what the answer to the information need actually is. So the user would benefit very much in precision and recall, because he or she would get the complete cluster. And since the documents in the cluster are very similar, they all are relevant. And the interesting thing is it's easy to formalize that so-called cluster hypothesis: closely associated documents tend to be relevant to the same requests. Similar documents, which means any kind of similarity value where you say, okay, contains the same terms or similar terms, synonyms or something like that, you know. Then if that is true, then this is a good thing. And what it prohibits is a situation where you say, okay, there's a relevant document here and there's a relevant document over there, and they sit in clusters that are totally not similar to each other. This is what contradicts the hypothesis, if you observe that. And to prove it, you have to test it on different collections, and then you can say, well, it works out or it does not work out for some reason or the other. So it really is a hypothesis. And the experimental validation of the cluster hypothesis was kind of problematic, because for different collections you get entirely different document types. So sometimes when you're looking for a very, very small piece of information, for example the boiling point of water or something, this information might occur in a lot of very different documents. There might be some very, very small documents on a very low level that explain the basics of chemistry and at some point refer to the boiling point of water, but there may be highly specialized technical descriptions where in some sentence it's just mentioned that the boiling point of water is like that. These documents don't seem to have anything to do with each other. So that's kind of like, it's very collection specific, and you have to figure out whether it holds for your collection. It also depends on the representation of the documents. So what do you count? If you have a bag of words model, you may get totally different results than if you consider things like the lengths of the documents and what the document actually expresses. And of course, it also strongly depends on the similarity measures. What do I do?
Do I count only the words that occur in the query, that are interesting? So for example, the boiling point: then all those documents that contain the words boiling point are similar with respect to these words. But of course, I can also say, no, the whole vocabulary of the document is similar or dissimilar. So depending on what similarity measure I use, I can get different, well, different degrees of correctness for my cluster hypothesis. And it depends on the queries. So for some queries, it might be true, and for some other queries, it might be totally, totally wrong. But then, you know, very often you are expecting a certain kind of document as an answer to your query. That is a thing that very often holds. So you might have something in mind when you pose a query and say, well, either I just want this information — I want the boiling point of water and it doesn't matter in what document it occurs — or you might have some idea: I want a description of something, and it has to be on some, I don't know, novice level or expert level or something. And you expect certain characteristics of the documents already. This is usually a good indication that your cluster hypothesis works, that it holds for your collection and that it holds for your information needs. So let's say it holds often enough. Fair enough. If you consider real-world collections, they usually do have some cluster structure. You can put documents that share similar vocabularies in one cluster. You can strongly distinguish between documents that use totally different terms. We already used something like TF-IDF, saying, you know, there are terms that are more discriminative, less discriminative. So also that can be put into a similarity measure to distinguish between documents. And so if we're looking at real-world situations like the web, for example, we will find that many objects or many documents are transporting the same information, or expressing the same information. Well, they will rather look alike. They might differ in formatting, they might differ in layout. But basically what they're saying, or basically what they're talking about — apart from what your information need was — might be very similar. The question that we always have in this lecture is: can we exploit this somehow for retrieval engines? Can we build clever algorithms based on this cluster hypothesis? Well, of course, it would strike you as a little bit odd if I told you about clustering and the cluster hypothesis and there were no way whatsoever to exploit it. And so obviously there is a way. And this is what we're going into in this lecture. We will be looking at some applications of clustering. We will then state the problem of what a cluster is, what describes a cluster, and we'll look into two different kinds of clustering, flat clustering and hierarchical clustering, and revisit some of the basic algorithms that are part of these clustering strategies. So what has been done in information retrieval, especially in web search engines, is that you try to get documents from the same cluster, or that you get documents from different clusters that are however relevant, to get different views, to have a certain diversity of opinions. And actually, I think it started back in the 90s, when people were very interested in these clustering algorithms.
And some of the web search engines were claiming that clustering provides better results than non-clustered results in terms of relevance and in terms of diversity of the results. And one of the examples is, for example, Clusty. Clusty is a search engine where you type in a certain term that you might be interested in. For example, you're interested in me. And then it says, did you mean Blake? No, I did not. And I don't want to buy him on eBay. And I don't want to shop for him, because he cannot be bought. But here are the results then. And you might find that, well, I do have a DBLP entry, which is, for the computer science community, a bibliographic portal where many of the publications are shown — for example, one publication that I did together with Joachim or something like that. And you might get my homepage at the University of Hanover, where I'm still director of a research center. And you might get my homepage here at the Technical University of Braunschweig, where I head the Institute for Information Systems. Different results; the pages do look different from each other. One is basically reciting bibliographies — that's the first one. One tells you about Hanover and how wonderful the research center is. And the second one, well, that might actually be similar, because it tells you how wonderful Braunschweig is and how wonderful the Institute is. So those two might be rather similar, from the same cluster, and the other one might be from a different cluster. But they have something to do with each other. And visualizing these clusters and telling you might aid the retrieval. So what's happening then is that Clusty does not only give you the results, but also so-called facets. Facets are kind of the meanings behind clusters. So you get different clusters. For example, here the L3S research center, here the Technical University of Braunschweig, here are some of the papers that have been accepted, here it's a coworker of mine. So these are different topics, if you want to put it like that, that occur in documents that seem to be relevant with respect to my name. And they're not always sensible. So for example, if you have "Siberski, Uwe Thaden", which doesn't mean anything — it's just a couple of names thrown together. The documents form a cluster, and there are obviously 23 documents that are similar with respect to this cluster. I have no idea what they are now. But they seem to be similar on a text level, and they don't seem to belong to a sensible cluster, because it doesn't have a good heading, or the engine did not find the heading. But some of the clusters seem to be very sensible indeed. And if you're looking for a certain kind of information — say I want to know about the publications of Wolf-Tilo Balke, I want to know about him as being part of the University of Braunschweig — then this is the way to go, and you can click on any of the clusters, for example this one here, and then expand the list of the five documents. Here they are all similar in structure and in vocabulary, and they all deal with Wolf-Tilo Balke being a member of the University of Braunschweig. So as an application that might be definitely sensible, and what you get from it is that you are able to scan a few coherent groups, so the documents are similar. You know, you get a feeling for what you can expect when you move to that cluster, and you don't have to hop from document to document; it gives you an impression of what you can expect.
And also, looking at the different cluster heads gives you a notion of the diversity of what you can expect. Consider I would have some kind of hobby like scuba diving or whatever, and I would have some home pages about my scuba diving photos, fish photography or underwater whatever. Then knowing that this is a facet of mine would give you some top-level information about the entity you're going to explore — just like you got the top-level information that I have something to do with a research center in Hanover and that I have something to do with a university in Braunschweig. So it gives you some insights. On the other side, what we just saw is that the labeling is sometimes difficult. I mean, "Siberski, Uwe Thaden" doesn't mean anything, and how do you choose the correct labels? Do you take the words occurring most often, or do you take some words from the heading of the site or from the document title, or what do you do? It's not really clear. And the same goes for the quality of the clustering. How do you define what good quality is? Which documents are similar enough and which documents are dissimilar enough to put them into different clusters? So these are questions that we will have to answer. So for example, if we look at the query Apple, we might have the apple, the fruit, in mind. We also might have the Apple computer systems, the operating system Mac OS, or the Apple Store where we can buy our new Apple computer, or the new iPhone, or iTunes with music in mind. We might have a lot of things in mind. And the context of what we have in mind might not be easily recognizable by the system. Usually it's just done by the size of the clusters — so you would start with the bigger clusters, obviously — and very often also by the number of people that seem to navigate into one cluster or the other. But if there are a thousand people on the web asking for iTunes and asking for new music, it does not mean that I as a gardener am not looking for my next apple tree that I'm going to need for my garden. And this is kind of tricky, what's happening there. Ideally, a clustering should do what Wikipedia does manually in their disambiguation pages. What they ask is: what is an apple? The apple is a tree and the fruit from that tree, but it may also refer to different companies — here Apple Inc., the computer company — or there's actually a musical science fiction film, The Apple, from the 80s; I can recommend everybody to watch it, it seems to be very interesting. Well, there's a music album, an LP obviously, by the band Mother Love Bone. Apple Records was one of the early record labels, actually founded by the Beatles. There's Fiona Apple, an artist some of you may know. All different ideas of what Apple might refer to. And Wikipedia is not putting any stress on anything. They're not saying: you are a Mac user and you will need the computer company. They just say: let's disambiguate it for you, and then you choose whatever you need. And these are all the things that we can think of. And if you have any obscure notion of what else an apple might be — for example, it might be the Big Apple in New York or something like that — then we don't cater for that. And of course, the truth lies somewhere in between. You would love to have result sets clustered into sensible clusters, into recognizable clusters for the different users, which in Wikipedia is manual work.
And of course, you would like to have it done automatically somehow, by collating the information from queries, from different people, from what they are satisfied with, what results they expect, and stuff like that. This is the challenge that the clustering engines really have to work with. This is basically what you need. One of the first installments of clustering as a useful technique in information retrieval engines is the so-called scatter/gather clustering. So scatter/gather was built as a navigational interface where you were just navigating through an information space, and by focusing on a document you would feed your preference back to the system. And starting from that document, you would get a clustering that kind of gives you deeper insight into what clusters this document might relate to or might show you. So you search by navigation, by navigating through the system. And what you basically do is you take the whole collection, the document collection — so the web, for example — and you cluster it on a very broad scale. So you say, these are documents about computer science, these are documents about, I don't know, mathematics, and physics here, whatever it may be, you know. And then whenever you click on one cluster — yeah, I want computer science documents — then the computer science documents are re-clustered and clustered into smaller entities. There might be computer graphics, there might be information systems, there might be software engineering as subclusters of your global cluster. So what you basically do is you start by clustering the document collection into a small number of clusters, and then the user may formulate a query by selecting one or more of these clusters. The selected clusters then are merged and clustered again. And this is kind of the scatter and gather step. So in scattering, you take the documents that the user has selected — by basically selecting one or more of the clusters — and you scatter them into different subclusters, and in the gathering step, whatever the user seemed to find relevant is combined into one big cluster again, and then it goes on until you're finally satisfied or you're down to document level, where you talk about individual documents and the user is happy with that. So for example, if you have the New York Times stories, then you might cluster them on a broad scale and then say, well, these are all the documents that refer to Iraq, there are some that are about education, there are some that are about Germany or whatever, and the user may now choose some of the clusters he might be interested in. So if the information need is, for example, well, I'm interested in how the Iraq war or the Iraq crisis, especially with respect to the oil, reflects on Germany, then I would say, well, let's choose this one, this one and this one, which basically rules out all the sports documents and all the arts documents and the education documents. And now there is the gathering step, and the documents from these three subclusters are merged together, and maybe they could be called international stories, but how do you know? I mean, you don't have a label for that. Then you look at this subgroup of documents, and again you use a clustering algorithm to find out how they could be clustered in sensible terms — and this is actually the scatter phase — and you might end up with: oh, there's a lot of deployment in it, Germany is a lot in it, Africa might be in it, oil might be in it, hostages may have something to do with it, you know?
And then you select the next clusters — you choose, for example, Pakistan and Africa, because you're somehow interested in that, you know — and then it goes down, you again gather and scatter, and as you see, the scope of your query gets smaller. You're very broad up here, you're much narrower down here, but depending on what your documents look like, some things may come in that you did not really notice or that you did not really feel you were interested in. So for example, I have no idea why Trinidad occurs here, a small island in the Caribbean; there must have been a lot of documents dealing with some incident in Trinidad, you know? And they're kind of related to what you did before, okay? And you just navigate through it, and this is basically what is called scatter/gather clustering. Clear? Good. Well, sometimes it's really very, very sensible to cluster the document collection hierarchically, you know? So for example, if you have Google News, you have a toolbar here where you say, okay, I only want the top stories, I want it refined to only top stories in the US, so that I'm somehow concerned with the US, or I want the business stuff, or I want the science and technology stuff, or I want the sports stuff, you know? So you cluster it into different things, and this is basically a clustering that partitions the document collection very strictly. So if something is a sports document, it's not a business document. And what you do with those documents in the middle might be difficult sometimes, okay? The clustering of a collection, or the clustering of different collections, is especially useful if the collections contain small numbers of topics, and each of the topics is covered in a similar way within its cluster, but differently from the other clusters. So you should be able to distinguish between different clusters quite easily, and you should find the objects within a single cluster similar enough. And this is kind of the idea. I don't know if it's still in use on a broad scale; there was the so-called Open Directory Project, DMOZ, where people were trying to build a hierarchy — basically, well, a table of contents of the web. And they were classifying websites and saying, yeah, this is a website that deals with health problems and basically with medicine or something like that, you know? And they were assigning websites to these labels, so as to allow a navigating approach towards finding what you need, as opposed to the classical search box. And Yahoo was doing that for quite a long time, the Yahoo Directory structure; DMOZ was doing it. So that's kind of one of the visions that basically never died, you know? So this is not so much about clustering as an algorithmic way of finding similarities between documents; this is rather about people organizing things in a hierarchy, people assigning cluster heads to documents and saying, these documents are all about medicine, or these documents are all about business, you know? And the big advantage is that you can basically navigate through the document collection, that you can use browsing, an informed way of browsing, as an approach, as an access path to information. And you don't have to bother with queries and refining your terms and then wading through all the things that are not really relevant, because you didn't mean it that way.
So for example, if you now search for Apple and you mean Fiona Apple, then you would go into the entertainment column and look for the entertainment thing, and you would probably not get the computer or the tree. And if you want the tree, then you go into the, I don't know, gardening section, or probably the home section, home and garden or something like that, you know? And you would find it at some point. But as you already can see, sometimes it's hard to know the directory structure, to know where to expect something. And if somebody has a different conception of where to put it — so somebody might, for example, consider gardening as an art, the art of gardening, you know — then you find it under art, which probably is not what you would expect. You would rather look under home or recreation or whatever. So you see there are different possibilities of getting to where you want. Good. The clustering of collections can also be used to extend search results. So for example, if you find a number of documents that match the query and you say, well, I just found this single relevant document: maybe the user wants to know a little bit more about the topic, but I didn't find anything more. Then clustering, and finding documents similar to this result document, might actually give the user a broader view on the topic. Though the other results are not relevant with respect to the query in the specific way, their similarity to the result document might add a feeling of relevance, and that is kind of the idea. So if you have matching documents and this is your query answer, then you can extend your answer by all the documents in the cluster and try to give the user a broader overview. This is basically the idea. It's also interesting to know that you could gain some speed-up in retrieval, because finding relevant documents will probably not mean that you have to look through the whole collection, but rather that you have to navigate into certain areas of the information space. For example, if you have, like we had with the terms, the term axes here — and these are different terms, T1, T2 — you might have different documents here, and maybe a cluster here and maybe a cluster here, and the documents are similar within these clusters but dissimilar between clusters. So if you can go down to one cluster, you can prune large parts of the space where you don't have to look for relevant documents. And this is kind of the idea: you say, okay, I take representative points for the clusters here — maybe existing documents, maybe points that are just the centroids of the clusters or whatever — and then I find the cluster having the best matching representative. So I say, well, the query matches perfectly to this cluster. And then I can prune all the other documents — I don't have to look at them at all — and just look at these documents, and this will be my result set. Okay, this can be done and is quite effective, because it allows you to skip the comparison of the query to many documents in the collection. And the retrieval result, if your cluster hypothesis is valid, is not different from what you would get if you do the global comparison. Okay, if your cluster hypothesis is not valid, then it will definitely not help you. Okay, then it might well be that over here is the one document that perfectly matches the query.
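Here is a small sketch of that pruning idea — compare the query only to the cluster representatives, then score just the documents of the best-matching cluster. It is not from the lecture; the vectors, the two clusters, and the use of cosine similarity are my own toy choices.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy document vectors, already grouped into two clusters.
clusters = {
    "c0": np.array([[1.0, 0.1], [0.9, 0.2], [1.1, 0.0]]),
    "c1": np.array([[0.1, 1.0], [0.2, 0.9]]),
}
centroids = {name: docs.mean(axis=0) for name, docs in clusters.items()}

query = np.array([0.95, 0.15])

# Only the cluster representatives are compared to the query ...
best = max(centroids, key=lambda name: cosine(query, centroids[name]))

# ... and only documents in that cluster are scored; the rest are pruned.
scores = [cosine(query, d) for d in clusters[best]]
print(best, scores)
```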
Okay, but if you consider, you know, if something matches the query, then it should be similar to the documents that match the query, too. And this is a good way to go. Cool? Okay. And with that, we go into some examples to see what industry has done with clustering search engines. Well, we've seen a lot of theory and a lot of ideas of how clustering can be used. However, practice is a bit different. So for example, here's yippy.com, formerly called Clusty. We had the example on a previous slide. So Clusty went broke for some reason and has been bought by another company, and now they're called Yippy. So let's do an example. Let's try Braunschweig. And here are some ads, and here are the clusters we get. So you might be interested in the hotels, then Braunschweig the city itself, the university, you might look for photos. Yes, that one is maybe the Braunschweig administration. Wikipedia entries, a definition. I've no idea what this means; it's definitely related to Europe. Here it gets kind of strange, but this might be a helpful clustering. So say I'm looking for photos and I haven't specified photos in my query. Then I could simply open this cluster here and then, yeah, things get strange again. You can get photos of hotels in Braunschweig, some photo sharing websites. Braunschweig pictures are located, for example, here. Some photos in and around Braunschweig. So we could also try Apple. So basically this is the same technology as has been used by Clusty; they just exchanged the company label. So there haven't been any improvements recently. So here's Apple again. So stock quote resources, but still no fruit. So there's always room for improvement. Okay, this is how Yippy, also known as Clusty, works. There are also some open source products that can be used for clustering — you might take a look at these at home. WebClust, Clues, these are related services, doing things very similarly to Clusty and Yippy. Bing for some time offered some kind of clustering on their shopping website. I haven't found this when I looked for it yesterday. So they tried to do some product clustering based on features and based on photos, to give the customers a better impression of what kinds of different products there are. For example, if you're looking for mobile phones, it might be a good idea to cluster the small phones together and those that are quite big, the smartphones, together, to get an overview over the market. So as I said, I think they still have this feature somewhere, but I haven't been able to find it. But I have been able to find eyePlorer, which is some clustering technology developed by some company in Berlin. They are also offering this as a free API, so you could embed the system into your website if you want to. And if you want to, we can have a look at the map now. So basically the idea of this technology is to put one or several terms in the middle. Is this large enough for all of you? All right. For example, in this demo, they have Berlin, and now they try to find similar topics. So I'm quite sure that they use the Wikipedia entries to get this overview. So if you start with Berlin, you get a lot of places here. Indeed, it's all clustered into a places category. So I'm also quite sure that they get this label from Wikipedia, because Wikipedia has some classification system where they put every page into a person, place, or whatever category, to have some special description of these items.
They find persons that are related to Berlin in some way. So I would have expected some more here, for example some politicians, but yeah, obviously there are other people here. Some other stuff: the President of Germany, the reunification is strongly related to Berlin. Buildings in Berlin, and organizations that are in some way related to Berlin, for example the Bundestag. Deutsche Bahn has their headquarters in Berlin. Daimler — so I think they have a large office in Berlin, but not their headquarters. So this is the idea of how it works. You get deeper into these clusters and then see some more places that are related to Berlin and navigate through this. For example, if you want to know more about Kreuzberg, you should be able to do something. Yes. Ah, this is how it works. So Berlin and Hamburg, how are they related? Well, they are kind of special, they have common buildings, organizations that are somehow related. So this is some kind of playing with the data to see some connections and relationships. So it might be a good idea if you're doing some brainstorming or want to see relationships between concepts you're currently doing research on, and yeah, this is what eyePlorer has to offer. All right. So I think it's a good time for a five minute break now, and see you then. So let's go on then. So now we're ready, after we've seen the applications, to get down to some algorithmic devices and see what the problem actually is. So there are issues that you will have to decide on in clustering — it's not all automatic, you don't just have a clustering algorithm that will do everything on its own. So there are typical questions like: how many clusters do you want, what should be the cluster size, is it a flat clustering or a hierarchical clustering — so do you want a navigation path from bigger clusters to smaller clusters, or should it be on one level? Should it be a hard clustering or a soft clustering, meaning should every document be assigned to one cluster and one cluster only, or could it be in many clusters? How do you judge the quality of the clustering, and of course, in algorithmic terms, how do you find the actual clustering, how do you derive a certain clustering? So for the question of how many clusters you have, we will in the following just use k as the number of clusters you want. And there are basically two different approaches. You could either say, I want five clusters or I want a hundred clusters. That means you define the k before searching for a clustering, and you only consider those clusterings that result in k clusters. Or you do not define some k but say, well, it basically depends on the objects in the database or the objects in the document collection. So if they are all very similar to each other, it's not useful to have two clusters or three clusters; it's just a single cluster. It should depend on their closeness to each other, and this is the only thing that it should depend on. So it's kind of a data-driven way of defining what the clustering is. And of course, the right choice is always problematic. Then the question is, do you have a flat clustering or a hierarchical clustering? So for example, if your documents are somehow spread in the information space and you just define clusters like that, then that would be a flat clustering. But you also could say, well, I want something that starts with all the documents, and then I break it down into one, two, three basic clusters.
And if I zoom in, I can see that these clusters here are kind of subclusters of what I expect. And the clusters get smaller and smaller until you are on document level and are left with, I don't know, ten documents that you might inspect manually. That would be hierarchical clustering. The good thing about hierarchical clustering is that you can navigate towards the document. So you have an entry point that considers all the documents in the collection, and then you decide, oh, I want to go here, I want to go here, and then I'm on document level and I only get the documents that I want — and probably this is the one. Okay, and I immediately prune all those documents that are irrelevant. A very nice way of navigating to your document. The question of hard and soft clustering basically means: for hard clustering, you assign each document to exactly one cluster. And it's quite a common way of actually doing it, so you have to decide for each document where it belongs. If you take a soft clustering, then basically each document gets a distribution over all the clusters. So it might fit into some clusters, it might not fit into other clusters, you know. For example, if you think about newspaper articles: yes, it might be an economy article, but it might also mention some countries, so it might also fit into a regional or geographical clustering. Or it might be concerned with some people, so also that would be a classification, or clustering, that you could use. And then, of course, every document might fit several purposes. It would be good to have different ways leading to the same document and not be restricted to: I know this document is about the US, but I will not file it under US, I will file it under economy, because it talks more about economy than about the US. So you have to decide, you have to make up some rules about where something belongs — and that might not be worthwhile. Soft clustering is therefore better suited for creating browsable hierarchies. So you don't have to decide everything in advance. And once you made one wrong decision — you looked for gardening, or for your apple tree, not under home but under hobbies or crafts or something like that — then you will never find your apple trees, because they are not accessible via the other paths. And one example of soft clustering that we already saw was latent semantic indexing, where the documents could be about several topics. So we had kind of a distribution of how much of each topic was in each document. Okay. And what is relevant. So this is what you do: hard versus soft clustering. As for the problem statement, what you have is a collection of N documents, and you specify the type of clustering that you want — hard or soft, okay, k defined beforehand or not, and so on and so on. And then you have some function that is used for assessing the quality, or the similarity, between different objects. It's usually called an objective function, a goal function. What you have to do is find a clustering that minimizes the objective function — or, depending on whether it's built on similarity or dissimilarity, you might want to maximize the objective function. So you want the most similar objects being together in a cluster; you want the least dissimilar objects being together in a cluster. So you maximize or minimize the objective function, respectively. And we don't want to deal with empty clusters.
We just say, you know, every document has to be in a cluster, and every cluster has to have at least a single document. Okay. And this problem of empty clusters is usually quite hard and reflects on some of the algorithmic properties, and we won't go into that; that's complicating matters more than is actually needed. So the question remains: what actually makes a good clustering? How can we evaluate the quality of clusters? And I think we already agreed that, first, it has to have something to do with a similarity function or a dissimilarity function. So the more similar the objects of a cluster are, the better it is. And of course, the more distinguishable different clusters are, the better it is. Okay. And this is the idea: what we want to have is a low inter-cluster similarity — so between clusters, the similarity of documents should be very low — and we want a high intra-cluster similarity, so the similarity within a cluster should be as high as possible. Okay. Compact clusters that are spread far apart from each other and are very much distinguishable. What? Oh, that's right, that's right — if you measure it as distance instead of similarity, it's just the other way around: within the clusters, low distance, between clusters, high distance. Okay. So if you look at the intra- and inter-cluster similarity, then you might find, for example, that this clustering over here, putting all the documents into one single cluster, is a bad clustering, because, I mean, the distance between these two documents is very high, and they are part of the same cluster. This is wrong. If you split it, it might become better, because now the maximum distance between documents in a cluster is much smaller than it was before. Of course, if you take this argument to the extreme, you could say, well, the highest intra-cluster similarity I can have is by building my clusters like that: the objects in each cluster are all amazingly similar to each other because each cluster is just a single document. But of course, this is not a sensible thing to do, because while the intra-cluster similarity is high, the inter-cluster similarity is also very, very high still — it's almost as high as the intra-cluster similarity. That, of course, gives you the impression of a bad clustering. Whereas here, if we just have these two clusters, then the inter-cluster similarity is quite low. Okay. This is basically the idea behind clustering. So it's always a trade-off between maximizing the intra-cluster similarity and minimizing the inter-cluster similarity. And depending on whether you say, well, the K is defined in advance — then you have to live with K clusters, whatever the data points — or whether you say, well, depending on the data points I can somehow determine the size and the number of the clusters, that leads to different algorithms that you can actually use. Common secondary goals are: you should avoid very small clusters — a cluster that just contains a single document is probably a complete outlier, a document that nobody thought about except for the one person creating it, and that is very strange — and you should avoid very large clusters.
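To make the "high within, low between" criterion tangible, here is a tiny sketch that measures average pairwise distances within and between two toy clusters (the points and the use of Euclidean distance are my own choices, not from the lecture):

```python
import numpy as np
from itertools import combinations, product

a = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3]])   # toy cluster A
b = np.array([[5.0, 5.0], [5.2, 4.9], [4.8, 5.1]])   # toy cluster B

# Average distance between pairs inside the same cluster ...
intra = np.mean([np.linalg.norm(x - y) for pts in (a, b)
                 for x, y in combinations(pts, 2)])
# ... and between pairs from different clusters.
inter = np.mean([np.linalg.norm(x - y) for x, y in product(a, b)])

print(f"avg intra-cluster distance: {intra:.2f}")   # small -> compact clusters
print(f"avg inter-cluster distance: {inter:.2f}")   # large -> well separated
```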
So if you have documents that are very similar to each other, putting them into a cluster is a good idea. But if too many documents are in some cluster, then returning the result set to the user might become a big problem. So maybe, if they are similar but not too similar, you might find a way of breaking down the cluster to allow for efficient retrieval, for efficiently returning the results to the user. But all those goals are kind of internal or structural criteria, and you have to figure them out. The external criterion that deals with the problem of quality is usually comparing it to a handcrafted clustering. What makes sense and what does not make sense? What should be together in one cluster and what should not? This is the idea behind DMOZ, the Yahoo directories and stuff like that, where people actually assign websites to certain clusters. And if you can reach this clustering by some automatic means, then this is a very good result, obviously, because it reflects the human understanding, which is good. So how do you find a good clustering? Well, of course, there's always the naive approach. You just cluster things in arbitrary ways — there's a certain number of clusterings that you can have — then you calculate the objective function, and you take the clustering that minimizes or maximizes the function. Straightforward, right to the point — and impossible. Because how many ways of clustering a document collection there are depends basically on how many clusters you want, so some certain K, and on how many documents you have that you can put together or put into different clusters. And the number of clusterings that can be derived for making K clusters out of N documents is given by the Stirling numbers of the second kind — and roughly, the Stirling numbers, whose exact formula I want to spare you here, are exponential in the number of documents that need to be clustered. So the more documents you have in your collection, the more ways there are, exponentially so, to shift these documents in and out of clusters. And this grows very, very fast. So we can really rule out the naive approach. It doesn't work. We cannot do it like that. We have to rely on some heuristics, and we will look at two kinds of clustering: we will look at a possibility to derive a flat clustering, and then we will look at a possibility to derive a hierarchical clustering. So the most common and most renowned algorithm for flat clustering is the so-called K-means clustering, or the K-means algorithm. Has everybody heard of the K-means algorithm? Uh-huh, one? OK, then it's at least news for two. But yeah, so it's the most important hard flat clustering algorithm. That means that we have a set of documents, a document collection, you know, and the clustering cuts out certain sets of documents, and we define the number of clusters in advance. And we usually represent documents as unit vectors, but it doesn't really matter. What we do is we try to minimize the average distance from the cluster centers. So if we have documents in some space, you know, like here, like here, like this, then what we want to do is derive clusters such that the centers of the clusters have minimum distance to all the members of the cluster. So this is the idea behind K-means clustering.
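Just to make the combinatorial explosion tangible without the formula itself, here is a small sketch that counts these clusterings via the standard recurrence for Stirling numbers of the second kind; the particular cutoffs printed below are arbitrary.

```python
# Number of ways to split n documents into k non-empty clusters,
# via the recurrence S(n, k) = k * S(n-1, k) + S(n-1, k-1).

from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    if k == 0:
        return 1 if n == 0 else 0
    if k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

print(stirling2(10, 3))    # 9330 clusterings for just 10 documents
print(stirling2(100, 3))   # astronomically large -- brute force is hopeless
```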
For the objective function, we can always rely on the centroid of a cluster, which is basically the average over all the dimensions. So if we have m dimensions — m can be the total size of the vocabulary, so, as in the vector space model, we have one axis for every term and we count on the axis how often this term occurs or whatever, you know. And what we do is, for each dimension, we take the sum of the documents' values with respect to this dimension and normalize it by the number of documents in the cluster. Doing this over all dimensions gives us the centroid of the cluster. Okay? And we do that over all the documents in the cluster. The same goes for the residual sum of squares. So if we have a document cluster, we take, for all the documents, the distance of the points with respect to the cluster centroid, compute the squared norm of it and sum it up, and this is the mistake, the total mistake we make for this cluster. Okay? Clear? So for the documents that we have, the centroid is basically the average of the document vectors with respect to all the different dimensions. And the mistake, as part of the quality of the cluster, is the sum of squares of the distances of each point in the cluster with respect to the centroid. So if all the points are exactly in the centroid of the cluster, then we have a perfect cluster, the residual sum of squares is zero. But as soon as they scatter around the centroid, this residual sum of squares grows, and since it's a sum of squares, outliers count more than objects that are very close to the centroid. Okay? That's the basic idea behind it. So in K-means clustering, we consider the quality of the clustering for the k different clusters that we want to derive as being the sum of the residual sums of squares over all the clusters. So we want the clustering that has the least residual sum of squares over all the documents. Okay? So we don't want one particularly good cluster while all the other clusters suck; on average, they should be good — every cluster should be as good as possible. And minimizing that value is basically our objective. So we're trying to minimize the average squared distance between each document and its cluster centroid, which basically means that we want to assign every document as closely as possible to the centroid of its cluster. And we know that there will be k clusters, but we don't know the centroids yet. So what do we do? Well, we use a multi-step algorithm that starts with a random clustering and then refines the clusters more and more, piece by piece, until it somehow stabilizes in a state where every document is assigned with the least quadratic error to a certain cluster centroid. Okay? So what do we do? We take so-called seeds, which are k documents or k points in the space — doesn't matter which — that form our centroids. And then we create k clusters, empty for the time being, and into each cluster we put one of the centroids. So now we have k clusters with one element each. Of course, these clusters have to cover all the documents in the collection. So what do we do? For every document in the collection, we determine which is the nearest cluster centroid. We have chosen k cluster centroids, so it's k distance computations until we find which cluster this document should be assigned to. Okay? Once all the documents are split over the clusters, then we can recompute the cluster centroids.
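As a small sketch of these two quantities on toy 2-d vectors (NumPy assumed; the numbers are invented):

```python
import numpy as np

# Three toy "document vectors" forming one cluster.
cluster = np.array([[1.0, 2.0],
                    [2.0, 4.0],
                    [3.0, 3.0]])

centroid = cluster.mean(axis=0)                  # per-dimension average over the cluster's documents
rss = float(((cluster - centroid) ** 2).sum())   # sum of squared distances to the centroid
print(centroid, rss)                             # [2. 3.]  and 4.0
```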
Because if we took some documents that show a certain bias with respect to the centroid, so the documents of the cluster may not be centered around the cluster centroid evenly, but they may show some bias into some certain direction or there may be big outliers, then the cluster centroid that was initially chosen randomly may not be valid anymore. We recompute that and check whether our clustering is satisfying already. If it's not satisfying, we start the process all over again with the newly computed centroid. Okay? Everybody knows what we are doing? Why are we assigning all the documents in the collection to the nearest center? To the nearest centroid? Yes? Exactly. Exactly. We want to minimize the residual sum of squares, and this kind of is heuristic for assigning them to exactly the cluster with the nearest centroid. The question, of course, is what is good enough? So we can either say, well, the change of the centroids when recomputing them was not so big, so the clustering will be probably the same. That could be one way of saying it's good enough. Or we have a maximum number of iterations, so we will do that five times and then it's done. Or we think about the residual sum of squares and if that is small enough, we just stop. What we could do, for example, is we just randomly, so I just go here and pick some cluster centroid like here and the other one here. What I'm doing now, yeah, that was totally randomly. What I'm doing now is I assign the documents with respect to the closest cluster. For example, this is close and this is close and probably this is close and this is close and this is close. You see what I'm doing. This is close, this is close, this is close, this is close. There goes to the next cluster. So this is what happens. I get this blue document cluster and the red document cluster. And as you can easily see, the centroid somehow shifted because if I take the average between all the things, it's probably kind of here and it's probably kind of here. So I recompute the centroid. What I'm doing, pop and the one goes here and the one goes here like expected. And now I'm kind of reassigning the different documents. And as you see now, by shifting the centroid here, some of the objects that used to have a very small distance now have a very big distance. It might actually come much closer to the other centroid or one of the other centroid. So this then would be reclassified as belonging to the blue cluster. Same happens here, probably here. So the assignment of document to the cluster after shifting the centroid may change. This is a basic idea. So what I'm doing is kind of like doing that a couple of times and after nine iterations, I get to a clustering where the centroid is here and the other centroid is here. So this is a cluster and this is a cluster. And with respect to the residual sum of squares, this is a good cluster. This is the minimal amount of residual sum of squares. And if I see how the cluster centroid moved in these nine iterations, I can see two things. Well, first is they did move. So the one clustering moved from the randomly chosen point here to some point down here, whereas the other moved from the randomly chosen point here to some point here. The direction is not always the same, but it may kind of like twist a little bit. And what I can also see is that at first it makes rather big steps. And then in the end it starts to get very small changes. It stabilizes. And this is what I say, it's good enough. 
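As a rough illustration of this procedure, here is a minimal k-means sketch in Python with NumPy. This is toy code, not the implementation used in the lecture; the number of clusters k, the stopping tolerance and the random seed are assumptions made for the example.

```python
import numpy as np

def kmeans(docs, k, max_iter=100, tol=1e-6, seed=0):
    """Naive k-means: docs is an (n, m) array of document vectors."""
    rng = np.random.default_rng(seed)
    # pick k random documents as the initial seeds / centroids
    centroids = docs[rng.choice(len(docs), size=k, replace=False)]
    for _ in range(max_iter):
        # assign every document to its nearest centroid (k distance computations per document)
        dists = np.linalg.norm(docs[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute every centroid as the mean of the documents assigned to it
        new_centroids = np.array([
            docs[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        # "good enough": stop when the centroids barely move anymore
        if np.linalg.norm(new_centroids - centroids) < tol:
            centroids = new_centroids
            break
        centroids = new_centroids
    # final assignment and residual sum of squares of the resulting clustering
    labels = np.linalg.norm(docs[:, None, :] - centroids[None, :, :], axis=2).argmin(axis=1)
    rss = ((docs - centroids[labels]) ** 2).sum()
    return labels, centroids, rss
```

Since the result depends on the randomly chosen seeds, a common trick is to run this a few times with different seeds and keep the run with the smallest residual sum of squares.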
If things don't change too much, if it's just one document jumping around and back, there's no point in refining the clustering any further. I can just stop. This is the idea. So after nine iterations we kind of reach a very stable point, and this is what we want to do. Okay? K-means clustering is clear? The basic algorithm? Okay. So there are some variants and some extensions of the K-means clustering. And basically this is all that we want to say about partition clustering. So you start with a guess for the K clusters and then iteratively refine the clusters. What can be done is so-called K-medoids. Centroids are kind of virtual objects, virtual points lying somewhere in space. Medoids are existing objects. And if you have a document that is very close, that is closest to the computed centroid, then this document as point of reference leads you to K-medoids clustering. And the idea is, why should you do that? Why not use the centroids? Why use the medoids? Any ideas? Well, the only difference is that the medoids are existing, whereas the centroids are not existing usually. So if you want to visualize your clusters by retrieving a document and saying, well, this is kind of like the interesting document, this is the document that is representative of this cluster, then K-medoids is a good thing. If you're just using abstract ideas, then you can use the centroid. Good. Then there's the fuzzy C-means clustering, which is kind of exactly the same as K-means, but it's a soft clustering. So you basically can belong to several clusters. And there's model-based clustering where you assume that the data has been generated by some random distribution around some unknown seeds, that is, unknown centroids, you know. And basically what you try is a maximum likelihood estimator to find those K centroids that are most likely to have generated the observed data. So this doesn't kind of punish outliers so hard, but rather looks at where the masses of the clusters are. And the most probable centroid of a cluster is considered to be in the middle of these masses. This is the idea of maximum likelihood. Okay? Everything clear? Questions? Nope? Likelihood? Yes, the likelihood, probably. Yeah. The expected value. Yeah. Good. Then we go into the detour to see some K-means clustering in action. So as K-means clustering is probably the most popular, the most widely known clustering technique, there are a lot of libraries available for doing clustering. So I've brought Matlab here, which is able to do this quite easily. Does any one of you know Matlab? All right, that's a majority. Okay, let's have a look. It's just some numerical computation tool in which you can interactively work with the data and draw some nice pictures. And this is what we're going to do here. So I'm now generating first a random data set consisting of 200 points. Let's do this and look at it. All right, so when generating the data, I used four random seeds to get four nice clusters here. This one, this one, one here, and one there. So a perfect clustering would in the end be able to find exactly those four clusters that you can see here. So I also randomly chose four initial centroids for the K-means clustering. These are these crosses in the middle and these lines. It's a so-called Voronoi diagram. These are simply the lines which divide the clusters. So all points on this side have this centroid as the nearest one and thus belong to this cluster here.
And the same way for the other clusters. So now we can do some iterations of the K-means algorithm with our random thing here. And we can see that our centroids have shifted a little bit into the right directions and become close to the center points we would expect from the picture alone. So if we do some more iterations, as you can see, this isn't a lot of code at all. So if you have some data and you want to cluster it in some way, maybe Matlab might be a good way to do it. So again, we have shifted a little bit from step two in the upper right corner to step three in the lower left corner. We are quite exactly where we want to be. So the colors indicate which data source actually generated the point. So here at the border there are some misclassifications, but of course there's no chance for an algorithm or even a human to know which seed has been generating the point, and it's perfectly sensible to do this classification here. So let's do some more iterations and have a final look at our clustering. And actually, after all these iterations, the algorithm has been able to perfectly find out which point has been generated by which random seed. So at least for these simple types of data distributions, K-means works very well. We can now take a look at the centroid movements that we have been able to observe. So beginning with this picture, ending here, and these other corresponding movements. So no matter where we start with our centroids, in the end we finally converge to the right centers. So actually, one can prove that K-means always converges, but in general only to a local minimum of the residual sum of squares, not necessarily to the best possible clustering. But usually in practice it works, and that's what counts in information retrieval. All right, this has been K-means and now we go on with hierarchical clustering. Exactly. So the last thing we want to do today is kind of like look at a different kind of clustering, a hierarchical clustering, where we don't initially state how many clusters there will be, because that very often is kind of like very difficult to say. And we already, well, basically jokingly, said, well, the best clustering in terms of the intra-cluster similarity is always taking every document as a single cluster; of course that will lead to a large number of clusters depending on your collection size, so that may not be the right way of actually doing it. But this is what you could do, and then you could kind of like merge all the clusters that are similar until you arrive at one big cluster containing all the documents. Or you could do it the other way around. You could start with one big cluster and divide it into two clusters where you kind of like have the most similar objects in, and so on, until you come to the stage where finally every document is a single cluster. And the first approach is called agglomerative or bottom up clustering. I start at a very low level, every document is a single cluster. And then I start merging some of the clusters into bigger parts, agglomerative or bottom up clustering. The other way is kind of splitting, so-called divisive or top down clustering. So I start with one big cluster and then I look at what would be nice to divide. So here's a big space, you know, like a big distance between them. It should be in different clusters. So then I'm kind of dividing it into two clusters. Those are the basic ways of doing it.
What you need to assume is that you have some measure of similarity between the different clusters. A simple agglomerative algorithm would be, you know, for each document you create its own cluster. You start with as many clusters as you have documents in your collection. And now for every pair of documents you compute how big the similarity between those documents is, or what the distance between those documents is. And the ones with the highest similarity or the lowest distance between them get merged together to form a single cluster. So basically computing the similarity between every pair of clusters is kind of very strenuous. But you can improve on that a little bit at least. So what you get is basically what is called a dendrogram. Dendron is Greek for tree, because it's like a tree, it has a root at some point, you know, and then it kind of diverges into different branches, you know, and so on. So if you go upside down it looks like a rooted tree. And you start with the individual documents and, depending on the measure of similarity between the documents, you form clusters of them. For example, the documents that are, well, taking the titles here from the Reuters collection, for us not recognizably different. So they are kind of like different articles but carry the same title. They show a similarity of, well, yes, 0.85 or something like that. And they are the most similar documents in the collection. And thus they will be united into a single cluster. And if we do that for all the pairs of documents, we might find out that there are other documents that might be united at some point, okay? And then we might find out that also these two share a certain similarity, and at some point we will arrive at the root, and this is one big cluster. Okay? Clear how it works? So this example has been built on the Reuters collection using the cosine similarities. And then we kind of like can see, for example here, "FED holds interest rates steady" and "FED to keep interest rates steady". So it's kind of a similar topic, only a slightly different wording. And if you take the "holds" here and the "keep" here, you know, like then the cosine similarity is around 0.68. And this is exactly at 0.68 where these documents are merged. Okay? The concept of a dendrogram is clear. Where are the clusters in the dendrogram? And how many clusters are there? In the end it's all just one big cluster, right? Yes? Yes? So when you kind of like have a split like here, then these two documents form a new cluster, that's true. But how can you decide how many clusters you have or what is a good clustering? As you go down the clusters. So it's a question of the number of steps that you did, or what is it? Especially if you think about the quality of the clustering. Exactly. So this is kind of the quality axis, isn't it? The higher up you go on this quality axis, so the more you go into that direction, the lower your number of clusters becomes, because you're putting more and more dissimilar documents together into clusters. So using a threshold, for example here, we have the 0.5 or something. We want at least to have a document similarity within clusters of 0.5. Whatever it may be, 0.6 or whatever it is, you know. And what we do is we cut on this level through the dendrogram and see the clusters which are basically intersecting this line.
And these clusters may consist, like in this case, only of one document, or they may consist, like in this case, of several documents, depending on whether something merged below the 0.5 line already, you know. And this is the idea behind the dendrogram and using the threshold for determining the quality of the clustering. Because the more, maybe we can take that away, the more instances you have where something on a very high level in similarity already merged, the better it is, the better your clustering will be, because the intra cluster similarity will be very high. The higher you go up, the less intra cluster similarity there will be. And so it does not depend on the number of clusters or any number of clusters that you defined previously, but it rather depends on the data set that you have. Depending on the data, on the documents, they might merge early in your dendrogram or they might merge late. And if you look at this here, for example, then most of the clustering, most of what is actually interesting in terms of clustering, for clustering all these different documents down here that you don't want to look through by hand, is happening somewhere here, you know. So this is where you get a sensible number of clusters, like 10 or 12 or whatever. And this is already at a similarity below 0.2, quite low already. What you would hope for is that clusters merge early. But it depends on your, obviously it depends on your collection, on the similarity between objects in your collection. So for example, if you take different sizes, you define different ways of, well, different thresholds for your level. So then for example, this point here gives a cluster of size 3, because these three documents are put together into a single cluster up here. Okay? And this line here gives 17 clusters, because 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17. Because it intersects the dendrogram in 17 points. All the documents down here fall into one of the 17 clusters, whichever it may be. Okay? This is basically the idea behind the dendrogram. Yeah, we were talking about the similarity of clusters and the quality measures for good clustering or bad clustering. And the question, of course, is how to compute the similarity between clusters. And we can translate this question into, well, how to compute the similarity between documents. Because if I know how similar documents are, I can take the residual sum of squares with respect to some centroid and then say how good a clustering is. The lower the residual sum of squares, the better the clustering is, obviously. And there are different ways of doing that. And basically it comes down to four ways of doing it: single link clustering, complete link clustering, centroid clustering, or the group average clustering. And if we, for example, go for the single link clustering, then the similarity of two clusters with some certain centroids is given by their most similar members. Okay? So in this case, we don't consider how far these are apart, no? We don't want to know. But we will know the two most similar things. This is what defines for us the similarity of the total clusters. Okay? So it's how close the clusters come to each other.
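To make the bottom-up idea a bit more concrete, here is a naive agglomerative sketch in Python with NumPy. Again, this is toy code under some assumptions: documents are unit vectors, so the dot product is the cosine similarity; the cluster similarity is the single-link criterion just described; and the threshold of 0.5 corresponds to cutting the dendrogram at that similarity level.

```python
import numpy as np

def agglomerative_single_link(docs, min_sim=0.5):
    """Naive bottom-up clustering: docs is an (n, m) array of unit-length vectors."""
    sims = docs @ docs.T                         # pairwise cosine similarities
    clusters = [[i] for i in range(len(docs))]   # start: every document is its own cluster
    while len(clusters) > 1:
        # single link: similarity of two clusters = similarity of their most similar members
        best_sim, best_pair = -1.0, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                s = max(sims[i, j] for i in clusters[a] for j in clusters[b])
                if s > best_sim:
                    best_sim, best_pair = s, (a, b)
        if best_sim < min_sim:                   # nothing merges above the threshold: cut here
            break
        a, b = best_pair                         # merge the two most similar clusters
        clusters[a].extend(clusters[b])
        del clusters[b]
    return clusters
```

Exactly as said above, this pairwise comparison of all clusters is quite strenuous; this naive version is far more than quadratic in the number of documents, which is why real implementations use smarter bookkeeping.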
If you use single link clustering, it's kind of, kind of, the idea is if I take new points, if I have two clusters that are very close in terms of single link clustering and put in one random point, it could be kind of arbitrary where the points get classified, where the points get into which cluster the point falls, because the clusters are very close together. However, one of the problems is, of course, that single link clustering produces often long chains, because if I start clustering and say, well, basically, I want to put all the documents together that are beyond a certain threshold and quality, then I can say, well, if I have a number of documents here, then these documents are very far apart. But by using single link clustering, I can say, well, these are actually quite close, so they belong to the same cluster. These are quite close, so they belong to the same cluster, and transitively, also the first point belongs to the same cluster. These are very close, these are very close, okay? And finally, the edges of the chain, the ends of the chain, may be apart very far, but it doesn't count in single link clustering. I only take the closest distance between clusters or between documents, okay? This is why single link clustering may produce long chains, but on the other hand, it's kind of like a good idea of assessing how much space there is between clusters. How close do clusters come each other? Good. Other way of doing it is complete link clustering, where you say, no, no, no, no, no. I want the similarity between clusters of their most dissimilar members, so I want the furthest distance there is. The maximum, not the minimum, would be in this term, the distance over here. And the problem is that complete link clustering really is sensitive to outliers, because if you have one point that is not really very close to the cluster center, you'll get a huge complete link dissimilarity, which also is not what you would expect. And when you cluster points with complete link clustering, then you basically say, well, for my documents here that formed the chain before, and that with the complete link, with the single link clustering, where I always take the minimum between points of the cluster, would basically be one clustering. I say, well, okay, these belong to the same cluster because they are quite close, and these belong to the same cluster, or might belong to the same cluster because they are quite close, but still I have to take the complete link, so also this would have to be quite close. So I can't put all these three things into the same cluster, but have to decide, no, no, no, no, no, this is one cluster because this does not work. And then I can do a cluster here and do a cluster here probably. And the point is that every document that is an outlier, so documents that are far apart, forms its own cluster, because there's nothing that is very close. Might lead to a splitting into many different clusters usually given by outliers, so containing only very few items, okay, basic idea of complete link clustering. Good, then we have the centroid clustering where we can say, well, basically I don't care about where the clusters come very close to each other, or if there are outliers, the mass of the distribution is what counts. So what I'm doing is I take the centroid, look, I take the centroids of the clusters and basically consider the centroid, the distance between the centroid as a measure of inter cluster similarity. 
The further apart the centroids, the further apart the masses of the distributions, the better the clustering is, okay. And what you basically do is you take the similarity between every pair of points of the clusters and you just average it, and this is basically what gives you the inter cluster similarity then. The problem really is kind of like that if you use dendrograms, you might find that other clusters could be fused at some point while improving the quality of the clustering. So it's not a monotonic decrease in quality anymore. And this is kind of a strange thing, because the clusters are merged after the fact and the quality suddenly improves, which is kind of very unusual, and you might not want that. What you can also do is kind of like the so-called group average clustering, so the similarity of two clusters is basically given by the average of all the similarities: you take the similarities between the clusters and you take the similarities within the clusters, and then you, well, you average it and that's it, okay. The point is, if you have points within clusters that are close together and different clusters that are far apart, then you will have a good clustering. However, since you have to take the pairwise distances between all the points in the clusters, it's a very expensive computation that you have. Okay? Good. These are the different ways of computing the similarity of clusters. Well, that was agglomerative, that was kind of merging clusters that are very close together. How does it work top down? How does it work if we start with one cluster and then go down by dividing clusters? And I will not give you the details here, but basically it's kind of very similar to what we've done agglomeratively. So, for example, we could do a two-means clustering on the cluster containing all documents. Then we would have our first split, okay? And for every one of those subclusters, we then do another two-means clustering, which will basically split it into four clusters. And then we would get the dendrogram upside down, okay? This is what you could do. Again, you might have some constraints, very small clusters should be avoided, blah, blah, blah. Some structural constraints, as you would have. Are there still questions on divisive or agglomerative clustering? Everybody's clear about what they can do and what they should do? Yes. So, the question is whether the methods of measuring the quality are kind of, you know, used for doing the clustering or for just determining the quality afterwards. In a way, it's both, because if you use a k-means clustering, you know, like you have to assess somehow how good the cluster is. And that is what you do with the distance to the nearest centroid or with the distance to the nearest medoid. So, you need a measure for this distance. And of course, you can reuse this distance for finding out how good the clustering is. So, for example, if you have a k-means clustering, it could be a four-means clustering, it could be a five-means clustering, it could be a six-means clustering, resulting in different clusters because you have different numbers of partitions. Still, what is better? Does a four-means clustering fit your data distribution better or a five-means clustering? How do you know? So, you have to compute one number or kind of like a set of numbers to find out how good your clustering actually is using the algorithm that you decided on. And this was what we did before.
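To summarize the four cluster-similarity measures in formulas (my notation, as a rough summary of the above, with $N_i = |C_i|$ and $\vec{\mu}$ the centroid as before):

$$
\begin{aligned}
\text{single link:}&\quad \operatorname{sim}(C_i,C_j)=\max_{x\in C_i,\;y\in C_j}\operatorname{sim}(x,y)\\
\text{complete link:}&\quad \operatorname{sim}(C_i,C_j)=\min_{x\in C_i,\;y\in C_j}\operatorname{sim}(x,y)\\
\text{centroid:}&\quad \operatorname{sim}(C_i,C_j)=\operatorname{sim}\bigl(\vec{\mu}(C_i),\vec{\mu}(C_j)\bigr)\\
\text{group average:}&\quad \operatorname{sim}(C_i,C_j)=\frac{1}{(N_i+N_j)(N_i+N_j-1)}\sum_{\substack{x,y\in C_i\cup C_j\\ x\neq y}}\operatorname{sim}(x,y)
\end{aligned}
$$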
So, you could use single-link clustering or complete-link or group-link, group-average, blah, blah, blah. This is kind of the idea behind it. And depending on what measure you use, you're punishing or rewarding different things. So, if you use a single-link clustering, you're kind of like punishing clusters that are very close, that share some items that are very close together. If you do complete-link clustering, you reward those clusters that are very compact, okay, and so on. Good? Good. We are skipping this before we are way out of time. We are way out of time. So, basically, if you take the hierarchical clustering, everybody can do it at home. Then you start with a number of points and you kind of like build your clustering. For the evaluation, how do you evaluate the clustering? We use some internal criteria. We use the centroid distance or whatever it may be. Or you could use external criteria. You compare it to a manually crafted clustering, a kind of like a table of contents. So, for example, one prominent one is the so-called RAND index. You look at the pair of documents and you look at what percentage of pairs are in the correct relationship. If a pair of documents, which is kind of considered very similar, is in the same cluster, then this is a true positive. This is what you would need. If the pair is correctly contained in different clusters, this is a true negative. If the pair is wrongly contained in the same cluster, it's a false positive. And if the pair is wrongly contained in different clusters, it's a false negative. And with these four things, you can do precision recall as we did before. And this basically gives you a comparison of how good does your clustering algorithm reflect the manually assigned ground truth that you might be interested in. Okay? Understood? Good. Then we're closing for today. Next lecture will be on relevance feedback and we will see where all this clustering goes to, like classification, how do we deal with new documents and how can we say what the interesting part of the document is. Okay? Any questions left? No questions answered? Good. Thanks for your attention.
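For reference, the Rand index described at the end of the lecture, with TP, TN, FP and FN counted over all pairs of documents as above, is simply the fraction of pairs that are in the correct relationship:

$$ \mathrm{RI} \;=\; \frac{TP+TN}{TP+FP+FN+TN}. $$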
Well, it's my pleasure as always this time of the week to welcome you to a new lecture in information retrieval and web search engines. I hope you enjoyed last week's excursions, or vacations if you didn't take any excursions. And today we will be back with the last topic of our lecture series, which is web retrieval. So today we really want to dive into the web and see some of the interesting issues that come up when doing information retrieval or search on the web, and how it is different from doing information retrieval or search on text collections. And these differences will spawn some interesting algorithms and some interesting methods that we will investigate in the next, I guess, three lectures. And as always we first want to start with the exercises, and who did the exercises? Yay! Excellent! So let's talk about it. Yeah, the topic of last week or last lecture was support vector machines, and with them the concept of a maximum margin classifier. So why is it a good idea to use a maximum margin classifier? It's your show today, yeah. It works, yeah, that's the best reason you can ever have in information retrieval. So basically the idea is, if you have some training examples here from two classes, then it's quite reasonable to expect that if you get new training examples then they would lie close to their respective class, and so drawing a line in the middle should be the best thing or the safest thing you could do. So quite intuitive, but as you said there are some more arguments on why this is a good idea, but that is basically the most important one. Alright, what about the kernel trick? Very famous exam question, and oh so many people have failed with that, but you shouldn't have to. Why should you want to do that? Exactly. So, we want to work in a high dimensional space with our data points, but we don't want to transform all our data points explicitly into this new space because it's typically very high dimensional. So, high dimensional means thousands and thousands of dimensions, which wouldn't be too efficient to work with, but in support vector machines, if we want to use them on the high dimensional space, we only need to compute scalar products between the mapped vectors in this space, and with our kernel function, or with kernel functions, it's quite easy to compute this scalar product because it's a simple function of our original vectors. Therefore, we can do all this mapping and computation very efficiently without doing the mapping explicitly. So, this could be done very fast then. All right, we also talked about learning to rank, which was some kind of version of support vector classification. So, what is it and in which way is it similar to SVM classification? Anyone else? Okay, exactly. It was about pairs, and the idea is to compute a ranking of documents from a collection of those pairs. So, for example, some user says document one is better than document two and document 15 is better than document seven. And from this data, you want to compute a ranking of all documents, and the idea is to compute a function that maps each document representation in our space to some score, which is then used for ranking. And the function looks like that. So, x is our document. Then it's simply a linear weighting of all the coordinates of these documents. Very similar to the support vector machine setting where this is our classification function. And in learning to rank, you have the situation that you look at the differences between the document vectors.
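As a compact reminder in formulas (my notation, a sketch of what was just recapped rather than the exact slide notation): the kernel trick says that the scalar product of the mapped vectors can be computed directly from the original vectors, and in learning to rank the linear score and the pairwise preference constraints look roughly like

$$ K(\vec{x},\vec{y}) = \varphi(\vec{x})\cdot\varphi(\vec{y}), \qquad \text{for example } K(\vec{x},\vec{y}) = (\vec{x}\cdot\vec{y}+1)^{d}, $$

$$ \operatorname{score}(\vec{x}) = \vec{w}\cdot\vec{x}, \qquad \vec{w}\cdot(\vec{x}_i-\vec{x}_j) \ge 1 \text{ if document } i \text{ is preferred over document } j, \quad \le -1 \text{ otherwise,} $$

up to slack variables for pairs that cannot be satisfied.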
So, then you have the constraint that the weighting vector times the difference should be either larger than one or smaller than minus one depending on whether the first or the second document is perceived as being better by the users. And this can simply be inserted in our support vector machine problem statement in this constraint list and can be solved in the same way as it could be done with ordinary points here. So, in support vector machines, you would have just one point here, but the difference of two points always is a single point. And so, the computation is all the same, but the problem formulation is just a bit different. So, you can see support vector machines can also be used for slightly different problems like learning to rank and some other ones. Okay, last question. We have also talked about the problem of overfitting. What is it and what can be done to avoid it? Yes. Hmm. Yes, we have seen this is a quite complicated line which fits the training data perfectly. So, but yeah, what constraints we could formulate on this separating line? This is one way to deal with overfitting in the support vector machine setting. Yeah. Hmm. So, I think in this case, it should be much better to use a simple dividing line between the class and not such a complicated one as the one that also classifies this point correctly. So, then we can just give away some penalties for each type of line. For example, for linear classifier, it gets no penalty and some very complicated high-dimensional kernel classifier gets a high penalty. And then we can trade classification error where this complexity of our kernel function or mapping we used and then arrive at a good compromise or a good trade-off. Another way is to use training and test sets. Any idea what that was? I think you have one. Exactly. So, you check the validity of your model on a test set you did not use for training from which you know the correct classification and then you simply take the classifier that performs best on the test set. So, because from you, we would expect new data points, unknown data points to have similar properties as the points in the test set. And so, performance or good performance on the test set should imply good performance on any new data we could get. So, quite easy. All right. That was the exercises. And to conclude the information retrieval part of our lecture, I want to show you what products are available for or using information retrieval techniques in companies. So, Google, one of the major players in web search also offers a solution for companies. And basically, they sell these nice boxes here in different versions which you simply can plug in your network. Then it starts some kind of web server. You have a nice administration interface. And then you can simply say where your documents are stored on your company's servers and maybe which user names and password are needed to access these documents. And then this little machine automatically indexes all your data you have in your company and provides some kind of unified search interface. So, as I said, there are different versions. So, if you have a rather small companies, you can use the Google mini version and it scales up to 30 million or even more documents. So, we can even connect different of these boxes together to scale up performance. So, this is basically what Google can offer to companies when it comes to managing their documents. So, mostly it's about Word and Excel files or emails. 
And this can all be done with these Google search appliances. So, I've taken a close look at what they write in their ads. And simply they say it's fully automatic indexing. So, the key point is that you won't need an administrator that takes care of all this stuff and writes configuration files. You simply plug this thing in, tell it where your documents are stored on your servers in your company. And then it takes care of everything. So, it has a nice web-based search interface for the users. A lot of file types are supported beyond Word and Excel and all this basic stuff. You can also connect it to relational databases and existing content management systems that you might use to manage your website. And also there, the content gets indexed. You can access it from software via API calls. So, if you want to provide a new search interface to your users from an in-house application you developed in your company, then it's also possible to connect to the search appliance and use the information it gives you and present it to your users. And what's the best thing? You get a free Google t-shirt when you buy one of these nice boxes. So, clearly the Google t-shirt is the most expensive thing here. So, the Google Mini starts at $2,000 and the other versions start at $30,000 and go up depending on how many documents you have and how much computation power you need. So, it could be pretty expensive, but maybe with the larger boxes you get two or three t-shirts for all the executives in your company. So, actually it's pretty nice of Google. Okay, of course there are also open source projects that offer search capabilities, most notably the Lucene project from Apache. You could take a look at it at home. It's just a bundle of Java classes that provide all kinds of data conversion, index creation and search with different models, vector space or probabilistic models. It also includes stemming. So, if you ever want to build something like a search engine for yourself, don't try to write it all by yourself. It's all there. You just plug it together and usually it works pretty well. So, that's the Lucene project. Of course, there are many other products out there that can also be used, but Lucene, from my own experience, is the most mature one, I would say. So, start with this and if it fits your needs, then it's good. Otherwise, you need a specialized solution, but Lucene can cover most of the situations you will encounter in a typical company setting, or even at home: if you have a website and want to provide some kind of very specialized search functionality, then Lucene is what you want to have. All right. That was the first detour for today and now we start directly into web search. Yes. So, web search is somehow different from information retrieval, or so we think. Now, actually looking at a few statistics, if we see what people are doing and how the internet or the use of the web affected our daily lives, it has made a difference. I mean, the web was founded in 1995 or around 1995 and started off as a simple telephone book. And now what we do with it covers a lot of the areas that affect us most dearly. So, for example, if we think about healthcare, then we see an increase of people using Dr. Google, looking up symptoms, looking up ways of treating illnesses or preventing illnesses, or prophylaxis, and what can I do to stay healthier in my old age, or something like that. These are topics that people are affected with. Same goes for job issues.
So, a lot of the advertising for jobs, a lot of the application for jobs is done via the web. And actually, if you compare it, I have here the numbers for 2001 and 2005, you see it's growing quite a lot. So, more and more of the issues are done during the web. Of course, hobbies, shopping, think about Amazon buying books online or buying music online, iTunes, stuff like that. That is really affecting our daily life. We don't have to go to the store anymore. We just get it delivered the next day or simply download it. Think about the Kindle editions of eBooks or direct music downloads from iTunes. We actually get the content immediately, everywhere and at all times. That's quite convenient. It's exactly this convenient that supports the web or makes a claim for the web. And if we see how many people actually use the web, so is it really a thing for the young people or do actually also the old people or elderly people use the web? We can see that during the recent years, the number of all age groups actually increased. So, more and more people of all age groups are using the web, where it started out as something that was usually done by young people here and very little done by the elderly people here. We now in 2010 actually have a very strong percentage of people of 65 years and older using the web on a regular base. And this is almost 50% of the age group. So, almost every second, old person uses the web in a regular fashion to look up things, to shop, to get assistance for some issues, to read about hobbies, to connect to other people. This has dramatically changed our life actually. So, if we see or if we have a close look at what people actually do when using the web, then we can still see some changes. So, for example, those things like the online auction blogs, virtual worlds or something, that happens in a pretty limited way in the old age groups. That happens very much more in the younger age groups. But of course, the typical services like email, web search, buying something, getting more information about health related issues, traveling, keeping up with the news. These are typical tasks that especially the older generation uses because it's so convenient and it's easy to use. In the younger community, you have a lot of social networks, a lot of instant messaging to just keep up with friends, but also listen to music and stuff like that. So, there's a lot of things that are kind of daily business or done on a daily basis for the younger people. And of course, as we would expect, when you get older, you get more conservative about things. So, one good example, for example, is the online banking that is very heavily used for young people. Whereas, if we look at the older people, banking online is just about, well, a third of the people use it. It's a matter of trust, basically. Kind of still nice to go to the bank and talk to the clerk and get the money directly from your little booklet where your savings are deposited. And that seems more convenient. The hassle of actually going there, of physically going there, is not so bad compared to the information that you get, the little chat that you have, and the trust that you still have in the institution. But that is changing. And the interesting thing when you do age statistics is that these people here, the 18 to 33 old people, will very soon be here. And they will take over their habits. Okay? So, they will open up more applications. They are digital native. They're used of doing a lot of things with the bank. 
They are used to trusting websites. Whether this is a good thing or not, I mean, I'm not quite sure about that, especially with all these social websites and Facebook and all the scandals going on. You know, like, maybe it's not so bad to be a little bit more on the conservative side. But still, you will see that with the demographic change, also the habits will change. And what is commonplace today for young people will be commonplace in 30, 40 years for old people, because they are the same people. Okay? Good. So, the web is kind of essential, and it's very essential that you can find something in the web, because I can still remember the times when you had basically a map of the German internet servers, the German web servers, like the network of German web servers, you know. There was, I don't know, many of you will know Leo, probably, from the Technical University of Munich, which is basically today more or less a translation tool, where you look up English phrases or something. Back then, Leo was the abbreviation for 'link everything online'. And that used to be the linkage table of the German web servers. And they had a map, kind of, where you could say, okay, I want to go to Hanover in terms of web servers, and there's one server, and I click it. And then I have the content of this server. And of course, it immediately exploded. That was the first two, three years, you know, when only universities and governmental institutions had web servers. But as soon as the internet was rolled out, so DSL and you name it, all the internet-at-home technologies, everybody had a web server. Not everybody, but quite a lot of people. So linking everything online became impossible. And that was the point when the web search engines started. So the content was not easy to find, and it was not like in your library where you said, okay, this server is for all health related issues, and this server is for whatever, but it's distributed all over the web. And the interesting thing, of course, is you need to index it somehow, because who would create online content if nobody can find it? It just makes no sense, you know, like, I mean, how many people can you tell where this very important information is? You want people to find it somehow. And in the beginning, there was DMOZ, which was a directory for the web; also that pretty much exploded very quickly, and web search became the possibility to replace older services. So what is it important for? One important point is collaboration. How can you work with other people if you can't share information in an efficient way? And sharing information is about putting it somewhere on the web and allowing other people to find it there. And a lot of what's going on in, for example, open source projects is that you don't instruct people to go somewhere and do something, but somebody does something that is similar to what's happening in some project, and when searching for more information or more tools for doing the actual thing, they will find: oh, this is very similar to what I'm doing. Let's contribute. Okay? And this is why web search matters, and especially also the social web, where you have the mailing lists and the newsgroups and the Facebook communities, and you name it; you know, you have all these kinds of user groups that share some common interest and that exchange information on the web. And another interesting thing is that web search basically is a free service, as we know it. It's very often called the Google business model. So it's for free.
Google does it for you, and they buy all the servers so you can use them. So where's the catch? Why should they do that? How do they earn money? And this is the interesting part: only by advertising. And advertising only makes sense if I can advertise in a focused way. I tell people about some product, about some service, who are basically, or could basically be, interested in that product or the service. And this is something that can be done very well with web search. So if somebody puts in some words that show he or she wants to buy, I don't know, like a new netbook, then why not let vendors place ads and pay for informing the people about what they wanted anyway. This pays for your service. This pays for your infrastructure. And then you can offer the web search for free. And even if some users won't buy stuff or are just interested in knowing something about their hobbies or sharing some photos of the last holidays or something like that, you know, you can finance it through basic web search ads. This is kind of why it's interesting. And if we want to imagine the web as a couple of components, then this would be the way that I kind of imagine it. And this will be the way that we see it, because we will work our way through most of these topics on this schematic little image here to see what we can do. On one hand, there are the users. And they couldn't be more heterogeneous. It can be everybody, ranging from small children to the elderly. There can be professional users. There can be users that are only trying to do leisure tasks on the web. It can be everything. You cannot really predict what they want to do. Then on the other side, you have the web, which is basically a lot of documents, a lot of information that does not need to be static, actually. It doesn't have to be a document as in this document here, you know, like a sheet of paper or a digital sheet of paper with some text on it. They can be generated on the fly from information from web databases or whatever, or by scripting languages. So generating information in a flexible way is definitely one of the possibilities of the web. But even if it's generated, it has some address. It can be addressed somehow. And this is what these pages here are like. And there's another thing to this collection of web documents. And that is, they may point to each other. They may be interlinked with so-called hyperlinks. So I want to surf to a different page. There might be a path through the web where I can go, well, first here and then here. And I'm changing between sites. I'm changing between different information that is offered, based on a path that was somehow anticipated by the creators of the websites that created the hyperlinks. Okay? But now we have the users on one hand and we have the web on the other hand. And there's a barrier between them. Because in the early years of the web, it was kind of like the user knew a site, so he could address the site. And there was some basic navigation, like the DMOZ directory or the 'link everything online' map of servers, you know, where you could find something. That's no longer scalable, that's no longer working, because the web is huge. So this direct access on websites happens very rarely. It happens, it still happens with some websites where you say, okay, I type in Facebook.com, where do I go? Facebook.com, wonderful, you know.
But more often than not, you will not start your surfing session by typing in an address, or the address that you type in will be Google.com or Yahoo or Ask or one of the many other web search engines. And this is kind of the normal way that you go. There's a user interface that has kind of a search button. So it doesn't really matter what the name on top of it is. They all work more or less in the same way. You have a simple keyword style search, you type in some keywords, you hit the search button, and then in the back end, retrieval algorithms will start to work. And they will do something very similar to what we did with our document collections, with our normal text documents. So there's a lot of IR that is working here, but there's also some different stuff. And this different stuff will be the topic of the next couple of lectures. But of course, these retrieval algorithms do not work on the web as such, which is very similar to what we did in information retrieval. We were building inverted indexes and then working on these inverted indexes. Also for the web, we need to do the same. We need to pre-process it. Okay. And how do we pre-process it? Well, basically we have to find out what's in the web. The so-called web crawler does that. So looking at the sites and looking at the different pages, gathering the information, and trying to index or to find out what the information is about. And this information is given to some indexer, which can be seen as one big inverted index for now. And this is actually where the algorithms work on. So this is basically the big indexer. And now we see what we need is somebody who designs the interface. What we need is somebody who maintains the retrieval algorithms. What we need is somebody who maintains the disk space and buys the disks where the index is situated. Somebody who maintains the web crawler, that costs a lot of money. And therefore, you do need some business model. You do need to finance it somehow. And also that is a topic that we will go into, although very, very briefly. Okay. So these are the basic components of a web search engine that you have to understand to see how it actually works. And we've seen some of the issues. So for example, what the index looks like, what the retrieval algorithm looks like, you know. We have seen, or we have, a basic idea of how some of these things might work. But we will see that in practice, in web search, it's a little different. Some things can be transported, some things can't. So this is actually the first part of our web, of our lecture. What we want to do today is basically see the differences and the commonalities between web retrieval and classical IR, to see what's different, what can be taken unchanged. Then we want to look a little bit at what the web looks like. So the structure of the web, can we exploit that somehow? And then in the end, we want to deal a little bit with the user side and see how users actually use the web. Okay. Good. If we see the difference between classical IR and web retrieval, then one of the topics that we immediately see is the heterogeneity of the content. Whereas in a company or a newspaper archive or any collection that we see for classical IR, you can have some standards or you will basically often see the same or very similar content. In the web, everybody can participate. Everybody can put up a web server and say, well, this is my content. Take it or leave it. And I don't care about any standards. I don't care about what's happening.
This is what I offer. And everybody who can access it may. Okay. So we have many different users. We have different topics that are talked about. This happens in different languages. So we have a German web. We have an English web. And they merge at some points. Also the document types. So most of them will be HTML documents, which is specifically being built to create web pages. But there's a lot of documents like PDFs or office documents that are on the web and that are available and that have to be handled somehow. And that contain a lot of information, which would be good to extract. Then it's kind of can be dynamic, as I said. So document doesn't have to be in existence. It can be created for some specific user. Especially if you think about personalization and about parametrization of searches, then creating, dynamically creating content is quite popular these days. You have a lot of scripting languages. You have a lot of things going on. So what do you index? You don't have the document. You have many different documents that can be built on the fly. Maybe they also differ in content. Maybe it's about the same topic, but maybe it's slightly different if you put it together in different way. It's a very open platform. So you have variety of authors. You have variety of writing styles. There may be different opinions. There may be biased opinions. There may be majorities kind of expressing a common consensus. Difficult to see, difficult to deal with. One of the biggest differences are the hyperlinks. So I mean, everybody knows from, well, basically scientific documents that you might have citations. And might, okay, as blah blah blah set in this wonderful paper from 1965 or something, you know, like, blah blah blah blah. And then you point to the source where you got the information from. The same is happening on the web. But more directly, what you do is you don't just cite things. You link things. You put a direct link on the source of something. And it doesn't have to be some other paper of the same type or something. It can be anything. And just brings you to some new portal page, to some new document, to some more detailed information about something, to a different page dealing with similar stuff can be done for a lot of things. Or to some vendor that tries to sell you something. It can be advertising. It can be almost everything. And so this is a connection between documents and the typical citation or references style. Then the problem size definitely is different. I mean, we saw the Google solutions, yeah, for up to 30,000 documents, 30 million documents, if you take the big solution, that's kind of annoying, isn't it? Because how big is the web? How big is the web? Any ideas? 30 million documents? 10 billion websites are indexed at Google. So they do it somehow differently, obviously, than using their small boxes. So we'll get some estimations of the web and we will see how big the web or what we see of the web actually is in the course of this lecture. It's definitely so. 10 billion websites was quite a good guess. It's far more than you would think. Then we have the problem of spam. So not everything we find on the web can be taken at face value. There's a lot of things that come unasked for. And for many people, it's a business model to sell something using the spam. You may always wonder why this happens, because who would buy Viagra online? 
Well, if one in every million who gets the spam message does it, the vendor is in business because it doesn't cost anything to hand out a million advertisements over the web. Totally unlike traditional advertising possibilities. If you have to hand out leaflets or if you send something by mail, email comes at no cost. Handing out spam doesn't cost you anything. This is a real business model. And then of course, the normal business models IR, if you do it correctly, is expensive. Web search is even more expensive because you do it for a lot of people. And the web is so much bigger than your company repository. And your return on investment is unclear. If you do it for your company and you have all the, I don't know, claims or all the letters with your customers or with your suppliers or something like that, you know. And that is what you index. That is what you make searchable. You have a direct return on investment because people will work more effective. And the efficiency will be better. If you index the web for a lot of people, what is the return on investment that you get? Difficult. But you have to think about it somehow. So the world internet usage, if you look at it, distributed by the different countries. And if you use it, if you look at it, how much percent of the population actually has access to the web. Then what we see is it somehow, what is often called the digital divide. You have some of the developed countries, so for example here, Europe, North America, okay, where the population did not grow too much. Well, so that's kind of reasonable growth. But the number of internet users in the same time grew very much indeed. And today in America, of the about 350 million people living there, 270 have internet access, are actually internet users. In Europe, of the 800 million people living there, more than half have internet access, are internet users. Looking at some developmental areas, so for example here, Africa, we have a large increase in population growth. You will find that you also have a large increase in the internet use. But still, that the people being able to use the web is only a very small fraction of the actual people living there. This is kind of what is often called the digital divide. So people having access to digital resources, to the information, to the services, to the education that is given by the web, and people who have not. So for example in Africa, 90% of the people do not have access to the internet. And the same holds for many of the Asian countries, for example. So if you see, it's slightly better, so a quarter of the Asians do have access to the internet. And if you look at it more closely, then most of these people, of these 25%, is not distributed somehow evenly over Asia. But of course the people in Singapore, the people in Japan, have much higher numbers than the people in Vietnam, or the people in India, or the people in the world. So the developed nations are definitely advantaged by the development. As we can see, in the last 10 years, the internet usage went up about fivefold. So 500% of the people having internet access in 2000 now have internet access. World population did not increase too dramatically, but still. The growth of the internet users happened in the developed countries, not really in the development areas. Another interesting thing to note is that the web users are totally different. And if we look at the demographics of internet users in the US, we will find that it's not a gender issue. 
So males and females both use the internet to a similar extent. What we would expect, of course, is that it is an age issue: young people tend to use the internet more often than older people. Still, 41% among the elderly is quite impressive already. What we can also see is that the use of the internet seems to be directly correlated with education and with people's average income. The higher the level of education, the more the internet is used as a tool for many things: getting information, finding new jobs, doing your online banking, whatever. So digital literacy seems to go hand in hand with education; there seems to be a strong tie between both. The same holds for household income, which is of course directly correlated with the degree of education: a college professor will earn more than somebody dropping out of high school and taking a job in road construction. So here too we see that internet use is clearly correlated: over 94% of the people in the upper income ranges use the internet on a regular basis. The internet has become a commodity on the one hand, but it is also interesting as a means of dealing effectively and efficiently with education, which is quite interesting. Looking at the web's languages, we find, not surprisingly, that most of the web, more than 70% of all pages, is in English. English has made its way to being the world language, and this is also why we deliver this lecture in English: it is so much easier for people from all over the world to understand if we use a common language of exchange. But the developed countries also have parts of the web in their native languages. For example, about 7% of the web is German, which covers Switzerland, Austria and Germany, the German-speaking nations. About 6% of the web is Japanese, which I always find quite surprising, because Japan is not that big, and still 6% of the web is Japanese. For Spanish, I would actually have expected more, because most of South America speaks Spanish, but those countries are not that developed yet, so that might be the issue. And Spanish is growing; it seems to be on its way to becoming the second language of the web. These numbers are from 2009, so I guess they will have changed, and Spanish is definitely on the rise at the moment. In any case, we can see that sizeable parts of the web are in particular languages that you have to deal with somehow. And then there are the document types, as I already said: there might be Microsoft applications, for example Excel, PowerPoint, Word, there might be PDFs, there might be zip files; a very big part will be text or HTML, and there are different versions of HTML in use. So there are a lot of formats out there that have to be handled, from which information has to be extracted on the one hand, but which also have to be displayed for the user. And this is only the text part, or the numbers part. We are not talking about images yet, not about video, not about executable code; that adds many more problems to what is already there. We do have web search engines that work comparatively well on text. I don't know of a single web search engine that works well on images or video: whether it is Google Images or Google Video or whatever, it has severe limitations.
We'll also deal with that topic a little bit. So, web search engines are different in what they have to address and where they have to extract information from, but also in what the users want from them. Generally speaking, you can break queries down into four types. The first type is informational: you have an information need, which is exactly what we had in our classical IR queries. Give us all documents that contain cat, dog, man and moon; you are interested in documents about cats and dogs. Then there are navigational queries. For example, I want to get to the Google site, or I want to find the website of the Technical University of Braunschweig. I want to navigate to the website, and I am not declaring what exactly I am interested in. Is it statistics about how many students they have, or how to get to a certain building on campus, or a certain professor or institute that I want to visit and whose contact data I need? I'm not telling you. This is why the query is not really informational: of course I want some information about the university, but not as such; I want to go to the site first, and then I will see what I do, I will navigate somewhere. Maybe a better example is not the Technical University of Braunschweig but Facebook. You are not going to the Facebook site for the sake of the Facebook site; you want to reach some people, log in, see your page, see who poked you, who shared new photos with you, or who gave you a tractor for your FarmVille game. But first you have to get to the Facebook page to log in. It's not that you want to know anything about Facebook, you just want to get to that site. And it is quite interesting that many people use Google to navigate to Facebook: the query "facebook" is one of the top queries on Google. Why don't they just type facebook.com into the address field? Instead they type it into the search field, Google displays the facebook.com site as the first result, and they click on it. Interesting, isn't it? Those are navigational queries. Then there are transactional queries, all the queries where you want to do something: I want to download the new Adobe Reader, or a new codec for watching certain videos, or a form to register something at the town hall. Everything where you would like to do something is a transactional query. And finally there are the so-called connectivity queries, where you basically want to find pages that link to some page, or pages that cite some other page. This is actually a rather rare kind of query, and it is usually done to see the structure of the web, to see who is citing whom, to get a better understanding of the network. So the first three are the typical queries that everybody types into the search field, and the last one is a little more specific in what you want to see. But of course it is interesting: who has ever googled his own name? Aha, everybody did. And it's interesting to see what comes up, isn't it?
It's interesting to see who cites you or who says something about you. And this is what many people do actually. Good. If we look at the top searches, this is 2008 actually of ask.com. We'll find that in the top 10. The first three are actually purely navigational queries. They want to go to the Facebook site. They want to go to the MySpace site, to the YouTube site. They don't want to type youtube.com into the address field. Interesting, isn't it? Quite convenient because the cursor is already, Google is probably starting or Askcom is probably your starting page, you know, anyway. And the cursor is already in the search field. So why should I kind of click in the address line and go for the extra.com if I could just type in Facebook in the Google field and make a web search. And Google is able to fix your spelling errors. That's right. So if you write Facebook or something like that, it will immediately come up with did you mean Facebook? Yes, I did. So that's also very, very possible. Other type of queries that are among the top 10. So Angelina Jolie obviously was a topic in 2008 in January, 2008. So maybe another adoption or a new movie came up or I don't know what it actually was. Also very interesting how to get pregnant. Typical informational query. Okay. No idea. It's among the top 10. So it seems to be an issue. January, long winter, I don't know. And then last thing that we have is the transactional queries. So what we see here, the online dictionary or the email. This is, I want to get to a site that offers me free email or where I can see something on Wikipedia or something like that. I want to do something there. I just don't want to want to serve to the email side, but I want to get an email address. I want to download an email client or something like that. That is the point of this kind of query. Okay. So we see a lot of navigational stuff going on. A lot of informational stuff going on. A little bit of transactional stuff going on. That's it. Good. Again, if we look at some statistics and see about what people using the internet on a regular basis did yesterday. So you just take a sample of all the internet users and ask them, so what is the task that you actually did yesterday? And about 72% of these users said they used the internet. So three quarters, almost three quarters of all regular internet users actually use it on a daily basis. Interesting, isn't it? About 50% used the web search engine to find information. Some web search engine, whatever. So the other 50% of people using the internet is kind of like they went to the Facebook page or did whatever, you know, like they didn't actually actively search for information, but use some of the portal sites to navigate somewhere. About a third of the people got news, read some newspaper. 30% checked for the weather conditions, also very popular. Similar, look for some hobby or just surf for fun. Find some interesting things that might keep you busy for a while. And this is kind of interesting down here. Do any type of research for your job or research for school or training, which is actually a quarter or a fifth of the internet users doing that on a regular basis. So it is really affecting their lifestyle, it is really affecting their education. And also these are statistics from 2009 or something use of an online social network site that will have increased definitely these days, but was already on the rise there. Good. 
And with that, we go to the next detour and see a little bit of what Google does in terms of trends and in terms of Zeitgeist. That's actually one thing I really like Google for, because they always provide some very nice information about what people are doing on the web. So let's have a look at it. One thing is Zeitgeist: every year Google publishes a report reviewing the major search trends of the previous year, collected in some nice animations and visualizations. In this example they collected major trends from 2010, and one example is the FIFA World Cup in South Africa. They prepared an animation where you can see how many queries about the World Cup were issued around the world over the year. In March there is not much interest beyond the ordinary, then some interest here, interest is growing, the Caribbean joins in, and during the tournament the World Cup is everywhere, even in the US. Then interest decreases again, because the World Cup is over, and it goes back to its normal level. So you can see there is always some baseline level of interest for each popular topic, but at certain times, when an event is all over the media, people start searching for it and interest spikes. This can also be used to detect recent trends, which is one line of research Google pursues: they analyze their query logs on a regular basis and try to find out what people are currently interested in, what the current trends are. This data can be used for marketing purposes, or the information can be sold to companies that want to know what people are interested in, so they can develop new products or start ad campaigns. We can also look at the oil spill in the Gulf of Mexico: no oil spill, then it suddenly happens and the world is interested, mainly the US, because people there really had to deal with this severe issue; after the issue was resolved, it goes back to a normal level. Another example is the ash cloud over Iceland that made flying in Europe impossible for some weeks, and as expected this is a rather European topic: no ash cloud, then the eruption in Iceland, and the whole of Europe is concerned about it, plus parts of the rest of the world, maybe because they wanted to travel to Europe or just heard about it in the news. And then everything goes back to normal. The same could be observed for many other trends that rise because some event occurs. Google also collects lists of search terms from several categories that are the fastest rising, in the sense that there was not much interest in the year before but very high interest now. Twitter and Facebook obviously got much more popular than the year before, Justin Bieber became popular, the iPad was launched by Apple. Fastest falling topics, topics that were very hot in 2009 but not in 2010 anymore: Slumdog Millionaire is a movie, Susan Boyle is a singer who became very popular in 2009, swine flu, Michael Jackson; people forget about him, very sad. The same can be done for entertainment, for food and drink, and so on; there are many ways to analyze what people are interested in. Jörg Kachelmann, even on the global scale, was a very hot topic in the news.
So this is not the German site with the German analysis, this is worldwide. Even some German celebrities made such an impact that Google reports them in their world statistics, which is quite interesting to see. And there is a lot more: the Nürburgring, some races there. They do this for Maps searches as well: people want to know where they can watch the Soccer World Cup, or they search for Baumkuchen, so also very German. As you have seen, 7% of the web is German, and of course this has an impact on search. All right, this is Google's Zeitgeist site; you can look at every year back to, I think, 2005 or so. What Google also offers, which is very nice, is Google Trends, where you can see live statistics of how different terms developed over time. For example, you can compare ICQ and Skype, and you will see that Skype became popular over time: in 2004 nobody cared about Skype, but it became more and more popular, while ICQ, which was quite popular many years ago, keeps decreasing in popularity. So you can compare different terms. And this spike here, that's a Skype outage: they had some technical issues and were not available for one or two days, which raised a lot of interest in Skype. And here, Microsoft acquired Skype. This is actually quite interesting, because Google analyzes these trends, tries to detect these peaks, and then finds matching events to explain them. You can also look at different languages and regions and see how the trends differ there, because many trends are very local or very different in certain areas. For example, we can see the interest in Egypt: here we had some issues at the beginning of the year. Or the rise of Facebook versus Second Life. Nobody cares about Second Life anymore; well, obviously some people still do, but no, it's rather dying. Facebook, on the other hand, really started. And MySpace: MySpace was more popular than Facebook at first, then Facebook really took off and nobody cares about MySpace anymore. Here is the news volume: people talk about Facebook, nobody talks about MySpace. That's also very nice to see. And Google versus Yahoo, it's the same, but there could be regional differences: obviously in Romania Yahoo is very popular and Google is not; I'll have to ask my Romanian colleague why that is. So, a nice thing to see, and if you want to find out what the current trends are, it might be a good idea to look them up on Google Trends. The last thing is Google Hot Trends: what people are looking for right now. These statistics were updated only 18 minutes ago, and people are currently looking for many different things. What is "Marley and Me"? It's a movie. Hotness: medium. It became hot at 6 p.m., I think that's US time, and the peak was seven hours ago. Here are many news articles and blog posts about it, and the web results: it's a movie from 2008, obviously. I have no idea why people are currently talking about it. Any idea what happened to this movie? Yes, but that was three years ago. "Marley and Me", "house for sale", "great movie for the entire family", the trailer. I have no idea what people are doing here. But if you want to know what people are doing all over the world, then this is the place to go: Google Hot Trends, Google Trends, or Google Zeitgeist. Take a look yourself, it's definitely worth it.
I think we're going to make a break now, five minutes or so. Okay, so let's go on. It's a very interesting thing to play around with Zeitgeist and Google Trends and all this technology, because it's just fun to see what moves people and what is happening. One important structure when traversing the web, when navigating through the web, is obviously the link structure. When documents link to each other, it's not just random linkage, it means something. A link gives you more detail on some topic, or takes you on a detour to somewhere else, or gives you more general information; it is a way of showing that some sites are more helpful for, or more concerned with, some topic than others. So we have a lot of websites that link to each other and are connected in some way. And what we may get is what is often called small worlds: bunches of websites that link to each other very heavily but show few links to the outside world. They build their own small world, and it is often conjectured that such a small world covers one topic and is separate from the topics outside this world. That is one way to look at it. Also, if there is a page that a lot of pages link to, what does that mean? It seems to be very popular, it seems to be useful, and since many pages link to it, it seems to be useful in a general way: very often these are portal pages, or Google, or something like that. These different properties of the link structure can be exploited for finding out what is interesting in web search. If you look at the number of queries that have to be processed by a search engine, these numbers are from 2005 and will have increased a lot in the last six years, but they were the latest numbers we could find: Google has about 700 queries per second to process. That is quite a challenge, isn't it? 700 queries per second coming in, searching your index, answering the query, and every second you take, 700 new queries arrive. Quite busy. The same goes for the other search engines: Yahoo with about 600, MSN, or Bing as it is called now, a little behind, then AOL. So you need a lot of capacity to handle these queries and to deal with a lot of concurrency between users. Exactly: availability is one of the major issues, availability and response time. Because if your service is not available, customers will move to some other search provider, and why would they change back? It's so easy, you just switch the page, and you have a good chance of never winning that customer back, because the service that is offered is comparable. Did everybody here try different search engines? Comparable results or totally different results? Okay, so there may be some differences, but on the whole every major search engine will do the trick; it doesn't really matter too much whether you use Google or Yahoo. So an angry customer is a bad thing, because you probably stand to lose that customer for good. And it's not only per second, it carries on through the day: 700 queries per second are about 60 million queries per day, about 22 billion queries per year that you have to handle. And today it will have increased; we've seen the growth of the internet users.
And we've seen in the statistics that about 50% of regular internet users use search engines, so the query volume will have multiplied accordingly. Let's look at the index size; here we have more recent numbers. There are actually some organizations, for example WorldWideWebSize.com, that try to keep up with the size of the web and give estimations of how many pages are actually indexed by the search engines. In 2010 the number of pages indexed by Yahoo was about 50 billion. Quite a number, isn't it? Google a little less, but still. That's a lot of pages to search through, a lot to crawl and to index, which costs time and effort, and you have to store it somewhere. That is one heck of an index. And you have to look through this index in real time to answer your 700 queries per second, and you should not let users wait more than one or two seconds for the results, because otherwise, after three seconds, they will turn away and say Yahoo is much faster, or Ask, or whatever, and I will not put up with a search engine that takes that long. It is also interesting to ask: how do they know these index sizes? They don't get to look at Google's or Bing's inverted index, which is strictly confidential; probably not even Google knows exactly how many web pages are currently in the index. But there are ways to estimate the number of pages indexed by a search engine, and we will go into that a little now. The authors of WorldWideWebSize.com use an estimation method that basically works like this: they take word frequencies from an offline text collection, and they take about a million web pages from one of the big directories, the DMOZ directory. If the word frequencies in these pages follow the known frequencies, you can treat them as a representative sample, because you know whether the pages you picked cohere with what you know about word frequencies. So this gives you a representative sample of web pages. From this sample you take 50 randomly chosen words and send them as queries to the search engine; randomly means that you split the word frequencies into intervals and pick evenly from these intervals, so you get both more prominent and less prominent words, a random sample of queries. So you have a random sample of websites and a random sample of queries. Then you record the number of web pages the search engine reports for each query. And then you take the relative word frequencies in your own sample, which tell you what fraction of all pages should contain each word, and estimate the index size from the counts the search engine returned. Basically, the proportion of pages in your sample that answer a query and the proportion of the search engine's index that answers the same query should be the same, so with a simple rule of three you can extrapolate from the reported counts to the total index size of Google or Yahoo. It's quite a simple method, but it works quite well, and this is how you basically estimate the index sizes.
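To make the rule-of-three idea concrete, here is a minimal sketch in Python. It assumes you already have a representative document sample and, for each probe word, the hit count reported by the search engine; the function name and the toy numbers are purely illustrative assumptions, not part of any real tool or real measurement.

```python
# Rough sketch of the index-size estimation described above (illustrative only).
# Idea: if a word occurs in a fraction f of pages in a representative sample,
# and the search engine reports H hits for that word, the index holds roughly H / f pages.

def estimate_index_size(sample_doc_freq, sample_size, reported_hits):
    """Estimate total index size by the rule of three, averaged over probe words.

    sample_doc_freq: dict word -> number of sample documents containing the word
    sample_size:     number of documents in the representative sample
    reported_hits:   dict word -> hit count reported by the search engine
    """
    estimates = []
    for word, df in sample_doc_freq.items():
        if df == 0 or word not in reported_hits:
            continue                      # word unusable as a probe
        fraction = df / sample_size       # relative document frequency in the sample
        estimates.append(reported_hits[word] / fraction)
    if not estimates:
        raise ValueError("no usable probe words")
    return sum(estimates) / len(estimates)   # average the per-word estimates


if __name__ == "__main__":
    # Toy numbers, just to show the mechanics (not real measurements).
    sample_doc_freq = {"holiday": 12_000, "zebra": 150, "mortgage": 3_400}
    sample_size = 1_000_000                # e.g. ~1M pages drawn from a directory
    reported_hits = {"holiday": 480_000_000, "zebra": 6_200_000, "mortgage": 150_000_000}
    size = estimate_index_size(sample_doc_freq, sample_size, reported_hits)
    print(f"Estimated index size: {size:,.0f} pages")
```

In practice one would use many more probe words and a more robust aggregate than the plain mean, but the proportionality argument is exactly the one described above.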
Well, to get that many pages into your index, you need quite a big crawler. You need to visit these sites and index the words, or the topics, or whatever you are interested in. And of course this is not static, because once you run a web server you may change the information on your site every five minutes. If you are an e-shop, new products come in, other products are sold out, you update your site. Even as a private person, new things happen in your CV and you update your site, on a smaller scale, but still from time to time. What also happens is that websites are deleted completely or new websites are created, and especially with dynamic generation of websites, where the addresses themselves are generated dynamically, this happens very often. In the early years of web search this led to a big problem, the so-called dangling links: you got a result list, and of the ten pages returned on the first page, seven were no longer in existence. You tried to open them and got the 404 error: file not found. Thank you. That is not what you want. So dealing with dangling links and deleted web pages, discovering new web pages that have just arrived, staying in tune with the updates of sites, adding new information and removing information from your index that is no longer valid, is a very difficult task. Basically it requires you to crawl all the time: once you finish your crawl, you immediately start over again, because things may have changed. When you are running a web search engine, you are continuously crawling the web and continuously watching what happens on it. And the data volume that has to be transferred for that is really huge. There are some numbers from netcompetition.org stating that Google, just for building the index, transfers about 60 petabytes per month. That's enormous. The interesting question is how much of this traffic you can allow before you clog the internet. Think about the network we have here: the university has quite a nice network, but now think about somebody pushing 60 petabytes per month through it; it would be clogged immediately. So Google and all the other search engines have to cater for this. They have to cater for the bandwidth, and for the power they need to run their machines, their computers, their storage devices. And that is really not just power out of your wall socket; you practically build a power station for that. It is really huge, and that makes it really expensive. The web grows very fast, we know that, and we always see these wonderful curves which seem to suggest exponential growth; in recent years it really was exponential. Nobody knows how it will continue, whether it goes on like this or at some point saturates because all the information is out there. But in recent years we have definitely seen exponential growth, and dealing with that is a big problem for search engines: a search engine that could keep up in the early years of the web cannot necessarily keep up with today's web. The scalability that is needed for this task is really important.
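As a tiny illustration of the dangling-links problem mentioned above, here is a sketch of how an index maintainer might re-check stored URLs and flag the dead ones. It is only a toy under simple assumptions: a plain list of URLs (the example URLs are made up) and one HTTP HEAD request each; a real crawler would be far more careful about politeness, redirects, retries and robots.txt.

```python
# Toy freshness check: flag indexed URLs that now return 404 (dangling links).
# Illustrative only; a real system would throttle per host, obey robots.txt, etc.
import urllib.request
import urllib.error

def check_url(url, timeout=5):
    """Return the HTTP status code for the URL, or None if the host is unreachable."""
    req = urllib.request.Request(url, method="HEAD")   # HEAD: headers only, no body
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as err:              # server answered with an error code
        return err.code
    except urllib.error.URLError:                      # DNS failure, refused connection, ...
        return None

if __name__ == "__main__":
    indexed_urls = [                                    # hypothetical entries from an index
        "https://example.org/",
        "https://example.org/no-longer-there.html",
    ]
    for url in indexed_urls:
        status = check_url(url)
        if status in (404, 410) or status is None:
            print(f"dangling: {url} (status {status})")
        else:
            print(f"ok:       {url} (status {status})")
```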
And all the web search providers are working very hard on new data structures, new algorithms, new crawling mechanisms, new everything, just to scale. Scalability is really the issue here. The same goes for the business model. The business model is the way you earn money: the money you need to sustain your infrastructure, to buy the computers, to employ the people who actually program your algorithms or improve your storage schemes, to run the power station that powers your computing farm, but also the revenue you want for your stockholders who backed your investment, or for yourself, because you want to earn money with the service and are not doing it purely for the good of all people; you have a commercial interest in it. And web search, as a very complex, time-consuming, computation-consuming and storage-consuming thing, is really expensive. You have to pay for that. There are several business models that were or are in use. The most prominent is probably the advertising model, where you say the basic service of querying the web search engine is free, but you get some advertising, you have to look at these advertisements, and sometimes you click on one and buy something. That basically evens out the cost: the vendors that get the chance to advertise pay for the infrastructure, and what you get is a free service that is, however, a little bit cluttered by the advertisements you have to look at. The second possibility is the subscription model, where you simply pay for using the engine. Or you might have a community model, where the community decides that something is important for everybody and everybody contributes a little for the greater good. And there is another model, of which nobody really knows whether it is in use at some of the big vendors, but which has received some attention: the so-called infomediary model, where you learn something about the users just by them using the search engine, and since this information is worth something, you sell it. The advertising model is probably the most renowned one, because it is what is often referred to as the Google business model: make the service free, let advertisers pay for it, and the advertisers get something out of it, because if you make a clever connection between the semantics of the search, or of the user, and the right advertisement, it creates bigger revenues for the vendors. Part of these bigger revenues is then shared with the search engine that created them, and so the search engine can keep up its operation and pay its people well. That is the basic idea. It means, on one hand, that the search engine must attract a lot of people, so a lot of people look at the ads, and of course it is not sensible to show the same ad to everybody; it should be personalized in some way. I'm looking for shoes, so shoe advertisements are the thing; I'm looking for dogs, so maybe a new dog collar or dog food is interesting. And it is exactly this little bit of semantics that you can exploit in search engines that actually creates the revenue.
If you don't have those users, if those user numbers are not available, there are other ways. For example, Microsoft started the Live Search cashback program, where people earn some money back if they buy products via Live Search ads. So people have an incentive to buy via the ads, because they get some money back and it is a little cheaper, and the vendors get more business even though fewer people look at the ads. That can work out too, so you can be creative here. With the subscription model, you basically have a subscription fee, monthly or yearly or per use, and the customers pay for using the engine, either as a flat rate or per query, proportional to their usage. Of course, if I have the choice between a service that is free and a service that asks for a subscription, the paid service has to offer something that makes the subscription fee worthwhile, otherwise I have no incentive to pay. That could be that it is really good or really specialized, so you can do things with this search engine that you cannot do with Google, or simply that it is free of advertisements, because you hate advertisements and want something else. Advertisement-free works for some of the pay-TV channels: we don't interrupt with advertisements every five minutes, but you have to pay the subscription fee. In the search engine arena this is not too interesting for end users, but for some of the smaller vendors the idea is to rent the search functionality from one of the big vendors. For example, the German Telekom portal T-Online rents its search functionality from Google and pays Google a fee, and since it is a portal, it is financed by the users' monthly subscription fees. One of the biggest successes in that area was AOL. I don't know if you still remember the time when these AOL disks were everywhere; you got them for free to install the portal software. AOL was America's biggest internet provider and they just flooded everybody with these disks, and people started to build mobiles and artworks out of them because there were so many. A friend of mine actually built a bed frame out of AOL disks, which was kind of stylish at the time. Anyway.
Third is the community model. We all know Wikipedia, which is a community model of building an encyclopedia: curated content by people who invest their time and their expertise for free, for the greater good of the community. The same could work for web search. The most prominent example was Wikia Search, where people basically built a wiki and annotated websites to make them searchable; however, it was not a big success and was discontinued in 2009. The basic idea is that people work for free, which takes part of the cost out of the equation, but you still cannot get around the infrastructure costs in terms of devices and servers. So donations from companies, for hardware, or donations from individual people using the service are definitely necessary to support this community model. Basically, the idea is altruistic: you work and contribute for the greater wealth of the community. The last one is the infomediary model. You offer free services, but in the general terms of use the users agree to participate in market studies, which means that their behavior is analyzed, their interest profiles are stored, and at some point they are sold to other companies, the ones who compile those wonderful lists: you want to address all the doctors in the US, or all the lawyers in Germany, then you can buy such lists from people who know the addresses or the email addresses, and start sending them advertisements. Buying these lists is expensive, and this is what basically pays for the functionality of the web search engine. The interesting part, of course, is that the user's privacy is in danger here. Do I want other companies to know what I'm doing? Obviously I cannot really prevent the search engine provider from knowing what I do, because it has to execute my queries, so it knows them, and by putting two and two together it can learn a lot about me. But selling that information off to a third party that I don't trust, without any influence over which party is chosen or which information is disclosed, is a different matter. Of course no search engine will tell you whether they are doing this, and there would be a great outcry if a search engine were caught red-handed selling user information, but all search engines are collecting information, and nobody knows what happens with it. Interesting to see. And this is where we go into the next detour and have a look at the Google business model. Google has a program called AdWords. They started this some years ago, and they provide a pretty intuitive web interface where people can design their ads and choose which keywords they want to assign to them and how the ad should look. Actually, almost 100% of Google's revenue is created by this program: in 2010 they earned about 28 billion dollars with ads alone. These 28 billion are in the range of the revenue of some large industrial companies; if you are a car manufacturer, for example, this would be a normal range of revenue. So it is a pretty large amount of money that they earn with ads, and this is basically the only hard information known about the program.
Sometimes they release how many people actually use AdWords, or how much money typical advertisers spend, so I have collected some numbers from a while ago: in 2006 they had about 600,000 advertisers using the AdWords program, and in 2007 the average advertiser spent about 16,000 dollars a year on Google ads. Of course there are some large companies spending millions on Google ads and many people spending just a few bucks, but on average this is a lot of money that Google earns with it. And what is really interesting: if you search for AdWords on Google, you get ads from companies that have specialized in helping people design effective AdWords marketing campaigns. So by starting the AdWords program, Google actually created new companies, created business; there are people earning money by doing AdWords campaigns for other people. I will now show an example of how this AdWords tool looks. Usually you have to register at Google and then you get a very extensive interface where you can configure anything you like, get statistics, and get recommendations for good keywords for your business. But there is an open version I want to show today: you just enter your website and Google gives you recommendations for what could be good keywords. So let's change the language, all locations, and find out what Google suggests for the university's website: Erneuerbare Energien, Studium, studying in Switzerland, jobs. Renewable energies as a field of study seems to be highly popular at the moment. Maybe we try the computer science department: distance studying? I have no idea why we would be offering this. Let's try our own institute: it doesn't work too well, some hints for Facebook, vacation rentals in Stuart, Florida. We did this a year ago, when we offered this lecture the last time, and it worked definitely better then, so I have no idea what's wrong with AdWords today. But let's assume I want to create an AdWords campaign for "Paris apartment to rent". These are the suggestions, and what you can see is how many people search for each keyword on Google every month, in the region you specified above; currently it's the whole world. Then you can create your campaign and tell Google how much money you are willing to spend on it, and this is basically how it works. Usually it works quite well: Google has an interest in customers being able to create such a campaign really quickly, so the interface must be easy, and usually it is. So you are offering some amount of money, and for each keyword there is an auction. Basically, for a given keyword there is a list of people who want to have their ads shown for this keyword: the first one offers, say, 100 dollars per click, the second one 30 dollars, but only up to a certain maximum budget. So the first one might say he wants to spend 10,000 dollars at most, and then, after his ad has been clicked a hundred times, his budget is exhausted, he is removed from the list, and the second one gets shown.
So usually you have a larger list of ads, many different ads are shown for a keyword, and Google aggregates this information; the ranking and the price build on that. These are the keywords with the highest bids; CWire.org some time ago started to analyze what the cost of different keywords is, and at the top of the keyword list you see mesothelioma, a very rare but very severe kind of cancer. Obviously there are some doctors who try to offer their services to people who are quite desperate, and this is what they pay per click; you can imagine what these doctors might earn from a new patient. The same is true for lawyers: if you need a personal injury lawyer in Michigan, you can bet that this lawyer will make a lot of money with every new client, so it might be a good idea for him to pay per click on the ad. If someone searches with this query, the ad is shown, and if the user clicks, the lawyer pays 75 dollars. Many of those users will not end up using the lawyer's services, but apparently he can afford it. Driving under the influence, also quite expensive. One last thing on this: what Google actually uses is a so-called second-price auction. The idea is that you say, for your keyword, you are willing to pay, let's say, 65 dollars if someone clicks on your ad, but what you actually pay is the amount offered by the next-highest bidder. And the idea behind it is that, because you know you will not have to pay the price you offered but a lower one, you are generally willing to bid more than you otherwise would. By doing this trick, Google increases the bids, and people actually end up paying more for their ads than they would otherwise; we will see a tiny sketch of this idea in a moment. And of course Google offers some quality checks, some functionality that tries to detect click fraud and only bills clicks made by real customers. I think Google is not too strict about these policies in the sense that if customers suddenly have enormous bills, Google will take a look at it and be quite accommodating, because it makes no sense for Google to trick its own customers; but Google is highly interested in detecting all kinds of fraud here. All right, so this is AdWords: if you have something to sell, or you want to make your name a bit more popular, you can spend money on that.
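Here is a minimal sketch of the second-price idea described above, in Python. It is a simplified, single-slot version with made-up bidder names and numbers; it is not Google's actual mechanism, which is a generalized second-price auction over several ad slots and also factors in quality scores.

```python
# Simplified single-slot second-price auction (illustrative toy, not the real AdWords logic).
# Each advertiser bids a maximum cost-per-click; the highest bidder wins the slot
# but pays roughly the second-highest bid per click.

def second_price_auction(bids):
    """bids: dict advertiser -> max bid per click. Returns (winner, price_paid_per_click)."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders for a second-price auction")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _winning_bid = ranked[0]
    price = ranked[1][1]          # the winner pays the runner-up's bid, not his own
    return winner, price


if __name__ == "__main__":
    # Hypothetical bids for one keyword, in dollars per click.
    bids = {"lawyer_A": 65.0, "lawyer_B": 40.0, "clinic_C": 25.0}
    winner, price = second_price_auction(bids)
    print(f"{winner} wins the slot and pays {price:.2f} per click")
    # -> lawyer_A wins the slot and pays 40.00 per click, although he bid 65.00
```

Because the price depends on the next bid rather than your own, bidding close to your true valuation is a sensible strategy, which matches the point made above that this scheme tends to raise the bids.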
All right, we go on with spam. Of course, spam is always a bit of a problem here, because why should I pay for the Google sponsored ads, pay per click, if I can get the same effect for free? The only thing I have to do is get my web page as the first or second or third result on the Google result page, because then, right beside the three or four sponsored ads, without any fee per click, stands my own page offering the service, and that does the advertising. So the point is really getting your page onto Google's result list, and how do you do that? Well, Google ranks relevant sites higher than irrelevant ones, so what you have to do is make your site relevant with respect to the query, and then you are among the top candidates. In the beginning of web search, this was very often done for queries that had nothing whatsoever to do with the page: I want my page to be shown for basically all types of queries, because I'm selling Viagra or something like that, and everybody needs that, obviously, no matter what he or she queries. So people started putting a lot of text that had nothing to do with the actual page onto the lower parts of the page, where no visitor would look, but where Google's indexer would read it. That is very often called spamdexing: you trick the Google indexer, the Google crawler, into extracting from your site the information that you want it to extract, not the information that is actually on the site. Google indexes it, and if you type in a certain query, the page pops up. This is very often done for real spam, like the Viagra pages, obviously, but it has also been done for creating opinion, and that is called Google bombing. You take some term, for example "miserable failure", and you try to promote a site that you want to be connected with "miserable failure", a site that you usually cannot change yourself. There was a very interesting incident where the biography of George W. Bush on the White House pages became directly linked to the query term "miserable failure": whoever typed in the query "miserable failure" got George Bush's biography as the first result. Of course, the term "miserable failure" occurs nowhere on those pages, but somehow people sneaked the association between this term and this site into Google's index, and we will show some ways to actually do that later when we discuss spam. So spamdexing, and improving websites for search engine access in general, is definitely one of the topics we will cover, I think in two lectures. Good. Now we want to move on to what the web looks like. Interest in how the web evolved and what structure it takes arose shortly after the beginning of the web. In the first few years it was quite clear what the web looked like, because there were basically the universities and some official organizations, they were interlinked, you had some very strong backbones and a rather hierarchical arrangement of the web servers. But then things just snowballed: everybody put up servers and websites, the big portals and the social websites evolved, and at some point nobody knew anymore what the web looked like. What is the structure of the web? Is it still hierarchical, does it still have a backbone, or is it just a bunch of small worlds that are somehow connected, or does it even have disconnected parts, is it broken into several topical areas? In 2002 there was the first attempt to get a good trace of the web: a research group crawled about 150 million websites every week over a span of 11 weeks, and looked at how much these websites changed and how they were connected. They had some research questions in mind: how large is a web page, in bytes and in words, and how much does it change from week to week?
Is the web rather static or rather dynamic? Are most web pages very short, just transporting some information, or do websites tend to be longer and cover topics to a larger extent? That is what they were trying to find out. If we look at page size in bytes, we see that there is a very small number of very small websites and a very small number of very large websites, but basically it looks Gaussian, and we see, for example, that .com websites seem to be a little larger than educational websites. Educational websites seem to come to the point a little more briefly, or have less image material, since this is the measurement in bytes. Still, the types are not too different from each other: there is a typical page size around the mean, and the different types of websites do not take very different values. The same goes for the measurement in words: again a small number of short pages, a small number of long pages, some mean, and again educational websites tend to be a little shorter than commercial websites. So transporting knowledge seems to be a little more economical than advertising, which is good: we can say briefly what we want you to know, while advertisements need to repeat the slogan time and again. But still, it is the same kind of distribution. More interesting is how the websites changed from week to week, and what we see is that a large portion, a baseline of about 60 percent, does not change at all. So from week to week, more than half of the web does not change, which on the other hand means that about 40 percent does change. There are very few complete changes, the black areas here; large to medium changes make up roughly 10 percent, so about 10 percent of the websites have significant changes; another 30 percent have small changes, maybe a picture or a little bit of text, nothing substantial; and 60 percent of the websites do not change. So keeping your web search engine up to date takes a crawl every week if you don't want 10 percent of your index to be significantly out of date. Interesting, isn't it? Quite a big number; you would probably have expected the web to be more stable, but it is not. Very well, let's continue. The question now is actually: how large is the web? I will try to be brief about that. In the early days of the web, measuring the web's size was very easy, because web pages were simply files on some server, you didn't have duplicate content, since it was mostly university servers that provided the content, and there was no spam. It was a paradise at that point in time: not many people had email, there was no e-shopping, no e-commerce, and the web servers that were online were explicitly known; there were really maps of web servers where you could see what existed. In 1993 there were about a hundred servers and some 200,000 documents, about four million pages; those are numbers you can easily work with. But how do you do it today?
You don't know which web servers are there, you don't know which web servers are online, and you don't even know which web pages count as web pages. What is a document on the web? For example, we have the Wikipedia article about the World Wide Web, and we have the same content on a different website, absoluteastronomy.com, "exploring the universe of knowledge", ripped off from Wikipedia. Do we count that as two pages or as one, given that the content exists only once? And what counts as a web page at all? "Certified Canadian pharmacy, order online", "huge Christmas savings, buy generic Viagra": we all know what that is about. Do we want these as web pages, is this sensible content? Also, how many different pages should we count in the case of lists like this one, taken from the yellow pages of Braunschweig? The query was "pizza", and you get Joey's Pizza, the Antipasto Ristorante Pizzeria, the Avanti pizza delivery service, and of course the details can be shown for each of them. Are these extracts different pages, or is it all content within one bigger site? And what if you type in a different query: is the site "the yellow pages of Braunschweig", or is there one page for the pizza services, one for the taxi services, one for the carpenters, and so on? Difficult to say. The same goes for portals that require a login, for example Facebook. How many pages does the Facebook site have? One for every user? More than one, if we count the photos that are shared and that are content on some page? And how do we crawl it, since we cannot log in? "Hi, I'm the Yahoo indexer, I want into Facebook, do you want to be my friend?" No, we don't. Difficult. So what can we say? Duplicates: we will just ignore them, we hate duplicates, they are not a good idea. Spam: we hate spam, so we will ignore that as well. Dynamic web pages: if something generated from a database is genuinely new information, we should count it, but then it is rather the size of the underlying database that is interesting, not the number of pages that can be generated by putting together content from this database. Private pages that need a login, or portal pages: if they are accessed by a large number of people, we should definitely count them as part of the web; otherwise, if it is just the intranet of some company, forget about it, that's not a proper web page. Okay, so now we know what to count; the interesting part is how to count it. How do we find web pages? Easy, we follow the links. But some pages nobody links to, which is bad. Duplicates: we still have the problem of duplicates, we didn't want to count them, so how do we know that we are linked to a duplicate of another page? Do we compare pages on the web pairwise? That doesn't seem too clever. Spam: if spam detection were that easy, there wouldn't be spam, so we have to find a way of getting it out of our system; if we had a perfect system like that, we could sell it for a lot of money to a lot of satisfied customers. Difficult. And dynamic pages: what do we do?
Do we pose all possible queries to some engine that dynamically generates information out of a database? How can we be sure that we have covered the whole database, do we try the whole dictionary of queries? Not a very good idea. And the more or less private pages: how do we enter Facebook as a crawler without logging in? So these are a lot of interesting questions that web crawlers and web indexes have to address. It is not that easy; it is not that you simply write a crawler that hops from page to page and extracts the information. We will focus on that next week, that is the interesting part there. For now we assume that we have some crawler that can solve these problems within reasonable bounds. Then calculating the web size, which is what we set out to do, is quite easy, right? We crawl the complete web, count the number of pages or their size, and that's it. Problem: it doesn't work, because it takes forever. New pages are added, other pages are deleted, so it never ends. And even if my crawler could somehow catch up with the rate at which new websites are created, it would require an enormous effort, and the pages crawled at the beginning might have changed completely by then. So I might know something about the size of the web, but only in a rather vague fashion. And what if some page was not just updated but deleted, and a new page was created, how does that count? So we have to think of something better, and one approach that is very often used is so-called mark and recapture. This is basically taken from the problem of estimating how many zebras there are in Africa, because it is a similar problem: you have herds of animals, how do you know how many there are? You count them, and then one zebra runs around a bush and gets back in line, and you go: oh my, did I count that one already? Difficult. So what you do is you take two large random samples of the web, you look at the overlap between them, and you compute the total size of the web from the size of the overlap. If I have F pages in the first crawl and S pages in the second crawl, I can determine how big the overlap B between them is. The probability of finding a page from the first crawl by randomly choosing a web page, which is what we did in the second crawl, is B divided by S. The same probability can also be computed as F divided by T, where T denotes the total number of pages on the web, because to get the F pages of the first crawl, I took F pages out of the total web. Since both expressions describe the same probability, we can set them equal and solve the equation for the total size of the web: T equals F times S divided by B. So the bigger the overlap between the two samples, the smaller the total web, because the bigger the web is, the more unlikely it is to capture the same page twice in two random samples. Same with the zebras: I take one herd, I mark the animals, I let them roam free, then I take another sample and count how many I catch again. If there are a lot of animals, I will have big problems catching the same animals twice; if it is a very small population, I have a very high probability of catching the same animals again.
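A tiny sketch of this estimator in Python, with made-up crawl samples: the page identifiers and numbers are purely illustrative assumptions, and real studies worry a great deal more about whether the two samples are actually random and independent.

```python
# Mark-and-recapture estimate of the total population size:
# T ~ F * S / B, where F and S are the two sample sizes and B is the overlap.

def estimate_total_size(first_sample, second_sample):
    """Estimate total size from two random samples (collections of page identifiers)."""
    first, second = set(first_sample), set(second_sample)
    overlap = len(first & second)            # B: pages seen in both crawls
    if overlap == 0:
        raise ValueError("no overlap between samples; estimate is unbounded")
    return len(first) * len(second) / overlap


if __name__ == "__main__":
    # Toy example: two "crawls" over a small imaginary web of page IDs 0..999.
    import random
    random.seed(42)
    web = range(1000)                        # true size: 1000 pages (unknown to the estimator)
    crawl_1 = random.sample(web, 200)        # F = 200
    crawl_2 = random.sample(web, 300)        # S = 300
    print(f"Estimated web size: {estimate_total_size(crawl_1, crawl_2):.0f} pages")
    # The expected overlap is about 200*300/1000 = 60 pages, so the estimate lands near 1000.
```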
Okay. So this gives us an impression of how big the web actually is. In practice the main problem is the random sample: what is a random sample of the web, and how do you draw one? The only practical way is to use the index of a search engine. If you take two different search engines you get two different samples of the web, but as we know, the engines are not independent in how they index, what they crawl and how they crawl, so these are not really random samples. Getting truly independent random samples is very difficult, and some more advanced methods have been developed to get closer to that, but this should suffice for now: the mark and recapture method for estimating the size of the web, to get at least a number. In 2005 the web was estimated to contain about 11 billion pages; today, six years later, nobody really knows, but it has grown a lot, and the current estimates are in the area of trillions, which is enormous.

Of course, what we are looking at here is the so-called surface web, because these are the pages that are indexed by the search engines. The Google crawler or the Yahoo crawler cannot log on to Facebook; they can only look at the pages they can reach. The same goes for hidden portions of the web where you need to know the address to get in, and you cannot just guess addresses; web crawlers don't guess addresses, they follow links and crawl what is known of the web. Opposed to the surface web we have the so-called deep web, which refers to the web pages that are currently not indexed by the web search engines. The usual picture is the iceberg model: you see the tip of the iceberg, but the biggest part is hidden under the sea. The same holds for the web: the surface web is just the tip of the iceberg, and the deep web is estimated to be about 15 to 500 times larger than the surface web. This is everything that is reachable via normal web servers but not directly addressable by web crawlers: hidden from crawlers, requiring a login, or generated from databases that are not really accessible. It is often also called the hidden web.

So what are these deep resources? As I said, layer one is just the tip of the iceberg: the generic websites with static content and the niche websites that cover special topics but are still reachable. This is basically what the surface web is made of. The invisible web, the deep web, the hidden web, is on the one hand dynamically generated content: you usually produce it by filling out a web form or choosing some options to configure something, and the configuration you chose is then reflected on the resulting page. It is also unlinked or private content, for example company intranets, or communities that do not want to be seen and that you have to know about to get into. And it is scripted content,
where a script generates what happens, or simply unusual file formats that are not handled by current search engines but that matter to some communities. Scientific data, for example, is very often of that kind: totally ignored by web crawlers, but accessible to special applications from physicists or biologists who know how to deal with these files. So that is the interesting part.

From all these websites we can derive the web graph: in terms of the static web, the static HTML pages together with the hyperlinks between them form a directed graph. We have pages, and we have links between these pages. A link going out from some page is called an outlink; a link pointing to some page is called an inlink. The web pages are the nodes of the graph, and a hyperlink is a directed edge going from somewhere to somewhere: an outlink for page A and an inlink for page B. There is some evidence that the links are not randomly distributed. It is not that you point wherever you like; it rather depends on the topic, and there are systematic patterns that you can see. The distribution of inlinks seems to follow a power law. Power laws are these typical long-tail distributions where a very small portion of the items accounts for most of the mass and a very large portion accounts for very little of it, often called the long tail of content. The total number of pages having exactly k inlinks seems to be proportional to 1 divided by k to the power of 2.1, and that is why it is called a power law: it is basically k to the power of something.

Other studies have suggested that the web graph has a typical bow-tie shape. That is quite interesting: in this bow tie there is a central core, about 30% of the web, where things are very strongly connected with each other; then a portion of about 20% of the web that links to content in the central core; and a portion of about 20% that is often linked to but does not link to many things itself, usually the pages that are interesting for common use, Google and the like. Then there are the so-called tendrils and tubes, which connect some of the in- and out-components or link special communities to each other, and about 17 million pages are totally disconnected from the rest of the web, very often private communities, intranets and so on. This bow-tie picture is very popular; however, the study is from the year 2000, so whether the bow-tie shape is still intact nobody really knows.
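As a small illustration of the web graph and the inlink power law, here is a toy sketch; the graph and the page names are made up and not taken from the lecture.

```python
from collections import Counter

# A toy web graph as an adjacency list: page -> pages it links to (outlinks).
web_graph = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["b", "c"],
}

# Inlink counts: how many pages point to each page.
inlinks = Counter(target for outlinks in web_graph.values() for target in outlinks)
print(inlinks)  # Counter({'c': 3, 'b': 2, 'a': 1})

# Under the reported power law, the number of pages with exactly k inlinks
# is proportional to k ** -2.1, so pages with many inlinks are rare.
for k in (1, 2, 4, 8):
    print(k, round(k ** -2.1, 4))
```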
Good. And now for the last part today, which will be very brief, I promise: how do users use the web? Page popularity also follows a power law, and actually it is approximately a Zipf distribution. If you plot it on a log-log scale, the Zipf curve becomes a straight line, and you can see that a relatively small number of pages gets a lot of the traffic, a lot of the page views. The foremost pages, the ones referred to most often, are the typical Facebook and search engine pages like Yahoo and Google, and they get very many page views, while the long end of the tail gets hardly any attention. The tail does not go down quite as cleanly as the ideal curve, but overall the distribution seems to be roughly Zipfian: the head is far denser than the tail. The incoming traffic from other sites also follows a Zipf law: if you order the referring sites by traffic rank and look at the number of visitors referred from each site, you get the same linear correlation on a log-log scale, again a power law. The top site in this case is Google, which takes most of the referrals; many pages point to Google to refer people, as in: if you are looking for that, go to Google and start a search.

There are several studies that try to find out how people query, so that we could ask whether the interfaces could be better and whether we could do something to help them, and analyzing the query behavior yields some interesting observations. The average length of a query is 2.4 terms, so most people issue queries of two to three terms to express their information need. Very few queries are longer, with eight or nine terms; that is inconvenient, and such a query has good chances of becoming over-specified. About half of the queries contain only a single term; many of these are navigational queries like facebook or google, but still, roughly half of all queries are a single term. Half of the users only look at the first 20 results, so very few users go beyond the first or maybe the second result page; the third page nobody looks at. This is very interesting for spamdexing: if you want to do it, do it properly. It does not help you to be on the third or fourth result page; even being among the Google top 100 will not help you. You have to be among the top 10 or top 20, so the aims are high. Less than five percent of people use advanced search features: Google offers Boolean operators like AND and NOT, but hardly anybody uses them. About 20 percent of queries contain geographic terms, so there is a lot of local intent: I want to find a good Chinese restaurant, in Braunschweig, not in Shanghai, so the local component counts for a lot. About a third of the queries from the same users were repeated queries, again a lot of Facebook and Google, so navigational stuff, but also, and this is quite interesting, about 90 percent of those users would click on the same result. So obviously search engines are used for re-finding pages: people do not use bookmarking, or only to a very limited degree, and search engines are, in a way, misused to recapture information that was already found before.

Finally, the term frequency distribution, so what the content of the websites looks like, also follows a power law; the terms occurring on websites are distributed like that. And the top term is Viagra... well, not quite, but there are some terms at the very top of the list that you would not expect. Good, these are some interesting observations about how these things are studied.
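As a small illustration of what "Zipf distributed" means, here is a sketch with made-up page-view numbers (not data from the studies mentioned): under a Zipf law, log frequency falls roughly linearly with log rank, so the estimated slope stays nearly constant.

```python
import math

# Made-up rank -> page-view counts, roughly proportional to 1 / rank.
views = {1: 1_000_000, 2: 520_000, 3: 330_000, 10: 98_000, 100: 11_000}

for rank, freq in views.items():
    if rank > 1:
        slope = (math.log(freq) - math.log(views[1])) / math.log(rank)
        print(rank, round(slope, 2))   # roughly -1 everywhere => Zipf-like
```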
In the next lecture we will dive deeply into the topics of web crawling and duplicate detection and see how that works. Thank you for your attention, thanks for the additional quarter of an hour that we got, and see you next time.
This lecture provides an introduction to the fields of information retrieval and web search. We will discuss how relevant information can be found in very large and mostly unstructured data collections; this is particularly interesting in cases where users cannot provide a clear formulation of their current information need. Web search engines like Google are a typical application of the techniques covered by this course.
10.5446/357 (DOI)
Alright, let's begin with our ninth lecture on information retrieval and web search engines. Professor Balke is not here today because he has been invited to give a talk at the national librarians' conference, which is currently being held in Berlin. So it is all up to us today, but I think that won't be a problem. Let's start with the discussion of the homework, also known as exam preparation: I will go through the last exercises and say a few words about each, and that should be enough.

First question: what is the F measure used for, and what is the intuition underlying it? You usually need the F measure when you want to evaluate the retrieval quality of a given information retrieval system. This is typically done by comparing the actual result set returned by the system for some query to a manually prepared reference result set, using measures like precision and recall. Since precision and recall are usually somewhat in conflict, you can get very high recall at low precision and vice versa, some people would like a joint measure that combines both into a single figure, and that is what the F measure is for. It is simply the harmonic mean, or a weighted harmonic mean, of precision and recall. The idea is that if both precision and recall are high, the F measure is also high, and if either of them is low, the F measure will be low as well. So a good F measure means that precision and recall are both at acceptable levels. By the way, we are discussing homework number five now, because last week we did not go through any homework since I wasn't there; so we cover the exercises of the last two weeks and then move on to the new material.

Okay, second exercise from homework five: when drawing precision-recall curves for ranked lists, we have seen a sort of sawtooth shape in the diagram, with precision on one axis and recall on the other (it does not really matter which way around). The idea is: when you have a ranked result list, then for every position in this list you evaluate the precision and recall over the documents ranked at or above that position and draw a point in the diagram. For example, if the first document is relevant, the precision after the first step is 100% and the recall is some value below one, because there are usually several relevant documents. Then it might happen that the second document is not relevant, so the recall stays the same but the precision drops, which gives the next point. If the third document is relevant, recall and precision both increase again, and this produces the sawtooth shape. Usually one applies some kind of smoothing, or averages over many queries, to get a nicer diagram. So the curves look a bit strange, but this really comes from the discrete evaluation of positions in the ranked result list.
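A minimal sketch of both ideas, with a made-up ranked list and relevance judgements; beta = 1 gives the usual balanced F1 measure.

```python
def precision_recall_points(ranked, relevant):
    """Precision and recall measured after each position of a ranked list;
    plotting these points gives the sawtooth-shaped curve."""
    points, hits = [], 0
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
        points.append((hits / len(relevant), hits / rank))  # (recall, precision)
    return points

def f_measure(precision, recall, beta=1.0):
    """Weighted harmonic mean of precision and recall (F1 for beta = 1)."""
    if precision == 0 and recall == 0:
        return 0.0
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

print(precision_recall_points(["d1", "d2", "d3", "d4"], {"d1", "d3", "d9"}))
print(f_measure(0.5, 0.75))  # 0.6
```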
Alright, next one: what does the cluster hypothesis state, and how can it be exploited for information retrieval tasks? The cluster hypothesis basically says that if two documents are similar, that is, close together in the document space, the LSI space, or whatever space we are using, then they tend to be either both relevant or both irrelevant to a given query. The idea is that we can model our document space in such a way that there are groups of relevant and groups of non-relevant documents, and relevance and non-relevance are not mixed within the clusters occurring in the space.

How can this property be exploited? We have seen some examples: we can cluster our result sets into different groups; if one relevant document from a cluster is returned, we can extend the result to all documents in that cluster to improve recall; we can use it for scatter/gather navigation and all kinds of browsing. So there are many things one can do with clustering in IR, and they usually rely on the cluster hypothesis being true. Usually it is true, more or less, but it depends on the document space used and the similarity measure applied; there is no guarantee that it holds for a given collection.

Next one: how can I determine whether a clustering is good? We have seen two ways. The first is a comparison with some external reference clustering, very similar to evaluating precision and recall: given a reference clustering designed by humans, we compare it to the clustering returned by some algorithm and check, for all pairs of documents, whether they are treated the same way by both. If the algorithm says two documents belong to the same cluster but the manual reference clustering says they should not, that is an error, and we simply count how many errors we make and how many pairs we get right. The second way does not require an external reference clustering: we measure how similar the documents within the same cluster are, the intra-cluster similarity, which should be high, and how similar documents from different clusters are, the inter-cluster similarity, which should be as low as possible, because otherwise those documents should have gone into the same cluster. These measures let you check whether a clustering, or an algorithm you designed or tuned, gives reasonable results, because usually you do not have the time, or do not want to make the effort, to create a manual reference clustering; the intra- and inter-cluster comparison at least gives a hint. Of course it always depends on the similarity measure chosen, which should relate in some way to the human understanding of document similarity; usually some kind of cosine similarity in the vector space model works quite well here. But it is no guarantee, just a hint of whether a clustering is good or rather bad.
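Here is a minimal sketch of that second kind of evaluation, with tiny made-up document vectors; a good clustering should show high average intra-cluster similarity and low similarity between cluster centroids.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def intra_inter_similarity(clusters):
    """clusters: list of lists of document vectors.
    Returns (mean intra-cluster similarity, mean inter-centroid similarity)."""
    centroids = [np.mean(docs, axis=0) for docs in clusters]
    intra = [cosine(doc, c) for docs, c in zip(clusters, centroids) for doc in docs]
    inter = [cosine(centroids[i], centroids[j])
             for i in range(len(centroids)) for j in range(i + 1, len(centroids))]
    return float(np.mean(intra)), float(np.mean(inter))

clusters = [[np.array([1.0, 0.1]), np.array([0.9, 0.2])],
            [np.array([0.1, 1.0]), np.array([0.2, 0.8])]]
print(intra_inter_similarity(clusters))  # intra close to 1, inter much lower
```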
Okay, next one is the k-means algorithm for clustering. The idea is simply to identify a fixed number of clusters in some document space, and it is a rather heuristic approach. We start with k initial seeds, some documents in the space, each assumed to be the center of a new cluster; then we assign every other document to the cluster whose center is closest, recompute the center point of each cluster, and iterate until we have some kind of stable solution, or we have done enough iterations, or the changes between iterations are small enough; there are many possible stopping conditions. The underlying goal is to minimize inter-cluster similarity and maximize intra-cluster similarity; we saw a mathematical formulation of this in the lecture, and the heuristic k-means algorithm is basically how it is implemented. Under some conditions one can show that the algorithm converges to a solution of that optimization problem, at least a locally optimal one.

Next one was the dendrogram and how it can be used. A dendrogram is basically a tree which depicts how similar different clusters are. You start at the bottom level, where every document in the collection forms a cluster of its own, and you have some measure of cluster similarity. Then you look for the most similar pair of clusters among the initial one-document clusters; we saw different measures of cluster similarity, each with advantages and disadvantages. Suppose two clusters are the most similar pair; then you join them at a similarity of, say, 0.91. You continue, comparing the merged cluster to all the others and the remaining clusters to each other, and find the next most similar pair, say two clusters joined at a similarity of 0.85, and so on, until the whole collection has been merged into a single cluster. Afterwards you decide where to cut the clustering into a reasonable compromise, for example by drawing a horizontal line through the tree, and you end up with however many clusters lie on that line, say five. So the dendrogram is simply a visualization of the hierarchical cluster structure of a document collection, depending on the document similarity measure and on the similarity measure between clusters.
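Going back to the k-means procedure just described, here is a bare-bones sketch; the data points and the fixed number of iterations are made up, and a real implementation would add a proper convergence test. The merge sequence behind a dendrogram could be produced analogously by repeatedly joining the most similar pair of clusters.

```python
import numpy as np

def k_means(points, k, iterations=20, seed=0):
    """Pick k random seeds, assign every point to the closest centroid,
    recompute the centroids, and repeat a fixed number of times."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iterations):
        # distance of every point to every centroid, then pick the nearest one
        distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = np.argmin(distances, axis=1)
        # recompute each centroid as the mean of the points assigned to it
        centroids = np.array([points[labels == j].mean(axis=0)
                              if np.any(labels == j) else centroids[j]
                              for j in range(k)])
    return labels, centroids

points = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9]])
print(k_means(points, k=2))
```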
Okay, homework for this week. What is the underlying idea of Rocchio's algorithm for pseudo relevance feedback? The general idea is that the user gets back a list of results from some retrieval algorithm, usually the vector space retrieval model, and is then able to mark some of these results as relevant or as not relevant. This feedback is used to modify the initial query point, the query vector used by the vector space model: the query vector is shifted away from the region of documents marked as non-relevant and towards the documents marked as relevant. So if this is the initial query vector, and the user said these documents are not relevant and those documents are relevant, the idea is to build a vector pointing from the non-relevant towards the relevant documents and add it, suitably weighted, to the initial query vector; and then, if we are lucky, we get a better result. That is how it is done in Rocchio. Yes? Ah, no, that is a mistake in the question: Rocchio is not pseudo relevance feedback, it is real relevance feedback, sorry. And relevance feedback is not bound to the probabilistic or the vector space model, it is not bound to any model; it just means that the user gives some feedback about the results and the algorithm incorporates this feedback in some meaningful way. This could be done by moving the query vector, as in Rocchio, by re-estimating probabilities in the probabilistic model, or in any other way that fits the model used; there is no general rule.

Pseudo relevance feedback, of course, is different, because there is no user interaction. The idea is to assume that the documents ranked highest by the algorithm really are relevant, as if the user had given exactly that feedback, and then to compute a better result from it. This is quite reasonable: if the query is general enough and the collection is large, there are usually some safe bets for the algorithm, say the first three results, which are so similar to the query that there is no point in doubting their relevance. So it usually works. Pseudo relevance feedback worked better in the days of closed document collections like libraries; today, with the spam problem, it is quite easy for spam to end up at the top of the result list and therefore among the assumed-relevant documents, so pseudo relevance feedback is harder in web retrieval than it was in library settings. That is one of the main disadvantages; another is so-called topic drift: remember the Apple example, if the whole internet talks about the Apple company and nobody talks about the apple tree, you are very likely to shift your results towards the company and lose everything about the tree, because it is so predominant. The main advantage is that you can focus your results on the mainstream interpretation of the query and filter out results that are rather odd or not the most popular way of seeing things. So for mainstream topics you usually get good results with pseudo relevance feedback, provided you have some way to detect spam and keep it out. That is basically the idea of pseudo relevance feedback.
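A minimal sketch of the Rocchio update; the weights alpha, beta and gamma are common default choices, not values given in the lecture, and the vectors are made-up toy data.

```python
import numpy as np

def rocchio(query, relevant, non_relevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query vector towards the centroid of the documents marked
    relevant and away from the centroid of those marked non-relevant."""
    q = alpha * np.asarray(query, dtype=float)
    if len(relevant):
        q += beta * np.mean(relevant, axis=0)
    if len(non_relevant):
        q -= gamma * np.mean(non_relevant, axis=0)
    return np.clip(q, 0.0, None)   # negative term weights are usually dropped

query = np.array([1.0, 0.0, 1.0])
relevant = np.array([[1.0, 1.0, 0.0], [0.8, 0.9, 0.1]])
non_relevant = np.array([[0.0, 0.0, 1.0]])
print(rocchio(query, relevant, non_relevant))
```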
Next one: last week we talked about classification methods in information retrieval, so what are typical applications? We have seen some examples and will see more today. A typical application is spam detection: we assume the document collection can be split into spam and not spam, label some documents as training examples, and use some algorithm to decide for an unknown document whether it is more likely spam or a real document. This is usually done by comparing similarities: if a new document is very similar to known spam documents, chances are good that it is also spam. Another application is classifying documents by topic to support a more focused kind of search. For example, if you have a search engine with ten predefined broad topics, say sports, politics, computers and so on, you get a better understanding, as a search engine, of what documents are about, you can try to estimate whether the user's query belongs to one of these topics, and you can use that to focus or even re-sort the results. These are typical applications of classification in information retrieval; of course there are many more, but that is usually what is done.

Naive Bayes was the classification method we saw in detail last week. The idea was to estimate the probability that some document belongs to some class, given that the document has a known representation, for example that it contains certain terms, has a certain length, or whatever other characterization we use. So we have a characterization of a new document and want the probability that it belongs to some predefined class. We can use Bayes' theorem to swap these probabilities around: if we instead need probability statements about the characteristics of documents given their class membership, we can estimate those from a training collection, because there we know which documents belong to the class and which do not, and we simply look at the properties of the documents inside and outside the class. Then, by Bayes' theorem, we can use this information directly to estimate the class probability for new documents. That is basically the idea.

Why is it called naive? Because it assumes that the occurrences of different terms, or different document properties, are independent. We discussed this before for the vector space model and for probabilistic retrieval; it is the same issue. It makes the computation and the model really easy, but of course synonyms are definitely not independent, nor are antonyms or just semantically related words: cat and dog are quite likely to occur together, and in general terms from the same topical area have a higher probability of co-occurring than they would have by pure chance under the independence assumption. So it is a problem in theory; however, naive Bayes classification and probabilistic retrieval all work quite well, so either the independence assumption is not a big problem after all, or the whole thing simply works as a heuristic and the theory behind it does not matter that much. Nobody knows, but it does not matter: there is a nice idea underlying these approaches and they work, and that is usually all that information retrieval people care about.
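A tiny multinomial naive Bayes sketch for the spam example; the training documents are made up, and the add-one (Laplace) smoothing is an implementation detail I have added so that unseen terms do not zero out the product.

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """docs: lists of tokens, labels: class names. Returns priors, per-class
    term counts and the vocabulary."""
    classes = set(labels)
    prior = {c: labels.count(c) / len(labels) for c in classes}
    counts = {c: Counter() for c in classes}
    for doc, label in zip(docs, labels):
        counts[label].update(doc)
    vocab = {t for doc in docs for t in doc}
    return prior, counts, vocab

def classify_nb(doc, prior, counts, vocab):
    """Pick the class with the highest log-probability under the naive
    independence assumption (add-one smoothing for unseen terms)."""
    def score(c):
        total = sum(counts[c].values())
        return math.log(prior[c]) + sum(
            math.log((counts[c][t] + 1) / (total + len(vocab))) for t in doc)
    return max(prior, key=score)

docs = [["cheap", "viagra", "now"], ["meeting", "agenda", "monday"],
        ["cheap", "pills"], ["project", "meeting"]]
labels = ["spam", "ham", "spam", "ham"]
model = train_nb(docs, labels)
print(classify_nb(["cheap", "meeting", "pills"], *model))  # 'spam'
```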
The last one was adaptive boosting: what is it, what is it used for, and how does it work? Adaptive boosting is a kind of meta-technique for improving a classification method, and it works through an iterative process. We have a training set with labeled examples and train a classifier on it, for example a naive Bayes classifier that learns some probabilistic model. Then we look at which examples in the training set, or in some held-out test set, are classified wrongly by this model, and we re-weight the training examples: wrongly classified examples get a higher importance, correctly classified ones a lower importance. Then we train again, say with naive Bayes, now with a strong emphasis on the mistakes of the first model. We obtain a whole series of models, which are all used together to classify our data, and by repeatedly re-weighting correctly and incorrectly classified items and combining all these models, we get a better classification overall: the weaknesses of one classifier are fixed by the next ones, and together they form a stronger classifier. That is the idea. So if you ever find that some classification algorithm does not seem to fit your data, try adaptive boosting; you can usually get a performance increase of five to ten percent, I would say, depending of course on your document collection, but usually adaptive boosting gives some advantage.
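A bare-bones sketch of that re-weighting loop; the data, the two hand-made decision stumps and the number of rounds are made up, and real AdaBoost implementations handle degenerate error values more carefully.

```python
import numpy as np

def adaboost(X, y, weak_learners, rounds):
    """X: (n, d) data, y: labels in {-1, +1}, weak_learners: callables mapping
    X to predictions in {-1, +1}. Returns the chosen learners and their weights."""
    n = len(y)
    w = np.full(n, 1.0 / n)                        # start with uniform example weights
    chosen, alphas = [], []
    for _ in range(rounds):
        errors = [np.sum(w * (h(X) != y)) for h in weak_learners]
        h = weak_learners[int(np.argmin(errors))]  # best weak learner right now
        err = max(float(min(errors)), 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)      # weight of this learner
        w *= np.exp(-alpha * y * h(X))             # boost the misclassified examples
        w /= w.sum()
        chosen.append(h)
        alphas.append(alpha)
    return chosen, alphas

def predict(chosen, alphas, X):
    return np.sign(sum(a * h(X) for h, a in zip(chosen, alphas)))

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
stumps = [lambda X: np.where(X[:, 0] > 1.5, 1, -1),
          lambda X: np.where(X[:, 0] > 0.5, 1, -1)]
print(predict(*adaboost(X, y, stumps, rounds=3), X))   # [-1. -1.  1.  1.]
```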
Alright, now to the topic of today's lecture. It is again about supervised classification, and today we will see probably the most successful, or at least the most advanced, classification algorithm around: so-called support vector machines. They are simply another approach to supervised classification, and they will be our topic today. To briefly recap last week: supervised classification means that some human has labeled example documents according to which classes or topics they belong to, and we use this information to train a model that can classify new documents in line with that manual classification. For example, if a user marked messages as spam or not spam, as you all know from your email program, we learn a classifier from this information and can then decide for a new email whether it is spam or not, and with continuous feedback from the user the classifier gets better and better over time. We have already seen three algorithms for this; now we turn to support vector machines. There are different kinds of them; we start with the simplest, so-called linear support vector machines, and then go one step further and look at more advanced techniques and how they work.

Okay, let's start with a brief problem definition. We will make some assumptions that make our life much easier: we will do a little math over the next 10 or 15 slides, but it is all not too difficult, partly because of these assumptions. First, we assume a binary classification task: documents either belong to the class at hand or they don't, and there is only a single dimension of classification, so spam or not spam, relevant or not relevant, belongs to the animals topic or does not, whatever. The second assumption, common to basically all classification algorithms, is that any item or document to be classified can be represented as a d-dimensional real vector: we have a space of d dimensions, many more than I can draw, each document is a point in this space, and each coordinate is a real number. The task then is to find a linear classifier, a so-called hyperplane, that divides the whole item space into two parts: on one side all items belonging to the class, on the other side all items not belonging to it. We simply try to cut the space into two halves.

Here is an example, a two-dimensional training set: these could be our two axes, and each document is a point in this space, which could be the vector space, for example. The task for a linear classifier is to separate the space by a straight line, for example this one, or this one, or this one. There are many possible ways to do this, and all of them are linear classifiers: a line in two dimensions, a plane in three, and in higher-dimensional spaces what is called a hyperplane, basically a linear geometric figure that divides the space into two halves. So here are some different ways to do it, next one, next one, next one, and now the question arises: there are so many ways, which one would be best? What is your intuition, which line would you use? What about this one? Good idea, too close to the points. What about this one? Isn't it great? And this one? Yes, I see we have the same intuition. Basically, the idea is to find a line that divides the space while leaving room on both sides, to make the classification safe. It could easily be that there are points very similar to our training examples that we have not observed yet, and if our classifier lies very close to the training examples, such a new data point, very close to one of the classes, could easily get misclassified by a line that hugs that class too tightly. So the general idea is to find a classifier that sits somewhere in the middle and leaves some space, just to be on the safe side given our training data.

What we will do on the next slides is derive a mathematical formulation of how to find such a good line and what properties it should have. Usually, when you read about support vector machines, you are given a large formula with variables you cannot make sense of, and the goal of this lecture is to show you where the formal model behind support vector machines really comes from. It is basically just this simple idea, but nobody tells you about it,
because usually, when you work with support vector machines, you use a formal definition of the problem that is easy to solve in terms of mathematical optimization but not easy to understand. So we will go the other way around: start with the understanding and then arrive at the complex representation.

Okay, as we have seen, the idea of the margin seems to be important for judging the quality of a linear classifier. The margin simply means the space between the separating line and the points it is closest to: for a given linear classifier, there is a distance to the closest point on one side and a distance to the closest point on the other side, and taken together this is the margin, the width by which the boundary could be grown without hitting a data point. The more margin, the better the classifier; easy. Different separating lines give different margins, and so we arrive at the notion of a maximum margin classifier: the linear classifier that gives the maximum margin. Since maximum margin seems to be a good idea, our task is to find the maximum margin classifier for a given data set. Actually, the maximum margin classifier is the simplest kind of support vector machine, also called a linear support vector machine. Support vector machines may sound complicated when you read about them or use libraries implementing them, but essentially they are just about the idea of maximum margin classification.

Let's assume for now that such a maximum margin classifier always exists, that is, that our data can always be divided by a line, a plane, or a hyperplane into two areas. It could of course happen that a data point of one class sits in the middle of the other class, because of measurement errors or because some human misclassified it; then there is no way to draw a line between the blue and the green points at all. Let's assume for the moment that a separation exists; later I will explain how to cope with noise or errors in the data. Mathematically one speaks of linear separability; this is just the assumption we are making for now.

Another important concept are the so-called support vectors, which is where support vector machines get their name. Support vectors are exactly those data points that push against the margin, that touch it: maybe this one, this one, and this one. These are the support points, or, if you prefer to see points as vectors, the support vectors, and a support vector machine tries to find the maximum margin classifier, which is directly determined by the support vectors.

We have seen that maximum margin classification is an intuitive idea: a large margin between the two classes sounds good. But there are more reasons why it is a good idea. First, the largest margin guards best against small errors: as I explained, if there are small deviations in our data and the line lies very close to the original data points,
then very similar data points can easily get misclassified; leaving space on both sides always gives some safety margin against such errors, which is good. Another reason, which is really important, is that this approach is quite robust against changes in the data: going one slide back, you can see that adding new training points that do not really change the picture does not change the maximum margin classifier, the linear support vector machine, one bit, because it depends only on the support vectors. The linear support vector machine is defined only by the points that are most critical for the classification; it does not look at points lying safely far from the boundary, whose classification is beyond doubt, but concentrates only on those that seem hardest to classify. That is also a good property. There are theoretical arguments as well, which I will not go into now: there are people who write very thick books about the theory of classification, assume the data has some underlying probability distribution, and prove that maximum margin classification minimizes the classification error under certain assumptions. So even from a theoretical perspective, support vector machines are a good idea. And, probably most important for us, they just work well: support vector machines are usually among the classifiers that work best for any given kind of data. Of course there are exceptions, but for real-world data they usually perform very close to the best available classifier, or simply are the best available classifier.
Alright, now we have to do some math, because we want to formalize the problem of finding the maximum margin classifier: it is not a good approach to simply draw a picture and put a line somewhere, we need an algorithm to do this for us, and an algorithm needs clear instructions. Let's define our training data first. We assume there are n training examples, so n data points in our space that have been labeled by some human as belonging to the class or not. Each training example is a pair of two components, y_i and z_i: for the i-th training example, y_i is the real vector representing the item or document to be classified, and z_i is the class label, where minus one means the item does not belong to the class and plus one means it does. First class, second class, it does not matter; it is a binary classification. Here is an example: this point has coordinates (-1, 1) and does not belong to the class, so I have drawn a minus in the picture, and this one has coordinates (5, -1) and belongs to the class, so it gets a plus. With maximum margin classification we would expect the dividing line to end up somewhere around here, or maybe here; I have no idea, we have to calculate it, and that is what we do next.

First of all, what is a valid linear separator? How can we find out which lines in the space actually split our data set into the two classes? For that we need to recall a bit of linear algebra; some of you may have learned this in school, some may remember it from elementary lectures here at the university. In general, any hyperplane, so a line, a plane, or its higher-dimensional analogue, can be defined by a d-dimensional row vector w and a scalar b, a single number. The row vector determines the direction the hyperplane is perpendicular to, and the scalar b is related to the shift of the hyperplane away from the origin of the coordinate system. Formally, the hyperplane consists of all points x in the d-dimensional space for which w · x + b = 0, that is, the scalar product of x with our row vector plus b is exactly zero; these are exactly the points lying on the hyperplane. Multiplying out the scalar product, this is simply the sum over all components of w and x multiplied pairwise, plus b, and it should give zero. So this is the definition of a hyperplane; w is a normal vector of the hyperplane, which means it is perpendicular, or orthogonal, to it, and b is the shift from the origin. In our example the drawn line follows such an equation: all points (x, y) satisfying the equation lie on the line, all points giving a value larger than zero lie on one side of the hyperplane, and all points giving a value smaller than zero lie on the other side.
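A minimal sketch of this classification rule, reusing the two example points from above; the concrete line x − 2y + 1 = 0, i.e. w = (1, −2) and b = 1, is made up, since the lecture does not give the actual equation of the drawn line.

```python
import numpy as np

def side(w, b, x):
    """+1 if w·x + b > 0, -1 if it is < 0, 0 if x lies exactly on the plane."""
    return int(np.sign(np.dot(w, x) + b))

def geometric_margin(w, b, points, labels):
    """Smallest signed distance of any labeled point to the hyperplane;
    it is positive only if the plane separates the two classes correctly."""
    w = np.asarray(w, dtype=float)
    return min(z * (np.dot(w, x) + b) / np.linalg.norm(w)
               for x, z in zip(points, labels))

w, b = np.array([1.0, -2.0]), 1.0            # assumed toy line x - 2y + 1 = 0
points = [np.array([5.0, -1.0]), np.array([-1.0, 1.0])]
labels = [+1, -1]                            # (5,-1) is a plus, (-1,1) a minus
print([side(w, b, p) for p in points])       # [1, -1]
print(round(geometric_margin(w, b, points, labels), 3))
```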
So the equation divides the space into two halves, and we have one formula to determine on which side a point lies; quite easy. In the following we will only talk about the two-dimensional case, because for hyperplanes in higher-dimensional spaces everything is exactly the same, only the scalar product gets longer.

We can start by stating two constraints that every separating hyperplane of a maximum margin classifier must satisfy. For any training example: if the example does not belong to the class, so its label is minus one, then plugging it into the hyperplane equation must give a value smaller than zero; and every positive example must give a positive value. So: negative label means negative value of the location equation, positive label means positive value; negative examples on one side, positive examples on the other. Of course there are still many hyperplanes satisfying this, but whichever plane we use, it has to satisfy these two constraints. Next we will add more constraints, so that we finally end up with exactly one hyperplane, namely the maximum margin hyperplane.

If some hyperplane is a valid separating line for our data set, then there must be two scalars, r-plus and r-minus, both larger than zero, such that adding r-minus to, or subtracting r-plus from, the hyperplane equation makes it exactly touch our support vectors. No matter where we put the separating hyperplane, there is some room to the left and some room to the right, and if we shift it in either direction, at some point we touch a data point on each side. r-minus says how far we have to shift the plane to one side to meet the first negative example, r-plus is the corresponding shift to the next positive example, and together, in some sense, they correspond to the margin. I should note that r-plus and r-minus are not the margin width itself: if we took the touching points and computed the Euclidean distance, we would get something different, because r-plus and r-minus are just shift constants in the equation, and the coordinates contribute to the actual distance depending on the length of w; you might remember this from linear algebra. So these are shift constants in the equations, not actual lengths in the space; we will see later why this matters.

Now let's go on. Assume we have some separating hyperplane for our data set and know the scalars r-plus and r-minus describing the room to the left and to the right. We now want a normalized hyperplane that lies exactly in the middle between the support vectors on the left and those on the right, because, as we have seen, it makes no sense to draw the line closer to the positive examples than to the negative ones or the other way around; we want the separating line right between the two classes. We can do this by taking the average of r-minus and r-plus as the shift in both directions, so that the shift constants to both sides become equal, and after defining b-prime as b plus this average shift, the resulting hyperplane is still a valid separating hyperplane, because it
lies exactly in the middle between the two hyperplanes touching the positive and the negative examples. Our initial hyperplane may have been too close to the negative examples, and by redefining the shift constant we move it right into the middle. So now, given any valid separating hyperplane, we have a way to obtain a normalized one that lies exactly in the middle.

We can therefore safely assume that we are given a hyperplane lying in the middle, and next we want to calculate how large its margin really is, because in the end we want the hyperplane with the maximum margin. To make this easier we normalize once more: we divide w, the shift constant b-prime and the margin shift r-prime all by r-prime, so that we get a unit margin shift. Dividing the whole equation by a positive number does not change the hyperplane at all: the right-hand side is zero, and zero divided by a number is still zero, while dividing the left-hand side by a positive number leaves the solution set unchanged. So we define w-prime as w divided by r-prime and the new shift constant as b-prime divided by r-prime, and this is still a valid separating hyperplane, but now the shift to the bounding planes is one: our new hyperplane sits in the middle, and the hyperplanes touching the support vectors are the same plane with a one added and with a one subtracted. Again just a normalization, and now, given any hyperplane, we can find a normalized one with unit shift to the left and to the right.

Let's put it all together: if a valid separating hyperplane exists at all, then there always exists one that is still valid, whose margins to the positive and the negative examples have the same width, and whose bounding hyperplanes are shifted away by exactly one on each side. What is the benefit of all this? Since for any valid separating hyperplane there is such a normalized counterpart, we can limit our search for the best hyperplane to the planes satisfying these constraints, which makes the optimization problem easier. We forget about the class of all possible hyperplanes and concentrate on those with unit margin in both directions, because such a hyperplane always exists, and, again, it is a good idea anyway to use a linear classifier with equal space on both sides. So this was a formal preparation. Our search space, since we want to go through all possible hyperplanes and find the one with the maximum margin, is the set of all row vectors w, each perpendicular to some hyperplane, and all shifts b, such that the plane is a separating hyperplane, splitting the space into positive and negative examples. And the constraint, exactly because of the normalization we have done, is that the hyperplane touches a negative example precisely where the equation
equals minus one, because of the shift of minus one we introduced, and correspondingly there is a positive example exactly one shift away on the other side. Now that we have specified these constraints, the question is: what really is the margin of such a hyperplane? That is what we do next.

We can use a result from linear algebra. Assume we are given a hyperplane satisfying this equation, with the hyperplanes touching the negative and positive examples shifted by minus one and plus one. From linear algebra we know that the distance of a hyperplane to the origin of the coordinate system, in the Euclidean sense, is exactly the absolute value of b divided by the length of the vector w. We will not prove this here, just take it as given. Because of this, the distance of one bounding plane to the origin is the absolute value of b plus one divided by the length of w, and the distance of the other bounding plane is the absolute value of b minus one divided by the length of w. Since b is the same in both and the two shift values differ by exactly two, the margin width, the distance between the two bounding planes, is two divided by the length of w. So if we want to maximize the margin width, subject to the constraints from the previous slide, our goal must be to minimize the length of the vector w.
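To summarize the geometric argument just made in a cleaned-up form (same notation as in the lecture):

```latex
% Distance of the hyperplane w·x + b = 0 to the origin, and the resulting margin
% between the two bounding planes w·x + b = +1 and w·x + b = -1:
\[
  \operatorname{dist}\bigl(\{\mathbf{x} : \mathbf{w}\cdot\mathbf{x} + b = 0\},\, \mathbf{0}\bigr)
    = \frac{|b|}{\lVert \mathbf{w} \rVert},
  \qquad
  \text{margin}
    = \frac{|(b+1) - (b-1)|}{\lVert \mathbf{w} \rVert}
    = \frac{2}{\lVert \mathbf{w} \rVert}.
\]
% Hence maximizing the margin is the same as minimizing the length of w.
```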
So we finally get the following optimization problem, whose solution is the one hyperplane with the maximum margin in our space. We want to find parameters w and b, a hyperplane dividing our space, that maximize the margin width, two divided by the length of w, and at the same time satisfy the constraints that the closest positive example is exactly one shift away on one side and the closest negative example exactly one shift away on the other, with one point actually touching each shifted plane, so one support vector on each side. Since our goal is to maximize the margin, the margin will always be made as large as possible, which means we do not actually need those two touching constraints: maximizing the margin forces equality there anyway, because it never pays to consider a hyperplane whose distance to the next support vector is larger than one. So these constraints become equalities automatically and can be dropped, and the problem gets simpler: we maximize the margin over the parameters in our search space, and the constraints are just that every negative training example lies on one side of the plane with a distance of at least one, and every positive example on the other side with a distance of at least one, so that there is enough space between our training examples and the dividing hyperplane.

The next step is to make the objective a bit simpler. If we want to maximize two divided by the length of w, we can just as well minimize the length of w itself: whenever one solution has a larger value of the first expression, it has a smaller value of the second, because the function is monotone. We can go further and minimize 0.5 times the squared length of w. That may not seem to help, but remember that the length of w is the square root of the sum of the squared coordinates, and taking square roots is an expensive operation, so squaring gets rid of the root; and squaring a positive number is again a monotone operation, so the optimization problem stays the same. The factor 0.5 does not change anything either; there is no deep reason for it, except that in mathematical optimization there is a standard form for these problems and people put a 0.5 in front of the term to be minimized, so we do the same in order to use existing algorithms.

So this is now the problem we want to solve: among all hyperplanes satisfying the constraints, take the one with the minimum value of one half times the squared length of w; that is the maximum margin hyperplane. To get a simpler representation we can also combine the two constraints into a single one by putting the label z_i in as a factor, so that we no longer distinguish the negative and positive cases. Let's check this for the negative examples: the combined constraint reads minus one times w times y_i plus b, minus one, greater than or equal to zero, which becomes minus w y_i minus b minus one greater than or equal to zero; moving the one to the other side... ah, the sign looks strange; here is a parenthesis, yes, this must be a plus, thank you for the hint; multiplying the whole thing by minus one then gives w times y_i plus b less than or equal to minus one, which is exactly the original negative-example constraint, and the same works for the positive examples. So this is just a compact way of expressing the case distinction. This is now our final optimization problem, the one we need to solve to find the maximum margin classifier for a given training set, with a single constraint per training point. It looks pretty simple, and as you have seen over the last 13 slides, the derivation takes a few steps, but every single step is really simple.
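Written out in one place, in the lecture's notation (y_i are the training vectors, z_i in {−1, +1} the labels), the final primal problem is:

```latex
\begin{align*}
  \min_{\mathbf{w} \in \mathbb{R}^d,\; b \in \mathbb{R}} \quad
    & \tfrac{1}{2}\,\lVert \mathbf{w} \rVert^2 \\
  \text{subject to} \quad
    & z_i \,(\mathbf{w} \cdot \mathbf{y}_i + b) - 1 \;\ge\; 0,
      \qquad i = 1, \dots, n.
\end{align*}
```

In practice one rarely codes the solver by hand; as a small illustration (not part of the lecture), a library such as scikit-learn can fit this maximum-margin classifier directly. A very large C approximates the hard-margin case assumed here, and the four toy points are made up.

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [0.5, 0.5], [3.0, 3.0], [3.5, 2.5]])
z = np.array([-1, -1, 1, 1])

clf = SVC(kernel="linear", C=1e6)   # (nearly) hard-margin linear SVM
clf.fit(X, z)
print(clf.coef_, clf.intercept_)    # the learned w and b
print(clf.support_vectors_)         # the support vectors defining the margin
```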
would have started with this expression you would have never known what we are doing here so i hope you now know at least have an intuition what we are doing here okay this is our optimization problem how can it be solved so now we need some kind of numerical optimization algorithm so fortunately there is some discipline in mathematics which deals with so-called quadratic programming problems qp for short and there are many many algorithms to find the optimal solution of problems that look like this and this is also the reason why we added the 0.5 here that's because all the people in the qp area just define their problems this way okay so now we are able to find the maximum margin hyperplane using some some standard algorithms there are also big books on this how this can be done this won't be the topic of this lecture just use some library for solving this kind of problem or if you have heard some kind of numerical computation lecture they should have dealt with that is any one of you there who has some knowledge on qp problems so then just trust me or look it up if you like to it's it's quite quite complicated stuff but these problems can be solved in a standard way okay let's go on and then have a break no I think it's better to have a break now we will see again in just again in five minutes okay then let's go on so we finally arrived at some optimization problem which we can solve and when we solve that we find the maximum margin classifier which is the linear support vector machine we would use to classify a data set so unfortunately that's not the end of the story because mathematicians are quite clever and they found out that if we want to solve this problem we've just just seen it might be easier to solve a different problem which is equivalent to the one just shown yeah we won't go into details here it's it's just that you've seen it and and heard the terms and don't and don't get confused if you read something about it because this is usually the representation of the problem you will find in books so they simply tell you support vector machines are a great idea and you just have to solve this maximum maximization problem and you have no idea where all the this comes from so the trick is called duality and each optimization problem or each quadratic optimization problem has a so-called dual representation which can be derived by introducing Lagrange multipliers we've already used them in Rockio relevance feedback and then we can also use some some kind of data transformation it's all quite complicated I've done this myself some time ago and I'm not a mathematician and found this really confusing but actually I found out how it works but you don't need to if you want to look it up in a book so the thing really is you should remember that there is some kind of dual form of the problem to be solved which looks like this and in this representation we are going to maximize n variables so alpha here is a vector consisting of n numbers n real numbers so in our initial formulation we wanted to find d dimensional vector representing our hyperplane at the shift and now we want to find alphas you should also remember n was the number of training examples so now we have one number per training example which usually is much smaller than the number d when we find to want to find the d dimensional row vector so the problem is usually simpler when we take a minute when we define it in this form so the term to be to be maximized is this one here so we want to maximize the sum of all the alphas 
minus some kind of strange product here doesn't matter where it comes from and the constraints to be satisfied is that all our alphas are greater than or equal to zero and for any training example this equation no any for any a this should be for any i this should be larger greater than or equal to zero and also this sum of products should be zero so it's some some constraint on how our training examples are linearly scaled together in some way and this should be zero so you directly arrive this from the initial problem if you use this duality mechanism but it doesn't matter here how this is done so it looks it it looks looks a bit more complicated but usually it is easier to solve when we formulate the problem in this way so one very important property of this optimization problem is that you have seen each alpha variable corresponds to one training example and any solution or any optimal solution to this optimization problem has the property that if for the in this solution one of these variables is larger than zero then the corresponding training example is a support vector so another important thing is that because for all non support vectors so all points in the training space lying far outside and they are not really important for defining the separating hyperplane for all these points these variables are zero and so we can just ignore all training examples that are not support vectors so because all non support vectors have zero values here we can simply ignore all data points that we can rule out as being support vectors which also makes solving the optimization problem much easier so and as you've seen our examples usually most training examples are zero so we don't have to deal with any training examples usually but just a small fraction of it which also makes solving our problem a lot easier okay when using this dual representation we could also give a different representation of our training of our classification function so do you remember the classification function simply is the equation of the hyperplane and it returns a negative value no not minus one returns a negative value no no this is what I wanted to say and negative values for all for all data points lying on the left side lying on the negative side of the space and a positive number for all data points lying on the right or the positive side of the data space so when we use the dual representation one can derive a similar classification function which is just this one here and we take the sign of this value so this directly corresponds to the distance in some way from the separating hyperplane and if for some data point this distance is positive then our classification function returns plus one and for all the other points the classification function returns minus one so this classification function is used to classify new data points x is a new data point I put it into this function and then calculate this value here so the alpha is unknown from the solution of our optimization problem which we have just seen the z is unknown from the representation of our support vectors the y is the labels of our support vectors either positive or negative x is as I said the vector space representation of our new training example and b is a constant that also can be computed directly from the solution of our optimization problem so when we found a solution to the dual optimization problem and get all these alphas we are able to directly derive a classification function from these alphas and again the benefit from using the dual 
representation is that we only need to look at the support vectors because all summands that are zero or non-support vectors do not contribute to our classification function and also they do not contribute for calculating our shift constant b so using the dual representation makes classification a lot easier so this is what I said and you should also note that our classification function and also the definition of b also only depends on scalar products between our initial data points and some support vector points we do not need to do any other computations with our data points to data point to be classified then comparing it to a series of support vector points by taking the scalar product so this is here the case and here we don't even have our data point so this is an important property a key feature which we will use very very soon so we only need scalar products and we only need support vector points okay yeah now we know how we solve classification problems which are linearly separable this was the assumption at the beginning but you know real life isn't that beautiful as you would like to have it usually but it might easily happen that some data point might be located here right in the middle of our positive examples as a negative examples and we have no chance to draw a separating hyperplane between the two classes so this won't be a valid hyperplane because it misclassifies this example so we now we need some way to cope with these exceptions in a reasonable way okay what we can do it what we can do now are so-called soft margins so we do not assume that the classification is crisp that all positive examples are on the right side and all negative examples are on the left side but we assume that the classifier may make some mistakes when drawing the hyperplane so you don't need to divide the space into two or crisp halves but there might be training examples being on the wrong side of course when a training example is on the wrong side of the plane this should introduce some some kind of classification cost or classification error which should be used to to find the best hyperplane used for separating the data so so so the best classification so the best hyperplane separating our data space then is a hyperplane having a margin that is as big as possible and at the same time makes as little errors as possible so of course these are these are two two factors you have to you have to wait in some way so the goal is to make a good compromise between width of the margin and errors in classification all right so and the error simply is the distance of our misclassified training examples from the hyperplane which would make it correctly classified so this would introduce you some some kind of cost which is very similar to the distance so now we can extend our optimization problem a little bit by so-called slack variables these are these better one to better n so for each training example we assume that the training example can be shifted a little bit to the left or to the right and each training example or each misclassified training examples is then shifted into the right direction of the space such that we really have a correct classification okay then the formal definition is as follows we now have these better variables as an optimum as a as a parameter to be optimized so again this is our our initial optimization problem and not the dual problem so now we want to minimize the margin with plus a constant c multiplied with our shift constraints but our slack variables c is a constant 
that is chosen in advanced and that is used to to weight the margin with versus the error we make in classifications so if c is large then making errors in the classification is much more important than having a large a large margin and if c is small then having a large margin is more important than making errors or then than avoiding errors okay then we have to increase our constraints so the better eyes should all be positive or or zero should be positive shifts and we assume that our distance here so this was the constraints constraints saying that all positive examples should be at least one away from our separating hyperplanet both directions and now we losing this we are losing this constraint a little bit saying that of course the distance don't have to be has to be one it also could be smaller it also could be negative and this distance then is is changed by better eye so and if for some training examples let's go back for example if this is our hyperplane then we would need so again here our one hyperplane this is shift one and there's also shift one we won't be able to get a shift of one for this data point because already the distance here is for example say five then we would need to define if this is our eyes training example we need to define better eye equal to five to allow a margin or a margin correction here of the size of five and now we we assume that this data point is located here we would treat this data point as it would be located at this location and introduce a penalty for shifting this data point and now we are able to use our hyperplane because it now makes a correct classification because this point is now located here we shifted it but for for shifting we introduced a cost of five and this is all what's happening here so if the eye's data point gets misclassified with a shift of better eye then the price we pay for for having to use this shift is the penalty constant c multiplied by the strength of this shift so as I said c some positive constant which regulates how expensive errors should be treated in optimization problem again if c is large then making classification errors is a significant factor in our in our value to be minimized so if c is large then we try to avoid errors at any cost possible so in the extreme case we could choose c equal to infinity and then we would have our initial optimization problem and then our problem would all would only be solvable if there is a hyperplane separating the data as it is so by introducing a c which is a number different from infinity we are able to allow for errors and weight them accordingly yes for exactly for every for every training example we have now this constraint this has to be zero before in our initial formulation and all training points that can be classified correctly and exactly the way we did it before this better would be zero because it doesn't make any sense to to take a larger zero because this this would introduce a penalty and for all those data points that cannot be classified correctly we are forced to you to take a better value which is larger than zero and thus we introduce some kind of penalty here and then we have to weight margin with against penalty and arrive somehow at a solution that compromises on both okay yeah with soft margins we don't need to assume linear separate separability anymore and again one can formulate a corresponding dual problem and which looks simply simply like this so you might remember our original dual problem simply doesn't have this constraint here and 
now we simply simply assume our alpha values lie between zero and c so c is just some kind of upper constraint in our alphas so it's not very intuitive why there is an upper bound in the dual problem now but it simply is the way it is and this is also a positive thing for using the dual problem formulation and it really stays easy and doesn't introduce any more any more parameters it's just a slightly change constraint when we use soft margin so and again still for these kind of problems there are algorithms available that are that can find solutions efficiently okay now we're able to classify any data set we have using a linear classifier and we also can account for noise in the data or just situations that are not so clear that you could use a straight line okay next question is we assume to that we only have a binary classification problem so spam versus not spam or relevant versus not relevant what happens if there are more than two classes so in in naiv base for example or in k nearest neighbor it's really easy to to use different classes for example user user space where where you have sports and and politics and something else and you just use three classes so support vector machines are in some way bound to a binary classification but there are methods to to extend them to to multiple classes so one idea is to for each class train a classifier um yes for each for example we have three classes this is sports this is politics and this is science and we assume that each documents belongs to one or to exactly one of these classes and this is a problem we cannot solve using a support vector machine because the vector machine can only can only make a distinction between two classes so what we can still do is train a classifier for each of these three classes so we build a classifier for sport versus non-sport we build a support vector machine for politics versus non-politics and we build a support vector machine for science versus non-science and then for each new document we want to classify we use we use each of these three classifiers and check the margin of the new training point so if this is our space and this is our our sport classifier so here's sport and here's non-sport and our new document we want to classify is located here then this would be the margin for this new document with respect to sport for example margin of six and then let's say we have a different classifier this is our politics classifier here's politics and here is non-politics so no let's take it a bit differently to give a better example this is our politics classifier here's politics and here's non-politics then again we can use this classifier to determine whether our new document is politics or not politics and we will find out for example that this document is politics but with a much smaller margin than it is sports two for example we do the same for for science classifier and then we assign the document to the class where the margin is largest so in this case we would assign the new document to the sports class because it's sports with a weight of six with a margin of six and it's only politics with a margin of two so it's more likely to be to be a sport than this politics all assuming that each document can only belong to a single class okay then what we can also do is build a one versus one classifier instead of training sport versus non-sport we would train a classifier sport versus politics sport versus science and politics versus science and then again use all classifiers here to classify our new 
document, and count for each class how many of these pairwise classifiers vote for it, and then simply take the class chosen by most of these one-versus-one classifiers. Of course, if you have 10 classes you already need 10 times 9 divided by 2, that is 45 of these pairwise support vector machines, and with 100 classes it would be 100 times 99 divided by 2. So as you can imagine, if you have a large number of classes, then this is not a good approach, and then you would rather stick to the first scheme, training one classifier per class. What's also possible are so-called multi-class support vector machines, which are a quite complicated extension of this binary support vector machine concept where you use multiple hyperplanes at once and combine them in some way; we will not talk about this in this course. So there are ways to deal with support vector machines in a multi-class setting, there are different ones, and you have to decide for yourself which one you want to use. Okay, the next part of this lecture: so-called non-linear support vector machines. Yes? Yeah, you could also retrain the support vector machine with each new document that has been classified by the user, but usually you take a training set that is large enough, use this training set to create your classifier, to create your hyperplane, and then stick to this hyperplane for classifying all new documents. Of course it makes sense after some time to re-evaluate your support vector machine given the new information you have, but this is how it's usually done. So then you can't use this information, because you don't have any. There is a concept called transductive support vector machines which tries to use the data to be classified. There the problem becomes: you have a training set with labeled examples and you have a set of documents from which you only know the coordinates, and the idea is to look at some patterns in the space, whether you could use this, and you could use some clustering technique to estimate what the correct labels of these examples could be if we don't know them. It's rather complicated and it doesn't work too well, I would say; it sometimes is a little bit better than a normal support vector machine, but usually you can only use the training data set that has been manually labeled as data to build your classifier, and that's the same for all types of classification, because in supervised classification you need a training set that consists of a document representation and, for each document, a correct label or a label known to be correct. So, yeah, it's a different situation in clustering; there you usually don't have any hints about what might be correct information. One could also think of a supervised clustering approach where someone gives you a reference clustering that has been created manually, and from this clustering a more general clustering of the space has to be derived; that also could be a problem. So actually these are different problems. I think two weeks ago we had this distinction into different classification or machine learning tasks: one was supervised, where you provide correct information to the system which can be used to learn something; then you have the semi-supervised setting, where you give some correct information and some additional information, for example the data to be classified, which also could be
used if you if you analyze how this data is located in space to learn a better classifier or a better clustering or better model and you have these unsupervised classification where you don't have any information which usually is true in clustering because you don't know what should be correct clusters you only have some coordinate representation of your documents that you have no idea what to do you can just use some heuristics and assume that some kind of similarity measure is good in some way or resembles human human similarity judgment this is all you can do sometimes if you work with documents and you want to make classification you can use the document content for example you want to if you want to classify or politic documents politics versus non-politics you could try to analyze the user could give you some politic terms for example politician minister and something else and then you can try to analyze using co-occurrence of terms what also could be other politic terms and use this information to do classification so there's a huge range of how classification or clustering can be done and the extreme points are just you learn from data known to be correct and the other extreme is you don't have any information you just have the data and do whatever you like and and how how good you can do it but there's no guarantee so the most the most clear setting is supervised learning but there's a large space in between so there are also approaches where the classifier is able to ask the user for additional information for example if the classifier identifies a set of especially especially vague examples which are which which seem to be critical and it doesn't seem to be clear whether this should be classified into one class or the other then the system could ask the user for more information this is also possible so again that's a huge range of ways to do it in this lecture we are only dealing with the most simple one and in my opinion these are complicated enough usually so if you're interested in classification get a book about machine learning that's usually the the major term spending over all these approaches there's a lot of theory there's a lot of different approaches how to do this so machine learning is where you want to go then all right okay non-linear support vector machines so we have assumed that data always is linearly separable or in cases where it is not linearly separable then this is because noise in the data or some classification errors but sometimes it might be true that the data itself isn't linearly class linear is separable for example here it might be perfectly reasonable to assume that one topic or one class is located in the middle of the space and or negative examples are just outside so there's no way to describe this situation using a line so it doesn't make sense to divide here because then would be here would be misclassification here it doesn't make sense to do it here or here either the only way to do it would be to have to have two separating lines so the middle where those outside so of course this is not linear and the question is how to do it so how to do non-linear classification idea is quite simple how to do it if we have this situation then we transform our data into a higher dimensional space so in this example we have a one-dimensional data space each training example is described by a single value on a on a on a single line and if we transform this data into a two-dimensional space so for example by giving giving the values that lie on the 
outside a high second coordinate and the ones in the middle a low coordinate then we could arrive at this representation and here we can easily draw a line to separate the positive from the negative examples so and this is the whole idea underlying non-linear support vector machines there's always a transformation step bringing the data into a higher dimensional space in which we can use a linear classification so the question is how to do this transformation what is a good transformation and what is not and how to deal with data in very high dimensional spaces so in some cases these spaces might have 10,000 dimensions or even more doesn't sound too handy so here's some kind of visualization again of this non-linear non-linear classification I hope it works but obviously it doesn't therefore we will open the video all right this is how it works so here in the middle of the circle we have the positive examples and on the outside we have all the negative examples what we want to do is have a classification circle inside where the outside and now comes the transformation we are now mapping our two-dimensional space in a three-dimensional space by introducing a third dimension and now we can use a hyperplane or a plane in this case in this three-dimensional space to separate the positive from the negative examples and if we again break down this linear cut into our original two-dimensional space then we finally arrive at a circle which divides outside from the inside so this is how this transformation step usually works quite simple idea and the good thing is that it stays quite simple because of the good properties of the dual problem we just discussed so of course when we work in high-dimensional spaces when we do this transformation step then computing the transformation and then doing the linear classification in a very very high-dimensional space might be very difficult so it would be very good if we don't have to do this transformation explicitly so we just yeah so we don't so the step is as follows here is our original data original training data which is not linearly separable this is our transformed data in the new space which is linearly separable and the support vector machine can only work directly on this data so what we usually would do is first transform the data and then work in the transformed space with a support vector machine so what we are going to do because transformation might be expensive for large data sets and for large and for high dimensional spaces to work directly in the original space but doing all computations as if we are working in the transformed space so we are looking for some kind of implicit transformation that is computationally efficient and fortunately there is a way to do this and this again depends on the fact that we only need to deal with scalar products when we define our support vector problem as a dual problem so we only need to deal with scalar products of our original coordinates of our support vectors and we only need to compute scalar products of new data points with with support vectors so we don't have to make any other computations and if we are now able to compute the scalar product of the transformed space directly then we just have to have to change this part of the optimization problem and the rest still stays the same. This is called the kernel trick. A kernel is some kind of mathematical function with some special properties. Have you ever heard of kernels? Very good so then you know all of it. 
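To make this transformation idea concrete, here is a minimal sketch in Python; the data points and labels are invented for illustration. A one-dimensional data set with the positive class in the middle and the negatives outside cannot be separated by a single threshold on x, but adding x squared as a second coordinate lifts it into a two-dimensional space where a straight line does the job.

```python
import numpy as np

# Invented 1-D training data: the positive class sits in the middle,
# the negatives lie outside -- no single threshold on x separates them.
x = np.array([-4.0, -3.0, -0.5, 0.0, 0.7, 3.5, 4.2])
z = np.array([  -1,   -1,    1,   1,   1,  -1,  -1])   # class labels

# Lift the data into 2-D by adding x**2 as a second coordinate.  In the
# lifted space the horizontal line  x2 = 2  separates the two classes,
# i.e. a linear classifier with w = (0, -1) and b = 2.
phi = np.column_stack([x, x ** 2])

for point, label in zip(phi, z):
    predicted = 1 if 2.0 - point[1] > 0 else -1
    print(point, "label:", label, "predicted:", predicted)
```

Every lifted point is classified correctly by this single line, which is exactly the effect shown in the video with the circle data one dimension higher.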
We don't go into details here how this really works but the idea simply is if you use kernels then you are able to compute scalar products in a transformed space in a very very simple way. So let's assume that h is some function that maps our original data from some d-dimensional space in some d-prime dimensional space so typically d-prime is much larger than d and then if we want to solve our optimization problem we would transform all our data points and all our new points to be classified and then just formulate our optimization problem in terms of this new space. So all yi becomes 8 and all x becomes 8 of x. So and again you can see that we are still need to only compute the scalar product of these transformed vectors. Scalar product here, scalar product here, scalar product here and if you are able to compute the scalar product efficiently in the new space without having to perform this transformation then this can be can be done really really quick when solving the optimization problem. So if our transformation and that's a good thing about it has some, is of some special type so a so-called kernel function. So there are many many possible kernel functions and all have the have the property that you can compute the scalar product of two transformed vectors using only the original coordinates in the original space. So we will see an example soon. This is for example a polynomial kernel transformation of the second degree. Our original space is two-dimensional and we can compute a mapping into a one two three four five six-dimensional space by saying that the first coordinate is always one. The second coordinate is the square root of two times the original first coordinate and so on and so on. And the interesting property is the following. If we want to compute the scalar product of two transformed vectors so given two vectors x and x prime in our original space and we need to compute the scalar product of these vectors in the in the transformed space then this is simply the scalar product in the original space multiplied by one and everything squared. So we don't need to compute the transformation into the new space itself. For computing the scalar product in the transformed space we can just rely on simple computations involving the original coordinates. This is all the kernel trick is about. There's a whole mathematical theory behind it but that's essentially the idea. Compute scalar products in the transformed space efficiently because that is all we need to do when dealing with support vector machines. Okay I have a demonstration of linear support vector machines. So there are a lot of a lot of Java uploads in the in the web available illustrating how support vector machines work. This is one of them. So I encourage you to to play around with this at home to get a better understanding of this. So here we can we can define different kernels we want to use for example a polynomial kernel as an our example for example a degree of 5 and here we can just place our example data. So these are our positive examples and unfortunately our negative examples lie just in the middle of all the positive examples so there won't be any way to derive a linear line separating the space and we need a higher dimension or non-linear classification in this example a polynomial kernel which is now computed and this is what a polynomial kernel would do. 
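The identity for this degree-2 polynomial kernel can be checked numerically in a few lines; the two input vectors below are made up, and the feature map follows the six-coordinate mapping from the slide. The point is that the scalar product of the explicitly transformed vectors and the kernel value computed in the original two-dimensional space come out the same.

```python
import numpy as np

def phi(x):
    """Explicit degree-2 polynomial feature map for a 2-D input,
    as on the slide: six coordinates including the constant 1."""
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2) * x1,
                     np.sqrt(2) * x2,
                     x1 ** 2,
                     x2 ** 2,
                     np.sqrt(2) * x1 * x2])

def poly_kernel(x, y):
    """Kernel trick: the same value computed in the original 2-D space."""
    return (np.dot(x, y) + 1.0) ** 2

x  = np.array([0.5, -1.2])
xp = np.array([2.0,  0.3])

print(np.dot(phi(x), phi(xp)))   # scalar product in the 6-D transformed space
print(poly_kernel(x, xp))        # (x . x' + 1)^2 in the original space
# Both lines print the same number, so the transformation never has to be
# carried out explicitly when solving the dual problem.
```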
So these are the support vectors three support vectors on this side here one another one another one another one five support vectors on the other side this is how it is done. So again think of this as a transformation in some high dimensional space where we can do a linear separation and projecting it down to the two-dimensional space and so the separating hyperplane becomes some kind of curve. So this is a polynomial curve. Another popular choice are so-called radial basis function. Many many training examples and again looks quite differently. Radial basis functions usually draw some kind of closed curves like the one here so this is closed and goes along this way. In the polynomial setting you usually have large areas for example here starting here and going around and being opened to the other side this is polynomial and radial basis function usually only draw some closed curves. There's a difference between these two. Good question. This is also a property of these radial basis functions. They are it's hard to imagine in some way this area is continued somewhere on this side here. So so when you have to transform space then this is connected to an area lying on this side which you can see in the picture now because it's only a very small view of the of the whole space but there is some area being located here which is the same as this one here and therefore we have here a number of support vectors. It isn't very intuitive so there is always a problem when drawing these things in the original space but for purposes of classification it works pretty well. So of course the problem is if you now have data points lying here they are likely to be misclassified or lying here because this could be an area already belonging again to this to this class. So it isn't perfect but usually if you have a representative sample of your of your data if you have good training data then using support vector machines with kernels usually brings very very good classification performance. Okay this is how it works how the kernel trick works. So now to the question how support vector machines work in information retrieval. So I will now show some some small examples to to prove this because classification is a quite general problem and it's just applied in information retrieval. So of course one one important area of application is text classification as I said politics versus sports versus something else. This could be for example important for libraries which have to index or classify new books they get and if they have the books in a digital form they could use an automatic classification systems to derive some some classes the book might belong to given some training data and then the librarian can decide whether this classification is the one they want to use or just have to make little changes but doesn't have to have to make all the classification by himself. So the scope of the application spam detection as I said is also very very popular. So the documentary presentation you can use for support vector machines either are the standard vector space model as we have seen it you could also add some additional features some additional dimensions like for example document length so if if you have some reason to think that long documents might be contained in some other class than small ones for example if you if you have made so observation that in your collection spam documents tend to be very long or very short then adding the document length to the to the document representation might be a good idea. 
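The applet cannot be reproduced here, but roughly the same experiment can be run with scikit-learn, assuming that library is available; the circular toy data below is invented. The fitted model also exposes the pieces of the dual solution discussed earlier: only the support vectors are stored, together with their alpha times z values, which are exactly the non-zero dual variables.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)

# Toy data resembling the applet: one class inside a circle, the other outside.
X = rng.uniform(-3, 3, size=(200, 2))
z = np.where((X ** 2).sum(axis=1) < 2.0, 1, -1)

for kernel, extra in [("poly", {"degree": 5}), ("rbf", {"gamma": "scale"})]:
    clf = SVC(kernel=kernel, C=10.0, **extra).fit(X, z)
    print(kernel,
          "support vectors:", clf.n_support_.sum(), "of", len(X),
          "training accuracy:", clf.score(X, z))
    # dual_coef_ holds alpha_i * z_i for the support vectors only, i.e. the
    # non-zero variables of the dual problem discussed above.
    print("  first few dual coefficients:", np.round(clf.dual_coef_[0, :5], 3))
```

Plotting the decision function of the two models would reproduce the open polynomial regions versus the closed radial-basis curves seen in the applet.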
So the same is true for other derived features you could also use an LSI representation for classification purposes all is possible the only thing you need to have is some kind of feature representation of your documents so that each document is a point in space and that documents belonging to the same class tend to have similar properties. So as a support vector machine we'll find out what is really meant with similar and then derive a classification model from it. So dimensionality of course when using the vector space model is quite high because every term then becomes an axis in your space this usually is not a big problem because for most documents only very few axes are different from zero so no documents document contains all terms of your collection so usually each vector or each document is represented by a vector where there are some numbers in but many many many zeros so this is also a situation where support vector machines can deal with the algorithms that can handle the situations where axes containing zeros are just ignored in the algorithms and you still are able to find an optimal solution. So support vector machines can deal with sparse data in a very good way. So here's an example of classification performance. This is from 1998 where some experiments on the writer's collection have been performed. So remember the writer's collections is a set of news of news flashes from the writer's company and all these and and for some categories in the data set positive and negative examples have been collected. A classification algorithm has been trained and then a test set of new documents has been classified using this algorithm and then it has been evaluated how good the classification was in terms of precision and recall and each value you see in the table below is the f measure of performance so 100 percent means perfect classification zero means very bad classification. So these are the categories which have been tried and these are the algorithms. So this is naive base this is rocker classification and these are decision trees another way of performing classification. This is k nearest neighbor and these are different forms of support vector machines. Two linear support vector machines using two different penalty constraints for misclassification and and support vector machine using a radial basis function kernel with a parameter here. Yeah doesn't doesn't have to bother you. So and they found out if you look at the average over all these classes that the classification performance for for support vector machines always in the is on average what around 68 78 percent and that the other algorithms performed significantly worse. So rocker has been quite good of course and k nearest neighbor but support vector machines have been performed very very good here and this was quite surprising at the time because support vector machines around 98 have been a quite new technique. So the community was very happy to have a technique that really really performs well compared to other techniques and can be can be applied also to document collections. Of course if you look at different categories there might be situations where support vector machines perform worse than other techniques. So for example here for the interest category support vector machines here are around 70 percent or 75 at best and k nearest neighbor is better in this example but over the whole when doing the average support vector machines have been proved to be better. 
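As a small illustration of how such a text classifier could be set up in practice (this is not the original Reuters experiment; the four training documents and two test documents are invented), one possible scikit-learn pipeline turns each document into a sparse tf-idf vector and trains a linear support vector machine on it, and the F-measure used in the table can be computed on held-out documents.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

# Tiny invented training set; a real experiment like the Reuters one
# would use thousands of labelled news stories.
train_docs = ["stock markets fall as interest rates rise",
              "the team won the championship game last night",
              "central bank raises interest rates again",
              "player scores twice in the final match"]
train_labels = ["finance", "sports", "finance", "sports"]

test_docs   = ["interest rates expected to rise", "great match for the home team"]
test_labels = ["finance", "sports"]

# tf-idf turns each document into a sparse point in term space; LinearSVC
# then learns a maximum-margin hyperplane in that high-dimensional space.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(train_docs, train_labels)

predicted = model.predict(test_docs)
print(predicted)
print("F1:", f1_score(test_labels, predicted, pos_label="sports"))
```

Extra features such as document length could be appended as additional columns of the feature matrix before training, as suggested above.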
This doesn't have to be always the case of course it depends on the collection you have it depends on the representation of your documents what kinds of space you you're using but usually support vector machines perform very very well. Okay and another application of support vector machines is called learning to rank. There's also quite new topic information retrieval and here a special type of support vector machines is used so-called called ranking support vector machines. So and so and here again the training set consists of documents but here now you have in pairs of documents and the idea is that if you have a training pair of documents y i1 and y i yi and yi prime then these pair expresses that document y i is preferred to is better than y i prime with respect to some query. So for example if you have a ranked list of results for some query you might derive some pairs from it that for example this result is better than this one for example if you're looking for Viagra then the Wikipedia Viagra entry might be better might be a better result than some spam page and the Viagra page might be better than the manufacturer's official page about the product and of course the manufacturer's official page is a better result than some spam page. So here you would have three pairs, Wikipedia is better than spam, Wikipedia is better than the manufacturer's page and the manufacturer's page is better than the spam page. Three pairs, three training examples and again every example is represented as a point in space and this information then is used to learn a ranking function and with this training data the ideas that we are now able to decide for every new document whether it is better or worse than any other document known in advanced. So the task is find a ranking function that assigns a numerical score to each document. So and the idea is that if the score of one document is higher than the score of another document then the first document is preferred to document, to the second one is better result fits the query better in some way. So a straightforward approach would be to limit yourself to a special kind of ranking function, so linear ranking functions where you just do a scalar multiplication of the document space representation by some other vector and scalar products always have to do something with support vector machines and here again one can formulate this problem as a support vector machine task. Again we want to find a hyper plane with maximum margin such that the score of the better training example, so scalar product of the hyper plane times the representation of the document is better than the score of the second document and here we enforce a standard margin so that if we say that one document is better than another then the constraint is that the difference in scores must be at least one. So again we can define a list of constraints from this. This constraint is equivalent to this formulation here, some number to be determined multiplied in a scalar fashion by some vector because the difference of two documents again is a vector in space or is some point in space and this is exactly the form we used in our support vector machines original formulation but now each training example is a pair and each pair now gets treated as a single point by subtracting the training pairs from each other. Now we simply have the same support vector machine formulation as before and can solve it the way we did before so using self margins or using non-linear scoring functions. 
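Here is a minimal sketch of this pairwise reformulation for the Viagra example above; the three feature vectors and their values are made up (they could stand for things like a tf-idf score, a link-based score and a trust score). Every preference pair becomes a difference vector with label plus one (plus the mirrored pair with minus one), an ordinary linear SVM without intercept is trained on those differences, and its weight vector is then the scoring function.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Invented 3-D feature vectors for the three result pages.
wiki   = np.array([0.9, 0.8, 0.9])
vendor = np.array([0.7, 0.6, 0.8])
spam   = np.array([0.8, 0.1, 0.1])

# Preference pairs (better, worse): wiki > vendor, wiki > spam, vendor > spam
pairs = [(wiki, vendor), (wiki, spam), (vendor, spam)]

# Each pair becomes training points for an ordinary linear SVM on the
# difference vectors -- this is exactly the reformulation described above.
X = np.array([b - w for b, w in pairs] + [w - b for b, w in pairs])
z = np.array([1] * len(pairs) + [-1] * len(pairs))

rank_svm = LinearSVC(fit_intercept=False, C=10.0).fit(X, z)
w = rank_svm.coef_[0]

# The learned w defines the scoring function; a higher score means a
# better result for this query.
for name, doc in [("wikipedia", wiki), ("vendor", vendor), ("spam", spam)]:
    print(name, float(w @ doc))
```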
So and again we can use this to increase result quality. So again Torsten Joachim's which is some support vector machine guy, information retrieval, he's done a lot of work there, also discussed the question where these preference pairs come from. We need to learn a ranking function and his idea was that if users get returned from a search engine a list of results then users tend to linearly scan through these results from top to bottom and the users click on those results they think they are relevant. So for example if a user gets this result here and his first click is on the Wikipedia page then we can assume that the Wikipedia page is better than this page, better than this page and better than this page. It might be true that all these three pages still are relevant in a binary sense but the user feels that this result is the best one and the search engine should in future retrieval tasks, in future queries, should put this result at the top because it seems to be more relevant than the other three. This is the whole idea, just observe what other users doing, learn a ranking function from this feedback and if some other users ask the same query, in this case Viagra, we could use this ranking functions to optimize the ranking in the future, all with the power of support vector machines. So again here computer initial result list lets the user click something, learn a ranking function and then use the ranking function to create a better ranking in the future. So they are yeah of course one could also use the ranking function to compute the initial result, if we already have feedback from other users then of course we would present a result that already includes user feedback. Okay yeah this is a link I think you should take a look at yourself. This is just a list of applications of support vector machines. So actually support vector machines are used in many many different areas of science, so in biology, chemistry, geographic problems, so support vector machines are some kind of universal tool for many many different classification tasks. This is a page of someone who collected a large list of applications, take a look at it might be interesting. Okay finally we have a detour about recognizing hand written digits. This is very very important for for for mail companies and does anyone know the correct term? So in Germany it's a Deutsche Post mail carriers. Is this the correct term? Brieft such DHL something like this so any kind of logistics company I don't know how to call it in English but they usually have the problem. You have zip codes on your letters and these zip codes are the most important information for distributing the letters because depending on the zip code the letter either goes into this bin or this bin or into this truck or this truck and gets distributed into different directions of Germany or the world. So the first information one needs to know is the zip code written on the letter and of course when dealing with zip codes you need to read the zip codes and since they are often hand written you need to find out whether this is a three or nine or something else or this is a seven or a two or a zero and it would be a very good thing if some kind of machine would be able to read the zip codes and translate it into a machine readable form. 
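Before continuing with the digit example, here is a small sketch that picks up the clickthrough idea from a moment ago: how preference pairs for such a ranking SVM could be derived from one observed result list and the user's clicks. This is a deliberately simplified heuristic (a clicked result is assumed to be preferred over every shown but unclicked result); real systems use more careful rules, for example only comparing a click against results ranked above it. The page names are invented.

```python
def preference_pairs(ranked_results, clicked):
    """Turn one observed query session into (better, worse) training pairs."""
    pairs = []
    for good in clicked:
        for other in ranked_results:
            if other not in clicked:
                pairs.append((good, other))   # clicked preferred over unclicked
    return pairs

shown  = ["vendor_page", "wikipedia", "spam_page", "news_article"]
clicks = ["wikipedia"]

for better, worse in preference_pairs(shown, clicks):
    print(better, ">", worse)
```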
So if you get a letter usually there is this barcode on the bottom of the letter where zip code and other information already has been transformed into a machine readable way and then the distribution machines in the big logistics centers of the mail companies just reading these barcode when dealing with the letters and distribute the mail where it belongs to. Okay this is a problem and this data set is a popular data set in pattern recognition. They have just taken a large collection of real zip code numbers or zip code digits which have been hand written and then some people manually annotated all these numbers and the task is to learn a classifier from this data that is able to decide for new digits which digits it is. So again we can directly derive space representation from this data because every digit is a two-dimensional image so each pixel is either one or zero and has a coordinate and we just simply treat each pixel as one dimension of the space and then each digit simply becomes a binary. Yeah let's say these are 100 pixels and these are also 100 pixels then we just have a 10,000 dimensional feature space for representing our data and we have different classes for example the class 6 versus the class non-6 or the class 6 versus the class 5 and this also could be done using support vector machines. Alright so they used a special kind of support vector machines using 10,000 test x using large training sets I have no idea how large it is but they used 10,000 test examples to evaluate how good the classification was and they found out that when trying to do this 10 class classification into all the 10 possible digits the error rate using support vector machines was in the best case using a special kind of support vector machine by only about 0.5 percent which is very very good. So of course one could ask whether it is impossible to make it even better but if we look at the 65 misclassification these machine made on these 10,000 test examples then you would see that it's even difficult for humans to determine what number this might be. So I've seen this picture and for each image for each digit two numbers have been provided the correct number and the number assigned by the algorithm. So unfortunately I don't know which number is which so I have tried to find out but I'm not able to find out whether this is a human judgment or this is a machine judgment or vice versa. So for example yeah if you look at this this could be a five or nine so there might be people who write a five like this but there might also be people who write a nine like this. So these are all situations where you don't have any chance to determine whether this is a five or nine. In most other cases here the machine doesn't have any chance because even humans would not be able to do this classification correctly. So in other words having an error rate of 0.5 percent might be the best you can do because of these strange examples here because the reference shouldn't be zero but the performance of human classifications. Next topic and last topic for today is the overfitting problem. So when we use support vector machines especially when we use non-linear support vector machines there's always the opportunity to use a transformation in some higher dimensional space that is really really complicated so that we can make a perfect classification even of the of the real this data. 
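The original hand-written zip-code data is not at hand, but scikit-learn ships a small 8x8 digit data set that allows a rough version of the same experiment, so the error rate will not match the figures quoted above; the setup is the same, every pixel is one dimension of the feature space and the class is the digit 0 to 9.

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Small 8x8 digit images (1797 of them), flattened to 64 pixel features each.
digits = datasets.load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", gamma=0.001, C=10.0).fit(X_train, y_train)

error_rate = 1.0 - clf.score(X_test, y_test)
print("test error rate: %.3f%%" % (100 * error_rate))
```

Note that SVC deals with the ten classes internally via the one-versus-one scheme discussed earlier, so nothing extra has to be done for the multi-class case.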
So for example if we have this data set here we could use a linear classification and just assume that these here are two outliers that shouldn't be taken too serious but we could also say well this data set definitely is correct so we use a very very complicated kernel and then we arrive at this classification boundary. So the question is which one is better? Is an easy solution as a simple solution better or do we always want to have a complicated solution that fits the data perfectly? So obviously we wanted something in between we want a solution which isn't too complicated but is complicated enough to to be able to represent systematic properties of our data set. So what is the problem here if we have a perfect classification a very complicated boundary then it could be the case that it fits the training set very well but on new data it doesn't really work because if you have just three training examples it could look like this and then our bound or classification boundary for example might be something like this for some reason but the truth could be that this would be a perfect fit because if we would have had more data then it would have looked like this. So we are always wanting simple solutions that fit the data but that are complicated enough to find systematic properties. So what can we do to avoid overfitting? There's a technique called cross validation. We randomly split our training data into two parts so-called training set and a so-called test set and then we use the training set to learn a classifier to learn a support vector machine to learn a classification boundary and then use the test set which is unknown to the classifier to check how good the classifier really is. So in the ideal case the classification performance on the training set would be very very similar to the classification performance on the test set because this would mean that the classifier has found all relevant information in the training set because this information is also available in the test set but hasn't used any properties the data just by random chance available in the training set but not in the test set and then you can use different different kernels and different parameters for waiting the errors in the soft margin task to find this support vector machine that is equally good on the training and the test set or is just best on the test set. It's also an opportunity. So there's a different method to do this apart from cross validation so in many cases you don't have much training data so it won't be a good idea to to split the training data in two halves and use only one half of it for training purposes and the other half just for just for testing your classifier. So usually you want to use all data you have for training the classifier so you need you need a way to to avoid complicated classification solutions and this is where regularization comes into play and a simple form of regularization is introducing penalties for complicated solutions. For example if you have different ways to different if you have found different classifiers classifying the same data set for example one linear classifier which makes some errors and some non-linear classifier which has a perfect fit then you would want to assign scores for complexity of the solution. 
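A small sketch of this model-selection-by-validation idea, here using scikit-learn's k-fold variant of the train/test split on the digit data from before; the candidate values for C and the two kernels are arbitrary choices for illustration. The score that matters is the one on the held-out folds, not on the part used for training, because that is what reveals whether a complicated model merely overfits.

```python
from sklearn import datasets
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = datasets.load_digits(return_X_y=True)

# Compare a few soft-margin settings; each model is trained on four folds
# and evaluated on the remaining fold, five times in turn.
for C in (0.01, 1.0, 100.0):
    for kernel in ("linear", "rbf"):
        scores = cross_val_score(SVC(kernel=kernel, C=C, gamma=0.001), X, y, cv=5)
        print("kernel=%-6s C=%-6s mean held-out accuracy=%.3f"
              % (kernel, C, scores.mean()))
```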
So the linear solution would would would get a penalty of zero and the complex one for example would get a penalty of 100 and then you combine the classification performance and the penalty to a to a general score to an overall score and then at the end you use the classification algorithm that has the best score overall that combines good classification performance with simplicity of the model found. So this is a initial idea and essentially the soft margin technique we use to support vector machines is a good example of regularization because here classifiers having large margins and a few and a few errors are preferred over those having large margins and making no errors. So and by using the parameter c you can you can you can determine what's more important to you. All right the last aspect of overfitting is a so-called bias variance trade-off. So usually as I already said there isn't isn't some kind of trade-off when choosing the right type of classifier. So on the one hand if you ignore some specific characteristics of your training set you might lose some information that might be important. So you would do some kind of bias in classification. So for example if you would simply ignore that here's some kind of curve then this might be a mistake and would result in misclassifications in the future. So on the other hand if you try to account for all possible things that might be special in your training set by using a very complicated classification functions or for example even if there's here some negative examples you might be tempted to use such a classifier then you perfectly fit the training data but you get a large variance over classifiers when you randomly sample your training set. So for example in one random sample there might be a negative example here and in the other sample there isn't a negative example here and then you would get very very different classifiers just depending on how your training set or how your training sample looked like and what you really want is that quite independent of what training sample you used the classifiers should look nearly all the same. So you would want to have a small systematic bias by being too simple and you would also want to have a small variance by being simple enough. So typically you cannot have both you have to decide you have to use some kind of weighting and this already always depends on your application as always information retrieval. Try it out in your collections look what works best and this it is for today and next week we will talk about web retrieval. Thank you very much for your attention.
This lecture provides an introduction to the fields of information retrieval and web search. We will discuss how relevant information can be found in very large and mostly unstructured data collections; this is particularly interesting in cases where users cannot provide a clear formulation of their current information need. Web search engines like Google are a typical application of the techniques covered by this course.
10.5446/355 (DOI)
Yes. So, also hello from my side. And as a summary for last week's lecture, we were talking about a rough structure of the web and said, well, basically the web is a set of pages that are connected by hyperlinks. And if we want to get anything out of this, we have to query the web. There has to be a possibility for accessing the information that we are interested in. And accessing the information always means you have to have something like a directory. You have to have something like an index that shows you where you have to go. And this is exactly the main component of all web search engines, the indexer on which the retrieval algorithms work that can then be used to power Google and to power Yahoo and to power all the web search engines and all the ideas. Of course, to build up an index, you need to know what is inside the web. And this is the typical topic of so-called web crawlers and this is what we're going in today. So, let's start with a little look at what the web actually does, how it is built up, how does the web page look. And then we dive into the topic of web crawling and I'll show you some techniques on how to detect duplicates and how to manage the content of the web. So, the worldwide web basically is a number of resources. This is typical web pages. For example, the Technology only does it take page or the ethus page. And they have been created by people that want to transport some information. So, for example, who is currently employed at the ethus? Or what does the Technical University offer in terms of, I don't know, study programs? And this information, of course, is somehow linked to each other. So, when ethus says, well, we are part of Technical University, a branch like, they will add a link to the page of the Technical University. And this is a way to navigate through the pages. So, it's not true that the only way of accessing pages, though we have seen a lot of navigational queries in Google's statistics recently. It's not true that the only way to go through the web is basically by asking queries, by using Google. But one of the types of navigating the web is basically by following links and going from topic to topic and making a stroll down the information highway and just see where it leads to. And at some point, we will end up, hopefully, at some page that covers the information that I'm interested in. This is the basic idea. Resources plus hyperlinks. Hyperlinks is exactly these navigational patterns. Of course, the resources have to be identified somehow. They have to be unique. Otherwise, I wouldn't know where to go. If there are several pages having the same ID or the same address, that's kind of useless because where should I go? It has to be unique in some way. And this is where the idea of uniform resource identifiers, or it used to be called uniform resource locators, URLs. Now, it's not to be a page, maybe something else. So, it's been generalized to identifiers, URIs, and they have to be really uniquely built to identify a certain resource. And most common is in terms of the protocol that you have to use, basically the Hypertext Transfer Protocol, HTTP. But it could also be file transfer protocol or whatever it may be. So, you could have different protocols. Then we have the so-called authority that shows us where to go, where the actual page is located. May have a path. So, the authority may have a file structure leading from some directories to sub directories. This is basically where the path leads us. 
It may have query elements where we go like question mark, name is ferret or whatever. And there may be fragments shown by this little count sign here. And that leads us to a special part of the page. So, that is for navigating inside one page. Okay? And if we restrict ourselves to HTTP, the Hypertext Transfer Protocol, then we start with a host name, which is the authority, where should we go? We have a path that points somewhere in the directory structure. So, for example, this is the Wikipedia page about New South Wales, part of Australia. And then there may be a query or some fragment. So, this will jump directly in the Wikipedia entry of New South Wales into the section that has been headed by history. So, if I want to know about the history of New South Wales, I go to Wikipedia. This is what I indicate here. I go to the English Wikipedia. This is the authority starting with EM. Then I sneak my way through the file structure of Wikipedia, bringing me to the correct page. And then I may navigate inside the page to the paragraph that I'm interested in. And that will be the history of New South Wales. Okay? Well, in HTTP, of course, if you have a protocol, you have to normalize the entry somehow, because there are different ways of entering the URI. And you don't want to rely too much on different spelling ideas of uppercase and lowercase and stuff like that. So what you would do is you would have special characters that could be represented in a different way. For example, typical German fashion are umlauts that are not available over most of the keyboards anywhere. So you need some way to describe them. For example, this here is the version of the tilde. And it will just be replaced in the URI. And then it is unique. You will have case normalization. So everything will be lowercase. Doesn't matter if you type it in with case sensitive. You will also have very often the port on which some site is addressed. And HTTP's default port is 80. So this is removed, if you say, colon 80 at the end. It's just, so this is the idea here, that you should listen in on port 80. It's immediately removed because that is standard for HTTP. And also past segments that show, you know, stay in the directory or go one directory further are removed. So all these addresses are basically the same. They are all normalized into the one down here. Lowercase unquoting of special characters, query where there is no query. Okay. So empty part of the address. The port that has been replaced by the standard port. Okay. It's all taken down. Now how does it actually work? Of course, you can't address the pages by themselves because you have a logical name. You have the name of the authority. You have the name of the page structure. But you need to know where it actually is. And that means you have to ask a client. So you're filling out a request by your client and sending the request to a server. And the server responds somehow. And the protocol that is used here is TCPIP. So the servers are always uniquely identified by IP addresses. This is typical IP address as you may know it. If you have a configured your laptop or something and chosen special IP address or part of a network. And if you type in IP config in your execute environment, you will get to know what your IP address currently is. There's some dynamic approaches of handing out IP addresses for networks or you can have a fixed IP address. Basically the registered IP address is how your device as a server is approachable. 
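As a small illustration of this normalization step, here is a rough sketch in Python using the standard urllib.parse module. It splits a URI into the components just discussed (scheme, authority, path, query, fragment) and applies a few of the normalization rules: lowercasing, dropping the default port, resolving dot-segments. It is only a sketch under these assumptions, not a full implementation of the RFC 3986 rules, and the example URLs are made up.

```python
import posixpath
from urllib.parse import urlsplit, urlunsplit, unquote, quote

DEFAULT_PORTS = {"http": 80, "https": 443}

def normalize(uri: str) -> str:
    """Rough URI normalization sketch: case, default port, dot-segments, quoting."""
    parts = urlsplit(uri)            # -> scheme, authority, path, query, fragment
    scheme = parts.scheme.lower()
    host = (parts.hostname or "").lower()
    # Keep the port only if it is not the default one for the scheme (e.g. :80 for http).
    if parts.port and parts.port != DEFAULT_PORTS.get(scheme):
        host = f"{host}:{parts.port}"
    # Resolve '.' and '..' path segments and re-encode special characters consistently.
    path = posixpath.normpath(unquote(parts.path)) if parts.path else "/"
    if parts.path.endswith("/") and not path.endswith("/"):
        path += "/"
    path = quote(path)
    # The fragment is only used for navigating inside the page, so it is dropped.
    return urlunsplit((scheme, host, path, parts.query, ""))

# All of these are normalized to the same address.
for example in [
    "HTTP://www.Example.com:80/a/./b/../index.html?",
    "http://www.example.com/a/index.html",
]:
    print(normalize(example))
```

A crawler applies exactly this kind of normalization before it puts a URI into its queue, so that differently spelled addresses of the same resource are not crawled twice.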
The host names like dub dub dub Google com or whatever it may be Wikipedia have to be mapped somehow into IP addresses so the TCPIP protocol can probably work. And for this you have so called domain name system or the DNS. And you have a DNS server where you basically put the query, okay what is the correct IP address for Google and the server will respond and say well it's basically some of these addresses. Of course different host names can cover different IP addresses. I mean the Google server would be immediately out of work if it was just a single IP address. So it has to be different IP addresses and basically for the service that you're taking it doesn't matter which of them you approach. It's just important that either one of them is available to answer your request. And also an IP address may have many host names so there may be a lot of content on a single server and the server may serve different sites. Well since we need the IP address from the name we first have to perform a lookup and DNS server then we get the correct IP address and then we send the HTTP request to this IP address usually over the standard port and then we get basically get the website back so the web server will tell us something about the content that is available or not available on this server. The idea behind it. The typical HTTP request looks like this so if we have for example a search query on Google so we basically use the authority Google com we use the search interface and we post a query for the EFAS institute then the request type is get. I want to know something from a server so I post a get command. I hand over the query and the pass that I want to know. I hand over the host name so that everybody knows where this is to be directed to and then there are some connection details that I need so for example what character said I will accept or what possible encodings of the pages are acceptable, what languages do I need and stuff like that. So this is kind of like a little bit of compatibility information that shows me what a sensible response to my request would be. So if the server just speaks Chinese you know like and gives me Chinese character set and I ask for something in English it just doesn't help me so I can put it into the request. After asking the request there is a response of the server which basically tells me what the exact version of the protocol is and this is a very important code over here this is the status code so for example if it is 200 it means everything is okay the resource has been located and is still intact then there are some caching information that shouldn't concern us too much here. There is the content type telling us what it actually is so this is a normal HTML page that then can be interpreted by the browser so sometimes I need different programs to interpret some content but this should be doable with every browser. We have the character encoding over here so also the browser can see what character encoding is needed and then after the header tells us all the information about what the information or how the information actually looks like what is used to transport the information the actual body of the site is just the contents with some markup information as I said this is obviously HTML okay. 
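To make the sequence just described a bit more tangible, here is a minimal sketch in Python: first the DNS lookup that maps the host name to an IP address, then a GET request over the standard port with a few of the headers mentioned above, and finally a look at the status code and the Content-Type of the response. The host, path and user agent string are only illustrative; a real client would also follow redirects and use HTTPS.

```python
import socket
from http.client import HTTPConnection

host = "en.wikipedia.org"              # illustrative host name

# Step 1: DNS lookup, host name -> IP address.
ip = socket.gethostbyname(host)
print("IP address of", host, "is", ip)

# Step 2: HTTP GET request over the standard port 80.
conn = HTTPConnection(host, 80, timeout=10)
conn.request("GET", "/wiki/New_South_Wales", headers={
    "Accept": "text/html",
    "Accept-Language": "en",
    "User-Agent": "toy-crawler/0.1 (lecture example)",
})
resp = conn.getresponse()

# Step 3: status line and headers first, then the body.
print(resp.status, resp.reason)          # e.g. 200 OK, or 301 if the site redirects to HTTPS
print(resp.getheader("Content-Type"))    # e.g. text/html; charset=UTF-8
body = resp.read()                       # the actual markup, as bytes
print(len(body), "bytes of content")
conn.close()
```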
Good there are basically two types of requests of HTTP one is the GET request the other one is the POST request with the GET request I asked for a response I want to have some resource with the POST request I want to transport information to the server so for example if I fill in an HTML form I can post the information to the server so the server can evaluate the information for example a database query and then generate the answer page according to what I was actually looking at. There is a smaller version of the GET which is called HEAD which basically does the same as GET but just asked for the HEAD of the message not for the content of the page so I get all the information about the encoding when was it last updated when it is the resource still available or are there some error codes or whatever you know and I don't transport it is basically just for brevity I don't transport the full body of the page that may be lengthy that may consist of several packets but just get the header. Of course this HEAD message is very important for crawlers because they want to know has something changed do I have to really crawl this page again or is everything still like it used to be or is it a dangling link so I will follow the link with a HEAD request and then just say okay the resource is there or it's a 404 error code and the resource is not there you know I can't crawl it I can't index it. Important status codes are basically what we all like 200 which means everything is okay recourse the resource is there and is available and I can just download it. 301 is basically a forwarding address like you do with the mail service when you move houses you know you just say okay this has been moved you know like so update the link whatever it may be. 302 means yes the request the site is there but it has been moved temporarily for I don't know server maintenance or whatever it may be you know so ask for it at a different address but next time you come around try again under the same address. 304 means it has not been modified since the last request very important for crawlers. Very bad 404 not found which just means anything could happen the website could not longer be no longer in existence the server is down there's too much traffic whatever you know something something happened that doesn't allow me to access the website and then there's the 410 which basically means the website is there but it's not there anymore so it has been permanently removed and it will not be available so this is not a server or this is not maintenance work or something that the resource is just there. Good so what we see is that we have URIs to identify web resources and that we use protocols to retrieve the web sources that means basically get and post commands in HTTP which is the most renowned protocol of that time. Of course a resource does not only contain some content but it also has some layouting information this is a different idea so there is a structural difference between the layout of information and the information itself the text or whatever is said on the web page and this is the same for web resources. 
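The HEAD request and the status codes are exactly what a crawler uses to decide whether a page needs to be fetched again at all. Here is a small sketch of that decision logic in Python; the host, path and the helper name are made up for the example, and note that a 304 only comes back if the request carries an If-Modified-Since or If-None-Match header from the previous crawl.

```python
from http.client import HTTPSConnection

def probe(host: str, path: str, last_etag=None) -> int:
    """Send a HEAD request: header only, no body, so it is cheap for the crawler."""
    headers = {"User-Agent": "toy-crawler/0.1"}
    if last_etag:
        headers["If-None-Match"] = last_etag   # lets the server answer with 304
    conn = HTTPSConnection(host, timeout=10)
    conn.request("HEAD", path, headers=headers)
    status = conn.getresponse().status
    conn.close()
    return status

status = probe("en.wikipedia.org", "/wiki/New_South_Wales")
if status == 200:
    print("resource is there, schedule a full GET")
elif status in (301, 302):
    print("moved, update the link or retry at the new address")
elif status == 304:
    print("not modified since the last crawl, keep the old copy")
elif status in (404, 410):
    print("gone, drop it from the queue and from the index")
else:
    print("other status:", status)
```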
What we do have is very often HTML resources the hypertext markup language and the hypertext markup language is one of the early inventions of the web and was basically done by Tim Berners-Lee in 1991 and he developed this protocol just for well in the beginning just for looking up phone numbers and then it started to snowball and he was adding more and more and was building well basically it's always get and post you transport information to the server you get information from the server but you also have to transport some more it's not only the information but also how the information is to be displayed and this is the idea of the browser that you have some piece of software some client software that allows you to view the information as the author of the information intends you to view it so the author of the information cannot just give you the basic facts but can also give you some style information about how the fact should be shown how the fact should be displayed and this is what is called a markup language markup basically means that you describe the structure the layouting of text based information in a document and there are some typical things that you can do in writing documents you know you can you can have headings in which you will basically put the part that is the heading into these brackets here with h1, h2 or if you have a subheading h2 or h3 whatever it is you can have paragraphs so if you want different paragraphs you just put it into the brackets of this p here for paragraph and you always have an opening bracket and an ending bracket so you would put the heading between the two brackets and this backslash here shows you that this is the end tab the other one is the opening tab you could of course have lists starting in this environment over here and for every list item you will have the list bracket okay and then you will get a list of first item, second item and so on and there is one very important thing that is the link you might have a hyperlink to a different resource again you start with a certain bracket so this is a but you add something to the text that is within the bracket and that is where the link should point to shown here basically this is the string of the URI that the link should point to and if you click on the link if you follow the link then the HTTP protocol has to do exactly the same thing it has to copy the string it has to send the string to a DNS server get the correct IP address and then get the resource under the IP address okay good and let's look at the formatting a little bit so this is typical HTML document it just says here doc type is HTML and it also tells us what kind of HTML it actually is so different versions of HTML and it tells us what document type it uses then it is put in large brackets this is the HTML body and then we get some information like here the main heading in the age brackets or a paragraph made up might have a link inside the paragraph for the word link see that here so most browsers display it that way that they just say this is a different color and it's kind of underlined and this shows you that it's clickable that you can click it again an age infrastructure where you can kind of have a different heading and number between the ages age one and age two age three shows you what is the size the respective size of the heading so age one is the main heading and age two is the subheading you may have list items and so on okay this is basically what the HTML page looks like that does not only contain the 
information here for example and here but also how to display the information here for example here okay and what you can do with the information here for example that's basically what you can write in HTML good as I said HTML comes in different versions and started off with HTML one zero that was designed directly by Tim Berners-Lee in the beginning and as the web took off there were always more ways of dealing with the structure of web pages became more complex and so you had to invent new HTML commands you had to invent new structures on HTML and that went on through the 90s basically 95 the web was available for a large number of people so that was kind of the actual birth of the web where it left the confines of the academic institutions and in 2000 we had what is often referred to as ISO HTML so HTML was standardized by the international standards organization after 2000 XML became very popular and HTML started to work on different standards age HTML based on XML currently we are in the working draft of HTML5 which is about to come so and that starts me up on a detour to see a little bit about the history of the web and how it actually developed. Well we've already seen that Tim Berners-Lee has invented the web so what most people do not know is that Tim Berners-Lee did not invent hyperlinks so in fact there have been before the magic year of 1989 there have been two research communities working on hypertext and one working on the internet so the internet usually means therefore there's a network of data connections as we know it between usually between different universities and research institutions at this time and they basically shared files they already have been emailed they send emails and then they could use protocols like FTP to share files for example the research data someone could upload it to its own FTP server and then someone could download it in the US for example so this has been the internet community it was about exchanging data in a very technical and not very intuitive way on the other hand there have been the hypertext community so basically in the 70s and 80s they try to build yeah some kind of intuitive information systems that are easy to use for users where information can easily be accessed and structured in a nice way and it is something like maybe you know these online helps that some applications offered in the 90s where you press F1 button and then some kind of web like interface started where you could click links of course and came to different pages and this is what hypertext is about basically text pages with links connecting these pages however the central idea of hypertext was that there is a central instance managing all documents in inside the hypertext system and managing all links so the idea is that you could easily move a page and all links are changed accordingly so you don't miss any information and everything is well connected and high quality of the whole thing so usually these have been some kind of specialized software systems and didn't have any connection with the internet so what Tim Berners-Lee did so he was at the time working in the physics department at the European organization for nuclear research in Geneva abbreviated as CERN and this is a large nuclear research project funded by the European Union and because it's so expensive all European countries wanted to join forces and create a central agency for nuclear research and of course because many European countries are involved they want to share their results they want to 
share their data. So there is an urgent need to distribute the data and the experimental results gathered in Geneva, to distribute them across Europe to other researchers for analysis. Of course, this could have been done by FTP downloads, but it would have been much better if there had been methods for true collaboration, so that researchers really can share their information and data and results and experimental designs in an intuitive way. So he recognized this problem, that there was no way to share data in a nice way and no common presentation software the people could use; it was always about exchanging files, which isn't too nice usually. And then he had the idea to write a proposal, a large hypertext database with typed links, and his idea was basically to define something like web equals hypertext plus internet. And his application case was, for starters, the phone book at CERN. CERN is a pretty large agency with thousands of researchers, and it would have been great to have some kind of online phone book where people could look up where they can reach their friends and the researchers they are going to work with in an easy way. So he had this dream of having such a collaboration platform in some way and just started implementing it on his NeXT workstation, some kind of state-of-the-art server at the time, so not very impressive hardware, but that basically was the first web server. So he started building his system, and who knows what's written on this sticker here? Yeah, don't switch it off, this is a server, because there are always stupid guys at CERN thinking someone left his workstation on over the night and switching it off. So this was basically what he was doing: implementing the web, or the roots of the web, on his own small workstation and leaving it on for days and nights and hoping that other people are interested in the information he had to offer. And, as it always is with good ideas, the proposal that he initially wrote was declined, so nobody recognized his brilliant idea. Some of you know that it always happens that you think you have a big idea but other people are not able to recognize it. Yeah, well, that's life. But Tim Berners-Lee found some colleagues who wanted to support him. Robert Cailliau, a computer scientist at CERN, decided to join forces with Tim Berners-Lee, and they started a new proposal and presented their idea at some European conference on hypertext technology, where the hypertext vendors at the time came together every year and discussed their new software systems. But, yeah, nobody was interested in their wonderful idea. It would have been a big chance for vendors of these systems to say, well, we will support you and we will sell your idea, and then they would have made a lot of money, but nobody decided to do so. And because of this, I think mainly because of this, the web now is as free as it is, because there is no big company behind it steering the web standardization process and how things are working. So because nobody wanted the web, or at least no commercial vendors wanted his technology, Tim Berners-Lee and his colleagues just started to, or just continued to, implement their idea into a bigger framework that can be distributed across many servers all over the world. By Christmas 1990 they had gathered and created all the tools they needed for a working web, and most of these tools are still in action today: HTML, we have HTTP for transferring the files, we've seen that, then a web server software; of course you need some kind of
server that is responding to your requests, hardware the server could run on, so the first web host was info.cern.ch, and of course you need a web browser, and this was called WorldWideWeb and ran only on his NeXT workstation at the time, so not very common hardware at the time, but at CERN they had some of these machines and he was able to distribute this software to his colleagues. What is also important: from the beginning, Tim Berners-Lee thought of the web as some kind of interactive medium where people could easily edit the pages. So in HTTP there are some requests that are intended to modify pages directly, and so his web browser was a browser slash editor. This feature was dropped after some time because people didn't want to edit pages by themselves at the time, but you see projects like Wikipedia, or wiki software in general: today people are going to interact much more and want to share the information and edit pages, and it's interesting to see that Tim Berners-Lee already anticipated this need that would come up 10 years later. So this is how the first web browser looked. Yeah, it's this Unix-type interface, not very comfortable, but it looked like the hypertext systems at the time looked. But links have been designed in a way that you could refer to other servers all over the world, and this was his main contribution, that he built a distributed hypertext system, which was very novel at the time. So then things started to get going, the web became more and more popular, and some people created simple text browsers for the first home computers that usually had a text-based interface, so some kind of Unix or DOS or whatever. And then he extended his idea and made his large telephone directory public at CERN, which previously was located on a large mainframe computer where you had to log in to some text-based interface. And with his phone directory that you can easily search via a web interface, now people recognized that there was something like the web, and they also saw, well, if you could build a telephone book with web technology, then you could also present your own research, you could collaborate on research projects and share information in an intuitive way, and this is maybe the point where the web really started off. And finally they made an announcement in some hypertext newsgroup, some kind of internet discussion forum, where they simply told, yeah, we have this web project and we are looking for people who are interested in this, please join us, please try it out, we are very happy to find people who are going to work with us. And some people did, and the web, as you know, spread around the whole world, and new web browsers were created, for example the web browser Mosaic, one of the first mainstream browsers that was able to run on most Unix computers. And in fact this browser has been programmed by the team led by the founder of Netscape, Marc Andreessen, one of the great entrepreneurs at the time. I have no idea what he is currently doing, maybe spending his money in the Bahamas, but this was one of the big names at the beginning of the web.
So in 1994 the company Netscape was founded, and Mosaic became the Netscape Navigator, which some of you might remember, one of the most famous browsers before Microsoft launched the Internet Explorer. And at the same time Tim Berners-Lee founded the World Wide Web Consortium at MIT in the US, and the goal of the W3C is to standardize web technology. So they develop all these HTML standards, they say how HTTP should look, they develop technologies like XML and are currently developing HTML5, the next big thing in hypertext technology. And the World Wide Web Consortium is designed as a kind of meeting platform for different companies and different people all over the world, where they really discuss how a standard should look; so like any other standardization organization in industry, this is now the World Wide Web Consortium for web things. So you know how the story continued: the web is really famous, the W3C is inventing new things, some more popular, some less popular, but yeah, that's the way the web has been invented, and now we are going to talk about web crawling. Well, as I already said, web crawling is the first step in building an index and knowing what is actually out there, what is part of the web. And this is actually a very interesting problem, because if you think naively, what do you do? Well, basically a crawler, or some say a robot or spider, would just queue all the URIs there are, retrieve the web resources, process whatever is given or returned by the request, so basically all the HTTP data; then you would have a page parser that extracts links from the retrieved resources, adds them to your queue, adds all the information on the page to your indexer, and then works on with your queue, basically starting with a couple of seed pages where you say, okay, this is important, and just let's go from there. We know this bow-tie structure of the web; if we pick some seed pages from here, they will lead us into the core at some point, and from there we start kind of exploring all the different pages in the core and of course also exploring the out-links in this other part of the web, and that would give us a very good impression of how big the web is and what has to be indexed and how we can reach it. So from the seed pages we would just work our queue: retrieve and parse the pages, extract all the URIs, add new URIs to the queue, take the next URI, and so on. This would basically be the naive approach of web search; this is what a basic crawler does. But if we look into it a little bit deeper and just take a very conservative estimation of how big the web is, 60 billion pages, this is one of the really conservative ones, and let's assume we want to crawl each page once a year. We had some statistics last week on how often web pages change and stuff like that, so let's assume for a moment that most of the web pages are kind of static and the information does not change too much, and let's leave out dynamically generated content and stuff, you know, just focus on a couple of web pages that are rather static, but we want to revisit them once a year to get some updates on our index. Then how many pages would we have to crawl per second? Let's break it down: that would be 60 billion per year, divided by 12 makes 5 billion per month, divided by 30, and we will also take Christmas and Easter and everything into account, so we're working all year round, makes it a whopping 166 million that we have to process per day, which means about 7 million per hour, a bit over 100,000 per minute, and divided by 60 that gives us about 2,000 pages a second that we have to retrieve and parse, that we have to extract the relevant terms from for our indexer, plus extract the links and add them to our queue.
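To make the loop just described concrete, here is a minimal, single-threaded sketch of such a naive crawler in Python, assuming we start from a single illustrative seed URL and stop after a handful of pages. It deliberately ignores robots.txt, politeness delays, proper duplicate detection and all the other refinements discussed below; it only shows the queue, retrieve, parse, extract cycle.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects the href attribute of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def naive_crawl(seeds, max_pages=20):
    """The naive crawler loop: fetch, parse, extract links, enqueue."""
    queue = deque(seeds)          # URIs still to be crawled
    seen = set(seeds)             # crude duplicate check on URI strings
    while queue and len(seen) <= max_pages:
        uri = queue.popleft()
        try:
            with urlopen(uri, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue              # dangling link, timeout, ... just skip it
        # Here the page body would go to the indexer; we only extract links.
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(uri, link)      # resolve relative links
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
        print(f"crawled {uri}, queue size now {len(queue)}")

# Example seed; any page with outgoing links works.
naive_crawl(["https://www.example.org/"], max_pages=5)
```

Even this toy version makes the scalability point obvious: fetching and parsing one page at a time over the network never gets anywhere near the throughput estimated above, which is why real crawlers are distributed and heavily parallelized.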
2,000 per second, it's quite a bit; it's not something you do with your smartphone. So crawling really is about scalability, it's really a problem of how do you get to the pages, what is the important part, how can I quickly get the information out of a page, how often do I have to revisit pages. And of course it comes with further complications, because it's not only the scalability, it's also: what do you want to have in your index? Do you really want to index everything, or do you want to check if it's sensible information, or do you want to break down the information somehow, do you want to avoid duplicate content? Could it even be that your crawler is trapped somehow by pages? So consider pages pointing to each other, and you always add the next URI to your queue, go there, add the back link to your queue, go there, add the URI to your queue again; you know, you will run in circles again and again, and this is obviously not what you want to have. I mean, the web is obviously not tree-shaped, so it's not depth-first search or breadth-first search or something like that that you can do, but it is a cyclic graph; how do you avoid cycles? 2,000 per second, that doesn't sound like one machine doing it, that sounds like many machines doing it. If many machines are doing it, how do you synchronize between the different servers, how do you synchronize between the parts of the web that are crawled? Because the web doesn't break down, it's not one branch here and one branch there, it's this all-connected graph, and as soon as two machines crawl the same locations you're doing double work, doesn't make sense. What about latency and bandwidth, so typical networking problems, connection problems? I mean, if you crawl the pages all the time, this consumes a lot of bandwidth; if I have a lot of web search engines that crawl my website, it will not be open to customers anymore, because all the connections, all the bandwidth that I have available is given to the crawlers; it's not what I would want. Also, the sites are becoming bigger and more complicated by the minute; you know, you have a lot of dynamically generated content, it's not just your normal file structure anymore like it used to be in the 90s, where you had, okay, you have the home page in the top directory and then you have three or four subdirectories, one for the photos and one for the, blah, you know, you name it. But you have a lot of dynamically generated addresses that are filled in time. So how deep should you crawl the site? Is it sufficient to know what is actually on the top level, well, this is a site about sofas where you could buy furniture or something like that, is that sufficient, leading you to the top page of the portal? Or do you want to know about that special table, that wonderful Technische Universität Braunschweig seminar room table that you want for home because you miss it, so do you want to index that? Well, some people might want to know. And sometimes it's also a little bit complicated what the owner of a website wants: does the owner of the website want the website to be publicly announced, is it more or less private information that should be accessible to some people but not to the world at large, or are there certain parts of the information that are, kind of, maybe not confidential, because then I wouldn't put it on the web, but maybe I would be more comfortable if not everybody looks at it, or if Google does not display it in the first result
set when querying for some information. So how do we do that? There are a lot of questions that have to be discussed, and one of the must-have features is definitely robustness, because the web is very diverse, it's very heterogeneous. In the beginning there were a couple of universities that were running web servers, and basically what you experienced in terms of content was rather homogeneous, because they did it the same way or copied sites from other locations. But nowadays, for all the problems that you might have, cyclic links or dangling links or whatever your crawler might experience, there will be web pages exhibiting this problem; it's gotten so big and so diverse that you will definitely find all kinds of problems that you may run into in the web pages. The information that is transported by the protocol can be assumed, since it is a protocol, to be correct; but of course all the browsers allow for some robustness in terms of displaying web pages that are not well formed. For example, what happens very often is that you have some kind of this paragraph tag, you know, and here the end paragraph tag, and very often this is just missing, okay, and somehow once a browser sees another one of this type it will just add the end paragraph to the page and just display it in the right form. And this is happening very often actually, so pages are very often malformed, and if you read them your crawler may crash or may just end up in some loop or whatever. So robustness is always a very good idea, but it should also cover not only the mistakes that are inadvertently made, but also the cases where some websites want to trap you, where somebody knows how a crawler works and exploits that, for whatever reason, be it just pure maliciousness, or whether he wants to kind of tease people, or there is a sense in keeping crawlers creating traffic for some large amount of time, or whatever it may be. This is what is often called spider traps, where the crawler thinks it's crawling new sites but basically is running around in circles. We will discuss some of these techniques when we go into spam detection and similar topics and discuss them a little bit in more detail.
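As a small illustration of the robustness point, here is a sketch of the kind of cheap defensive heuristics a crawler might apply before putting a URI into its queue, to avoid the simplest spider traps built from endlessly generated URLs. The thresholds and function names are made up for illustration and are not taken from any particular crawler.

```python
from collections import Counter
from urllib.parse import urlsplit

MAX_PAGES_PER_HOST = 1000   # illustrative limits, not tuned values
MAX_PATH_SEGMENTS = 12
MAX_URI_LENGTH = 256

pages_per_host = Counter()

def looks_like_trap(uri: str) -> bool:
    """Cheap heuristics against simple spider traps: extremely long or deeply
    nested URIs, or far more pages from one host than we are willing to take."""
    parts = urlsplit(uri)
    if len(uri) > MAX_URI_LENGTH:
        return True
    if parts.path.count("/") > MAX_PATH_SEGMENTS:
        return True
    if pages_per_host[parts.netloc] >= MAX_PAGES_PER_HOST:
        return True
    return False

def accept(uri: str) -> bool:
    """Called before a URI is put into the crawl queue."""
    if looks_like_trap(uri):
        return False
    pages_per_host[urlsplit(uri).netloc] += 1
    return True

print(accept("https://www.example.org/a/b/c.html"))                  # True
print(accept("https://www.example.org/" + "loop/" * 40 + "x.html"))  # False
```

Real crawlers combine such heuristics with the duplicate detection and politeness mechanisms discussed next.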
For the must-have features, very often politeness is argued about, because, I mean, Google can do whatever Google wants to do, and Google's slogan always was don't be evil, and if you believe that, well, that's okay and that's polite, but is it the reality? So what is more important: to have a 100% complete index, or to accept the wishes of the owner that this page should not be crawled, or that you shouldn't have too many requests on this page, to create a little bit less traffic, or crawl it, yes, you may crawl it, but just at a slow pace, so that it's always available for customer traffic? This is kind of what is perceived under the heading of politeness, because the website owner usually has to pay for the website traffic. If I just hammer it with requests from my crawler, that doesn't create anything for the website, it doesn't transport information, it does not do anything good for the website owner, it creates a lot of traffic, the owner pays for that traffic, bad idea. Also, if I hammer it, nobody else may access it, and it has amazingly long loading times, and my customers will be missed. Usually the policy for crawling some website, which areas you should avoid when crawling, how often you may crawl, whether some engines may crawl it at all, is given by the site owner in a file called robots.txt, and we will discuss that, like, now; there's no time like now, I see, yes. The robots exclusion standard is some way to tell, yeah, crawlers what they should do with your website. So this is not only about politeness: if you are some kind of web search engine and have your own crawler, then you could simply ignore what the people are telling you, but if you hurt them too much, then they will simply lock you out of their pages, so they will simply use an IP filter or something like this, and you get no content at all. So usually it is a good idea to adhere to this standard and to listen to what people really want crawlers to do and what not. The basic idea is that if you are having a website, you put a file named robots.txt into the root directory of your domain; for Wikipedia, for example, it's this URL, and in this file you can specify what resources crawlers are allowed to access, how often they should access them, and what they should not access. And a crawler then, the first URL it retrieves from a site is this robots.txt; it parses it, reads it, and then loads all the other pages, but obeying all the things that have been written in the robots.txt. A bit of caution: this is not a standard in the W3C sense, there never has been some kind of standardization process for robots.txt, it's just there, and if some web crawler is visiting your website every day and downloading gigabytes of content, then, yeah, you could use a robots.txt, but you won't be able to sue this crawler company, because, yeah, there is no standard at all, so be aware of that. Okay, some small examples. For example, this is a very easy robots.txt that simply allows robots to view all files; the idea is you can distinguish between different crawlers, and here you say the following rule applies to all user agents, and disallowed pages are none, so all crawlers can access everything. Some more examples: to keep all robots out, for all user agents the root directory and everything below it is disallowed, so when you just say a single slash, it also applies to all pages beginning with a slash, that is, to your whole domain.
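Before looking at more elaborate examples like Wikipedia's robots.txt below, here is a small sketch of how a crawler might consult a site's robots.txt in practice, using Python's standard urllib.robotparser. The user agent name and the tested paths are only illustrative assumptions.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical crawler name; by convention robots.txt lives in the root
# directory of the domain.
USER_AGENT = "toy-crawler"

rp = RobotFileParser("https://en.wikipedia.org/robots.txt")
rp.read()   # fetch and parse the file once, before crawling the site

for path in ["/wiki/New_South_Wales", "/w/index.php?action=edit"]:
    url = "https://en.wikipedia.org" + path
    if rp.can_fetch(USER_AGENT, url):
        print("allowed:   ", url)
    else:
        print("disallowed:", url)

# Some sites also state a preferred delay between requests.
print("crawl delay:", rp.crawl_delay(USER_AGENT))
```

If can_fetch returns False for a URL, a polite crawler simply never puts that URL into its queue, and if a crawl delay is stated it waits at least that long between requests to the same host.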
You can also exclude certain resources, for example some dynamic content or your secret private directory nobody should know about; if someone links to it by accident, search engines shouldn't index it. You could exclude a specific bot; usually crawlers submit their own name in the HTTP request, and then you could tell it, yeah, dear bad bot, do not read my private directory. You could also say that only one request every five seconds should be made, or every 10 seconds, or whatever you like. You could also say that crawlers should only come in a certain time interval, for example when you think that there will be no traffic on your site and so crawlers won't hurt your performance, and there are a lot of things you can adjust here. A very nice example is the Wikipedia robots.txt. This is pretty long, because they encountered a lot of problems in the past years and they have nicely commented all the things they have been doing in the robots.txt. For example, they disallow every bot from Google that is solely focused on AdWords or something like this, or some advertising-centered crawling type; they allow Wikipedia's own bot explicitly and some other bots, and essentially they created a large list of bots that behave badly and disallowed them. Yeah, this goes on and on. This is also quite nice: Wget is a Unix tool that you can use for downloading web pages, and Wikipedia has seen that some people use it in a bad way; some people just try to use this tool to download the whole of Wikipedia and don't leave any pauses between the loading of different pages, and so Wikipedia gets instantly hammered, especially if people do this from some kind of university connection, which is pretty fast and large, so there's a lot of traffic. And because of that they decided to lock out this tool, so if you now try to download Wikipedia using this tool, you will get the message that the site doesn't allow it. Then some more clients that have behaved very poorly according to Wikipedia, blah blah, some more, and again you can see in the comments what has happened here and why this user agent has been excluded. So if you have your own web page and you want to start a new robots.txt, then maybe the Wikipedia page could be a good starting point, and you can immediately lock out all the bots you don't want to have there, because Wikipedia probably encountered all the bots there are in the internet and made their own annotations here. Okay, then some special pages are excluded that shouldn't be indexed, so these are some internal pages. And here's a page called trap, where they are simply enforcing that web crawlers obey the robots.txt; I'm guessing trap is just a page linking to itself or linking to randomly generated URLs, and so they can simply check which crawlers actually obey the robots.txt and which do not. So if they identify a crawler which is permanently loading the pages linked by this trap page, then they can identify it and lock it out using IP filters or something like that. All right, this is robots.txt, and I think we make a five minutes break now. Okay, apart from the must-have features, that is, what the web crawler should feature, there are also some features that are nice to have, that are not strictly necessary but that will speed up the process, that will keep the index fresh and so on. So one of the points is a distributed mode of operation, where the crawler
resides on multiple machines and makes you robust against hardware failure obviously and also distributes the work so you can do it more efficiently it should be scalable because I mean a stale index is annoying your customers as a web search engine all the customers get the first top ten are stale pages that either do not contain the information any longer that you have been looking for or you get 404 errors because the page has been moved you know it doesn't help you either so to satisfy your customers you should have fresh index and that means scalability given the size of the web performance efficiency is always a good idea because you can do clever crawling and you can do stupid crawling and depending on what you want in your index and and and how fresh you want it to be how many resources you want to spend in in building your crawler you should consider efficient techniques on the top of your your agenda and of course you should consider the quality of the index so if you index all the spam pages and for every query return some spam page among the first a few results then your customers will definitely be dissatisfied so you have to find the useful pages you have to find the non spam pages the info the pages are delivering better information than others or more complete information than others or the information in a nice mode of displaying it you know like you can you can also well it is often claimed that that information and display should better not be mixed but it's it's actually that way how we you must think and how we perceive things that a good display of information helps you to understand the information so also those pages having a good display or having clever display should be preferred freshness is kind of one of the basic ideas of getting getting good information to your customers so your crawler has to operate in a continuous mode and should crawl the pages once in a while that reflects the frequency of changes on this page so there are some pages where you might afford to well visit it once a year and see if it's still there and still kind of having the same information but but a lot of pages are changing continuously or changing very very quickly so for example if you think about a newspaper portal having some some page giving the newest information you can crawl that once a day because there will be new messages every day it doesn't doesn't help you to have stale messages from from a newspaper page they will be offline and the same goes for pages like well like like I don't know like governmental information census information or something that will change once a year there's no problem if that is a little stale so this is this kind of the the idea that you can should consider the rate of change to a page within your crawler also if you see that for some page large demand is currently developing then crawling this page more often to get fresher information would be a good idea so one of the typical examples always the world cup so as soon as the world cup comes we did that was the Google side guys last week as soon as the world cup comes there's a spike in queries and of course the people are customers that are kind of supporting your advertising model your business model so which engine which search engine gets the the most attention by by by the people interested in the world cup is definitely a business issue it's a financial issue so better provide fresh information better provide good information people are immediately finding what they want because if they 
have to click through 10 pages that are spam pages or that are last year's world cup or four years ago or something like that that won't help them and they will change the search engine so this is a good idea another feature that is very helpful is extensibility so if you have new data format new protocols developing as we said the world web consortium is kind of currently in the process of refining some of the web languages we get HTML5 but also other kinds of languages XML some of the semantic web languages OWL and RDF and you name it so they have working groups for a lot of areas and once they develop something new and once they standardize something new it it it will be out there there will be some people using it and relying on these issues so your crawlers should should reflect that and basically what you need is a modular architecture where you can just just plug in modules that are interesting for crawling new types of content. If you look at it from a large scale view then the basic part of your crawler is doing that naive process that we were describing so you have a queue of URIs you fetch the resources first in the list second in the list and so on and then you pass the resource extract the new URIs and then you put it into your queue and this is basically the mode of operation but with all the requirements that we have we have to do some more the resource fetcher has to handle the DNS somehow so it can't hammer the DNS server for all the different resource but should be a little bit more clever. The resource parser should first kind of try to find out whether this is good content whether this content has already been indexed whether it's it's kind of spam or whatever so duplicate checking text indexing other analysis spam detection and stuff like that is very important here for all the links not only for the content but also for the links the same has to apply I can't check I can't crawl the same links times and again but I should also arrive at some point at new links so whenever I have already crawled some URI in the current batch so not a year ago or something but in the current crawling version that I'm doing I have to check whether I already did that URI I might have to check what's said in the politics so in the robots text what can I do what may I do and I have to have some kind of distribution component that manages different parts of the crawler that are working more or less independently that also manages how often a page is retrieved so also this to avoid this hammering and if I'm doing that and if I have all these components working nicely together then at the end I will have a sensible index an index that contains useful information that contains fresh information and that doesn't annoy the owners of the web sites this is the basic idea how it works so for the DNS handler the fetching of DNS is usually quite quite slow due to the network latency because everybody has to go for every request to some DNS server and look it up so your crawler is just one among the crowd and if you start what did we say 2,000 pages a second you would hammer the DNS server so there has to be a lot of DNS servers that are created in parallel and the handler is actually a local DNS component that prefetches some information that will be needed in the near future so if I already have the URIs in the queue depending on how fast my parsing is and how fast my processing duplicate checking and stuff like that is I can decide to already query the DNS service for the correct addresses for the 
correct IPs in advance and keep them in my queue so I can immediately request the page then and it could also use a relaxed policy with respect to the DNS updates so avoiding unnecessary DNS queries is a good idea for the DNS handler that you have loaded. A little bit more complex is the duplicate checking both in URIs and in context so let's start with the URI checker. The basic idea is if I have crawled URI within the last whatever it is week or month or whatever my turnaround time for the crawl is I should definitely not crawl it again immediately but delay it at some future point but of course checking the URIs against everything that I've crawled recently remember 2000 a second maybe a little bit difficult because I mean string matching over the last I don't know week or something like that 2000 a second that's quite a task so what you basically do string comparisons that's just out of the question doesn't work even if you index your strings you're doing some kind of inverted file index or something like that doesn't help you too much because it will just end up in building this index is more complex than doing the actual maintaining the index more complex and doing the actual matching doesn't make it fast enough so what is usually done is you use not the URIs as such but you use fingerprints of the URI so you abstract the URI but into some hash value so you use URIs in the normalized forms that's one point that already reduces the search space and then for any of the URIs that you get you compute hash value or the fingerprint and use some hash function for that one of the most prominent is MD5 which is basically delivering 128 bit fingerprints so for example if you use the E-Fist web page this is what is delivered by MD5 can be computed very quickly and now indexing this number becomes far easier than indexing the strings so for example you can build a B-tree or hash table or whatever you may use so if you're using a lot of service and working in a multirational in a distributed fashion something like distributed hash tables might be a very good choice for actually doing that for illustration we will do it with a B-tree here so what you basically do is you take the numbers and for every block of the B-tree you will decide which part of the B-tree which a subtree of the B-tree you have to write so for every number that is smaller or equal to 3 you will be pointed to the first block for every number that is larger than 3 but smaller or equal to 5 you will be pointed to the second and so on and so on yeah so the block size over here usually reflects the size of the disk blocks that you have to load to read it but usually the index structure for detecting duplicate URI should be done in main memory and you should just traverse the B-tree until you arrive at different fingerprints and then you can efficiently look up whether a fingerprint has already been used or not since it is a hash function collisions are very improbable depending on the size of the hash function that you used and the numerical comparisons can be done quite quickly so if you find that some fingerprint is already in your B-tree it will with a very high probability have been produced by the same URI might be that there is a collision but usually there is no good so what you basically do is what I am saying you ask whether a URI is contained in the B-tree if it is not contained in the B-tree then it is definitely new so there is no URI that has created this hash if it is contained in the B-tree then it could still be a 
collision so what we basically do is we ask if the same URI strings originate or basically led to the fingerprint and this results in only a couple of string matches so here you need string matches but it is a very limited number all the strings that kind of result when being hashed in this collision bucket and that is kind of okay so if that is true if they were actually hashed from the same string then it is known so then we will not take it into our list if they were not created by the same string then again it is new and this is basically put into the list of or into the queue of URIs that still have to be crawled okay quite efficient for the fingerprinting of URIs yes. So the question is if we have a good duplicate checking how can traps work so that you would not look up the same URI and you are definitely right that the simple traps like we will just create a loop of pages do not work anymore if you have a good duplicate checking however the traps become cleverer as the crawlers become cleverer and one of the ideas of creating a simple spider trap is dynamically generating new URIs that are however mapped to the same site which then again creates as a link a dynamically generated URI so you get new URIs and there are no duplicates of this URIs they are in the same site usually but they have never occurred before however they link to the same page basically and this results in a typical spider trap so it can still happen. Good the thing becomes a little bit more problematic if we consider the size so let's say we have one billion URIs and we fingerprinted and the fingerprint is at least 16 bytes this means for one billion URIs we have 15 gigabytes of storage plus more space to store the actual strings that we might need for string comparison in the case of collisions and as always you have two options of storage you can have it in main memory which would be very advisable at that point or you put it on disks if you put it on disks loading from the disk is a very time consuming issue and time consuming is a word that a crawler doesn't like because the crawling itself is quite time consuming and quite a task as we've seen so in either case enforcing locality is kind of a good idea so if you considering the structure of the web you usually have multiple pages within a single site so you do have the ethus website and then there's a sub web page for every researcher at ethus or for every lecture that ethus offers or for every I don't know what's on our website for every paper that ethus publicized or whatever for every project so keeping them close together and saying well basically the idea is we crawl the site and then do the individual pages and if I already crawled the site I won't look into it again is a good idea so locality can usually be enforced if you consider the host name as being the main feature of the site and then all the pages are just variations in the past that is given to some pages so one idea to make it a little bit more localized is you take two fingerprints one for the host name and one for the rest of the URI so if I have for example here the Wikipedia I take the Wikipedia orc which is English Wikipedia as the site name the host name and then I have all the different parts that are different pages within the English Wikipedia for example here the news house wave page and then there will be the northern territories and the rest of Australia pages so you just concatenate both hashes to form a fingerprint which means that the URI of the same host name are located in 
the same subtree of the index because they start with the same prefix and our B tree always looks up the prefix and guides you to the correct part of the tree okay and the longer the common prefix the deeper you are on the tree okay this is basically how you can do it otherwise if you would just hash the whole URI also if I just remove this S the hashing function would hash it probably to very remote places in the index even if it's almost the same you cannot predict that's the basic idea of the hashing function you cannot predict where it will end up good a little bit more complicated than checking the actual URIs is checking duplicate content this is a bit of problem so of course we can do something like okay let's see the web page and then we will just have a fingerprint of the web page and do the same trick as we did with URIs but that's kind of strange so for example if we take the some of the well dynamically generated pages here where basically the time is running and it's always different information so that would result in different hashes which would not be a good idea then the same kind of information can be transported in different layouts and different wordings you know you can build different sentences that basically have the same meaning is that duplicate content or isn't it if you just switch sentences around you just exchange them somehow you know you just move them around you still have the same content on the page but of course if you if you compute the hash since the order has been reversed it's different same goes for web pages that that offer advertising and just exchange the advertisement that ever visit has something changed on the web page well a very negligible part of it because where there was said by Viagra there's now by Cialis or I don't know what ever these advertisements are and this this of course is a problem so so what do we do about that and this was often called near duplicate detection so you don't want to detect whether it's an exact copy but you want to detect whether it's almost the same as the ordering has been switched around or it's no longer by Viagra but by Cialis so just few information on the page has been exchanged and the first step is always to focus on the content only because whether the the layout changes is not interesting to see if you have the content already in your database so you just remove all the styling information you also want to focus on text because with the images is it the same image over there or is it not it's not yeah that's right so here we can see it's not the same image though it looks amazingly similar so if that is hard for us to see it's hard to see for the crawler and you can't just go pixel by pixel through the image and then say well one pixel is different so this must be a totally different image no it's not just skip it you know drop the images drop all dynamic content drop everything that is not the focus of the page consider the text only and in terms of text only the text that is written in paragraphs and and headlines and stuff like that not navigational elements where we go to okay this is the ethos part researchers or teaching or projects or whatever you know I don't want to know about that I want to know what is on the page what is written on the page and this quite quite a problem actually how to how to segment a web page to see what is content what is navigational what is dynamically created and sometimes you see it on the web sources sometimes you don't it's it's kind of kind of difficult to to 
to extract. There are even some visual techniques where we kind of have the perception of the web page, to see, well, the navigation bar is usually on the left-hand side, and the ads are usually in terms of banners, and if you see something blocky that might be an ad, or stuff like that, you know. So there's a lot of techniques, and it's usually ongoing work. And so what stays of a website is not that much anymore: I skip all the navigational stuff, I skip all the navigational stuff over here, I skip all the pictures, I skip all the pictures over here, and just focus on what is interesting. Okay, I'm breaking it down, really, to some very small text fragment. This is what I want to extract, this is what I want to index, because now I know this is a web page about some Institute for Information Systems, and actually it's at the Technische Universität Braunschweig, and so on. This is the interesting part. Well, still, if I change around the order of words, does that help me or not? If I just exchange sentences, same content, not the same, well, at least near duplicate. So I can't do the comparison on a text basis, you know, like if I exchange one word it's not the same text anymore. So again it does not work in terms of hashing it and then considering whether the hashes were created by the same or by different content; if I just exchange one word, the hash value of the whole page will be totally different in terms of content. Okay, so again, I cannot do it on a word level, I cannot do it on the hash level. What do I do? And there's one clever technique that has been called shingling. Shingles are what we use in building houses for covering up the roofs, so they're kind of overlaying, so the water can drain easily, and the idea is that we should do exactly that with a text. For every text that we have, we have a shingle size, which we will call k, and we will have the terms in succession that make up the text. Okay, and then we say the k-shingles of the sequence of terms are all the consecutive sequences of k terms of the document. So if I have some document over here and decide for some k, for example 4, what I will do is I will create the first shingle that goes like that, 4 successive words, then I will create the second shingle that goes like that, then I will create the third shingle that goes like that. Okay, so they're overlapping shingles, and they're kind of like showing snippets of size k of my text. Good, so these are typical 4-shingles of my text, as I just showed you, and now, what is the trick in near-duplicate checking, what does it have to do with the shingles? But yes, but these are shingles of the same text, so we're still talking about one document here and we get multiple shingles of that document. Exactly. So the trick is that we now can say, well, if I have two documents and I do not focus on the sequence of words but just on the set of the shingles: if two documents have a large overlap in the set of shingles, this is near-duplicate content, because it doesn't matter whether in the second document there is not "rose" here but some other term, it just affects the blue and the black shingle, the red shingle is not affected by this exchange. Also, I refer to the set of shingles, not the sequence of shingles. This means if I take the red shingle and put it at the end of the text, it doesn't matter, it's still the same shingle, though the order of the words has been reversed. Okay, it means the same. So we can say that two documents are near duplicates if the two sets of shingles generated from them are nearly the same, the larger the overlaps in terms of
shingles the higher the similarity in terms of the documents well take two documents shingle them and then what we can do is compute the jacar coefficient the jacar coefficient we very briefly had it in the fuzzy retrieval part so what we do is we measure the overlap between the text we take the shingles that are in both documents and divide it by the number of shingles that are totally in the documents okay and if the two documents perfectly overlap this will be one if there are a lot of shingles and the overlap is very small zero it will be zero and the higher the jacar coefficient the more it turns towards one the more similar the objects are because the shingles are the same okay so we say kind of like I don't know 0.9 or something like that 90% overlap that is a near duplicate and that accounts for the different words taken from the advertisement so it's not Viagra anymore it's Cialis or whatever you know can be done good if you have all the shingles computing this jacar coefficient is kind of easy so you could have the you could just sort the sets of shingles then you find the intersection and then you merge the sorted lists so you get the total number of shingles that is there and then in n log n you could compute the jacar coefficient this is a little bit tricky because what we don't do is we compute two pairs of shingles and then and and consider them but what we have to do is we have a set of shingles documents already that we somehow crawled and now we have to find out whether for a newly shingled document the jacar coefficient with respect to any of these documents that we had before is higher than 90% 0.9 okay this is kind of hmm we can't do that because I mean keeping the shingling sets of all the documents that we have before doesn't work you know like that's too expensive then computing jacar coefficients for every newly shingled document with all the documents that we have crawled before definitely prohibitive we can't do that so again we need clever indexing technique to deal with that problem that's really a difficult problem and it sounds kind of kind of unintuitive what is done now so bear with me for a minute and I will show you because a very clever way of dealing with the problem is a randomized approximation so I will not do the exact thing I will use a randomized algorithm to approximate the result for my jacar coefficient so a shingle is comes as no surprise hashed into some value for example 64-bit integers then the set of shingles is basically transformed into a set of hash values simple a hash value could be derived from several shingles collisions may occur but if a hash value is not there there is no shingle that created it okay so we have positive false positives but we never have false negatives then we could say well the jacar coefficient between the sets of shingles is basically the same as the jacar coefficients between the sets of hash values because I mean collisions do occur yes but that in some document the collisions are exactly the same as in the other document so different parts in each document created exactly the same hash values that is rather improbable collisions are improbable as such but you know synchronized collisions are kind of very improbable so with that argument I just assume that the jacar coefficient of the hash values will be sufficient to approximate the jacar coefficient of the of the shingles now I use another little trick and use a random permutation on the set of all 64-bit integers that means I just take a 64-bit integer 
and shift around the numbers in the 64-bit range in some way. It's a determined way, you know, but it's any permutation, just randomly picked. Good, a simple permutation obviously is identity, I map them to themselves, that doesn't help me very much, but it could be kind of like just add one, and so every value is kind of increased by one, and if I have an overflow I go back to zero. That could be a possibility. I take one such permutation, so I have to decide for one, which one, at random. Okay, so I do not randomly permute every hash value and then everything is gone, but I just decide for one permutation, whatever it is. So I have a set of shingles, I hash it, I have a set of hash values, I permute it by this chosen permutation, a randomly chosen permutation pi, and then I look for the smallest value that was derived by this permutation. Okay, good. What I do is: I shingle it, I hash it, I randomly permute it, and then I choose the minimum, which is the one over here. Okay, and now my claim is that the Jaccard coefficient of the two hash sets of documents is basically the probability that their minimum numbers are the same. So the overlap is higher if they create the same minimum number by random permutation. So it's no longer the issue whether the smallest shingle is there or not in terms of the hash value, but whether the random permutation creates the same minimum. Okay, so that is the idea. Do you believe it or not? No, it's a probability, it's a probability, it's not a possibility. So I mean, a probability is always one or zero after you've seen the fact, so that's the same with, I don't know, Schrödinger's cat: open the box and the probability is gone, it has become certainty, you know. But the probability is the interesting part, and that is somewhere between zero and one, as is the Jaccard coefficient, so the domain is the same. Exactly. No, no, no, either the minimum is the same or it's not, but what is the probability that a random permutation of your hash values will create the same minimum or not, that's a different question. That's not the question whether the minimum is exactly the same, I mean, that was what we were doing before, you know. That is the basic idea, you know, I have a lot of random permutations, I have a lot of possibilities to apply permutations to our numbers, okay, and if the probability is very high that either of these permutations will create the same minimum, then the overlap must be big. Because I create the minimum with one fixed permutation in each random experiment. So the random experiment is as follows: I randomly choose a permutation and then I apply this permutation to all the hash values, the same permutation. Okay, this permutation will make one specific hash value in both documents the smallest, because, even if I draw a different permutation, it will do the same. If, no matter what permutation I draw, and this is where the probability comes in, I will end up with the same minimum, then it must have come from the same shingle in the original set, or from a shingle that had a collision, which is rather improbable. Okay, this is the idea. So the higher the probability that random permutations really result, for two documents, in the same minimum, the higher the overlap must be, because the minimum is directly permuted from a shingle, and the more possibilities I have to permute the shingle, the more shingles must overlap. Okay? Yes, well, getting the probability is a different thing from knowing that the probability is proportional to the Jaccard index. We will talk about how
we get the probabilities but but but for now the point that I want to drive across is really this probability is really proportional or well equal to the the jacar index of the hash values okay this is what I want to want to do this is exactly what we do you know we have two sets of shingles and we have the corresponding hash values then we take a random permutation so we choose one apply it we calculate the minimum compare it with respect to each other we choose another permutation apply it calculate the minimum and so on huh so as a random experiment this goes on like that and we can prove that well if we have the the sets given as as bit strings huh what basically happens is that we have the positions in the bit strings and the hashes for for for the documents huh so a permutation is basically a random swapping of the of the of the columns because we cannot say what it will be transferred into huh and a random swapping in the columns means that we swap I don't know zero here with the one here and then we have the one in front huh and so on what is the minimum number of this hash values now well minimum means obviously that has it has leading zeros because the more leading zeros it has the smaller the thing is that remains okay this is a bit representation of the number if I have a one here will be quite a large number if I have a lot of leading zeros will be a small number okay and this is this is the basic ideas if I take the the minimums as positions of the first zero columns huh so this would be the first zero columns in in in this case then the probability that the minimum of one is the probability the minimum of the other is proportional to the probability that the first zero column occurs in the same position huh because I mean the rest is noise but if there's zero column is not in the first position it's definitely not the same number okay so this has to be proportional what is the probability that both have the first zero color of the position well since the matching zero columns over here can be ignored it's a probability the first non-zero column is a column of the form two ones huh so we don't want a one zero or zero one column we want a one one column that means that the probability that the minimum hash values after a random permutation has been applied or that the same random permutation has been applied to the different hash sets of the shingles is the probability that the first non-zero zero column indeed is a one one column huh I've basically crossed out all the leading zeros look at the first column and if that is one one then chances are very well that's the same minimum model of noise but we're talking always about you know like it's proportionate and this has to do with efficiency so we can we can well lose something here so if we continue our proof what is the probability that the first non-zero zero column is actually a one one column so this case happens I cross out the leading zeros and then look at this column and this is indeed a one one column well the probability is the number of one one columns in the whole vector divided by the number of non-zero zero columns of course if I have a one one column or if I cross out zero zero columns the column that is remaining could be of three types it could be a one one column it could be a one zero column or it could be a zero one column the probability that is the first one the one one column is the probability of a one one column existing upper part divided by all the possibilities there are the number of non-zero zero 
column, or the number of one-one columns plus the number of one-zero columns plus the number of zero-one columns. Okay, good, and this is exactly the definition of the Jaccard coefficient. Cool, isn't it? I grant you that this is not, I mean, you wouldn't probably see it in the beginning when somebody says, well, it's easy, you know, like you shingle it and you permute it and then if the minimum is kind of the same then it's kind of the same thing, near-duplicate content. But it makes sense, doesn't it? I always like that, it's fun, shingling is really fun. And the father of shingling is Andrei Broder, actually, who worked for Yahoo quite some time, and this was actually a problem that search engines really had. So this is not some theoretical issue and some nice hobby of mine, but this is really what Yahoo did to detect near-duplicate content. So the idea is we can estimate the overlap by applying random permutations and computing the minimum, and now your problem comes: how do we compute this, do we really have to do it? And the answer is yes, we have to do it, and what we will do is we will just take 200 randomly sampled functions, apply them, and that is a good measure, a sufficiently good measure for what we do. We take 200 randomly chosen but fixed permutations, we take the minimums of the permutations applied to the shingle sets, well, the hash sets of the shingle sets actually, okay, and this is what is called a sketch of the document. Okay, so the Jaccard coefficient of two documents can now be estimated by counting the number of places where the two sketches of the documents agree, that is, where the random permutation creates the same minimum. If that happens in a lot of places, it's near duplicates; if that happens only rarely, they share a shingle or two, but it's not the same content. Okay, that's the basic idea, and since the Jaccard coefficient of the hashes and that of the actual shingles, just forget about the collisions, we don't want that, it's reasonably improbable that a collision happens, are kind of similar, the sketch is a good way of estimating the Jaccard coefficient. And the sketches are actually a very efficient way of representing a document, because it's 200 64-bit numbers that can be properly indexed and where I can easily compute the permutation. So again, to recapitulate: I take a document, I shingle the document, I hash the shingles, I permute the hashed shingles, and basically break it down to the minimum. This is where the actual reduction is. So far I have really taken the document and blown it up: into shingles, which is kind of very much overlapping; shingles to hashes makes it numerical, but still nothing saved; hashes to permutations gives me 200-fold the thing; and then I kind of do the cut and say, okay, I just want the minimum, and I can argue for the minimum being all I need. Good, this leads me to the sketch of the document, and I do the same for all the other documents, end up with a number of sketches, and then I will just put the sketches on top of each other, see where they coincide, see where they are different, and then I get the number of places where the minima are equal, which can be between 0 and 200, divided by 200 gives me a number between 1 and 0, which can then be compared against the threshold: everything above 0.9 is a near duplicate. Clever technique, isn't it? Maybe. Anyway, so now back to the initial problem. What we have is a large collection of documents and the sketches, and now we have a new
document. The near duplicates of this new document can be found by computing the sketch of the new document and comparing it to the sketches of all the documents that I've seen before, and of course this is much faster than looking at the individual shingles or trying to fiddle around with words, and how many words, and what is the overlap in terms of the text, and blah blah blah. This is much superior to all the string-based techniques and gives quite good results. Still, if you have crawled just a billion documents and you have their sketches, and in comes a new document, then you have to compare it to the billion sketches and compute the Jaccard coefficient for all of them. That seems rather bad, this doesn't seem to be too impressive. Again we can use a trick, because for every indexed document and the sketch of the document, so for every part of the sketch of the document, we can create a pair of the minimum and the document ID. So if you have N documents, you basically get 200 such pairs per document, because you have 200 permutations, 200 minima in your sketch, and that for every document. But now we can order it by the minima, and just for every new document look up whether this minimum has been reached by some document ID. So, all the document IDs, this basically could also be done by an inverted index, just for every minimum recording in which documents it occurs, and then I just ask for all the different minima that occur in the new document's sketch, and for every document ID that is there I just add a counter, and if I find one document that has a 90% overlap, that is, what is it, 180, so if one document occurs more than 180 times, that's it. Okay, quite efficient, I can do that. Good, again: new document, sketch it, that means shingle it, hash it, permute it, blah blah blah, sketch it, now we use the B-tree to find the indexed documents whose sketch contains at least one of the minima, so it has a minimum overlap, and then we look at the set of documents such that the first minimum is in the sketch, and so on. Only those document sketches can have a nonzero overlap with the sketch of D; if they have at least one minimum in common, then the Jaccard coefficient is bigger than zero, and I just need to check between those, that makes it more efficient. Okay, well, there's also an extension: if you consider that two documents are near duplicates if their sketches match in at least m places, then we can restrict the search to all the documents that have at least m numbers in common. Again, I don't know whether the B-tree here is the best choice, or whether an inverted file index could be used, or a hash table, there are definitely several possibilities of what you could use for performing the check, but the idea is really: order the thing invertedly by the minimum, and then for a new document look, for all the minima, which document IDs have the same minimum, and then count the documents with respect to the places where they overlap, and then if you are above, say, 180, that's it. Okay, good. Well, the last thing I want to do today is spend a little bit of time on focused crawling. So let's assume we own a web search engine that focuses on a specific topic, you know, like I just want news items, or I just want sport events, or I just want tropical fishes or whatever, I'm the tropical fish search engine and every diver wants that or loves that. Do I need to crawl the entire web? Probably not, and the idea is that some focused crawling would be enough. So if I know, okay, there are a couple of pages that deal with
tropical fish, or with sport events, or whatever it may be, why don't we crawl in their surroundings, why don't we only take their outlinks, which then may at some point lead to, I mean, also the tropical fish web page may transfer me to Google and from there I can go everywhere, you know, but chances are that if I hit one of the pages about tropical fishes I will enter a small world, as they call it very often, you know, so there are other fish lovers and they interlink each other, and crawling this part of the web seems far more promising for having good results on my search engine than basically crawling the entire web and throwing out everything that is not fishy, tropical fishy, that is. So what do we do? We train a classifier that is able to detect whether some web page is about the relevant topic that I'm interested in, whatever classifier, a support vector machine, you name it. Then we take a number of seed pages for our crawlers that are totally on topic and follow the outlinks of these on-topic pages, and then for each page that we land on we can decide whether this is still concerned with our tropical fishes or sports events or whatever it may be, and if so, we again follow the links on this page, and if not, we just cut it out. That's the basic idea. You could also extend that and use clever probability models, like we've just seen, for finding out whether some link probably points to something that is interesting or not, you know, and then crawl in some ranking of the pages that are more fishy or less fishy or whatever, you know, you can do a lot of things with that. But the basic idea is: just take a couple of pages that are definitely on the topic and use them as seed pages, explore their surroundings, and you will have a lot of information about your topic and avoid a lot of unnecessary crawls of irrelevant sites; a crawler sketch built on this idea follows below. If you do it, and if you compare it to unfocused crawling, and look at the URLs that you fetch and the relevance of the topics in terms of the harvest rate, so you kind of try not to start from interesting sites but kind of just follow all the links, you will find that the more URLs you fetch, the more the average relevance will go down. So you start very focused and you have a good average relevance, and then you follow the first link and it leads you somewhere else, and that leads you somewhere else, you know, and at some point you're at Google and you're basically everywhere. So the way that you crawl will not lead to high relevance harvest rates. If you use a focused crawling approach and crawl the URLs, we can see that the relevance is, well, it's jumping a little bit up and down, you know, more or less relevant, but staying basically on a higher level for the first couple of URLs, which would basically be here in the other thing, you know. So focused crawling really helps you getting a good harvest rate and getting the most relevant pages first, which in turn helps you in keeping up your good result sets, which is very important for focused crawling engines, because usually they are specialized, they have in general a smaller community. So who's interested in tropical fishes? I mean, a couple of divers or whatever, people having aquariums or whatever, you know, a very small amount of people, but still you have to maintain your search engine, you have to pay for it, you have to pay for the traffic, you have to pay for the crawling.
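As a small illustration of the crawling loop just described, here is a minimal sketch of a focused crawler in Python. It assumes three helpers that are not specified in the lecture and are purely hypothetical placeholders: a trained classifier wrapped as is_on_topic, an HTTP client wrapped as fetch, and an HTML link extractor extract_links.

```python
from collections import deque

def focused_crawl(seed_uris, is_on_topic, fetch, extract_links, limit=10_000):
    """Sketch of a focused crawler: only the outlinks of pages the
    classifier accepts as on-topic are added to the frontier."""
    frontier = deque(seed_uris)          # URIs still to be visited
    seen = set(seed_uris)                # simple duplicate-URI check
    collected = []                       # on-topic pages found so far
    while frontier and len(collected) < limit:
        uri = frontier.popleft()
        page = fetch(uri)                # may return None on errors
        if page is None or not is_on_topic(page):
            continue                     # off-topic: do not expand its links
        collected.append(uri)
        for link in extract_links(page):
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return collected
```

In a real crawler the frontier would of course also respect politeness, robots.txt and the duplicate-content checks discussed earlier; the sketch only shows how the classifier gates which outlinks are explored.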
Getting a high quality harvest rate is much more sensible than crawling the whole web and then saying, well, yeah, out of order because I can't afford it anymore, you know. That's basically the idea. And our last detour for today will be how to make websites crawler friendly, so you can help Google and Yahoo and Ask and whomever to do what they have to do. Yeah, we've already talked way too much, so I will do this here very briefly. So, to help crawlers, what can you do? First of all, use a robots.txt to exclude everything you don't want to have indexed. Use a static site map containing all pages in your site, listed in a good way such that the crawler can easily see it; services such as Google already offer a standard for this. Use good HTML, which makes it much easier for the web crawler's site parser. Avoid dynamic and scripted content wherever you can, so use it only when it's really necessary. Provide freshness and caching information in your HTTP headers, for example write there when the page has been updated the last time, so that crawlers don't have to fetch the whole page but only the head piece of it that only states when the last update has been made for the page. Send correct HTTP status codes; this is particularly relevant for redirects. So some people still make redirects in the browser, and this can be difficult for crawlers to detect, so there is a status code in HTTP for redirects, use it. Use the correct MIME types and content encodings for your documents; again this will help crawlers to process your content in the right way, so that all umlauts and all the other stuff are indexed correctly. Use canonical host names, which basically means do not provide the same content using different URLs. So there are some websites that are available with exactly the same content using different host names; don't do this, this will be tagged as duplicate content and will cause trouble. Avoid spider traps, like the session IDs some people might generate within their URLs; these are almost always a problem for crawlers, try to avoid them. And if you have some time, try to annotate the images in your page with some textual description; this will help image search tasks and also might help users who cannot deal with images easily, but it will definitely also help crawlers indexing your content. All right, that's it. Next week we will talk about how we can exploit the link structure for better results in web search, and these are the HITS and the PageRank algorithms, this one used by Google, very famous. Well, that's it for today, thank you very much for listening and have a good lunch.
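To make the recommendations about status codes and freshness headers a bit more concrete, here is a minimal, hypothetical sketch in Python of a server that sends a real HTTP redirect status code instead of a browser-side redirect and announces a Last-Modified header, so a crawler can skip unchanged pages. The paths and the handler are made up for illustration only.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from email.utils import formatdate

class CrawlerFriendlyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/old-page":
            # a real 301 redirect instead of a JavaScript or meta-refresh redirect
            self.send_response(301)
            self.send_header("Location", "/new-page")
            self.end_headers()
        else:
            body = b"<html><body><p>Hello, crawler.</p></body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            # freshness information: crawlers can issue conditional requests
            self.send_header("Last-Modified", formatdate(usegmt=True))
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), CrawlerFriendlyHandler).serve_forever()
```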
This lecture provides an introduction to the fields of information retrieval and web search. We will discuss how relevant information can be found in very large and mostly unstructured data collections; this is particularly interesting in cases where users cannot provide a clear formulation of their current information need. Web search engines like Google are a typical application of the techniques covered by this course.
10.5446/354 (DOI)
All right, here we are. So thank you for your patience. I've been in an appointment that took a bit longer than I expected, and so has Professor Balke, so he will be here very soon. But yeah, welcome to the 12th lecture on Information Retrieval and Web Search Engines. Today we're going to talk about PageRank and the HITS algorithm, two very famous methods to exploit the link structure present in web graphs, so you can find what pages are really important and what are not. So before we come to that, as always, let's discuss the homework. So some easy exercises today. So what is a web crawler and what's its basic mode of operation? Yeah? Uh-huh. Yeah, that's exactly right. Just crawl through the web, collect everything you can find, analyze it, and give it to the indexer. Any additions? So that's basically what a crawler is. So what features does a good web crawler have, or should a good web crawler have? Maybe one of you guys. OK, so why do we need robustness? There are standards for HTTP and for how HTML pages should look. So why should we care about robustness? They are well-defined protocols that are used on the web. Yeah, exactly. So most people absolutely do not care about anything Tim Berners-Lee says. They're just coding the HTML the way they want, implementing their web service the way they want, or configuring it the way they want, and nobody cares about whether it's obeying some standard or not. So the basic rule for web crawlers: be prepared for anything. So what are additional features that should be present in a web crawler? So the web is quite large. (Student answer, partly inaudible.) Yeah, so performance is a major issue, and scalability and distribution, because we had this kind of calculation last week: even if you want to fetch every web page only once a year, we need to fetch 2000 pages a second, and that calls for a very reliable and distributed and parallel and scalable architecture, which is not a trivial task. So coding some web crawler that works quite nicely, that's easy, but doing this on a large scale, with a good integration with the index and other components, is very, very difficult. So if you have any innovative ideas, Google will be happy to hear about that. All right, these are web crawlers. Okay, why do web crawlers need to check for duplicate URIs and how do they do it? Hmm. Yeah, why should you hash them separately? It only makes things more complicated, doesn't it? Yeah. Yeah, the problem is data locality, and having good data locality means better performance. Yeah, we can distribute the B-tree in a better way. Well, that's mostly the most important part here. Crawlers also need to check for duplicate content. Why do they need to check for this, and how does shingling work, and how does it help here? Okay, first question: why do crawlers need to check for duplicate content at all? Yeah, sometimes people just make the same content accessible via different URLs, and you have no way to determine that using just the URL, or normalized URL, or whatever. So you definitely need to check the content. All right, so this is really important. And yeah, shingling can be used to do this, but how does it work and what problem does it solve? Just go on, just give it a try. Try it, try it. Just the basic idea, I don't need all the nasty mathematical details. So this is a very important thing for the oral exam. Be prepared to answer these kinds of questions very, very quickly.
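Before going on to the duplicate-content recap, here is a small sketch of the two-part URI fingerprint behind the data-locality argument just mentioned: the host name and the rest of the URI are hashed separately and concatenated, so URIs of the same site share a prefix and end up in the same part of a B-tree. The choice of MD5 and the 8+8 byte split are assumptions made only for this illustration.

```python
import hashlib
from urllib.parse import urlsplit

def uri_fingerprint(uri: str) -> bytes:
    """16-byte fingerprint: hash of the host name, then hash of the rest."""
    parts = urlsplit(uri)
    host = parts.netloc.lower().encode("utf-8")
    rest = (parts.path + ("?" + parts.query if parts.query else "")).encode("utf-8")
    return hashlib.md5(host).digest()[:8] + hashlib.md5(rest).digest()[:8]

# pages of the same site share the first 8 bytes, so they sort next to
# each other when the fingerprints are used as B-tree keys
print(uri_fingerprint("http://en.wikipedia.org/wiki/New_South_Wales").hex())
print(uri_fingerprint("http://en.wikipedia.org/wiki/Northern_Territory").hex())
```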
All right, so in the naive way, you would extract the shingles from each of your documents, these four grams or whatever you said. And then you try to compute the overlap between these shingles sets for two documents using the jacar coefficient. And this is the similarity of content. So obviously, if you have a new page, you cannot compare to all existing pages in your index and you go, then you would go really mad at doing thousands of comparisons each time a new page comes in. And so you do shingling. And here you can simply represent each document by its sketch. So it's derived by hashing the shingles and taking random permutations of it, maybe 100 or 200. And for each permutation, then you compute the minimum value of each hash set. And the sketch is simply composed of all this minima and one can show that two documents are similar if they share the same minima. So at least in a very probable way. So if you have 200 sketches of lengths, 200, then chances are very good that you can approximate the true jacar coefficient by just comparing the minima. And you can do this in an efficient way by building a B3 and storing all the sketch values in it and then computing the similarity of a new document or all existing documents in a very, very effective way or efficient way better. All right, last topic was focus crawling. So what is it and what do you need it for and how does it work? Basically it's a process of doing this and the process of doing the following and then the process of doing the following and then the process of doing the following. So that's basically it. Very simple idea, only follow those links that seem to be most promising or most relevant to the topic. You want to index in your focus crawl and just train some classifier. We had a lot of them and then just be happy. We have some nice curves seen the last week that really does work. So very interesting technique if you just want to collect all documents relevant to some topic that's interesting to you. All right, so that was the homework for today and now we're going on with retrieval algorithms and the connection to the web link structure. So what we want to do today is kind of use the structure of the web because what we did in information retrieval was usually using the words, using the text, the content of a page or of a document and indexing it somehow. But with the web we have a very big difference with respect to the collections of documents and that is the linkage between documents, hyperlinks. And this is what we want to look at today. So first a short brief glance at the link structure of the web and then I want to introduce two algorithms that are much renowned, especially the page rank algorithm, which is one of the main features of Google and actually made Google what it is today. So it's a very important, very influential algorithm and of course I would like you to understand it and I will explain it. So links between documents are in the sense of the web a little bit broader than you would say for example one paper like scientific paper cites another paper or something. But it's rather a network of social interactions between the people behind the web servers, the people creating the documents. Because if you see something like the academic life, you might have papers, this is typical document that you would find and then you have kind of authors of the paper and they co-author something, they work together on something. 
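Before continuing with link structure, here is a compact, hypothetical sketch of the shingling and MinHash estimate recapped above. Instead of true random permutations of the 64-bit space it uses independently seeded hash functions, which is only a stand-in for the construction on the slides; the parameters (k = 4, 200 permutations) follow the lecture.

```python
import hashlib

NUM_PERMS = 200   # number of (pseudo-)random permutations per sketch

def shingles(text, k=4):
    """Set of k-shingles: all sequences of k consecutive terms."""
    terms = text.split()
    return {tuple(terms[i:i + k]) for i in range(max(len(terms) - k + 1, 1))}

def jaccard(s1, s2):
    """Exact Jaccard coefficient: |intersection| / |union|."""
    return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 1.0

def sketch(shingle_set):
    """MinHash sketch: minimum hash value under each seeded hash function."""
    return [
        min(int.from_bytes(hashlib.md5(str(i).encode() + " ".join(s).encode())
                           .digest()[:8], "big")
            for s in shingle_set)
        for i in range(NUM_PERMS)
    ]

def estimated_jaccard(sk1, sk2):
    """Fraction of sketch positions where the minima agree."""
    return sum(a == b for a, b in zip(sk1, sk2)) / len(sk1)

d1 = "tropical fish are popular pets for the home aquarium"
d2 = "tropical fish are popular pets for the office aquarium"
s1, s2 = shingles(d1), shingles(d2)
print(jaccard(s1, s2))                            # exact overlap of the shingle sets
print(estimated_jaccard(sketch(s1), sketch(s2)))  # estimate from the sketches
```

A pair of documents would then be flagged as near duplicates if the estimate exceeds a threshold such as 0.9, that is, if the sketches agree in at least 180 of the 200 positions.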
And thus they cite each other and by authoring the same papers or common papers they are kind of connected. Could also be social interaction by for example in the movie domain directing and acting. So there are actors that have worked together, there are actors that have never worked together. In any way what is formed is kind of a network and you could say that the network structure, the idea that somebody co-authored in paper lets you induce, well then the topic that both authors are interested in must be the same. Because they worked on the same paper. Or the genre of movies that an actor usually does must be the same as the genre of his co-actors. If they start together in a film, some movie, then obviously they both cater for the same genre. Thus the idea is that I can detect or kind of see the genre or the topic of research groups or something by looking at with whom they are connected. If I know for example somebody did very many films with an action star, a typical action star, what does it tell me about the person? Well he seems to work in action movies quite often. This is the way of thinking, this is the way of deriving information that is totally different in the web where everything can be interlinked and where kind of like a link to some other side is a reference, is a vote for this side. Also in social networks. So this basically applies to all areas of applications. You can have musicians, you can have soccer stars, you can have friends and families, you can have relations between different countries that go by trading contracts or whatever it is. You know as soon as there is such a relation, the strength of the relation may be different, but the topic that the relation is about may be interesting. Same goes for people making phone calls, people transmitting infections, it's all networks basically. It's all the networks. Scientific papers, very typical like Skyline queries have been studied in blah, later somebody here introduced the query type and blah blah blah. Siting this means what does this guy work on? Well obviously Skyline queries. At least he did one paper about it. Doesn't mean it's main topic or something, but the more citations we collect, the more secure we become in predicting his favorite topic. This is kind of the idea that is also in the web. So pages side each other and each citation, each hyperlink is a vote of confidence. A vote of confidence that the contents of the linked page is adequate, is a sign of quality, so you believe what's given on the page. And it has something topically to do with what you were talking about, because otherwise why should you link to this site? And there may be some sites that are well linked, for example Google, everybody loves Google, so there might be a link to Google on many sites. But usually it has some topical indications. For example if you look at the ETHIS website, at the institute's website you will definitely find a link to the Institute of Computer Science, Department of Computer Science here, and the University of the Technical University of Braunschweig as such, because they are topically connected in a way. And to model this or to exploit this, you first need to formalize it, and formalizing it is quite easy, isn't it? Because you have resources, be it soccer stars or websites or people carrying infections or family members or authors in the academic sense or whatever. 
And these are the nodes of a graph, and as soon as there is a hyperlink, a citation, somebody infected, whatever it may be, a collaboration in any way, you add a link between them. Okay? And the more links there are in the whole network, the merrier the thing is, you know. You get a graph consisting of edges and vertices, nodes, and that is something that we can handle quite efficiently in mathematics. Because we can represent this graph by so-called adjacency matrix, where we just say, okay, if a link goes from node one to node two, we will just say there is an adjacency, so these two nodes are adjacent. And if there is no link from one to node three, we will just say, no, there is no adjacency between these nodes. And the matrix is, so the graph is directed, so you link to something or you cite somebody. And so it's not really a symmetrical matrix, but of course there may be backlinks in which case we would have the symmetrical part, but usually it's just the, well, in general it's an asymmetric matrix. And now the social analysis of such networks has a lot of classical questions as what are central points or focus points of the networks. Which authors are highly cited, for example, which stars work together with many different people. And this is always a sign of high prestige or high status. If somebody is cited very often and by different people, that may be an indication that his work is of very high quality, and maybe in different areas, because many people cite it all, it's a very specific, large area. You can also say, talk about connectedness. Are there points in your graph that are well connected? Are there points that are isolated? Is your graph coherent in some way? Is it connected? Or does it consist of several subgraphs that are isolated with respect to each other? Typical things that you can find out by network analysis. And also, are there some central points? Are there some points that connect networks, so that act as mediators between different parts of the network, and which you cannot simply remove and then the network breaks apart? And this would be an example for such a point, so if you remove it, you stay with two subgraphs. They have nothing in common, just because you removed one connecting point. And the structure of these graphs is very interesting and has spawned a lot of research, actually. What we want to do is look at the prestige of pages for the sake of saying, well, if something is highly cited, if something is highly linked to, the quality and usefulness of this site seems to be high. High, at least, is something that is not cited so often. That's the first approximation. That seems valid, doesn't it? So, I mean, everybody can put up a website and put it on a web server and then people can see it. But if other people, site this site, actually do take the effort of adding a link on their site to the other site, then this is kind of a vote of trust, a sign of confidence. You seem to be happy with the quality of what is written there, or you seem to find it interesting, or you seem to find it valid. And so the content on the site seems to be better in some way than some totally unconnected site that nobody ever links to. Of course, it's a heuristic. It might be the reason that the new site has just become available and is perfect in content. It's mostly wonderful, you know, but how can you know? More often than not, the new page that has been put up is crap. 
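As a tiny, made-up example of the adjacency matrix just introduced (the edge list below is invented for illustration, it is not the graph on the slides), the in-degree that the next step uses as a first prestige indicator is simply a column sum:

```python
import numpy as np

# A[i, j] = 1 means node i links to node j (directed, so not symmetric)
edges = [(0, 1), (1, 0), (1, 2), (3, 2), (4, 2), (4, 3)]
n = 5
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1

in_degree = A.sum(axis=0)    # how many links point to each node
out_degree = A.sum(axis=1)   # how many links leave each node
print(in_degree)   # [1 1 3 1 0]
print(out_degree)  # [1 2 0 1 2]
```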
So, what you do is basically you say, well, let's, as a first indicator of prestige, use the indigree of a note. That is, any note that is pointed to very often seems to be an important note. Seems to be a high quality note, because all these guys over here take the effort of reading this page and adding a link to it. That seems to be a sign of prestige. The indigree just means that you count how many pointers you have to a site. At this stage, it doesn't matter kind of how many links point out, but let's just see the incoming links. And if we count them, we can get a quality criterion of a site, of prestige. And it actually has been quite some while when the web was not even invented yet, that people investigating such network structures in the area of sociology became aware, well, the indigree actually is not a very good indicator, but it has something to do with handing over trust, yes, but it has also something to do with the quality of the people or the sites or whatever it may be that give you a vote. So if there are a thousand people that vote for one page and a thousand people that vote for a different page, how would you decide which one of them is more important? Or are there of equal importance? Any ideas? There could be of equal importance, but what if the thousand people pointing to one site are all university professors that must be believed and the other parts, the other thousand people are persons, just your usual street persons that have no connection with the topic whatsoever. What would you say then? So of course, as everybody does, believe the university professors, because they know right from wrong, obviously. And this is what a sociologist very early on realized really. It has a certain recursive nature. So if I have a large prestige, my vote counts more than somebody who's just off the street and who has no credibility. So the higher my street cred, the more important my vote. And this is of course recursive, because if I have high street cred, I must have a high indegree. If I link to some page, I do not only increase the indegree of this page, but I transfer some of my credibility to this page. And the higher my credibility, the higher the credibility I transfer to this page. The more important it becomes. And this is kind of recursive, obviously. So in sociology, the prestige was kind of modeled as follows. You say, well, you have a node in the graph, and you have a prestige of that node. This is just some positive number. And the higher the number, the more prestige has a node, and the lower the number, the less prestige has a node. And then you go and represent the prestige score as a vector, having exactly one entry for each node. And the prestige of each node should be proportional to the sum of the prestige of the nodes linking to this node. So the more people link to the node, the higher this prestige will be. The better, the more prestigious the people are linking to the node, the higher the prestige will be. And usually this is just a weighted sum. And what you can say is, well, you can be a page that is not very often linked to, but if your people linking to you are of very high quality, you can nevertheless beat a system and prestige that more people link to. Because I would rather trust the opinion of one university professor than the opinion of thousands of people on the street. Or something like that. Okay, so this is the basic idea. And of course, if you consider the adjacency matrix that we have, the indegree of each node can be determined. 
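Written out, the prestige condition just described is a fixed-point equation; with the adjacency matrix A (where A_{ij} = 1 if node i links to node j) and a proportionality factor alpha it reads, in notation used here as an assumption,

$$ p_j \;=\; \alpha \sum_{i} A_{ij}\, p_i \qquad\Longleftrightarrow\qquad p \;=\; \alpha\, A^{\mathsf{T}} p , $$

so the prestige vector p is an eigenvector of the transposed adjacency matrix and 1/alpha the corresponding eigenvalue; the iteration sketched below converges to exactly this fixed point.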
But if you want the prestige of the nodes pointing to it, you have to consider the indegree of the nodes pointing to them. Okay? And to determine the degree of the pointing node, you have to have the indegrees of the nodes pointing to the nodes pointing to the page. And so on and so on. So this is kind of recursive, and what you do is basically, you have the adjacency matrix and you do a fixed point computation. So you start multiplying with the adjacency matrix such that the condition holds, the prestige is some factor times the prestige, and if that doesn't change any more, then a fixed point is reached. Okay? So this is kind of the idea behind it. And we will see in a short while that this can actually be done by typical linear algebra. And we will see what the characteristics of the matrix are, or what the characteristics of the matrix need to be, such that we can arrive at such a fixed point condition. So what we want is, we want to consider the prestige of the sites pointing to a page, and the prestige of the sites pointing to those, and at some point it has to end. I mean, we can't run around in circles. Good. So with the simple model of prestige, we have the adjacency matrix of this graph and then just take the transposed matrix. So this is just the transposed matrix. Everything has been flipped. Okay? So the columns and the lines, the columns and the rows have been exchanged. And we might start with the prestige of the sites that are kind of connected to the nodes here. So this has a prestige of 0.65, and this has a prestige of 0.65, and this has a prestige of zero. Nobody likes this site. And this has a prestige of zero, so nobody likes this site, nobody points to this site. And this has 0.4; somebody points to this, but just one. So this is kind of the idea where we say, okay, it has something to do with two pointing to this site, only one pointing to this site, none pointing to this site, and it is reflected somehow by the weights. And taking just the indegree is a good first approximation of what we want to have in terms of prestige. And what we can now say is, well, a link coming from this site down here should probably be worth less than a link coming from either of these nodes, because they are more prestigious. And similarly, a link coming from here should basically be worth nothing, because the site is not prestigious. Though this page has two inlinks, and also these pages up here only have two inlinks, we feel that the importance of these pages should be higher, because more important pages point to them. And if we take the prestige as such and say, well, then we will take the prestige a little bit higher here, we will add no prestige here, and a little bit lower here, to reflect that the second link of node four came from a very shoddy site, from a site that is not really trustworthy, because nobody points to this site, it has no prestige at all. And if we take the factor alpha as 0.62, we reach the fixed point. So everybody can try it, it works. Okay? This is the basic idea behind it. Another interesting idea about graphs is the notion of centrality. So how central is a node for the network? Is it on the outer areas of some network, or is it in the middle of the network? What can we do here?
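Before going further with centrality, here is a minimal sketch of the fixed-point computation just described, as a power iteration on the transposed adjacency matrix; it reuses the toy matrix A from the snippet above, and the rescaling in each step plays the role of the factor alpha.

```python
def prestige(A, iterations=100):
    """Power iteration for p ~ A^T p: each node's prestige is the
    sum of the prestige of the nodes linking to it, renormalized."""
    n = A.shape[0]
    p = np.ones(n) / n                  # start with uniform prestige
    for _ in range(iterations):
        p_next = A.T @ p
        norm = np.linalg.norm(p_next)
        if norm == 0:                   # graph without links
            return p_next
        p = p_next / norm
    return p

print(prestige(A).round(3))   # roughly [0.577 0.577 0.577 0. 0.] for the toy graph above
```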
And we can investigate networks not only with respect to kind of co-author networks and the web or whatever, but there is actually graph analysis, it's kind of a discipline in computer science and mathematics that has been well investigated and the people were interested in such notions because they could be used for different application areas. So what is the idea of a central website in the web? Just from the feeling. It seems to be very popular and many, many people point to it from different areas. So what would be a very central website, for example? Exactly. Search engine page will probably be very central to many things on portal page. Wikipedia might be a very central page because it covers a lot of topics, so a lot of different pages link to it. It has a very central part because it relinks a lot of pages. Exactly, they seem to exhibit a higher centrality than the ones in the outer part of the boat. So we need a couple of definitions here. We will talk about distance between nodes, which means two nodes show a certain distance. If there is a number of links between them, so if there is a path from one node to the other, and in the case that there are multiple paths, I will take the shortest one. So the distance between two nodes is the smallest path, so the path with the least number of hops between them. And then you can kind of consider, well, if the graph is connected, you can say you have a radius of a node given by how far is the furthest point in the network that you can reach from this point, shortest path. It's not going around in circles or something, but really taking the shortest path to the rim of the network. And then you can say, well, the center of a graph is the one that has the minimal radius. So for example, look at this graph over here. What is the distance between this node and this node? Okay, this node and this node, also one. I mean, you could go down here, but that wouldn't be the shortest path. You have to go up here. Okay? Good. What is the distance to the node here? Two, because there's no way of getting there with just one hop. And what is the radius of the node? The radius is, yeah, okay, the distance to the most distance node. If we look at it, we can reach this node in one, this node in one, this node in one, this node in one, this node in two. So the radius of the point is two. How about the point over here? What's the radius? Guesswork or would you settle on something? Two, three, four? Exactly. One, two, three, four. It's the only way to get to this node, because there's no direct path. There's nothing where you can take a shortcut. So the most distance node is really four. And thus the radius of this node over here is four. And the idea now is taking the one point or the set of point with the smallest radius. This is exactly the one that are very well connected and kind of in the center of the graph, in the core of the bow tie, if you want it like this. Good. If you look at a scientific citation graph and you say, well, basically, if one paper cites another, that counts as a link. So this paper over here cites this paper. So we add a link in the citation graph. Then the papers that have a small radius are likely to be very influential, because they are not only cited by a lot of different papers, but they are cited from papers from different areas. They can be reached from almost every area of the graph within a few hops. 
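Distance, radius and center as just defined can be computed with a plain breadth-first search; here is a small sketch on a made-up, undirected example graph:

```python
from collections import deque

def radius(adj, start):
    """Largest shortest-path distance from start to any reachable node."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

# adjacency list of a small connected, undirected example graph
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
radii = {u: radius(adj, u) for u in adj}
center = [u for u, r in radii.items() if r == min(radii.values())]
print(radii)    # {0: 3, 1: 3, 2: 2, 3: 2, 4: 3}
print(center)   # [2, 3]: the nodes with minimal radius form the center
```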
And if you don't take the citation graph, but the collaboration graph, so people who worked together, who actually co-authored papers, this is something that is done in mathematics very, very often. It's called the Erdős number. Erdős was a mathematician, a Hungarian mathematician, who was amazingly influential. So he worked on a lot of topics, and he worked with a lot of people. And to find out how fringy you are, or these days how young you are, the Erdős number is kind of the distance of any node in the network to the famous mathematician Paul Erdős. And the idea is basically that if you co-authored a paper with somebody who co-authored a paper with Paul Erdős, then you have an Erdős number of two. This is your distance. If you co-authored a paper with somebody who co-authored a paper with somebody who co-authored a paper with Paul Erdős, you have a number of three. And I think the lowest Erdős number that you can get these days is kind of two or three, I think, something like that. So you cannot have an Erdős number of one anymore, because Erdős is dead, so you simply cannot co-author a paper with him. But some of his collaborators are still around. So if you want a small Erdős number, you should co-author a paper with one of these quite soon. And then you can count your Erdős number. There are some other options to define centrality. For example, you can look at cuts, which basically means that you just go through the graph and try to find out what is the number of links that connect two parts of the graph. And you will find that, well, to disconnect these two graphs, I would have to cut through a lot of links. But if I want to disconnect these two components, cutting would be very easy, because I just have to sever a single link, or I just have to take out this vertex, and the graph would fall apart. This can also be a notion of centrality. You say that the existence of some node is central for a connected graph, because if you remove the node, the connection breaks. This is kind of the idea. And it has been very often applied to epidemics, or espionage, or something like that on telephone networks. So who are the central points? Who are the focus points that kind of are spreading the disease? It's kind of interesting to see how it happens. Another important measure is cocitation. If one document cites two other documents, then the two other documents seem to be in some connection. Of course, as always, it's a heuristic. There could be a paper saying, well, this paper here has definitely nothing to do with this paper over there. But that's not usual. The paper will usually say, OK, so we're looking at the problem of blah, and the foundation was laid in blah, and was worked on by blah also. And then you would have the connection between the cited documents. And if documents are cosited by many documents, there seems to be a strong topical relation. If they are cosited only by a few documents, you could have one of these cases saying, you know, like, oh man, these are two documents from totally different regions, and for some reason or the other, I will just mention them here. In terms of the adjacency matrix, it's again: you have the link from one document to some paper, that is, an edge in the citation graph, and the number of documents cositing two other documents is the entry in the adjacency matrix transposed times the adjacency matrix, and basically this is the idea of the number of documents citing both of the documents.
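In code, and again reusing the toy adjacency matrix A from the earlier snippet (where A[i, j] = 1 means document i cites document j), that cocitation count is just the matrix product described:

```python
# entry (j, k) of A^T A = number of documents that cite both j and k
cocitation = A.T @ A
np.fill_diagonal(cocitation, 0)   # the diagonal would just count the documents citing j
print(cocitation)
```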
The higher this number, the closer the cosited documents are in the graph. The entry of this adjacency matrix corresponding to the connection between the documents is the so-called cositation index, and it shows how related or not related they are in terms of topics. And you can do a lot of things with that, so it has been proposed to use it for clustering, for finding topics like we did with the latent semantic indexing and stuff like that. And it's called multi-dimensional scaling here. It's actually very similar to the singular value decomposition that we did for the LSI. It's kind of very similar. The basic idea is that you have the similarity matrix, and with this similarity matrix, you embed the different documents in a Euclidean space. This is the idea of multi-dimensional scaling. You add some dimensions and you scale the relation between the documents, so they fit into the Euclidean axis. And if you then look at the clusters that result from this cositation, you will find that those documents are clustered but really have topically something to do with each other. They really cover the same topic. And if you do it, for example, for journal articles, based on the cositation here, a couple of others took a million of journal articles published in 2000, every point represents a journal, and you see how the different papers cosite each other. You will find that there are topical areas. So, for example, here's the social sciences with sociology, law, polysciences, history is over here, geography, communications, so it's all very similar and the distance is much smaller than the distance, for example, to zoology, plants, biochemistry, which again are very closely related to each other but are further away from the social sciences as they are to each other. So this is the idea of multi-dimensional scaling. You embed it into the Euclidean space somehow and by decomposition of the matrix, and then you can see looking at the clusters or if you take a closer look at the clusters, you can find commonalities between the papers within them and the distance between clusters actually means something. I cannot say that the difference between history and economy is kind of the same as the distance between computer science and robotics just because it's kind of the same distance in a way. But there's a tendency that the larger the distance, the more disconnected the areas become in terms of topic. Good. So let's talk about the web again. We were considering the document types used in classical IR and of course co-citation and stuff like that applies to these documents. But what we used was basically the idea, not so much of co-citation and similar issues, but rather of what words to occur in the document, how similar our documents were suspect to each other, what topics are covered by a document. So more or less intrinsic values of the document. And we considered each document to be a self-contained unit and described it basically by its own representation. So if you think of the vector space model, what we did was basically, we opened up an axis for every term and then located the document in the term space depending on how much weight was put on a term. So how often it was used, how important it is for the collection, a lot of measures that you can use, TFI-DF and whatnot. But we kind of represented every document by the words occurring. We did consider documents as being similar and their distance in this space was small. 
And that means they basically had the same terms, used the same terms or similar terms, synonyms for example. Now with the web we have something more. Well, we do have the describing content of the websites obviously. And we did this with shingling. So we could consider the similarity of objects and now we could say, well, then basically it's a good idea to return all the similar pages based on what we did with the content check. But we also have this notion of somebody took the effort to create a link on this document. And the page creating the link is more or less trustworthy or better connected or not connected at all. And this brings us back to our network analysis ideas. So why don't we apply the ideas from network analysis to the web graph and see the link as a recommendation of something that makes a website more important. And the more links there are, the higher ranked the page becomes. And of course I have to mix it with the content of the page because I cannot just say, well, it doesn't really matter what your query is. I will always return the most popular site ever. Well, yes, it's very popular but it doesn't have anything to do with my query. Bad idea. So you have to consider the content of the page nevertheless. It has to be a mixture between both of them. And what is very often done is using the anchor text as a description because you point to something by saying what it is usually and then you click on it and you get referred to the site. You use the anchor text or you use the surrounding text of the link and find out what the link seems to be about. Good. So the first assumption that we're making is hyperlink is a signal of quality or popular interest. And it's like a recommendation. And what you do is kind of you say, well, what is this link about? Well, obviously it's about an advertisement of the Budweiser company that was prepared for the Super Bowl. So big football event in the US. And just looking at this anchor text tells me actually a lot about the site. That it points to. I mean, it may also be the case that it just says click here. In which case the here is the actual link and doesn't tell me anything about the content of the page that is linked for. So usually can do kind of take the surrounding text of the link also into account, make it a little bit bigger. The assumption too is kind of the anchor text of the link or the surrounding text describes the target space in sufficient detail. So if I have a link on IBM and then see, well, IBM manufactures and sells computer service, hardware and software also provides findings. So as a support of computer business, blah, blah, blah, that describes it. That sums it up very nicely. If I take this text to index the page that is linked to, that is kind of the interesting thing. And this here, the link is from Yahoo directory. So the link to something. And if we actually look at IBM's homepage, the word computer is not mentioned very often. So it doesn't seem to be very interesting for IBM to have computers because they have smart energy and they have the dryer that shrinks things like socks and whatever. No, it shrinks the utility bill. So it's all about green IT obviously. But since they expect that anybody looking at the IBM webpage know that it's about IT, that it's about computer, they can afford to not even mention that it's green IT, but they can talk about smart energy and the next thing you're dryer shrinks could be your utility bill. So this is kind of the idea. 
Sometimes the anchor text tells us actually more than the actual page. These are the two assumptions that we want to make for this today's lecture. So we will say, well, basically we will analyze the link and we know the assumptions are just heuristics. They don't have always to be true. Click here doesn't tell us anything about the page. Or I might link to some page because it's a very bad example of something. This is a total spam page, PIM, and you vote for it. What happened? It doesn't happen very often. Usually if you take the effort to introduce a link, you want to say something. So we have to be on the safe side, we have to be aware that this is heuristics, but heuristics do help a lot in common. And this has also been shown. And especially this first assumption, links are kind of quality signals or links are recommendations of the page. And it has spawned two algorithms, and especially the first one, the page rank algorithm, has founded or laid the foundation for an empire that has probably been beyond the wildest dreams of the people creating it. Because it was these two guys over here, and they are smiling so much because they were grad students at Stanford when they worked about network analysis and kind of came up with this page rank algorithm. They used it for their new web search engine, they kind of considered this thing, and that was Google. And at this point, I mean the market was clear cut. There was Alta Vista probably being the biggest one together with Yahoo, and HotBot, Lycus, and many search engines that you don't even know the names of anymore. And in the midst of it all was Terry Vynograd, who was one of the big founders of web search engines at that time. But then these two guys came with the link analysis and said, well, it's not just about the content of the page, it's also about the link structure of the web. And if we mix it somehow, if we blend it and introduce a ranking based on both, then the total quality of our web search will be better, and that proved actually true. So Google was immediately kind of accepted by the people. It was built in, or it was started basically in 1969 by Larry Page and Serge Brin. And the idea is to have query independent measures of prestige for each web resource. And thus, if you do a web search based on the query, you can also take into account this page rank, the quality of the page for the final ranking function. And the second algorithm that was basically developed around the same time at IBM Almaden by John Kleinberg, who's actually a physicist, I think, and kind of made its way into computer science and contributed quite a lot. I mean, it's really amazing to see his record of what he invented from peer-to-peer networks, and the navigability in peer-to-peer networks up to the HITS algorithm. And his idea was basically that there are two types of pages. Every web research can be a hub or an authority. So hubs are typical things like Google, portals, pointing to things. And authorities are the pages actually delivering the information. And this is far more inspired by the social networks than the page rank is. Because, I don't know, in your friend list, you might have some people that know everybody and the world. And if you have a problem that is slightly unspecific, you know, and I need somebody who knows about them, and you don't know the exact person, you will ask your well-connected friend, do you know somebody who deals with them? 
On the other hand, if you know somebody who is a perfect authority on your topic that you're interested in, you will directly deal with the authority. So in this sense, every site could be kind of a hub score and an authority score. The authorities are relevant in terms of the content. The hubs are basically relevant in collecting links to authorities. So they're portals in a way. And this gives me a short break because we go into the history of web search in our first detail. Last week we talked about the history of the web itself. Founded by Tim Berners-Lee at CERN, doing research in Geneva on nuclear physics. And now we come to web search, how to find information in the web. So the situation before 93 was quite simple. Well, actually, the web wasn't very large, though there was no need to have any search engines. And it was growing at the time a little bit. Tim Berners-Lee himself maintained a small list of web servers. So he created some kind of HTML page. And yeah, back at the time, this was how the list looked like. So just a list of institutions having some web servers set up. And yeah, this page is actually available. So here you see the note. This page is here for historical interest. Only the content hasn't been updated since late 92. So around this time he realized that, yeah, people need a more scalable way of finding the information in the web. The same was realized by the labor service in Germany. So we had a short story about that some weeks ago. So the idea of labor back then, I was to link everything online, a large map of Germany, where you can see which universities mostly provide web servers. And you can click on it and then you got the web server of lower Saxony. And that basically was it. So they soon realized that there were too many web servers to draw them onto a single map. And around this time, the first search engines came up. So this was the first phase of web search where basically these classical engines, like as Alta Vista excites, told me hotpot as chief and some more, basically implemented information retrieval algorithms that have been published some years ago, mainly in the 70s and 80s, and designed some web crawlers and basically collected information from the web, stored it in a big index and just run information retrieval technologies on it. So worked rather well and the focus of research was on how to scale these techniques to large document collections, as you can find them on the web and how to update the collections frequently, so how to build good crawlers and all this stuff. So search quality was simply what it was from my R times and nobody really cared about that. So it was mainly a problem of collecting all the information you needed. So in 1996, we already heard that, Searchbrinn and Larry Page worked at Stanford as bachelor students and they worked on a project which aimed was to build a large scale search engines. So something like this, maybe in a more modern way, focus on scalability. And somehow during the bachelor thesis, they had this idea of using the link structure in the web for measuring prestige and finding out which pages are better than other pages, independent of the pages content. And so then they devised a ranking system that took both types of information into account, page content and link structure. So they found that their search engine has been quite successful. So they didn't pursue their master's degree, but decided to establish a new company. So that was the beginning of Google in 1998. 
So they started with some help from Stanford University and implemented all the stuff, a lot of work, very innovative at the time. And then people noticed that Google was a search engine that worked way better than all this stuff up here because they used link structure. And before all the other companies realized what Google's real secret was, most people in the US or even Europe used Google for their searches. These companies all went broke, basically out of business now, and Google, yeah, it's hard to beat, they're very innovative. And the key for that was their page rank idea, using link information for ranking their results. So today, the search engine business is a bit smaller. So maybe Microsoft and Yahoo are the big competitors here to Google, basically Microsoft. So it's a very hard business and Google is kind of the big player here. So since the introduction of page rank, the web search hasn't changed that much. So there has been some kind of evolution, making better user interfaces, integrating more information, refining the ranking formula, which is a big trade secret at Google, doing all this kind of stuff. But there hasn't been no big idea since then. So the question is what could be the next big thing in web search? Here are some ideas. So one thing could be clustering that really works. We already had some demos some weeks ago. Yeah, usually it doesn't work too well, but it would be really nice to have it. So if I enter a query that is in Beacon some way and the search engine would tell me, yeah, do you mean Apple the company or Apple the fruit or Apple the whatever? But yeah, it doesn't work right now. Maybe this could be interesting. Really natural language processing that you can enter your questions in a way, yeah, human people really ask questions and not using just keywords. And the search engine is then able to understand what you're really looking for. Could be helpful if your information has a high level of detail and you need to explain it in some way, but then you need to search engine to understand you. This could be a good way. Yeah, well, then there has been the big idea of the semantic web. Also proposed by Tim Berners-Lee where, yeah, enrich your web pages by all kind of structured information. And then the search engine is reasoning about the web. So for example, Apple is the fruit and all these kind of connections that are between entities in the web. And this all could be exploited by answering, for answering search queries. Yeah, but the set reality that it really doesn't work because it's too complicated the way it is designed. Maybe there are some ways to make it easier to make it better. Personalization could be a nice thing. For example, if Google knows that I'm looking for computer science stuff as long as I'm at work, then it would be good to focus my research, my search engine results into this direction. It is done in some way, actually, could be improved, maybe. There have been some efforts to create open source search engines. We have seen the Vikiya project that has been discontinued some years ago where the idea was, hey, let's build a community engine and everyone can tweak the ranking formula and with the wisdom of the crowd we can create the best search engine there is. So actually, also this didn't work. Meta search will come to this next week in detail. It's the idea of combining the results of different search engines and then to get a better result quality. Also, that doesn't work very good right now. 
So, federated search also combining different search engines in a clever way. Maybe the key is inter-user interfaces where you can very easily navigate through the web. We have this scatter-gather clustering idea. This also could be an approach to what's better search results or maybe something else nobody knows. But there are many, many ideas around how web search could be improved but for most of them nobody really knows what's going on there or what is the trick or what is the key problem to be solved. So usually, improving web search means making some slight evolution here, tweaking the user interface or integrating new information like Twitter, accounts of Facebook, social networks in some way. This is how it's done today but nobody knows how web search will look in a few years. Okay, I think we are going to make a five-minute break now and then we will continue with PageRank and hits in detail. So, let's go on with the lecture. So what we will be discussing in the rest of the lecture will be two algorithms. The first one is a PageRank algorithm invented by Page on Brin. The second one is the hits algorithm invented by Kleinberg. And the idea of the PageRank algorithm is how do you get this measure of prestige that each site is assigned recursively in a way over a large number of web resources. You have the web graph and for every page you need the prestige number, the PageRank. And of course, you could just rank the web resources by popularity and assign a number that is proportional to the popularity and that is. And the question is of course, how do you measure popularity? In web to zero environments you have the I like this page or I dig it or whatever, you know. But usually that is not the case. Or could you look at the incoming traffic, so how many people look at these sites, many sites that have a site counters, so this number of counts could be a version. But this is all rather theoretical to do it. It's very easy for a page to fake this number. I had so many calls and I'm so popular and I'm so important. You don't want to have that, you want to have something that is more objective in a way. And the PageRank solution was quite simple. I mean they were bachelor students, so what can you expect? But this is one of the wonderful ideas in computer science where simple ideas, especially if taken from interdisciplinary fields, can work so well that it forms the basis of a multi-billion dollar multinational company. And so what they did was they just stole the model of our sociologist and applied it to the web graph. And what they did was they said, well, the number of inlinks is correlated to the prestige and the links from good resources should count more than the links from bad ones, which was exactly like we had. So the prestige of a site is basically the number of the prestigees from different sites, or the sum of the prestigees from different sites that point to the site. The more sites point to it, the higher the prestige will get. The higher the prestige of the sites pointing to it, the higher the prestige will get when you have some scaling factor. And the model behind that is often called the random surfer model. Where you just say, well, if I imagine a web surfer that just randomly walks through the web, then the web surfer has different possibilities. He can navigate from page to page and just follow links. So you go to one page, you look at the links that are there, you choose one randomly. You go there. 
Or you could type in a new name of the website and hop to some totally different area of the web altogether. These are two basic navigation skills that you have. So you can click a random hyperlink, or you can type in a random URI. And very often you will navigate the web, you will follow the link structure, and rarely you type something in. Let's say 90% of times you just navigate, 10% you hop to somewhere else. And the page rank then is the long-term visit rate of each node. So consider a random surfer, how probable is it that it will visit your site? Well, the more links point to your site. And the more links point to the sites pointing to your site, the more probable is it that randomly the path to your site will be chosen. But this is exactly the ingredients that we want. We want the number of inlinks, and we want the prestige of the pages pointing to us, which is the number of inlinks to these pages. And thus the random surfer model covers exactly what we need in terms of the page rank. And the model is kind of crude, obviously. I mean, nobody surfs the web like that, and the randomness is not given at all in reality, because of course you will surf from topically related websites to websites of other topics that are similar and you will make your way, but in a more focused way, not this random surfer model. But it's kind of very useful to consider it that way, because you do have the idea of what happens mathematically. Well, some of the things that our browsers are not considered, so for example you might collect bookmarks, which are very often clicked, so the random typing of a URI doesn't happen very often, so this is kind of not too bad. What about the back button of a page, you know, like I mean this is not in the model at all. So this is not really a good model of web surfing, but it is a sufficient model for modeling the prestige of a site. That's all I'm saying. So let's see a more detailed version of the model. We start at a random page. It can be anyone. Then I flip a coin that shows head or tails, probability 90% 10%. If it shows head and the current page has a positive out degree, I follow one of the links randomly. This is the navigation. 90% of the time, so 90% of my coin shows heads. If the coin shows tails or the current page has no out links, serve to a random web page. Okay? Choose uniformly and flip the next coin. That's kind of the idea. And we get the intuition of what is happening there. So if we have a graph, the web graph, something like that. Okay. Maybe here's something else. So if we have the web graph, we just do the random thing and start here. And then we flip the coin and say, well, if the coin shows 90%, then I will follow a random link. This is my only choice. And I'm on this page. Second times a slot in the model. Then I do the same thing again. I say, well, flip the coin. Oh, it's the 10%. So I do again the random thing and I'm here. So every point on the web, every web page has the chance to be addressed uniformly. But then if a page has many inlinks, like the page over here, there's still some other notes, if it has many inlinks, the chance that one of these pages is chosen randomly as a starting point is higher. Because this is three out of the whole web graph, not just one out of the whole web graph. And the chance that when you start on this page, you navigate to this page is then also higher. So the probability that this page is accessed by our random surfer is higher because first more links point to it. The resources can be chosen randomly. 
Probability is higher. And of course, if these sources are well interlinked from other sources, it has a high prestige themselves. Then the chance is higher that randomly I end up in one of these. And again, have the chance of getting to one of the nodes leading me in the next step with a probability of 90% to this node. So this is the idea of determining the prestige of the site. And if we look at an example and look at the adjacency matrix of this very small web graph over here, and I leave out the nulls, the zeros, just look at the adjacency matrix. Then I have the transition matrix. And I will say that, for example, there is an initial probability that I start in every page. And I set the lambda to a quarter depending on where I can go. So if I navigate from page one, I am in page one and want to navigate with 75% probability to some page, the only chance that I have is going to page five. So in 75% of the pages, I go starting in page one to page five. Good. This is what this one means over here. If there would be ones also here, so if there would be a link down here, then the 75% would have to be distributed on both pages. And how would it have to be distributed? Evenly, because it's a random choice. So half of the 75% would go to page three, and half of the 75% would go to page five. But good thing, we don't have this link. So all the 75 go to page five, and this is what is written here. Still 25% missing. What happens with the 25%? Well, in 25% of the cases, I type in a random web address. How many possibilities do I have to type in a random web address? Exactly. So I have five possibilities. I can even type in the address of the page that I'm on right now. So I have to distribute it evenly, and 25% distributed over five pages makes 5% for every page. This is what is written here. And of course, if I have 75% to go to page five by navigation and the 5% by going to page five by typing in the URL, then I have 80% of possibility 0.8. That page five is accessed after I have been at page one. And now I do that for all starting points. So for every starting point, I record what is the probability of ending at a goal page in one step? Good. Let's look at it, for example, here. Page four. Okay. Let's take this away. Okay. Page four. What's the idea here? Of course, 25%, five pages, everybody gets the 5%. Good. 75% navigating. Where do I navigate to? Chance to get to page three. Chance to get to page one. Distribute it evenly. 37.5%. And page three, 37.5%. And this is the distribution. Okay. And, well, the number of connections between sites, this is called the adjacency matrix. And from start to go with the probabilities of getting there is called the transition matrix. And in every step that my random server takes, the transition matrix shows the probability of getting to each possible next site. So if I start in, for example, node one and then go to node five, in the next step, the probability of ending up at any of the nodes will be, well, start from five and so it goes. Clear? This is what I'm doing. This is basically my random server model. So this is transition matrix that I have. I introduce the idea of time and say, well, if I am at time t in any of the starting points, the probability that in time t plus one, I will be at any of the goal. Pages is basically, well, five percent, 80 percent, five percent, and so on and so on. Okay? No, this is stupid. The start is at page three. Okay? 
So if I'm starting at page three, or if I'm staying at page three in step or time t, the probability that in the next step, I will be at step one, five percent, 80 percent, five percent, five percent, five percent. Okay? This is the basic idea behind it. And of course, the total probability that I will be at any of these pages always sums up to one. Okay? This is the basic idea. Good. Now we can do a simulation. I think we start in step one. And now I want to know the probability that after t steps have been made, I'm in a certain page. So what is the probability here? What is the probability here? Probability here, probability here, probability here. Of course, this has something to do with t because it will stabilize over time. And whereas in the first step, it is kind of rather random where I go to. The decision of where I went will influence where I can go to in two steps. So if we do it, okay, I start in state one, then using the transition matrix says, well, with five percent, I will stay in state one because I just typed it in. With five percent, I will be in state two because I typed it in. With 80 percent, I will be in state five because I navigated or typed it in. Good? Now I'm in step five. So what do I have to do to do the next step? I have to multiply the values from the transition matrix for starting in step five and going to wherever I am onto the probabilities that I got there in the first place. So to get the second step, assuming that I went to state five in the previous step, I multiply the value from the transition matrix. So with five percent starting in step five, I end up in step five with the 80 percent of getting into step five in the first step. And then five percent of 80 percent is 0.09. That is basically the idea. All the ways I got into state five in the second step, and this is not really only possible if I got to state five and then stayed in state five, but also possible if I somehow got to step two and then went to step five, or if I got to four and went to five, or if I got to three and went to five, and I just have to add this up. And the probability, again, over the whole row is always one. And if I do that a lot of times, it seems that the values do not change very often anymore. So it stabilizes. A question, of course, is now, is that pure randomness that it stabilizes in this case? Did I choose the right example? Or is there some law of convergence at play here? And it is indeed the probability vector does converge. So if you see the limit case and just let time go by for any initial probability vector, you will get convergence. And this can be proven by some linear algebra and the theory of stochastic processes. When we say we have a network of size n, so we have n nodes in the network, and the probability vector is an n-dimensional vector that all the entries are the probability of getting to the point or being at the point. Some of entries is one. You have to be in either point, in either node at any state, and the entries are non-negative. So the probability that you hop into a point is always positive or at least zero. And a stochastic metric, we will just take an n-cross-n matrix. Again, that the rows add up to one, like we had in our transition matrix. You have to go somewhere, but the probability of going somewhere can't be higher than one. You have to choose either one, and the entries are non-negative. Then you can say that you can build a Markov chain with a stochastic metric. 
Because a Markov chain basically is a sequence of states and a stochastic metric that gives values for the transitions between states. So you have a state 1, you have a state 2, and you have the probability that you are in state 2 after you have been in state 1. So kind of the transition probability going from state 1 to state 2, and of course from state 1 you could also go to state 3, or you could stay in state 1. And all these probabilities here have to add up to one, because you have to do something. Okay? This is a basic idea. So a Markov chain basically has n states and stochastic metric that kind of reflects to the rows and the columns, the starting states and the goal states, and is a transition matrix. Then you introduce time as discrete steps of time. In every time step you do something, you go somewhere, and the probability with which you go somewhere, that is exactly given by the transition matrix. So the probability that you are in a certain state at the next point in time, given that you were at a certain state at the previous point in time, is the transition probability between two states. Okay? Well, basically what happens is that you have a finite state machine. You have a lot of states, and you have transitions between these states. Just by assigning these probabilities of the transitions, you just don't say anymore, it's possible going there, but you say how probable it is going there. That's the only difference between finite state machines and Markov chains. And what you know about the current state of a Markov change can be expressed by probability vectors. So at every point in time, you know how probable it is to be in a certain state. And the probability is just the sum of the ways of getting there. The probability of the ways of getting there. So if there is hardly any way of getting into some state, at any point in time, it will be very improbable to be in that state. If there's a lot of ways of getting in some state, the probability of being in that state at any point of time will be very high. And this is what the finite state machine or what the Markov chain expresses. The probability vector that we get here is basically the idea of being at a certain time in a certain state. And if we let run time to infinity, the probability vector will be stable and just give us the probability that we are taking any point in time in that state. If we don't know the point in time exactly. Going back to our example, we find that the current state of the chain is in state U. And this is kind of the starting probability. So that is kind of like, we know it is in place U and we have a transition probability to any other state from U. That is obviously influenced by the adjacency matrix. Obviously influenced by our choice of how often we draw random states and how often we navigate and the higher the possibility of navigating somewhere, the higher the possibility of being there at some point. So for example, if we have probability vector 3 states and we have 20%, 50%, 30%, that means that the chain's probability taken any point in time. So we let time run for a while and do the random process. And then at some point we hop in and look. Then the probability that we will find we are in state 1 is 20%. The probability that we find the chain has somehow moved into state 2 is 30%. And the probability that we will find the state is actually in state 3 is 30%. We can never be sure that we are in a certain state, but it's just a distribution of probabilities. Good. That's a basic idea. 
So the state transitions can be formalized using the matrix vector multiplication. So I have the starting distribution. And to get to the next step, I just multiply this starting distribution with the transition matrix. Because that tells me for every new state what is the probability of actually getting there from the current starting distribution. And this is what I do times and again, because I have to let time run. So what are the state probabilities p at the next point in time? It's basically, I take the probabilities that I had before, I multiply it with the transposed transition matrix, and then I get the new ones. This multiplication means row times column. So this is basically this product over here. I take it entry-wise. Let's look at a small example. I have one page over here, I have a second page over here, or node as it is. And I have possibilities of going from one to the other. So t12, or I could stay in state one. What can I say about t11 and t12? They must sum up to one, because I have to go either way. I cannot just do something totally else. Oh, I hope to state three, there is no straight strip. I cannot do that. I have to stick to either one. So the probabilities have to sum up. Then I can say, well, if I have a certain probability of being in state one and being in state two, the probabilities of being in state one and being in state two in the next step of time will be, well, the transition probability of 1, 1, that I was in state one, or the transition probability 2, 1, 1, 1. Oh, the probability of being in state one afterwards will be either I came from state one, okay, and stayed there. This is written here. Or I have been in state two and went to state one. This is written here. This is the probability of being in state one in the second step. Similarly, the probability of being in state two in the second step is I started in state two and stayed there, or I started in state one and went over. It's basically starting in step one, going over, starting in state two, staying there, okay? And there's no other way I could have that, so again, it adds up to one. Yeah? Good. And this actually gives us everything that we need to talk about, the convergent properties of the Marcus chain, and we can say that if we have some initial probability vector and have probability vectors at certain points in time, then for some t, we reach a fixed point where another transition to step t plus one will not change the vector. That is, the chain of vectors of probability vectors converges to a fixed vector, right? That means that, well, basically it has to be the same vector. The transition matrix doesn't change anything anymore. And this is kind of idea of eigenvectors. This fixed point characteristic is basically true for all the eigenvectors that have an eigenvalue in the matrix that is size one. They don't change. And so, starting with any probability vector, this process of multiplying it with the transition matrix, multiplying it with the transition matrix, multiplying it with the transition vector, the result finally at the eigenvector is an eigenvalue two. And now what you immediately say is, how can I know the transition matrix has an eigenvalue two, one? Is that always true that it has an eigenvalue of one? Rather improbable, isn't it? Well, it's not that easy, actually, that you can say. It always sums up to one, so it has to have. You need some heavy linear algebra for proving it. 
And I don't want to go into the details here, but the Peron-Frobinius theorem definitely states, a stochastic matrix containing only positive entries has one as one of its eigenvalues. So it is definitely true that you always have an eigenvalue of one. And thus, this algorithm of always multiplying, it's called a power iteration, always multiplying the transition matrix, the stochastic matrix, onto some starting distribution will lead in the end to the eigenvector. So the theorem also claims that one is the largest eigenvalue of the matrix, and there's only one, so the eigenvector having this eigenvalue is unique. So we will end up with one and only one probability distribution. Well, since we have a random teleport with a probability that is larger than zero, and probabilities cannot be negative, so transition probabilities cannot be negative, we do only have positive entries, and that is good. Thus, we have an eigenvector that is unique, and this is called a stationary probability vector in Markov theory. Good, so what we do is we take the random surfer model, we take a unique, or we want to know the unique stationary probability vector, and looking at the notion of prestige, we find that this is exactly what we're looking for. We want to recursively define the prestige of a node by citations or recommendations or votes, whatever you call it, of all the other nodes weighted by their prestige. This means basically we start somewhere, and then look in what node we will end up in the long run. So using this power iteration will definitely lead us at some point to a stationary vector, because we have all the ingredients, we have a stochastic transition matrix that has only positive entries, such as it has the largest eigenvector of value one, and it has a unique eigenvector that belongs to the eigenvalue, and by power iteration we can get at this, it converges, and the numbers that we get that after a lot of steps we are in a certain state reflect the idea of prestige, like given from social sciences. Perfect, isn't it? Well, as we know, the page rank was invented by Larry Page and Serge Brin. What is not clear is whether it's named after Larry Page. I mean, it's kind of unique that somebody who deals with web pages is really called Page, but the page rank is very often claimed. Either it has something to do with web pages or it is named after Larry Page, I don't know. It's like the B3, nobody really knows. And it has been patented in the US as a method for node ranking in a linked database. That is kind of the idea of the web search engine. So it has been patented and the pattern was assigned to Stanford, actually, and Google as a company has the exclusive license right. So when Google was founded and Google took off Stanford, received for the licensing of this invention 1.8 million shares of Google, and these shares were actually sold in 2005 for over $300 million. So if any of you invent something clever, just tell me and Braunschweig University will be liquid for the next couple of years. That's an amazing amount of money, obviously. I'm not quite sure 2005 probably was not the best point in time to sell it. So I'm not quite sure what this amounts to what 1.8 million Google shares would have been worth at the peak of Google's fame and the highest prices for the Google stock. Maybe it would be interesting. So it's kind of interesting to see what PageRank actually does. 
And as I said, you can't take PageRank on its own, but what you have to do is you have to blend it somehow with the content of the page. But if we look at it with respect to what happens, if you for example search for the term university, you will find that by pure IR techniques, the optical physics at the University of Oregon is the top priority because it very often mentions the term university and some terms that belong are quite similar to university. It just says the structure that as a document it is amazingly relevant. Well, knowing the University of Oregon, who does? Oh, so nobody does. Because it's a very small place and optical physics is a very small department in a very small place. This is probably not what you want when you query for university. This is a very specific result. How would you deal with PageRank? So with PageRank on the query of university, the first page would be the Stanford University homepage. Who knows about Stanford University? Well, everybody does. So the popularity of the site measured by the prestige ranks it higher. And what you have here is kind of like Illinois Urbana-Champaign, UIUC, also very good university. You have the University of California in Irvine. You have the Indiana University, Minnesota, Iowa State. So it's not perfect. But at least you have some of the prominent US universities, the popular US universities among the first results. Whereas here, good, you have Carnegie Mellon, which is a very famous university. I don't want to know where the West Layam University is. Cairo University, Shonanfui-Salva campus, interesting, but not very relevant. University of Sydney, Moncato State, you know, a lot of universities, definitely all relevant results if you take the query university at face value, because they are all universities. But if you also assume that a user has something in mind, that the user has some expectations when asking for universities. I want the prominent university, I want the popular university, I want the good universities, whatever it may be. The Google results set, the PageRank results set seems to be much better than the other results set. Okay? Good. So, short quiz. We have the web graph, very small here with five pages. And which of the following note lists is ordered by PageRank? What would you say? Hmm, hmm, hmm, hmm, hmm, hmm, hmm, hmm. So, E seems to be a very important note. It has four inlinks, huh? And these seem to be good inlinks because they are also connected. These seems to be not as good inlinks. But overall, a lot of inlinks. So, when looking at the others, we have two inlinks here. We have two inlinks here. We have one inlink. It's here. Should be the same, shouldn't it? Because they both link to E and they just share links. Okay? So, they should get the same PageRank. What about B and D? It's also the same, yeah? So, this is kind of the one over here. And one here, one here, one here, one here, one here. So, they should be the same. They should be the same. This should be the highest. This should be second. This should be third. A is the correct answer. Okay? And this is something that we found out by note analysis. And in this case, it is also consistent with the number of incoming links. So, if we would have just ordered them by number of incoming links, we would have the same result. But that doesn't have to be the case. And as soon as I give one link here, the thing perfectly alters. Okay? Good. Okay. So, now we still have the problem of how to compute the PageRank. 
I mean, we've seen that we have to basically take the random server model, decide for how often do we look at random pages, how often do we follow the navigation, and then just do the power iteration by choosing a random vector, that is the initial probability vector, and then doing the iteration always multiplying with the transition matrix, and at some point it will be stable. So, once the fixed point is reached, it will be stable. So, we start with an arbitrary initial vector. We do the iteration with the transition matrix here. Then we look at the normalization of the vector, and do it all again. And at some point, the vector becomes stable. If the vector doesn't change anymore, we are done. It is the stationary vector. We already did prove that the power iteration converges to the eigenvector and to the largest eigenvector, and from the Perron-Frobénios theorem, we know that the largest eigenvalue is 1, so this is the vector that we want. And actually, the number of iterations that you need is dependent on the number of links. Obviously, the more possibilities you have, the more difficult you will get. But actually, it's quite fast. So, considering 300 million links or 160 million links, you'll find that after about 50 iterations, we are already quite close to the original vector that we need. So, 50 iterations is okay. We don't need the perfect vector anyway. We just use it for the ranking together with the content score. So, it's not too hard to actually do it. But of course, we don't have a small vector with five webpages and 300 million links. But we do have a page-ranked computation of over 60 billion webpages with a lot of links. How do you compute the power iteration here, even if it's just 50 iterations? Nobody knows. It's very hard to do. Google uses a distributed algorithm and has a lot of computing centers all over the world that are only doing page-ranked computations together with IR style. Evaluations or retrieval and somehow ranks the pages. Nobody really does know one. So, it is known that the ranking function of Google does not only consist of page-rank and the IR style part, but there are a lot of different things. So, it is known that they have over 70 or they claim that they have over 70 ingredients for this ranking function. Page-rank is one of them, but a very important one. And how they actually do it is their business secret and they won't tell. So, we can only conjecture here, but in principle, it can be done and this is kind of good. But at least part of it, I doubt that they know that they really materialize the full web graph in a transition matrix because there will be a 60 billion cross 60 billion matrix. That's kind of hard to do. They have to kind of break it down somehow. Well, and probably the transition matrix will rather be blocky. There's a lot of zeros here and if you choose the blocks right. So, it has been shown in network analysis that the web used to be at least a small world graph. Small world graph means that you have many communities that are heavily interlinked. And these communities are somehow connected with each other by rather sparsely. And if you block these parts, these small worlds, this might be a good idea of calculating the page ranks because then you can restrict your problem to a small part of the matrix. In a way, yes. 
So, if you can detect the small world parts of the web and then kind of use this for distribution and then say, so this is the first billion of web sites and this is their page rank and you compute that for different portions of the web, then this kind of works. Nobody knows. I mean, if you have ideas, compete with Google. You're always interested in new engineers and it has to be something of this in this way. Well, you need the infrastructure obviously and you need the clever algorithm to do it efficiently. And of course, I mean, as we had last lecture when we were talking about crawling, once you are finished crawling, you can start all over again. And the same applies to the page rank computation. Once you're finished building the page rank vector, the stationary probability vector, just take the next crawl and do it again. And this also calls for a distributed algorithm because if your crawl is distributed in a clever way, maybe in a focused crawling way, then computing the stationary vectors for this portion of the web is much easier than can be done over the whole web. Basically what I'm saying. Well, the importance of page rank has sometimes been exaggerated and that probably stems from the impact on the market that Google had because the only thing that was new at Google in the first few years was this page rank algorithm. That was the groundbreaking idea using the web structure, the link structure of the web. And since this kind of outperformed all the existing algorithms and all the existing search engines, you will very often find the opinion that this is the ingredient that made Google what Google is and therefore it is still today the ingredient making Google search better than all the other searches. And it has been patented and stuff, but patenting an algorithm is always a very, very difficult thing because how do you know whether somebody uses it or not? And believe me, all the search engines that are popular or that are sensible on the web today use the link structure of the web. Of course they don't use page rank because then they would have to pay licensing fees and since Google is the exclusive licensee that cannot be done, so they use some other algorithm. That probably is very similar to page rank. So the problem is it's one component. It's an important component. It's probably not the most important component because you need the textual things, you need the considering the anchor text, the surrounding text, the content of the page, proximity of pages, the structure in the small world graph, these ideas, centrality of pages. You need a lot of things to really do the ranking well. And Google uses really a lot of different features. As I said, I remember Paperware said it's over 70 and probably it's much more than that, especially for the personalization tasks and stuff like that. There will be a lot of ranking features. And there are rumors that page rank actually in its original form only has a negligible effect on ranking, but link structure is employed. At some point in the ranking, link structure is employed. That is something that we can be sure of. One of the problems when people became aware of that Google is so successful because it uses this link structure was, of course, the spam industry. It says, well, it uses link structure for ranking. So why don't we create pages that give Google exactly the link structure that they want, that they are looking for, that they are ranking in a higher way? 
And so the competition between spammers and search engines that want to filter spam started off and Google had to change their algorithms and see what link structures point to spam and so on. It's very difficult and if you just rely on the link structure, your web search engine that you built would be easy to trick by spammers. You have to do something more. There are certain varieties of page rank. So, for example, one of the big disadvantages of page rank is that it has a single overall score for each web resource, which is the prestige. But you don't usually have a prestige, but you have a prestige with respect to something. So, probably, Kinder is the most beautiful girl in this course. She's the only one. That doesn't help us here or somebody is the most clever person or somebody is the most whatever. So there can be different types of prestige with respect to something. When you have a certain topic in mind, you don't want to know who is good at something totally else, but you want to know who is good at this topic. So, for example, you're asking a query in the area of physics, then even a mediocre physicist would help you more than the most brilliant chemistry guy there ever was. However, the page rank is computed independently of the query. Thus, the most brilliant chemist will have a much better page rank than your mediocre physicist. This led to the idea, basically, that the page rank could somehow be made topic-sensitive. It basically defines a set on popular topics, for example, footballs or Microsoft products or politics or whatever. Then you use classification algorithms to assign each web resource to a certain degree to these topics. Then, for each topic, you do what is usually done in focus-crawling. So you build focus-crawling into your model. So you compute a topic-sensitive page rank by limiting the random teleports to pages of the current topic. At clear time, you detect the topic and use the page rank scores that were computed, pre-computed with respect to the topic. That at least takes your physicists and chemists and stuff apart. I don't know how much help it is, but it can be done. Of course, you have to anticipate the topics that people are interested in. So if we take a query like bicycling and look at the different pages in topic-sensitive page rank, if we just use the normal page rank, no topic whatsoever, then the best pages we get is the Rail Riders Adventure Closing, Florida Cycling, Waypoint Org, GORP, whatever it may be, company, obviously. But if we introduce a topic, for example, arts, computers, business, games, whatever, then the ranking of the pages strongly change. So, for example, with business, we would get books about cycling that you can buy, or you could have some bike-building kits with computers. You would get a GPS pilot, so a device that tells you when cycling, where you actually are. If you bias it towards, I don't know, recreation, you would get a travel company. If you bias it towards shopping, cycling, closing, shop outdoors, bike.com, kids and teens, the camp for boys, whether or if you offer some cycling. So it really shows an influence on the result set if you use a topic-sensitive page rank. Interesting thing. 
If you take the comparison to the normal page rank with a precision at 10 measure for a set of queries, then you will find that for different queries, manually define the relevance of the first 10 results that are delivered by both algorithms that the biased page rank, the topic-sensitive page rank, usually is much better in terms of precision than the unbiased one. There may be different situations, for example, here, the query on HIV, but on average, taking the means of topic-sensitive page rank performs better. Good. That was basically all I wanted to say about the topic-sensitive page rank. There are a lot of other extensions. So, for example, you could eliminate the navigational links on a page that are just for referring you to subpages, and that, of course, is not a quality vote for the subpage, but it's just kind of a directory service. And on my page, I can have a lot of navigational links, and I can basically navigate from every page to every other page, which does not mean that these pages are particularly interesting. So, this is kind of what you would try to get out of it. You would also try to eliminate nepotistic links, so nepotism is kind of making your brothers or giving your brothers good jobs. And what you would, of course, do is kind of this link-farming or link-donating. You link to my page, I link to your page, and also what's happening on Facebook, who gets the most people following them, whether they are interested or not, you don't want that. You want to get rid of that, too. So, for example, if the same person authored different sites, there's a very high probability, even if they are not topically too well connected, that he will introduce some links. Oh, look at my other sites, they are also so wonderful, but that is not what you would want. Or the links between friends, you know, like, oh, the site is crap, but he's a good guy. This is not what you would want in your page rank. And it's kind of difficult to find out what these links are, and the same applies to the spam detection. It's very hard to find out whether this actually is a fraudulent structure of a web page, or if somebody did it or designed it in that way, because it's very practical or because it makes actual sense. And with that, we get to the next detour, looking at the Google toolbar and looking at the web pages of some interesting pages. No, we don't. We are well over time, and we will have some time left in the next week. So thank you for attending. Have a nice lunch and see you next week. Thank you.
This lecture provides an introduction to the fields of information retrieval and web search. We will discuss how relevant information can be found in very large and mostly unstructured data collections; this is particularly interesting in cases where users cannot provide a clear formulation of their current information need. Web search engines like Google are a typical application of the techniques covered by this course.
10.5446/353 (DOI)
Alright, good morning everybody. So today it's just me because Professor Bark is attending a conference in Hanover today so Today we will have a look at some more sophisticated retrieval models than the Boolean model and basically these are the models used by most most state-of-the-art Retrieval systems for example, I strongly believe that Google still relies on some of these techniques presented today But before coming to that, I would like to discuss the homework exercises with you So who did the homework exercises? Very well, so the two people can actually check what they try to do and the other ones will Maybe learn something new As I said last week, it's always a good idea to do these homework exercises because It's your only chance to get immediate feedback from us Before the exam So otherwise you will have usually have to learn it all on yourself when preparing for the exam and here you can just Try out something explain your guesses to us and we will give some detailed feedback Okay first exercise There I gave you a link to the ACM computing classification system that was some some Mesh-like system provided by the ACM to classify research publications from the area of computer science, so it was last revised in 1998 so it's not not too up to date But the exercise was how a book about Web search engines such as Google would be classified according to the system So I will now open this classification The website at least I try to do so Okay Another try And then you will tell me where to classify this wonderful book So currently it seemed that we don't have any internet access Yeah Without a cable we don't have internet one of the most fundamental principles of computing Maybe you can you too who did it can can give some some feedback on what you tried so what have been your results Are you your first impression of of the ACM classification system? Okay So that's a rather broad topic isn't it? Hmm, okay, you came to the same conclusion I guess Oh, he spared it's bad bad boy. So okay, I cannot show it right now, but that's not a big problem so actually this classification system is rather small and covers only rather broad topics or any book written about search technology would be classified into information retrieval something And there are thousands of books and publications that are newly created every year So using such a classification system is not very helpful for practical purposes when you're looking for specific books or Publication, but it may help librarians to sort their their shelves in some way So this is always the problem with these classification systems You must maintain it. 
They must be quite sophisticated to be really helpful, and the ACM classification system, at least, didn't help me very much in the past. Some publishers require that you classify your own papers according to the system, and then you usually end up somewhere in "information systems" or something like that. That's okay, but it isn't very helpful, because computer science is a huge field and even information systems is a rather large part of it; for example, all the things we are doing here at this institute are information systems or databases, and so even there you couldn't distinguish these different parts using the ACM classification. Okay, second exercise. Last week I showed you an example of the MeSH classification system from the medical domain, and the question was: what are possible problems with these complex classification schemes, and in which scenarios might those approaches be worth the effort? Any opinions on that? Yeah, it's your personal question answering round today. Okay. Hmm, yes, yeah. Yes, for each new term you have to review all documents. But usually, if you have some domain experts at hand, they know where these old documents would have been classified before, because MeSH strongly emphasizes that classification stays consistent in some way. So if you have a new term, a new disease, that would have been classified differently in the past, then you are able to find the relevant publications and to reclassify them accordingly. So you're totally right, accuracy is a large benefit of these schemes, but it of course requires that you keep the scheme up to date. So MeSH is updated every year, and you really need a highly complicated classification tree with many small sub-areas to be able to find what you're looking for and get a complete picture of your research domain, in this case medicine. Of course, it's a huge amount of work and it usually is very expensive. So it's only worth the effort in scenarios where life depends on this knowledge, or where science could not do well without being able to refer specifically to some works that have been published in the past. So that's always the trade-off you have to make: classification schemes can be very helpful, but to be helpful they have to be expensive. Yes? Yes, that's a very good point. Because it's handcrafted and designed to fit the mental world of the scientists and people working with it, you usually have a well understandable system of classification that can really be used by people, at least for searching for information and for classifying. Of course, you need some trained experts to do this consistently, but to find information it usually suffices to browse through the tree, as you did with the ACM example. All right, third exercise. Okay, we have seen that information retrieval is about text documents. But in relational databases, for some time now, we also could easily store text documents, for example with a character large object data type, or we could even apply the bag of words model to a database table with two attributes: the first is the term, the second is the document, and maybe there is a third column indicating the number of occurrences of the specific term in the respective document. So why the heck do we need information retrieval systems if we easily could use databases? What's the point of information retrieval?
You know, well, as I said, well, you really could could put the back of words Into a database table if you would like to so one column is the term Second column is the document the term occurs and third column is the number of occurrences Yeah, that's exactly the point so basically the distinction between databases information systems is of historical or philosophical nature so databases try to focus on Exact retrieval tasks where you can define logical conditions on your data or what you're looking for And information retrieval is always about best effort and relevance and such wake terms like information need So you try to resemble what humans intuitively might be looking for what might be relevant to them And do this in a in a most mostly effective and efficient way and database systems are just about fulfilling Logical constraints in some way In fact, there are some approaches trying to combine both into one big system But we are just at the beginning of this area. So some databases currently offer support for text search. So you could do a keyword based text search on on text documents stored in database for example Using a character large object data type But usually these two paradigms of databases and information people do not integrate too well So there are also some information retrieval Systems trying to use structured data for example in online shops. You often have these Faceted browsing on the one side of the screen where for example could select only Only mobile phones costing less than 150 euros or something and then you just click the respective checkbox Um, there are some approaches trying to combine both but usually it's not that easy because These are completely different perspectives On the problem and it's very hard to integrate it All right, um Next one, um Yeah, what is the meaning of the terms information need and relevance and information retrieval? And what is the connection between both? Okay So relevance is a completely subjective concept Yeah, basically basically that's it relevance Or to put it differently, there are many many different Different ways to define relevance. We will see in some lectures how this could be done They are actually different degrees of relevance. Some people say so relevance is a completely personal thing and even if I uh, if the search engine shows me a document that's completely unrelated to my initial query, but helps me in some way because I have some hobbies and some problem in my private life I want to solve but did not query for it this time then this could be helpful to me So this would be relevant on the other hand that there are people saying that relevance can be objectified in some way um That for each query usually people largely agree That a document is relevant or not then relevance could also be a graded a graded thing Very relevant not so relevant a bit relevant Medium level of relevance. So there are many many definitions to do this um And information need and relevance always go together because relevance is yeah, what's What satisfies this information need from what? 
Whatever perspective All right, these are very vague terms and that's one of the main problems of information retrieval And that's also a big distinction to to different areas because information retrieval systems If you build a new system or design a new algorithms, you are not able to prove rigorously in mathematical terms that this algorithm Brings the right results because you don't know or you can't define what's right You always that's the only chance you have can compare your results to what humans have Have what would manually have found relevant or helpful or whatever So it's some kind of empirical work you have to do when you design information retrieval algorithms In two or three weeks, we will take a look on how Information retrieval algorithms can be evaluated in a more or less objective Fashion, but that's a really big problem information retrieval Okay, next one. I want the difference between the back of words model and the set of words model. Hey, that's an easy one Anyone else Back where the set? Exactly so in a set of words model each document is represented as a set of terms And in the back of words model it's represented as a bag of words which is Silly way to say multi set or set with multiple occurrences of the same term So sometimes mathematic mathematicians use multi sets and In fact, they really really helpful information retrieval. So set of words model is rarely used Usually you use back of words model because how often the term occurs in a document is highly informative about the document's content Easy Okay, Boolean retrieval. Um, yeah, that was a One of the big topics of our previous lecture When is Boolean retrieval helpful because to me Boolean retrieval looks like a database like information Uh Retrieval algorithm and that's not too helpful for finding out for really finding the information you're looking for What are what are possible scenarios where Boolean retrieval can be useful and why is that? Um Oh, it doesn't necessarily reduce the number so Because you cannot control the number you can you can use a bit more relaxed formulation of your query which was with less With less constraints and then get huge results. So you really don't know how many results you get so controlling result size Is a bit tricky in Boolean retrieval. Um, yeah database like search right, but uh Uh, we're looking for text documents that's inherently different from from what definitely from what databases do usually Uh Yeah, so in in domains where you usually have a highly standardized vocabulary Uh That often occurs in technical domains. For example, when you're searching in a database of technical specifications or patterns, for example, then this might be helpful or for example in the legal domain Some people say this could be could be quite helpful. Um Um a third big advantage of Boolean retrieval is that you really can verify What the system returns to you? So when you adjust enter query in google, you have no idea Why exactly you got the results you actually got? So the algorithm is hidden and even if the algorithm would be would be would be would have been published by google Or would be made available to the user. 
There's no chance Uh Trying to really reconstruct what has happened in the background and all these algorithms To get the results that google showed you so if you really want full control about what's happening then Boolean retrieval is the retrieval algorithm you want to use however, as I said, it's Quite it can be really tricky to use it because of a synonym problem Or because of controlling result size, but even you know what you're doing here So, um, I think that's the last one. Um, So we have seen in the last lecture when we're representing our documents or our document collections We can use a term document matrix. So rows are terms and columns are documents or the other way around. It doesn't matter It's a matter of taste. Um, but usually these matrices are very sparse So only very very or only a very very small number of entries are non zero because most terms do not occur in a document So in a document about computer science, I won't have many psychological keywords in it Of course, so, um, from my experience usually, um, 99.9 percent of uh, of these term of these term document matrix Contains the zeros was empty Um, well, it doesn't seem to be a good way to store these matrices completely in memory with all these zeros in it Uh, can you imagine any better ways to do it? Okay Yeah, so that's some tree like structure if you have the term term for example, then you would start with a t Oh, you would start with a with an empty root and then for any character There is You would make a branch And somewhere here's the t And from this also again here's the e And in each note if the word is finished here here would be the r m Then here would be a reference to the document where this term occurs Yeah, that's a another way it's suffix or prefix trees whatever you might like to call it So it's basically the same idea, but this can also be used to to find terms in large large databases So and in both ways however you do it doesn't matter Um, you are you are able to store your database very very efficiently So you don't need a huge amount of space because uh when you want to store the whole internet Or the whole web like google wants to do they cannot afford to Uh fill their memories and hard disks with zeros that mean nothing and because this information Although has to has to be searched through Uh, they heavily rely on a on a very compact representation Sometimes or most most most popular is is this representation here as at this list form Um, maybe you also add some information about how often the term occurs here For example here the two would mean in document three the first term occurs twice And in document nine it occurs only once And then you can build some list data structure and easily scan through these these lists sometimes you can even apply compression algorithms to these lists because Reading a small amount of data from a slow disk and doing the decompression in memory Could be faster after all than reading a large list from from the disk So there are highly highly sophisticated ways to do this we will look at some of them in Two or three weeks And if you're really going for efficient retrieval So if you want to make your search engine really really fast, this is the construction site you have to work at Doing this representation and retrieval on this basic list level Uh, very very efficiently and using cpu caches or whatever you can do or using a clever distribution system in your network about Across many different different servers So there are many ways to do it And this is 
usually the the hard part of designing search engines So the all the algorithms we are we are discussing in this lecture Usually could be could be easy to be understand Well, it's it's usually not a big trick there But implementing them efficiently. That's a really hard part of doing it. So building a search engine is 1% having a clever idea and that's a very important part of course, but it's 99% Clever engineering so That's how it works Oh, okay Yeah, of course you could uh, you could use a database for this task, but But usually you are using specifically designed data structure for this because you don't need transactions in informational retrieval systems when you build such an index. The basic idea how these things are organized or even vTrees are sometimes used in informational retrieval for matching the data. But it's not used by putting all the data into a database and then running some algorithm on the database because that would be far too slow. So databases are optimized for modifications of data usually and for transactions and data consistency and all this stuff. But they are not designed to modify information or access information on such a low level as we want to do it here. And one major difference in informational retrieval systems is you really want to control how data is actually stored on your hard disk. In databases you can only say, well, we have our table with our terms and documents and maybe an index on it and the database completely decides how to manage all this stuff. But if you don't know what the database is doing and if you cannot tweak this storage in some way that helps your efficiency, then maybe you end up with a really slow system. So databases in principle could do this work, but you usually build your own data structures. So that also means building an informational retrieval system like Google or whatever basically means building anything from scratch. You cannot use any existing large blocks of technology because they are usually made for different purposes than you need in this case. All right. So these questions are very exome-like, I would say. So in the exome we don't require you to learn some stupid definitions that even we don't know, but we want you to talk about all these concepts we're using in this lecture, all these ideas and be able to critically reflect them and discuss them with us and tell us what's working good, what's working bad and why things are done the way they are done. So usually all the algorithms we are discussing here, there is some clever idea behind it and there is some motivation why people are doing it this way and don't do it another way. So we are able to explain to us why things are done this way and why other solutions would be stupid, then that would be a great exam you're doing. All right. I need to go one slide back. Okay. Now to the contents of today's lecture. So first recap of the previous lecture. Some of this we already discussed in the exercises. So we learned about Boolean retrieval. Queries are sets of words, so called index terms. Queries are propositional formulas. So step and China and mankind or something like this. The result is a set of documents that is exactly the set of those documents satisfying the query formula. And for example, if you have three documents, three sets of terms, then of course, you have step, mankind, man, step, China, Tyconaut and step, China, mountaineer, then the query is step and China and Tyconaut or man. 
You could use brackets here to easily state your information need in some way and the result would be document one and document two. So it's just a set because we don't know which document fits the query better because both satisfied those SS no graded way to judge which document helps more. And the third document is not returned because although it contains step, it does neither contain man nor China and Tyconaut. So that's rather easy. Okay, but we have seen working with these sets and binary membership of terms in documents often is not very helpful because usually there are index terms in the document that are more important than others. So if I'm writing a document about the history of computer science, then probably the term history would occur somewhat more often than some name of some famous computer scientist in this document. So and therefore it would be helpful if someone is looking for computer science history to just to raise the importance of this document because it contains computer science very often and history very often and probably more often than other documents that only, the term history and the term computer science just occur once. So and here the ideas to use so called fuzzy index terms and fuzzy means that you can assign weights to the terms in each documents. We will see an example soon. And what you also can do with these weights is you can account for synonyms of a documents term. So if I'm writing a document about science in general and I'm not using the term research, then it would be really great if people would be able when they're looking for research would be able to find my science documents because research and science are somewhat related in a way. They're not exactly synonyms, but they're highly, highly related. And so it would be a great thing if the system automatically would assign the same weight to them or to the synonym slightly lower weight than to the original term. Okay, these are the main motivations of using weighted approaches. Synonyms are related terms and number of term occurrences in each document. Okay, what can we do now? We can can just improve the Boolean retrieval model and describe documents by so called fuzzy sets. And fuzzy sets mean that no terms don't have any more than a binary set membership to the document, but a graded membership. As I said, some document that occurs in the document would have grade one and some related term that does not occur in the document would have a rate of 0.5 or something like this because it is related, but does not occur in the document. Another advantage if we're using weights in some way, we are able to use this weights for computing a ranked list of our results. So we are finally able to order our results and put the most relevant documents in front of the list where users easily can find it. So it's actually what such such engines do for a long time now. Okay, that's about that. In the next lecture, we will now take a closer look at this, how this fuzzy retrieval model works and then we take a look at two other very popular retrieval models that are similar from the idea of using using weighted terms. And finally, because next week we are going to talk about probabilistic retrieval models, we will give a short recap of probability theory because I am assuming that most of you are not too experienced in all these terms and just need a short refresh on that. Okay, let's start with fuzzy retrieval. So, for example, we have our original document. 
It's a document about people climbing some mountains in China, like Reinhold Messner here, the famous mountaineer. And depending on the document, it could be the case that it's not about how he steps forward but about how he climbs somewhere in China, and so it would be a great idea to assign a high weight to China and also a high weight to mountaineer, and only a low weight to the term step in this document. So assigning weights looks pretty easy, but the problem is how to work with these kinds of sets now, how do we calculate with them. So the main problems are how to compute with these sets in some way, and where do we get these degrees from. We will tackle both problems from now on. It all depends here on so-called fuzzy logic. This is Lotfi Zadeh in the 60s. He had the amazing idea that Boolean set membership, so something is contained in a set or it is not, might be a bit too restrictive for real-world applications, and so he decided, well, possible truth values are not just false and true but also any number between these two numbers. It sounds rather strange, and actually in the 60s many people thought he was just stupid and did not understand anything of Boolean theory, but actually he was exactly right about how people use these concepts, because when you talk about tall persons then usually this is some kind of graded concept. So people who are below, I would say, 1 meter 70 wouldn't be considered tall at all, maybe in kindergarten but not in most other scenarios, and then of course all people that are taller than 2 meter 10, for example, would definitely be considered tall, and anything in between, yeah, he is rather tall, he is quite tall, yeah, he isn't really tall but he is also taller than the average person. These are all terms people use in real life, and so this is just the idea: you describe the set of tall persons not by simply creating a list of tall persons but by taking a list of all people and assigning a tallness score to them in some way. Okay, the next thing is, now that we have some sets with graded set membership, it would be a great thing if we just could translate the Boolean operators we learned about last week to operate on these fuzzy sets. Some design goals: propositional logic with only 0 and 1 should be a special case of the logic that Zadeh wants to develop, and the fuzzy operators for combining fuzzy sets should have some mathematically nice properties; that doesn't matter much here, whoever wants to know more about it can look at the very large Wikipedia articles about all the theory. But for our purposes in this lecture it's enough to know the following. Let's introduce some notation, mu of A. So A is some graded fuzzy variable, in fact a number between 0 and 1, for example the tallness degree of a person; no, A is just a variable, and mu of A is the degree of class membership of this variable. So if A is the membership of a person in the class of tall persons, then mu of A would be its membership degree, or its truth value if you transfer it to Boolean logic. And here the idea is: for conjunction, when you say A and B, so combining two truth values by conjunction, the resulting truth value would be the minimum of both individual values. So if someone is a bit tall and very large, then he is a bit large and tall, or big and tall, whatever. You always take the minimum value. For disjunction you just take the maximum of both, and for negation simply 1 minus the value.
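Written out, these standard Zadeh operators are:

$$\mu(A \wedge B) = \min\bigl(\mu(A), \mu(B)\bigr), \qquad \mu(A \vee B) = \max\bigl(\mu(A), \mu(B)\bigr), \qquad \mu(\neg A) = 1 - \mu(A).$$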
So as we can see propositional logic indeed is a special case because if mu of A and mu of B are both in the set 0 and 1 then the minimum of these both value is indeed the truce value of their conjunction and the maximum is the truce value of their disjunction and 1 minus the truce value is exactly the negation of it. So just taking the ideas of Boolean logic a step forward to weighted or fuzzy membership degrees. Okay, now the example. Let's say we have a document containing three terms set China and mountaineer with different membership degrees. So say the document is highly relevant to the term China and to the terms mountaineer and it's just a little bit about steps in any way and the query could be step but not China or mountaineer. Then the question is how relevant is this document with respect to the query. So here the result is 0.8. And how do we arrive at this value? Any ideas? So exactly we have not. China would be 0.1 at the end because but not is and not. This would be no end is the minimum. This would be 0.1 and this one or this one which is the maximum and the result is 0.8. So this document is contained with degree 0.8 in the set of all documents that satisfy this query. So because it satisfies the mountaineer term and the mountaineer term is quite representative for this document. So this is one simple example how to use this just Boolean logic and Boolean retrieval taking a step forward to weighted degrees. And if you use this correctly in a reasonable way then you can end up with weighted results sets or ranked results sets. So the problem is whether this is really intuitive. As I said these operators we have defined have really nice properties spoken from a mathematical perspective. There is a huge theory about T norms and T code norms that are all different ways to define this stuff. As I said it doesn't matter here. But when looking at these examples here sometimes these operators are not doing what we would intuitively expect them to do. Another example the first document has assigned the term step in China both with the fuzzy weight of 0.4 and the second document contains the term step in China where the step has assigned 0.3 and China the weight 1. So China is really fully contained in this set. And then the query would be step in China and means taking the minimum and this would result in document 1 be getting the relevance degree 0.4 and document 2 the relevance degree 0.3 because we take just the minimum. So as we can see here China is completely contained in this document and step just a little bit less than in this document. However nevertheless this document is ranked higher because step so the minimum is 0.4 and the minimum is just taken at the most relevant thing here. So we would expect as China has such a large weight that document 2 would be more relevant than document 1. So this is a limitation of these operators to fit the mathematical conditions of Boolean logic and some other theories they have to be defined in this way and therefore we get some kind of strange results here. To put it more graphically this problem so if we have a query term 1 and term 2 and this would be the membership degrees of term 1 and term 2 then all these documents lying on this line would be assigned the same membership degree 0.7 because for all these documents the minimum is 0.7 and the membership of the second term or the first term is completely ignored. 
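As a small sketch of how such queries are evaluated with these operators: the exact membership degrees of the step/China/mountaineer document are not stated here, so the values below (step = 0.3, China = 0.9, mountaineer = 0.8) are assumptions chosen to reproduce the 0.8 from the example; the second part replays the 0.4 versus 0.3 ranking just discussed.

```python
# Fuzzy evaluation of "step AND (NOT China) OR mountaineer" with Zadeh's
# min/max/complement operators. Membership degrees are assumed values.
doc = {"step": 0.3, "China": 0.9, "mountaineer": 0.8}

def AND(a, b): return min(a, b)
def OR(a, b):  return max(a, b)
def NOT(a):    return 1.0 - a

score = OR(AND(doc["step"], NOT(doc["China"])), doc["mountaineer"])
print(score)  # 0.8  =  max(min(0.3, 0.1), 0.8)

# The counterintuitive ranking example: with a conjunctive query the minimum
# ignores everything except the weakest term.
d1 = {"step": 0.4, "China": 0.4}
d2 = {"step": 0.3, "China": 1.0}
print(AND(d1["step"], d1["China"]))  # 0.4 -> ranked first
print(AND(d2["step"], d2["China"]))  # 0.3 -> ranked second, although China = 1.0
```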
This is what we usually don't want to have here, because if the second term occurs very often, then this of course should have some effect on the result. Similarly, when you have the Boolean OR in your query, then all these documents here lie on the same line again, membership degree 0.7, and it doesn't matter how much or how little the other term is contained in the document. So we would expect that if there are changes in this direction, because term 2 is more or less representative for this document, then this really should change the overall score. And this cannot be done easily with fuzzy set theory. So these are the limitations when doing retrieval, but there are scenarios where you really can use it. I want to say, is this just two reasons, and I would have to say all of them are... Okay, yeah, definitely there are scenarios where handling it in this way might be reasonable, but your scenario is more database-like, I would say. So you have two hard criteria and both should be satisfied as well as possible. And so, yeah, it's what you would require from a database system that is able to cope with graded membership, but an information retrieval system maybe should rank some information higher and should not ignore the other information completely. So, because of this, fuzzy retrieval usually is used in database-like scenarios. It's not too popular in information retrieval, but it shows how to come from Boolean retrieval to more sophisticated retrieval models. Okay, the second problem is: where do we get these fuzzy membership degrees from? Because in our documents, we only know that a term occurs in a document or it doesn't occur, or maybe we know how often it occurs. We could use this to translate it into weighted degrees, but it doesn't solve our synonym problem. For example, as I said, if I have the terms research and science, and in my documents only science occurs, then it would be a really good idea to also assign the term research to this document, maybe to a degree of 0.8 or something like this, because it's closely related. Assigning membership degrees could be done manually by some nice person here, but that usually is a lot of work. So, since we are computer scientists, we are looking for a more automatic solution to this problem. And our scenario here is: given some bag-of-words representation of documents, the question is how to convert it into some reasonable fuzzy set representation, because that is the situation we usually have; in advance, we don't know which terms are semantically related in some way, we just know what terms occur in our collection and in our documents. One way to transfer this term-document containment into fuzzy weights was proposed by Ogawa and some of his colleagues in 1991. And the idea here was: first take the crisp set of terms that occur in each document, and then assign some more terms to the document that are related to the terms occurring in the document. So for example, if we have this document that contains the terms step, China and mountaineer, we would assign step, China and mountaineer the weight one. And some related terms should also be added to our document representation, for example alpinist, which is related to mountaineer, and Asia, which is somehow related to China. And the weight makes clear how strong this relationship is. So this would be the document representation we would use for answering queries.
So if people are now looking for documents where alpinists do something in China, we are now able to find this document and with booty retrieval, we would have no chance of finding this document. So that's the basic idea. And as you have seen, assigning these new terms to the documents and assigning weights to them has something to do with term similarity. So Mountaineer is similar in a way to alpinist and we would like to automatically derive this weight or this similarity somehow. There is some way to do this. This is called the JAKAR index, which we take a look at now. It simply measures what terms tend to co-occur in a collection or what terms used to appear together in documents. And given two terms, T and U, then the JAKAR index will be called C of T and U. And it's simply the number of documents that contain both terms divided by the number of terms containing at least one of these terms. So it's just the relative amount of documents in which both terms occur, given that at least one of these terms occurs. So if two terms are complete synonyms and always appear together in a collection, then they would have the JAKAR index one, which means complete similarity. On the other hand, if you have two terms that never occur together because they are completely unrelated, then the JAKAR index would be zero. So rather straightforward way of defining document similarity. Again there are many, many more ways to define term similarity by analyzing documents collection. But this is some very popular way and it's especially popular because it's so simple and works so well in practice. It's also called term-term correlation coefficient because it resembles the idea of statistical correlation in some way. Correlation in the sense things appear together. So there are some technical differences to mathematical correlation, but that's also not too important here. Let's look at an example, given three documents, again here set of words model to make things a bit more simpler. Document one, step man mankind, document two, step man China, document three, step on mankind. And we now would like to compute the similarity by means of the JAKAR coefficient between step and step. And of course that's easy because all documents, or in all documents in which either step or step occurs, both terms occur because it's the same term. So the similarity is one. So second one is step and man. So we first count the number of documents containing the first term, that's step in this case, step man two three, and then containing either one of those, so step or man. And since step is contained in all documents, the denominator is three. And now we are looking for documents containing both terms. Step at man is contained in two documents. And therefore the JAKAR index of step and man is two thirds. So let's do it for China and man. First find out the number of documents containing either man or either China, or both possibly. Then we have this document and we have this document because man is contained in both. And the third document neither contains China nor man. So here would be a two. And now we look for the number of documents containing man and China. This is only one document here. And so the JAKAR index or the similarity between man and China would be a half. So I left this part of this matrix empty because the JAKAR index is defined symmetrically because it doesn't matter whether I exchange U and T in this formula, the results are the same. 
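A minimal sketch of this computation on the three example documents (set-of-words representation, as above):

```python
# Jaccard ("term-term correlation") coefficient on the example collection.
docs = [
    {"step", "man", "mankind"},   # document 1
    {"step", "man", "China"},     # document 2
    {"step", "mankind"},          # document 3
]

def jaccard(t, u):
    both = sum(1 for d in docs if t in d and u in d)
    at_least_one = sum(1 for d in docs if t in d or u in d)
    return both / at_least_one

print(jaccard("step", "step"))   # 1.0
print(jaccard("step", "man"))    # 2/3
print(jaccard("China", "man"))   # 0.5
```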
So it usually suffices to just compute the term similarities for the upper triangle of this matrix and the lower triangle would be exactly the same. And on the diagonal as I said, there would always be ones for obvious reasons. Okay, now we have a measure of term similarity. How can we use it now to add related terms to each document? Okay, now comes a complicated formula, but we will try to understand it. Okay, given a document D that is represented as a set of terms, then the weight assigned to a term T with respect to this document D is one minus this product here. So let's first look at some obvious example. Let T be some term occurring in the document. So this would mean that when we take the product over all terms contained in the document, then also we would have T in some factor here. And the similarity between T and T would be one. This thing would be zero then and the whole product would also be zero then. That means the weight of each term actually occurring in the document would also be one because one minus zero stays one. Okay, that's the easy part, but not very helpful because we already know that there should be a one at the end. Now here's the idea how it's done for other terms. So if T is now a term not occurring in the documents, we compare T to all terms actually occurring in the document. And for each of these terms, we compute the similarity of the term T to each product. The term U of the document. So if this similarity is very low for all terms occurring in the document, for example, I have a document about computer science and I have some biological terms like, term like genetics, then the similarity between genetics and all terms occurring in the documents would be rather low. This means this is always low for every factor and so this is close to one in every factor and the product of many ones tends to be rather large and one minus a large number tends to be close to zero in this case. So terms that are unrelated to all terms occurring in the documents get a low fuzzy weight here. So it's a bit complicated but on the other hand, if I have some term in the document, so for example, our science and research example, when comparing our term, for example, this is science and research actually occurs in the document and both have a high similarity, and then this whole thing would be close to zero and zero multiplied with something, some other factors would still be close to zero and therefore the documents get assigned the term science with a high weight. So maybe you think about this again at home. It looks more complicated than it really is because all this minus one and multiplication makes it a bit fuzzier than it should be. But this is basically the idea. Try to compare each term to all terms occurring in the document and judge how similar this new term is to the terms of the document. And if there's some term that's highly related to the new term, then assign it to the document with a high weight. Otherwise, leave it out. Okay, next example, below here is our jacar matrix. I just use an example and here again our three documents as in the previous example. Okay, the ones are assigned because the terms are actually occurring in the documents, so step occurs in document one, man also occurs there on mankind also, document two is step man and China and document three is step and mankind. So these are the ones. Now to the difficult part, document one, to what degree do we assign the term China to the document? Okay, we take the term China. 
This is our T here and then compute the product over all terms occurring in the document. So it's three products, it's three factors, one for step, one for man and one for mankind. These are our use. And for each of them, the factor is one minus the jacar similarity between China and step and China and man and China and mankind and this would be for China and step zero three three is one third. So one minus a third is two thirds for man, it's zero point five, one minus zero point five is a half and for mankind, the similarity is zero and one minus zero makes one. Assuming the product of all three is this is a third. Now takes a final one minus this weight and we get two thirds here. So in China gets assigned with a high weight through this document because mainly because China often co-occurs with step and man in other documents. Chopin man seems to be highly related to China, we know this from document two and because we also have step and man in document one, we assign China to it with a degree of two thirds. So that's works much better for larger collections when we really, really have some data, we could analyze to find semantic relationship between terms but that's a general idea. So just assign new words to each document that are related to the terms already contained in the document. So now we have weights and a way to use Boolean formulas to ask queries and some way to calculate the results by using the minimum and maximum and now we have a complete retrieval model that we can use. So what are the disadvantages and advantages of this model? So as you have seen the computation of these fuzzy membership weights usually tends to be quite difficult because if you are first to compute the similarity between any terms per pairs in your collections then you have number of terms times number of terms computations. This is usually rather large for large document collections and assigning these weights is also problematic because when using fuzzy retrieval all weights must be within zero and one. So we couldn't just simply use the number of times a term occurs in a document for doing this, we have to normalize in some way to the run and one needs to have some clever methods to do this. It's not too easy actually. So then we have seen there might be problems with intuitiveness when it comes to query processing. Granted there are scenarios where this really sounds reasonable but these are scenarios usually are some kind of spirit of databases but for information retrieval we need something bit more intuitive than us taking the minimum and maximum. So there is some mathematical theories I said using so called T norms and T code norms to define alternatives to the minimum and maximum that has to satisfy some mathematical conditions and these T norms and T code norms can use some weighted way to compute the scores but usually that's not done when doing fuzzy logic so it's not too popular and information retrieval doesn't use it in this way. Okay what are the advantages of the fuzzy retrieval model? Of course we are now able to use non-binary term assignments to documents which is very intuitive because some terms are highly typical for document or characteristic and some just aren't and this could be reflected in these weights and by doing this as we have seen we are able to find documents that are related to query terms but do not contain it. 
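Before listing the remaining advantages, the weight formula just worked through can be written as a short sketch. It reuses the jaccard function and example documents from the sketch above and reproduces the two-thirds value for China in document one; this is only an illustration of the Ogawa-style weighting, not a full implementation.

```python
# Fuzzy document weight in the style of Ogawa et al. (1991):
#   weight(t, D) = 1 - product over all terms u in D of (1 - jaccard(t, u))
def fuzzy_weight(term, doc_terms):
    prod = 1.0
    for u in doc_terms:
        prod *= (1.0 - jaccard(term, u))
    return 1.0 - prod

d1 = {"step", "man", "mankind"}
# jaccard(China, step) = 1/3, jaccard(China, man) = 1/2, jaccard(China, mankind) = 0
print(fuzzy_weight("China", d1))   # ~0.667, i.e. 2/3: China co-occurs with step and man elsewhere
print(fuzzy_weight("man", d1))     # 1.0: man actually occurs in the document
```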
For example the research and science example and the third big advantage is we now have ranked result sets we just can compute a large list of all results and put those documents on the top that have a high graded membership score when evaluating our query formula. So basically the first step towards the search engine as we know it. Alright since I'm alone I have also to do this wonderful detour it's about the philosophy behind fuzzy logic so it's an easy thing to say just we forgot about binary memberships but we say yeah it's graded in some way so there could be a membership degree in some set of 0.25 but what does this mean intuitively? We have seen we can do some calculations with all these things but what the meaning of this? So there are many possible interpretations for example it could be some probability so if I say something x is contained in the set A with degree of a quarter then this could be that it is contained with the probability of a quarter so in some cases it is contained some cases it is not this could be one way to see it then it could express missing knowledge or I'm to 25% sure that x is contained in the set A yeah might be one way well only yeah just a quarter of x is contained in A just a small part of x could be interpretation maybe something else I just forgot maybe it's also complete nonsense and as I said this is what most people thought when Lot Fizzardi proposed his idea of fuzzy logic or the logicians from the 50s and 60s just thought hey this man is crazy for what do we need a graded membership it doesn't make any sense and so Zaddi also worked on an intuitive interpretation of his idea so and in his view fuzzy sets describe so-called possibilities so for example I have two statements here Joachim is 29 years old and Joachim is young and then possibilities about the degree of compatibility so is 29 young doesn't sound this reasonable yeah and this degree of compatibility it's exactly what fuzzy membership degrees try to measure how reasonable a combination of things sounds so assigning the term research to a document about science with a degree of 60% sounds reasonable to 60% maybe so that's the interpretation that's usually used in when dealing with fuzzy sets okay the clear focus here is on imprecise concepts because you cannot define young with crisp borders young is yeah it's a vague concept it's fuzzy and that's because it's fuzzy logic because you cannot give clear borders if there's an age line zero years and 100 years then young is somewhere here to here maybe or starting here but it's not easy to define borders here so it's about imprecision and vagueness fuzzy logic is not about missing knowledge so when when assigning a degree of compatibility of 29 to young then I don't want to express all I don't know how young is defined assuming that there is some definition I just don't know it it's it's simply simply saying well there is no clear definition so it's not that I don't know the definition it's that there is no definition and this is quite natural to human language because we are using fuzzy statements all the time so I said this is rather large he is quite big something in this style okay this one of Zadie's original examples used in one of his papers Hans 82 8xx for breakfast so here we can see the difference between possibility and probability for example we have some background knowledge about Hans and we know that usually he eats 2x sometimes he eats just one egg for breakfast and when he's very hungry he eats 3 but he never eats more than 
3 so no 4x no 5x and not even more so using this knowledge we can we can construct a probability function that says yeah when chosen some random morning then with a probability of 80% he will eat 2x because he normally eats 2x and maybe he eats with a probability of 10% only one egg and with a probability of 10% exactly 3x that's probability possibility would assign this numbers here it's completely reasonable to assume that he may eat one egg he also may eat 2x eating 3x is also possible for human being 4x yeah well if you're hungry this might actually work 5x then things become complicated and when you try to eat even more eggs then yeah you can imagine what happens then so we can also see that when using possibility you don't have to sum these values don't need to sum up to 1 here in this case probabilities as you know always need to sum up to 1 and here 1 means well that's completely possible there's no doubt about that this might happen and here if I tell you that someone ate or that that hannes ate 8x this morning then you would you wouldn't believe me probably and so this is actually what possibilities about sometimes possibility and probably are combined in a way that possibility provides some some upper bounds for probability so probability says this is what actually happens and possibility says well that's completely reasonable that this is a function if I would assign the weight point 0.8 here then we could easily see because that's not very possible that this wouldn't be a very clever assignment and there's possibly an error that's how possibility is used so another example that I read somewhere is thinking about a glass of water and assuming someone just gives me a glass of some clear fluid and I don't know what's in it but I know it's either poison or it's water but I don't know which it is so in probability theory you would say well the glass is either full of poison or full of water for example my degree of belief that this glass is full of poison is 20% so that actually might be the case but it's highly likely that it's only water so my chance of dying here is only 20% so but in probability you have to decide the glass is either completely full of water completely full of poison and in possibility theory you could think of this as a mixture of both so it could be a bit poison a bit water so it could be water be poison or poison or water so this is what possibility theory tries to formalize in some way. Okay we will next do coin and double matching this is a quite short topic and then make a small break. 
Okay we have just dealt with set of words queries so each document is just about document contained in a binary fashion but we have also seen this example here which is quite complicated to use another alternative are back of words queries these are the kind of queries used by search engine like Google you just type in a set of keywords you are looking for and don't want to use some complicated formulas and queries then are just documents so whatever you put into the search box of your search engine in some way is a document it's usually a very small document but it is a document and that's basically the idea behind all retrieval models I am presenting in the following and this approach goes back to some idea of Hans Peter Luhn we also saw him in the last lecture he worked for IBM and developed one of the first information retrieval systems and his simple idea was when someone is looking for a document then this person simply should write some words that describe this document and then we compare this description to all documents in the database so queries are documents and the documents in database are documents and now we only need a method to compare documents for similarity we don't need to distinguish between queries and documents and don't need to use some formulas or some special handling of queries we just can say each query is a document and now we only need to compute the similarity between documents which could be made quite easy here in this simple framework and most retrieval models as I said are based on these type of queries and as you know this is a standard querying in all search engines so coordinate level matching is a very simple way to answer these back of words queries. The simple idea is if some document in the collection has exactly n different terms in common with your query then the relevance of this document can be measured by the number n if a document has only n minus 1 terms in common with your query then the relevance score would be n minus 1 in this way and the coordination level is also called the size of overlap here and you just count the number of terms that a query and a document have in common so it's actually really easy and this query could be answered just by sorting the document collection by this coordination level and then returning the 20 documents having the highest scores that means having the largest overlap with your query document. So again an example of our three documents the query is man and mankind and now we are looking for the overlap between each document and the query and we see that in document 1 man and mankind is contained so the overlap is 2 here only man is contained overlap is 1 here is only mankind is contained overlap is 1 and so our ranking would be document 1 in the first place with a score of 2 and documents 2 and 3 on the second place both with a score of 1. Another query China man and mankind there is an overlap of 2 in documents 1 and 2 and only an overlap of 1 in document 3 and now we get ranked results really easy that's the most simple way to deal with this kind of queries so that usually doesn't suffice and the most popular typical model is a vector space model which we will discuss after the break in 5 minutes. 
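The coordination-level matcher from this example fits in a few lines; a minimal sketch, using the same three documents:

```python
# Coordination level matching: score = size of the overlap between query
# terms and document terms; rank documents by that score.
docs = {
    "d1": {"step", "man", "mankind"},
    "d2": {"step", "man", "China"},
    "d3": {"step", "mankind"},
}

def coordination_level(query_terms, doc_terms):
    return len(query_terms & doc_terms)

def rank(query_terms):
    scores = {d: coordination_level(query_terms, terms) for d, terms in docs.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank({"man", "mankind"}))           # d1: 2, d2: 1, d3: 1
print(rank({"China", "man", "mankind"}))  # d1: 2, d2: 2, d3: 1
```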
Alright then let's continue with the vector space model so this is probably the most important model you need to know in information retrieval and actually if you are building an information retrieval system on your own this is probably the first approach you are going to try here and it's one of the oldest and most successful approaches also so what is the idea underlying the vector space model so it comes from the idea of information spaces let's call it this way for example if you go to the library then usually the books that are related in some way are standing side by side so here is the shelf about computer science over there is the shelf about genetics and on the other end of the library is the shelf about psychology and documents are ordered by topic in some way and the idea of vector space model was whether we could transform this location principle to information retrieval models in some way so just group similar documents together in some space not three dimensional maybe much more higher dimensional but similar documents about the same topic should have the same location in this abstract semantic space so in similarity between documents could be measured by some proximity measure like the Euclidean distance here for example the distance is small and this distance obviously is large and as we are now using back of words queries each query is a document and we just can compare queries to documents as we can compare documents to query to documents really easy thing we just need a clever representation of our documents in some space we need some coordinates okay the vector space model was proposed by Gerard Sorton the famous Sorton who received an award named by himself one of the fathers of information retrieval and here the ideas the documents and queries are represented as a point in some n dimensional real vector space so n usually is very very large like half a million or so and n is the size of the index vocabulary so the ideas each term that occurs somewhere in your document collection produces an axis in this space so if you have 10 different terms in your whole collections because it's a very small collections then you have a 10 dimensional space and each document is located somewhere in this 10 dimensional space so and how do we get these coordinates if terms are axis yeah well we simply use the number of occurrences of each term as coordinates so for example if the term science occurs two times in document one then document one gets the coordinate two on the axis assigned to this term so really easy so coordinates are the incidence vectors of documents so simple example document one contains this term step and China where step where China occurs three times and document two contains also step and China and step occurs two times and China only once and document three just contains the term step only once and then since we have two terms step and China in our collection we have two coordinate axis so I restricted myself to two axis here because painting would be a bit difficult with more axis and then we could easily locate the documents in this space so document one would have a coordinate one on the step axis and coordinate three on the China axis and the same could be done with other documents and documents that are related in some way are close to each other that's idea so you cannot easily see this in this example because it's so small but documents that are very very similar tend to have the same coordinates in the space and also tend to have small distance are 
similar in some way okay now how to define or formally define similarity or proximity in some space of course there is some some tools from mathematics which is called a metric so a metric on some set is a function that compares two elements of this set to each other and returns a real number that should measure the distance between these two items so there are some properties a method a metric should have one is non negativity so these numbers are zero or positive the next one is that the this value should be exactly zero if those elements to be compared are the same so if I comparing comparing document one to document one then the distance should be of course zero it should be symmetric because the distance between document one and document two is the same as the distance between document two and document one and it should satisfy the so-called triangle inequality if I have two points that I want to compare then this d of a b should be smaller than if I have a third point going some some way around here then I would have a distance between a and c and distance between c and b and this simply means that going the direct way is shorter than going some detour so that's a triangle inequality some basic mathematical features and measure of distance should have and one popular example is a Euclidean distance you just take individual coordinates of both elements to be compared takes the difference square it sum it up and take the square root of it and that's exactly the kind of difference we are using naturally in space so it's an easy one okay what's the geometric meaning of the Euclidean distance yeah it's it's simply it's simply distance in space as we know it as I said for example all documents on this circle have a distance of one from document one so it's always about circles and that's all documents having the same distance from it so you know this from school quite easy okay another concept that is related to metrics is a similarity measure so metrics measure distance the larger the more dissimilar some objects are and a similarity measure measures similarity so again we compare two objects and now we have a score that lies between zero and one zero zero means that these two objects are maximally similar for example they are identical and zero means that they are maximally dissimilar so are completely unrelated so it all it always between zero and one and it's not just a real number as with a metric okay there is some there is no generally agreed mathematical theory about what the similarity measure should have regarding properties but the there's one simple simple measure that's called the cosine similarity in vector spaces and it's simply the angle between the coordinates of these two points I think I have an example of that yes so okay when comparing two documents for example document one and document two I just draw a line from each document to the origin measure the angle here and take the cosine of this angle because the angle is between zero and ninety degrees and since our measure should similarity measure should lie between zero and one where one means maximally similar we take the cosine and the cosine of zero is one so a document being very very similar to the document one would have a very small angle here and a large cosine documents which are highly dissimilar document a here and document b here would have an angle of 90 degrees and the cosine 90 degrees is zero these documents are maximally dissimilar so in all documents lying on this line are equally similar line similar 
to document one for example or to each other document that's a that's a general idea of cosine similarity okay how do we do we compute the angle between two vectors so in drawing it and measuring it it seems rather easy so we need some mathematics here so we know the cosine of this angle is the dot product between the two vectors so the coordinates of document a and x and coordinates of document y taking the dot product that's simply the sum over all products taken on individual coordinates in Germany it's more popular under the name scalar product and then we divide it by the length of both vectors because the length of a vector doesn't matter as long as the direction is the same so sounds pretty obvious yeah length here is Euclidean norm just take the square of each coordinate sum it up and take the square root pretty easy standard mathematical operation so determining the similarity of the cosine similarity between two vectors simply means computing this determining the length of each vector and dividing this product by these lengths rather easy okay now we can come back to our coordinate level matching because this is only a special case of of this vector space model when using cosine similarity let's assume for the moment of for this slide only that our term vectors only contain binary term occurrences so no weights and then the scalar product of the query vector x and the document vector y is exactly the coordination level of x and y because if these things only are one or zero then this sum is simply the size of the overlap between the query and the documents the number of terms occurring in both so coordinate level matching is a special case of the vector space model with cosine similarity okay now we have seen their metrics there are similarity measures there is Euclidean distance for example there is cosine similarity there are many many many more similarity measures so which one to use usually this is a very typical thing for information retrieval there is no correct answer to this question it always depends on the type of data you're dealing with for some document collections Euclidean distance could be a really great idea for some collections cosine similarity could be a great idea since both are in some way intuitive and usually you have to try what you want to use so the most reasonable baseline mostly is using the cosine similarity because Euclidean distance has some problems and usually it is the case that two different measures you could use behave somewhat similar but not always for example here is a comparison between Euclidean distance and cosine similarity and as you can see if you have two documents having a quite low Euclidean distance so between these two points the distance is rather small then also the cosine similarity will be rather high because the angle is small so this is the case but it might happen that if you have a high cosine similarity then the Euclidean distance could be completely different because if you have some more dimensions then in this dimension that could be a large Euclidean distance although the angle is rather small so these measures usually are quite different and depends on the type of document collection you're dealing with. 
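To make this concrete, here is a minimal sketch in Python (my own illustration, not from the lecture; the term vectors and document names are made up) that computes the Euclidean distance and the cosine similarity between two term vectors, and shows that with purely binary occurrence vectors the dot product is exactly the coordination level, that is, the size of the term overlap.

import math

def euclidean_distance(x, y):
    # d(x, y) = sqrt(sum_i (x_i - y_i)^2)
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

def cosine_similarity(x, y):
    # cos(angle) = <x, y> / (|x| * |y|)
    dot = sum(xi * yi for xi, yi in zip(x, y))
    norm_x = math.sqrt(sum(xi * xi for xi in x))
    norm_y = math.sqrt(sum(yi * yi for yi in y))
    return dot / (norm_x * norm_y)

# Hypothetical term-frequency vectors over a three-word vocabulary.
d1 = [3, 1, 0]
d2 = [6, 2, 0]   # points in the same direction as d1, so cosine similarity is 1.0
d3 = [0, 0, 5]   # shares no terms with d1, so cosine similarity is 0.0

print(euclidean_distance(d1, d2))   # > 0 although the documents point the same way
print(cosine_similarity(d1, d2))    # 1.0
print(cosine_similarity(d1, d3))    # 0.0

# With binary occurrence vectors the dot product counts the shared terms,
# which is exactly coordination-level matching.
q  = [1, 0, 1]
db = [1, 1, 1]
overlap = sum(qi * di for qi, di in zip(q, db))
print(overlap)                      # 2 shared terms

The printed values for d1 and d2 show exactly the point made above: the Euclidean distance sees a difference while the cosine similarity does not.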
We have seen that cosine similarity does not depend on the length of the documents and queries you're dealing with so it doesn't matter whether a document occurs step and China each only once or China and step occurs twice in the document or three times it's always the same vector here and for measuring cosine similarity it's always the same so cosine similarity focuses on comparing the relative amounts of how often each term occurs so it simply is about we know that China occurs more often than step or both occur equally often in the document but it doesn't matter how often they appear so this is also related to the length of documents so if document 2 is just document 1 plus a copy of document 1 pasted at the end so D1 and afterwards D1 again and this is called D2 then it would have the same properties regarding to the cosine similarity so but using Euclidean distance these two documents would be different because they differ in length so and depending on what measures you use it might be important to distinguish between documents that have different lengths or not. You could also use some normalizing function to account for document length so most popular is for example dividing each coordinate by the vectors length and then you get a length 1 vector so here again our example two documents which are copies of each other if you normalize both to length 1 then this one would maybe be mapped to this point and this one would be mapped to this point because we are dividing each coordinate by the same number and then you get a length 1 vector and both are exactly identical and then even the Euclidean distance would treat both documents as being the same. One could also divide each coordinate by the vectors largest coordinate or by the sum of coordinates so when using the normalization by the length you use a square here a square here and a square root so that's the difference between these two. So it's also a matter of taste there's no clear rule how to do it usually one doesn't do any normalization and just applies the cosine similarity because it's independent of any length issues so if you're using measures like Euclidean distance you need to care about normalization issues depends on your collection as I said. Okay special case is our first option normalizing two unit vectors and in this case all documents and queries after normalizing are located on a circle around the origin or in higher dimensions on a unit sphere, hypersphere around the origin and in this case Euclidean distance and cosine similarity are rather identical because if you compare these two documents they have similarity that depends on these angle alpha and if you measure the Euclidean distance between these two or these two it also is proportional to the cosine similarity so normalizing two unit lengths or unit norm produces a document representation in which Euclidean distance and cosine similarity are identical so rather tricky. 
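A small illustration of this normalization point (again my own sketch with made-up numbers): after dividing each vector by its Euclidean length, a document and its pasted-twice copy become the same point in space, and on unit vectors the Euclidean distance behaves exactly like the cosine similarity.

import math

def normalize(v):
    # Divide every coordinate by the Euclidean length, giving a unit vector.
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]

def euclidean(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def cosine(x, y):
    return sum(a * b for a, b in zip(x, y)) / (
        math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y)))

d1 = [2, 1, 0]
d2 = [4, 2, 0]            # d1 concatenated with itself: every count doubles
print(euclidean(d1, d2))  # > 0: plain Euclidean distance sees a difference
print(euclidean(normalize(d1), normalize(d2)))  # 0.0: after normalization they coincide

# For unit vectors, |x - y|^2 = 2 - 2*cos(x, y), so ranking by Euclidean
# distance and ranking by cosine similarity give the same order.
u, v = normalize([1, 3, 1]), normalize([2, 1, 0])
print(euclidean(u, v) ** 2, 2 - 2 * cosine(u, v))  # the two numbers agree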
Okay, sometimes normalization is not a very good idea, because if you have longer documents, that might indicate that they cover some topic more in depth. So a book about quantum physics is probably more helpful than just a one-page summary of the idea if you really want to know about quantum physics; however, both documents could have a very similar representation in vector space and basically only differ in length, so there you need to account for document length in some way. How could this be done? One way would be first to compute the query result on the normalized documents and the query, and then give the long documents some bonus in the ranking so that they appear higher in the final ranking, proportional to their length; because otherwise a highly relevant short document and a not-so-relevant long document might appear at the same rank position in the result. What you also could do is measure the effect of document length on relevance in your current collection, try to determine what the actual effect of document length on informativeness is, devise some clever model for this, and then use this model to re-rank your results according to document length. It also depends on what you're going to do with your collection and how your collection actually looks. Cosine similarity basically does ignore the length, because only the relative frequencies of the terms in a document matter, as we have seen in this example: if you have many documents that differ only in length but not in which terms appear, then they would all have the same cosine similarity to this one here. It doesn't depend on a single term; it depends on how the frequencies of the different terms are related to each other. So if you have a document where one term occurs 10 times and the other term occurs 20 times, then for cosine similarity it's the same as a document in which the first term occurs only once and the other term occurs twice. So length is only ignored by cosine similarity if the contents are more or less copies of each other. But since we are dealing with the bag-of-words model and ignore word order, even documents that look like copies of each other could be completely different. So there could be a large document with really much information in it that, in the bag-of-words model, looks just like the first document copied twice, but it might have completely different content. So that's the problem. Okay, how could we now use this vector representation for ranking our results? Hans Peter Luhn made the following observation: if some words are repeated in a document, occur very often, then they seem to be important, of course. In my previous example, if I have a document about history, then probably the term history would occur very often, and in the bag-of-words model this is expressed by the so-called term frequency. As a shorthand we will use the notation tf(d, t); this denotes how often the term t occurs in document d. So, just term frequency. What also has been recognized is that if you have a large collection of documents, then some words might be highly specific and some words might occur in almost every document. For example, words like "the", or "introduction" in scientific papers: these are terms that occur in every document and are not very typical for any particular document.
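As a quick sketch of these two quantities (a toy example of my own, not the lecture's): tf(d, t) is just a word count per document, and the document frequency df(t) counts in how many documents a term appears at all.

from collections import Counter

# A tiny made-up collection, already lowercased and tokenized (bag of words).
docs = {
    "d1": "history of the roman empire and the history of rome".split(),
    "d2": "introduction to the theory of computation".split(),
    "d3": "the history of computing machinery".split(),
}

# Term frequency: how often each term occurs in each document.
tf = {name: Counter(tokens) for name, tokens in docs.items()}
print(tf["d1"]["history"])   # 2 -> "history" looks important for d1

# Document frequency: in how many documents a term occurs at least once.
df = Counter()
for tokens in docs.values():
    df.update(set(tokens))
print(df["the"])      # 3 -> occurs everywhere, not very specific
print(df["roman"])    # 1 -> rare in the collection, highly specific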
So here in this example: if you have a corpus of documents about psychology, then the term psychology might not be very characteristic for a document, because many documents contain the term psychology. But in a computer science corpus, if there is one document containing the term psychology, then this document seems to be special and seems to be relevant to psychology, because the word psychology occurs much more often there than in a typical document of this collection. So the idea is to measure the discriminative power of a term by some number; for example, the discriminative power of psychology would be low in a psychology corpus and high in a computer science corpus. There are some ways to formalize it, but in general you would like to have the following: if a term occurs more often, then you would like to have a higher term weight, higher term frequency; and if the term has a higher discriminative power, is more specific, then you would also like to have a higher term weight. So basically the idea is to weight terms by the term frequency multiplied by their discriminativeness. So if psychology occurs in a computer science document, then it might be equally characteristic for this document as the term computer, which also occurs very often; and since computer occurs in almost every document of the collection, it's not very typical for any single document. This principle was formalized by Karen Spärck Jones in the 70s, who introduced the TF-IDF measure. The general idea was that the specificity of a term is exactly what the discriminative power is about. The question then is in how many documents some term is contained, and the specificity of this term is negatively correlated with this number. So if a term occurs very rarely in a collection, then it seems to be very specific, and if it occurs everywhere, then it's not very specific. And the more specific a term is, the larger is its discriminative power, the better it can be used to distinguish different documents or the topics of documents. The example about psychology in a computer science document strongly indicates that this document is somehow related to psychology. So this leads to the development of the TF-IDF measure. DF is the document frequency; for each term that's basically the number of documents containing the given term, and this measures the specificity of this term. Spärck Jones proposed the so-called TF-IDF term weighting scheme: the weight of each term in some document d is simply the term frequency in the document, so how often this term occurs in the document, divided by the document frequency of this term. And since you divide by it, it's called the inverse document frequency, and therefore you have TF-IDF. So some term gets a high weight for a document if it occurs very often in it, or if it occurs very rarely in other documents, because if it then occurs in this document it seems to be very important; other people are not talking about it. You could also use a more refined weighting scheme. For example, Spärck Jones proposed that the relationship between specificity and inverse document frequency should be logarithmic: if you double the inverse document frequency, then it should only count as a slight increase in perceived specificity, and this is a weighting scheme that just works well. Whether you use the logarithm or not is just a matter of designing your algorithm; it depends on your collection, you just have to try it out.
So usually the most common form of TF-IDF is this one: you take the term frequency, how often a term occurs in the document, and multiply it by the logarithm of the number of documents in the collection divided by the document frequency, with a small constant added so that the argument of the logarithm, and hence the logarithm itself, never gets zero for very frequent or very rare terms; the document frequency in the denominator also gets plus 0.5 so that the argument of the logarithm stays well defined. This N divided by the document frequency is normalized with respect to the collection size, so it basically reflects the relative number of documents in which this term occurs. If you have terms that occur everywhere, that's not worth mentioning; if you have terms that are very rarely used in the collection but appear in this specific document, then this term seems to be important. That's the way it is used. Looking at the time, I'm skipping this one. This is another approach to term discrimination: you could also try to analyze the similarity across documents, find out what the influence of each individual term on similarity is, and try to measure the term's importance by this effect, but that's rarely used, so I want to skip it here. Now we have seen some different models of how retrieval could be done: Boolean, fuzzy and vector space. And at the beginning of the 80s, Gerard Salton and his students analyzed how good the results of these different models are. How these numbers are computed will be a topic of one of the next lectures; what's only important here is that higher numbers mean better result quality. So on some different document collections, some medical documents and these Communications of the ACM, a computer science journal, they defined some example queries and then compared the results of their algorithms to what humans would judge relevant. If the results are very close to what humans would judge relevant, then the score here would be rather high. We can see that the vector space model usually gives the best results, here with some large distance to the other retrieval models, and Boolean and fuzzy retrieval usually are not that good on these typical text collections. So you can see that when the vector space model was invented it was a huge step forward, because result quality increased a lot. Okay, what are the advantages and disadvantages of the vector space model? Big advantage: it's easy to understand. Documents are points in space, and querying is just writing a document, the query, and comparing it to the documents in the collection. As we have also seen, it can be highly customized to the individual collection. For example, you can use different distance and similarity functions to measure how similar two documents are, or how similar the query is to some document. You could apply normalization schemes for document length. You could use different methods for term weighting, whether you include a logarithm or not, or some other weights. So the vector space model basically provides a framework in which you can plug in different functions, different schemes and different term weighting methods, and you are able to tune your retrieval algorithm closely to the specific properties of your collection. Fuzzy retrieval is rather limited in what you can do there; the only choice you have is changing the way weights are assigned. With the Boolean retrieval model you have exactly zero degrees of freedom: you just take the terms that appear in the documents and then query with Boolean formulas. There's no way to change anything here.
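To show how these pieces plug together before we continue, here is a compact end-to-end sketch of my own (not the SMART system): documents become TF-IDF weighted vectors, the query becomes a vector in the same space, and results are ranked by cosine similarity. The smoothing constant 0.5 inside the logarithm is one plausible reading of the formula described above; other variants work just as well, and the toy documents are invented.

import math
from collections import Counter

docs = {
    "d1": "china exports rise as trade with china grows".split(),
    "d2": "sheep farming in rural areas".split(),
    "d3": "china and sheep wool trade".split(),
}
N = len(docs)

# Document frequency of every term in the collection.
df = Counter()
for tokens in docs.values():
    df.update(set(tokens))

def tfidf_vector(tokens):
    # weight(t) = tf(t) * log((N + 0.5) / (df(t) + 0.5)); unseen terms have df 0.
    tf = Counter(tokens)
    return {t: tf[t] * math.log((N + 0.5) / (df[t] + 0.5)) for t in tf}

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vectors = {name: tfidf_vector(tokens) for name, tokens in docs.items()}
query = tfidf_vector("china trade".split())

for name, score in sorted(((n, cosine(query, v)) for n, v in vectors.items()),
                          key=lambda x: x[1], reverse=True):
    print(name, round(score, 3))

Every piece of this pipeline, the weighting scheme, the similarity function, the normalization, could be swapped out independently, which is exactly the flexibility discussed next.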
But with a vector space model you are highly, highly flexible, and, what's most important for information retrieval, it just works. Results are pretty good, and that's why, as I said, it's one of the most often used models in information retrieval; most probably, if you find some basic retrieval system somewhere, then it will use the vector space model in some way. What's also a pro is that relevance feedback is possible in the vector space model. That means that, when given a list of results, users can simply say: this document is relevant, this document is not relevant. Based on this feedback you can easily compute a new list of results with the vector space model. Okay, what are the disadvantages? Of course you have to deal with quite high-dimensional vector spaces. Each term is an axis in the space, each vector then has a lot of coordinates, and you have to work with them in some way. You have to compute dot products, and for this you need some specialized algorithms; we will do this in the next lecture or the lecture after that. The vector space model also relies on some hidden assumptions which usually are made but not clearly stated. For example the cluster hypothesis, which simply means that documents that have a high similarity, that are located in similar positions in space, tend to be relevant with respect to the same queries. This is basically about the idea that we can apply the bag-of-words model for representing documents. Of course it could be the case that you have two documents with the same bag-of-words representation that have a completely different meaning, because at the right places there is a "not" in the document and it conveys the opposite topic or the opposite opinion, but still these two documents have the same representation in our space. So this is the cluster hypothesis. Another assumption is the independence assumption: as we take one axis per term and all axes are orthogonal to each other, we simply assume that the occurrence of one term is independent of the occurrence of another term. This is highly problematic when dealing with synonyms, because for synonyms both terms are related: if one term occurs, then the other term also tends to occur in the same document. So these two axes shouldn't be two independent axes; there should be just one axis for the two synonymous concepts. In three weeks we will see how we can deal with that in a very clever way. Now a short detour about manual versus automatic indexing. Classically, in library science, for a long time librarians thought the only way of indexing documents, that means assigning keywords to documents, was to do it manually, like it is done in the MeSH classification: there are some librarians who simply assign keywords to documents that seem to be highly related to the documents, and this is considered the best way to do it. Manual assignment might not be very efficient, but the idea is that the result quality, the quality of the document description, is so high that the manual work is worth the effort. In modern IR and web search you automatically assign index terms to documents, just by assuming that each word in the document is an index term. This is our bag-of-words model: each document is a bag of words, whereas in classical libraries you would just have assigned the four or five most representative keywords, like in these classification systems.
And here IR says: efficiency, so doing it automatically, doing it fast, is more important than finding exactly the right keywords to describe the documents. And if something strange happens, if you for some reason choose the wrong index terms because some strange words occur in the document that are basically unrelated to the topic, then this just happens, but at least we can do it fast. So basically there are these two lines of thought: doing good document descriptions with a large amount of work, and doing fast document descriptions with doubtful quality. And in 1960 the situation was like this: you could either achieve a high quality of your library index with a lot of work, or you could use an automatic indexing approach with assumed low quality that could be created easily. What you really want is something like this: an easily creatable, high-quality index. So the research question at this time was how we could speed up or simplify this manual indexing process to get to this sweet spot here, where we really want to be. They tried some clever classification methods and tried to pre-process the documents so that you don't need so many experts. For example, you could take a bunch of students pre-classifying the documents and then the experts only do some fine tuning, which could speed things up. So this is one line of research that was done at the time, in the 60s. Another approach was taken in the Cranfield research project, and they wanted to look at exactly this problem: how to make indexing more effective and efficient at the same time. They investigated 29 novel indexing languages that should enable librarians to index documents better and faster. They also evaluated some information retrieval systems regarding the quality of indexing, and the really surprising result was that automatic indexing leads to at least as good results as careful manual indexing. So, their conclusion: you don't gain any advantage by doing manual indexing, because automatic indexing already is up there and not down here. Manual indexing is inferior to automatic indexing, and nobody actually could believe it at this time. Cyril Cleverdon was the leading scientist in this Cranfield project, and there are some statements he made some years after discovering this result. For example: this conclusion is so controversial and so unexpected that it is bound to throw considerable doubt on the methods which have been used. So he couldn't believe himself that his automatic measures could be better than manual indexing. They decided to do a complete recheck of their automatic indexing approach, but everything seemed to be correct. And since they didn't find any possible problems, they had to accept the result, and it was rather surprising because, as he says here, there is no other course except to attempt to explain the results, which seemed to offend against every canon on which we were trained as librarians. The thing is, librarians had for decades been trained that only manual indexing is the right way to do it, and here comes this project and it shows that automatic indexing is just the better way to do it. So librarians become obsolete, at least for indexing. Another approach was the SMART system by Gerard Salton. That stands for System for the Mechanical Analysis and Retrieval of Text, and it was also a large information retrieval system developed in the 60s.
Again, one of the first approaches to process documents automatically and make a step forward from classical libraries to automatic processing of documents. Actually, he was born Gerhard Anton Sahlmann, a German immigrant who came to the US, and some people also say Jerry Salton was information retrieval, so one of the major figures in the field for several decades. SMART had the first implementation of the vector space model and also of relevance feedback, which we will see in a few weeks from now. Here's a picture I found of the hardware they used. Computers had just become quite popular in large companies for doing calculations, and this is an IBM 7094, which was actually used for the SMART project. And usually this looked like this: you have some control desk with some guy from IBM in a suit, you gave him your programs and he put them somewhere into his machine, and this is how computing worked at this time. Here's a quote about the speed of this machine: a basic machine operating cycle was two microseconds. Two microseconds per cycle means 500,000 cycles per second (milli is a thousandth, micro is one level smaller), so compared to modern standards this is a 500 kilohertz processor. Currently we have gigahertz, so a rather slow machine. Okay, this is how SMART looks like: a text interface, and it was still being developed until the mid 90s. Here are some examples. You can read in the document collection here, then you can show some statistics on the data structures created, and then create the index containing all these inverted files, and then you can do a retrieval run. Here in this file you had to specify your queries, and then you got the results from the SMART system. Of course, as I said, when dealing with information retrieval you always need to evaluate your result quality empirically. Usually one uses some test collections that are publicly available, and for the SMART system a large variety of these collections have been used. For example, CACM is quite popular; it's a journal for computer science with mostly survey articles from many different fields. There was also a collection from library science, or even the Time magazine in the US. So a broad range of different document collections has been used to test the SMART system, and on all of them the system's result quality has been really convincing. So the vector space model doesn't just work on a specific collection; it works essentially always. Finally, a small recap of probability theory to get you prepared for the next lecture, because we will discuss probabilistic retrieval models next week, and we will have a look at some concepts we need for this: probability, independence, conditional probability and Bayes' theorem. I will make this rather quick, so you can look it up at home again to be better prepared for next week. Probability basically is the likelihood or chance that something will happen, as we have also seen with the fuzzy example. Usually you have some well-defined random experiment. For example: roll a six-sided dice, then roll it again, and if you roll at least nine in total, or if your second roll is a one, then you win; otherwise you lose. That's a game you might play, and probability theory historically has been devised to compute what your winning chances in games like this one are. So this is an example game, and you can analyze this game by looking at all the different events that can happen. This is the first roll of the dice, this is the second roll.
So what can happen? We can have six times six, like 36 different events and you win. If you have at least nine in total, or you win if your second roll is one, you also win here. And your winning probability is just the number of winning events divided by the number of all events. So you have 36 events in total. So you have 13, 14, 15, 16 winning events. So your winning probability is this here and this is less than a half. So this is probability. So this could also be done for some individual events. For example, the probability of rolling at least nine in total. This area here, these are 10 different events divided by 36, the probability of about 28%. Probability of getting a one in the second roll is just this column here. Probability is 17% and probability of winning, as we have seen, is 45% in total. So statistical independence is a concept between two events that might happen and two events are considered independent if the occurrence of one event doesn't change the probability of the other event. So for example, here's a start definition, two events a and b are independent if and only if the probability that both occur is exactly the probability that one occurs multiplied by the probability that the second one occurs. And here are some examples. These two events independent, three in the first roll and four in the second roll. Of course, they are independent because the two rolls simply doesn't depend on each other. So another question is whether two in total and six in the second roll, whether these are independent events and these are not independent events because if I already know that I have 10 in total, then my second roll must have been at least four. So I can already rule out the situations one, two and three for the second roll and these are not independent. And the same is true for this example here, 12 in total versus five in the first roll because if I already know that I have 12 in total, it could never have been happened that I had five in the first roll because getting 12 means six in the first roll and six in the second roll. Other way around, if I know I had five in the first roll, I can be sure that I don't have 12 in total. So these events are not independent. So conditional probability is exactly the probability that a specific event occurs given that I already know that some other event already occurred. So for example, what's the probability of winning the game given I got four in the first roll? So then I only took at look at events with the four in the first roll, then only six events are important here and this is a winning event and these two are winning events. And so the probability of winning the game given I got four in the first roll, if I already know that, is a half. So probability of having had a four in the first roll given I won the game is just the other way around. So given I won the game, if I already know that I won, then I'm somewhere in this area here, here or here and four in the first roll would be only these three events. So the probability of having had four in the first roll when I know that I won the game is three divided by the number of events here. That's three divided by 16. That's 19%. So that's conditional probability. In mathematical terms, the probability that both events occur divided by the probability that the event with the condition occurred. 
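Since all of these numbers come from simply enumerating the 36 outcomes, here is a short sketch of my own, mirroring the game above, that recomputes them.

from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # all 36 (first, second) rolls

def p(event):
    # Probability = number of outcomes in the event / number of all outcomes.
    return sum(1 for o in outcomes if event(o)) / len(outcomes)

win = lambda o: o[0] + o[1] >= 9 or o[1] == 1

print(p(lambda o: o[0] + o[1] >= 9))   # ~0.278  (at least nine in total)
print(p(lambda o: o[1] == 1))          # ~0.167  (one in the second roll)
print(p(win))                          # ~0.444  (about 45 percent)

# Conditional probabilities by restricting to the conditioning event:
def conditional(event, given):
    return p(lambda o: event(o) and given(o)) / p(given)

print(conditional(win, lambda o: o[0] == 4))   # P(win | four first) = 0.5
print(conditional(lambda o: o[0] == 4, win))   # P(four first | win) ~ 0.1875, i.e. 19%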
Then finally we have Bayes' theorem, after Thomas Bayes, who lived in the 18th century. He found out that the probability of A given B equals the probability of B given A, times the probability of A, divided by the probability of B. So basically he found a way to relate these two events by swapping around the condition and the event. So for example: what's the probability of having had four in the first roll, given that I won the game? Here we have P(four in first | win), and according to this formula this should be the probability of four in the first roll, divided by the probability of winning, times the probability of winning given four in the first roll. And these we have already calculated in our examples: one sixth, divided by sixteen thirty-sixths, times one half, and this makes about 19% in the end. And since we already got 19% in the example before, this is correct. Okay, here P(A) is usually called the prior probability of A, because that's the probability that A occurs before I knew anything else about what happened, and P(A given B) is called the posterior probability of A, because it's the probability that A occurs after I got some new information, and the information is that event B occurred. So the prior is my knowledge before I have any specific information, just the general context, and the posterior probability is the probability that A occurs after I know something about the situation. So when I get some new information by observing B, then the probability of A gets updated to the conditional probability of A given B. That's the idea of prior and posterior. Alright, that's it. Next lecture we will take a look at probabilistic retrieval models, and until then I wish you a nice week. Thank you.
This lecture provides an introduction to the fields of information retrieval and web search. We will discuss how relevant information can be found in very large and mostly unstructured data collections; this is particularly interesting in cases where users cannot provide a clear formulation of their current information need. Web search engines like Google are a typical application of the techniques covered by this course.
Very well. So as always, it's my pleasure to welcome you Thursday morning to the multimedia database lecture. And we do have a wonderful lecture today because we will look a little bit into video retrieval. We will see what video retrieval is about, where the problems from video retrieval come from, and then we can kind of make our way to the end of the lecture. So video retrieval today is basically about shot detection. So we will have to see how we can segment videos in an automatic or semi-automatic way, such that those scenes or those parts of the video that are concerned with some topic are segmented from each other. So that is one thing that we want to do today. Oops, didn't work. That is one thing that we want to do today and we want to go into very detailed today, shot detection. We will then look into different ways of getting these shot detections. So we will look at temporal models, at shot activity, so different characteristics of shots that may help us to determine whether this is a different topic or this is actually the same topic or what's actually happening in the video or what class of video is it, an action movie or is it a romance movie and all these kind of things we will deal with today. So let's first talk about video abstraction for a while. Video extraction basically means that we structure the content as the video into temporal and spatial features. So for example, a typical question that you might pose to a video is, find all clips in which some object falls down. What do we have? We have a certain event that is happening in the video that is shown in the video and we obviously have some activity in the video where there's a trajectory that is kind of pointing down or find the video where the cat is jumping over the fence. What do we have to do? We have to determine in which videos are cats, kind of like what we already know from image retriever and then we have the trajectory of a cat jumping over something. This is the idea of video abstraction. On one hand we have the idea of modeling the video so we have to find out what's happening in the video, what is shown in the video and somehow represent it. On the other hand we have the problem of segmentation and summarization. We want to find the parts where the cats actually jump or where the object actually falls. If you have some kind of movie and at some point some of the actor throws a ball or whatever and it falls down, you don't want the whole movie. You want the piece of the movie where the ball falls down or you want the piece of the movie where the cat actually jumps. That is what we are dealing with today. So in general on the very lowest level a video consists of frames. You have a certain number of images per second and if you play them quickly enough there is movement in the video. That was one of the early ideas of how to make movies, how to make cinema and of course this is the same in digital videos. You do have the images basically which are called frames and some of them may be more important and kind of represent what is shown in some section of frames, so-called key frames and basically you can group these frames into shots. A shot means that you take the camera, that you film something, that you record something and at some point you turn off the camera or you just pan into a different direction or you do whatever. But you stop recording or you are blending, you have cuts blending into some other topic and this is what we would call different shots. 
Above the shots or kind of orthogonal to the shots are so-called structural units. Structural unit means that we have certain visual appeals in the shot. So for example if I take the camera and make a pan over the whole room I might focus on the detail where people are in the image. That is a structural unit and if I turn further then I only have the furniture over there and the walls that is a different structural unit. So maybe belong to the same shot, so maybe one big shot where I just move the camera or may consist of several shots, I take a shot of you guys, that is one structural unit, then I turn off the camera and take a shot of the furniture. So that is somehow orthogonal and above all is so-called story unit where basically I deal with a certain topic and dealing with this topic may take several shots and of course several structural units and will cover a lot of frames. So this is the basic idea from the basic frames that I used for representing the video, I have abstract on several levels up to the story video. And if you want an example, wonderful examples are news broadcasts. So there might be a story unit before in Iraq. There might be structural units with some kind of introduction. There is fighting around the city of Baghdad, blah, blah, blah. Then we might have a transmission where we see scenes of war or where we have some reporter directly from Baghdad telling us how awful it all is or whatever. And then we may have a summary. How did the German Bundestag or the parliament or whatever react to this or what did the stock market do or how did the oil prices develop. It is all the same story because the oil prices are affected by the war in Iraq. But it is different structural units because on parts of it I see the anchorman, I see scenes of the country, I see somebody telling me what is happening in the country and I see a commentary on what actually the thing has to do with oil prices. If I go down, I can see the shots. So for example, I can see the anchorman in the studio sitting on his desk and telling me something. I can have a pan across the desert landscape, I can have the bombing of the city of Baghdad, I can have pictures of refugees or whatever. It is all different shots because obviously the refugees are not with the anchorman in the studio. So there must be cuts between them. And then you have the frames on the lowest basic level and usually you are represented by some key frame because if you see what the video is all about, I mean for example the anchorman. I don't need several shots of the anchorman, I need just one shot to see, okay there is some news anchor sitting in the studio and there is something happening or he is telling something. It doesn't change much. Different for pictures like this where you have kind of like burning oil rigs or something like that here, the parliament or something but usually I just need one picture to get the point across to see what it is all about. If I see this picture down here, I will always know it has something to do with politics because this is a parliament, the German parliament actually, and they are discussing something. They are always discussing something. I don't know what from the shot, I don't know what from the key frame. But I know this is basically what it is about, okay? And so this story unit of the war in Iraq would be well represented or abstracted, this is why it is called video abstraction, by just showing these couple of frames. 
You would basically know what is happening, you know, like, oh the anchorman tells me something, so it is kind of one of the famous German news shows, the anchorman is telling me something, then I see here war scenes in some desert country, what will it be? There are some American soldiers, the anchorman is telling me something again, and then the parliament is discussing something, you know? Without having seen anything, without having read anything, I can draw some conclusions just from seeing these images. This is the video abstraction. Good. Now the question is of course, well, I know how to see the single frames, the individual frames, but how can I group them into shots? This is kind of the interesting question, yeah? And there was actually a problem that people were concerned about quite some time. So when they said, well, why should we do it automatically? I mean, if somebody takes a shot, or if some camera takes a shot, and it's turned off, it can just annotate for what duration it actually took the shot, or somebody cutting a movie or cutting some music scene. Why doesn't he just annotate, you know, like, okay, here's a cut. And this is exactly what's happening in one of the metadata standards, that is MPEG-7. So MPEG-7 is a metadata standard that just annotates media data, just annotates video basically. And the correct decomposition is then stored in the metadata. And today we have a lot of cameras that actually geotag the images and that tell us how the shot was made, you know, lens was used, or what you name it, you know, like Zoom factor or whatever. And this kind of metadata can be automatically derived and automatically added to the actual media. This is where this metadata standard comes in, which makes it quite easy to see. But that only goes so far, because we might know what shot it is. We don't know yet what it shows. And this semantic annotation is exactly the same problem that we had in music or in image annotation. So how do we know it's a cat on the image? How do we know that some people are, that's a news anchor or that's a target shower or whatever? How do we know that? And so even if you have very good annotated or automatically annotated media data from news archives, from broadcasters, from movie makers and so on, the semantic annotation is usually missing. And by the amount that is produced currently, also considering user generated data, you have a lot of YouTube videos or something like that. What's in the YouTube video? Does anybody annotate that? No, usually not. And that can be important. I mean, most of us use YouTube videos for fun and then the cat is falling down, ha ha ha. But some of them might be important. I mean, a lot of the information, the unrest in Egypt or in Iran, in Persia, was transported via Twitter, via YouTube and different channels. So these social networking channels, people didn't annotate it, but it became interesting as a media. And I mean, more often than not, today news shows actually show YouTube videos. So we still have a problem in that bit. So that is why we want to deal with shot detection as a topic. And shot detection means that a clip has to be decomposed in the scenes, in the different scenes. And what we can easily see is that the images that belong to some scene, to an individual scene, are relatively similar. So if I see the anchorman in the newsroom, he may move a little bit or he may turn his head, but that's about it. It will be still the same visual impression. 
And this is the idea where I say, well, annotating a video on a frame-based, so every frame should be annotated on a frame-based manner, is simply not possible. It's not very useful because I can represent shots by a single frame or two frames, you know, like key frames. And so annotating these key frames is perfectly fine. It will capture all the semantics that I ever know. So we will only focus on the key frames, actually. Of course, we have to find the key frame because the camera doesn't say, okay, I'm recording this, I'm recording this. This is a key frame and then I go on recording, you know. But we have to decide for either one. It doesn't really matter. So if I see the anchorman that is kind of static, take either one. It doesn't really matter. If I take the camera here and just pan it around the room, you know, I might take one from the sequence where you guys are in the picture. It would be so much nicer as a key frame than having the blackboard as a key frame. So that is kind of the art. So if we have to find a key frame, we first have to detect the scene transitions to recover the shots and then decide for either frame in the shot to be the key frame. Okay. And if you have seen transitions, they can be either hard or soft. So a hard transition is called cut. So this is kind of something here. The anchorman tells us something and then the video is shown about the landscape or something. You can easily see that this has nothing to do with each other. But of course there are also soft transitions. So for example, the blending or the solving and the fading. So it goes to black and then comes back again. You have a new type of picture. And these are more difficult to find. After you found the transition for every area that makes a shot, you select a representative image. You can do that either randomly. So for example, for the anchorman, it doesn't really count which one you take. Just pick one randomly if they are too similar or with regard to the camera movement. If you see a camera movement, maybe you should take one from the middle of the movement because you will start a little bit off and then pan over what you're interested in. And then you will end somewhere. So probably taking something from the middle of the movie is actually a good idea. Or you take something with average characteristic values where the desert really looks desert-like. So you find that there is a lot of yellow and brown in the images in terms of color histograms. Then you take the one with the average colors or something like that of the whole movie. So there are a lot of ways or a lot of heuristics to find actual keyframes, which is very good. Good. But first comes the grouping of the frames into shots. The grouping into shots. And we have to recognize the transition. We can do that either with uncompressed videos or with compressed videos. Of course, if I have the uncompressed videos, I can exploit many of the features that we had in image recognition. So we can do color histograms and so on. But decompressing the video and then looking at every frame is very difficult to do. I mean, it's done automatically. So if I have the time in my news archive or whatever I have, it's not too big. I can just let the computer run overnight and then find all the shots and do all the things. No problem here. Or we can use compressed videos. In compressed videos, it's kind of interesting because only the data about the change is available. Compressing usually works that you start with the image. 
And then you just add the changes to the original images, the record, the changes to the original images. And when playing the video, you calculate the changes into the image. You blend the changes with the image and so the video moves. So it's not a sequence. It's not strictly speaking a sequence of images, like in the frame-based uncompressed version. But it's rather some images and from time to time you record the changes that happen. You interpolate how the image will look like when these changes are applied. And at some point you get a new picture to synchronize the video again. And this compresses quite a lot. So short detection in uncompressed videos, the most prominent case is so-called template matching. So what you basically do is a pixel-wise comparison. And for each pixel in the image, you look at the color values or the brightness and compare it to the color values or brightness in the next frame. And if the change between two frames is large enough, take some pixels, then you assume, okay, something has happened. It's no longer the blue background and the anchorman, but now it's all yellow desert. There's a big change in the color values. Of course, if we do it that way, it only works for hard transitions. So what about if I fade out and fade in? There's obviously very little change between the fading pictures, because you do it very gradually and they might be well below the stress-hold. What do we do then? It's interesting, isn't it? Well, we have to think of something else. What we have to do is basically, okay, we take the intensities of the color values from a pixel, x, y, at time t, and same pixel at time t plus 1, next frame. And we sum up over all the pixels basically the changes, okay, the differences. That gives us the total difference and allows us to recognize hard cuts. If the difference is high enough, then there must be a hard cut. But if we have something like this here, you know, like the geese are kind of like starting to fly, we will have different changes. So for example, look at this area here. Same area here is without the wing, because the goose has moved, okay. And the area that used to be kind of free is now covered by the wing of the goose. So some pixels change, but a large portion of the image does not change at all. So the colors are kind of simple. The problem is really when we use this template matching approach, a very hands-on technique, you know, like it's a very straightforward technique and it actually doesn't work that well, is that if we have noise, if we have object movement, if we have changes in the camera angle, if I'm panning, then at some point there might be a transition that is recognized as a cut where there was no cut. So depending on how strong these differences are. This actually led to the inception of histogram-based methods where we say, well, maybe we should not consider every pixel and kind of match the pixels, but we should consider the histograms for the images. And if I then have the geese, you know, like that are kind of black and the background that is kind of orange-brown, and the goose moves a little bit further, there's still the same number of goose pixels and the same number of sunset pixels, okay. It doesn't matter where they actually are in the image. The histogram doesn't record the location, it just records the number of pixels of a certain intensity or the number of pixels of a certain color value. So this was kind of the idea. 
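To make the template-matching difference described above concrete before we continue with histograms, here is a minimal NumPy sketch of my own; frames are assumed to be equally sized grayscale arrays, and the threshold is a made-up value you would have to tune for your material.

import numpy as np

def frame_difference(frame_a, frame_b):
    # Pixel-wise comparison: mean absolute intensity difference,
    # normalized by the number of pixels so the threshold is resolution-independent.
    diff = np.abs(frame_a.astype(np.int32) - frame_b.astype(np.int32))
    return diff.mean()

def detect_hard_cuts(frames, threshold=30.0):
    # Report a cut between frame t and t+1 whenever the difference is too large.
    cuts = []
    for t in range(len(frames) - 1):
        if frame_difference(frames[t], frames[t + 1]) > threshold:
            cuts.append(t + 1)
    return cuts

# Toy example: three dark "studio" frames followed by three bright "desert" frames.
dark = [np.full((90, 160), 40, dtype=np.uint8) +
        np.random.randint(0, 5, (90, 160), dtype=np.uint8) for _ in range(3)]
bright = [np.full((90, 160), 200, dtype=np.uint8) for _ in range(3)]
print(detect_hard_cuts(dark + bright))   # [3] -> cut detected at the transition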
So if we have frames that contain identical foreground and background elements, the brightness distribution will not change even if the objects are moved in the image. And this, of course, is quite a good idea to do it that way. So what we basically do is we define the distance that is necessary for detecting a cut by the histogram difference between a frame at time t and a frame at time t plus one, okay. Good, that was kind of nice, predefined threshold, and if the change is big enough, there has to be a cut. Again, this is hard cuts. So if we use histogram, we are invariant towards rotation, to slight changes if some objects move around the image. A little bit of occlusion is allowed. So even if I have the hand here, you know, like which is kind of pinkish, and I move it here and you can't see it anymore, part of it, then the change recorded in the histogram is not too big. So a little bit of occlusion, of course. If I am the main item of the image and I suddenly duck away, you know, and you can't see me anymore, then the image will probably be totally different from what it used to be. And this histogram difference will detect a cut where there was no cut. But while these are the good things, you know, like that occlusion, smaller occlusions and object translations are not noticed by the histogram difference, and slow camera movements and zooming is also not noticed, as is panning and fading and all that kind of stuff. So also the soft cuts are not noted by the histogram difference. So it is less error sensitive than the template matching, but still not perfect. So how do we choose the threshold? If I take the threshold too low, then I will have a lot of cuts where there are no cuts. Small changes in the image and I already say, ah, it's a cut. If I take the threshold too high, I will miss a lot of cuts. So this is kind of like a very subtle way of finding the right threshold. Of course, it strongly depends on the type of video that I have. If I have music videos, for example, I can live with a very high threshold because they are very quickly cut. If I have like the typical animal pictures where a lot of fading, a lot of panning happens, I should consider rather smaller cuts. So it depends on what you do. And usually you would have to do some training with your collection. You have to see where you end up. The good rule of thumb is choose the threshold such that as few cuts as possible are overlooked, but not too many false cuts are produced, which is always what you have to do when you go with thresholding. But the interesting thing is that you could, for example, use a distribution function for training. So what you do is you look at the distribution of the differences within and between cuts. So I take, let's say, 100 videos and I manually segment them. And for every transition between images of the same kind, I'm noting how big the differences were and how many times the differences were so big. So what I will probably see is that within a shot, there are very many subtle differences. Not too much happening here. And it's quite rare that there are big differences within the same shot. This is manual. And then I do the same thing for the differences between shots. So I look at the difference and I will probably find that there are a lot of large changes between shots and there are kind of little shots where there are small changes between shots. Very small number. 
And what I usually can do if I have these two, the intershot dissimilarity and the intra shot dissimilarity, I usually can see where is the sweet spot and just use it to be my threshold. And the error that I'm making is kind of like the error here and the error here. And this is what I have to live with. So this is kind of like how you can train your shot detector to find a good shot detection. Good. Again, we now can find all the hard cups where really something changes between them. What about phase? What about dissolves? Where images are blended into each other. So between each two frames, that's not much different, but it's definitely cut because, I mean, three seconds before it was a cat and now it's kind of like a dog. It was the anchorman and then I fade it into the desert scene or something like that. So how can I deal with that? And the idea is so-called twin-threshold holding. So what you basically do is you use not one but two thresholds. And the rationale behind it is kind of, well, I cannot really detect from frame to frame that there was much change. But going a couple of frames, I will find that there's very big change. Because it used to be the anchorman with a blue background or something. And now it's a desert scene with a yellow background. So if I take it apart a little bit, if I kind of go beyond the blending or go beyond the fading in time, then the change should be very high. And that is different from when I'm just panning or something. Because even if I pan over the room here, where there are brown covers for the tables and the chairs, where there are kind of pink covers for colors for kinder and kind of blueish colors over there, they change. But also if I take two pictures from the sequence that are far apart, they're still very similar. So the pink may have dissolved because I moved over there. But the brown of the tables and the flooring is still there. So many parts of the image are still there. So what do we do? We use two thresholds, one for the determination of hard cuts and one for the determination of soft cuts. And it works like that. So what we do is a threshold, setting up a threshold that corresponds to the size of an intolerable change. So we will immediately say once this threshold is reached, there is a cut. So this is a transition between two images that cannot be done by panning or by an object moving or whatever. And then we will use a second threshold for the possible origins of smooth transitions. It seems to be something happening here. And of course this threshold is much lower because saying, well, this is definitely a cut, it's much harsher than saying, well, there might be something developing here. If we detect a smooth transition or the beginning of a smooth transition at some time, then we will keep this frame and stop kind of like looking at the difference between two adjacent frames, but start looking at the difference between this frame that we kept, that we have seen as the origin of a possible smooth transition and compare it to the following frames. And if then at some point we reach the higher threshold, Tc, then we can see it has been a cut. So we kind of like not going on to compare adjacent frames, but if we assume or at some point suspect there might be something developing because we have seen a change that is still not big enough to be a cut, but it's unusual. Then we will kind of stretch the distance between frames that we compare. And this allows us to see the bigger picture beyond the blending or beyond the fading. 
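A sketch of the twin-threshold idea just described (my own simplification): the per-frame difference could be any of the measures above, here a histogram difference, and both thresholds as well as the window length n are made-up values.

import numpy as np

def hist_difference(frame_a, frame_b, bins=64):
    # Compare brightness histograms instead of individual pixels.
    ha, _ = np.histogram(frame_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(frame_b, bins=bins, range=(0, 256))
    return np.abs(ha - hb).sum() / frame_a.size

def detect_cuts(frames, t_cut=0.6, t_soft=0.2, n=10):
    cuts = []
    t = 0
    while t < len(frames) - 1:
        d = hist_difference(frames[t], frames[t + 1])
        if d > t_cut:
            cuts.append(("hard", t + 1))          # intolerable change: hard cut
        elif d > t_soft:
            # Possible start of a gradual transition: compare the next n frames
            # against the reference frame at time t instead of their predecessor.
            for k in range(2, min(n, len(frames) - t)):
                if hist_difference(frames[t], frames[t + k]) > t_cut:
                    cuts.append(("gradual", t + k))
                    t = t + k                     # continue after the transition
                    break
            # otherwise: false alarm, fall back to pairwise comparison
        t += 1
    return cuts

Both thresholds would be tuned on hand-segmented material, exactly as described above with the distributions of intra-shot and inter-shot differences.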
So that is the idea behind it: the differences for the subsequent frames in the interval t plus 1 up to t plus n, that is, for the next n frames, are not computed against the previous frame but against the frame at the beginning of the interval, at time t, the so-called reference frame. If the difference then rises above the cut threshold, we can say there has been a smooth cut: the images have changed over time. And if I do not detect a smooth cut within the n frames, I just say, well, false alarm; somebody wore a bright pullover or whatever, and that set off the lower threshold, but it is still the same room, still the same background. Okay. So how does it work? We have a smaller threshold that tells us "this could be a soft cut" and the hard threshold that tells us "this is a hard cut". Here is the time axis, these are the frames and the differences between them. For the first three frames I just look at adjacent frames and there is not much change, so no cut. Then at some point the difference between two adjacent frames is huge: hard cut, recognized immediately, because we are beyond the hard-cut threshold. Then it goes on, small changes, small changes, nothing to do. But now I find, oh, there might be something developing, a fading or whatever. Now I stop looking at adjacent frames like I did before and start looking at the reference frame, and compute the differences against it. If the differences build up above the cut threshold, fine, then there must be a smooth cut. If not, at some point I say, okay, back to pairwise differences, that was no cut. And so it continues: whenever I find another candidate, I switch from pairwise comparisons to comparisons with a new reference frame. Here I surpass the threshold, so that must be a cut; so here is a cut, here is a cut, and here is no cut. That is basically how it works. Good. There are also some techniques that are block based. The idea is not to look at the histogram of the entire image, but at the average color or the histogram of blocks of the image. This is because the camera is usually focused on something, whereas the background stays rather similar: the blocks of the background should not change, while the blocks in focus are prone to change. And by averaging within blocks, I can also reduce the influence of noise and of different cameras. So each frame is divided into a number of blocks, you calculate characteristics for each block, the color distribution, the average color, whatever you want, then compare the corresponding blocks and add up the differences over the blocks. That sum is the difference between the two images; same kind of idea. The advantage is, of course, that with this block-wise comparison you can detect and ignore effects occurring in only a part of the picture, where something moves or something has changed, while the rest of the picture stays as it is.
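A small sketch of the block-based variant, again under illustrative assumptions of mine (an 8x8 grid of blocks, mean brightness per block, and hand-picked thresholds); a cut is reported when the fraction of blocks that changed noticeably is large enough.

```python
import numpy as np

def block_means(frame, grid=8):
    """Mean brightness of each cell of a grid x grid partition of the frame."""
    h, w = frame.shape
    bh, bw = h // grid, w // grid
    cropped = frame[:bh * grid, :bw * grid].astype(np.float64)
    return cropped.reshape(grid, bh, grid, bw).mean(axis=(1, 3))

def block_based_cut(frame_a, frame_b, block_thresh=20.0, frac_thresh=0.6):
    """Declare a cut if more than frac_thresh of the blocks changed their
    mean brightness by more than block_thresh (on a 0..255 scale)."""
    diff = np.abs(block_means(frame_a) - block_means(frame_b))
    changed_fraction = (diff > block_thresh).mean()
    return changed_fraction > frac_thresh
```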
To stay with our news example for the block-based comparison: the anchorman may turn his head, but that is just a change in one block of the image; the rest of the image is unaffected. And if enough of the image is unaffected, it does not matter what happens in that single block. The anchorman might turn red because he is annoyed, or blue because he is lacking air; it does not matter, because the rest of the picture stays as it is, and the rest of the picture can be handled very well with block-based techniques. If a high number of blocks are the same in the second of two consecutive frames, that is an indication that there was no cut, that the frames belong to the same shot. If the change was only in some blocks, we can ignore it. Good. We can also use model-based procedures, because how many types of cut do we have? We can have a hard cut, a dissolve, a fade, and then it gets difficult to think of more. The idea is to model these transitions as operations and then detect the patterns that such a transition causes. The good news is that this does not only recognize the transition, it also tells you the type: this is fading, this is dissolving, this is a hard cut. Whether you need that, or whether you are only interested in the shots and their boundaries, depends on what you want to do, but it is possible. For example, one type of model: what happens with a fade? You fade to total black and then come back to some other image. The amount of black appearing in the image and then disappearing again will obviously be visible in the color or brightness histogram. If, over time, the histogram changes toward black and then back to normal, that is obviously a fade, and this model can be detected. So this is the temporal model for fades: when fading out, the shot becomes darker, and the histogram is compressed in the x-direction; here is 0, here is 255 in brightness, and when you fade, you go toward black, which means the histogram piles up near zero, and then you come back to some normal image where you have a broader distribution and very little black. When fading in, the image becomes brighter, so the histogram is stretched again in the x-direction. You can see this essentially as a mathematical operation on the histogram: if you compare the histograms of subsequent frames, you can detect whether the histogram is being stretched or compressed, and if you find a compression followed by a stretching within a certain time interval, that might be a fading operation. That is the idea, and you can build similar models for other transitions, for dissolves and, of course, for hard cuts too. So I took the first part of a trailer, with this typical 20th Century Fox picture, and if we look at the histogram, it is fairly bright; well, the whites are missing, it is still a dark image, but it has some distribution of different brightnesses. And then it fades out, and we can see that the histogram is compressed, moving into the dark direction.
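As an illustration of such a temporal model, here is a rough sketch that tracks how spread out the brightness values are per frame and looks for the compress-then-stretch pattern of a fade. The spread measure, the window length, and the thresholds are my own illustrative choices, not the lecture's exact model.

```python
import numpy as np

def brightness_spread(frame):
    """Standard deviation of pixel brightness: small when the frame is
    (nearly) uniformly black, large for a normally exposed image."""
    return float(frame.astype(np.float64).std())

def detect_fades(frames, dark_spread=5.0, min_run=5):
    """Report frame indices where the brightness spread shrinks to almost
    nothing (histogram compressed toward black) and then grows again
    (histogram stretched): the compress/stretch signature of a fade."""
    spread = np.array([brightness_spread(f) for f in frames])
    fades = []
    for t in range(min_run, len(spread) - min_run):
        compressed = np.all(np.diff(spread[t - min_run:t + 1]) <= 0)  # falling
        stretched = np.all(np.diff(spread[t:t + min_run + 1]) >= 0)   # rising
        if spread[t] < dark_spread and compressed and stretched:
            fades.append(t)  # the black frame in the middle of the fade
    return fades
```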
And at the point where it is really black, there is only black. Then the title of the producer, Lucasfilm in this case, is faded in, and we see that the histogram stretches again; a mathematical operation on the histogram, nothing else. And the point where it is black, that is the cut; this is a soft transition. Good. Now let's go into compressed videos and see what we can do there. We have seen some approaches for uncompressed video, but that is not quite feasible for today's situation. We have YouTube, we have a lot of video content providers, and they have huge amounts of video data; holding such data uncompressed is no solution. On the other side we have networks: you have to transmit the video over the network, and again, uncompressed video is out of the question. So we have to deal with compressed video, and doing shot detection on video that you first decompress again has huge computational costs. Just imagine today's full-HD material: uncompressed, you have about 3 gigabits of data per second, and that is without the audio, just the video. That means a lot of computational power just to decode one video before you even start the shot detection. So why not try something more clever, like shot detection directly on the compressed video? Of course, in compressed video some information from the original has been lost, and this might influence the shot detection, so you have a trade-off between accuracy and performance: how fast the shot detection runs and how well the shots are detected. But as we will see, this is actually quite a good trade-off; shot detection performs quite well on compressed video. The two approaches we are going to discuss are based on, first, the cosine coefficients. If you remember, when we spoke about image compression, we discussed the cosine transformation used in JPEG, and wavelet transformations, where we keep only the first few coefficients that carry most of the information in an image and cut the rest; we lose some information, but the image is still quite good. The second approach uses the motion vector information, which is available in the compressed video in standards like MPEG and H.264. But let's start with a bit about general compression methods. In compressed video we have different types of frames, based on their content. The most important type is the I frame, the independent frame. An independent frame is just an image; a JPEG, for example, can be considered an independent frame. The compression is applied inside the frame: a transform, the discrete cosine transform for example, is applied to the image, and we keep only the first coefficients above a certain threshold and cut the rest. So the image itself is compressed. Then I have P frames, which follow such an I frame, and for these P frames I estimate the content based on some reference, for example the previous I frame or the previous P frame. Imagine what we started with, a newscast like the Tagesschau, with the anchorman sitting in the studio; that could be an I frame. And then he starts moving.
He moves his head. That could be a P frame, with motion vectors recording: this region here, the head, has moved in this direction by a certain shift. That is the P frame, the predicted frame; it only records the changes with respect to its reference frame. Everything else, like the background, is not encoded again; it stays as in the previous frame. So a P frame carries only a small amount of data; that is the compression idea behind predicted frames. And then we have the B frames, which are bi-directional: they depend on the previous frame and also on the following frames. The idea is to make the movie smoother; you need some interpolation, and that is what the B frames do, they are interpolation frames that allow the motion to be reconstructed more smoothly. So, to briefly recapitulate: we have independent frames, which are images; we have predicted frames, predicted from the I frames; and we have B frames for interpolation. How can we do shot detection based on this? Well, what is a shot? A shot is a sequence of such frames: independent, bi-directional, predicted, and so on. If a shot takes, say, ten seconds, then, given the number of frames per second, you end up with 130 or so such frames, transmitted in a certain order over the network, and you have to do shot detection on these sequences. We said we would discuss two approaches. The first uses the discrete cosine coefficients that we get in MPEG and H.264 from the way the I frames are encoded. I frames, like JPEGs, are encoded on the basis of blocks of 8 by 8 pixels: the image is split into 8-by-8 blocks, and for each block the discrete cosine transformation is applied, yielding a sequence of coefficients. The idea of this I-frame-based detection is to keep for each block only the first coefficient, the DC coefficient, and to assemble these DC coefficients, one per block across the entire frame, into a DC frame. This is lossy, of course, but it is good enough. For all the I frames one can build such DC frames, and thus a DC sequence of the movie, using only the I frames, and without decoding anything, because the DC coefficients can be read directly from the data stream. The idea of Taskiran and Delp in 1998 was then to build a series of so-called generalized traces from these sequences: for each DC frame you calculate features like the histogram or the brightness, compute the differences between them, and with a threshold establish where the difference is high. So again a thresholding approach, but the scene change detection is performed without decoding anything, using only the I frames. That is the cosine transformation idea. The advantage is that the I frames are independently encoded, so I have direct access to the DC components in the data stream, and the accuracy is quite good, depending on the encoder: usually you have an I frame after every 15 or so P and B frames, so you get a discretization of the movie with a step of roughly 15 frames.
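Just to illustrate what a DC frame is: the DC coefficient of an 8x8 DCT block is proportional to the block's mean brightness, so a DC frame is essentially a 64-times-smaller thumbnail. In a real system you would read the DC terms straight out of the partially decoded bitstream; the sketch below only approximates that from an already decoded I frame, which is an assumption on my part for illustration.

```python
import numpy as np

def dc_frame(i_frame, block=8):
    """Approximate the DC frame of an I frame by the mean of each 8x8
    block (the DC coefficient equals the block mean up to a scale factor)."""
    h, w = i_frame.shape
    bh, bw = h // block, w // block
    cropped = i_frame[:bh * block, :bw * block].astype(np.float64)
    return cropped.reshape(bh, block, bw, block).mean(axis=(1, 3))

def dc_sequence_differences(i_frames):
    """Feature trace over the DC sequence: histogram difference between
    consecutive DC frames, to be thresholded as before."""
    diffs = []
    prev_hist = None
    for f in i_frames:
        hist, _ = np.histogram(dc_frame(f), bins=32, range=(0, 256))
        hist = hist / hist.sum()
        if prev_hist is not None:
            diffs.append(float(np.abs(hist - prev_hist).sum()))
        prev_hist = hist
    return diffs
```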
The other idea is to use the motion vectors. Motion vectors, as I said, express which parts of a frame have moved in the next frame, and by how much, and they are stored in the P and B frames. The idea, due to Zhang and colleagues in 1993, was to determine the number of motion vectors in the P and B frames, assume that within a shot this number stays fairly constant, and watch for when the number drops. If I am filming something and the shot is quite dynamic, it will keep its dynamic movement for the length of the shot; when something happens and the scene changes, the number of motion vectors will change, and there is my cut. Based on these ideas, a number of hybrid approaches have also been built, some using the discrete cosine transformation on the I frames and some using the motion vectors, exploiting the full information available in the MPEG stream. I have tried this with a tool called MSU; if you want to download it, just let me know and I will provide the link. I ran it on the Avatar trailer to see how this works on a real video, using the classical approaches: pixel-wise comparison, global histogram, block-based, and motion-based detection. All of them can be applied; the first three are applied to the I frames and the last one to the B and P frames. Unfortunately MSU does not offer a hybrid mode that applies everything to I, P, and B frames together, but I think this is enough to make the point. Let's watch the trailer first. You will see soft cuts, a lot of hard cuts, fade-outs and fade-ins; it is a trailer, so lots of action. Okay, so especially toward the end you have seen a lot of hard cuts, a lot of scene changes and so on. It is a trailer, so we also get a lot of shots. The results with the four approaches look rather similar. Motion-based is the first one here. One difference you can maybe see is that the global histogram has some frames that were falsely detected as cuts even though they are quite similar to each other. Here a lot of shots have been detected, and that is because of the action sequence. These ones here have been correctly detected by the pixel-level method, while the motion-based method missed them. Let me show you an example so that you can see what happens. I will do a short live test with the block-based histogram, which is number three. It divides the I frames into blocks, compares the histograms of the I frames block by block, and based on a thresholding method (I don't know what thresholds they use, they don't say) it detects the shots. So let's go from the beginning. The first shot is here, this is its first image; the next shot is this one, the next cut actually; another cut here, and here. And what I was wondering about is this one here: it is not discovered by the first two methods, motion-based and pixel-level, and I think that is because both of the adjoining shots are so dark that the pixel level cannot see the change.
The histogram-based approach is actually better at that, but the rest of the shots are quite similar for all of the approaches; they are recognized quite well, and here, with a lot of action, there are lots of cuts. So all of these approaches work quite well in practice. I have tested them on some other videos too, not trailers, because trailers are deliberately edited to have a lot of cuts; that is the whole point of a trailer. In a regular video of three or four minutes you have something like 20 cuts, not the 50 or 60 we had here. So these approaches work quite well. Okay, let's continue with the lecture. That brings us back to the question: we were able to detect the shots, but what about detecting the story in the shots, the semantics of the shots? This is the idea of using statistical structural models; we want to decompose the video into semantic units. With shot detection we were using graphical primitives, the brightness and the colors, and just looking at them. What we want to do now is look at perceptual features, the visual structure of the video, and say: if the structure changes from time to time, these might be different shots, different story units. For example, take a normal movie. There are scenes that are full of action, where people run or cars chase each other, and there are scenes where people are just romantic and kissing. Those scenes could not be more different from each other, perceptually. And this is exactly what film theory describes: there are certain stylistic elements, often called the montage of a movie. There is a temporal structure, a certain editing that makes sure the whole movie is not one action scene after another, but that you blend in some romantic scenes to round off the story, or mix scenes of action with scenes of inaction, just to make it more pronounced that something is actually happening. Behind this type of montage there is film theory: the spatial structure, the scenery, the lighting, the camera position are very typical for certain things. For action you will usually use wide shots, because you want to see things moving and flying around; there is no point in filming an explosion as a close-up, you pan out and boom, the whole explosion. This is what movie directors actually do, and they do it deliberately. And if they do it deliberately, we can find these structures, because we know they must be there. So the idea is to build models of stylistic elements, which allows us to extract semantic features for the characterization and classification of structural units. And of course, this can also provide the background information for then using low-level features for shot boundary detection. There are a couple of directors, for example here, Alfred Hitchcock, who was very popular and very famous for the way he edited movies. The appeal was usually not in the story itself; I mean, what do we have here? Psycho, okay.
What is the story of Psycho; can anybody recall it in a couple of words? Anybody ever seen the movie? No? Oh, it's a classic. Well, basically, a young woman drives to a motel, and there is a disturbed motel manager who lives with his mother; in the end you find out that the mother passed away a long time ago and that his mind is split, he is schizophrenic in a way, playing the part of his mother and killing the people who stay at the motel. Not too impressive in terms of the story, but it makes for a good movie because of the editing, only because of the editing. And even if you have not seen the movie, you will probably know the shower scene, with the knife and the ee-ee-ee-ee. That is one of the most famous scenes in film history, and we can detect that, because movies are built along the same lines. Alfred Hitchcock was one of the directors who set the bar for later movies, and you can still detect his basic patterns in many modern movies. So what we did here is take a couple of movies, Santa Claus, Miami Rhapsody, Crimson Tide, Edward, Jungle Book, movies that could not be more different, and simply looked at the average shot lengths, which comes down to montage and film editing: for disturbing the viewer, for transporting action and activity, like in modern music videos, you use very short cuts and very short shots, which makes the video seem hectic. Look at the Psycho scene again: it is the same cut over and over, very short. How would Out of Africa be with short shots? Terrible, probably. "Oh, kiss me, kiss me, kiss me." It doesn't work; it needs time to develop romance, so the shots should be longer. And the activity during the shot, the mise en scène, should also be different. In Die Hard, Bruce Willis just walking down the pavement is boring; on the other hand, in Pretty Woman, you don't want Julia Roberts simply coming back to the hotel, you want the shopping spree. So if we take the relative shot duration and the shot activity, we can see that the shorter the shots, the higher the action, usually. Action movies use short cuts because they don't tell long stories; they don't kiss for long, they just want to get through, and an explosion doesn't take long, and it is boring to look at an explosion after it has happened. You want to see it explode and then something different; short shots. And if we make up some basic categories for movies, for example action films, comedies, love movies, then we can cluster along two dimensions: degree of action, measured by the motion vectors, and average shot length of the movie. If we do that, we see that the action movies are down here, high activity, short shot length, and the romance movies are up here, long shots, very little action. Think about Titanic: how long were they standing at the bow of the ship? "Oh yeah, sure, I've seen it, hello, is something happening?" And the comedies are somewhere in between, medium shot length, medium activity.
Of course, there are some exceptions. The River Wild, for example, is an action movie that features rather long shots and rather low shot activity. Does anybody know The River Wild? No, never seen it? It seems to be a rather sedate action movie, from what I see here. And there are also some romances where quite a lot is happening, obviously. But usually you have Judge Dredd and the like down here, and by decomposing the movies through their structural editing you can actually see quite a lot. And we can explain these classes through film theory; this is basically what drives it. If emotions are to be carried, activity is a bad thing, because activities are usually not very emotional. You would have a close-up of a face, somebody crying, somebody screaming; that would be emotional, and you need some time to take it in, because you have to see what is happening in the face, whether somebody is changing from being happy to being sad or disturbed. And this is exactly what you do: long close-ups. Also, developing a character, and letting the viewer bond with the character, takes time, and that explains the long shots. Charlie Chaplin once said: tragedy is a close-up, comedy a long shot. If I want something dramatic, I have to see the face; if something is supposed to be comic, I can take the full picture, people running around, doing funny things, the cake flying, and I can see all that from afar. That is the idea. For action or suspense, we very often have rhythmic patterns. The famous Psycho scene with the knife in the shower is repeated and repeated; it is very rhythmic. Also, with fast cuts you confuse the viewer, and of course you do that deliberately: you want to get him mixed up, make him nervous, keep up the suspense. The same goes for long dialogues: if you have to develop a character, you need dialogue to convey how a person feels, what a person thinks, what makes the person act. If somebody jumps out of the way of a speeding car, you don't need time to understand what he is doing; he is saving his life, obviously, so you can do that with very quick shots. That is the idea of what you do. If you have the semantic structure, you can not only detect the shots but also annotate them in a way, either based on film theory or learned from some sample collection: okay, this is a typical shot that transports this or that, so let's find those. Of course, with high-level structure patterns you get more semantics than with low-level features alone. Still, even simple statistical cues help: how do you detect porn on the web, for example? You look for skin color, a lot of it. Even such simple rules work very well. And romantic movies will usually have warm colors, not blues and greens but rather reds, pinks, oranges; that fits the romantic scene.
So finding some of these cues can really help you to automatically and semantically annotate a collection, but of course it is difficult. In any case, the more a video is structured, the more semantic information you can get out of it. That is why some of the first approaches to automatic video processing were done with news programs: they have such a wonderful structure. You have the anchorman, you can easily detect the studio, you can easily detect when it changes to something else, and you can fragment it very easily. Most user-generated content, on the other hand, is rather unstructured and quite impossible to fragment properly, and of course that is much of the content out there these days: a huge number of videos is uploaded to YouTube every second, and you have to figure out what you can do with it. So the classical elements available to a movie director are, first, the shot duration (we already said that) and, as the classic element of the mise en scène, the activity within scenes: explosions, speeding cars, all kinds of motion, running animals and so on. Very often this activity is correlated with violence in movies; if there is a speeding car, the accident is usually not far away, otherwise it would not make sense for the director to show us the speeding car at all. And very often you can also capture the mood, for example through the brightness of the colors: if it is dark, if it is bluish, cold colors, then you have a ghostly mood. Again, Psycho is one of the prime examples, with the forbidding house in dark colors where the motel manager lived with his mother; it was spooky just from the colors, just from the mood that was transported. If you look at the temporal structure of a video, you have the shot boundaries, and basically the shot boundaries make the movie: I can say things about the movie just by looking at the sequence of shot boundaries. Each shot boundary, each cut, is an event, and we have a sequence of events. If you want to statistically model sequences of events, you usually go to queuing theory: the arrival of persons in a queue, people joining at the back, working their way forward while new people arrive, and so on. This is typically modeled by a Poisson process: the number of events is given by a Poisson distribution, and the temporal distance between two successive events is exponentially distributed. So there are very many short inter-arrival times and only very few long ones. In a usual queuing situation, at a McDonald's counter, a ticket office, a bus stop, it is very rare that nobody shows up for a long time; people arrive rather constantly at short intervals. The problem is that if you carry this exponential distribution over to movie making, to filming, then you will have many short but very few long shots.
That is the typical music video: shot, shot, shot, shot, and then maybe one refrain is a long shot. And this is not really what we see in normal movies. The second problem is that the exponential distribution has no memory. That means the probability that a shot change happens within the next time unit is always the same; you are always guessing with the same probability whether a cut happens or not. But seen as a global feature of the movie, the longer a shot has been running, the more probable it should become that the shot is about to end. So the exponential model is not really dynamic. What we can do is say: let's not stick with Poisson processes and queuing theory, but consider alternative models, where the shot durations are not exponentially distributed but follow other distributions; typical examples are the Erlang distribution and the Weibull distribution. We estimate the model parameters from some training collection where you know what the shot boundaries actually are and how long the shots actually are, and then you get a maximum likelihood estimator that helps you discover shot boundaries, or the probability of shot boundaries, in new, unknown videos. And I would say we make a short break and then go into the statistical stuff. Five minutes? Five minutes. So, after this look at real-world film editing, let's get back to the temporal decomposition of video. What we want to do is consider the shot duration to be Erlang distributed: the length of a shot has a probability density that depends on two parameters, r and lambda. If we set r to one, this is exactly the exponential distribution, so the Erlang distribution is just a generalization of the exponential distribution, and its expected value is r divided by lambda, which is the average shot duration that we need. And if we have r independent random variables that are exponentially distributed with parameter lambda, then their sum is Erlang distributed with parameters r and lambda. This is just for the calculation, so don't be too worried about the distribution as such. What does it look like? If r is one, it is just your normal exponential distribution. For higher values of r it looks more like a bump with a head and a long tail, and depending on how high r is, the head shifts further to the right. With this you can build a lot of distributions that are very useful in queuing theory: the Erlang distribution describes the waiting time in a Poisson process until exactly the r-th event has been counted. If you take r as two, this matches a typical director's pattern: first you establish the context of the whole scene and then you zoom in on the essential details; that would be covered by an Erlang distribution with r equal to two. If you take r as three, this is very often an emotional development, followed by some action, followed by the result of that action. So you have three shots, basically: the first one rather long, then a rather short one, the action, and then you fade out on the result of the action.
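For reference, here is a tiny sketch of the Erlang density and its mean, just to make the parameters concrete (pure Python, no dependencies; for r = 1 it reduces to the exponential density):

```python
import math

def erlang_pdf(x, r, lam):
    """Density of the Erlang distribution with integer shape r and rate lam:
    lam^r * x^(r-1) * exp(-lam*x) / (r-1)!  for x >= 0."""
    if x < 0:
        return 0.0
    return (lam ** r) * (x ** (r - 1)) * math.exp(-lam * x) / math.factorial(r - 1)

def erlang_mean(r, lam):
    """Expected shot duration under an Erlang(r, lam) model."""
    return r / lam

# r = 1 is the plain exponential distribution:
assert abs(erlang_pdf(2.0, 1, 0.5) - 0.5 * math.exp(-1.0)) < 1e-12
```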
So different values of the parameters of this distribution can be attributed to different ways of editing a movie, like in the Hitchcock example before. If you then take the likelihood function for a single Erlang-distributed variable, you can build the log-likelihood function; basically you just pull the exponential parts apart, and you have to choose the optimal parameters r and lambda for a sample of independent, identically Erlang-distributed random variables. Depending on which parameters you find to be the most likely, you can attribute them to the scenes: if r equal to two matches your sample very well, this might be a shot structure that establishes an overview and then zooms in on details; if r is three, you see an emotional development followed by an action and its result, and so on. You find the most likely parameters, and by knowing the parameters you know what kind of shot structure it is and what to expect. Basically we have an optimization problem with one discrete variable and one continuous variable; the continuous one is the same as in the original exponential distribution, and film theory tells us that r is rather small, between one and ten, say. So we can simply use brute force: test all the different values of r, compute the optimal lambda for each, and take the pair of r and lambda that maximizes our optimization problem. Knowing which parameters maximize it, we know which pattern is responsible for these shot durations, and we know what it means, because each of these parameters is tied to some kind of montage, some kind of film-theoretical editing. Okay, that is the basic idea if we do it that way. If we know r in advance, the determination is even simpler, because we can just plug in the right r, take the derivative of the log-likelihood with respect to lambda and set it equal to zero, like we always do when looking for maxima in function theory: first derivative zero, second derivative nonzero. Then we can estimate the parameter from a training collection by looking at this equation over here. So what we do is take a training collection (this is the green curve over here), record the number of shots with a certain length, so that we get a distribution, and fit an Erlang model over it by solving our optimization problem for different values of r. The values of r can be limited to one to ten, because film theory does not consider many more possibilities of how shots are combined: if r were ten, you would have ten actions combined into one stylistic element, and that is quite a lot; it is definitely unusual for more shots than that to be combined into one element. So we can just try them all, fit the best matching Erlang distribution, and that basically solves our first problem: we now know the distribution of shot durations and know what is more or less probable. And the longer a shot has been going on, the more improbable it is that it does not end.
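A minimal sketch of this brute-force fit, assuming you already have a list of shot durations (in seconds) from a manually segmented training collection; for a fixed r the maximum-likelihood rate has the closed form lambda = r divided by the mean duration, so we only have to loop over r:

```python
import math

def erlang_log_likelihood(durations, r, lam):
    """Log-likelihood of i.i.d. Erlang(r, lam) shot durations."""
    return sum(r * math.log(lam) + (r - 1) * math.log(x) - lam * x
               - math.lgamma(r) for x in durations)

def fit_erlang(durations, max_r=10):
    """Brute force over the discrete shape r = 1..max_r; for each r the
    optimal rate is lam = r / mean(durations)."""
    mean = sum(durations) / len(durations)
    best = None
    for r in range(1, max_r + 1):
        lam = r / mean
        ll = erlang_log_likelihood(durations, r, lam)
        if best is None or ll > best[0]:
            best = (ll, r, lam)
    return best[1], best[2]  # (r, lambda) of the best-fitting Erlang model

# Hypothetical example: shot lengths of a manually segmented training set.
r_hat, lam_hat = fit_erlang([3.2, 4.1, 2.8, 6.0, 3.7, 5.2, 4.4])
```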
Our second problem remains, though: the Erlang distribution as such does have a memory, but the underlying random variables are still exponentially distributed, and those have no memory, so in the limit it is again the same at every point in the movie how probable it is that the shot goes on. We can switch to yet another distribution to solve this as well, the so-called Weibull distribution. Again, this is a generalization of the exponential distribution; it looks a little different, and I don't want to go into its details now. The idea is basically the same, and it can solve the problem. Then we can also use the activity within a shot. Again, we can do that with low-level features, either histogram differences or another statistical model of the activity built from the histograms, or we could use the motion vectors; it doesn't really matter. We just want to know how probable it is that a shot has more or less action. And again we can use a rule from film theory that all directors adhere to, namely continuity editing: you don't cut if nothing changes. There is just no point in showing a certain picture, doing a cut, and showing the same picture again; people should notice that something has happened, that something is developing. So we can always assume that where there is a cut, something has changed. Based on shot activity we can therefore also build a statistical model for determining shot boundaries, where we segment the video into regular frames, the frames within a shot, and shot boundaries, the frames between two shots, and record a state for each. In the regular state, state zero, the shot goes on, and at a boundary the state changes to one: the shot is over, a new shot starts. So we distinguish two kinds of frames: the frame within a shot and the frame between two shots. What we can do now is classify each frame as a regular frame or as a shot boundary, and if we know which frames are the shot boundaries, shot detection obviously becomes easy. Training data for the shot activity often cannot be approximated very well with a single standard distribution, but we can use a mixture of different components. The intuition: if the activity in the shot is high, we are probably within a shot, because nobody ends a shot at the peak of the activity, not in the middle of the explosion or in the middle of the car crash; the activity takes a dive first, and then the shot changes. So what you do again is take the activity within shots, build a histogram of it, and derive a statistical model from it; in this case it is a mixture of, say, three Erlang components and one uniform component. The exact mixture does not really matter; it simply caters for what I said a moment ago, that the activity is not evenly distributed over a shot: there is little activity at the beginning, then it builds up, then it ends, and then comes the shot boundary. That is why you need a mixture of models rather than a single model for the activity within a shot. But that by itself does not mean anything.
So whether it is a mixture or a single function does not matter much; what we are interested in is basically the curve that comes out of it. And if we look at the activity around shot transitions, we find that it looks totally different: around shot transitions there is a big jump in activity, whereas within shots, whether they are active or calm, there is not much change. Not much change here within the shot, quite a lot of change between two shots: the actors have their love scene and then there is another scene where something happens, or you have a landscape and then cars racing through a landscape. So when scenes change, the activity changes a lot; within one scene it changes very little, it is either active or inactive, but it usually stays that way. That is all I am saying. So we can again use some statistics. If I have two frames, I have two hypotheses: the null hypothesis, that they both belong to the same shot and there is no cut between them, or the hypothesis that there is a cut between them. And as I said, if there is no shot boundary between these two frames, their activity should be similar; if there is a boundary, the activity should be different. Of course the two distributions overlap: this was the one for within shots, that was the one for between shots. So if I measure an activity difference in the overlap region, it could be a within-shot difference that is just a bit unusual, or, with higher probability, a between-shot difference; it could still be both. That is the basic idea. So how do we compare the likelihoods, the probabilities? We do a likelihood ratio test. If the probability that there is a boundary, given the two frames, is higher than the probability that there is no cut, we decide for a cut. Equivalently, the probability that there is a cut divided by the probability that there is no cut is larger than one, or, taking the logarithm, larger than zero; don't worry about the logarithm here. And if the probability that there is a cut is lower, we simply assume there is no cut. Quite easy. Now, the plain likelihood ratio test uses no knowledge about the typical shot duration. What we can do in addition is use the a priori distribution of the shot duration: from a training set we can get the parameters of our distributions for between-shot and within-shot differences. That means we do not know them for the very video we are trying to segment, but we know what is usual, and this basically means we can use Bayesian statistics to test the hypothesis; in the end it is just another way of setting the threshold. To set up the Bayesian formulation, we need some notation. We have the frames over time; we have the state for the shot boundary, zero meaning there is none and one meaning there is one, for a certain time interval from time t to time t plus delta; we have the distance between the frame at t and its successors up to t plus delta; we have the vector of states, which basically shows me where the boundaries are (usually it is zero, it jumps to one at a shot boundary, and then it goes back to zero); and we have the vector with the corresponding distances, whatever they measure, maybe the activity or the color changes or whatever.
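Before the duration prior is added, the plain likelihood-ratio test just described might be sketched like this. I fit simple Gaussians to the two training populations purely for illustration (the lecture's actual model is a mixture of distributions) and decide "cut" whenever the log-ratio is positive.

```python
from statistics import NormalDist, mean, stdev
import math

def fit_gaussian(samples):
    """Crude stand-in for the within-shot / between-shot difference models."""
    return NormalDist(mean(samples), stdev(samples))

def is_cut(distance, model_cut, model_no_cut):
    """Likelihood ratio test: decide 'cut' if
    log p(d | cut) - log p(d | no cut) > 0."""
    return math.log(model_cut.pdf(distance)) - math.log(model_no_cut.pdf(distance)) > 0

# Hypothetical training data: frame differences between shots vs. within shots.
between_shots = [0.9, 1.1, 0.8, 1.3, 1.0]
within_shots = [0.1, 0.2, 0.15, 0.05, 0.25]
model_cut, model_no_cut = fit_gaussian(between_shots), fit_gaussian(within_shots)
print(is_cut(0.7, model_cut, model_no_cut))   # likely a cut
print(is_cut(0.2, model_cut, model_no_cut))   # likely within a shot
```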
The first hypothesis, that there is a shot change, is accepted if the probability that at some point in the interval from tau to tau plus delta the state is one, meaning there is a shot boundary, given that we started within a shot and observed a certain distance value, is larger than the probability of the alternative. So I measure: if this is my movie over time, at some time tau I am within a shot; that is exactly what s at tau equals zero means, this frame is within a shot. Then I go to tau plus delta, and between the two frames there is a distance, whatever it may be, a distance between the color histograms, between the activities, between the motion vectors, however the distance is defined. Given this distance, and given that I started within a shot, what is the probability that the frame at tau plus delta belongs to a new shot, in other words that there was a shot boundary somewhere between tau and tau plus delta, that at some point in this interval the state was one? That is the basic idea; does everybody understand that? If that probability is larger than the probability of starting at the same point within a shot, seeing the same distance, but having no shot boundary in between, then we will say there is a shot change; otherwise we will say we are still in the same scene. Good. And saying that one is bigger than the other is exactly equivalent to saying that if I divide them, the ratio is bigger than one; don't worry about the logarithm here, that is just for technical reasons. Now, if there was a cut at some time t and none in the interval from t to t plus tau, what is the probability of a cut in the interval between t plus tau and t plus tau plus delta? So we had a shot boundary at time t, s was one there; then for the stretch up to t plus tau nothing happened, s was zero all the time; and now we look at t plus tau plus delta and ask what s is there. The idea is: the longer this phase without a cut has lasted, the more probable it becomes, depending on the distribution of shot lengths, that a shot boundary is going to happen. The longer this blue stretch here is, the more probable a shot boundary in the next interval. So, just rephrasing what I said: the probability that there is a shot boundary between tau and tau plus delta, given that we were still in the shot at time tau and have observed a certain difference, is simply, up to a factor, the probability that this difference occurs given that there is a shot change, times the probability that there is a shot change given that we were within a shot before. It is the probability that a boundary happens after this much time, that is the last part over here, and, if it happens, the probability that it shows up as a difference of this size. The higher the difference, the more probable it is that the boundary actually happened; the longer the interval without a shot boundary before, the higher the probability that the shot is about to end. And of course we have to put the two together, we have to multiply them: the longer the interval without a shot boundary, the smaller the difference has to be for me to say, okay, this must be the end of the shot. And vice versa, if only a short time has passed since the last cut, it takes a large difference to detect a shot boundary. Yes?
Yes, so basically the tau is here. I start from time t, when the last shot boundary was, then I go tau seconds, so I am here, and then I go another delta seconds and I am at t plus tau plus delta. The longer this interval with no boundary was, which is given here, and the higher the difference, which is the part over here, the more probable the hypothesis that a shot boundary actually happened. Good. As I said, don't worry about the gamma, it is just for normalizing. And we can write down exactly the same thing for the hypothesis that there was no shot boundary, that the shot is still going on: we were within the shot before, and the difference we saw is the kind of difference that occurs within a shot. Putting this together and building the ratio between the two (try to work it out at home, it is quite easy to see), many of the parts simply cancel, and we basically end up with this expression here. It looks complicated, but if you consider what it is, it is essentially two things: how long the shot has already lasted before the interval I am currently looking at, and the difference of the features, say the motion vectors, within the current interval. The higher the difference between the motion vectors, the more probable the cut; the longer the preceding interval with no cut, the more probable the cut. Good. So what happens is that you have this part over here, which is the behavior of the conditional probabilities for the activity: what difference between motion vectors or histograms do I expect given that there was a shot boundary, and what is the same difference given that there was none. And of course you can learn that from training examples; you can ask, was the difference I measured really sufficient? In a lot of cases it was, in a lot of cases it was not, based on your training data. And what is the second part? The second part is the behavior of the probabilities for the cuts themselves: how probable is it that a cut occurs now, given that it has not occurred for some time, versus how probable is it that the frame at t plus delta still belongs to the same shot. That, too, can be estimated from the training collection, because it is basically the question of how long a shot can go on without being interrupted, and it becomes less probable with the size of the interval: the longer the interval, the less probable it is that it is still one shot. These are the two parts of the equation that we have to look at. And hypothesis one, which is basically the upper one here, is accepted if the logarithm of the above expression is positive; as I said before, don't worry about the logarithm, that is just for normalization. This part over here belongs to H1 and this part over here to H0. H0 is: nothing happened, we are still in the same shot, and this is the probability that a certain difference occurs although there is no shot boundary, no shot boundary anywhere in the interval. H1 is: there is a shot boundary; the difference occurred because there is a shot boundary, and the boundary comes after an interval in which there was none. These are the different parts of the equation.
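One way to write down the decision rule that was just described in words (this is my own notation, so take it as a sketch rather than the exact slide): let d be the observed feature difference over the interval from tau to tau plus delta since the last boundary, and let S(t) be the survival function of the shot-duration model, the probability that a shot lasts longer than t. Then H1, that there is a boundary in the interval, is accepted if

```latex
\log \frac{p(d \mid \text{boundary})}{p(d \mid \text{no boundary})}
\;+\;
\log \frac{P\big(\text{boundary in } (\tau,\tau+\delta] \mid \text{none up to } \tau\big)}
          {P\big(\text{no boundary in } (\tau,\tau+\delta] \mid \text{none up to } \tau\big)}
\;>\; 0,
\qquad
P\big(\text{boundary in } (\tau,\tau+\delta] \mid \text{none up to } \tau\big)
= \frac{S(\tau) - S(\tau+\delta)}{S(\tau)} .
```

The first term is the difference likelihood ratio, the second term is the duration prior; the longer the shot has already lasted, the larger the second term becomes, which is exactly what turns into the falling threshold discussed below.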
So the intuitive interpretation that I am trying to get across is that the left side, this part over here, uses the information about the typical frame distances within shots and at transitions: at shot transitions there should be high differences, within shots there should be low differences. The second part uses the knowledge about the distribution of the shot duration: if a shot has gone on for a long time, a boundary is very probable; if a shot has gone on for a short time, a boundary is very improbable. Those are the two parts of our equation. Then we can define an interval starting from the last cut: the last cut was at time t, s was one here; we have seen an interval up to t plus tau with no cut, s was zero here all the time; and now we are looking at t plus tau plus delta and want to know what s is there. Obviously this depends on the length of tau (the longer tau, the more probable a cut) and on the distance between the motion vectors or the histograms (the higher the distance, the more probable a cut). Good. If we take p as the distribution density of the elapsed time, then the log posterior odds ratio compares the probability of observing a certain distance in this interval under the assumption that there is a cut, that we are between two shots, with the same distance under the assumption that there is no cut, that we are still within the same shot, and it additionally caters for the distribution of the elapsed time: I am looking at the interval from tau to tau plus delta, and from tau plus delta onwards I ask how probable it is that this shot will still go on. Yes? Right, exactly. So, same as before, just in a different notation. Okay. So in the Bayesian approach we have to decide whether there is a shot transition in some time interval or not, using a kind of threshold estimation. If the last cut took place at time t and we now observe a difference of a certain size, then there is a new cut if this part is bigger than that part: if the difference is big enough to justify a cut, given how much time has elapsed. Right? The more time has elapsed, the less difference I need, because the cut becomes more probable with elapsing time. The less time has elapsed, the more difference there has to be to detect a cut, because shots have a certain typical length and very short shots are improbable. These two play against each other, and they even out at some point; that is the sweet spot where I say, okay, there must be a cut in between. Clear? Yes? Okay. What happens is that by introducing the a priori probability, the verification of the hypothesis no longer depends on a fixed threshold; the threshold goes down over time. The longer my shot lasts, the less difference I need to say this is a shot boundary. It is not a fixed threshold anymore; that is the interesting point. By using the statistical model the decision still depends on the difference between frames, of course, but it does not depend on one fixed threshold that is the same at every moment in the movie, like it did with plain thresholding or twin thresholding, where there had to be some fixed notable difference and then there was a shot. What we can say now is that the time a shot has already lasted influences how big the threshold needs to be.
If a shot is still very young, we need a high difference; the longer the shot lasts, the lower the difference can be, that is, the lower the threshold. This is the basic idea: we have a dynamically changing threshold that decreases with the time elapsed since the last cut. And the exact rate at which it decreases, the exact curve, can be derived from an Erlang or a Weibull model; we can compute that, we just have to estimate the parameters of the distribution from the training collection. We don't care about the details here, we just want the idea. So what happens is that if you have the Erlang distribution, you can transform it into a threshold function, where the threshold, depending on the point in time you are at, is given by this function. Good. A typical time course of the threshold looks like this: it starts high, and the longer the shot lasts (this is the time axis), the lower the threshold gets. I go through the video frame by frame, recording the differences between the frames, and I am not only recording the difference between frames, but the difference given the time the shot has already lasted. So I note a shot boundary here, and then I look at the differences as time passes. At some point, say this is tau plus 3 delta, three time units have passed since the last shot boundary; I look up this time here, 3 delta, and find out how high my threshold has to be to justify a shot boundary. And the longer it goes on, 4 delta, 5 delta, and so on, the lower my threshold has to be. Okay? Good. Initially the threshold is high, cuts are unlikely, and cuts are only accepted if the frame differences are very high: hard cuts. Then the threshold starts dropping, and cuts are accepted for smaller changes in the features. One of the problems is that these distributions built on the Poisson process are always positive. The threshold converges to a positive value; it never crosses zero, so there is always a leftover probability that the shot simply goes on, and the threshold settles at a constant level when it comes to consecutive soft cuts. At some point you have to take this positive value and say, okay, this must be a soft cut, and start again with a high threshold; you just assume the cut here, which is a bit problematic. For all the Erlang thresholds we can see that, as time goes toward infinity, there is a limiting threshold, a positive number that the curve converges to, and it could be that we cannot detect something in that tail of the distribution. The reason is basically that the Erlang distribution, like the Weibull, is built from the exponential distribution, and the exponential distribution is always positive; it never reaches zero. Yes? Well, at some point there should be a cut, otherwise the whole movie would be one shot, which is extremely unlikely. But because of this convergence it does not become any more unlikely with time; the probability that the shot goes on stays the same.
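A small numeric sketch of such a falling threshold, derived from an Erlang duration prior fitted on training data (the shape, rate, and step size below are illustrative assumptions): the threshold at elapsed time tau is the value that the difference log-likelihood ratio has to exceed, log((1 - h) / h), where h is the probability that the shot ends within the next delta seconds given that it has already survived tau seconds.

```python
import math

def erlang_survival(t, r, lam):
    """P(shot duration > t) for an Erlang(r, lam) model:
    exp(-lam*t) * sum_{k=0}^{r-1} (lam*t)^k / k!"""
    return math.exp(-lam * t) * sum((lam * t) ** k / math.factorial(k)
                                    for k in range(r))

def dynamic_threshold(tau, delta, r, lam):
    """Threshold on the log-likelihood ratio of the frame difference:
    high right after a cut, falling as the shot gets older."""
    s_now, s_next = erlang_survival(tau, r, lam), erlang_survival(tau + delta, r, lam)
    h = (s_now - s_next) / s_now      # chance that the shot ends in the next delta
    return math.log((1.0 - h) / h)

# Example: r = 3, average shot length r/lam = 5 s, checking every 0.5 s.
r, lam, delta = 3, 0.6, 0.5
for tau in (0.5, 2.0, 5.0, 10.0, 20.0):
    print(tau, round(dynamic_threshold(tau, delta, r, lam), 2))
# The printed thresholds decrease with tau but level off at a positive
# limit, which is exactly the flaw discussed above that the Weibull model avoids.
```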
And this is definitely something that we do not want. And that can be, again, cleared up by using not the Erlang distribution, which is simpler, but the more complicated Bible distribution that doesn't have this force. It's just for the records here. So interestingly, this has also been experimentally verified. So this statistical model and Vaskon-Selos and Lippmann tested it with a collection of cinema trailer. And so they were using some for the training and then were determining typical length of shots and the typical difference between frames within shots and outside of shots, you know, I thought between shots. And then they segmented a new trailer for the movie Blankman. And probably this is not going to work, is it? Okay, this is not going to work. There we go. There we go. Well, obviously, move you one has to watch. Very promising. And if we look at what is happening for the shots in this trailer, then we can use simple color histogram distances for determining the activity. And based on the collection they had before, they did something that was very interesting. And as I showed you before, we have two distributions. One is within the shots and one is between the shots. So what I can always do is kind of like I can get a fixed threshold and say, well, this seems to be the speed spot. I'm making very little mistakes here and very little mistakes here. And if you choose this fixed threshold, then you can, of course, kind of segment the video based on how we did it before with the thresholding, okay? Use the fixed threshold. You look at the differences in the color histograms and then you just segment it. And if we do that and if we use the method before where the thresholds change over time and where they're dynamically assigned over time, then we can see that with a fixed threshold we get a lot of missed cuts. These are the kind of the bubbles here, okay? So we see the peaks in the differences. So this is basically the differences and this is basically the time axis. And we see some of the cuts that are above this fixed threshold. So this is the fixed threshold. And so, for example, here is a proper cut. And here is a proper cut. But here we can see there are missed cuts. Here is a wrong cut and so on and so on. How does it look like if we use our dynamic threshold? Well, basically, we don't have a fixed threshold any longer. But what we do have is kind of like a decreasing threshold. So whenever there was a cut, start of the movie, it starts high and decreases according to the function that we just computed, okay? And when there's another cut, it starts high again and starts decreasing again, okay? This is a typical way of finding dynamic threshold. And what we find is that we don't miss too many. So we don't over-segment the video, but we use a little or we miss a couple of the cuts. Before was a fixed threshold. We had a lot of points where we actually indicated cuts where there were no cuts, okay? Now we are kind of missing some cuts that we don't see. But we are hardly ever indicating cuts where there are no cuts. So this is kind of different and if we have two samples here and do a direct comparison, we can see that with a fixed threshold, for example, these two cuts here are wrongly detected and with a viable threshold, this mountain that is kind of like building up here, but that belongs to the same shot is not detected wrongly because our threshold dynamically thinks and then the correct cut is determined over here, okay? 
Can happen that this cut over here that was correctly classified by the fixed threshold is missed over here because after this cut we start it again very high with our threshold, okay? So we are rather prone to missing cuts than to predicting wrong cuts with a dynamic threshold, but after all the number of cuts that we wrongly predict is much lower. Okay. Yes? No. The distance measurement is the same all the time. Oh yeah, that's, well, yeah, basically this is not because the distance changes, but because the reference frame changes over the time, okay? So the above one is basically between the adjacent frames and the one below here is building up in the time intervals, okay? So if we look at the total number of errors, we see that the misses for the fixed threshold are less than the misses for the viable of the air length, so the dynamic thresholds, but on the other hand, the false positives, maybe I should use blue here, the false positives for the fixed threshold is much higher than for the viable or airline thresholds. And in total, we can see there's a certain gain that we can make, okay? Good. That was basically what we were doing today, so we're considering shot detection. Shot detection is a very important tool for video abstraction where we want to segment videos into certain, well, first shots, structural elements, story units, and to get this semantics across we need a good detection on what is a scene in the video. There are some basic shot detection methods that are threshold based. If we consider hard and soft cuts, a twin thresholding algorithm will be the best thing where we kind of like keep the reference frame fixed. We've talked about statistical structure model where we see how the changes or how changes can be used to classify images, for example, the short cuts in action movie, the longer cuts in romance movies or comedies, so also that shows something. And then I showed you some statistical models where we could deal with the time that is a shot actually is running and how the threshold can be dynamically adapted to get better shot boundary detection, okay? Next lecture we will talk about video signatures and the idea is how to determine whether two videos are similar or not. What do we do? Compare them image by image or what is the idea here? It's not so easy. Was much easier with the images or with the audio but with the temporal perspective of the video kind of very difficult. This is what we will deal with next time. Thanks for the attention.
In this course, we examine the aspects regarding building multimedia database systems and give an insight into the used techniques. The course deals with content-specific retrieval of multimedia data. Basic issue is the efficient storage and subsequent retrieval of multimedia documents. The general structure of the course is: - Basic characteristics of multimedia databases - Evaluation of retrieval effectiveness, Precision-Recall Analysis - Semantic content of image-content search - Image representation, low-level and high-level features - Texture features, random-field models - Audio formats, sampling, metadata - Thematic search within music tracks - Query formulation in music databases - Media representation for video - Frame / Shot Detection, Event Detection - Video segmentation and video summarization - Video Indexing, MPEG-7 - Extraction of low-and high-level features -Integration of features and efficient similarity comparison - Indexing over inverted file index, indexing Gemini, R *- trees
10.5446/348 (DOI)
Okay, so let's continue with sound creation. So the most basic way of creating sounds, we know it from humans. It's pretty easy. So the air coming from the lungs passes through the chord and this vibrates. The chord vibrates. It transmits this vibration through air to the receivers, the ears, and this is how after the vibration is received through the membrane of the ear, it is transformed into electrical impulses and transmitted through the brain. The brain then transforms this back to what we perceive as sound. So this is the most basic example of how sound creation works. But we usually classify instruments based on how they generate this vibration. So like for example, we know there are string instruments. You pull on the string like a quitar for example and this vibrates and this vibration again is transmitted through the air. And there are blowing instruments or percussion instruments like a drum for example. It has a membrane. You hit the membrane and it again vibrates and transmits the sound. And again the acoustic, this depends on this vibration generator. So this is the important factor. For example, if you have the membrane or the string, you hear and perceive the sounds as being different. If you have the same pitch or the same tone, they still hear differently. For the synthetic generation, what we've just seen in the detour, if you want to create the standard tone A, you need some kind of an oscillator. And this oscillator generates voltage oscillations, but they are transformed into sound by speakers. So these are the ones, speakers are just membranes and through their vibrations, again the sound is transmitted through the air to the receivers. The oscillator can be influenced so by inputting higher voltage, it results in a higher frequency. And this for example has been exploited by Mug, who in 1964 has put the basics of the first synthesizer and again with this kind of manipulation, he could achieve different kind of frequency and with the amplifier it could affect the volume, so the sound could be more loud. But as you've seen in the detour, these synthesized sounds, they sound kind of metallic. We don't really want that, we want natural sounds. And if you want to achieve natural sounds, for example the human voice, if you sing it's not perfect, you sing the standard tone A and the human voice has a period where it starts from low, it raises up to the pitch, it's going to sing the note. Then maybe overshoots, reaches up to a point above the note is going to sing, then there's this period here, this is the classical attack period, when you prepare to make a sound. And then in order to reach the tone you are going to sing, there's a decay period where you compensate for this overshooting. This is also rather short in time and then comes a sustain period, the period where you actually do the actual sound you wanted to do and then a release. So you are slowly ending the sound you wanted to make. So this is a classical envelope curve that influences the loudness of the sound based on the time. And in order for the sound not to be that metallic anymore, so in order to produce more realistic sounds, one has come up to the idea to add this attack decay sustain release, envelope curve also to synthesized sounds, just to make them here better. Yeah, you can see here an example of how such a modulator synthesizer looks like, this is the version of the MOOG synthesizer from 1967, it's a bit more evolved, this is the original one. 
As you can see you have here a keyboard one could press on, but actually what this keyboard does, it displays synthesized notes and this actually says what kind of a current should pass through, you can adjust the loudness, you can adjust the frequency with a lot of adjusting possibilities. So this is how you can produce synthesized sounds. And what's interesting to see is that with such a synthesizer Emerson Lake and Palmer have held a concert, so you can also find this video on YouTube, the great gates of Kiev from 1974, they actually have done the first step store, Electronical Music. Let's see if I can play it from here, it's better to go directly to the source. Okay, so, let's see, this is it. Yeah, this is Emerson Lake and Palmer. Yeah, yeah, yeah, yeah. Yeah, quite interesting. So I wouldn't necessarily hear only such music, but they were the first steps towards such music, and it's quite interesting how he played with the buttons and he managed to influence the sound like that. So this is what you could do with the synthesizers back then. Right now, today, you know, most of the music is built on the computer and all the synthesizers are now software made and they work quite well. So something like that put the basics of actually what we do today here in radio. Okay, back to the attack decay sustain release curve. As I've previously mentioned, the problem with synthesized sounds, they sound rather metallic. So for producing a sound which is more close to the imperfection of typical instruments, one has used this kind of behavior of the sound in time. So the first phase, the attack with a certain overshoot in the level, then comes a decay where the actual desired level is reached, then a longer sustain phase where the note, the one that was aimed for is sung, and then a release which is usually rather short in time, so it just decreases up to zero. Okay, one of the most important parts in audio is the digitalization of audio data. So we have spoken about audio data as representing a signal, but in order to save such a signal, one would need to save each point on the signal. And this is actually not a very good idea because you imagine for music piece, then you would have a lot of data to save. So the solution is to perform sampling, and the concept of sampling is just of looking at the regular intervals in time on the curve. Let's take, for example, this curve here. So this is our signal, and I'm going to look at different intervals in time. I'm going to start here, and here, and here, and here, and here, and so on up to the end of the signal. And in these discrete moments, I'm going to check what's the amplitude of the signal. And the first amplitude I'm going to measure is zero, then it's this one here, then the next amplitude, it's somewhere here, and so on. So this is how I'm going to discretize the signal. But the most important part is when discretizing, I have to make sure that the resulting signal is enough in order to reconstruct the original signal. And of course, the purpose is to save less data, but the second one is I need to take care that I can reconstruct the signal. And when performing sampling, the basic characteristics I need to consider are the sampling rate. This basically means how many times in the time unit, so in the interval I've drawn before, do I have to tap into the signal? How many times do I have to look and see on the curve what's the amplitude of the signal? 
And the higher the sampling rate, the better the quality of the digitalized signal and the better I can reconstruct the original. And the other side, so actually I try to lose less data if I increase the sampling rate. The other characteristic is the resolution. So how do I save the digitalized data? And with which accuracy do I save it? And often a resolution of 16 bits is used. This means actually a two or the power of 16 different amplitude values. For the sampling rate, it's actually application dependent, so it depends on what do you have. For example, for music, it's quite common to use a sampling rate of 44 kHz, while for phone the sampling rate, which is common, is somewhere around 8 kHz. The idea is because of the difference in the data. For example, for audio we are interested in the quality, we are interested in the full spectrum of frequency, we don't really want to lose anything, so we are interested also in a higher sampling rate. On the other side, in a telephone talk, we are interested in understanding what the other one says. We don't really care to get all the background noise or something like that, actually we really want to get rid of that. So we kind of usually use filters to filter that out. This is why it wouldn't make sense to use a higher sampling rate. So it's also good for the networks transmitting that signal because lower sampling rate means less data to transmit. Another advantage. Okay so what's really important when performing sampling is that after sampling, after I discretize the signal, I need to be able to uniquely reconstruct the initial oscillation. The higher the sampling frequency, the more I have to save. Of course, the less I lose from the original signal. And Niquist says with his sampling theorem that actually the sampling rate that I really need to use must be at least twice as large as the highest frequency occurring in the signal. So if I have the highest frequency in a signal of like 20,000 Hz, then I should consider sampling with 40,000 Hz. This is what Niquist says. And let's look at some examples. So if you have a simple sinus curve and you perform simple sampling of one sample per period, so a constant one, I'd say I'm going to sample here, here and here, so once per period. And I want to reconstruct the signal. The problem is there are other signals, like for example the simplest one is the constant signal that passes through the same point. So if I have this sampled signal, this point here, I can't be sure which of the two has generated it, the sinus curve or the constant signal. This is why such a sampling rate of one sample per period is not enough. Another example would be this one here, so going to 1.5 samples per period. This actually means so, in a sequence of two periods, I would have like three samples. I would look like three times. So what this basically means is that I'm going to look and see the value of the amplitude here, then here and then here. So now I have my two periods, I've measured in a sequence of two periods, the amplitude values three times. The problem is again, there can be another curve, another sinus curve, and this is this one here, which passes exactly through the same points I've checked with my sampling procedure, but it's different than the sinus curve I wanted to discretize. So I can't uniquely go back. As you can see, the curve, the blue one, it has a lower frequency, so it's a lower curve. I can't know afterwards, I can't reconstruct and say, okay, this one was responsible or the other. 
So this is why, and Quist says, we should use two samples per period. This is one example of this case, so I've just done my two samples for that period, then two samples, and again two samples, and there is only one curve, only one sinus curve, perfect sinus curve without noise or something like this, which goes through exactly these points. So this is basically the idea of Innequist's theorem. Okay, so typical sampling rates, again for the phone, cell phone, and phone talks, it's about 8 kHz. For DVD, you can have up to 192,000 Hz, so 192 kHz, which is quite high. You could wonder why such a high sampling rate. So the idea is that such mediums, even for the audio CDs, you don't really want to lose anything, you don't want to lose quality, you don't want to lose maybe noise, maybe that noise was supposed to be there, maybe you don't have perfect sinus waves. And this is exactly the idea here. The more signal you are able to store, the better you represent the original signal. And yeah, we are not all, we don't have a trained ear, but if someone with a better ear hears a signal, he can differentiate between the quality of a DVD and of an audio CD or classical mp3. So this is why you can go up to very high sampling rates, and it also makes sense. And if we look at such sampling rates and consider that the resolution is somewhere of 16 bits per measurement, then you have a throughput of like 176 kbps. This means that actually, for a minute of sound, you have like 10 Mb of data. You are probably used to the idea that a CD holds 635 or about 700 Mb of data. It also holds like an hour of music, depending on how long the audio songs are and so on. So it's quite a lot of data. And for space reasons, usually we compress this data. We have compression, we usually apply it for files, we know it, zip or run length coding, a lot of procedures. So on the other side, we have some uncompressed formats we use for audio, where they are built for quality, and then we have some compressed formats where they build for storage or simply for network transport and so on. And the most well-known uncompressed formats are the one from Apple, the Apple Inter-Opportunity file format. You may be know the Wave file, the one from Windows. Well, this format here is actually not that used anymore. It was used for the Institute de Recherche Coordination Acoustique and Music. It's probably more used in the research, in sound labs and so on. And San also had their format, the AU. Okay, let's discuss a bit about compression. So as I've said, compression is a big issue when we speak about audio sound. 600 mega for an hour of music, it's already a lot. What we actually want to achieve is some data reduction, but we have to give something back. There are two ways of performing that. One is lossy. So we lose some data. The data is not perfect anymore. Or one which is lossless, where we don't really get to compress that match. Usually you get to obtain a factor of 2 or something like this. So for example, if you have 600 mega audio data, you may compress it lossless to a half of that. But if you compare it with the lossy, the lossy can achieve even a factor of 10. So you can compress 600 mega to about 60, which is a lot. Okay, so the most used here is the free lossless audio codec for the lossless compression, and it achieves about 50 to 60% from the original size. Another is the April lossless or the waveback. 
The lossy compression algorithms usually use transformations like the discrete cosine transformation or the modified discrete cosine transformation or the wavelet. The idea here is to transform the signal in the frequency space, and then in this frequency space obtain the most important frequencies based on their coefficients, hold only those frequencies and cut the ones which are minor for which the coefficients are small. So practically when you perform this transformation, you get a series of coefficients and the corresponding waves, and you just cut, for example, after the first five. By cutting those, you lose some data, but that data is not actually very important. The most important is the beginning part of the series. Okay, when performing compression, you actually have two steps. The first one is the encoding where you transform the waveform in frequency sequences or the sampling, and the second one is the decoding. So you have to play it somehow, you have to reconstruct these waveforms from the values you have obtained through the encoding. But the big question here is what are we going to cut? What can we afford losing? The goal or what we want to obtain is we want to lose some data, it's okay, we want to obtain better space efficiency, but we want to maintain the subjective perception. So we don't want to lose that much data so that we won't even recognize the sound anymore. And here we can do some tricks. For example, we can omit either very high or very low frequencies. We've said that the human ear can hear from somewhere from 50 up to 20, 50 hertz up to 20 kilohertz. So basically, I don't really need to store something which is 50 or under 50 because I won't hear it anyway. The same goes for sounds above 20 kilohertz. So I don't really need my dog to hear the sound, I'm just interested to keep what I'm going to hear. So I can cut that also. On the other side, I can, for example, save the superimposed frequencies with less precision. So frequencies which come after other frequencies can be saved with less precision because they are not that important. So something which is more powerful will screen something which is less powerful. So if I'm talking to someone and near me there's a construction yard and they make a lot of noises, he won't really hear my voice. So if it's not hearable, I can blend it. This is called blending low tones after very loud sounds. So my ear tunes to the loud sound. It hears that one but it doesn't hear anything else. Some other psychoacoustic observations which come in handy here are the changes at a very small distance are impossible to hear. So if the change is very slight, I don't know if I should consume any data to, you know, save that change because it won't be perceived anyway. So I could save myself some data and leave the tone as it was. From the compressed standards, one of the most known is the MPEG standard with different layers and most of us, we know the MP3 standard. You have MP3s right now on your cell phones or on your iPods or MP3 players or whatever. And the quality of the sound here is near to the CD quality and the bitrate is of 128 kilobits per second. The course idea for the MP3, so what they are basically doing is they are coupling stereo signal by registering only the difference between the left and right channels. So for example, they are recording what happens on the left channel. We know we have a stereo signal but we don't really care to save all the data for the other channel also. 
We just measure them and say, okay, I'm going to store only what's different between the left and the right. And this way, I will have a lot of zeros in my right signal because most of the signal will be the same and I can compress that very good. The second thing MP3 uses is cutting off the inaudible frequencies. So what I'm not going to hear under 50 or above 20 kHz is going to be eliminated, making use of the PSIHO acoustic effects and again using the Hoffman encoding as I've said in concordance for example with coupling the stereo signal, so left to right and not only. So Hoffman can be used anywhere on the signal. What we usually have today is the advanced audio coding. So if you have some newer device like a newer receiver or something, a 7 plus 1, you really don't have stereo anymore. You have home cinema system and this is what AAC is able to do. It provides, it's an industry improvement of the MP3 so it actually basically is the same but with more channels. It's usually a heap for TV and radio broadcasts and it actually offers better quality for the same file size. As I said the most important point, multi-channel audio. It actually supports 48 min sound channels with up to a 96 kHz sampling rate, so quite high sampling rate, much higher than we used to have. Okay, you can search for more information on the AAC if you're interested on the internet. Other compression formats are the OGG4, the real audio from real networks and Windows media audio 9 I think. I think we are already by version 10 right now. Anyway, okay, I've searched for some experiments and I've seen that the lossless compression as I've said, obtained somewhere of a factor of 50 for the compression. The most important factors are the compression rate. How can I obtain better compression rate and some other important factors are the speed of the compression and decompression. We don't really want to wait for a week to compress our library of music and on the other side if I'm going to play it I don't really want to decompress it and then play it. I want that the decompression process happens on the fly and if I press the play button I also hear my sound. So taking into consideration these two factors, for example I have here the result for different compressors and for example the FLAC compressor we've previously discussed about obtains a relatively good compression ratio, quite good encoding speed, so it's a factor of 20 in real time speed and a very good decompression rate. So actually this one is very used if you have a library of sound of music which you really want to have in original quality. For the lossy compression, besides the decompression and compression speed which of course are of importance, the compression rate is very important so I'm interested if I'm going to lose data I'm interested in something better than 50%, I'm interested in something like 90% compression rate. And the most important factor is the quality, so I'm losing data but I really don't want to lose the sound. I'm not going to accept that when I compress it with lossy compression what I receive I can't recognize anymore. So in order to measure the quality of these compression procedures I've observed an experiment which has been published on the internet and the idea here was to perform a mean opinion score measurement with different human subjects. 
They were given a scale from 1 to 5 and above 5 where they had to rank a sound as being heavily distorted or unpleasant up to they didn't recognize any difference between the compressed and the original sound. And the results are quite interesting so for example for the AAC we can observe that the average quality is above 5 so most of the subjects didn't see any difference with the highest rating received by human subjects being to somewhere with the value of 9 and the lowest being somewhere 4.5 or so. So actually even the harshest critics didn't give a bad grade for the AAC and combined with the very good compression rates and the fact that it supports multi-channels it's a great way to compress music. Again you have here statistics for different codecs but as I've said AAC and its variations is the winner of it all. Okay let's go further to another music format the MIDI format I don't know if you're accustomed to or if you've heard MIDI format it was quite hip in the 90s or 92, 94. Actually the MIDI format was considered as a communication protocol so the idea was to transform to transmit the music or the recording between digital instruments and the PC so the sounds have been inputted from a keyboard and inputted to the computer and the computer saved them as commands to the sound card. For example now it will be played a certain tone with a certain length in time with a certain speed so a certain note, key, velocity, pitch and what instrument is that. And that was it so this is the MIDI format of course if you only have such a note sequence you don't really get to say for example voice so you won't hear in a MIDI sound the voice of the singer. Let me play you an example. I'll go to the source again. Okay so that's the MIDI. So actually this is where the part where the singer was supposed to take in and sing over the notes but you don't have that in a MIDI file so in a MIDI you have just the notes and notes and that's it. Let's see how the original looks like. Yeah so here you have, you already have the singer, it's quite a big difference but taking into consideration that for example in the 90s you didn't really have a real sound card that the computer and MIDI files could have been played also by the computer speaker it was a great hit for the 90s as I've said to use the MIDI format for music storage. And the great part of it, here comes the great part of it, 10 minutes of music are not 10 meg, they are 200 kilobytes of data so it's a great difference between what you were supposed to store by storing a MIDI data and the original sound. As I've said the data is inputted into the PC via a keyboard or and are outputted via a synthesizer. Sequencer can be used for caching the data and if you want to do any changes like for example if you feel that the notes are synthetical you can add this envelope curve and transform it to hear more natural. Okay let's go to the next section, the audio information in databases. So we actually have for audio data we have music, CDs, we have sound effects or ear chords like for example we have a database of sounds which you can use like for example for editing music maybe you know this modern software we are going to use with pieces of instruments with different notes you can blend in and create music. So this is the audio data and you can all have them in a database and search for them or use them and somehow query for them. 
On the other side the audio data may also represent also the process of information transfer so if you had there just music to listen to on the other side you may for example store historical speeches where not the data itself is important but the message extracted from it and if you would have for example the transcript of the speech the text it wouldn't be the same because you lose information in text you have what the person doing the speech has said but you don't really have for example the retro-ics the way he said it the intonation for example or the reaction of the public it can also be used for recordings of conversations so protocol phone calls or negotiations so these are typical information you can store in a database. Usually when dealing with audio databases there are three typical applications of audio signals in the context of databases one of them is the identification of audio signals so for example the audio query. The classical example here is when you go to a music shop and you want to buy some music piece you've woken up with a certain song in your head and you don't know how it's called so you don't have the title you can't go on internet and buy it from amazon because you don't know how it's called and you go to the music shop and tell it to the guy do you have that CD that CD with that melody that sound like this and you have this audio as a query while having a database which is able to understand your query so if you sing it or whistle it or whatever then it could deliver you either the sound or information about it so the melody or the information about it. This is a typical scenario of audio as a query or identification of audio signals. Another application is the classification and search of similar signals like for example I want to cluster together similar music pieces or music pieces belonging to a certain genre like for example I've heard this song and I want something similar I like it it's okay but I like this type of song so I want something similar. This is a typical case of classification and similarity search. And there's also phonetic synchronization where for example I have the text and I have some spoken speech some speech and I want a synchronization between the two so what's spoken and what I have here but actually phonetic synchronization is not something we're going to deal in the lecture so what we're going to focus on is the identification of audio signals so audio is a query for example. Okay so typical tasks in the identification of audio signal I want to find the title for a music piece I have in my head maybe you already know Shazam it's one of the first applications I think it was first written for iPhone and that's exactly that so well not from the human voice but you can pull out your cell phone start Shazam and let it record some sort short piece of music from the radio and then Shazam will tell you how this music piece is called who sings it and maybe if you want to buy it on iTunes or something like that. 
It's also a great idea so this identification of audio signals can also be a great idea for monitoring audio streams like for example if I'm doing advertising on the radio and I want to be sure that the radio helps its promise and advertises or runs my advertisement three days a day as we've done our contract well for that I don't need to sit near the radio and try to be careful if he did play my advertising three times or four times or no time but I can perform this automatically if I am able to perform identification of audio signals so what is going to happen it's a tool will monitor the radio program and compare the radio program with my advertisement and count the matches and when it sees that the program compare the different parts windows of the program match with my advertisement then it's great to count them and if I have three of them I'm happy. Another use case can be used by the copyright control like for example looking at what the radio play just compare them with different song pieces and see if they have license on that or so on. Another typical application is audio on demand so I want to hear some song from iTunes or something like that then I can request it and it will be sent stream to me through through net or a lot of services you would use that. Okay the second type of application is the classification and matching so the task here is to find audio signals which are perceptionally similar so I want to find pieces of music which are kind of the same and there is a great really big field the field of recommender systems doing just that so for example I don't know if you are aware of last FM they also have a great app so program will be programmable interface where you can use it to input like for example give me songs which are similar to this one or give me artists which are similar to this one like for example you sort something similar to Queen or something similar to Madonna and they give you a list and this is actually done based on matching of songs how well they match how well they classify together so basically genre classification. Audio libraries this is nice application for audio libraries to perform this classification automatically and the synchronization of audio as I previously mentioned synchronization between speech and text or between notes and audio where am I right now following the notes and what is he singing right now or retrieval of text from speech for example so to find a specific point in a speech but as I've said this is a part we're not going to concentrate on so we are much more interested in query by sound. Okay so the state of the art of all these three applications let's start with the identification which we will also treat in this lecture it's the simplest of these three problems and actually it's successfully being resolved as I've said Shazam is an example Mido me does that online so you can check it and test it you'll also do it as a detour in the next lecture very interesting application. For the classification and matching it's still a lot of it still leaves a lot of room to manual annotations so actually it's done with a lot of manual work metadata and so on the automatic classification works only roughly on small collection of sounds so this metric process it's still problematic it involves a lot of training usually and it's prone to error it's a probabilistic approach typical procedures here are machine learning techniques and they work but it's not as good as how the identification for example works. 
For the synchronization in the meantime one can obtain tolerable error rates like for the synchronization between language and text. Okay so we've spoken about the state of the art of general applications but before speaking about audio databases we need to speak of how to make this data persistent how do we store it and usually the audio data are stored in blobs in the database so actually they are nothing more than binary large objects most of the databases support blobs and you can store either videos or audios they don't really care what you store there and they're actually not that useful from this point of view because you can have metadata about them you can know that in the blob you've saved a song with this title or so on but this doesn't really help you do some content search. There's also the concept of smart blobs or usual blobs the difference is one of them is managed by the operation system and one of them is managed by the database itself. Usually there are metadata so like for example the title or the file size and bytes or the last time it was modified and so on or the feature vectors like for example sound features the amplitude or the loudness or we'll speak about them they're just the same as we've done in image you remember if you when we spoke about images we said we have some feature vectors like I don't know the color the brightness and so on this is exactly what we have here also and these metadata together with the feature vectors they help us perform the search functionality for example they help us perform transcription of languages text or annotate music pieces or or midis. That's the section we're going to concentrate on and this is the most important part that we want to perform the audio retrieval so this is the central point of our our lectures regarding audio. How do we search in audio and of course the most easy approach is a metadata driven search and it's great if you have metadata because you can have semantic metadata and these are for example the title the artist or the speaker if this is a speech or some kind of keywords but all this it's manually generated. It's like again in the in the image case where this semantic metadata was a photo of me in Paris or photo of me near I don't know my best friend and my best best friend somewhere. The semantic metadata is difficult because it's difficult to generate because it's all manual information so if you have it it's great it can help your search if you don't you have a problem when searching only on metadata. On the other side you have some automatically generated metadata like for example the time or place for images it can be generated through geo sensors the recording the file name how it's called the size the hour that's automatically if you can use it it's great. 
Metadata is great as I've said if you have it you can use it and this is the foundation of typical music exchange markets you most surely have heard about the success Napsterhead or you may have also used probably Kaza you are searching for a certain title the title has been already introduced there by someone uploading the file or holding the file on his computer and this is basically how you do the search you do search through the metadata but this manual indexing regarding title author or whatever you are inputting as metadata is labor intensive and expensive and this information is usually incomplete for example the genre classification one might feel that this music piece is pop but he's not really an expert so maybe this is something else or maybe his dynamic stake or maybe this music piece belongs to more genres so if you're going to perform a search with the correct genre however that music piece has not been labeled correctly you won't find it. The most the major problem here is that you have no possibility of performing query by example and this is actually what the core of a multimedia database should be so I want to search for a sound that hears like this and I want to hum it or to sing it or to whistle it and I want the database to return the information the file whatever I'm going to search for and for this we need to be able to search as I've said through query by example directly in the audio file so not quite in the metadata but the current systems managed to do is to use something like SQL with approximate string search on the metadata so the like close select I don't know music piece from music database where title like begin Japan or only Japan and it will find it based on string similarity but that's not that great that's not really what we want here so the core of multimedia database should be using content of audio files again of course using metadata to if you have it but the core should be using content of the audio files and the most trivial idea if you have two pieces of sound so to audio files and you want to compare them you want to establish how similar they want they are to each other you can compare them measure versus measure so you can take point by point each of the two signals and compare them now imagine that you have a big database so this will mean will mean that you compare your query sound with each sound in the database and that's a lot if you do it point by point we've discussed it also in the case of images is not really promising and it's really inefficient because on the one side you have a lot of points to compare on the second side it may be that your query contains for example only the refrain so it doesn't begin from the beginning or you have differences in sampling rates or in resolution so you can't even match them even it's the same song but it has a different sampling rate so the solution in this case is to use again features you may have low level features or high level features or logical features in the case of low level features you may have information like for example what's the loudness how loud is the sound or what kind of frequencies are there in the sound what's the intensity of those frequencies and you can compare them and this is actually the foundation of the content based search in audio so as I've mentioned it's basically the same as an image databases the same basic idea I want to describe the signal by means of a set of characteristic features these will be the feature vectors I'm going to compare of course there's a 
big difference when compared to what we've discussed about in the image information because here we need to consider that audio is a time dependent signal image in image you have a two dimensional signal it's the space of the image here you also have to add the time so the vector has to be dependent on time this is why the vector is actually time dependent so at the time point I have to compare the two vectors of two different sounds typical low level features are the mean amplitude or the loudness how loud is the first sound how loud is the second how does the frequency distribution look like for example for voice I'm going to expect lower distribution for music I'm going to expect higher distribution and I can already imagine that the frequency distribution could help me to differentiate between speech and music yeah just simple typical low level features I can use to filter a lot from the from the database so do some pruning for example another typical low level feature is the pitch the pitch we're going to discuss next lecture into more detail it's the frequency of a note so what is the note being played the brightness or how high does a sound a music piece feel like are the frequencies higher or lower like for example the brightness of voice is lower than the brightness of music because I have a lot of high frequencies where in the voice I have a lot of low frequencies and the bandwidth so if you measure the lowest and the highest frequency that interval you get that's the bandwidth again for voice it's lower than than for music and then you have this low level features which can be measured in the time domain so you in the time domain you have the signal which is amplitude represented as the amplitude versus the time you have something for example something like this that's the amplitude that's the time and the signal is something like that or you have the frequency domain where you have intensity like for example a spectrum or something like that versus the frequency yeah so you have here I don't know maybe like 20 kilohertz you have here something like zero frequency and then you have a lot of I don't know maybe this is 400 something like the representation in the frequency domain will speak about spectrograms in this case okay so the amplitude the amplitude is the fluctuation around the zero point it gives me the loudness so the silence is then equivalent to zero amplitude well actually if I want to detect a period of silence I have to perform some heuristics but usually when you have zero amplitude there's no noise there's no movement there's nothing also in the time domain you have the average energy so this characterizes how loud is the entire signal and you can calculate it by by summing the value of the signal through its length so summing the square of each of each amplitude of each point into the signal in the signal that's the average energy dividing by the number of points that's the average energy another feature in the time domain is the zero crossing or the frequency of sign changes in the signal so what we're basically told here is that if two consecutive points have the same signal this is evaluated to zero so this won't add up here if they have different signals the value will be one and it will count here as one change and it will add up to the number of of changing changes and everything is then normalized so then you can compare zero crossing rates for two two sounds without taking into consideration that they have or they might they might have different lengths 
the silence ratio is another feature in time domain and it actually tells me the portion of values that belong to a period of silence but the great question here is what is silence so you can say that if you have an amplitude of zero that silence because you might have zero crossings for example and in the case of of a crossing the amplitude is zero so actually that's a good it's a good heuristic to establish a low pitch a pitch under which everything that you have so under a certain amplitude that is considered noise or silence yeah and what one also must establish is the number of consecutive readings or the numbers number of consecutive points for which the sound signal must have an amplitude lower than the established threshold so that it is considered a period of silence so if for example I have just one point under I don't know maybe a pitch of 10 a pitch an amplitude of 10 that represents my my silence threshold it doesn't really count as silence but if I have a consequence a consecutive sequence of like five or six such readings then that might be silence so it's it depends on on parameters and how you define them but silence can be detected that way okay so we've spoken a bit about the time domain what about the frequency domain we can perform a Fourier transformation of the signal and this actually means that we transport the signal that we have into the frequency domain we decompose it into frequencies each of this decompose frequency with corresponding coefficients and this is how we get the representation of the frequency spectrum of the signal the most important part here are the coefficients the coefficients of each frequency represent the amount of energy for frequency the bigger the coefficients the import the more important is that is that decomposed part of the signal so as I've said also in the compression you can hold the first five coefficients and cut cut the rest you won't lose that much for example the energy is the important part here and it's usually measured in in decibels and there are some features we can describe on this frequency spectrum for example this is how the sound looks like in the time domain this is here and in the frequency domain and you can already observe that there's a fundamental frequency here I think it is somewhere to about maybe 400 Hertz there is some noise here some noise here then there are some harmonics of this fundamental frequency which is the double of this of this frequency then there are smaller harmonics until they have no energy anymore so for example this one here this has the highest energy this one has less energy and and so on the bandwidth the bandwidth represents the interval between the occurring frequencies so you calculate the difference between the lowest frequency for the minimum the minimum frequency and the the highest frequency I've already defined the silence as being threshold dependent so if you have a certain threshold under which you consider that everything is silence then the next frequency which is about the silence threshold is is the one that's going to count as the minimum above that frequency so for example you don't hear anyway something under 50 so you can start considering 50 is the is the minimum if you have it in your signal or what's closest to 50 that will be the minimum this is a great feature to be used in classification like for example the bandwidth in music is higher than for the voice in music you may have a lot of instruments and those instruments may produce higher frequencies like for 
example 10 20 kilohertz frequency that's you still use you still hear but those frequencies you don't really create with voice maybe if you're an experimented opera singer or something like this yeah you may have a higher voice you may achieve that but usually in normal speech you don't have that so you have it in music but not in voice so that's that's great for for performing classification another feature in the frequency domain is the power distribution power can can be read directly from the frequency spectrum so what you actually can distinguish is the frequency with how power versus the energy or high energy versus those with low energy so basically the ones with the high decibel value are the ones with high high energy and based on this energy distributions you can calculate frequency bands with high or with low and you can calculate centroids for example to establish how high is the average frequency based on considering also the energy and this is how you calculate the brightness like for example in music you have a lot of strong higher frequencies so music may have a higher brightness than the voice the voice is lower you have a lot of like 4 kilohertz or 3 kilohertz so the brightness will be around 3-4 kilohertz harmonics again a feature in the frequency domain it counts for the lowest of the loud frequencies it's also called the fundamental frequency so if you have a fundamental frequency for example for for for music instruments you have also have harmonics which means that the signal increases repeats this this domino frequency in multiples so like for example if you have a standard pitch somewhere here like at 440 hertz and this is also your fundamental frequency this is how it looks on the synthesizer so it doesn't have any harmonics but if you do the same or on a flute for example you will also have its harmonics at like 880 which is the first harmonic 2 times 440 and then you will have at 1323 times 440 that's the third and so on harmonics and they decrease in intensity and what what this harmonic oscillations basically means so yeah this here is the fundamental frequency you may see here there might be some noise but this one is loud enough to take into consideration this is the first harmonic should be the 880 and the next ones and if you consider for example the harmonic oscillations again for string instruments what this means is that for example the first harmonic would look something like that the second harmonic would have doubled the the frequency and also something like that the third harmonic 3 times the frequency 4 times the frequency and so on and they all they all participate together in feeling the sound so in in in putting it in the the note it's being or the pitch is being played so as I've said it's a big difference between how the spectrum of a sound for an instrument looks like and the synthesized one because synthesizer doesn't have harmonics you may be able to simulate them but clear synthesized sound doesn't have something like that another feature which is one of the most important features is the pitch we're going to address this in in the next lecture and it's detectable only for periodic sounds so like for example the case of the sinus oscillations and it can be approximated by means of the Fourier spectrum and usually in most of the applications it's the pitch is considered to be the fundamental frequency the value is calculated from the frequencies and amplitudes of the peaks so as I've said this fundamental oops this fundamental frequency is 
usually used as an approximation there are also procedures one can use so algorithms one can use to perform pitch detection most of them are so one of the most well known are is the harmonic product spectrum we're going to discuss this in detail or like for example using auto correlation of the signal to detect the pitch we'll discuss about them in the next lectures this lecture we've discussed about the introduction into audio retrieval we've touched the basics of audio data we've discussed a bit of about what kind of audio information is stored in multimedia databases and we've started discussing about feature vectors and how we can perform audio retrieval and searching audio databases next lecture we'll discuss about classification and retrieval of audio so the second major application will continue with low-level audio features we'll go into the smallest difference a human may differentiate difference Lyman and we'll go deeper into procedures we can use in order to detect the pitch that's it thank you for the attention.
In this course, we examine the aspects regarding building multimedia database systems and give an insight into the used techniques. The course deals with content-specific retrieval of multimedia data. Basic issue is the efficient storage and subsequent retrieval of multimedia documents. The general structure of the course is: - Basic characteristics of multimedia databases - Evaluation of retrieval effectiveness, Precision-Recall Analysis - Semantic content of image-content search - Image representation, low-level and high-level features - Texture features, random-field models - Audio formats, sampling, metadata - Thematic search within music tracks - Query formulation in music databases - Media representation for video - Frame / Shot Detection, Event Detection - Video segmentation and video summarization - Video Indexing, MPEG-7 - Extraction of low-and high-level features -Integration of features and efficient similarity comparison - Indexing over inverted file index, indexing Gemini, R *- trees
10.5446/347 (DOI)
So, hello everyone and welcome to my lecture series this year about multimedia databases. And a fun semester it will be because we're covering a lot of ground with respect to all kinds of media. We will do images, we will do audio, we will do video, so everything that is kind of media. And we will show you how to store it, how to retrieve it and what to do in terms of general algorithms, general applications with it. So, let me start with a number of preliminaries. So the lecture will be three hours, it's two plus one. And we will have the exercises integrated into the lecture. You probably know if you've attended any lectures in my courses, the concept of a detour. So detours are usually some parts of the lecture where you can just sit back and relax and we will tell you something about applications or how to use it or what industry does with respect to what we are telling you. So you get a more practical impression of what this is all about and why you should know about this or why you should learn this. And we also will have exercises. Exercises, well there will be the sheets available on the web page, on the courses home page. And I really encourage you to do the exercises but they are not mandatory. So usually they are, well, just for you to understand what's going on, you know, like how the algorithms actually work, you know, to get a better feeling of what happens in the retrieval scenario. And if you do them consistently then I would say preparing for the exam is not a big deal. If you don't do them you might experience some problem or run in some time issues at the end of the semester when the exam is closed by. The exams as that is oral also in this case since this is a master's course and will take about half an hour per person or something like that. Usually we just try to figure out whether there's an understanding of the contents of the course or not. And as I said, the exercises will readily prepare you for the lecture so that shouldn't be a big point. Depending on the examination rules that you belong to, this will be a four or five credit course. If you're in one of the newer rules like the ETSC course of studies or the new master courses then it's five credits for the old master courses. It's still four credits so make sure in what or what examination rules apply to you. Good. In terms of recommended literature, I selected a couple of books. There are two German books. Ingo Schmidt, Ähnlichkeit suche in Multimedia Datenbanken, quite recent book that gives a good overview over the topic especially with respect to the retrieval paradigms, a little bit less with respect to the actual algorithms. For the actual algorithms, Ralf Steinmetz, Multimedia Technology, Grundlagen Komponenten und Systeme is also a very good book to see some of the algorithms to find out about it. Of course this course is taught in English since it is also part of an international master course so we have some English books here. For the image part, there's Castelli Bergmann, the image databases, Wiley, very nice book and one of the classics is the Multimedia and Imaging Databases of Koschafian and Baker. This is one of the books that I recommend for the course which will also cover a lot of ground that we are doing. However, most of the algorithms we are discussing do not occur in this way in any of the books so we will also rely on original papers especially when it comes to similarity search and how the similarity is defined and stuff like that. 
The original papers will all be provided on our web page so you can look them up and you do have the access over the university network to the ACM Digital Library, the IEEE Digital Library and stuff so you can get the papers at no charge and this is quite convenient. Good. Of course there's the course web page, it's at the E-Phys teaching MMDB and then you see the course web page. The course web page contains all the slides, it also contains a recording that we do of each lecture so if you want to do a little bit of e-learning or if you miss a lecture, no problem, you can see it, you can see the recorded versions and see all the slides, all the paintings that I do, the audio track, the only thing you cannot yet see is me because we don't have a camera in this room but that probably will change. However, I mean as much as it might aggravate you to not see me, we found it more important that you see the slides and the audio track. As soon as there are any questions, just drop by, we're in the second floor, or just send us mail, drop us mail, no problem and we will handle the thing just spontaneously. So whenever something is, just drop by. Same applies for the lecture. If something is unclear or you have some questions or need some clarification or whatever, just ask. This is interactive style and I encourage you to take part in the lecture and as I said, if there's something that is unclear or something that is bugging you somehow, just tell us about it and we can discuss it. There are probably questions that I will delay until the end of the lecture or say, well, let's talk about that separate moment if it takes too long but the usual stuff can be done right when the problem occurs. Good? Any questions? Nope? Very good. So what are we going to do today? So today's lecture is not the most refined lecture of the world, it's just an introduction. It gives you an overview of what multimedia databases actually are about. So what will we cover and what are the interesting topics? Aha, interesting. So break. So what are the interesting topics? What are the applications? What can we do without it? And how do we find out whether we have good multimedia database or good retrieval algorithm or whether we have bad retrieval algorithm? These are the three things that we will cover today. I do have to leave a little bit early today so it won't be a problem so we will take the rest of the lecture because there are some application talks for a professor that is going to be a new professor at our department. Very, very interesting. So if anyone feels like it's directly after this lecture, there will be an application talk of one of the candidates and there will be more application talks for the professors over the course of the day. So whoever is interested. Distributed systems. Weigel. So he left us and now we are looking for a new one. If you are interested, please drop by. Good. So what we want to do today is basically finding out what multimedia databases are. What typical applications are used for multimedia databases or when can they be of help and how to evaluate the quality of a multimedia database. And this is my big friend, the saving. Let's go back. Here we go. So the first question is what is a multimedia database and it's easy. It's a database plus multimedia. It's kind of simple. And we have two key words here. We have the classical database. Everybody knows that. And databases are a great concept. They have relations. They have tables. 
Usually they are not object oriented or something like that. And within these relations we can store data. Usually this data is in the form of numbers, strings, whatever. What happens now if the data is no longer in the form of strings or numbers but in the form of movies or audio files or images or texts. The thing becomes a little bit more difficult. Because there is one possibility to store such things in database, which is the blob binary large object data type. But that doesn't help you. And you can store it. But the ways to retrieve it are very clumsy to say the least. So databases are good technology. Multimedia is kind of what is it actually? We need some kind of definition for that. And multimedia basically contains the term media. And this we have all kinds of digital media types. Images, audio. In video we have images plus audio. And we even have a time dimension that are not so hard to handle as such. We do have browsers. We do have players. We can put them into a file system. And which are very difficult to handle systematically. So how do you know how a file is called? How do you know what a video shows if you just have the file name? Might be a sensible file name or might be XYZ509. Which doesn't hint at the content at all. Are there any possibilities to know a little bit more about the media you're looking at? So usually a media object can be even complex or more complex. It may contain text. It may contain images which are referred to by the text. As you can see on figure blah, the number of blah blah blah goes down to blah blah blah or something. You have a connection between the different media types. Or you might have an audio explaining some of the slides we're doing here in this lecture. They are connected because at a certain part of the audio stream, a certain slide is shown on the blackboard. Okay? So they are somehow interconnected. Which means they are integrated into a single document to show the connection. And some media is totally not understandable without the rest of the document. I mean pictures can be very nice. But if you don't know what they show or why you should look at them, it gets hard to understand what the intention of the author was. And the author, authoring a multimedia document, has some more intention than is given by any of the media types. Text is usually the most helpful. But as we all know, images can say more than thousand words. So for illustrational purposes or commentaries in terms of audio or whatever are very interesting. So the basic media types are text, image, special kind of image, vector graphics, audio and video. For the text and the image, they are static. For the audio and the video, they do have a time dimension. They are dynamic, they are changing over time. Good. Typical text documents is just normal text data. But could also be spreadsheets, for example Excel data. Could be emails, so communications, could be anything. The text part we will not cover here. This is basically the area of information retrieval. And as some of you know, there is a wonderful lecture parallel to this one, which is specifically concerned with information retrieval and web search. So this is I think Wednesday. Wednesday is in the morning, yes. So yesterday basically, yes. If you want to know more about the handling of text in databases, this is the place to go. This is where it gets interesting. We will cover the image, audio, video part in this lecture. And for the images, you do have photos typically, bitmaps or some compressed formants. 
You have vector graphics. You have very big libraries of CAD, computer aided design. So for example, Volkswagen, when they model a new car. Very little parts of it are actually non-digital. At some point they will build a physical model, yes, to do all the analysis for the wind channel and stuff like that. But still, most of the design process is done computer aided and has to be stored somehow. And that's a lot of data. So you have speech and music recordings, which can cover either popular music or it can cover speeches as done in parliament or in public places. Very much of those are kind of cultural heritage and should be preserved. Preserving means storage. And storage means at some point databases. Also, annotations like we are doing here for the slides. And they come in a lot of different formats. So wave files, there's MIDI files. That is kind of a music format. MP3 files is the most popular format for compressed music. And of course we have the video files, which is basically dynamical image records. So frame sequences of image with a time dimension, also in a lot of different formats like the MPEG or the AVI. When we were talking about documents, we want to know what it actually is. And one of the earliest definition of information retrieval is that documents are logically interdependent, digitally encoded texts. That is very old. People didn't think about pictures and audio streams commenting on the text or something like that. It was not yet the time for that. But they just focused on texts and said, well, it has to be logically coherent somehow. I mean, something that contains different words that have nothing to do with each other is not a document. It's not a sensible communication. But there should be a certain grammar. It should be about a certain topic. There should be a certain intention of telling somebody something. That is important for a document. And well, actually during the 80s or early 90s came the idea, well, text probably is a little bit too short. That does not go far enough. We have to address any other media types. The first were images because, I mean, in most documents, you do have some images, either charts showing something that might be not understandable if you don't have the text that goes with it. But on the other hand, the text might lack the exact information that is shown in the graphs. But also, of course, illustrative images that are just nice to look at. But those are typical parts of the text. And when it became clear that also things like recording of lectures and a lot of other things, multimedia could be embedded in documents. For example, graphical animations of something, showing a process, or models, 3D models that could be rotated or something like that, then it became clear that document as logical, interdependent, digitally encoded text definition does jump too short. But we need other media types to be integrated in all those documents. And still, they are logically interdependent. They somehow refer to the same task, to the same process, to the same topic, to the same intention of telling somebody something. That is a very important point that also sets the state for what we need to do. So the document types are basically either compound or pure. So media objects are all documents which are only of one type. An image is a media object. A text is a media object. Some video is a media object. The compound multimedia document or multimedia object allow any combination of the different types of media. 
And the data that is transferred, the intention of the document, the topic of the document, that is transferred to the user, or as we say for multimedia documents, to the consumer, is transferred through the use of a medium. Now we have the next word, the medium. What is a medium? A medium is a carrier of information. A carrier of information in a communication connection. I want to tell you something, so I need to say it, or I need to write you a letter, or some email, or I need to show you my slides. This is a process of communication, and the carrier of information is basically just the means of what I'm doing. So it could be the sound of my voice. It could be the optical signals that you get from the slides. It has no influence whatsoever on what is on the slides, on the semantics of what is on the slides. And of course I can always change the medium during the information transfer, so I can talk to you about something, I can show you something, which would then be a different kind of perception. I can let you handle videos and stuff like that. A typical example for a medium is a book. What is a book about? What is the important part of a book? Well, it's a communication between the author and the reader. The author wants to tell a story or make some point clear, whatever kind of book it is, and the reader wants to know about that. He's excited about how the story will end or what the flow of the story is. He wants to know about some things, so he reads a textbook. So all that kind of stuff. The basic book is just paper that is printed somehow and put between two covers. So it's totally independent from the content of the book. Doesn't matter whether it's a fairy tale or the most important information in the world, still it's a book. Nothing more. It hierarchically builds on text and images, so usually a book contains text, but very often it also contains images, not only for the small children; books for the grown-ups are also illustrated, either just to illustrate the point, for beauty, or to carry some information like statistics, usually very nicely and very concisely shown in some graphs that you might see. If I give you the book as a medium, you can read it. That is one way of consuming the book. Or I can read it to you aloud. That is another way of consuming the book for you. The medium changed from the written book to audio. So the way of transferring the information, the way of communication, has changed. The book itself is unchanged. That is the basic idea of a medium. You can classify media usually by the receiver type. So there are visual or optical media, whose most prominent example is the book. It could also be a slide or whatever. There are acoustic media. So there are CDs that have some music, there is my spoken word, that is all acoustic media. There are haptic media where you use your tactile senses to get some information. Any examples of this? Yes, Braille, the writing for the blind. Anybody ever seen that? So you have the pages of the book that show some specific pattern of points. And by feeling the points, you can find out what letter it should be and you can read without actually seeing it. That is a haptic medium. Olfactory media: I was hard pressed to find an example here. Olfactory media carry information that is transported by smell. I don't know how many of you know the scratch and smell books. They were very popular during the 70s, 80s or something like that.
So basically the idea was that you scratch at some point of the book and you get a smell, because there was some chemical substance smelling like pineapple or whatever it was. So you had an illustration of what it was. But still, an olfactory medium could also be a burning fire that kind of triggers your excitement, gets you ready to flee or something like that. So some of the old instincts that we have are still built on olfactory media. Something smells funny. And last but not least, there's the gustatory medium, where you transport information by taste. As hard pressed as I was here, hardly a sensible example came to my mind for that one, because that's very rare indeed. I mean, usually if you taste something, or a little bit of something, you want to find out whether it's poisonous or not. Stuff like that could be used. But it's not too prominent, I would say. So these two are probably what we need in our daily world, with the other ones kind of coming in at some point. Based on time, so whether you have a time dimension or not, which depends on the kind of media, the medium can be static or dynamic. A picture does not change over time. A video or audio file does change over time. You do have a time dimension there. Well, we now know what multimedia is and how we transport the information by multimedia. Still, why do we need the databases? So what is the database part of it? The problem is that most of the media that we use in daily life is immediately consumed and never stored. So if we do have a conversation, you get some information from me, but after that you have to remember it and you can never prove it took place. Sometimes very good. On the other hand, you could use a transcript or recording of that conversation to show that it took place or to renew your knowledge of what was actually talked about. So if I explain some kind of algorithm to you and we do the recording here of the lecture, then at a later point in time: how did this algorithm work? I want to retrieve the sequence of the video where he explained the algorithm. And I want to see it again, listen to it again, and then I understand how the algorithm works. This is the basic idea of data storage and data retrieval. Storage means we keep the transcripts of our lectures for eternity. Data retrieval means we get from the stored data exactly what we want, in an effective and efficient way. So what we want, and now, not in half an hour's time. This is what we will be dealing with most of the time in this lecture. We will be very interested in what happens here. So how do we retrieve data? How can we index data so it's ready for retrieval? How do we store it? That is rather simple. But the retrieval part, that will be with us for the rest of the lecture. Good. As for the storage of multimedia data, we need a way of persistently storing text documents, vector graphics, CAD, images, audio, videos. Those two can be stored pretty efficiently. For these, it's kind of more difficult because they tend to be rather large and compressing them only works to some degree. Using text or compressing vector graphics, not too difficult. And we don't go into that too far during the course of the lecture. Whoever is really interested in how to compress stuff: we do have a lecture on digital libraries where many compression techniques and stuff like that are presented. This is not the main focus of this lecture here. Our main focus is on content-based retrieval. So we need to get the information out of the database about that media object.
And that means about what information was presented in the media object, what was communicated in the media object. And that needs to be efficient, of course. I mean, we most probably need some real-time algorithms. And it needs a little bit of standardization. So we'll get to know some of the typical metadata standards, MPEG-7, MPEG-21, for indexing media objects. But most of the algorithms that we will get to know will work directly on the media object without the use of metadata. Metadata is one of the major terms here. Metadata is information about information on a meta level, describing what is in the media object or what is transported through the media object. As I said, I will show you some metadata techniques, but usually we will have to do without metadata. Because indexing objects is quite a tedious task, and manual labor is not up to handling the information flood that we are currently producing in terms of media anymore. I mean, consider YouTube or something. We will do that in one of the following lectures to see how it grows. And it's all video. And it's all not indexed. And nobody has any idea what's in the videos. Might be helpful to have some automatic techniques here. So there are basically two ideas when dealing with multimedia data. One is the classical standalone file system idea, where you go like, OK, I just put it into the file system. And the file system structure shows me what is where. So these are the videos from my last vacation. And these are the videos from the press conference I gave last week. And these are the videos from the lectures or whatever. Or you could have a typical database storage model, where you say, OK, I don't want to deal with where it's stored and what the access path actually is. I just want to put it into the database. And once I need it, I get it back from the database. This is the database management system. And the database management system is actually well prepared to deal with all my questions. And all my questions means I have a declarative query language, meaning that I say what I want, not where it is supposed to be. It should be in the folder with the lectures, but oh no, maybe it wasn't. I don't want to think about these things. I just want to say, well, it's the lecture from the 7th of April, and then I want to have this lecture. Databases also offer an orthogonal combination of query functionality. So if I do remember a couple of things about the thing, maybe the date when the video was recorded, maybe the title of the lecture I was giving, maybe whatever it was, I can just put these two together, or three or four or five clues, in a single database statement. So the where clause can contain lots of logical conditions that are just combined with and, or, and any Boolean operators. And it works. Also important are query optimization and index structures for actually addressing the data. So some queries are more difficult to build than others. If I do have a way of automatically optimizing them, retrieval times will benefit. Same goes for the index structures. If I do have clever indexing schemes, the retrieval will definitely benefit. And I do have the usual goodies, transaction management, recovery, so I can have multiple users working on media databases. Once my system goes down, or the hard drive goes haywire, or my building burns down, or something like that, I do have recovery techniques automatically controlled by the database system. I don't have to care for setting it up again. The database basically will do that.
So that's kind of the advantages of a database, and this is why we want to do stuff like that. If we look at a timeline, the story of multimedia documents, media documents as such, started basically in the 1960s. But in the 1960s, the main focus was on text. That was the golden age of information retrieval, where people started to work on text for the first time and got all the interesting algorithms. So for example, the well-known vector space model for text retrieval is from that area, and was invented in 63 or 62, I think. The 1970s saw the advent of relational databases of a whole new paradigm, and the structured query languages, declarative query language, the way, don't care about where something is located, but you locate it by some logical restrictions on the content that you're looking for, on the type of data that you're looking for. And the computers at that time were basically filling entire rooms like this one, and they couldn't do more than your laptop today. Interesting, isn't it? So the people at that time were not really thinking about images or videos or stuff like that. But during the 80s, when the personal computers became available, it was clear that also the personal information, like your vacation videos or your vacation photos, were of importance, and a digitized version of that had to be managed somehow. And especially with the growth of storage devices, so I mean, I still remember the times where you had 64K at your disposal in terms of main memory. Now main memory ranges in the gigabyte area, and the secondary storage ranges in the area of terabytes to petabyte, not a problem. So you can store the information. And I don't know, there are kind of like estimations of my space that the whole life of human with all the documents that he or she produced, all the photos and stuff like that could be stored in as little as 300 terabytes. So your life in 300 terabytes. Cool, isn't it? Sounds a little bit like Tron, I must admit at this point. So we had an increasing presence of multimedia objects. Thus beginning of the 90s when the computers were in a state to really handle those objects, the people became concerned with how to store them, how to retrieve them, and that is basically when they started working on multimedia databases. So 1995 was part of the standardization. SQL92 in the standard for the first time introduced binary large objects, not somehow content addressed or indexed or whatever, but just you could store videos in the database or images in the database, and you address them by some handle that was usually the file name. That was all there is to it. But 1995, 1998 came the first multimedia databases. So by now we have like 10 years or 15 years of multimedia databases, and it's been an interesting research part on one hand, but also an interesting application from the side of industry, what is possible today. And we'll show you some of the possibilities that are in place today and that probably everyone of you uses once in a while. So also the social services like Flickr or today we will do a bit of PICCASA that are used for communicating multimedia objects, used for storing multimedia objects. So it will be interesting to see how this all works. So the first commercial system used the blob data type, blob is short for binary large object, and it is really uninterpreted data. It just takes the image data and stores it somewhere. 
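To see concretely what this blob-plus-metadata idea looks like, here is a minimal sketch using Python's built-in sqlite3 module; the table layout, column names, and values are made up for illustration. The point is that the WHERE clause can only touch the metadata columns, never the content of the blob itself.

```python
import sqlite3

# the blob idea in a minimal form: the image bytes go in uninterpreted,
# and every later query has to go through the metadata columns
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE media (
                    id       INTEGER PRIMARY KEY,
                    filename TEXT,
                    recorded TEXT,
                    creator  TEXT,
                    content  BLOB)""")   # the binary large object itself

# storing: read the raw bytes (here just a stand-in) and insert them as a blob
image_bytes = b"\x89PNG...pretend this is real image data..."
conn.execute("INSERT INTO media (filename, recorded, creator, content) VALUES (?, ?, ?, ?)",
             ("lecture_intro.png", "2010-04-07", "some_author", image_bytes))

# retrieval: only the metadata can appear in the WHERE clause; the content itself
# cannot be searched, so "give me all images that look like this one" is impossible here
row = conn.execute("SELECT filename, content FROM media WHERE recorded = ? AND creator = ?",
                   ("2010-04-07", "some_author")).fetchone()
print(row[0], "->", len(row[1]), "bytes of uninterpreted data")
```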
It's a shielded part of the database, yes, so it is recovered, it has access control, blah, blah, blah, all the goodies, but still it's just addressed through metadata. You just say when it was stored, who stored it, what's the file name, are there any tags on it or whatever. You cannot address the content of the stuff. It's not in the nature of a binary large object. And the first multimedia extensions for relational databases are usually referred to as object relational extensions, because people were seeing that it's not really about relations over data, it's also about storing objects like an image, which is a very complex object. Also the retrieval functionality was enhanced. And the idea was a more semantic search. So you want to address what's in the video, what does the picture show, what type of picture is it. That was more important than the file name of the picture. And the big databases, IBM DB2, Oracle and so on, all built extenders or cartridges or however they named it, and integrated into their normal relational kernels the possibility of handling those objects by user defined functions, user defined types, stored procedures. So basically routines that could run algorithms on the objects. And those algorithms were more or less convenient or more or less efficient. But they all had the aim of getting down to the semantics of the object. That was the interesting point. Christodoulakis was one of the first to think about what multimedia databases actually have to offer. And he was kind of like doing a requirements catalog that is still not fully there today, but parts of which were worked on very successfully, and parts of which can actually be shown. And he said, well, first I want the classical database functionalities. I want all the goodies. I want recovery. I want transactions. No discussion about that. Okay. And I don't want to care for it by hand, manually. I want the database system to take care of that. Then I need the maintenance of unformatted data, which is kind of the first glimpse of the blob idea. It is unformatted and it has to be handled somehow. I don't care how. Then I need special storage devices, because those objects tend to be rather big and how you store them could affect their usage. And of course presentation devices. How do I present a video to the user from the database? A database does not have viewers. Somehow the database has to interact with such devices, with graphical user interface devices, to show the results. It's one interesting problem of how to show what's going on that we will also address in this lecture. How do you present video results so the user can choose which video is actually the one that he or she likes? It's not easy. To comply with the basic requirements, we need to consider the software architecture. Is that something totally novel? Or can we somehow build it into the existing database kernels? Is it just an extension? We need to identify the objects. We need to identify the content of the media. What's the video about? Who does the picture show? Where was the picture taken? All very interesting things. And if anybody shows you a picture, sometimes you do have a distinct idea where it was taken. For example, you see a picture of pyramids. Where was it taken? Mexico. Ah, Maya pyramids. So yes, the type of pyramid will actually show you where it was taken. And Egypt is a good guess. Then the problem is performance. Performance is one of the major hindrances of an effective multimedia system.
Because you can't start searching through videos where you have a terabyte of them. When the user asks, oh, I want all the videos where there's a cat in it. And then you start taking the videos, oh, let's look for a cat. It doesn't work that way. You need to pre-index them. You need to address the information and extract the information very quickly. Then you need to use the interface. How do you interact with the system? How do you state your queries on one hand? I want a picture that looks like this. Maybe you give an example. Or you draw something. I want the pyramid picture. There's a palm tree. And there's a pyramid. And there's a camel. Give me all the pictures that look like this. How do you do that? It's not easy. Well, kind of nice. Or aces. You never know, you need to extract information from the pictures. This is a pyramid. OK? Then you know. And everybody else knows what it should be. Probably I should have rather said this is a camel. It's far more difficult to identify than the pyramid, I would say. Information extraction. OK? Then you need storage devices with very large storage capacity, redundancy control, and compression possibilities. So you don't have to store everything in the highest resolution. And of course, you have to do a thorough information retrieval. That means you need the semantic search capability. OK? These are the basic requirements that you need. The next term that I want to introduce and that I already used a couple of times was retrieval. What does retrieval mean? Well, retrieval is what your answer to query is. Basically it chooses between data objects based on a select condition. If you do exact matching, I want this picture with file name, blah, blah, blah, or a defined similarity connection. I want everything that looks like this. It's totally different, isn't it? It's not the one picture. It may be a couple of pictures that are more or less what I had in mind. And that introduces a ranking on the picture, an ordering of the result set. Totally different from what we have in classical databases. Retrieval may also cover the delivery of the results to the user because the user has to have a possibility to see or get an impression of what the results look like, what are the interesting part of the results. If we take a closer look at the search functionality, I already stated it has to be semantic somehow. It's not just the syntax, it's not just a video that is three minutes long or something like that, but it's the video that shows the camel. It's the video that shows City of London. It's a video that shows Sylvie. And somehow you have to recognize that. Of course, it might be somehow connected. I want the video of Sylvie when he was in London. It connects to semantic things or it could connect to structural things. I want to see how Sylvie looked 20 years ago. I want a video taken of Sylvie 20 years ago. This is not semantic anymore. That's based on metadata and semantics. It's a combination, but still it has to be considered. Search should never directly access the media object because once you start fiddling around with videos during retrieval time, your doom to failure will take for hours. You have to pre-index it somehow. You have to extract and normalize all the features that you need for later retrieval. So there's a preparation step somehow. Prepare your data, your media objects, and then at retrieval time, just work with the indexes, just work with the metadata information. 
And last but not least, you need meaningful similarity or distance measures. So what is similar to something and why? Can you state that two people look similar? Probably you have an idea. Can you say why they look similar? More difficult, isn't it? They may have the same hair color or same color of the eyes, still. What is it exactly that makes them similar or not? If you want to measure it, you need a solid representation. And you need some mathematical function that goes like, okay, similarity is 0.1 or 0.2. That may be. We also cover that. Good. One of the good examples is kind of retrieve all the images showing a sunset. And then you get this result and you say, well, good result, bad result. Most people would agree it's rather nice. But why do we see that all these images are sunsets? What makes us believe they are sunsets? The color, very important aspect, more. There are some clouds in it. Sunset is always good for sunset. The interesting thing is none of these images shows a sunset. It all sunrises. So the direction of the camera that is pointed east would have been a nice clue. So it's more than meets the eye. See, nobody notices, which is the next good thing about media. Using media, I can prove a lot of things by actually showing you pictures. And if I told you that most of the pictures were indeed sunsets, you are in no position to argue. So the more metadata we have, the better chances we have of actually getting what we want. But sometimes we don't need the exact semantics. But a semantic that is close enough, I mean, it doesn't really matter if there are sunsets or sunrises. We can use them as either. And nobody will be the wiser. So sometimes similarity is a good thing. Good. To do it in a schematic view, we usually have two main steps in working with multimedia databases. One is creating the database, where you digitize your images or your videos or whatever it may be. You analyze it and extract features. Whatever they may be. They could be colors. They could be cloud-shaped forms. They could be the camera angle, whatever. Anything that you extract, you do, and you put it into the image database. If you have a query, you also have to digitize it, because you need the same representation as the data that you already had. And you also need to analyze it and do the feature extraction. And now you compare the features of your image against what you have in your database, and you end up with a search result. That is a basic idea of databases. And if we look at it more detailed, we on one hand have the query, and at some point we get the result. But before we can do that, a lot of things have to be done. We have to have the basic multimedia objects and some relational data, for example, the creator of the object, or the rights owner, or whatever it may be. We insert it into the database. We extract the features. And now, if we do the same for the query, we are ready to do the comparison. Once we have considered some of the good results, so we computed a certain similarity and did some optimization tricks and whatnot, we can prepare the result for showing. However we do that, so for images it's relatively easy. You show small pictures on your result page. For video it's relatively hard, because what do you do? Just one picture, or do you kind of play the video, or do you kind of build a storyboard where you tell what's in the video? Do you summarize it somehow? Different possibilities, but it has to be done. 
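A minimal sketch of this two-step scheme follows, with a coarse color histogram standing in for the extracted features and Euclidean distance standing in for the similarity measure; both are only assumptions for illustration, real systems use richer features and other distance functions, and the random pixel arrays stand in for decoded images.

```python
import numpy as np

def color_histogram(image, bins_per_channel=4):
    """Feature extraction: a coarse, normalized RGB histogram of an image
    given as a (height, width, 3) array with values in 0..255."""
    pixels = image.reshape(-1, 3) // (256 // bins_per_channel)     # quantize each channel
    codes = (pixels[:, 0] * bins_per_channel**2
             + pixels[:, 1] * bins_per_channel + pixels[:, 2])     # one bin index per pixel
    hist = np.bincount(codes, minlength=bins_per_channel**3).astype(float)
    return hist / hist.sum()                                       # normalize: size-independent

def rank_by_similarity(query_features, database):
    """Query time: compare the query's feature vector against the pre-extracted
    features and return a ranked result list (smallest distance first)."""
    scored = [(float(np.linalg.norm(query_features - feats)), name)
              for name, feats in database.items()]
    return sorted(scored)

# "creating the database": extract features once, at insertion time
rng = np.random.default_rng(0)
database = {name: color_histogram(rng.integers(0, 256, (64, 64, 3)))
            for name in ["sunset1.jpg", "sunset2.jpg", "camel.jpg"]}

# "querying": digitize and extract features from the query image, then compare
query_image = rng.integers(0, 256, (64, 64, 3))
for distance, name in rank_by_similarity(color_histogram(query_image), database):
    print(f"{name}: distance {distance:.3f}")
```

The important part is the split: the features are computed once when an object is inserted, and at query time only the small feature vectors are compared, never the media objects themselves.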
And then the result is finally given to the user, and the user is very happy. This is basically how it works. If we take an even more detailed view, there are a lot of things that have to be done. So for example, the multimedia object can't simply be put into a database. There has to be some kind of pre-processing. You have to decompose complex documents. You have to analyze the images. You have to normalize them somehow. Does the size of the image matter? Or is it just like every picture should have the same size, so you can compare the colors in a good way? You have to segment the videos at some point. So you have to find out what is in the video. Are there camels in the video, or is there Sylvie in the video, or whatever. It's all very interesting, and it has to be done somehow. Most of the algorithms that are used here, we will cover during this lecture. You then do the work of indexing. So you recognize features, you extract the features, you index the features; everything that should be used in a valid similarity computation needs to be pre-computed. You cannot address the media objects during retrieval time. You have to pre-compute it, which means you have to anticipate everything that a user might want. Very difficult, isn't it? The same goes for the query, and then you just compare the feature values to the feature values stored in the database. Sounds easy. Well, yes, but optimizing these queries is not very easy. Building a good index for multi-dimensional data is not very easy. We will also cover part of that. Yes? Sometimes, sometimes, so if you look at modern cameras, for example, they will immediately add a lot of metadata to the digitized images. And some of this metadata is standardized, as for example in the MPEG-7, MPEG-21 standards; we will also cover part of that in the lecture. And some of it is basically free text. And if you look at social portals like Flickr, for example, where people can just assign text to images, also this might be a source of inspiration. So there's one way of extracting features directly from the media object. There's another way of just taking the metadata as an easy solution. Otherwise you will have to analyze the object and then extract the features. Yes. There have to be algorithms in the database that actually do that. It's a totally new kind of component that a usual relational engine does not have. It's still not offered in most of the state-of-the-art databases. So even with DB2 and Oracle, the interMedia cartridge and the DB2 audio image video extenders, they offer a very limited functionality that does not go far beyond what colors are in the picture. Sad, isn't it? We will see some of that too. Okay, then you do the query processing. You get the results. You transform them somehow and present them somehow. And then it goes to the user. And now the user might actually say, yes, this is totally what I wanted, or no, this is totally not what I wanted, which is called relevance feedback. The relevance feedback can be used to tune either the similarity function, or to include or exclude some of the features for comparison. I don't want the color information, I need the shape information. It does not matter what color the camel is. Well, probably it does matter what color the camel is. But still, it does not matter what color Sylvie is. I want the party photos. So there he has a distinctly green color. Yeah, and that brings us to our first detour. And I will hand it over to Sylvia now and leave you for today.
Okay, so let's see some applications of these multimedia databases. You've already mentioned social media and ways to find multimedia content in such social media. Of course, we have Facebook, MySpace, HiFive, and some other social networks, and they all also allow updating and uploading photos. And usually, when you upload such photos, you are also allowed to tag the persons that are in the photos. That's the metadata you are talking about. And this is actually the state of the art in such systems. When you want to perform retrieval, then you say, I would like to see all the photos containing this friend of mine. You give the name or the nickname or whatever, and the system searches the photos which have attached the metadata with that tag. This is what Facebook does, for example. Then you have some smarter systems. One example here is PICASA that tries to do automatic recognition of faces. So it allows only to start the tagging, the metadata input, and then it tries to do some automatic recognition. We'll see this soon. And then we have a lot of video sharing. YouTube, Mega Video, MetaCafe, a lot of systems, lots of videos. You can upload your videos. Also tag them and say, OK, here I'm doing something in London, or here I've just filmed a car or something, whatever. The problem with such systems is when you do retrieval. We've seen retrieval is very important. How can I do retrieval on videos? The same metadata. I'm searching for all the videos where I was in London. Or if I have a longer video, I want to see the specific part, the 10 minutes where I filmed a certain part of London. This is difficult. And usually today you have to search by yourself. You have to scroll through the video and see, OK, I have a hit. I'm in the middle of the event. I was going forward. Let's scroll back. Let's scroll forward. Yes, I'm here. This doesn't work on YouTube. If you do the same on YouTube, you lose a lot of time, pre-buffering and so on. And it's not available. The only thing you can trust today is metadata, and this is not great. Because metadata can have more meanings or it can actually lack. It mustn't be there. So the quality of retrieval suffers also in video collections due to these kind of searches. Great example. I've said about London. They were writing a while ago in the newspapers that they have around half a million cameras in the center of London. They wanted to prevent theft. Here is just an example of how they prevent or how they recognize theft. They see such situations and then they say, OK, this is the burglar. Let's search for the burglar and we've solved the case. But you have to imagine that on the other side, such guys sit in a room and they have a lot of monitors. And they have to look for such situations. This is not that easy, right? You have a room full of monitors and then you have to zoom in and catch that moment. This doesn't work that way. They would need a clever system that recognizes potential burglars that connects to the police database and says, OK, I'm doing face recognition. I know this person here. Let me use another color. I know this person here. She was condemned for burglary. So I'll focus, zoom in and stay on this person for a while. And this should the system do automatically. But this is not possible without clever databases that are actually able with their algorithms to recognize what happens who that person is, connect relational databases from the police with images, with face recognition algorithms, and so on. So then how do we do this? 
A simple example is Picasa, an example that I've prepared for today. Some while ago, TechCrunch was presenting Picasa as one of the first systems that brings facial recognition to photos. And this actually works. I've tested it myself. And actually, I'll do it live. I think it's faster, and I don't want any drawing. I have here a collection of 50 photos. It's good that Thilo is not here anymore. He's in a lot of the photos. And they are, at the current moment, unnamed, untagged. So I'll start tagging some photos. What Picasa does here is it zooms in on the photos, recognizes the skin color, and says, OK, I've zoomed in, I've gotten you some faces, please tell me who these persons are. And then I say, OK, this one is Thilo. Thilo is a new person. OK, he already recognized four photos: one I've tagged, and three others in which Thilo appears by himself. Then he shows me other photos. And then I say, OK, but this one is Thilo too. And then the collection already grows to eight. He recognized other photos in which Thilo also appears. But these ones are with glasses. So I take one photo with glasses, and he recognizes three others in similar positions. The similarity search we were talking about a while ago, which Thilo was showing you, is what happens here. Automatically, I don't need to tag the rest. And if I go any further, I don't see any more Thilos here, so I'll start with Christoph. It's a new person. Tag just one photo, got five Christophs. See the similarity. And then another Christoph. I got nine photos. Christoph with glasses. Eleven photos. And this is how it works. Pretty smart and pretty cool. Because look, the photos look something like this. So it's not just the small face, but a lot more. Good. Let's come back to the lecture. So then we've talked about this. We've talked about the learning phase. Let's start with a sample scenario. A typical use of such databases are police investigations. And I've just shown here a drug operation I've read about in a book, and it was a book about multimedia databases where the database actually played a defining role. And in such operations, you usually have video, so some surveillance. You have audio data, recordings, persons were wired, they were speaking about something. Images, photographs that have been taken during this investigation. You have some structured relational data, something like the accounts of these burglars, what happened, what money transfers have been done, money laundering, something like this. Everything is in a relational database, which is connected to our multimedia data through a multimedia database. And then, of course, you also have the geographic information. This was not that important in this system. OK. But how would a query look for something like this? How would I imagine performing retrieval on such a database? Easy: classical keyword search, hoping that I have metadata in my multimedia database. Then I would just like to see the pictures where Tony Soprano appears. Assuming that I have a system like Picasa under it, where I've just tagged my photos in my digital library or in my multimedia database, then with a query like retrieve images from the library where Tony Soprano appears, I can get those images. But this is easy. I just rely on the metadata. The other possibility would be that the police officer comes with this image here, shoves it under the scanner, scans the image and says, can you please give me a copy of all the images where this person here appears? This is a bit more difficult.
This is what PICASA does, content-based recognition, face recognition, content-based retrieval. The assumption here is that if I retrieve the photos where this person appears, hopefully one of the photos or at least one of the photos would be tagged by someone as with this person being Tony Soprano. And this is how I already find out that from this picture, I already find out who that person is. Okay, so we've seen query by keyword, then we've seen query by image example. Murder case, quite difficult murder case. We have here the victim and we have the criminal. As in most of the cases, the criminal has a mask. You can't identify him, but you have it on video. Typical assumption of the police officer, well, the killer must have known the victim, he must have interacted with the victim in the past. So why not just ask our database to find all video segments from the last week where the victim appears? And then I get this and see that actually this guy here without a mask this time was not that nice to our victim. And the same guy here created trouble for the victim. So he is a possible candidate for jail. Yeah, this is how clever such databases are. Yes, so this was a possibility of querying the other would be to use hybrid information. So this means that I'm going not only to use keywords, but I can also use photos or videos and I can say, okay, I want to find all individuals that have been photographed together with one person. So I want to find these guys here under the conditions that they have been convicted of attempts of murder. So I'm connecting again to a relational database and that have recently had electronic money transferred from some corporation. Such heterogeneous queries are also possible if you have a multimedia database working for you. We've seen some queries, but what are the characteristics of multimedia databases? What do we really need to know about multimedia databases when we have such queries? Well, it depends first on the data. So we can have a static behavior. This actually means that we have a lot of reads, a lot of queries on statistical data and that the data doesn't really modify in time. Even an art collection, pictures, paintings, something like this. On the other side, we have dynamic data, data that very often is to be updated. Meteorological purposes is a very good example for that. So satellite images of regions of the Earth, something like an tsunami wave, if you can imagine or something like a hurricane can be identified and is a classical example of a dynamical information. We can have active or passive functionality of the database. What does this mean? This means that the database, in the case of an active functionality, the database doesn't really wait for the user to perform retrieval. The database is proactive. It says when my data changes in a certain way, then I'll activate a certain function. You can see this as, for example, triggers and stored procedures. If you are accustomed with such a mechanism, so you can, for example, have a trigger when the difference between two images is big, then trigger something, activate a function. That's an active functionality, for example, of multimedia databases. Passive means that it will only respond to selects from outside. So simple retrieval operations from outside. Then based on the retrieval process and the retrieval technology, we have standard search. Queries are answered only based on the metadata. This is what happens with classical web search engines when we are looking for images. 
You know Google, Bing, Yahoo allow you to search for images. You just switch from web to image and say, I want all the pictures with or containing this. What actually happens is that all of these search engines look at the text around the picture, or in the HTML tag, and look for hints of similarity between the words. So it's classical information retrieval based on text, actually. This is the standard search. And then there is the content-based retrieval functionality, which you have actually already seen a bit; it's how Picasa performs face recognition, for example, in images. It's not the metadata search anymore. I'm not looking for Tilo by name. I've input the face, and it performs a similarity match and says, OK, this picture here is similar to this one and to this one. This is content-based. OK, let's see some examples of mixed characteristics. So this one here, often for historical use, is passive, static retrieval. What this actually means is that the database doesn't trigger anything, and it's also rather static, so the images don't really change over time. You can see this as a classical example where you go with your smartphone into a museum and you start photographing images, and you see this one and you wonder, who might that guy be? And you have an art database, or a database with such pictures stored, and you perform an example-based retrieval, a select: I've taken a photo and I want to know who this guy is. What could happen, actually, is that you don't get this guy but you get such symbols. From history, you may know that families, rich families, famous families, had so-called flags or symbols, family symbols. And this is actually very, very important for art galleries, that you can perform similarity search also on such symbols or elements from your image. So you are not only doing the face recognition part here; for example, in this case, what was recognized was such a symbol, compared with another element in the multimedia database, where such an article was delivered as a result. An example of active, dynamic retrieval is an early warning system for weather. You can imagine that such information changes relatively quickly. So you have here at this moment one image, then five, ten seconds later the weather evolves, you have another image. So this kind of data changes dynamically. The time interval is relatively short. And since it's an early warning system, the database has to do something about it when it recognizes the possibility of a typhoon. So it recognizes a pattern, it knows what a typhoon looks like, and then it just has to compare the evolution of the data with the pattern for a typhoon. The database is able to do this, and when it recognizes a match, for example, it's 80% similar to a typhoon, then it issues a typhoon warning. This is the dynamic part, or the proactive part. Okay, so I've already talked a bit about the standard search, or the image search; exactly what happens here: I just input a person's name and look, this doesn't have anything to do, if you take it semantically, with what I've searched for. Of course, the first match was good. This was based, as you can see, on the tag, but it has nothing to do with the content. If I want to retrieve the person, this, for example, is bad retrieval. The good retrieval, so to say, or what we're going to discuss in this lecture, is what Picasa actually does, so content-based retrieval, where it actually is able to recognize faces, for example.
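Going back to the early warning example above, here is a tiny sketch of what such an active rule could look like, written as plain Python rather than as an actual database trigger; the feature vectors, the cosine similarity as the match score, and the exact form of the 80% threshold are all assumed for illustration only.

```python
import numpy as np

TYPHOON_PATTERN = np.array([0.1, 0.7, 0.9, 0.8, 0.2])   # stand-in feature vector for the known pattern
WARNING_THRESHOLD = 0.8                                   # "80% similar to a typhoon"

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def on_new_satellite_image(features, alert):
    """Sketch of an active rule: runs on every insert/update of the weather data,
    compares the new image's features to the typhoon pattern, and fires a warning
    by itself instead of waiting for someone to query the database."""
    score = cosine_similarity(features, TYPHOON_PATTERN)
    if score >= WARNING_THRESHOLD:
        alert(f"typhoon warning: pattern match {score:.0%}")

# simulate the dynamic data arriving every few seconds
stream = [np.array([0.9, 0.2, 0.1, 0.1, 0.8]),    # harmless weather situation
          np.array([0.2, 0.6, 0.8, 0.9, 0.3])]    # evolving towards the typhoon pattern
for features in stream:
    on_new_satellite_image(features, alert=print)
```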
We're ready also with the detour. But how do we evaluate the quality of retrieval? We will talk further also a lot about how one can perform retrieval, but the deciding point here is to be able to evaluate it. And there are two basic methods or characteristics to keep in mind. The one is probably for any system, the efficiency. So how good is the system able to perform the tasks in terms of speed, of resource utilization? How fast is the system when the database is really big, and we have really big databases? Well, the other evaluation metric is the effectivity. The effectivity shows the quality of the result. So what did I return? What the system returned, and how does it fit into the query that I've asked for? Of course, there's always a question of the trade-off. What is more important? Is it more important to build a system that's efficient or effective? You can't always have both. So it actually depends on the application. If I'm focusing on a system that retrieves with a high quality, that I would expect that the query, for example, takes longer, or that I really need a lot of hardware to cover for the efficiency. That's the trade-off between the two of them. The efficiency can, of course, be measured in terms of technical usage. I can see how much memory my system, my database uses when it's queried on the CPU time, how much of my CPU is loaded over time, how many operations I have, read operations, and how many operations. And, of course, in terms of the response time. The goal is always that the response is efficient enough. This means that I get the query result in a small enough time. When I'm happy with it, then I can say, OK, the system is efficient enough. We are actually not going to talk deeper about efficiency right now. We're going to talk about indexes and how to make multimedia databases efficient, but not in terms of technical stuff. When it comes to technical stuff, then you can attend other lectures like distributed databases where you'll talk about technologies and how these databases can actually be spread on different technologies, on different machines, and still perform their task. OK. What about the effectivity? I've said the effectivity shows the quality of our results. And of course, this is more difficult because it depends on the query. So it's a query dependent measure, and we need something that it's objective to measure the quality of the effectivity. This has also to be independent from the querying interface. So I don't care actually what kind of interface is used between the query and the database. And I also don't care about the algorithm that is used to perform, for example, the face recognition. I only just want to measure how good is the result compared to the query. This is the only thing that I want when I measure and I want to evaluate the effectivity. And when speaking about effectivity, as I've said, what we need to evaluate is the behavior of the system. So what the system returns with respect to the given query. This can be performed by measuring the relevance. So effectivity can be measured regarding an explicit query based on the relevance. On the other side, effectivity may also consider some implicit information needs. Things like how useful is the result, the usability, the user friendliness of the system. But we really won't concentrate on this in this lecture. What we are interested in is the relevance. So then let's talk a bit about the relevance. This is the measure for information retrieval. 
And of course, relevance can be measured as it's query dependent and it can be measured when you can classify the data that you have. So all the data collection is being either relevant or irrelevant regarding a certain query. Imagine you are searching for a person and you have an image database. You have 10 images. From these 10 images, 5 are of that person you are searching for. You are searching for Mr. X. This means that 5 of the images will be classified by a human expert who knows that person as relevant, 5 as irrelevant from the 10 image collection. Then you will query the system underlying this database for this person and you'll compare the result to what this person, this expert, has tagged as relevant and as irrelevant. So what the experts tag is your golden standard. That is the perfect result. And you compare what the system returns against this perfect result. This is how relevance functions. Okay so this is how it looks like in an image. You have an image collection and then you have something, you have a query and based on this query the experts classify the information as being not relevant, everything which is with blue here and as relevant as being this part here. On the other side you have the system and the system returns this part here. What actually is to be measured is the good result. The intersection between what the system returned and what the expert said is also relevant and the bad part here. But the system returned but actually is not the correct result. We know this because experts said we shouldn't be interested in this part. We should be interested in this part here. Okay under these conditions we can talk about false positives. Those are the irrelevant documents which have been classified as relevant by the systems. The so called false alarms. I mean this here. These are the false alarms. The problem with these false alarms is that they increase the result set. So although the result set should have been this part here, okay another color, yeah. The result set in our case was unfortunately this part here. So I've got a lot of irrelevant data. This can be easily eliminated by the user. Consider for example when searching for an information on Google. And you get the top 100 documents. And you can see that the first one might interest you, the second one might interest you, but the third one doesn't really have to do with what you're interested. That is what happens. This is a false alarm. And you as user can recognize it and say okay I won't read this because it doesn't really matter for me. I'll skip to the fourth, to the fifth and so on. These are the false positives. On the other side you have the false negatives. They are really dangerous because they can't really be detected by the user. The false dismissals, the so called false dismissals are these ones here. What happens is that the system doesn't return from the document collection these documents here. The human experts say these are important for me, but I don't get to see them because the system doesn't return them. So I don't know if they're there. What we usually do in such cases is we change the keyword search in Google and say okay maybe I wasn't that clear. Let me refine my search. Let me write the search in another way and hope that I get more of the results. If they're there, I don't really know. This is why this is a bit more difficult to detect. It's not that easy as in the case of the false positives. Then we have the remaining set, the correct positives and correct negatives. 
The correct positives are these ones here, the ones that are identified as relevant by the experts and that have also been returned by the system. Then we have the correct negatives, the documents that have not been returned by the system and that are also irrelevant. So the blue part, this one here, that was irrelevant for the system and for the expert users also. To sum it up as a metric, this information that I've provided you with can be summarized in a confusion matrix. The confusion matrix basically compares what the experts said is relevant and irrelevant with what the system said is relevant and irrelevant. And here we have the correctly selected documents, the correct alarms, and we have the correct dismissals. And the hope here is that these numbers are big. And then we have the false dismissals, the documents that we'll never see but that the experts said are relevant. These are the worst kind of errors we have, the false dismissals. And then we have the false alarms, the documents that the system includes in the result but that aren't actually relevant for us. These numbers here should be small. When these numbers are small and the correct alarms and correct dismissals are big, then our system performs great. Okay, the interpretation looks something like this. We have the relevant results. The relevant results, the ones that have been handpicked by experts, are the false dismissals, so the stuff that hasn't been returned by the system although it was recognized as relevant by the experts, plus the correct alarms, what the system recognized as relevant and which was also relevant. And the sum of these is the relevant results. And then we have the retrieved results, what the system has returned, being again the sum of the correct alarms plus the false alarms. Okay, as metrics, we have the precision, which shows the ratio of correctly returned documents relative to all returned documents. What this actually measures is this divided by the whole. How much impurity do I have in my result? If that one is big, I have a lot of impurity compared to that one. And this is a measure between zero and one. And for example, if I have 10 documents and I've returned five which are from here and five which are from here, then I have a 50% precision. If I've returned only one document and that one document is here, then I have 100% precision. You can already imagine that this is actually not that helpful by itself, because I can always return one document and hope that I have 100% precision. So this has to be combined with another measure. This one is recall, just to see actually how many documents I return and how many of these documents are also relevant. And the measure that performs this works as follows. So the recall is a ratio that compares the correctly retrieved documents, again our CA, to the whole set of relevant documents. If the CA, the correct alarms, is small compared to CA plus FD, then I have a small recall. So the hope here is that I get big correct alarms so that the recall will be big. So increased precision, increased recall is what I'm aiming for. For this purpose, we have precision-recall analysis. And this can be expressed in terms of a graphical representation. Well, this is just a recapitulation of what I've said right now. They can be expressed in a graphical representation as precision-recall curves. This is actually a standardized method for comparing retrieval systems. 
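(To make the confusion-matrix bookkeeping concrete, here is a minimal sketch in Python; the document IDs and the relevant/retrieved sets are made-up examples, not data from the lecture.)

```python
# Minimal sketch: precision and recall from expert judgments and a system result.
# The document IDs, the "relevant" set and the "retrieved" set are invented.

def precision_recall(relevant, retrieved):
    """Compute precision and recall from two sets of document IDs."""
    correct_alarms = relevant & retrieved      # CA: returned and relevant
    false_alarms = retrieved - relevant        # FA: returned but irrelevant
    false_dismissals = relevant - retrieved    # FD: relevant but never returned
    precision = len(correct_alarms) / len(retrieved) if retrieved else 0.0
    recall = len(correct_alarms) / len(relevant) if relevant else 0.0
    return precision, recall

relevant = {1, 2, 3, 4, 5}        # 5 of 10 images judged relevant by the expert
retrieved = {1, 2, 6, 7}          # the system returns 4 images, 2 of them relevant

p, r = precision_recall(relevant, retrieved)
print(f"precision = {p:.2f}, recall = {r:.2f}")   # precision = 0.50, recall = 0.40
```

With five relevant documents and four returned documents, two of which are relevant, this prints a precision of 0.50 and a recall of 0.40, exactly the CA/(CA+FA) and CA/(CA+FD) ratios described above.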
And for example, you have here a precision of 100%, for a 10% recall, then your precision drops down to 80% for a 20% recall. And of course, the idea is when you expand your result, you get also garbage. With the garbage, the precision drops down. But with more garbage, you relax your search and you get also more relevant documents besides the garbage. This is the problem. The system can't be 100%. It would be perfect to have something that works like this, 100% precision, no garbage in, 100% recall. But this is only the ideal case. Doesn't really happen. OK, we are at the end of this lecture. Hopefully also punctually, this room will be used for some lectures for the professors' interviews. So next lecture, we'll talk about retrieval of images by color histograms, introduction in color spaces, and matching images.
In this course, we examine the aspects regarding building multimedia database systems and give an insight into the used techniques. The course deals with content-specific retrieval of multimedia data. The basic issue is the efficient storage and subsequent retrieval of multimedia documents. The general structure of the course is:
- Basic characteristics of multimedia databases
- Evaluation of retrieval effectiveness, Precision-Recall Analysis
- Semantic content of image-content search
- Image representation, low-level and high-level features
- Texture features, random-field models
- Audio formats, sampling, metadata
- Thematic search within music tracks
- Query formulation in music databases
- Media representation for video
- Frame / Shot Detection, Event Detection
- Video segmentation and video summarization
- Video Indexing, MPEG-7
- Extraction of low- and high-level features
- Integration of features and efficient similarity comparison
- Indexing over inverted file index, GEMINI indexing, R*-trees
10.5446/345 (DOI)
So, it's my pleasure to welcome you to the lecture Multimedia Databases today. And today we will be going a little bit deeper into video retrieval. Last time we were kind of pre-processing the video, we were abstracting the video and segmenting it in the shots or story units that contained similar images or images that could somehow be ordered. And our example was, well, a news program on TV, where you have the typical elements like the anchorman, like reports from some locations, people giving opinions on something or describing something in depth, so reporters or something like that. And these are all different shots that could be represented by a single frame, thus making the video a little bit easier to manage, because obviously video is a very storage-intensive format. And if you do shot detection and then take just one key frame out of each shot, the video is abstract to something that is far less than the actual video, and you don't have to handle the video. We are talking about some models, how to do shot detection, starting from simple, thresholding models where you could just have something like, okay, if the colors change too much, then just assume it's a new shot to statistical models where we looked at the structure, at the motion vectors, and then said, well, between shots, the motion vectors should be different from within shots. And so we had some ways to exploit statistical distributions to find out which was which. And after you've decomposed the video into a set of key frames, into the video abstraction, what do we do now? Well, what we want to do now is kind of compare videos. Are they similar or not? And measure or rank these videos, and then we can kind of find out what it's all about. And well, so today's topic is video similarity, and we'll talk about basically two ways of defining video similarity, the intuitive or the ideal video similarity, and the vernoi video similarity. Determining the similarity between videos is of course very important for the ranking. So if I want a typical query, I want that video with the person in it that did whatever. Then we have to rank all the videos according to how similar they are to the video that we mentioned. And how we do that, either by metadata or by just providing a key frame of the video that we're looking for, that is something that we will talk about. And it is also interesting for finding duplicates. So for example, if you have a typical video portal like YouTube, what would you think is the number of duplicates or near duplicates in such a portal? Well, it's considerable. And storing all these duplicates is not a very sensible thing to do, now is it? So if you find duplicates, you can say, well, that's already in the database. We don't need to store this. And another application that immediately springs to mind and that is done very often is detecting copyright infringements. So if you know which videos are similar to your video, very often done by the music industry, because many people in the community just upload music videos to YouTube and to my video or whatever the name of the portal. And of course, that is not their content. They are not the content owners. So finding these infringements also needs a description of how similar videos are. And for the similarity, we of course need a measure. We need some degree of similarity that we can say, OK, this video is more or less similar to some other video. 
And of course, if we see it naively, we can just say, well, let's just take the frames or even only the key frames. And the similarity between videos is the percentage of frames with a higher visual similarity. And we already know how to determine high visual similarity because we can do that with a normal image retrieval technique. So if the color histograms are similar or if the textures are similar or if the same shapes occur or whatever, we already were through that. And this kind of similar what is done in text-based retrieval, the TANIMOTO similarity, where you just have the jacar coefficient and just say, OK, I take all the words and I take the overlap between the words and kind of divide it by the number of all the words occurring. And then I know how similar these texts are. So this is kind of what is often referred to as TANIMOTO similarity in text. Why don't we just adapt it to video? It's the straightforward idea. And what we now need, if we want to do it, is we need the identification of visual features from the frames. And the interesting part is here that we do have a time series of features. We just don't have the one feature. But for each image, for each frame, we have a feature vector. That means we have a sequence of feature vectors. The feature vectors change over time. And again, we can take the color distribution, but we can also take something that we didn't have in images, in still images. That is, for example, motion vectors. We can really determine motion vectors from image to image. And that is something that can help us. And for efficiency reasons, we probably should not determine the similarity between single frames, but rather between shots. So we take the key frame out of each shot and just compare the key frames with respect to different videos. And each key frame represents a shot. And if the key frame is similar, the shot must be similar. And then we kind of go from there. This is our way of defining similarity. So a few considerations that we do have to make is that if we have a high number of feature, the better the similarity measure can work. Because we don't only focus on color, but also on texture, shapes, motion vectors, whatever. But of course, having more features also means we have to do more calculations. We have to talk, we have to do more measuring. And given that a video consists of multiple shots, of multiple key frames, this is multiplied by the number of shots. So the bigger the feature vector and the more shots in the video, the bigger the representation in terms of similarity for the video becomes. And this is why in video retrieval, usually you would rather say, well, I may sacrifice some of the accuracy to be efficient, to be more efficient. So efficiency is the crucial part of video retrieval. And think about portals like Google Video or even YouTube. If they are going into similarity search, like Google did with the image search, at least to some degree recently, then they would have to compare loads of videos. And efficiency is obviously a very interesting thing. So this brings us to our first detour, where we can see how big the problem actually is in the case of YouTube. OK. So as Thilo already mentioned, YouTube is a very big video database. About 65,000 videos are uploaded each day. And for each, for this kind of an amount, there is a high probability that a lot of these videos are duplicates. Just imagine, I don't know, a new movie coming up. Lots of guys will upload a trailer or a new product. 
Lots of people will upload, again, a presentation of that product or so on. And that's a problem on one side for YouTube. They would really like to eliminate those duplicates and store less data. And on the other side, that's a problem for the user. If I'm going to search for something, then I'm going to get a list with a lot of the same video, maybe with different encodings, maybe with a small difference in the title, or I don't know, maybe with a few frames more or less, but the same. And it would be actually a great idea if you were able to identify those duplicates, cluster them together, and say, look. These are all the videos that are kind of similar with respect to your query. These are another cluster and so on. So for example, here, I've searched for the lion's lips tonight. For some reason, I've observed that there are a lot of duplicates for this video here. And indeed, if you search this on YouTube, you get, look, here, all the same. This one, this one, and the last one, again, all the same. And I get the first three pages with actually two videos. It would have been great to have the first one, with, I don't know, 20 duplicates, the second one with 30 duplicates and so on, so I can choose and concentrate on what I'm interested on. But I think the most important question here is, what are these duplicates? How can we define them? Yeah, it could be easy, so one could say, duplicates are two videos, we share exactly the same frames, so frame, same frame number, frame characteristics. But in most of the cases, we don't have this ideal case. We usually have near duplicates. So this is a term that Wu introduced in 2006, and he said, well, near duplicates are about the same. They just differ in either the file format or the encoding parameters or some variations, like the brightness or the lightness. They have maybe suffered some editing operations. For example, it's quite common that when someone uploads a video to YouTube, they brand the video, they put first the title or something like this, I uploaded it, or at the end, or in the middle. And due to this introducing of logos or stuff like that, they may have different lines. Again the lion sleeps tonight example, here I have in example A the first five frames of this video, the same stuff with some missing frames, so I don't know if this one was cut out and this one was cut out, but it's still the same video. Here with more frames in case the another resolution, maybe with some text introduced, and so on. So you can have different modifications. For example, here a bit of scaling, here it's a bit of brightness modification. All these are near duplicates, and it would be great to identify such cases to group them together in a cluster. And so that you can get a feeling how big this problem is and how much redundancy there is on the web, just on this study on YouTube, the lion sleeps tonight, appears on YouTube 792 times, out of which 42% are near duplicates. This is huge, and if you look further, there are a lot of videos again with huge percentage in duplicates. So just grouping this would help a lot in easing up on the user when performing searches. Now that we got the feeling about how big this problem is, let's see how we can solve this and some metrics. So our idea was that we have to compare the individual frames and see are they similar, and are they similar means basically compute their feature vectors, compare their feature vectors. 
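(As a rough illustration of what "compute their feature vectors" can mean for a key frame, here is a sketch using a coarse RGB color histogram and a Euclidean distance; the 4x4x4 binning and the random stand-in frames are assumptions for illustration, not the exact features prescribed in the lecture.)

```python
import numpy as np

# Sketch: a coarse RGB color histogram as a per-frame feature vector.
# The binning and the Euclidean distance are illustrative assumptions.

def color_histogram(frame, bins_per_channel=4):
    """frame: H x W x 3 uint8 array -> normalized histogram with bins**3 entries."""
    reduced = (frame // (256 // bins_per_channel)).reshape(-1, 3)
    idx = (reduced[:, 0] * bins_per_channel + reduced[:, 1]) * bins_per_channel + reduced[:, 2]
    hist = np.bincount(idx, minlength=bins_per_channel ** 3).astype(float)
    return hist / hist.sum()

def frame_distance(f1, f2):
    """Euclidean distance between the histogram features of two frames."""
    return float(np.linalg.norm(color_histogram(f1) - color_histogram(f2)))

# Two random arrays standing in for decoded key frames of real videos.
rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
b = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
print(frame_distance(a, b))
```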
Of course, large feature vector for every shot or every frame doesn't work, so we need to restrict ourselves to just a small number of features that represent a video with minimal errors. And that is very dependent on the type of video that you are actually working on. This is very, very interesting to look at your collection and see what you actually have. That means that you minimize the distance between the video and the representation. And for example, you can just take the feature vectors in n dimensional space, and then, for example, the method of least square, the K-means algorithm to cluster what is interesting. And this is the idea of building a video signature. So for every video, you get a number of feature vectors that represent the video with respect to all the other videos in the collection with minimal error. So if I see that the clusters are well chosen, then my clustering is fine, the representation is fine. If I see that the clusters are kind of very fuzzy or distributed, then obviously I need some more information to distinguish between videos, to distinguish between clusters. And then I would add some more features, making the problem harder because my feature vectors become bigger, but making the discrimination between videos better. That is the idea of a video signature. So then we just assume for the rest of the lecture that each frame is represented by some feature vector in a metric space, and we have the measure, be it the n dimensional Euclidean space and then just the Euclidean distance or whatever we choose. So all the methods and all the algorithms that we will see in this lecture basically can apply to all kinds of measures and all kinds of distances, whatever you choose. And as I said, it's very dependent, what you choose is very dependent on your collection and what you basically want to do. So the similarity measure for videos is obviously invariant with respect to the sequence. So if I just have video containing of two shots and just swap them, is it the same video or not? But in terms of similarity, it is obviously the same because I don't have any other frames. In terms of the understandability of the movie or video or whatever it may be, it might become hard to understand it because you're kind of like disintegrating the storyline, the plot line. But we would rather focus on the technical notion of similarity because otherwise it would be very simple if we have to take the shot sequence into account. It would be very simple to trick our algorithm, our similarity measure into assuming something is not the same, though it consists of the same shots. It's somehow wrapped in some shots that don't belong to the video or something like that and immediately the similarity would be gone. So we will not look at the shot sequence, we will just see the set of feature vectors. And sets are usually unordered. Well then we can say that we have some distance measure, which is basically a dissimilarity between two feature vectors. And the vectors are represented by the frames and are visually similar if our similarity measure is beyond some threshold. So we allow some epsilon of mistake, but that's it. If it's more than epsilon, if the dissimilarity is higher than epsilon, we will just say this is not similar anymore and we will not count it as a similar frame. That has been done very, very often actually, so the approach is quite representative, most people do it in that way. 
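(A minimal sketch of that clustering idea: a tiny K-means over one video's frame feature vectors, whose centroids then act as the video's representative vectors. The number of clusters, the iteration count and the random data are illustrative assumptions.)

```python
import numpy as np

# Sketch: cluster the frame feature vectors of one video with a small K-means
# and keep the k centroids as a compact representation of the video.

def kmeans(features, k=3, iterations=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iterations):
        # assign every frame feature to its closest centroid
        d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned features
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    return centroids, labels

frame_features = np.random.default_rng(1).random((200, 8))   # 200 frames, 8-dim features
representatives, assignment = kmeans(frame_features, k=3)
print(representatives.shape)    # (3, 8) -> compact representation of the video
```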
What we want to do now is we want to compute, according to the TANIMOTO measure, the percentage of similar frames in the video. And if we do it in a naive way, then we just say, okay, this is the total number of frames of the video that are similar to at least one frame in the other video divided by the total number of frames. For every frame in the first video, I look whether there's a corresponding frame in the second video. Then in the second video, for every frame, I look if there's a corresponding frame in the first video. And then I divide it by the total number of frames in both videos and that's it. This is the number and this is what is called naive video similarity. So for all the frames in video X, I look whether there's a frame in video Y such that the distance is smaller. This is a characteristic function being one if there is, being zero if there's none. And I sum it up. The same I do over the frames of Y, a characteristic function if there's an X such that it's smaller than an epsilon. And then I divide by the number of frames in both videos. This is a naive video similarity. And the video similarity is one if each frame in X can be mapped to some similar frame in Y and each frame in Y can be mapped to some similar frame in X. And the video similarity is zero if there are no similar frames in the two videos. So if neither frame in any video can be mapped to the other video. Seems like good idea, doesn't it? Yes? No, it's not. Because consider the case where we have images in our video X. That is, I will just abbreviate the feature vectors. And then I have my video Y. Are they similar or not? Probably not. Because here's a lot of content that is not in X. But if I just consider the video X, perfect. It's all mapped to the same shot. And of course you have to account for different lengths of the video. It's not clear that there's a correct matching. So each frame of X is matched onto an according frame of Y and they have the same number of shots in common. So it's not that clear. You have to do it on both sides. Otherwise you will not detect dissimilarities. Good idea. Well, it's called naïve. Of course it's called naïve for a reason. And it being naïve stems from not being intuitive. Of course the same trick that I just did with the videos can be done also by duplication of videos. So for example, if I have a shot that occurs in a similar fashion in a different video, then I take the second video and I duplicate this shot a lot of times. So I take one shot from video A of video X it was, from video X. And then I generate video Y by just duplicating this frame, this shot several times. Then the naïve video similarity between the two videos gets closer to one. The more often I duplicate this shot, this single shot, the less is accounted for different shots in either of the video. So it does not work. So let's look at an example. I take one video X over here that has two shots and this is my feature frame, space. Whatever it is. And then I take a second video, this is Y and these are the frames of Y, somehow distributed around it. What is the video similarity? Well, you would say, well, obviously for the video X, I do have this shot down here and this shot down here and video Y only has this shot up here. So it should be around 50%. If you calculate it in the naïve video similarity way, you have a lot of functions saying, this is similar, this is similar, this is similar, this is similar, this is similar, this is similar, this is similar. 
And you only have one possibility saying, okay, this is dissimilar. So what you actually calculate is a video similarity of 90%. This is not impressive. This is why it's called naïve. It has something to do with distributing the frames of the video over the feature space rather than just measuring the distance between individual frames and then saying it's similar or not. And this is what we're getting at. So we should consider the quantities of similar frames as fundamental units. We should look at the clusters of frames rather than at the individual frames. And as we said, we skip the temporal structure, so we just see the set of features. We combine the visually similar features to clusters and then count whether the clusters are made up by frames of both videos. Because if there is a cluster that is just populated by frames from one video, then there is a big dissimilarity. And the degree of dissimilarity, the degree of overlap is basically the percentage of shared clusters rather than the percentage of shared frames. Okay? Good. Well, now two frames belong to the same cluster if the distance is beyond some epsilon. The problem here is how do we cluster? What do we do? I mean, there are several possibilities. There's complete link clustering, there's single link clustering, there's a lot of different clustering algorithms. And the big problem is consistency. Just consider we have a couple of frames and we have frame x and a frame y and the distance is smaller than epsilon. And we have a frame y and z and the distance is smaller than epsilon. What about the transitive distance, the distance between x and z? Can we say something about that? Well, basically we can't because if we consider the feature space, it might be the case that we have x and y and that and while these always may be smaller than epsilon, we can say that this is probably bigger than epsilon. Okay? Smaller than 2 epsilon, but that doesn't help us. Of course, it could be the case that it's like that. Well, we have the x and the y and the z and this is smaller than epsilon and this is smaller than epsilon and this is also smaller than epsilon. Could happen. But there's basically no guarantee for that. And that is one of the big problems with single-link clustering. In a single-link clustering, a distance smaller than epsilon between two members of the cluster means they belong to the same cluster, but not vice versa. If two objects or if two frames belong to the same cluster, it does not have to mean that their distance is rather small because it could even be kind of like we start with x and then comes y and this is smaller than epsilon and then comes z and then comes a and then comes b and then comes c. We have such a chain of frames and it's always smaller than epsilon. But this may be arbitrarily big. So our clusters can degenerate if we use single-link clustering. And they're not nice round clusters like we would like them, but they could be chains basically spanning the whole feature space. This is obviously not very helpful. Yeah, so what do we do? Let's just introduce a notation for some video x. We take the square bracket with epsilon. That is the cluster of the things, the frames in x that are within an epsilon distance from each other. And we will call a cluster epsilon compact if all the frames of the cluster have at most the distance epsilon with respect to each other. So if the case is like that, perfect. That is epsilon compact. Basically the distance between all three of them is smaller than epsilon. 
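(As a quick aside, here is a small sketch of the naive video similarity and of the 90% duplication example discussed above; the one-dimensional frame features and the epsilon are made up.)

```python
import numpy as np

# Sketch of the naive video similarity (NVS): count the frames of each video
# that have at least one partner frame within epsilon in the other video.

def naive_video_similarity(X, Y, eps):
    """X, Y: arrays of frame feature vectors."""
    def covered(A, B):
        # how many frames of A have some frame of B within distance eps
        d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
        return int(np.sum(d.min(axis=1) <= eps))
    return (covered(X, Y) + covered(Y, X)) / (len(X) + len(Y))

eps = 0.5
X = np.array([[0.0], [10.0]])             # two visually different shots
Y = np.array([[0.0]] * 8)                 # the one shared shot duplicated eight times

print(naive_video_similarity(X, Y, eps))  # 0.9 -> the duplication trick inflates NVS
```

With two distinct shots in X and the shared shot duplicated eight times in Y, this prints 0.9, the 90% from the example above.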
If the case is like that, this is not epsilon compact because the distance over here may be larger than epsilon. Just for the notation. And if we have a union of two clusters, whoop, here, we will just do the same thing. So we do not care whether the frame is of x or of y. We will just say there is an epsilon cluster for both kinds of frames and it is epsilon compact if they are at most all the features are at most an epsilon from each other. Good. Yes? Yes? That is right. Different shots. Well, let us do not think about fading so much. Think about a typical news program. So for example you will have the anchorman that is returning periodically doing one report and then comes some, I do not know, debate and parliament or images of some politician or whatever, you know, and then it is again the anchorman. It is a new shot. You may have moved, you may have a different background image, so it is slightly different. Then comes a new report, maybe I do not know, war in Iraq or something like that. Then comes again the anchorman, new background image. So again, slightly different. And thus different shots may definitely move over time. They will be somehow similar exactly, but from shot to shot it may move. So for many types of videos we will have compact epsilon clusters. So if you consider this kind of news anchor, you have the news anchor and you have the typical image over here and this part will change and the rest of the image will probably be the same. I mean, he will move this hat or whatever, you know, but that is not different in times of the colors or whatever it may be, you know. So you will have part of the image that is different and the rest of the image stays the same. So in terms of the feature vector, that will be a typical case where different occurrences, different shots of the anchorman will result in different points that are however probably epsilon compact, but if you have the different, the images in between, you may have the war in Iraq, which is kind of like a desert landscape and I don't know, something like that and then you may have totally different images of a politician or something like that. And that could well be that the points start moving, that pairwise they may be similar, but that they start moving apart and then this would be also a cluster because kind of the difference between either one of them is smaller than epsilon, but it is no longer epsilon compact. Yes? This is still the X cluster, so this is just video X, it's a news program for example. Okay, if I now have a video, the next day's newscast, which is my video Y, then I probably will have very similar videos over here that will give me probably some X's here, okay, images in the background, but same anchorman, same studio, okay, and this would be X and Y, the cluster epsilon, okay. I can just, hmm? This one over here? Yeah. Yeah? Well, that will probably also contain some, or there might be some here, you know, might be new clusters that come with X, might be existing clusters that come with X. Depends. Let's assume that on one day there's a war in Iraq is a big issue, then you will have a lot of politicians, you know, and doing things, maybe different politicians that might be the cluster down here, okay, that is somehow sneaky. And of course you will have the reports from Iran and there will be desert landscape, desert landscape, desert landscape, all looks alike, little cluster over here, okay. 
Next day there will be a big problem with the oil spill in the Gulf of Mexico, again the anchorman sure will add some to this cluster, there will be politicians saying something about it, will probably add to this cluster, there will be probably no desert images, so this clusters will not contain any Y videos. But there will be some images from, I don't know, like the beaches or the sea where the oil spills or the birds that have oil encrusted feathers or something like that. That will be a new cluster, huh? So not every cluster has to contain frames or shots of both videos, but there may be typical elements that are very close to each other in both videos, there may be elements that are totally different and this is how we distinguish, you know. So if we say, well basically there's a news program that only deals with political issues, then they will all have parts of this cluster and all have parts of this cluster and so they are very similar in what they do. Here we can see that news programs from different days may be very dissimilar because they just share the anchorman and then the topics are totally different. And so it's the visual impression of the shots in between. Okay? Clarifies your question? Good. So we can define the ideal video similarity as the percentage of clusters taking videos from both, taking the shots or key frames from both videos, which contains frames from both sides relative to the total number of clusters. So the more clusters that we have, containing excess end dots and the less clusters we have containing only excess or containing only dots, the more similar are the videos. Meaning I take the clusters all over the feature space and for each cluster I take a characteristic function that I say, okay, R images from video X in the cluster, then the characteristic function is 1, otherwise it's 0. And R images from Y in the cluster, characteristic function is Y and is 0. So this whole thing will evaluate to 1 if there are features or if there are frames from both videos in a cluster, otherwise it's 0. I just sum it up and divide by the total number of clusters. Okay? That should do the trick, shouldn't it? Let's show it. I have two videos. Both have two frames, two shots and I take the key frames. One is the video that is shown by the X's over here. The other one is the video that is shown by the dots over here. How many clusters do I have? Well, I have one cluster because they have a difference of epsilon. I have a second cluster, that's round one, and I have a third cluster over here. So how big is the ideal video similarity? Well, it's a third because of the three clusters, only one contains shots from both videos. And I don't care if there's a second cross here or if there are multiple points here. It doesn't influence the results. I'm just counting clusters. I'm not counting individual frames anymore. So our trick was duplicating one frame from the other video and then putting it into the video doesn't work anymore. Okay? Yes? Well, I'm not sure if the half would be intuitive because you have three kinds of frames. I mean, these three frames, if we consider it like that, are visually totally different. So this might be the desert scene. This might be an oil spill scene. And this might be, I don't know, the anchorman or whatever. 
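(As a quick aside before the discussion continues: a possible sketch of the ideal video similarity just defined, clustering the combined frames with a simple single-link rule at threshold epsilon and counting the clusters that contain frames of both videos. Note that such single-link clusters need not be epsilon-compact in general, as discussed above, and the features and epsilon below are made up.)

```python
import numpy as np

# Sketch of the ideal video similarity (IVS): form the epsilon-clusters over
# all frames of both videos (here via single-link connected components) and
# count the clusters that contain frames from both videos.

def eps_clusters(frames, eps):
    """Single-link clustering: connected components of the epsilon-graph."""
    n = len(frames)
    d = np.linalg.norm(frames[:, None, :] - frames[None, :, :], axis=2)
    labels = list(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if d[i, j] <= eps:
                old, new = labels[j], labels[i]
                labels = [new if l == old else l for l in labels]
    return labels

def ideal_video_similarity(X, Y, eps):
    frames = np.vstack([X, Y])
    origin = ["X"] * len(X) + ["Y"] * len(Y)
    clusters = {}
    for lab, org in zip(eps_clusters(frames, eps), origin):
        clusters.setdefault(lab, set()).add(org)
    shared = sum(1 for members in clusters.values() if members == {"X", "Y"})
    return shared / len(clusters)

eps = 0.5
X = np.array([[0.0, 0.0], [5.0, 0.0]])    # two shots of video X
Y = np.array([[0.1, 0.0], [9.0, 9.0]])    # one shot close to X's first, one new shot
print(ideal_video_similarity(X, Y, eps))  # 1/3: one shared cluster out of three
```

For two shots in X and a video Y that shares one of them and adds a new one, this prints one third, matching the example with the three clusters.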
So I guess you could argue for it and say, well, if the anchorman is the same and half of the content is somehow different, then it should be a half, or you could say, well, basically, I just want the different impressions in terms of frames. I don't care from which video they actually are. And there are three kinds of impressions here, the oil spill, the desert, the anchorman, and our videos just coincide on one of them, which makes it a third. Matter of taste, probably. So usually it's defined like that, that you take the ground truth of all the different images and just collect it like that. And as soon as you find that a video is missing something, because you also have to cater for the case where the videos do not have the same number of shots. So there might be some cluster here and some cluster here. What's the similarity between the videos now? Is it still one half, because one video has half of the other, or should it be smaller? And I guess it should become smaller, that would be my intuition at least. So probably taking the total number of clusters is a good idea. I don't know. But I guess you could argue for both things. Good. So that's the basic idea of ideal video similarity. And now if we have to calculate that, I mean it seems rather simple, but if we have to calculate that, we have to calculate the distances between a lot of pairs, because basically we have to make up the clusters for all the different frames. And then look in every cluster whether there is, well, a representing frame from each video. That's kind of a time-consuming method. So what we could do is we could say, well, maybe we don't want to look into all the clusters and all the pairs of frames, but we would rather like a sampling approach and just say, okay, we randomly take some clusters out of the feature space, look into them, and the number of clusters that contain frames from both videos is kind of representative for the total number of clusters containing the same. So if I just have my feature space and I have a number of clusters in the feature space and I say, well, I take this one and this one and this one, and this is a random sample, really, then probably if my sampling size is big enough, it will be a good approximation of what happens in those clusters over here. Okay, then the idea would be to represent each video through M randomly selected video frames and estimate the ideal video similarity by the number of similar pairs in the samples. Okay? The problem is it doesn't work, because if I take small values of M to speed up the calculation, if I have a small random sample, maybe 10 percent or something like that, it may severely distort the results. Because consider some videos of the same length where for each frame in X there's exactly one similar frame in Y, and vice versa. So it's really the case that these are the same videos. Then the expected number of similar pairs with respect to M, so if I draw a frame from one video and a frame from the other video, the probability that I exactly drew a matching pair is rather low. The expected number of matches is quadratic in the sample size divided by the number of possibilities I have to draw from each video, which, since they have the same length, is either the number of frames in X or the number of frames in Y, so roughly M squared over the video length. Thus, it takes on average a sample of about the square root of the video length to find at least one similar pair. And the square root is not a very good sample size, because it's quite a lot already. 
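(A tiny made-up simulation of that sampling argument: two identical videos of length L where frame i of X matches exactly frame i of Y; drawing m random frames from each yields on average about m*m/L matching pairs, so m has to reach roughly the square root of L before even a single match is expected.)

```python
import numpy as np

# Made-up simulation: two identical videos of length L, frame i of X matches
# exactly frame i of Y. Draw m random frames from each and count matches.

def expected_matches(L, m, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(trials):
        sx = rng.choice(L, size=m, replace=False)   # sampled frame indices of X
        sy = rng.choice(L, size=m, replace=False)   # sampled frame indices of Y
        total += len(set(sx) & set(sy))             # pairs that happen to match
    return total / trials

L = 1000
for m in (10, 32, 100):
    print(m, expected_matches(L, m), m * m / L)     # roughly m*m/L matches on average
```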
That kind of gives me at least a similar pair, and we were assuming here that there were only matching pairs. So this is a very bad idea to do it via sampling. We have to think of something else. I mean, we don't want to compute all the clusters and then compare all the clusters. We can't do it with random samples because that would result in a very bad distortion of what we see. What do we do? Well, the idea is to divide up the feature space. Let's not look at the clusters. Let's look at the feature space and at partitions of the feature space. Partitions of the feature space directly carry the notion of Voronoi diagrams. This brings us to our next detour about Georgy Voronoi. We want to partition a space, and a field of mathematics deals with these tessellations. Georgy Voronoi is a big name in this field. He was a Russian mathematician of Ukrainian origin and his diagrams are well known in the field of mathematics for this. So what he actually has done is he has decomposed the metric space into disjoint parts by using some sort of polygons with a certain property. So the problem was, starting from a metric space and a set of origin points from this metric space, the goal was to divide this space into exactly as many parts as the cardinality of this set of points, the number of origin points. And so that in each of these parts, there is just one point from X. So if I would take this space here and, like, two points here, then I would have this possibility or this possibility or an infinity of possibilities to partition this space. And as I've said, these two regions each contain just one origin point. The idea of Voronoi in his tessellation, however, was that if we take the same space and again the same points and split it somehow into these regions, then, for example, in the first region, every other point, let's take this point here, is closer to the origin point in that region as opposed to the other one. So this distance here is smaller than this distance here. And this way, you can obtain a unique tessellation of this space. If you take a more complicated tessellation with more origin points, you obtain something like this. And again, the property holds. So for example, I'm going to take just a random point somewhere here. This point here is closer to the origin point of this region than to this one or this one or this one. So it's a simple property, but a very useful one. And if you take the Euclidean space, then for each pair of points you have a hyperplane between those two points. For example, I'm going to take, let's say, these two origin points. Between them, there's this hyperplane computed out of points which are equally distant to both of these origin points. And the property is that to the left of this hyperplane, I have only points which are closer to the first origin, so to this one here. And to the right, I have only points which are closer to this one here. And probably, if you think of this property and the hyperplane, you can already imagine a possibility to compute and calculate Voronoi diagrams. Let's see an example here. So I'm going to divide, to partition, this space based on four origin points. I'm going to calculate the Voronoi region for this one in the example. So compare this point to this point here. There's a hyperplane which goes exactly through the middle of the distance line between these two points. It's perpendicular to that. That's the hyperplane, and each point on the hyperplane is equally distant to both of these points. 
What this basically says is that the points to the left, everything was white, belongs to my left origin point, and everything here doesn't. So I've already obtained a partition for these two points. Okay, now I'm going to perform this again for these two points. So the second one. I've done it with this one. I'm going to do it with that one. Again I'm computing my hyperplane, and I can observe that. So I know already this part here from the previous comparison doesn't belong to my original region, and I also get that this is how it was before. That this part here again doesn't belong to my Voronoi region for this point here. And then I go further with the third point and do exactly the same thing. I get this hyperplane here. This was from the second point. This one, this was here from the first point, and I get this from the third point somehow. And this way I get my final Voronoi region for this point here. And then I do the same for the second point, where as a starting point, calculate the Voronoi region for that one, for the third one, and for the fourth one. And this is how I get the final Voronoi tessellation. And as you can see with this simplistic algorithm, well, it has complexity of quadratic end. So I have to compare for each region with all of the points, but with this simplistic algorithm I'm already able to compute the tessellation. And there are different applications of Voronoi diagrams, one which is well known is for the growth of crystals. You all probably know LCD monitors and stuff like that. When growing crystals, you start with some original points like in the Voronoi case. And in conditions of the same temperature, the crystals grow and meet somewhere where the hyperplanes cut. And you can calculate this based on this Voronoi diagram. And there are also some more efficient ways in calculating this, which results in a complexity of n log n for the Voronoi diagrams. But of course we've done this with the reason, so further we will continue by showing you how to use Voronoi diagrams for video similarity. Okay. So the video similarity in Voronoi diagrams, we've seen that we can compute Voronoi diagrams quite efficiently. So that's a good idea. And basically the Voronoi diagrams are nothing but divisions of spaces. If we take the clusters as the smallest part of the video similarity, then we can division or partition the space, the feature space according to the clusters. For each cluster, there's a Voronoi region around it that belongs to it. And the further apart the clusters, the bigger will be the Voronoi regions. If clusters are very close to each other, there will be very many small Voronoi regions. If I have video that has a number of frames, the Voronoi diagram for this video is a division of the feature space in basically L cells. What happens is I take the feature space, I take video X, and I put video X into the feature space, resulting in its frames X1, X2, X3, X4, X5 and so on, until XL being part of the feature space. Then I can say, okay, then let's see at the tessellation, like that, like that, like that, like that, probably here, over here, over here, over here, and over here. This would be a Voronoi tessellation of the space for the video X. I can do that for every video, obviously. Good. The Voronoi cell for some frame in the video contains all the vectors which lie closer to that frame than to all the other frames in the video. 
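(A quick sketch of that nearest-site property from the detour: assign every point of a grid to its nearest origin point; this discrete nearest-site assignment is exactly the Voronoi partition, and the per-cell point counts approximate the cell volumes. Sites and grid resolution are made up.)

```python
import numpy as np

# Brute-force discrete Voronoi partition: every grid point is assigned to its
# nearest site, i.e. "closer to this origin point than to any other".

sites = np.array([[0.2, 0.3], [0.7, 0.8], [0.8, 0.2], [0.3, 0.9]])

# a 50x50 grid over the unit square as a stand-in for the feature space
xs, ys = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)

# distance of every grid point to every site, then take the nearest site
d = np.linalg.norm(grid[:, None, :] - sites[None, :, :], axis=2)
cell_of_point = d.argmin(axis=1)   # index of the Voronoi cell each point falls into

# relative size (volume share) of each Voronoi cell, approximated by the grid
areas = np.bincount(cell_of_point, minlength=len(sites)) / len(grid)
print(areas)                       # fractions summing to 1
```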
So if I have this Voronoi diagram, somehow given, and this is basically defined by the frame X1, then all these feature vectors lying around here, somewhere in the cell, are closer in distance to X1 than to any X2, X3, and so on. They belong somehow to this vector. We will say that the Voronoi cell of some frame is basically the set of feature vectors for which the closest frame, let's just say g of the vector with respect to X, is the frame XT. Frame XT, of course, is from our video. We will see G as just a function telling us which is the closest frame of the video for every feature vector. If we have equal distances to several frames, you just take the frame that is next to some predetermined point. So there might, of course, be a point here. So where does it belong to? X3 or X5? Well, it's right in the middle. It has equal distance. So we will just say, well, if it's equal distance, we will always attribute it to the point that is closest to the origin. So we will decide for one direction and then say for one fixed point in space, and then just say the border points are assigned always to one point. So also the points over here and the points over here are always assigned to the same point. OK, yes? We're not talking about the clusters yet. We're talking about the frames of the videos. Again. No, no, there's just one origin point. Well, we have the feature space. These are the frames or the key frames of the video. And the dots between are basically all the feature vectors that are around it. Possible feature vectors, not justified by any frame in the video. It's just a possibility that there might be a frame that's lying there. And they are closest to one frame. The idea is, why we are doing it that way, the idea is, of course, that if any of the feature points would be instantiated by some other video, video Y for example, OK, then they belong to the same Voronoi region. This is the idea. We're doing the Voronoi tessellation on the feature space for every individual video. And we can uniquely identify or assign any point in space to one of the frames in the video, except for the points here on the border lines that have exactly equal distance with respect to two frames. And there we have to decide for either one, but in a consistent fashion. So we will always decide for the one frame, assigning to the one frame that is closest to the origin of our feature space. OK, this doesn't influence anything. This is just a minor number of points being on the border lines between Voronoi cells, and deciding for either one is OK. Good? OK. So the Voronoi cells are combined for frames of the same cluster. So we can say that if we have the points of a cluster, the Voronoi cell of the cluster is just the union of the Voronoi cells of the frames, the feature vectors, being part of that cluster. OK? So if I have a couple of frames and I say this is basically one cluster because this is smaller than epsilon and this is smaller than epsilon, OK? This is also smaller than epsilon. They form a cluster. Then the Voronoi cell of the cluster is basically all that. It's the union of the individual Voronoi cells. All I'm saying. Clear? Good. Then I can define similar Voronoi regions for two videos. Now we have two videos and their two respective Voronoi diagrams, such that we say, well, we have the two videos. We have a given epsilon that is basically the admissible distance, the possible distance between them. 
Then we will look at the Voronoi cells of the frames x and the Voronoi cells of the frames y and their intersection, with respect to all the pairs that have a distance smaller than epsilon. So all the x-y pairs belonging to the same cluster. If two frames from different videos are close to each other, then also their Voronoi cells will intersect. I have the very simple case here. The Voronoi cells for video x, and then I have a video here and here that will make the Voronoi cells for the blue video. If I now say these are very close, the Voronoi cells intersect. If two frames from different videos are close to each other, the Voronoi cells will intersect, and the more similar pairs I have, the more intersections I will have. Thus the surface area, the volume, will be larger. Good. Let's look at two videos. I look at the video here. This is the x-video and I will look at the y-video having two frames here. This is our feature space. X-video here. Y-video here. If I now look at the respective Voronoi cells, ignoring the red crosses, these are the Voronoi cells for the blue video. If I now look at the Voronoi cells for the x-video, ignoring the blue dots, these are the Voronoi cells for the red crosses. What are the clusters that contain shots from either video? Well, I do have one cluster here because this is smaller than epsilon. I do have one cluster here and one cluster here. If there is a cluster containing shots from two videos, from the different videos, then I will count this cluster by intersecting the Voronoi regions of both frames. What do I have to do? I have to intersect this area over here with this area over here, which yields basically the area over here. The gray shaded area. For the other clusters, down here they don't contain any frames from the y-video, and there they don't contain any frames from the x-video. Nothing to be done about that. What we say now is that this intersection of the space for the clusters that contain frames from both videos is a good indication of how big the video similarity between them is. The volume of this intersection is a measure of the video similarity. There are some technical problems. The Voronoi cells must be measurable. We must have a way of measuring in our feature space. We have to consider our feature space as compact. The similarity value cannot be arbitrary, it has to range between 0 and 1 or whatever, so we need a normalization. We will just assume that the total volume of the feature space is 1. The fraction of the volume that is occupied by clusters having frames from both videos is then the video similarity. Since clusters and Voronoi cells do not overlap, the video similarity, so we will call it the Voronoi video similarity, VVS, is basically the volume of the intersection. Which means, as we have said, that if we find two frames from different videos with a distance smaller than epsilon, we will just take their individual Voronoi cells and intersect them. Then take the union of all these parts and get the volume of this union. Since they do not overlap, Voronoi cells are always strictly separated, the volume of the union is just the sum of the individual volumes. So the volume of the union is the sum over the volumes. This is what we need to do. We need to find the volume of the intersections for the different clusters containing both videos and just sum it up. 
The more clusters we find containing both, the bigger the Voronoi regions there will be and the bigger the volume thus will be, meaning that we kind of end up with a good video similarity. Good. In the example, we get a Voronoi video similarity like we just did, like seeing this as our intersection of about a third, which is also consistent with the ideal video similarity in this example. Ideal video similarity was again, we had one cluster, we had two clusters, we had three clusters, so it was one third. Yeah, that does not always have to be the case, however, because the good correlation between ideal video similarity and Voronoi video similarity here stems from the fact that our video frames are distributed quite evenly over the space. So it really makes up a lot of the space and that may be different. It does not always have to be the case, but what we could get for the Voronoi video similarity is now a random sampling. That is different why we did the Voronoi video similarity, because we did not want to count clusters because then we would have to count all the pairs between frames in both videos. Random sampling did not work. But if we use the full space, then random sampling in the space and seeing whether they fall into some shared region between the video or some non-shared region between the video, that gives us a good impression. So what I am going to do is basically I am going to have the feature space, I am going to put all the videos in the Voronoi tessellation, something like that. And the second one for the other video, like that. And now I am going to pepper the room with random sample points. And for each of the random sample points, I determine with respect to the videos, is it in a shared region or is it not in a shared region? And the number of points that I am using is kind of, of course, correlated to the error rate that I am incurring. If I just use one random sample point, then it is rather simplistic. Either it is in such a region or it is not. And then video similarity will be 1 or 0, that does not make sense. But if I take a thousand of these points, I will get a very good impression of how big the spaces actually are. Because the number of shared clusters, where you say, okay, this is a shared cluster, this is a shared cluster, and maybe this is a shared cluster. Since it completely partitions the space, the probability of hitting it with a random seed is directly proportional to the volume that it has with respect to the total space. The more shared space there is, the more intersections there are, the larger the space will be as opposed to the total volume of 1. This is the basic idea behind it. And by kind of shooting at this space with seed points and seeing where they are located, I can figure that out. So this is what I am going to do. I am going to generate so-called seed vectors that are independently and uniformly distributed over the space. And then I check for each of these points, whether it is located in some intersection or not. If it is located inside some intersection, it has to be a Voronoi cell of x and of y, such that the two frames at the center of both Voronoi cells are very close to each other, smaller than epsilon. And then I take the frame from the x video that has the smallest distance to my seed point and the frame from the y video that has the smallest distance to my seed point. And it is in the intersection, my seed point is in the intersection, if the distance between these two points is smaller than epsilon. 
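(Here is a sketch of exactly that seeding idea: draw uniformly distributed seed points, find for each seed the closest frame of X and the closest frame of Y, and count the seeds whose two nearest frames are within epsilon; the frames, epsilon and number of seeds are invented for illustration.)

```python
import numpy as np

# Sketch of the seed-based estimate of the Voronoi video similarity: a seed
# point lies in a shared region exactly when its nearest frame of X and its
# nearest frame of Y are within epsilon of each other.

def nearest_frames(seeds, frames):
    """For every seed vector, return the closest frame of the video."""
    d = np.linalg.norm(seeds[:, None, :] - frames[None, :, :], axis=2)
    return frames[d.argmin(axis=1)]

def estimate_vvs(X, Y, eps, m=5000, dim=2, seed=0):
    rng = np.random.default_rng(seed)
    seeds = rng.random((m, dim))                       # uniform seeds over [0,1]^dim
    gx, gy = nearest_frames(seeds, X), nearest_frames(seeds, Y)
    inside = np.linalg.norm(gx - gy, axis=1) <= eps    # seed lies in an intersection
    return float(inside.mean())

eps = 0.1
X = np.array([[0.2, 0.2], [0.8, 0.8]])     # two shots of video X
Y = np.array([[0.22, 0.21], [0.8, 0.2]])   # one shot close to X's first, one new shot
print(estimate_vvs(X, Y, eps))             # fraction of the space covered by shared cells
```

Keeping the array nearest_frames(seeds, X) around for every video, computed once against one collection-wide set of seeds, is essentially the video signature that is introduced next.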
Again, we take the feature space and take a seed point s1. For each seed point, look at the closest frame from the x video, x3 for example, and look at the closest frame from the y video. If the distance between these two is smaller than epsilon, s1 is in an overlapping region, yes? If the distance is bigger than epsilon, then these are two different clusters and s1 is not in an overlapping region. If I do that for a lot of points s1 that are randomly and uniformly distributed over the feature space, I will get an impression of how many intersection areas there are. This gives me a sampling intuition about the Voronoi video similarity and thus about the ideal video similarity. This is basically what we are doing now: we describe each video through a tuple with respect to the m seed points, where I take the frames from the video that are most similar to my seed points. This is what we will base the video similarity on. So as opposed to the random sampling that we tried before directly on the videos, just take some random frames from x and some random frames from y and compare them, which didn't make much sense, we now build a video signature where we say: take those frames from x that are closest to my seed points and take those frames from y that are closest to my seed points. And now we can compare them, because we already have matching pairs with respect to the seed points. This is the idea, and it is usually called a video signature. So every video gets an m-dimensional signature consisting of the frames closest to the seed points. Good? Clear? And the similarity measure for two videos is basically the degree of overlap between the respective signatures. If the frames in the signatures at the same position, so with respect to the same seed vector, are closer than epsilon, I count it; if they are further apart than epsilon, different clusters, no overlap, I don't count it. Divided by the size of the signature, this is my basic similarity, and it has been obtained by random sampling. I have abstracted every video into m frames. Well, the obvious question is how big m should be; we come to that in a minute. But this is at least a way of sampling that does not require looking at the pairwise similarities between all the frames of both videos, so that is good. This is called the basic video signature similarity, because it is based on these video signatures, and basic because it has some problems that we will deal with later. Since the seed vectors are uniformly distributed, the probability that a seed lies in the intersection of two Voronoi cells is directly proportional to the volume that is made up of shared cells as opposed to non-shared cells, and thus the video signature similarity is directly proportional to the Voronoi video similarity. And of course, over a video collection you have to use identical seeds: you decide to randomly draw seeds once and then build a signature for every video in your collection with respect to exactly these seeds. Otherwise you can't compare the videos. Okay? Yes? It doesn't matter, because since your seeds are distributed uniformly over the feature space, it is rather improbable that you get it like this.
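Before turning to the number of seeds, here is a minimal sketch of the signature construction just described, assuming real-valued feature vectors and the Euclidean distance; the function names and array shapes are my own choices, not from the lecture.

```python
import numpy as np

def video_signature(frames: np.ndarray, seeds: np.ndarray) -> np.ndarray:
    """For every seed point, pick the frame of the video closest to it.
    frames: (k, d) feature vectors of the video, seeds: (m, d) seed vectors.
    Returns the (m, d) signature."""
    # distance from every seed to every frame, then argmin per seed
    dists = np.linalg.norm(seeds[:, None, :] - frames[None, :, :], axis=2)
    return frames[dists.argmin(axis=1)]

def basic_signature_similarity(sig_x: np.ndarray, sig_y: np.ndarray,
                               eps: float) -> float:
    """Fraction of seed positions whose assigned frames are closer than eps."""
    return float(np.mean(np.linalg.norm(sig_x - sig_y, axis=1) < eps))

# Usage: the same seeds must be used for every video in the collection, e.g.
# seeds = rng.uniform(0.0, 1.0, size=(m, d))
# sim = basic_signature_similarity(video_signature(frames_x, seeds),
#                                  video_signature(frames_y, seeds), eps)
```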
But if two seeds land in the same Voronoi cell for both videos, then this seems to be a large Voronoi cell, and a large Voronoi cell is good for the overlap, which is good for the video similarity. Okay? Good. The number of seeds: we were just arguing that a single seed is of course not enough, we need a couple of them. So how many? It is obvious that the larger m is, the more accurate the estimate gets; it gets better if I take more samples. However, the smaller m is, the easier the signature calculation becomes, and the feature vectors for the videos become smaller. So the selection of the correct m is based on the error probability: what should I select, and how high is the error probability? If I take a video database with n videos and m seeds, and some constant gamma as the maximum allowed deviation, the error probability with respect to m is the probability that the database contains at least one pair of videos for which the difference between the Voronoi video similarity and the basic video signature similarity is greater than gamma. We don't want too much deviation between the two, so we fix it with this constant gamma, and what we have to compute is a bound on the probability that any pair of videos in the database deviates by more than gamma. And what we can show is that it is sufficient to choose m on the order of the logarithm of the number of videos in the collection minus the logarithm of the intended error probability, divided by the square of the allowed deviation. And I can actually prove that. I take the Voronoi video similarity and the signature similarity, which depends on m. Then I can use Hoeffding's inequality, which gives me the maximum probability that a sum of independent random variables deviates by more than a given constant from its expected value, expressed through the exponential function; that is a standard statistical tool. Then I can say: the probability that the two similarities deviate by more than gamma for some pair of videos in the set is at most the sum of the individual probabilities, a union bound over the pairs; whether one pair deviates by more than gamma has no influence on the other pairs, so we can simply add up the individual bounds, and each individual bound we can express with the Hoeffding inequality as an exponential function. And how many pairs of videos do we have in our collection? We have n videos, which makes about n squared divided by two pairs. This gives the error probability, and the error probability should be smaller than some value delta, the maximum mistake we want to make. If we want to choose m, we just have to solve this inequality with respect to m, which yields the error term that I showed before. So this is basically what we have to do.
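As a sketch of what solving that inequality gives: with Hoeffding's bound 2·exp(−2mγ²) per pair and a union bound over roughly n²/2 pairs, one plausible form of the sufficient condition is m ≥ (2·ln n + ln(1/δ)) / (2γ²). The exact constants depend on the derivation used in the lecture, so treat the following as an assumption rather than the precise bound from the slides.

```python
import math

def seeds_needed(n_videos: int, gamma: float, delta: float) -> int:
    """Sufficient number of seeds m so that, with probability at least
    1 - delta, no pair among the n videos has its signature similarity
    deviating from the Voronoi video similarity by more than gamma.
    (Hoeffding bound + union bound over ~n^2/2 pairs; the constants are
    one plausible choice, not necessarily the lecture's exact ones.)"""
    return math.ceil((2.0 * math.log(n_videos) + math.log(1.0 / delta))
                     / (2.0 * gamma ** 2))

# The bound grows only logarithmically in the collection size:
# seeds_needed(1_000, 0.1, 0.01)     -> 922
# seeds_needed(1_000_000, 0.1, 0.01) -> 1612
```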
So a good choice of m depends on the number of videos in the collection, on the error probability we are willing to accept, and inversely on the deviation we allow between the two similarity measures. The more deviation I allow, the smaller my m can be. The fewer videos I have, the smaller my m can be. The larger the error I may make, the smaller my m can be. If I want smaller errors, increase m; if I have more videos, increase m. Okay? This is the basic idea behind it. Good! The bound for m, and this is a good thing, is logarithmic in the size of the video collection. The logarithm is one of these typical functions: it does diverge in the end, it goes to infinity, but very slowly. So it really doesn't matter too much how many videos I take, m only grows very slowly, and that is good. And of course, the smaller the acceptable error of the method, the greater the value of m should be chosen. Good? Okay. So maybe we should make a small break, let's say five minutes, and then we go on to the seed vector generation. Okay. So let's go on. I pointed out before that the Voronoi video similarity is not always the same as the ideal video similarity. We said it has something to do with the evenness of the space: if the frames are distributed well over the feature space, this is okay; if the frames are biased in the feature space, it is not. And here is an example where you can see that this is very often the case. For example, take an ideal video similarity of one third, so I have one pair of matching frames and two clusters of non-matching frames. Then there could be two configurations. It could be like here in the image, where the space covered by the shared Voronoi cells is very small. Or it could be like here, where, again, one cluster, two clusters, three clusters, so again an ideal video similarity of one third, but the Voronoi video similarity is quite big, because the videos do not use the whole feature space. The rest of the feature space, which is totally empty, is up for grabs either for a shared cluster or for a non-shared cluster, and that affects the Voronoi video similarity. So what we want to do for the rest of the lecture is estimate the ideal video similarity by the basic video signature similarity, even when the Voronoi similarity and the ideal video similarity differ. And obviously what we have to do is even out this mismatch in space: we have to look at the density of the space and discount the non-densely populated areas, since they should not be regarded as valuable for the estimation. The seeds are spread evenly through the feature space, but whether a seed reports a cluster containing both videos or a cluster containing just one video is obviously influenced by the Voronoi cells: if a Voronoi cell is very big, it will have a big weight, a big influence on the similarity estimate. So what we have to do is not distribute the seeds evenly over the space, but rather distribute the seeds evenly over the Voronoi cells. So for example here, if I take this example and clear it off, what happens with my seeds? If I distribute them evenly through the space, I take the pen over here and go flop and flop and flop. Okay? This is evenly distributed over the space and randomly assigned.
However, it's all the same Voronoi cell because in this time I have a lot of empty space. What I should be doing is I should distribute the seeds evenly over the Voronoi cells. So the probability of hitting this Voronoi cell should be exactly the same as the probability of hitting this Voronoi cell or this Voronoi cell. Without considering how big they actually are. If I distribute evenly over the space, the size of the partition will directly influence the probability of getting a seed vector. Yes? If I look at the partition in terms of Voronoi cells, only the number of Voronoi cells should affect the probability of getting a seed vector. The more Voronoi cells there are, the less the probability for each individual Voronoi cell. But the size should not matter. Okay? Yes? Well, that is exactly the problem, isn't it? Because we have to take the same seed vectors for all the different videos. So we have to decide for a good, yes, it's true. We should take the Voronoi cells for each individual video. That would be the best way of doing it. But we can't do that because then we cannot compare videos anymore. Because different videos would result in different seed vectors. Different seed vectors would result in different signatures. And the matching of the frames would no longer be sensible. So what we need to do is, we will come to that in the end, we need to decide for one distribution of seed vector that is not too bad with respect to all different videos. So if the videos in my collection usually leave this space free, then it's a good idea not to put seed vectors into that space. But we will talk about that later. So the idea is really distribute the seeds evenly over the Voronoi cells regardless of the volumes. And to generate such seed, we can't use a uniform distribution over F anymore. So this is not valid. But we have to use a distribution with some density function. That is the idea. We have two videos and the distribution density here at some point in the space is basically how many clusters there are. And the volume of the Voronoi cells has to be taken as a normalization. So I have to divide by the volume of the Voronoi cell. That means the probability of getting a seed in a cell is dependent inversely on how many cells there are. That is also dependent or should be dependent on normalization by the volume of the cell such that the bigger cells don't get more seed vectors and the smaller cells do get less seed vectors. So I normalize for every cluster. I normalize the Voronoi cell by its volume and I normalize the total number of Voronoi cells for the basic probability. This is the idea behind it. Well, obviously, we build it like that, it is inversely proportional to the volume of the cell. So we get a uniform distribution on the set of clusters. Every cluster has the same chance of getting a seed vector, not counting the size of its Voronoi cell around it. And if we have a Voronoi cell of a cluster, the density is a constant. It does not change. So we have an equal distribution within each cluster. So what we can do is we can just randomly choose a cluster or we choose a random point within the cluster. But this random point within the cluster is not randomly distributed over the space anymore than it would be affected by the different volumes of different Voronoi cells. But it is normalized by the size. So it's the same probability for every single cluster, for every Voronoi cell. Good? 
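One way to write down such a density, with my own notation (C(X) is the set of clusters of the video or training collection, V(c) the Voronoi cell around cluster c): every cluster gets the same total probability mass 1/|C(X)|, spread uniformly over its own cell.

```latex
% Seed density that is uniform over clusters rather than over the space:
f_X(s) \;=\; \frac{1}{|C(X)|} \sum_{c \,\in\, C(X)}
             \frac{\mathbf{1}_{V(c)}(s)}{\mathrm{Vol}\bigl(V(c)\bigr)}

% Inside each cell the density is constant, and integrating f_X over any
% single cell V(c) gives exactly 1/|C(X)|, independent of that cell's volume.
```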
Now, if we do not want uniformly produced seeds but seeds drawn with this density, then we can estimate the ideal video similarity in the same way: we look at the intersections of the cells hit by the seeds, which are now distributed uniformly over the clusters, and we count the ones where the assigned frames are closer than epsilon, that is, where they belong to the same cluster. Okay? So for the newly generated seed points, I look at the overlap of the Voronoi cells and check which overlap is generated by a cluster shared between the videos and which is generated by a single video, and then I count only the ones created by common frames of the two videos. Easy, isn't it? So if we take the uniform distribution over F as the density, this is exactly what we said about the Voronoi video similarity, okay? But if we use this new density, normalized by the cell size and by the number of clusters, then we get it: the video signature similarity approximates the ideal video similarity if the clusters are either identical or very well separated. So we look at two videos such that, for every pair of clusters in X and in Y, either the clusters are the same, so their frames belong to one common cluster, or all the frames in the X cluster are more than epsilon away from the frames in the Y cluster. So any two clusters are either more than an epsilon apart, or they merge into one big cluster. This is the situation I want. I don't want any clusters that go like this, that are very close to each other, because then what happens if I have a seed vector here? There could be clusters that contain frames from both videos, but it could well be that this frame is the closest here and this frame is the closest here. This is not what I want, okay? So I will say: they are well apart. If I am somewhere in between, I get either representatives of this cluster or representatives of that cluster, but not both. That's the basic idea. Then the ideal video similarity is estimated by the density function integrated over the overlaps, the volume of the overlap, taken over all the pairs; that is what we already had. Now we can show, for each term in the sum, that if two frames are less than epsilon apart, they must belong to the same cluster, since we assumed different clusters to be well separated: it cannot be the case that a point in one cluster and a point in a different cluster are closer than epsilon, we have just excluded that case. And since they belong to the same cluster, I can rewrite the sum cluster by cluster: I take the clusters apart, and the intersections belonging to one shared cluster collect exactly the pairs of frames with distance smaller than epsilon. Good. And for all the frames we have in such a shared cluster, the union of their intersecting Voronoi cells is nothing but the cell of the whole cluster taken with respect to both videos together: if I have a point and I have its Voronoi cell with respect to x and its Voronoi cell with respect to y, then I can take the union of the Voronoi cells for the cluster; otherwise I would end up in a different cluster.
That wouldn't make sense, because it would be more than an epsilon apart. What I can then do is argue that, since this is the same cluster anyway, this intersection here is basically the cell of the merged cluster. Good? And that means that if I put in the density function, normalized by the size of the cells and by the number of clusters, the number of clusters obviously has nothing to do with the Voronoi cells, so I can pull it out of the integral. And this means that we have the sum over the integral of the characteristic function normalized by the volume. And what is that part over here? It simply counts the clusters that contain frames from both videos: it counts one for every such shared cluster, and the volume term normalizes away the size of its cell, so it doesn't matter how big the cell is, I just count it. This leads to the number of shared clusters divided by the number of all clusters, so the set of clusters containing frames from both videos divided by the number of clusters there are, which is exactly the ideal video similarity. So if I use the density function that normalizes the generation of seed vectors by the size of the Voronoi cells, I end up with the ideal video similarity. And this is what we just proved. Okay? Clear? You don't seem to be too convinced. Okay. So now comes the catch, because obviously it is not possible to use the same density function for the calculation of all video signatures: the density function is different for every video. It takes the volumes of all the different Voronoi cells into account, the Voronoi cells differ for every video, and so their volumes differ as well. That gives us a set of density functions, one for each video, and that is a problem, because for the comparison of videos the same seeds must be used, otherwise the frames don't match. So we deal with it like it is usually done in this area: we just say, well, let's assume our collection is not too heterogeneous. It is not totally different videos, they share something. Then there will be some general characteristics in how the feature space is exploited. These general characteristics can be learned by taking a representative training set, determining the density function for this training set, and then using this single density function for all the videos. Then we no longer have the problem of different density functions for different videos, but we can still compare them. And the mistake that we make, since we used a representative set that somehow captures the general characteristics of our video collection, is not too big. So we are no longer exactly in the ideal video similarity case, but we are close to it. So this is what we can do. The algorithm for generating a single seed, which we simply repeat m times to get m seeds, uses some value epsilon for the seed vector generation and a training set of t frames which reflects the collection as well as possible. That is the idea here: we identify all the clusters of the training set with respect to this epsilon, and then we randomly choose one of these clusters, that is, one of the Voronoi cells, each with the same probability.
We generate the seed point for that. Good? So after we have chosen the Voronoi cell, we generate random vectors in the feature space until one of them lies in that cell. I mean, the idea is simple: I have the feature space, I have clusters in the feature space, I have differently sized Voronoi cells, so this one is probably very small and this one is very big, but due to my clever normalization the probability of choosing this cluster is exactly the same as the probability of choosing that cluster, because I have normalized. Now, after I have settled on this cluster, this is the cluster I want to generate the seed vector in, what do I do? Well, I can generate random vectors in the feature space, just going pump and pump and pump, and as soon as one lies in the cell, I take that one. That is the easiest way of generating it. One can also use a random frame from this cluster as the seed: I just look at the cluster and take any random image out of it and say this is the seed value. So the seed value does not have to lie somewhere else in the Voronoi region, it can also be a member of the cluster itself. Two possibilities of actually creating the seed, yes? Either sample randomly and check whether I really hit the Voronoi cell, or just take a frame out of the cluster. Good. If we do that as an experiment, for example with 15 videos from the MPEG-7 content set, which is a typical test set, and create new videos by random deletion of frames such that we get different ideal video similarities, 0.8, 0.6, 0.4, 0.2, the experiment tries to determine the video signature similarity with 100 seeds generated either uniformly over the feature space or based on a test collection of 4,000 photographs that roughly reflects the MPEG content. What we can see is that, compared with the ideal video similarities of 0.8, 0.6, 0.4 and 0.2, the estimates from the uniformly distributed seeds show much bigger deviations, and the estimates based on the trained density function show much smaller standard deviations. So it seems to be the case, at least for this collection, and the MPEG-7 content set contains videos of all kinds, that some parts of the feature space are simply not exploited by most of the videos. That is the difference: normalizing the density function by the size of the Voronoi cells helps a lot, even if it is only done over a representative training set. So what we have done until now is basically this: we have looked at the basic video signature similarity and at the ideal video similarity, and we have argued that they essentially agree, because the video signature similarity reflects the Voronoi similarity, and with the normalization by the size of the Voronoi cells that matches the ideal similarity. Good, we can do that. But we did make a second assumption: we assumed that the clusters are well separated, that they are at least an epsilon apart, so that I do not get into these situations where a seed falls into a gap and I say, well, this is the closest from here and this is the closest from here, they belong to different clusters, even though they are less than an epsilon apart.
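Going back to the seed-generation procedure described above, here is a small sketch. It assumes Euclidean feature vectors in the unit cube, a precomputed clustering of the training frames with threshold epsilon, and rejection sampling for the "random point in the chosen cluster's Voronoi cell" variant; all names are mine.

```python
import numpy as np

def generate_seed(train_frames: np.ndarray, labels: np.ndarray,
                  rng: np.random.Generator, max_tries: int = 10_000) -> np.ndarray:
    """Draw one seed: pick a cluster of the training set uniformly at random,
    then rejection-sample points of the feature space until one falls into
    that cluster's Voronoi cell. labels[i] is the cluster id of
    train_frames[i], e.g. from clustering with threshold epsilon."""
    d = train_frames.shape[1]
    target = rng.choice(np.unique(labels))           # every cluster equally likely
    for _ in range(max_tries):
        candidate = rng.uniform(0.0, 1.0, size=d)    # uniform point in the space
        nearest = labels[np.argmin(np.linalg.norm(train_frames - candidate, axis=1))]
        if nearest == target:                        # candidate lies in the cell
            return candidate
    # alternative mentioned in the lecture: just use a random frame of the cluster
    members = train_frames[labels == target]
    return members[rng.integers(len(members))]

# seeds = np.stack([generate_seed(train_frames, labels, rng) for _ in range(m)])
```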
So I was always arguing that I could distinguish clusters by looking at the distance between frames: if they were more than an epsilon apart, they had to belong to different clusters, and if they were closer than epsilon, they had to belong to the same cluster. This is not really true. If the clusters are identical or clearly separated, it definitely is true. But the feature vectors are of course only an approximation of our visual perception, and there are discrepancies. So it can happen that visually similar clusters lie close together, and this is often seen if we have, for example, a slightly altered copy of a video: I change the color tones a little, make it brighter, or add more contrast. It is still the same video, it has the same visual impression, but the feature vectors will change slightly. What happens is that the points in space shift slightly, and although the ideal video similarity definitely is one, because all the clusters I have contain frames from both videos, the Voronoi diagrams are slightly different: each cluster has a certain diameter that shifts the Voronoi diagram, and therefore these typical gaps over here exist, because this is the cell for the red point and this is the cell for the blue cross, and they are about the same, but differ just slightly. What happens now if my seed vector falls into one of these places? Which closest frame of each video is assigned? For the red video, the Voronoi cell is down here, so the frame assigned is obviously the one over here. For the blue video, the Voronoi cell is up there, so the one assigned is obviously up there. The seed is assigned to different clusters although there is a cluster that contains both frames, just because the frames are not identical but slightly shifted. Of course that is bad, because as soon as we have a seed positioned in one of these gaps, we get a wrong, or at least suboptimal, measurement of the space. Since the Voronoi video similarity is defined by the intersection of the Voronoi regions, it is then strictly smaller than the ideal video similarity; it does not account for the gaps. You can quantify this difference by the offset, the free space: the bigger the shift and the bigger the clusters, the more free space you will have, and that means the bigger the mistake you will make. If we consider a seed between the Voronoi cells, we call this a Voronoi gap. Then exactly what I just said happens: this is assigned to this and that is assigned to that, and their distance is larger than epsilon, so they appear not to belong to the same cluster. This is an artifact, because both of them actually lie in correct, intersecting clusters. So what we should do is avoid the gaps when placing the seeds: we should always put the seeds into Voronoi cells, not between Voronoi cells. We have to avoid the gaps during seed generation. The Voronoi gap for two videos is basically the set of all feature vectors for which the distance between the nearest frame from the x video and the nearest frame from the y video is larger than epsilon, but there is a frame in x that is very close to the assigned frame of y, and there is a frame in y that is very close to the assigned frame of x. This is exactly what we had before; a formal way to write it down is sketched below. Here is our seed vector, and we have the x frame here and the y frame here.
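One way to write the Voronoi gap down formally, with my own notation (g_X(s) is the frame of X assigned to seed s, i.e. the nearest frame of X):

```latex
% Voronoi gap of two videos X and Y:
G(X,Y) \;=\; \bigl\{\, s \in F \;\bigm|\;
   d\bigl(g_X(s),\, g_Y(s)\bigr) > \varepsilon,\;
   \exists\, x \in X:\ d\bigl(x,\, g_Y(s)\bigr) \le \varepsilon,\;
   \exists\, y \in Y:\ d\bigl(y,\, g_X(s)\bigr) \le \varepsilon \,\bigr\}
```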
This is gy and this is gx but we have some y vector here and we have some x vector here and they build a cluster, a correct cluster. This is the case of the Voronoi gap. One can show when you do experiments that for simple feature spaces, in complex spaces it evens out somehow. If you have simple feature spaces like color histograms or motion vectors or something that is very often used for videos, the error incurred by the Voronoi gap is considerable. So it cannot be neglected and you usually have seeds that fall into the Voronoi gap and thus distort the estimation. Of course the smaller the epsilon, the closer the clusters are, the smaller the Voronoi gaps. So what we want to do now as the last part of the lecture is we want to avoid the use of seeds that, well at least with some probability, lie in a Voronoi gap. So if we randomly generate m seeds of which n lie in the Voronoi gap, then the video signature similarity of the remaining m minus n vectors is exactly the ideal video similarity because there I don't have the problem of assigning wrong clusters. So can we somehow find out whether some seed is in a Voronoi gap or not? And the answer is, well the definition of the Voronoi gap does not really help in the verification because you basically have to do a distance calculation between each signature vectors and all frames of the other video. So as soon as I want to have a seed at some point and assign some value from the x vector, then I would have to do distance calculation to all y1 and y2 and y3 and so on to find out whether there is a single vector ym where this is smaller than epsilon. So for each assigned point I would have to have a pairwise lookup of all the frames of the other video to determine whether there is something that is building the cluster. That is not a good idea. And if I just had the video, I just want to have these frames x for the video signature with respect to the m seeds. When I need a lot of time to calculate these signatures, the use of it would basically be invalidated. So this is something I cannot do. But of course one could say, well, there are certain probabilities for the fact that the seed is in a Voronoi gap. I can just say, well, I don't have to really check it out. I just have to find a probability distribution of the space that tells me where the clusters are and where the probable regions of Voronoi gaps are. So if both video sequences have roughly equidistance pair of frames with respect to the seed, so there is something in the video x that is close to the assigned point and there is a frame in the video y that is close to the y point. It is clear that the frames as such are dissimilar. So the seeds in the Voronoi gaps are near the border of different cells, the red cell and the blue cell. The blue cell here belongs to the blue x point and the red cell here belongs to the red x point. So if I have an x point and the assigned point to the vector and this is larger than epsilon, so they don't belong to the same cluster. They are both from video x over here and over here, but they don't belong to the same cluster. But the distance of the seed point is somehow very similar to these points. Then the seed point very probably is in a Voronoi gap. Clear? I take two points from the same video and they should be apart more than epsilon, so they are not part of the same cluster. But they share the same distance to some seed point. What does it mean? It means that the seed point is very probably close to a Voronoi edge. 
Because between those two points in x, and similarly those two points in y, there must be a Voronoi border in the middle. And if I put the seed point somewhere very close to this Voronoi border, the chance that there is a gap, caused by slight distortions of the y versus the x video, is very high. If I cannot find a different point from the other video that is roughly equidistant, then I am right inside some cell. So there are two cases like that. We have an x value here, an x value here, a seed point here. These are rather similar distances, but the two x values belong to different clusters, the black cluster here and the red cluster here, and there must be a Voronoi border between them. If the blue stretches here are very similar, then the chance that my seed lies close to this border is high. And if the same then holds for some y values, they will also have a border there, and thus the chance is that I lie in a Voronoi gap. So the Voronoi gaps are always in the vicinity of Voronoi borders. If we keep our seed vectors well away from borders, we don't have the problem anymore. This is the basic idea here. So, given two videos with epsilon-compact clusters, for every seed in the Voronoi gap there is a vector in x and a vector in y such that this x is dissimilar to the x frame assigned to s. So the distance between the two x frames is larger than epsilon, but the distance between the one x frame and the seed point and the distance between the second x frame and the seed point are very similar. We will say they are equidistant if the difference in distances is smaller than two epsilon. Good. So we can just prove it. Since our point is in the Voronoi gap, we look at the frames assigned to the seed point: the one assigned for x is gx of s and the one assigned for y is gy of s, and since s is in the Voronoi gap, the distance between this x point and this y point is larger than epsilon. Okay? So this is larger than epsilon. Good. Since the clusters are by assumption epsilon-compact, gx of s can't be in the same cluster as this x. So there must be some x over here and there must be some y point over here, by the epsilon-compactness of the clusters; there must be something around it. And this is definitely smaller than epsilon and this is also definitely smaller than epsilon. That means that the distance between those points is also larger than epsilon. Yes? Good. Now take the distance between the x and the seed point and between the assigned x and the seed point, so the distance between this one and the distance between this one; this is nothing but the difference here. Okay? By the triangle inequality, I could also estimate this difference by going to point y first and then continuing from point y to point s; the triangle inequality says it must be larger than this one. And since we know this one is bigger than epsilon, we can just put in the epsilon. Okay? So this is nonsense. And since s is in the Voronoi gap, we know that there is a y in the other video such that this y and the assigned point in x are closer than epsilon, and we know the same thing for the other part; these are the epsilon-compact clusters. And this means that we can basically take the chosen point and the epsilon, and thus the point over here is similar to the point over here; it must be further away. So this is nonsense. We can estimate the seed over here.
This is the assigned point of y, gy of s, and this is the other point y. And if this one were smaller than the one for gy of s, then it would have been the assigned point, so it has to be larger. Okay? So the distance between y and s is obviously larger than the distance between gy of s and s. Okay? And that means the distance between gy of s and s and the distance between g of x and s, this is smaller than epsilon. Okay? Because here is also an x value. And again the triangle inequality yields that, if I go there and then go there, this has to be smaller than epsilon, and this is g of x of s. Okay? Here somewhere my x got lost, yes? Yielding the epsilon. So the difference between y and g of x is smaller than epsilon. Right? Good. So we can build a criterion from that. We just test whether a seed is in the Voronoi gap between some video and any other sequence. If there is no vector x in the video such that this x is dissimilar to the frame chosen for this seed vector but the difference in distances is smaller than 2 epsilon, then we can say the seed is not in a Voronoi gap. So if, for the point g of x of s assigned to our seed s, we do not find any other x such that the difference in distances is smaller than 2 epsilon, then we know s is not in a gap. It cannot be; it is far away from the border. Okay? This is what we do. So we define a ranking function for the signature vector, ordering the possible seeds by this minimum of distance differences. Yes? We just look at where we have the minimum difference in distances over frames of x that do not belong to the same cluster. And the further away a seed is from the borders of the Voronoi cells, the higher the value of this ranking function, because the minimum obviously gets higher. Higher values of q are marked bright, lower values of q are marked black, if we plot this ranking function, and we can see that the Voronoi cells are basically delimited by these three lines over here. So this ranking function really indicates, with a little error obviously, whether something is in a gap or not. And that is good, because the gaps that exist are given by the y frames that still belong to the same cluster but are slightly off, and since we do not want to take seeds from this area, these are exactly the black regions of our ranking function. Okay? So by excluding all seeds that have a low ranking value, we immediately get seed points that are far away from the possible Voronoi gaps. Okay? This is the basic idea behind it. So safe seeds have q values larger than 2 epsilon; this is what we just proved. And of course this is not a necessary condition: a seed with a q value within this two-epsilon range does not have to be in a Voronoi gap, but it could be, so why should we risk it? That's the basic idea: it's not necessary, but it's sufficient. In general, many seeds with q values lower than 2 epsilon are not in a Voronoi gap, but we just don't care; we generate plenty of seeds and choose only the ones with the best q values. That's what we will do, and so we will definitely avoid the Voronoi gaps. That's the trick. So we generate more seeds than the actual size of the signature: we generate a set of m prime seed vectors, compute the ranking function for all of them, order the seeds by decreasing q values, and take only the first m. Thus we avoid the Voronoi gaps and everything is okay.
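A sketch of this seed-filtering step. The exact form of the ranking function q is my reading of the description above (the smallest difference in distance to the seed between the assigned frame and any frame of the same video that is more than epsilon away from it); treat the details as an assumption rather than the paper's exact definition.

```python
import numpy as np

def ranking_q(seed: np.ndarray, frames: np.ndarray, eps: float) -> float:
    """How far the seed is from a Voronoi border separating different
    clusters of this video: min over frames x' dissimilar to the assigned
    frame g of |d(x', seed) - d(g, seed)|. Larger values are safer."""
    d = np.linalg.norm(frames - seed, axis=1)
    g = int(np.argmin(d))                              # assigned frame
    far = np.linalg.norm(frames - frames[g], axis=1) > eps
    if not np.any(far):
        return np.inf                                  # only one cluster: no border
    return float(np.min(np.abs(d[far] - d[g])))

def safe_seeds(candidates: np.ndarray, frames: np.ndarray,
               eps: float, m: int) -> np.ndarray:
    """Generate more candidate seeds than needed, rank them, and keep the m
    with the highest q values (q > 2*eps is the provably safe zone)."""
    q = np.array([ranking_q(s, frames, eps) for s in candidates])
    return candidates[np.argsort(-q)[:m]]
```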
And analogously to the basic video signature similarity, we can now define a ranked video similarity where we take the ones that are highly ranked in q into account more than the other ones. So basically the symmetrical video signature similarity with the ranking function between two videos is defined by the seeds with the highest ranking in y and s. So I take half of m that are highest with respect to the x video, and I take half of them that are highest with respect to the y video. So I do it for both videos, choose the m of each part, and well, then I just look at the different rankings of the signature frame and order the rankings with respect to the q value. I only take the highest q values into account, that means the video similarity used 50% of the frames with the highest ranking with respect to the x video, and 50% of the highest ranking with respect to the y video. And again, I just end up with an m dimensional signature. Cool? That's kind of good. Of course, one can also use an asymmetric video signature similarity just choosing the m highest ranking with respect to one video or a total order on both videos, and then use them, doesn't matter. But then it would be asymmetrical. Good. The asymmetric form leads to some distortion, of course, in the estimate. If a video is a partial sequence of another video, the asymmetric video's signature similarity is significantly higher when calculated with the shorter video than with the longer video. Because I have less degrees of freedom for the shorter video, the ranking will deteriorate more quickly, and then I have basically gained nothing. I still have some in the Voronoi gap. That's the idea here. Good. If you look at the retrieval effectivity, taking the basic signature and the ranked signature, then we see that precision recall for the ranked signature is actually higher than for the basic signature, so we get an improvement. If we use a manual evaluation against the ground truth for the precision recall analysis, this also holds for different sizes of the video signature. So the ranked video similarity is definitely always better than the basic video similarity. And this is what we lose basically by the Voronoi gaps. Can be considerable, that's what I said before. Good. This lecture, we're kind of like considering video similarity. So good idea for YouTube or something like that to find similar videos and maybe just store one or store the ones with the highest quality or the ones that are best for the users or best appreciate by the user or whatever. And I was showing you a naive approach of doing it, just computing the clusters and then looking how many shared clusters do we have, but that doesn't scale obviously. And we cannot randomize it, we cannot answer the naive video similarity by putting up random samples. So what we did is we get to the Voronoi video similarity where we just said, well, there's basically a tessellation of the different frame clusters and the intersection of the tessellations for the clusters containing frames of both videos, that is the interesting point. This is what we want to have. And then we were kind of defining the similarities along the lines of, well, if we randomly distribute the seeds over the complete feature space, we might end up with many features or with many seeds in a single cell that is just very big. 
That is something we don't want; what we rather want is to take the density as a function of the Voronoi cells themselves, not of the size or volume of the Voronoi cells, so that every Voronoi cell has the same probability of being hit by a seed vector, regardless of its size. And then we still dealt with the Voronoi gaps, which are caused by slight shifts between the clusters and which can have a considerable effect. And this is what we did today. Questions? No? Everybody's well informed about video similarity. Good. Then next lecture we will talk about video abstraction. We will talk about how to present videos or the retrieval results of video search to the user, because that is obviously not a simple thing. What do you do? Do you use video skimming, highlighting? We will discuss a lot of possibilities to show the results to the users. Thanks for the attention.
In this course, we examine the aspects of building multimedia database systems and give an insight into the techniques used. The course deals with content-specific retrieval of multimedia data. The basic issue is the efficient storage and subsequent retrieval of multimedia documents. The general structure of the course is: - Basic characteristics of multimedia databases - Evaluation of retrieval effectiveness, precision-recall analysis - Semantic content of image-content search - Image representation, low-level and high-level features - Texture features, random-field models - Audio formats, sampling, metadata - Thematic search within music tracks - Query formulation in music databases - Media representation for video - Frame / shot detection, event detection - Video segmentation and video summarization - Video indexing, MPEG-7 - Extraction of low- and high-level features - Integration of features and efficient similarity comparison - Indexing via inverted file index, GEMINI indexing, R*-trees
10.5446/344 (DOI)
Okay, so we should probably begin. Welcome to the last lecture of multimedia databases. I'm happy that you've managed to stay with us until the end. In the previous lecture we discussed video retrieval, we continued with video retrieval and focused on the presentation of the results. We presented some basic approaches, like, for example, just selecting the keyframes of a video and going through the video based on the keyframes. But such an approach loses a lot of semantics: if you remember the Flintstones example, it was pretty fast, nobody really understood what was going on. Another solution is to introduce some more intelligence into the process, and video skimming and video highlighting do that. The idea is to consider the quality of the shots, what those shots are about. So if I'm interested in extracting the most important parts of a movie and it's an action movie, I'm going, for example, for the shots that have most of the action, which may be signaled by a powerful sound, like an explosion, and I'm going to keep that shot entirely in my highlight. If it's a romantic movie, I'm going to focus on dialogues or things like that. I have to consider all these factors from movie making when performing video abstraction with highlighting. And we could observe some pretty interesting results; they are quite comparable with what you get from professional video trailers. Well, if you do it professionally, you can also edit it and add other sound than what you get from the video. But still, video abstraction does work, and video highlighting is really interesting to see. Today we change the topic. We've discussed images, we've discussed audio, we've discussed video, and we've talked about how to present the results. The question now is: how can we perform the search efficiently? And the answer is to use indexes. As in any database, multimedia databases also have indexes, and the most important index structures for multimedia databases are the R-trees and the M-trees. This is exactly what we're going to focus on today. Okay, so to recapitulate, we have this kind of data in our multimedia databases, and we've discussed how to describe these multimedia objects. If you remember, for images we said we have different kinds of features: low-level features, high-level features. We have, for example, the Fourier transformation, which is a high-level feature, but we can describe it with feature vectors, namely the coefficients of the Fourier transformation. And that is not just one value: you could reduce it to a single Fourier coefficient, but the idea is to get more precision, so you might take more of them, like 5, 10, 15. This results in a multi-dimensional structure, so we have to consider multi-dimensional indexes. On the other side, we don't only have real-valued feature vectors. We have also spoken about skeletons, if you remember, or chain codes and editing distances. Such features can't be represented in a Euclidean space, so we have to do something special there. Okay. The easiest, naive approach would be to perform a sequential search: without indexes, you start with the first object, compare the similarity between the query object and the first object, then go to the second, then the third, and so on, and at the end you deliver the most similar of them.
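For contrast, here is the naive baseline just described: a full linear scan over the collection, sketched as a top-k search with a pluggable distance function (the names are mine).

```python
import heapq
from typing import Callable, Iterable, List, Tuple, TypeVar

T = TypeVar("T")

def sequential_top_k(query: T, collection: Iterable[Tuple[str, T]],
                     distance: Callable[[T, T], float],
                     k: int = 10) -> List[Tuple[float, str]]:
    """Compare the query against every object in the database and return
    the k most similar ones. The cost is linear in the collection size,
    which is exactly what an index is supposed to avoid."""
    return heapq.nsmallest(k, ((distance(query, obj), oid)
                               for oid, obj in collection))
```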
That's very inefficient, because we go through the whole database, and as I've said at the beginning of the lecture, a multimedia database contains a lot of objects, regardless of whether they're sounds, videos, or images. So the big question we want to answer today is how we can speed that up, which is of course the topic of the lecture: we're going to use indexes. The idea behind indexes is that we need to achieve an efficient management of this multi-dimensional information. The way to do this is by pre-structuring the data, so pre-calculating what I will need to calculate for the objects in the database, and my goal is that the structure I'm preparing is optimized for the search functionality. At the same time, a decisive factor is the comparison algorithm for the similarity: if I want to establish how similar the query and an object are, for example through the distance between them or the editing distance between two images, this comparison has to be efficient so that it runs fast. What I'm going to achieve with indexes is to transform a set of objects in my database into a list semantics: a ranked list with the most similar objects to my query at the top and the least similar at the bottom. This way I can express the degree to which an object in the database satisfies the query, or is most similar to it. Let's see some formal requirements for multi-dimensional indexes. First, of course, I want to make sure that my indexes are correct, which means that they point to the data they were meant to point to. Second, I want to make sure that all the objects in my database are indexed, so that it cannot happen that I check the query against my index while some objects slip through my fingers because they're not indexed. So I need to comply with correctness and completeness. Another important task is scalability. I've said that we have to deal with huge databases; this is the point of indexes. If I insert more images, the index should still perform very well, or at least much better than a sequential search. The second aspect is scalability with the dimensions. Coming back to the image example: we have a Fourier transformation, and our feature vectors are the coefficients of that transformation. If I want more precision, I'll choose 15 or 20 of the first coefficients. If I then want to index all the objects in my database, each with 20 such features, I need an index that is capable of handling 20 dimensions per object. In practice, multidimensional indexes can actually go up to 10, 15, maybe 20 dimensions, depending on the distribution of the data. But this requirement is very important in multimedia databases, because the more precision you want in describing your data, the more dimensions you'll have in your space, and the index has to be able to cope with that. Another important factor: we have to support objects which are not real-valued vectors, remember the chain codes or the editing distance. And of course the efficiency requirement: it has to be sublinear, because if it's linear, I can just go through the objects in the database sequentially and I don't need an index at all. Indexes cost, too: creating the index takes time.
Updating the database, inserting new objects or updating an object, also means updating the index, and that also costs time. So if I don't achieve a search efficiency which is sublinear, then I have actually only introduced costs; the index has to be search efficient. Then I have different types of queries. I may have, for example, an exact search, where I say I'm interested in exactly this image, the point search; I can do this also through metadata or whatever. Or a range search. These are typical queries in databases. Then the k-nearest-neighbor search, so like a top-k: find the 10 most similar images to this one. And then queries that are usually answered on an approximation basis: you remember the possibility of drawing something and then waiting for the database to search by the similarity between what I've drawn and what's in the database, or what Midomi does for sound, I'm singing or humming something and it searches. This works with an error threshold: if the similarity is smaller than the threshold, I don't want that result, but objects with a higher similarity should belong to the result. What I'm also interested in are efficient update operations; as I've said, updates cost, because you also have to update the index structure. Modifying something means modifying the index structure as well, because otherwise it wouldn't be complete or correct. So, coming back to what we want to achieve: supporting different distance functions, the editing distance comes to mind here. And of course, we don't really want the index to be bigger than our database. If the index structure is bigger than the database, then it doesn't make much sense, because I'm going to search through the same amount of data or more, so I haven't really achieved much. We'll speak about tree structures, and one of the well-known tree structures is the B-tree. Have you ever heard about B-trees? Yes, no? No. B-trees are successfully used in classical database management systems with just one dimension. The idea is that such structures reduce the search to a logarithmic cost. The point is, when you have a database with names, incomes and things like that and you want to index those persons by name, then with B-trees you build an index structure that looks like a tree, where you have a root node. This root node, like any other internal node, serves navigational purposes. For example, here I could have a 2, or a B for the first letter of the name; here I would have a 6, or a D, just to give an example. On the left, a pointer to the first child, which contains only values that are smaller than what we have here: everything here is smaller than 2, or smaller than B, so it would only be A. Here I would have the values that lie between 2 and 6, or between B and D, and here a pointer to an internal, navigational node with values which are bigger than that. And the great idea is that the size of each of these nodes fits the size of a block on secondary storage. So if you have a hard drive with a certain block size, you know, when you install Windows or something like that, somewhere it will tell you that the blocks are of this size, with different partitioning systems and so on; the size of such a node is exactly the size of a block that can be read from the hard drive.
And this is how you win a lot, because when searching, the hard drive reads a whole block at a time, and what you want is that with each read you get as much of the information you need as possible. Because if a block contained just one piece of information that you need, you would have read the rest without needing it; you want to minimize that overhead. So all of these are internal nodes, I will circle them in red, and they help me navigate. At the end come the leaf nodes, which hold the actual data, that is, the pointers to the actual data in the database, and as I've said, they are exactly the size of a block. And the great part is: if I'm going to search, for example, for 3, I go to the root node and search for 3 in this node, so I'm just reading a block, I've performed one read. 3 is bigger than 2, I go further, it is smaller than 6, okay, I don't need to go any further, I follow the pointer between them and reach this node here. I don't need to go here or here, so I've already performed pruning. Here I was searching for 3, I find 3, I follow the pointer and I've reached this block here. So with three blocks, three reads, I reached my information. This is the main idea. In multimedia databases, however, we have multi-dimensional data, and B-trees support only one dimension. So the idea would be to get to structures that are able to offer the same functionality as we've seen before, but for multi-dimensional data. Okay, so let's discuss a bit what such a tree could look like. I need some kind of regions, since I'm in a multi-dimensional space, so we're going to talk about geometrical regions which comprise data points. These data points, since they are in the same region, have to be somehow similar to each other, so they build clusters. Actually, when performing such a search, the clusters are what you are really looking for: if you have a query point and you want objects similar to it, through a range search or, as I've said, an approximate search, drawing something and finding the images similar to it, then as long as you've hit a cluster, you've won, because you return the objects from that cluster and say these are the most similar objects. And going further with this cluster idea, clusters may have a hierarchical structure. So you can build a geometry on the idea of clustering and split it into smaller hierarchical structures, starting, for example, from a rectangle and splitting it into smaller rectangles. Have we spoken about clustering in this lecture? We've spoken about it, right? Hierarchical clustering, exactly the same idea. Okay, now let's see some different criteria for tree structures over multi-dimensional information; a small sketch of the block-wise B-tree lookup from before follows below. Based on the cluster construction, they might completely fragment the space or group the data only locally: you have a whole space, a unit volume of one, and you either have a complete fragmentation of it or just the localities where the data is grouped together. The clusters may overlap, so you may have some kind of soft clustering, or you may have disjoint clustering, where each data point may belong to just one cluster. Or you may have balanced or unbalanced situations.
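Looking back at the B-tree lookup walked through above (searching for 3 with three block reads), here is a minimal sketch of that descent. The node layout and names are mine, a real B-tree additionally needs balancing and node-splitting logic, and the exact boundary convention (strictly smaller vs. smaller-or-equal) varies between implementations.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    keys: List[int]                                        # sorted separator keys; one node ~ one disk block
    children: List["Node"] = field(default_factory=list)   # empty for leaf nodes
    # in a leaf, the keys would carry pointers to the actual records

def search(node: Node, key: int, reads: int = 0) -> Optional[int]:
    """Descend from the root, reading one 'block' (node) per level."""
    reads += 1
    if not node.children:                  # leaf level: the data pages
        print(f"{reads} block reads")
        return key if key in node.keys else None
    i = 0                                  # find the child whose range contains the key
    while i < len(node.keys) and key >= node.keys[i]:
        i += 1
    return search(node.children[i], key, reads)

# e.g. a root with separators [2, 6]: keys < 2 go left, 2..5 middle, >= 6 right
```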
Balanced, where all the clusters are roughly of the same size, with no disproportionate situation, or unbalanced, where you maybe have three data points in one cluster and a hundred in another. All of this differentiates what kind of trees we can build. Another important question is where the objects are stored. One can store them, like in the B-trees, in the leaves, or there are variations where you store them in the internal nodes themselves, so you don't need to go through the whole path down to the leaf nodes, but can say: I've reached a navigational node where my information already is — like the 3 I was searching for previously — so just point to the data. And we'll also see that geometry is important: what kind of geometry are we going to use to describe these regions? One solution would be to use spheres — hyperspheres for more dimensions — another to use cubes and hypercubes. One of the most successful indexes in a multi-dimensional context is the R-tree. It was invented by Guttman and published in 1984, and it's actually nothing more than an extension of the classical B-tree. It's used in data warehouses or geo-information systems, where you have more dimensions — in geo-information systems you have maps, so you have to describe several dimensions. It usually works up to about 10 dimensions without big performance issues, but as soon as you go up to 20 or 100 dimensions, R-trees are very slow and it doesn't make sense to use them anymore. There are, however, some variations, like the R+ trees, which we're going to discuss a bit today, and the R* and X-trees, which are able to go up to about 20 dimensions for uniformly distributed data. The point of the uniformly distributed data is to allow comparison between these kinds of index structures: if you develop a new structure and want to say "my structure is great, it outperforms everything else", then you have to test it on the same distribution of data as the other index structures, and that's usually uniformly distributed data. But this is usually the worst-case scenario, because in multimedia databases, for example, you have data clustering together: a cluster here of images which are kind of similar, some clusters there and there. They are not uniformly distributed. So in real life, with real data, you can actually get to more dimensions than those 20 with these variations. In any case, the performance of the indexes is quite influenced by the distribution of the data in the database. Okay, so the structure of the index is dynamic. It of course allows the usual operations like inserting into the database, updating objects and deleting — a classical index structure. It has data pages, which are the leaf nodes storing the pointers to the data — so the data is pointed to from the leaves, to say it like that. And it has internal nodes for navigation purposes, the so-called directory pages. They only store pointers to their sons — that's how I get down to the leaf nodes where my data is stored. As geometry, the R-trees use minimum bounding rectangles, so a rectangular geometry. Okay, let's see an example so you can get a feeling for what we're talking about. Let's say we have here a two-dimensional Euclidean space, and we have some objects around here.
For example, you see that I'm drawing the points in the corners of these boxes — this is why they are minimum bounding rectangles. What this actually means is that I have one point here, one point here and one point here, and this rectangle R5 is the smallest rectangle that contains these three points. If I had these three points and drew the box like this instead, that wouldn't be right — that's not a minimum bounding rectangle. So you get the idea. Okay, so we have the root, and the root is our entire space where the objects lie. Then we have the first three sons, R1, R2 and R3, each of them hierarchically including some other clusters. R1 includes R4, R5 and R6 — they are clustered together hierarchically. R2 includes R7, R8 and R9, this part here. And R3 includes R10 and R11. So here is the tree structure, and here the geometrical representation. Until now nothing complicated, everything is pretty easy. With this kind of geometrical representation we achieve local grouping, so clustering — it's hierarchical clustering. I build my hierarchies starting from the whole space, going to the left son, which has its minimum bounding rectangles close together, and going deeper and deeper. But as you've also seen, we allow overlapping — some of the boxes overlapped. Of course, this might create a problem when performing a search, because due to overlapping we might need to go through different branches. We'll see that soon. What's also nice: this R-tree structure is height balanced, so we won't see any degenerated trees starting from the root and having only left sons. That would be very bad for search performance, because in that case you would actually perform a linear scan through the database. And the objects are stored only in the leaves. So, in terms of the criteria that define trees from before: R-trees allow overlapping, they are height balanced, the objects are stored in the leaves, and minimum bounding rectangles are the geometry of these trees. As for the rules: the root has to have at least two children. So you won't have a root here with just one child — that's not okay; the root needs at least two children. Then each internal node has to have between a minimum number and a maximum number of children. This condition exists so that we won't have sparse navigational nodes, because with sparse navigational nodes we waste space. A quite good heuristic is that the minimum occupancy of such an internal node is half of the maximum. So if your block size fits, for example, ten such entries in an internal node, then the lower bound would be something like five. Usually, when creating such an index structure, you say: I would like the minimum degree of occupancy for an internal node to be 60% or 70% or 50%, or there is no minimum at all — but a good heuristic is usually a 50% occupancy degree. Now, for each entry in an internal node we have a child pointer and a rectangle. This I here describes the geometry in which all the children lie. For example, if I have something like that, then I is described by these four points — actually only two of them are enough, the extreme points.
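Since an MBR is fully described by its two extreme corners, here is a minimal sketch of such a rectangle with the operations the R-tree will need later on: an intersection test, the volume, and how much the box would have to grow to take in a new point. This is my own illustration, not the lecture's code, and it works for any number of dimensions.

```python
# Minimal MBR sketch (my own illustration): a rectangle is stored as its two
# extreme corners, which works for any number of dimensions.

class MBR:
    def __init__(self, low, high):
        self.low = list(low)        # minimum coordinate per dimension
        self.high = list(high)      # maximum coordinate per dimension

    def volume(self):
        v = 1.0
        for lo, hi in zip(self.low, self.high):
            v *= (hi - lo)
        return v

    def intersects(self, other):
        # true if the boxes overlap in every dimension
        return all(lo <= o_hi and o_lo <= hi
                   for lo, hi, o_lo, o_hi
                   in zip(self.low, self.high, other.low, other.high))

    def enlarged(self, point):
        # the (possibly bigger) MBR needed to also cover the given point
        return MBR([min(lo, p) for lo, p in zip(self.low, point)],
                   [max(hi, p) for hi, p in zip(self.high, point)])

    def enlargement(self, point):
        # how much extra volume including the point would cost -> dead space risk
        return self.enlarged(point).volume() - self.volume()

r5 = MBR([2, 3], [5, 7])
print(r5.intersects(MBR([4, 6], [9, 9])))   # True, the boxes overlap
print(r5.enlargement([8, 4]))               # volume growth if we pull in (8, 4)
```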
So just by choosing these two extreme points you can describe the complete minimum bounding rectangle. That's the I, and the child pointers point to the minimum bounding rectangles inside it. That's the structure. For a leaf, each entry additionally needs the tuple ID, so the identification of the object in the database, the pointer to the object — because at the end I don't have navigational nodes anymore, I have nodes with pointers to the data. If you're going to implement something like this, that's the basic difference between a leaf node and an internal node: at the end you want to navigate towards the data. Another important factor is that all the leaves in the tree are on the same level. As I've said, you don't have any degeneration; whichever way you go down the tree, you go through the same number of steps — a difference of one at most. And again, the leaf nodes have the same condition of being filled with between m and M objects, the occupancy degree. The essential operations you know from classical databases: search, which is the fundamental operation for indexes, inserts, updates, deletes, and something new — splitting. The splitting operation comes from these two boundaries we've spoken about, the minimum and maximum occupancy of a node, the small m and the big M. When I insert an object into a node that already has the maximum filling degree, it will spill over: I would have M+1 objects, which is not allowed in my index structure, so I need an operation to deal with that — that's splitting. What's not in this list, but should be, is the corresponding case for deleting: if I already have the minimum number of objects and I delete one — say my minimum is five, I have a node with five objects and delete one, I get four — what do I do then? I need to condense nodes or redistribute the data points so that I don't have any internal or leaf node with only four objects. So condensation has to take place. Okay, let's start with the search. The search is performed recursively from the root to the leaves. For the path to search, you have different possibilities; usually a path is selected randomly, and then you check: is this path okay? If I go through a path and don't find what I'm searching for, I go to the next subtree and check the next path to be traversed. As I've said, the path selection is arbitrary, to avoid special cases where, for example, what I'm searching for is always on the last branch on the right while I always start from the left, which means I always end up in the worst case. Random is better than a fixed order. In B-trees, I said, the search cost is logarithmic. In the case of R-trees, there's no guarantee for the performance. The problem is that we allow overlaps of the minimum bounding rectangles, and if you have overlaps, it might be the case that all the paths need to be traversed. Imagine something like this: this is our space, this is the root, and you have the first child here and the second child here.
Because you had one data point here, one here, and here, and here — let's say the filling degree is 3 — you got something like this, and you have an overlap because of these two points here; I'll try to draw it in black. If your search is somewhere in that middle area, then your tree looks like this: the root with a first child and a second child, R1 and R2. But searching there in the middle, you can't decide whether you should go into R1 or into R2, because the search query lies within the coordinates of R1 as well as within the coordinates of R2. I'm checking against I1 and I2 — I1 is described by these two coordinates and I2 by these two — and checking against both of them returns true, so I have to go here and here as well. Is it clear what the problem is? This is why I can't guarantee anything about the performance. Still, the idea with R-trees is to build these structures so that I can do as much pruning as possible — so somehow avoid overlaps. We'll see that soon. Okay, so how is the search actually done? The query is described through a search rectangle, let it be S, and every minimum bounding rectangle, starting from the root, that intersects with my search rectangle has to be traversed. I'm checking those I's I was telling you about, the points describing the minimum bounding rectangle of each node. For internal nodes, I check the intersection with S — a trivial mathematical operation, regardless of the number of dimensions — and if there is an intersection, I go deeper and check the children; if there are several intersections, I have to follow all of them. In the leaf nodes, again, I determine the entries that intersect with S, and for those I return the tuple IDs, the IDs of the objects in the database, as the result set: for my search, these are the objects that intersect with the search query. Okay, let's do an example of a search on our previous tree. The search rectangle is here. We start randomly with, let's say, R1. It's described through these points here, and we check it against the coordinates of the search query — and we see there's no intersection. We read R1, but not its children; we don't need to go deeper, because there's no intersection. That's pruning: I'm not going to read R5, R4 or R6 and compare against them, because it's hierarchical — if my search query doesn't intersect with R1, it can't intersect with anything inside it. I choose randomly again and go to R2, and I see that there is an intersection between the coordinates of R2 and the search query. This means I have to go inside, go deeper. I do the same for R7, R8 and R9: no intersection here, no intersection here, but there is an intersection with R8. Let's suppose R8 is a leaf node, so it has pointers towards the data. I check all these objects and return the ones that intersect with the search query. And the advantage is: of the twelve minimum bounding rectangles in this structure, I've checked only seven — and if my search had been completely outside, I could have pruned everything right at the root. That's the idea.
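Here is a compact sketch of that recursive range search, reusing the MBR class from the earlier example. It is my own illustration under the assumption that leaf entries carry (MBR, tuple ID) pairs and internal entries carry (MBR, child node) pairs.

```python
# A sketch of the recursive R-tree range search described above, reusing the
# MBR class from the earlier example. Leaves are assumed to hold
# (mbr, tuple_id) entries, internal nodes (mbr, child) entries.

class RTreeNode:
    def __init__(self, entries, is_leaf):
        self.entries = entries      # list of (MBR, child-node or tuple-id)
        self.is_leaf = is_leaf

def range_search(node, query_mbr, result, reads):
    reads[0] += 1                                   # one block read per visited node
    for mbr, payload in node.entries:
        if not mbr.intersects(query_mbr):
            continue                                # pruning: skip this whole subtree
        if node.is_leaf:
            result.append(payload)                  # tuple id goes into the result set
        else:
            range_search(payload, query_mbr, result, reads)
    return result

# usage: result, reads = [], [0]
#        range_search(root, MBR([x1, y1], [x2, y2]), result, reads)
```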
So in total I check the root, I check R1, R2, R3, and then R7, R8 and R9 — and that's it. That's a trivial example, but you should imagine that these trees are quite bushy, so they are quite large and you can prune a lot. That's the great idea about R-trees. Okay, so let's talk a bit about the second operation, the inserts. Inserts happen in the leaves. You don't start from the root as we've done with the search; you go to the leaves and think about which leaf would be the best one to insert the new entry into, and the best leaf has to respect some spatial criteria. The idea is: if my insert point falls into a leaf, and that leaf still has some space left, so it hasn't reached its maximum, I can leave it there — I just put it inside the box and it's fine. However, if the point is only near a box, a minimum bounding rectangle, then I need to enlarge that minimum bounding rectangle, and if there are several candidates, I have to choose one. The idea is to choose the one that grows the least, so the one that makes the minimum effort in including the new point. There are a lot of heuristics and discussions about this: if you have, for example, two minimum bounding rectangles at the same distance that would grow by the same volume, then you choose the one which is smaller. Because the goal is to have a minimum bounding rectangle which describes your data as well as possible, with as little dead space — space where there is nothing — as possible. More dead space means that when you search, with high probability your search box lands somewhere in the dead space, and you'll search like crazy through the tree only to reach the conclusion that there was nothing there — but you still had to go through the tree. So that's the idea: make the least effort and produce as little dead space as possible. Okay, let's take the happy case, where we landed in a leaf node which has enough space. If there's enough space, I just modify it, and the number of entries stays below the maximum. The big problem is if it's not enough, if the node overflows. In this case we need to divide the node and say: you're too big for one single node, you'll have to be split in two. For the split we follow the same goals: avoid overlaps as much as possible and avoid dead space as much as possible, so that later on, when performing a search, I don't land in space which is not actually used. Let me give you an example. I want to insert this point here, and I have one minimum bounding rectangle there and maybe another one here. If I grow this one to include that point, everything here is dead space; if I grow the other one, everything there is dead space. So there's a trade-off: where is the smallest dead space? Am I going to overlap with something? What volume am I going to end up with? All these heuristics have to be taken into consideration when performing insertion, and also when performing splits. Yes? The problem with dead space shows up when you perform a search and your search query lands exactly in the dead space: you still have to read this node here, although you're not going to reach any object, because it's empty space. Let's see if I can draw a better example.
So I'm starting with the root — that's the root here — and I have two nodes. If my search query is here, the tree looks like that: that's the root, that's R1, that's R2. The search query first gets intersected with the root, and the root says yes, we have an intersection. Then I intersect it with R1, and R1 says no, and R2 says no as well. But I've still done these three reads, because I have a lot of dead space — this part here is empty, there are no objects there. So this is the idea: I need to avoid this kind of dead space. And also when I'm doing splits, I have to avoid something like that. If I'm going to grow here, I need some way to grow so that the dead space is minimal, because if a search query lands in that area, I still have to read stuff from the hard drive, but I won't reach any object — there's nothing there. We'll see some heuristics that allow us to do exactly that. Okay, so let's go to an example. We want to insert a point. After transforming it into a feature vector, the feature vector looks something like this — it's our point here. There are different possibilities: one would be to enlarge R7, one would be to enlarge R9. If I enlarge R7, it will need more space, so it grows more than R9 would — but it won't overlap at all with R9. If I enlarge R9, I get this area which overlaps. Such an overlap, and a search query over that area, would force me to read both R7 and R9. So it's a trade-off between these two. The question is: what kind of database do I have, what do I want to achieve? Do I want to avoid overlaps or do I want to avoid dead space? Because growing R7, I get some empty space there. This is a database administrator decision — you can't win them both. Okay, let's take this example then. Say I had here another rectangle that includes this point, and now I'm performing a search, and the search is here. Do I need to read R7 to see that I'm not actually going to reach any leaf node? No. Now take the other case — my search query is in the same place. Do I need to read R7? That's the difference. When I have to read R7 and I find a match with R7, then I also need to compare with all of its children, because the question is: I have an intersection with R7, but with which children? So I check against this one, this one and this one, and find out that none of them intersects — I'm in the dead space of R7. So in the first case I stop early; in the second case I do three more intersection checks just to find out that my search query returns no result. Here I would have stopped earlier. That's the point — I just wanted to show you what it would mean if R7 stayed in its position, and what it would mean to include more dead space, because we'll come to splits and you'll see what a bad split and a good split look like. The idea with dead space is: if you create it, it leads to many more comparisons than you actually need. That's the idea.
So you end up with more comparisons — and consider that your similarity function may be something like what we've done with images. Then you have to compute this similarity between the image and all the children of some bounding rectangle, without knowing whether they actually intersect or not. Here, in this case, they don't, but in the case of a leaf node you'd have to do all that. Okay, so let's look at those heuristics, because as I've said, in the case of an insertion there are heuristics that tell me which operation I should do — should I enlarge this one, or what should I do? An object is always inserted in the node where it produces the smallest increase in volume; this is one of the heuristics which has been proven to work well in practice. The happy possibility is that the point you need to insert already falls into an MBR which needs no enlargement and no split, so the structure stays the same. If there are several nodes which would produce the same increase in volume, we choose the one that already has the smallest volume, just to produce balance. We once said about clustering that we want balanced hierarchical clusters, not one with a hundred points and one with one; so we grow the small one. These are the typical heuristics we take into account — a small sketch of this choice follows below. Okay, now to the nasty possibility of performing splits. Say I insert and get an overflow: take the same point, and let's say I extend R7 because I want to avoid overlaps. Then I have four entries in R7, because I've inserted the new point and there were three already, so I have to do a split. The split will happen somehow — we'll discuss how — and then R2 has four children: the left part of R7 and the right part of R7, each of them now with two entries (because we had three entries plus one, so four), plus the unchanged R8 and R9. But since our maximum limit is three, this split also translates up to R2, because R2 now has four children, and this has to go upward — it's kind of recursive, because afterwards I'd obtain R2.1 and R2.2, and this will split the root. So once we've split a node — how exactly we choose where and how to split is still magic for now — the split can propagate towards the root. This is why insertion with overflow may be dangerous and costly. Is this clear? Okay. Now let's talk about the splitting itself. If we have reached the M+1 case, these M+1 entries have to be split between two nodes. The goal, as I've said, comes from the heuristics: it should rarely be necessary, when I perform a search, to traverse both resulting nodes. So if I have the big node and I cut it in two, and a search like that one makes me traverse them both, that's bad. What I should try when splitting is to keep the two parts as separate as possible, so that the probability of a search query forcing me to go into both of the resulting nodes is as small as possible — keep them away from each other. And of course a good choice is to use small minimum bounding rectangles, which also leads to small overlap with other minimum bounding rectangles.
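Here is the small sketch of the insertion heuristic mentioned above — pick the child whose MBR needs the least enlargement, and break ties by preferring the smaller box. It is my own illustration, reusing the MBR and RTreeNode sketches from before; after this we go back to the splits.

```python
# Sketch of the choose-subtree heuristic for inserts (my own illustration,
# reusing the MBR class from above): take the child whose MBR grows the least,
# and on a tie prefer the child that is already the smallest.

def choose_child(node, point):
    best = None
    for mbr, child in node.entries:
        growth = mbr.enlargement(point)         # extra volume needed for this point
        key = (growth, mbr.volume())            # tie-break on current volume
        if best is None or key < best[0]:
            best = (key, mbr, child)
    return best[1], best[2]                     # the chosen MBR and its subtree

def choose_leaf(node, point):
    # descend until a leaf is reached; that leaf receives the new entry
    while not node.is_leaf:
        _, node = choose_child(node, point)
    return node
```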
So that you can get an image of what I'm talking about with good and bad splits, let me draw these four rectangles for you. I've said this stuff propagates — I've inserted somewhere in a leaf and it propagated upwards into a node which previously looked like that. One possibility would be to cut them in half, as I've said before. If my search is somewhere here, I have to go through a lot of stuff, and at the same time I have a lot of dead space, here and here. If my search falls in here, like I've drawn now, then I also have to check this one and this one, which I could avoid if I split it like this instead: searching here, I just go through the root, the root says yes, it's in me, I check against that one and that one, and that's it — I'm not going to additionally check this one and this one. On the other hand, you'll say this second split has overlapping. Yes, it has overlap, but it's still much better: look, searching somewhere here — oops, that should have been red — produces about the same result as before, because I still need to check both of them, but searching here produces a better result, because I only need to check this one, whereas with the other split I'd need to check both. Can you follow that? So that's the idea, and that's also the idea with the dead space: I try to reduce the size of the overlap, to do as few read-and-check operations as possible. Is it better now? Okay. Let's see how it's actually done, because these are heuristics, and they can be realized with some approaches we're going to discuss right now. Deciding how exactly to perform the splits, as you've seen, is not that easy. The objects of the original bounding rectangle can be divided in a number of ways, and the goal, again, is that the volume of the resulting MBRs should remain as small as possible — we've seen what happened in the left split with a lot of volume, and on the right side with a bit less volume. The naive approach would be to check all possible splits, but that would take too long; imagine you have a lot of objects — checking all the splits is not a viable solution. In the implementations of R-trees, there are two classical possibilities: one with a quadratic cost and one with a linear cost, and tests have shown that the linear one gets quite close to the quality of the quadratic one. The idea for the quadratic cost is, for the node being split — the minimum bounding rectangle that has M+1 entries inside — to choose two seed objects. For that, I compare every pair of objects and the minimum bounding rectangle that would be needed to include both of them. For example, I compare number 1 with number 2: what minimum bounding rectangle would I need to include them both? Probably something like this. Then I compare 1 with 3, something like this, and 1 with 4; then 2 with 3, 2 with 4, 3 with 4 — each pair. And the idea is to choose as seeds the pair of points that would produce the largest MBR. Here I would probably choose something like 2 and 4, or 1 and 4, it's the same — they produce the largest one. I say: you two have to be kept apart, because you're no good choice for being in the same MBR. They will be the seeds that start the two new MBRs.
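A sketch of that quadratic seed selection, following the description above: look at every pair of entries and pick as seeds the two whose combined MBR would be the largest. (Guttman's original formulation maximizes the wasted area instead, but the intent — keep the two worst-matching entries apart — is the same.) Again my own illustration, reusing the MBR class from above.

```python
# Quadratic seed selection (my own illustration, reusing the MBR class):
# examine every pair and take the pair whose combined MBR is largest.

def combine(a, b):
    return MBR([min(x, y) for x, y in zip(a.low, b.low)],
               [max(x, y) for x, y in zip(a.high, b.high)])

def pick_seeds_quadratic(mbrs):
    best_pair, best_volume = None, -1.0
    for i in range(len(mbrs)):
        for j in range(i + 1, len(mbrs)):
            vol = combine(mbrs[i], mbrs[j]).volume()
            if vol > best_volume:
                best_pair, best_volume = (i, j), vol
    return best_pair        # indices of the two entries that become the seeds
```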
Then I set this one and this one as the starting points for the new MBRs, because they are the furthest apart — they would create a lot of volume if they were together. Then I compute, for all the remaining objects, the difference of the necessary volume increase with respect to both of these MBRs — this MBR here and this MBR here. I compute it for object 1, and I'd probably end up with this kind of split, in this case here, because adding 3 to this one here — let's say this one is R1 and this one is R2 — would increase R1 very much, and the other way around. So that's the split with the quadratic cost: I've done nothing more than to choose the pair with the biggest distance between them and then, in the next step, insert each object where it causes the smallest increase. It's easy. I've chosen my seeds, the two starting points for the two new regions, and then I reallocate the rest. What's the increase for R1 to include this one? Smaller than increasing R2 — so R1 grows. And this one: should I increase R1 or R2? Easy choice, R2. And that's it. Well, it was easy for four points. What do you do if you have a lot of them? You usually have hundreds up to tens of thousands, so the quadratic cost is not the best solution. The question is: can we do something better, with a linear cost? And there actually is a method that performs quite well. The idea is to go through each dimension — remember, we have a multi-dimensional space; we've drawn it here in only two dimensions and we'll stick to that — and for the rectangles in the node find the one with the highest minimum coordinate and the one with the lowest maximum coordinate. Determine the distance between these two coordinates and normalize it by the size of that dimension. This way you get a score for this highest minimum and lowest maximum, so that you get an idea of how far apart they are. Then you do the same in the other dimensions — for example, you do it in the x dimension, then in the y dimension — and you choose the dimension with the higher score. The two rectangles that produced that higher score, meaning they are the furthest apart, become the starting points of the new MBRs obtained from the split. Let's see how to do that on an example, because I imagine it's hard to follow just like that. We start with the x dimension, and we said we're going to search for the highest minimum. On the x dimension: A's minimum is quite low, B's is also quite low, it's somewhere here, C is somewhere here, D is here — the highest minimum belongs to E. Then I select the lowest maximum. Starting with A, the maximum of A is somewhere here, and clearly there's no other maximum smaller than that, so for the lowest maximum I have A. So I select A and E, and I calculate how far apart they are. The difference between the values I've established is 5, and I have to normalize it by the size of the x-axis — the extent from the minimum of A to the maximum of E, which is 14, this part here. I normalize it so that I can compare it later on; it's the ratio between these two distances, and that's the result.
I do the same on the other axis, the y-axis. I calculate again the highest minimum and the lowest maximum, and the two rectangles chosen there are C and D, and I calculate the difference. Between C and D there's a distance of 8 out of a total extent of 13 on that axis — that's my normalized result. I then compare the two scores and see that the normalized distance on the second axis, the y-axis, is bigger, and this is how I choose C and D as my new seeds for the two different minimum bounding rectangles in splitting this node. So C and D will be the starting points. Now I do the assignment all over again: I take, for example, A and ask myself, should I add A to D or to C? Because the smallest increase in volume is produced if I grow D, I grow D to include A. Then I do the same for B and for E, and something results. It's not always the best split, by the way: if you finish the assignment for this split right here, you'll see that you actually get quite some overlap. It will probably end up something like this, with this overlap here and a lot of dead space there — but this is what the heuristic ends up with. So, is it clear how the linear and the quadratic cost methods work compared with each other? Okay. The last step, which I've already done, is to classify the rest of the objects to one seed or the other. This is nothing more than a simplification of the quadratic method, and it usually provides a similar split, with small overlap between the resulting minimum bounding rectangles. So, as I said, the linear approach works quite well compared with the quadratic one. For the delete, we can again have the simple case: I search the object, I delete it, and I'm not under the minimum filling degree we've spoken about. But if I have only m objects in a node and I delete one, then I somehow have to reconstruct the tree. For that I perform a condense operation: if I delete an object from a node which had the minimum number of elements, then I delete the entire minimum bounding rectangle and redistribute the remaining elements by reinserting them. That's the delete operation — practically, the delete falls back on the insert operation through this condensation. A special case is the root: as I've said, the root has to have at least two children at any time. If the root is left with one child because of propagating condensations and deletes, then the root is deleted and the next node in line becomes the new root. Let's see a simple example. I want to delete an object from R9 — this one here — but the minimum filling degree is 2. So I delete R9 completely and then reinsert its remaining object. Now I'm doing an insert again, and I have two possibilities: one is to grow R7, with less overlap, the other is to grow R8 and introduce some more volume. Here I'll just increase the size of R8 to include the remaining object from R9, and that's basically it. If I can't include it in R8, or if including it produces a split, then I perform what we've already discussed for inserts. But that's the basic idea for deletes. Before we go on to updates, here is a small sketch of the linear seed selection from above.
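This is my own illustration of the linear seed selection, reusing the MBR class: per dimension, take the rectangle with the highest minimum side and the one with the lowest maximum side, normalize their separation by the total extent in that dimension, and pick the pair from the dimension with the largest normalized separation as the two seeds.

```python
# Linear seed selection (my own illustration, reusing the MBR class): per
# dimension, find the rectangle with the highest minimum and the one with the
# lowest maximum, normalize their separation by the extent of that dimension,
# and take the pair from the dimension with the greatest normalized separation.

def pick_seeds_linear(mbrs):
    dims = len(mbrs[0].low)
    best = None                                   # (normalized separation, i, j)
    for d in range(dims):
        highest_min = max(range(len(mbrs)), key=lambda k: mbrs[k].low[d])
        lowest_max = min(range(len(mbrs)), key=lambda k: mbrs[k].high[d])
        if highest_min == lowest_max:
            continue                              # degenerate in this dimension
        extent = max(m.high[d] for m in mbrs) - min(m.low[d] for m in mbrs)
        # may be negative if the two rectangles overlap in this dimension;
        # the greatest normalized separation still wins
        separation = mbrs[highest_min].low[d] - mbrs[lowest_max].high[d]
        score = separation / extent if extent > 0 else 0.0
        if best is None or score > best[0]:
            best = (score, highest_min, lowest_max)
    if best is None:
        return 0, 1                               # fallback for degenerate input
    return best[1], best[2]                       # indices of the two seeds
```

On the example from above this picks C and D: on x the score is 5/14, on y it is 8/13, and the y pair wins.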
Now, in the case of an update, points and objects can shift: if you modify an object, its features change, so its position in space changes. This kind of modification propagates an update that modifies minimum bounding rectangles — if the point changes, I have to build the minimum bounding rectangle anew, and this propagates in the direction of the root. The idea of the update is, again, to fall back on the delete: if I update an object that influences the structure of the minimum bounding rectangle — so it's not strictly inside, it's a border point, and there's a check that tells me it will modify the structure — then I delete the node and perform reinserts. Okay, now let's discuss a bit about the overlap and the dead space. What does such a search cost me? Let's say we have a tree structure like the one described here, with some overlap between the root children A, B and C, and my search query falls into such an overlap. If you look at this tree, G belongs to A — G is right here, and it belongs to A only. But because of H, which belongs to B, B has grown to overlap A. I want to search for something that intersects with G; I start with the root, I have an intersection, I go into A, but I also have to go into B. Say I go into A first: I check against all of A's children and find an intersection with G. But I can't stop here, because I found a match somewhere here, yet there might be another match under this overlapping border. So I also have to go into B and check all of B's children. The point is: I have to avoid overlapping, and avoiding overlapping is really only possible if I know the data in advance. If I knew exactly what data I was going to insert and what it looks like, and could optimize for that — if I now, with this information, deleted everything and constructed a new R-tree — it would probably not look like that; I would try a different arrangement with much less overlapping. The idea is: the block access cost is big if you have overlapping, and if you want to minimize the overlapping, you might have to destroy the entire index and reconstruct it, which has a huge cost. Some companies really do that: after, say, two or three months, they say our index has become so bad after so many inserts, updates and deletes that we're going to delete it and leave it one night to be reconstructed, with all the power our machines have, and then it will again be as fast as at the beginning. But there are also other possibilities that deal with overlaps in other ways. The R+ tree is such a solution, where they say: overlaps are simply not allowed. When you perform inserts or whatever, the overlaps have to be avoided at any cost — not every overlap, but overlaps at the same level in the tree. So A, B and C, which were all nodes on the same level, have to be disjoint areas; they shouldn't overlap. You can have overlap between different levels of the tree, but not on the same level. The disadvantage, or side effect, is that for this reason you might end up with the same object copied into two different leaves.
And this might result in some space overhead. But the great part is that it improves the search efficiency: I don't have to follow multiple paths through overlaps to reach my data, it's enough to follow just one, because anything lying in an overlapping area would be found on both paths anyway. For the same example, the R+ approach, which doesn't allow overlaps, would look something like this: I have A and B not overlapping, and maybe another node P, where A and P don't overlap either. Again, G belongs to both A and P. Now my search query goes into P, it doesn't intersect with A or B, and it still reaches G. What's interesting to note is that G also intersects with A on another level, so it is also, completely, a child of A — I don't split G itself. It's as if A contains the children of G which belong to A's side, but also some of what lies under it, and I don't allow any overlaps on the same level. The rectangle G is, in a way, divided between A and P, while being completely present in both leaves. With the trade-off of redundancy, I gain search efficiency. There are some differences to the classical R-tree. For example, since you start from the leaves, you will end up with the same object being inserted into several leaves — this redundancy. And something which is actually quite costly is the splitting: in this case, the split doesn't propagate only upwards, but also downwards. I start from a leaf, I go upwards with the split, and I reach some point where the split would produce an overlap which is not allowed — so I also have to go downwards and perform splits to prevent that overlap. The split goes in both directions. And for R+ trees, the minimum number of children has actually been eliminated; it seems it doesn't bring that much, although the classical R-trees use such a minimum. Regarding the performance: the main advantage of the R+ tree is its much better search performance. Point queries are not that frequent, but still, just as an example, you achieve 50% better access time for point queries. The drawback is the low occupancy of the nodes because of the splits, and the many changes that propagate splits result in degenerated trees. Due to this, if you implement such an R+ tree, you actually have to rebuild the index more often than in the case of R-trees. So you gain search performance, you use more space, and you have to rebuild more often — one advantage, two disadvantages. Maybe the storage space is not the problem, but having to rebuild more often might be. That's the trade-off. Okay, we should take a 10-minute break before we go on to the second part of the lecture. See you at half past. Okay, so we've discussed the R-trees, but then we said there are things we might want to index, like the editing distance. In those cases we can't do it with R-trees: we don't have a Euclidean metric, we have some other kind of metric, and for this we need a special kind of tree. That was the idea of the M-trees, the metric trees, which were introduced in 1997 and allow for arbitrary metrics. The tree structure is there, and you can decide on your metric and use it, along with the metric space properties.
So, in this case, one of the most important properties is the triangle inequality, which helps us perform checks for whole subtrees, and the geometry is determined by the distance function, the metric, we're going to use. Let's do a short recapitulation of metric spaces. A metric space is a pair of the universe of all possible values and the metric we're going to use. And there are some properties: for example, the distance between two objects, two values in this universe, is always at least zero, so we don't have negative-valued metrics. Then the identity property: if, with respect to the given metric, the distance between two values in the universe is zero, then we're talking about the same value. You should imagine these properties on the multimedia objects we're talking about, like images or sounds. If you're comparing two objects for how similar they are, using, say, chain codes or the editing distance — what do I have to change in this image so that it looks like that image? — then these properties hold: the distance is not negative, you have identity, and you also have symmetry, so the distance is symmetric between the two of them. And then comes the triangle inequality: if you have three values, the distance between one and three is smaller than or equal to the distance going through two. It's clearly smaller in the case I've drawn here, and it's equal if two lies on the line between one and three. That's the triangle inequality, and this is what we're going to use in M-trees. No questions here, right? Let's take an example to get an idea of what happens. Consider we have three points in our database, A, B and C, and we have a query Q. The goal is to find the object with the smallest distance to the query. Since it's a metric space, we don't know the exact positions of the points; we know they are somewhere there, and we only have the metric, the distances between them: we have pre-computed the distance between A and C, between A and B, and between B and C, and stored these values. Okay, I want to find the point closest to my query. I can start by comparing the query with one random point from the database, let it be A — one similarity computation — and I find, for example, that the editing distance is 2. I choose another point, B, and I see that the editing distance is 7. The question is: can C be closer to the query than A and B, and do I have to compute the distance between Q and C to know that? Because of the triangle inequality, I can work it out: the distance between the query and point B, which I've already calculated, has to be smaller than or equal to the sum of the distance between my query and C and the distance between C and B — going through C. I know this one here, and I know this one here because it's in the database. I don't know this one, and I don't want to compute it, because that costs. But I can underestimate it: I shift the distance between B and C to the left side, and I see that the distance between my query and C is at least five — five-something. And I know that A is closer, because the distance to A is two.
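To make that concrete, here is a tiny sketch of this lower-bound pruning with made-up distance values (my own illustration: only d(B,C)=2 comes from the example above, the other stored distances are invented, and the editing-distance function itself is just assumed to exist and to be expensive).

```python
# Tiny sketch of the lower-bound pruning just described (my own illustration,
# with partly made-up distances). d(Q,C) is never computed: the stored d(B,C)
# and the already-computed d(Q,B) give a lower bound via the triangle inequality.

precomputed = {("A", "B"): 6, ("A", "C"): 7, ("B", "C"): 2}   # stored in the database

def stored(x, y):
    return precomputed.get((x, y)) or precomputed[(y, x)]

d_q = {"A": 2, "B": 7}            # distances to the query computed so far

best_obj = min(d_q, key=d_q.get)  # A, at distance 2
best_dist = d_q[best_obj]

# lower bound: d(Q,C) >= |d(Q,p) - d(p,C)| for any p we already compared with
lower_bound = max(abs(d_q[p] - stored(p, "C")) for p in d_q)
if lower_bound > best_dist:
    print("C can be pruned: d(Q,C) >=", lower_bound, ">", best_dist)
else:
    print("C must still be checked with the expensive distance function")
```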
So, I've just computed that based on the triangle inequality with the information I had, without checking the actual distance between my query and this object. Now imagine that you have a lot of C's and can prune all of them without computing the editing distance, just by evaluating this simple triangle inequality. That's the basic idea of M-trees; this is how they work. Okay, so M-trees partition the objects into regions with a certain radius. They start with a point and say: all the points within a certain radius of this center, like here, are in partition number one, or in a certain region, and points belonging to other regions are in other partitions. And usually, for M-trees, the idea is to choose these partitions so that there's an equilibrium — they're not disproportionate, they're balanced. Like here, a trivial example with what's inside the radius and what's outside: two regions of roughly the same size. Now, say we have a query starting from here, and it's an approximate query, meaning I'm interested in approximately similar objects up to a certain threshold. I can define a radius based on this approximation: I'd like the objects that look more or less like this image, say 70 or 80% similarity, and this is how I define a region, based on this radius, of objects I'm interested in. All the objects that fall into this region have to be returned to the user. The question is: what can I prune? What can I say won't interest me for sure? If I get, say, this object here, should I return it or not? Well, if I have the radius of interesting objects and the radius of the first partition, this one here, and I calculate that the distance between the two centers is bigger than the sum of the radii, then I can prune it, due to the triangle inequality. We'll go deeper so you can see it on examples, but let's first see the overview. We said M-trees are metric trees. They are similar to R-trees, but they use only the distance information of an arbitrary metric — you can use any metric you want. As in the case of R-trees, you have a geometry and also the tree representation: here, because of the radius, you have a circle, sphere, hypersphere geometry, and the tree looks like this — the major regions including all objects, and then each region including subregions. Each region is characterized by such a central point, its radius, and then its children; for example, here I have children C, D, A and F, and so on, down to leaf nodes, which have pointers to the information in my database. So this is the similarity between M-trees and R-trees, and again, when you perform searches, you can perform them based on the tree structure and on the geometrical representation. Now, each node in the tree describes a region. The region says that it contains all the points whose distance to the central point of that region is smaller than or equal to the radius. The central point, as I've called it, is called the routing object, and the radius is called the covering radius — what's the coverage of that region? For the indexed points in the region, the guaranteed fact is that the distance between these points and the routing object is at most the radius — so anywhere between zero and the radius.
So what this basically tells me: say the query is somewhere, let's take a query here, and this query has a certain radius. If the distance between the query point and the routing object of a region is greater than this radius here and this radius here put together, then I can spare myself the entire region, because no point in there can be similar enough to be an object I have to return — it's too far away from me. By considering this, I can prune a lot of stuff. Now, going back to the tree, we have two types of nodes. The internal nodes have a routing object, the radius, and they also hold the distance to the parent node — with the exception of the root, which doesn't have a parent and therefore no distance to a parent. But all the other internal nodes also hold the distance to their parent. The leaf nodes have the values of the indexed objects, so the actual objects, and they too hold only the distance towards their parent. This way the hierarchy always lets me navigate through it, knowing how far each node is from its direct parent. Most of it is similar to R-trees, with the only difference being this distance: we keep this distance from child to parent pre-computed, and you might ask why. Well, it's because of the pruning I was talking about: if I'm going to use the triangle inequality, I need this distance in order to prune fast. I'll give you an example of what this means right now. So let's say we want to check: this is our root, this point here is the central point of this region, and it has a child, this point here, at a distance which we know — it's the distance we were talking about. We have a query and we want to know whether we should check this child. Now, this region can be pruned if the distance between the query and this point is bigger than this radius here — let me draw it in blue — plus this radius here, considering the worst case, where the radius lies on the same line as the distance. So, as I said before: if the distance between the query and the routing object Vn is bigger than the sum of these two radii, then I don't care about the nodes inside that region. The question is: do I need to calculate the distance between the query and Vn to do this pruning? No, I don't, because I know the distance between the query and the root — I've already computed it in the previous step — and I know the distance to the parent I was telling you about on the previous slide. So I can apply the triangle inequality here and say: the distance between the query and this root node has to be smaller than or equal to the sum of the distance between the root node and the child node and the distance, which I don't know, between the query and the child node. I don't know this one, but I know this one and this one. I do the same as in the previous step: I move this one to the left side and get that the distance between the query and the root point minus the distance between the root point and its child has to be smaller than or equal to the unknown distance between the query and the child point.
Now, if I combine this with the inequality from before, this difference is an underestimation, a lower bound, of the unknown distance here. So I can replace it: if this lower bound, which I can compute, is already bigger than the sum of the two radii, then the unknown distance is most surely bigger as well, and I can prune. Clear? It's quite easy. Basically, I just need to compute this part here, and that uses only things I already know — I've eliminated the unknown part, this one here. This way, without computing this distance, I can sum up the radii, calculate the difference here, and decide whether to prune the entire region or not. It's as easy as that, with no additional distance computations. Is it clear? Everybody? Okay. This was the basic idea of how M-trees work. Regarding the operations: the insert is performed as in the case of R-trees, with the smallest expansion — I insert where the smallest expansion of the radius is needed. Of course some region will expand wherever I insert, but I choose the one whose radius has to expand the least. Splits are performed too, only we don't have any volumes here; in the case of a split, two new routing objects have to be chosen, so again this choice of extreme points that are far away from each other — the choice is essentially the same as for R-trees. Again we have a heuristic, and the heuristic is to minimize the maximum of the two resulting region radii: the maximum has to stay small, I have to bring them to a similar level, I don't want disproportionate radii. After I've chosen these two seeds, I attribute the rest of the objects, as in the case of R-trees, to the one that produces the smallest increase in radius. So the big idea of M-trees, the biggest win, is that I get to do something I can't do with R-trees: I can use different metrics, like the editing distance, and I can use the triangle inequality for pruning, and this way I can spare myself a lot of uninteresting regions, regions holding a lot of nodes. And again, regarding the dimensionality, there is the metric cost: when I compare, for example, the query and the root, I need to perform some distance computations between the query and some nodes to have a starting point for the triangle inequality. That costs, and that cost is related to the dimensionality of my data. This is why, as with R-trees, I have a problem with a very high number of dimensions. Experimentally it has been shown that with uniformly distributed data, R- and M-trees reach acceptable results up to around 20 dimensions; for more than 20 dimensions, too many comparisons need to be done. This is a well-known problem in the field of multidimensional indexes, the so-called curse of dimensionality: the volume that such regions cover increases exponentially with the number of dimensions. There are several approaches to deal with this problem. One of them relies, for example, on principal component analysis, or on something you've probably heard of in information retrieval, where it's quite useful: latent semantic indexing. The idea there is to find the dimensions which are really definitive for your space.
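Before leaving the M-tree, here is a small sketch putting that node test into a range query. It is my own illustration, not the original algorithm as published: the parent-to-child distance is stored in each entry, the query-to-parent distance was computed one level up, leaf entries are modeled with a covering radius of zero and no subtree, and at the root one can pass zero for both stored distances so the cheap test never wrongly prunes there.

```python
# Sketch of an M-tree range query using the stored parent distances for cheap
# pruning (my own simplified illustration). Leaf entries: covering_radius = 0,
# subtree = None. For the root call, pass d_query_parent = 0 and store
# dist_to_parent = 0 in the root's entries.

from dataclasses import dataclass
from typing import Any, List, Optional

@dataclass
class Entry:
    routing_object: Any          # routing object, or the data object itself in a leaf
    dist_to_parent: float        # pre-computed when the entry was inserted
    covering_radius: float       # 0.0 for leaf entries
    subtree: Optional["MNode"]   # None for leaf entries

@dataclass
class MNode:
    entries: List[Entry]

def range_query(node, query, query_radius, d_query_parent, distance, result):
    for e in node.entries:
        # cheap test: |d(Q, parent) - d(parent, e)| is a lower bound for d(Q, e)
        if abs(d_query_parent - e.dist_to_parent) > query_radius + e.covering_radius:
            continue                                    # prune without a distance call
        d = distance(query, e.routing_object)           # the expensive metric call
        if d <= query_radius + e.covering_radius:
            if e.subtree is None:                       # leaf entry: a real object
                result.append(e.routing_object)
            else:
                range_query(e.subtree, query, query_radius, d, distance, result)
    return result
```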
The idea of these dimensionality-reduction techniques is to put together the dimensions which correlate and represent them through one dominant dimension. This way one can reduce the number of dimensions, say from 50 down to around 20, because of this correlation. One approach that builds on dimensionality reduction is the Gemini indexes. However, all these approaches are for further reading, if you want to go into it and implement something like that; they are not in the scope of the lecture. They are just for your information, so you know that this problem exists and that one can tackle it with these techniques. If you need more information, just let me know and we can discuss it. Okay, so we've reached the end of today's lecture. Today we've touched a very interesting problem of classical databases, namely indexes, and we've extended it for use in multimedia databases, where we need to cover multi-dimensional data. For this reason we've discussed the R-trees. R-trees use a certain geometry, the minimum bounding rectangles, and the point is that overlapping of such rectangles may lead to performance issues when searching. One of the most complicated operations in R-trees is the insert, and all other operations rely on inserts: deletes translate into reinserts, and updates translate into deletes, which again translate into inserts. So everything relies on the quality of the insert operation. Inserts may lead to splits, and there are certain heuristics one can use to perform those splits. You may have bad splits, which you really want to avoid. You won't be able to compute the perfect split in reasonable time, but with linear approaches you can avoid the quadratic cost and still calculate a good enough split. You just need to follow some simple heuristics: avoid overlaps, and grow the minimum bounding rectangle with the smallest volume increase to avoid dead space. In the second part of the lecture we discussed another kind of tree that can cope with, for example, editing distances, which we mentioned for time series — for example audio, where you can calculate an editing-distance similarity between two pieces of music. In order to index such data we need metric spaces, and metric spaces can be used in M-trees. And in order to search efficiently there, we can use the triangle inequality to do a lot of pruning. That's the idea of this lecture. And since we are at the last lecture of the semester, let me take you through what we're going to do next semester. Next semester we also have a lot of interesting stuff. We are going to discuss the world of data warehousing and data mining — for those of you who are planning to go into industry and get a great job, this is a very important lecture. You would be surprised how many job ads require people to know not only classical databases but also analytical applications of databases. This is where you speak about the large volumes of data that data warehouses deal with. And our lecture doesn't stop at how you can store this data; the interesting part comes with data mining: how can you get intelligence from this data, how can you run analytical queries, how can you get the most out of what you have?
And here we'll talk about association rule mining. We'll talk about classification, clustering, things we've also touched a bit in multimedia databases only from a different perspective. So, this lecture will be in English, so it's also suitable for the ITIS students. Another interesting lecture we will have next semester is distributed database systems and peer-to-peer data management. It will be held in German. Again, a very interesting lecture you see here, the expansion of databases in the field of the distributed systems. Amazon, for example, offers such services. Their S3 platform expands really fast, they have a huge database of products which wouldn't be achievable with normal technology. Google is another example with their big table and their approach. You can learn everything about that in this lecture here. So, distributed database systems, I can only recommend it to you. The only disadvantage for the ITIS students is it's in German. The slides are in English, however, and we can answer your questions in English. The third lecture we will hold next semester is spatial databases and geo-information systems. So, for the some of you who are interested in geo-information systems and maps and how to compute coordinates and how to store them and how to index them fast, that's the lecture to pursue. The last lecture we are going to offer for the master's students is digital libraries. It's kind of a similar lecture to multimedia databases only from the library's perspective. We want to achieve that the data lives on for hundreds and hundreds of years. So, we have a lot of data, we have again text, we have music, we have images, we have videos, and we want to make them searchable in a fashionable way. And at the same time, we want to ensure persistency. So, somehow it's at the border of multimedia databases and distributed database systems where some redundancy has to be achieved to ensure this persistency. Hard drives fail, you see it, for example, on YouTube they fail every day, they lose a lot of hard drives. What would you do if you would lose information which you don't have replicated anywhere else? You've lost a part of your digital library. Well, this sucks because you can't reconstruct that information. Stuff like that, you'll find more about it in digital libraries. That would be about everything I wanted to tell you. And hope to see you, of course, in the examination, but hope to see you also next semester at our lectures. Thanks.
In this course, we examine the aspects of building multimedia database systems and give an insight into the techniques used. The course deals with content-specific retrieval of multimedia data. The basic issue is the efficient storage and subsequent retrieval of multimedia documents. The general structure of the course is:
- Basic characteristics of multimedia databases
- Evaluation of retrieval effectiveness, precision-recall analysis
- Semantic content of image-content search
- Image representation, low-level and high-level features
- Texture features, random-field models
- Audio formats, sampling, metadata
- Thematic search within music tracks
- Query formulation in music databases
- Media representation for video
- Frame/shot detection, event detection
- Video segmentation and video summarization
- Video indexing, MPEG-7
- Extraction of low- and high-level features
- Integration of features and efficient similarity comparison
- Indexing over inverted file index, Gemini indexing, R*-trees
10.5446/343 (DOI)
So welcome all to our new lecture on multimedia databases. And today I want to talk to you about a couple of things. Last time we were dealing with textures. So we were talking about what makes a texture, what makes a structure of some image. And we're considering some low-level feature on one hand, like contrast or directionality. We were also talking about some probabilistic ways of dealing with textures. So we could predict the color or the intensity of some pixel based on the surroundings of that pixel if it's a pattern. And we were talking a little bit about high-level features, about Fourier transformations, about wavelet transforms to give you an insight into what we can gain from other spaces, like feature spaces or frequency spaces or whatever it may be that the transform transports our image to. The difference between high-level features and low-level features was... Major difference. Anybody? Nobody? Huh. Well, basically, from high-level features, you're not losing any of the image information. You can reconstruct the texture as just a different kind of transform. However, if you use low-level features, you're extracting some characteristics, some measurements on the image. And it's usually impossible to reconstruct the original image, the original structure from low-level features. So that is the basic difference between the two. Today, we want to go a little bit into shapes. We want to see how we can put our fingers on objects that occur in images and shapes. It's one of the best descriptions of what actually is in the image. And therefore, we will work a little bit on first multi-resolution analysis and then go to the real shape-based features with different ways of preparing, of segmenting the image and detecting the basic edges, the salient edges that make actually the visual impression of the shape. But first, we want to talk a little bit about multi-resolution analysis. And the problem that we've seen last time was that even for very small step functions that we use for the image frequency, for the intensity distribution, the transformations can become arbitrarily complex. I mean, we've seen the wavelet transform. We just had seven different wavelets. So we used a mother wavelet to children and four baby wavelets. And even that was a high-dimensional equation system. And if we use more, so a bigger base for our feature space, this is getting even more complex. So we have a lot of information in the image, on one hand. We have a lot of variables given by the base vectors or given by the different frequencies in the sine and cosine. We have waves. And solving this linear equation is definitely a very expensive thing. So what can we do with it? Well, we already did go into some algorithms like the fast Fourier transform that makes it a little bit easier to calculate the actual linear system. With the wavelet transform, we have something similar, which is called the fast wavelet transform, who would have noted. And I want to show you how this works, too, because this is basically the foundation of multi-resolution analysis. And that is actually one of the topics, or one classical way of dealing with high-dimensional computations that you will experience very often in the form. So with fast wavelet transform, you can actually compute the result of your linear equation system in linear time. And what you basically do is you change between two steps. You convert the image down into reduced resolution. So you get smaller images with less resolution. 
And you store a second part, and that is what you lost during the reduction step. So I can represent images always by two parts, the basic features and the details. And this is what we're getting a part here. So the basic features are also visible in lower resolved representation. You still see the big edges. You still see the dominant colors. But if you drill deeper into the image, you find that this is not actually the whole of the same cover, but the color changes slightly. And these changes could be stored in a different part of the image or a different representation of the image. And with these two images, you can actually reconstruct the original image by adding the details to the salient parts. And this is basically what is called multi-resolution analysis. So you have the picture in many resolutions. And for each resolution that is smaller than the original resolution, you store the details that could be used to reconstruct the original resolution to but in a different file. So you get one file that just shows the basic features. You get another file that you can use for reconstruction if you want to. Okay? This is basically what it is. The thing to do is you consider the image in different resolutions. And the signal of the image is just the intensity signal. You go through the image and you record the intensity of each pixel. What can you do? Well, you can start with a single pixel. A single pixel has just a single intensity. If I want to add more detail, I have to say, well, actually, it wasn't a single pixel, it was six pixels. And the distribution of the brightness or the colors in this pixel was like that. So I add something here, yeah, which goes towards the deconstruction. If I do that several times, I end up at the original image. Okay? This is the basic idea. So the representation of the image in certain blocks, in certain resolution, is just done by averaging the intensity information and keeping the differences what you use for averaging. Maybe an example will make it more clear. So what you basically do is you take a big picture that might have 16 pixels. In the first step, I will always take four pixels, average their intensity values, and store them as a single pixel. Okay? If I do that a couple of times, I will end up at one pixel. Yes? The only thing I have to note is what happens here. What did I lose by the averages? How did the original values look like? Okay? And this can be just done by the differences. So if I have a rested image, VK, and say VK minus one is just a lower resolution, where the pixels have been averaged. And at some point, we arrive at the worst resolution ever, just a single pixel. And the single pixel shows me the average color. Note that when we were talking about low-level features for color, the average color was a necessary and a meaningful feature in a way. So even this image tells us something about the image. It gives us a way to distinguish between images, because if the images were totally different to start with, also their average colors are different. But actually, this way around, it's not as true as the other way around. If the average color of two images is different, then the original images must have been different. Images that are different can still come down to the same average color. So we have the example of an image being half red and half blue, and an image being totally lilac or violet. They have the same average color, obviously, but still they are different images. 
But if the other image would be green, no way of getting the same average color. That is basically the idea. But still we have to go back from the single image to reconstructing. So we still have to ask, what's happening here? How do we go back? And that is basically how the intensities of pixels are obtained from the Corsa image. How does the back transformation work? And basically, the intensity of the pixel in the lower grid is the mean of a set of corresponding pixels in the higher resolution grid. And I always go one step by halving the number of pixels. So I always halve the number of pixels. Two pixels become one, basically. And if two pixels become one, then storing the information contained in the two pixels can basically be done how? Okay. I have two pixels, intensity i, intensity j. And I go to one pixel, which has intensity. Yes? I plus j. I plus j divided by two, basically. Okay. Just the average. What do I have to store if I also want to have this step? Yes? Exactly, the difference between i and j, because if I know what the mean is and I know the difference with respect to each other, I can put my fingers on what they actually were. If they had different zero, they must have been exactly the same color on the spot. If they had difference two, they must have been right one lower, one higher. Okay. So this is actually what I'm doing now. I'm storing the difference here. Okay. And this is what it makes. So what happens basically is I go from 300, 200 to 150, 200. So I'm always kind of like boiling it down in the horizontal way and in the vertical way. Okay. This is always vertical. So from 300, can you read that actually? 200. I sample it down to 150, 200. So I sample it down in the vertical axis. And then from 150, 200, I sample it down to 150, 100. Okay. So now it's exactly a quarter of the size of the original image. Okay. So this basically is halved vertically and halved horizontally. And this is, I go on and as you can see, the image is getting coarser and coarser. So you cannot see the actual objects anymore at some point. But you will just see a very coarse representation of the feature. Well, so for each pixel in the original image, there is a corresponding pixel in the lower resolution raster. That is derived by repeated averaging. And if I have the intensity of a pixel at some point, then for each pixel, we have what is left. What is left of the average part plus the difference that we use to kind of like sample it down. Okay. And this detail information, that is needed to reconstruct the intensity of the actual picture. And if we are K steps away in resolution, in terms of resolution from the original picture, we have to add K times the differences and then we finally arrive at the original picture. So this is a lossless decomposition. If we only look at the first part, it's very lossy. So we can do a lot of things with the third part, which is interesting. But we need the detailed information to kind of like break it down. So one of the ways that was basically already hinted at averaging and differencing. So what do we do? We take each two pixels and transform them into their average. Okay. Easy. Then we take the next two pixels, always horizontally and vertically. And take them to the next level. But what do we have to do? Well, basically, if I do it here, the five and the nine do have an average of seven. And now I have to say, how were they different from each other? How do they differ from the seven? Okay. 
Well, basically, the first one or the second one is seven plus two. So I keep the two. Nine is seven plus two. If I do have a nine and I want to arrive at an average of seven, what will the other number be? Obviously five. I do the same for the eight and the four. Again, for the second number, I record the difference with respect to the average, which is four and six makes minus two. Okay. I record the minus two if I have to think of what was the first number. Well, I have the six as an average. I have the four as one of the parts. Then the other part has to be eight. Okay. I have to decide for either one. It works either way. Just pick one. You can store the difference with respect to the first or with respect to the second. Doesn't matter. It has to be the same always, obviously. Huh? It's a five would be the nine. No, because what you basically do is you always put together two pixels to form one average pixel. Okay. And the information that you need is depending on the average, obviously. And on the difference with respect to either of the numbers. It doesn't matter if you choose the first number or the second number. Okay. Pick one and stay with it. It's totally confusing if you do it randomly any number, you know, like, then it doesn't work. But I can immediately say, well, if I know seven is the average, I know nine is one of the numbers. And I want the other number that have to do this solution. Okay. Which is easy. Good. This basically what I can do. But you're right. I have to decide at one point for either one. Doesn't matter which I take, but for either one. And what you can see now is that we keep the averages and we keep the differences. Okay. It's the same amount of information. Isn't it? So what are we saving? I had 16 numbers here. I have eight numbers here. Oops. And eight numbers here. So I kind of replace 16 numbers by two times eight numbers. Yes. I think we could say that we save a family. What do we save? Well, yes, you do get a preview of the, if you just skip the details and use this. Yes, we can do something with that. But still, we have to store the same amount of data, don't we? Tricky, isn't it? Yes? Maybe we can throw away something. We don't need to save a family line, which is to be. I don't need to use the same information. I can throw it away and just store the smallest things. Yes, there was basically the original argument of, yeah. So we can see things in the lower resolution version that are interesting for us, right? And this is actually why we're doing it. But still, there's a very clever argument about the storage here. Yes? Can you just do the normal size? Yes, exactly. If we store the differences with respect to the average, it's small numbers usually. And most of them will actually be zero because think of pixels in some image, you know? You will always have here, I have the orange sweater, you know? Like these pixels all do have the same color. Difference zero will be very often the same difference. And if I look at this, lots of white, all the same difference. And that happens one black pixel. Okay, there's a difference here. This is the interesting part. So this is where the actual compression comes from. Many of these are zero. Okay? And this is kind of like the clever way of storing that. And then at some point, you arrive at the single pixel. Okay? 4.5 is the average intensity of this image, however it looked like before. Okay? Everybody understood what I'm doing? Good. 
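To make the averaging and differencing concrete, here is a small Python sketch that reproduces the numbers from the example above (5 and 9 become average 7 with detail +2; 8 and 4 become 6 with detail -2). The function names are my own:

```python
def average_and_difference(values):
    """One averaging/differencing pass over a 1-D intensity signal.

    Each pair (a, b) is replaced by its average; the detail (b - average)
    is kept so the pair can be reconstructed losslessly:
        b = average + detail,   a = average - detail
    """
    averages, details = [], []
    for a, b in zip(values[0::2], values[1::2]):
        avg = (a + b) / 2
        averages.append(avg)
        details.append(b - avg)
    return averages, details


def reconstruct(averages, details):
    """Invert one pass: recover the original pairs from averages and details."""
    values = []
    for avg, d in zip(averages, details):
        values.extend([avg - d, avg + d])
    return values


print(average_and_difference([5, 9, 8, 4]))   # ([7.0, 6.0], [2.0, -2.0])
print(reconstruct([7.0, 6.0], [2.0, -2.0]))   # [5.0, 9.0, 8.0, 4.0]
```

Applying the same pass first along the rows and then along the columns of an image, and repeating it on the averages, is exactly what produces the resolution pyramid and, as discussed next, the four filter combinations. And since neighbouring pixels usually have similar intensities, most of the stored details are zero or close to it, which is where the compression comes from.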
So in signal processing, there's actually something that is even nicer to do than the averaging and sampling. What you could do is just a high pass filter and low pass filter. So you have a signal like this here. Weeeeeee. And what you do with a high pass filter and a low pass filter is the high pass filter will only figure out the higher frequencies, the low pass filter will only figure out the lower frequency. So this is basically what happens with the low pass filter, what is left of the image. And this is actually what happens with the high pass filter, what is left of the image. And the high pass filter is always basically the baby waflets of higher order. And the low pass filter is the baby waflets of lower order. The low pass information contains the salient information, yes, the averages. The high pass information contains the detail information, the differences. Okay? Good. High pass filter extracts the image details, low pass filter the averages. And of course you could use four possible applications of the filters. Again both horizontally and vertically. You can do two high pass filters. You can do a high pass filter followed by a low pass filter. You could do a low pass filter followed by a high pass filter. Or you could do two low pass filters. Okay? So this is basically, should be L here. Yup, yup, yup, because it's a low pass filter. And you save the result of the high pass filter for the subsequent reconstruction of the image. This is the detail information. And you look at the low pass filter image for the salient edges, for the average colors, for whatever you are interested in. Okay? It basically looks like that. So what you're doing is you take some image, you down sample it along the Y axis with a high pass filter and a low pass filter. Okay? The high pass filter gives you the details, the information that was changed, the differences. And as I said, a lot of them are zero. Those are the white parts of this image. Okay? Only where something happens, there is a difference. And these are the averages. This is the low pass filter. Okay? I've done it along the vertical, the horizontal axis, the X axis. Now let's do it along the vertical axis. So again, I can do a second low pass filter, which will still give me a smaller image, quarter the size of the original image. Okay? Right? And still containing many of the details, because it's two times sampled the averages. And I get four possible ways of getting all the differences. These are the differences in the vertical reconstruction. These are the differences in the horizontal reconstruction. And these are the differences in both reconstructions. Okay? So this is the high pass filter. Right? This is basically what I'm getting. If I put together all these four images, what do I get? The original image reconstructed. Perfect reconstruction, yes? Not throwing away any data. These four images are, in terms of resolution, exactly the original image. It's just that some of them are almost white. Yes? Very small differences, very many zeros. And I can see, I can get a good impression or a preview, as you said, of the original image. Two times low pass filter. Okay? All I'm saying. So this is basically what you do in multi-resolution analysis. And if we're doing it, it looks like that. So this is always the double high pass filter. And these are all the differences. So the total number of each pixels in the step is the same. The original size of the image does not change. Yes? One question. Am I doing all those four combinations? 
No, no, you're doing all four combinations. Four, yes. Okay? And what you're basically doing is you're doing it stepwise. So you go from the original image to four times three difference images and one doubly low-pass filtered image for all the details, sorry, for all the basic features. Then you take this and split it up again, do all four combinations of the filters, and end up with something even smaller. You can do that until you're left with a single pixel up in this corner. Okay? And by adding all the detail information to that single pixel, I can reconstruct the image. Okay? This is basically multi-resolution analysis. Everybody clear? It's wonderful, isn't it? It's actually so easy, and I wouldn't have thought of it. Well, what you basically do now for the feature vector is you save the expected value and the standard deviation of the wavelet coefficients at each resolution level. So for example, if you have a three-stage resolution, you get a 20-dimensional feature array, because for every single one of these images you save the expected value and the standard deviation. So two numbers. You have 10 of those fields, times two is 20. You have a 20-dimensional feature vector. And this actually gives you a very good impression of the image; it allows for a very good comparison of different images. Okay? Yes? Because you need the expected value and the standard deviation. It's just two measures, and this is what the factor two is: two numbers. You always store the expected value, which gives you kind of like the average color or the average intensity. But just having the average intensity is a little bit too messy: you could have a lot of images having the same expected value. But then they might be different in the standard deviation. Yes, they might also be different in the skewness and whatever it may be, so you could add more moments to distinguish between them. But the basic idea is the same as we had with the grayscale images for the textures: just taking statistical moments to describe the actual image. And this is what we're doing here with the first two statistical moments, expected value and standard deviation. We could add some more if we wanted to, but those two are actually okay, and the 20-dimensional feature vector is very easy to handle. If we added a third moment, like the skewness, we'd have a 30-dimensional feature vector; a fourth would make it 40-dimensional. Okay? That's kind of getting more and more. Good. This actually brings us to our second topic for today, shape-based features. So now it's really down to earth. We want to see what's in the picture. We want to see objects. We want to see shapes: tables and chairs and human faces and everything. And we have to find a way to describe these shapes: describe them using the picture information, but not representing them by the picture information itself, rather by something else. How could we represent shapes? Thinking about it, it's not so easy, is it? Nevertheless, we have to think of a way to represent them, because the shapes in an image really contribute significantly to the similarity. If we see two images of cats, one can be brown, one can be white, we still notice a similarity because the image contains a cat.
And if we see the image of a tree, or a color, it could be a birch tree or a beech tree or oak tree or whatever it may be, the colors may slightly change, but still we get the impression it's similar, and the impression is very often transported by the shape of things, because it's just tree-shaped, you know, like weeee. Tree-shaped, okay? And seeing that shape, everybody would agree, yeah, it's a tree, okay? And I will not draw a camel or something like that anymore. I failed last time with that, so... So, basically, also, if you have shapes, you may have a deeper semantic information, that this is a tree-shape, gives you an idea what the object depicted in the shape is. Yes, it could be a bunch of cotton, that I put on a stick. It's tree-shaped, but it's not a tree. But still, if something walks like a duck and talks like a duck, it might probably be a duck. So, if something is shaped like a lemon, it might actually be a lemon or banana or whatever it may be. And... if we look at some typical shapes, we find that things come in very many forms. So, all these are immediately recognizable. Maybe that not. But still, all these are almost immediately recognizable as chairs. Though the images are totally different. They differ in color, they differ in texture. So, here you have this wooden texture. Here you don't. Okay? They differ in... well, actually in the shapes, but a basic shape... always seems to be there. Okay? And this is probably what makes us believe those are chairs. And if you have a combination of simple shape features, so, something... some round shape, some triangular shapes, maybe squares or whatever, and you add up other features. So, for example, the color or the textures. You could get better retrieval. So, for example, if you have a round object in an orange and red, yellow-ish picture, it might be a sunset or sunrise. Okay? And I always have this kind of like round part, and the rest is kind of like orange, yellows, all different kinds of colors. Of course, it might not. Yes, it might be an orange ball on the beach. Very well. But, I mean, it's all we have. And I did try here with one of the IBM tools, looking for cross-shaped kind of things. And it works quite well. You know, like, you see that most of the images actually do contain crosses. And even if they have kind of like this zigzag pattern here, or if they have the split arms, they will still be recognized as crosses. Okay? So, this is something that can be done that is kind of sent. The fundamental problem, of course, is how do we recognize the shapes of things in the image? I mean, it's always easy to construct something like I did with the chairs. You know, like here is the part where you sit on, and here is the part where you lean to, and these are the feet of the chair. I can do that very easily. I can kind of like somehow segment the image. Yes? And find out what is the salient shape. But a computer has to do that in some automatic way. And, of course, also this idea of semantic mapping, I mean, it was a little bit suggestive, the shape of the chairs. A chair has a certain shape because it has a certain function. And it doesn't make sense if you have a chair that is upside down because you can't sit on it, obviously. So it has to have the chair. But is it always true that from the shape of something, you can actually conclude what it is, what the semantics behind the thing is? And this is definitely not true. 
Then we have to kind of describe the shape, again, with some feature vector, with some low-level features, high-level features, I don't know. And once we are able to describe shapes, based on this description, we have to find similarity measures to compare between different shapes. So is a table shape really different from a chair's shape? What are the similarities? Where is the difference? How do I compute a difference automatically? These are questions of fundamental problems that we will have to deal with during the next part of the lecture. And the first topic we will have is segmentation. So segmentation is really fundamental because not all the shapes that we know are in the picture are also shown on the picture. So for example, this image is a sundown. Everybody can see that. Where is the sun? Well, it's behind a cloud. It's not shown in the picture. So I don't have anything sun-shaped here. See? We know it's there. The picture does not show it. It's occluded. Happens very often. What are the interesting shapes anyway? I mean, I have ship here. I do have ship here. They are obviously ship-shaped. I have cloud here. I do have cloud here. Obviously cloud-shaped. Wave? Something wave-shaped? Which do I record? What are the interesting shapes in this picture? What are the shapes that would make the picture seem similar to some other picture? So do I really consider all the different shapes? Do I consider only the important one? Maybe only those in the foreground. So this is probably more a boat image than it is a sundown image or than it is a cloud image. All problems that we are facing. And that we have to live with somehow and that we have to consider. Segmentation can be a nasty problem. Because what represents a shape in the image and what does not? Hmm. Zebra-shaped. Zebra-shaped. Interesting concept. Very hard to make out in this image, isn't it? Because there are lots of zebras and they occlude each other and they have stripes and I can hardly see what part belongs to what zebra. So this is kind of like a new zebra or not. Or where does this zebra end? Difficult, isn't it? Same here. I have this... I don't know what it actually is, you know, like it's some rock formation. But what are the shapes? So this is kind of rock shape. Let me take blue. Maybe a little bit more counter. Probably here. Yeah. Rock. Very hard to make out. Still we got a visual impression from it. There is something in terms of shape. And we can definitely see something. Though it's not your usual, haha, it's a tree. It's not happening simply. It's just very complicated images. And as I already said, one part is really all parts of the shape visible. And should I record it as being round? I know it's the sun, I know the sun usually occurs round. So how do I store it? Do I just take the knowledge and say, yes, this is definitely a round sun. And if somebody looks for a sun, I always look for something round. Or do I just store this part of the sun? So anybody can look for images where the sun is half gone. Difficult. How to segment it. And the segmentation is one of the problems that the multimedia community is working on for ages now. And automatic segmentation, image segmentation, still kind of the holy grail of computer vision. Maybe it hasn't to be done automatically, but could be done semi-automatically. That you kind of get suggestions of shapes that occur in the images and just accept them or decline. And say, no, no, no, this is an artifact somehow. 
And also in the early versions of multimedia retrieval, it was not done automatically, but there were some semi-automatic tools. Usually you had to draw it into the picture, and that was the shape that was then stored. So for example, with IBM's QBIC, which is part of the DB2 database as the image extenders, there is no way of doing automatic segmentation. It's the way it is. So IBM had this prototype here, and there you really had to go and do it yourself; you had a lot of tools like you probably know from Photoshop or something like that. So here the lasso or the box, kind of like that, you could just do something. You had some automatic or semi-automatic things where you could flood fill some of the forms that had the same color. But basically what you had to do is draw an elephant shape. Okay, and you ended up with the different elephant shapes, and this was recorded by the system as being the shape. Good. As I said, one of the ways of doing it was the so-called flood filling. So if you had clearly distinguishable regions, no zebras, but rather things of the same color, what you could do is just mask the image and say, I want basically this shape, and then it filled all the pixels with the same color and you would end up with a decent shape. And this shape could be recorded for the image then. However, it only works on monochrome surfaces, and as soon as you have something like here, where the contour is somehow a little bit broken, it will immediately run out, and this is no longer moon-shaped now, is it? Nobody would see a moon like that. Okay. So it's a little bit difficult, and there are many research projects in multimedia retrieval that have been working on the topic, Blobworld, Photobook, you name it, just trying to figure out how to segment at least the foreground objects, here, for example, tiger-shaped. Would anybody see this as a tiger? Well, probably not, but it's better than nothing. It's better than sitting there for every image on the web and annotating the occurring shapes by hand. There are some solutions; however, usually the algorithms only work on special cases. So if you have a special collection, for example one that was built on movies where the idea was to recognize people shapes, then there's usually a way of doing the segmentation automatically, or at least semi-automatically, with arbitrarily high precision. But if you're working on the web, or on general images, it's virtually impossible to do it as yet. Maybe somebody thinks of something clever, but it's difficult. So due to the segmentation problems, shape features were finally removed from all the commercial databases. The prototypes and the first releases already had them, and the representation and the comparison of the shapes usually went very well, but the segmentation problem made them unusable, so they were removed: for IBM from the DB2 extenders, for Oracle from the interMedia cartridge, and also from the Informix DataBlade. They all decided to remove them. Still, it's an interesting point how to do segmentation and how to do the shape features, so we will cover them in the lecture.
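As an aside, the flood filling described above is easy to sketch: a plain breadth-first fill over the pixel grid. The function name and the tolerance parameter are my own additions, not taken from QBIC or any of the other systems mentioned:

```python
from collections import deque

import numpy as np


def flood_fill_mask(image, seed, tolerance=0):
    """Boolean mask of the region reachable from `seed` (a (row, col) tuple)
    whose intensity differs from the seed pixel by at most `tolerance`."""
    img = np.asarray(image)
    mask = np.zeros(img.shape, dtype=bool)
    seed_value = int(img[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not mask[ny, nx]
                    and abs(int(img[ny, nx]) - seed_value) <= tolerance):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```

The failure mode from the lecture is visible directly in the code: if the contour has a single gap of similar intensity, the queue walks right through it and the "moon" leaks out into the background.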
Maybe if somebody thinks of a clever segmentation, or your selection of images is of a specific kind, so you can have segmentation automatically, then I don't want you to not know how to represent shape features and how to compare shape features, but you should be prepared for that. So in principle, you can always define a form as the auto perimeter of something. The shape of something, the silhouette of something if you want it like that. And for the actual segmentation, so how do you get the outline, how do you get the silhouette of things? You can have some heuristics. So if you have areas with similar brightness, color, and texture, they may belong to the same object. Okay? It's reasonable. If you have differences in brightness from one area to another, there might be an edge between them. An edge is always a good candidate for being part of a silhouette. Okay? So this is what we call edge detection. You can fill spaces with morphological operators and just mask them. We will go into that a little bit later. Just see how, like what I did with the flood fill. Okay? Just see how far some space extends to. You may segment the outline as a closed curve. So you find the edges and put them together to a closed curve and that might be the shape. You could also kind of approximate it with some polygons or splines, you know, like just interpolation. And there's a large number of other procedures, actually, how to derive shapes. And none of them works perfectly, but they kind of complement each other. So depending on what collection you have, you might use one or the other. The first one is basically... Shall we do the thresholding? It's not much. Yes, exactly. So thresholding is basically one of the basic ideas. The shapes or the areas that have the same gray value belong to the same shape. And by thresholding, I can distinguish different areas with different brightness, with different intensity values. Okay? Some area that is beyond a certain threshold and some area that falls under a certain threshold. And it's more often than not, in images, you will find that the foreground images are lighter than the background images. Or at least they're different, even if they are darker or they are black, then the background is lighter. So photographing something in front of a similar looking background, I mean, that's not usually what you would do. So a certain threshold can actually separate between regions. And what you do is basically you say that areas that are semantically related, like the screw over here, have similar gray values and can be separated from the surrounding space. Okay? Not like our zebras. And this is actually very often done in automated manufacturing. So quality control of parts you try to find out if something is in order or not. And computer vision knows a lot of these thresholding algorithms, and we will just cover a couple basic of them. What happens basically is you take the brightness information out of the image, and then you say, okay, I put in a threshold, these are the brighter areas, so like here, and these are the black areas, like the screws over here. Okay? And by just figuring out where the correct threshold is, I can separate background from foreground, and then I can do the shape segmentation. Well, the easiest way is just fix some threshold and say, okay, this is the threshold, everything beyond or above 200 in brightness is background, everything beyond that is kind of like foreground, something like that. 
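A tiny numpy sketch of the simplest case just mentioned, a fixed global threshold assuming dark objects on a light background (the value 200 is only the example number from above):

```python
import numpy as np

def fixed_threshold(gray_image, threshold=200):
    """Everything at or above the threshold counts as background,
    everything below as foreground."""
    gray = np.asarray(gray_image)
    foreground = gray < threshold
    return foreground, ~foreground
```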
And in the case of binary images, this is very easy to do. But usually you're dealing with gray value images that show a range of different intensities, where the differences are there and are noticeable, but you can't really put your finger on whether this is a lighter image as a whole or a darker image as a whole, and a fixed threshold would probably not be a very good idea. So we go for a flexible threshold that depends on the gray value histogram and tries to figure out which parts of the histogram belong to the foreground and which to the background. Usually, if you take a histogram, you smooth it somehow, but you make sure you keep the peaks, the masses of the distribution. Okay, and then you go looking for the actual threshold. One of the early algorithms was the Isodata algorithm by Ridler and Calvard, which basically tries to divide the gray value histogram into two parts. You calculate the expectation values of the gray values in the left and the right part, so you separate it somewhere in the middle and calculate the expectation values, and now the new threshold should be the average of the two expected values. So you shift the threshold to that average and recalculate your expectation values. The idea is that the expectation values in both parts of the image move toward the peaks of the histogram. And once they reach the peaks, they stabilize: nothing moves anymore, so the threshold doesn't move anymore, and that's basically when the algorithm is finished. Okay, clear? So what you basically do is: you have the histogram here, you just cut it into two parts, you compute the expectation value of this part and of this part, so maybe here and maybe here, well, maybe a little bit more here probably, and then you recalculate the delimiter as the middle between the expectation values, recalculate the expectation values, and so on. Okay? Easy to do. The second one is the triangle algorithm, from about the same time, which says: you have a dominant peak in the histogram, depending on whether the image is mostly foreground or mostly background. So small objects will leave a big background peak; big foreground objects will give a big foreground peak. Okay? And what I want to do is connect the highest peak with the far end of the distribution, and then I put the threshold where there is the maximum perpendicular distance to that line. So basically I want to get at the foot of the peak, and this is where I take the threshold. Okay? We connect the highest peak in the histogram with the highest brightness value, we maximize the distance to the connecting line, and the threshold goes at that point of maximum distance, usually near a minimum of the histogram, possibly shifted by some constant value back or forth, so we can allow for some error margin. Okay? Also a possibility to determine a threshold. The first one, the Isodata, is an iterative way of finding the threshold; the second one just solves the perpendicular-distance problem directly. Okay? Questions? No? Easy, isn't it? Right. Application examples, so where this thresholding is very often done, is medical segmentation. If you have sonograms, for example, and want to find out something about babies or cancer or different organs, you have imaging tools that color them according to a threshold, like we just showed with some of the thresholding algorithms. And using the actual histogram, you can find: okay, this is the threshold here.
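Before going on with the example: the Isodata iteration just described fits in a few lines. A Python/numpy sketch; it works on the raw pixel values rather than on the histogram, which gives the same class means, and the stopping tolerance is my own choice:

```python
import numpy as np

def isodata_threshold(gray_image, tol=0.5):
    """Ridler-Calvard style iteration: split at a threshold, take the mean
    of each half, move the threshold to the midpoint of the two means,
    and repeat until it stops moving."""
    values = np.asarray(gray_image, dtype=float).ravel()
    t = values.mean()                       # any starting point in the middle works
    while True:
        lower, upper = values[values <= t], values[values > t]
        if lower.size == 0 or upper.size == 0:
            return t                        # degenerate split, give up
        new_t = (lower.mean() + upper.mean()) / 2.0
        if abs(new_t - t) < tol:
            return new_t
        t = new_t

# usage: mask = gray_image > isodata_threshold(gray_image)
```

scikit-image ships the same idea as `threshold_isodata`, if you would rather not roll your own.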
So this is the interesting part, okay? And this is kind of the background part and can be removed from the image, and then the doctor can see, you know, what actually happens, what is there and what is not there, or what should be there. Okay? Good. So for thresholding, there are also area-based algorithms that evaluate thresholds on different image areas. There's not just one background and one foreground of the image, but there might be more objects in the image that belong together, and I will just split the images into different parts and do the thresholding on every part. It can also be done. It depends on your image collection, whatever you have and whatever the nature of your images, that's what you're getting. Good. And if you have the area-based algorithms, one of the possibilities is especially doing color images. So if I look at this image here, the segmentation works very well, and you find all the flower beds segmented. So they are kind of like you see here, there are these parts that are not really belonging to it. So there are changes in color. You couldn't do that with flood fill, but with thresholding you can. And that's kind of very good segmentation in that case. On the other hand, as you see here with the, whatever it is, kind of lizard thing, the different parts, the dark part here, the light part here, are broken down into different, well there are different parts of the animal, one is the arm or the paw, the other one is the body, but you probably would like something on the image that is lizard shaped, and not really having all the different things. So it can be a little bit tricky with thresholding. You have to try it on your collection. Sometimes it works very well, sometimes it doesn't. Okay, questions? Thresholding? Nope. Then I should tell you the pros and cons. Advantage, amazingly simple. Discriminate foreground background. Disadvantage, you have to find the right threshold. You suppose that the background and the foreground image really do change, which is not always true. And complex objects that have different parts like the lizard can be decomposed by thresholding things. Good. And with that picture I want to have the break. 10 minutes? 10 minutes is okay. Good. So let's start over with the second part. So the thresholding was one thing, but I already said that you could not only take the areas of the same color, but you could also figure out what the edges, the delimiters of these areas are. And this is what is usually done in edge detection. So you don't look at the areas, you look at the limits of the areas. And the idea is that once you have a closed curve surrounding some area, then this is the shape of the area. And if you can construct the curves in such a way that semantically consistent objects are surrounded by the curves, then it's perfect. Then you have really the silhouette of some object. And what you basically do is you look at the brightness function. You do a little bit of what we already did when we were trying to figure out what textures are. And how they can be considered. We looked at the directionality of part of the Timurur measure. So we walked through the image in some direction and then saw, well, here it is light, here it is light, here it is dark, here it is light, here it is light, here it is dark. So the intensity function runs like light and then it's dark and then it's light. And then it's dark and then it's light and then it's dark again. Like that. 
And looking at the intensity function as a real valued function, you can always look at the derivatives and see where are extreme points of these functions. So this point over here corresponds basically to this point over there. Because this is where you have the darkness and you came from light and you go back to light. So finding this extreme points, the maxima and minima in this function, how do you do that? Or you do it like you did in school, when you were discussing the characteristics of curves. You look at the first derivative and look at all the places where the derivative is zero. Because once you have a maximum or minimum, it's a point where the derivative is zero. And as soon as you move along the curve, the derivative gets non-zero again. And interestingly enough, there are different kinds of curves. So for example, there might be curves going like that. Okay? For example, this is one of them. Also here, you do have a point where the derivative is zero, but it's not a maximum or minimum. It's just a saddle. And how do you check whether it's a saddle or whether it's a real maximum or minimum? Well, both occur only once. That's right, exactly. So if you look at the first derivative, it has, before and after this point, the same kind of increase. It's either decreasing on both sides or increasing on both sides. But it doesn't change its sign. What you would need is something that really changes negative to positive or the other way around. And the first one, the first function is the gradient. If you think two-dimensional in terms of the function, this is the first derivative. And the second one, so-called Laplace operator, is the second derivative. Describing the increase and decrease of the gradient. So what we are basically looking for, we are looking for the gradient at some point. And the problem with our image is that it's a step function. It's not a continuous function because a pixel has a certain intensity value, and the neighboring pixel has a different intensity value. There's no way how they are transported or how they move into each other. But what you can do is you can try to estimate the differential functions from the point. So you consider the points, the pixels and their intensities as samples of a function. And then you kind of interpolate the actual function. For example, Fourier transformation does it like that. Or you just estimate the course of the function for each pixel from the immediate neighborhood. So you don't look at the function of the entire image. But once you have a picture, you just say, well, I'm interested in the gradients of the pixel here in the middle. So what I will do is basically look at all the neighboring pixels and just consider the differences with respect to these pixels. And then I can see, does the intensity go up or down? And this is all I'm interested in. I'm not interested in the real function and how it behaves. I'm just interested, is it more or less? And if I see some intensity when going through the image in any direction, goes from brightness to darkness and up again to brightness, then I know this is one of the points I'm looking for. And I actually can see that looking at these three pixels. Because probably here the intensity is 35, here it is 0, and here it is, I don't know, something like that. And I don't need to reconstruct the function. And if you do it like that, one of the tools that you have is so-called Sobel filters. They do it exactly like that. 
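Concretely, the classic 3x3 Sobel masks weight exactly such a neighbourhood; applied as a convolution, they give a gradient estimate at every pixel. A Python/scipy sketch (the threshold parameter and the function names are mine):

```python
import numpy as np
from scipy import ndimage as ndi  # assumed to be available

# weights for the horizontal gradient estimate; the transpose gives the vertical one
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def gradient_magnitude(gray_image):
    """Estimate the gradient at every pixel from its 3x3 neighbourhood."""
    img = np.asarray(gray_image, dtype=float)
    gx = ndi.convolve(img, SOBEL_X)     # intensity change from left to right
    gy = ndi.convolve(img, SOBEL_X.T)   # intensity change from top to bottom
    return np.hypot(gx, gy)             # strength of the local change

def sobel_edges(gray_image, threshold):
    """Gradient-based edge detection: keep only the strong changes."""
    return gradient_magnitude(gray_image) > threshold
```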
They estimate the gradient just by looking at the differences in the neighborhood of a pixel. And usually the neighborhood is a little bit bigger than just one pixel, you know, because you might have some noise that will give you a lot of false positives. But if you look at bigger neighborhoods here, you can really see by averaging out the right-hand side and left-hand side pixels of each pixel that you have to determine the gradients, you get a very good impression whether this is an edge, a minimum, a local minimum or not. Okay? The Sobel filters are also much faster than reconstructing the actual function based on the sample points. Well, obviously they are. So this is the way that we will go now. Okay? Clear? Good. So for the gradient-based method, we calculate the magnitude of the gradient at each point. Hmm? Edges show high gradients because you have a change in color. Areas of the same color show no gradient at all. It's just flat. Okay? On whatever level. And then we use a threshold algorithm to separate the edges from the region. So if we do it like that and we go through the image, we find, for example, that the gradient here is not changing much. But as soon as we hit something, the gradient has a steep increase. Okay? And then this point here is a steep increase. And it doesn't change much for the next time until suddenly it goes into the background. So we have a strong decrease here. Okay? And after that, it doesn't change very much. We record these two points. And these two points are basically the points over here. Okay? By taking all these points from the image and thresholding the points with a high gradient and the points with a low gradient, we will get quite a good separation. Okay? Hmm. Now there was a little bit suggestive, wasn't it? Because I'm using a binary or kind of binary image here. So what about the edges where it's kind of like changing very little gradually into something? That's a little bit more tricky, isn't it? So the actual advantage is it, that it's amazingly simple to do it that way. But as soon as you have noise, as soon as you have slow gradients where something, some area, merges into some other area, it does not work anymore. So if you have merging contours or blurred contours, it does not work because you don't have this single point where the gradient all of a sudden changes. Okay? So what do we do? Well, we look at the second derivative, the Laplacian, and look, if it really crosses zero, was the first derivative, the gradient, really positive before and negative after, or negative before or positive after? If so, it went through some valley or through some mountain. If not, it just changed somehow. So the zero crossing is really a test that shows you whether there was an extreme point, a minimum or maximum or not. Okay? And this can be used in noisy images where you do have blurred edges because what you see in a noisy image, even if you have, I will go into that direction and have the pixels here, even if you have small changes of the intensity. So you have the five here and you have the three here and then you have the zero here and then you have the three here and you have the five here. Okay? What you can figure out, it's not a big change in what you do. It's a very slow change, even if you come to from, I don't know, a hundred or something like that here. Even if it's a very slow change. You can see that it's diminishing here and that it's kind of taking up here. So there must be a zero crossing. Okay? 
Because you go down here and you go up here. Okay? This is the basic idea. So what happens, the idea is basically that if you have a blurred edge, so this is your edge, high intensity afterwards, low intensity before, and a gradual ascent. This is the edge. And the edge goes from here to here basically. So you need 10 pixels to really cover the differences in the level of the function. If you look at the gradient, the gradient is kind of slowly, so see the gradient here, the gradient is slowly rising as the function goes up. At some point it's zero. This is the interesting point. This is where it changes. It starts to fall again. So this is the highest point in change that you will notice. And then it's kind of like going down again. Okay? So it's going up before and it's going down again and at some point it's kind of like back to zero. Okay? And if you look at the Laplacian, finally, the Laplacian is positive when the gradient goes up. It is negative when the gradient goes down. So it has to do a zero crossing at some point. Where it does the zero crossing, you have the maximum of gradient change. And thus this is where the edge is. And though you had before an interval where the edge actually happens, by looking at the gradient and the Laplacian zero crossing, you can put your point to exactly where the edge should be. Okay? And you're kind of like ignoring the noise that is on both sides of the edge. Good? Yes, but I like yes, but. Yes. Exactly. Exactly. So one thing is to really make sure that it is a gradient change that actually leads to a maximum. And not just, well, it changes somehow and then stays on that level. It could also happen in which it is not an edge. You know, like it's just a different shift of shapes. It's just, you know, like you have slightly different colors in that part because there's a shadow or whatever it may be, you know. But this is rather slow. And if it really changes, if it goes to something totally different afterwards, this is the sign that it probably is an edge. And you're perfectly right. The edge could be rather a large margin, you know. But to represent the shape, you have to decide which the edge actually is. You cannot have arbitrarily big margins for the edges. Otherwise, it will be very difficult to compare between different shapes. Because if they're just given by some wild margins, how do you compare between them? You know, like the real shape, the real contour, or the silhouette of the thing could be all within these margins somewhere. Makes it very hard. So deciding for the highest change in the gradient is heuristic. I grant you that, but I mean, it's the best we can do. Okay? Good. So for the edge detection is kind of like interesting, as I just said, not to look only at the gradient change, because then you get these margins and you get lots of false positives, but really to zero crossings of the Laplacian. And that gives you better contours, where you just ignore changes in colors that are kind of systematic in the image. If you apply a smoothing filter before calculating the actual derivatives, then you even can lower down the noise further. So you won't get a change in the gradient every pixel that is slightly discolored. So also that is a good idea. Yes, important, it's only zero crossings, not the zero points. Otherwise it could be just a continuing change in terms of the gradient and the gradient for some reason just stopped over two or three pixels and then keeps on changing. This is not what we need as an edge. 
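A sketch of the resulting detector in Python/scipy: smooth first, take the Laplacian, keep the sign changes; weighting the crossings by the gradient magnitude and thresholding, as summarized next, is one reasonable way to turn them into edge pixels. The parameter choices are mine:

```python
import numpy as np
from scipy import ndimage as ndi  # assumed to be available

def zero_crossing_edges(gray_image, sigma=2.0, min_strength=0.0):
    """Laplacian-of-Gaussian edges: smooth, take the second derivative,
    keep pixels where it changes sign, and weight them by edge strength."""
    img = np.asarray(gray_image, dtype=float)
    log = ndi.gaussian_laplace(img, sigma=sigma)       # smoothed second derivative

    # a pixel is a zero crossing if its sign differs from the pixel below or to the right
    crossing = np.zeros(img.shape, dtype=bool)
    crossing[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
    crossing[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])

    # edge strength: gradient magnitude of the smoothed image
    smooth = ndi.gaussian_filter(img, sigma=sigma)
    strength = np.hypot(ndi.sobel(smooth, axis=0), ndi.sobel(smooth, axis=1))

    return crossing & (strength > min_strength)        # threshold away the weak crossings
```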
We need the zero crossings. What we do is we mark all pixels with zero crossing and multiply them by the strength of the edge, so the magnitude of the gradient. And this is our expected value that this really is an edge. This is our certainty. This is an edge. And the actual segmentation can then be done by thresholding. You keep all points that have been proven to be edges and everything below a certain certainty, you just cut out. And you keep the one where you're pretty certain that you have things like that. So for example, if we look at some images, even if they are heavily filled with noise, we get some of the basic contours. So for example here, this is a strong edge that is also detected by a soboil filter. Same goes here, here, like that. Goes here. So we can see that really only the edges are covered by the soboil filter. And for this image it works quite well. Well, if you see at the face, for example, here the nose, that's a little bit messy. And that's a little bit messy after the segmentation, after the thresholding too. You know, like, I mean, we cannot really reconstruct the nose out of the few points here that are there, considered to be edges. So for example, we have this point here which reflects this edge here. But it really doesn't help us in the nose shape, but it was very blurry and very noisy in the original image already. If we have clearer images, it works much better. Okay. This is what I'm trying to say. Good. If we compare the gradient procedure and the zero crossing technique, if we only look at the gradients, we can see that all these crossings that go to some point here, for example, that is discolored, that is noisy, will also lead to that point in the gradient image. But if we kind of look at the zero crossings too as a second indicator, if there's really an edge, if there really is something else before and after, and not just, I mean, in this case, it obviously was light before and it was light after, and there's just this one dark point, you know. So this is not a change in the gradient, but here, it was light before, it is dark after. This is a real change in the gradient. So with zero crossing, we can find that out. We don't get these points. If we just look at the gradient, we will get these points and we will get the other points too. But it's kind of noisier. Okay? Good. Which brings us to our next detour, so we can see a little bit how Matlab does Sobel filters. Okay, so for the detour, I've prepared a small example of how to play with Sobel filters and zero crossing filters. And actually, it's quite simple to do it with Matlab. So first, you need to transform the image from colored image into grayscale or intensity image. And then you just have a very simple function, edge. This is the name of the function. You give us parameter the image in intensity, so gray level image, and the type of filter you want to apply, like in this case, Sobel or zero crossing. But in order to see how good these functions, let's do a live preview of what Matlab actually can and can't do. So I have here loaded two images. I think you have already gotten used to how this can be performed. I'm transforming the images with RGB to gray. This function you've probably used in your first homework. Can you zoom in a little bit? Actually, I don't think I can. Maybe with the magnifier or something like this, but the rest is... Yo! Wow! Nope. I'd like a 150. Hmm. That would probably be easier. Can I? Nope. No. Well, I can copy it somewhere else. Like for example in WordPad. 
But then I will lose some information. Hmm. Okay, so the first transformation would be getting the images into gray values. I've also performed here a transformation of the image to doubles; this is what is needed for the edge function. And then I'm just applying different edge functions to the images. Don't mark it, otherwise nobody can read it anymore. Okay. So, yeah: the first image with a Sobel filter and the second with zero crossing. It's actually quite easy to perform this. Let me show you what actually happens. Go away. I'm going to start with an image you should know from the homework. Okay. So, the quality is not that great on the projector, but anyway: the image in grayscale you have on the right side of the screen, and on the left side is the Sobel filter. Well, as you can see, most of the edges are pretty well recognized. You can see here the eyes, the mouth, a bit of the teeth being recognized. Some powerful gradients are here over the necklace. The head is somehow recognized, and so on. Let's see how this looks for the zero crossing. And actually, go away, to my first surprise I've seen that the zero crossing shows a lot more detail, so it recognizes a lot more edges. And I've investigated a bit to see why, what happens here, what the reason is. Of course, these filters are implemented as functions where you can specify as a parameter the threshold which decides when a gradient counts as an edge. And for the Sobel filter, if you play with this threshold, let me show you what happens: you can decide the amount of detail you want to go into. 0.01 should be a good starting point. So, I've just lowered the threshold above which a gradient counts as an edge, and a lot of edges are now detected, due to the low threshold. MATLAB actually has a function that auto-calculates this threshold and estimates what would be a fitting threshold for an image, so it can be image dependent. Of course, if you want more detail, you lower this threshold, but then not all the edges that you get are actual real edges. This is where the zero crossing filter comes in and checks the second derivative to see whether this is actually an edge or not. I've also brought another picture which has more blur. As you can see, the plane is easily recognizable. The sun, however, is rather blurry, the rays of the sun, so the Sobel filter doesn't really recognize anything here. Then I played with the zero crossing filter just to see if it would be able to get past this blur and recognize something. And I've seen that already with the MATLAB algorithm for estimating this threshold, it is able to draw some edges out of the blur. So, as you can see here, this one is a bit recognized, and this one. You can play some more with the threshold for the zero crossing and get a good representation even if parts of the image are blurry. It depends on where you want to set your focus, but you are also able to get past blur with zero crossing filters. For the homework, you will get a bigger collection so you can play with some more filters. You also have the possibility to use some smarter filters besides zero crossing. You can find all of them in the help: just go into MATLAB and press F1 on edge. Hopefully this is not... yeah, it's all saved locally.
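For reference, the whole detour boils down to a few lines like the following. This is only a sketch; it assumes the Image Processing Toolbox is installed, and 'photo.jpg' is just a placeholder name for any RGB test image.

```matlab
I  = imread('photo.jpg');                    % placeholder file name
Ig = im2double(rgb2gray(I));                 % edge() expects a grayscale intensity image

[bwSobel, tAuto] = edge(Ig, 'sobel');        % MATLAB estimates a threshold and returns it
bwSobelFine      = edge(Ig, 'sobel', 0.01);  % lower threshold: more (and noisier) edges
bwZero           = edge(Ig, 'zerocross');    % zero crossings of a Laplacian-filtered image

figure;
subplot(1,3,1), imshow(bwSobel),     title(sprintf('Sobel, t = %.3f', tAuto));
subplot(1,3,2), imshow(bwSobelFine), title('Sobel, t = 0.01');
subplot(1,3,3), imshow(bwZero),      title('Zero crossing');
```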
So you can also get something like the Prewitt filter, the Roberts filter, or even the Laplacian filter, so you can test and see the differences between what happens when you apply those filters, how you can get past the blur, or how you can get edges where they are actually a bit more spread over the gradient. Yeah. This was it for the first detour. Good. So the next part is some of the transformations that allow you to do even finer segmentation, and one of the renowned transformations is the so-called watershed transformation. What is a watershed? Well, basically, a watershed is that part of a mountain from where the water flows in one direction on one side and in the other direction on the other side. So, for example, there is a watershed running all through Germany: on one side of it the rivers start going down to the Danube River and then to the Black Sea, and on the other side of this watershed they start going to the Rhine River or the Elbe River and flow into the North Sea. And this is, basically, the idea of the watershed transformation. So shapes and areas are considered to be basins that collect water, and if you grow them, they will grow together at some point, and this is the watershed. So every gray value has a zone of influence. Every pixel has a zone of influence where the intensity does not change too much; starting from some of the pixels, you actually flood the surface. You will have these points where you meet other floods that started from other pixels, and this is where you put the watershed; and the watershed is a very good approximation of an edge. That is basically the idea. So the gray values are kind of like a topographical surface of mountains. You have areas with high gray values, this is the top of the mountains, and then you have areas with low gray values, which are kind of like the valleys. And if you mark the distinguished points and start going from there, you will meet at some point, and this is where the edge should be. So what you actually do is you take the minimum values, the minimum gray values of some areas, and then you start flooding with respect to each area. As we can see here, nothing much changes. Same here or here. But then at a certain point, if you have two points and you flood from here and from the lower point you also flood here, they will meet. And the change of the gradient that they have to overcome is kind of a measure for the speed at which the flooding can proceed, so it has to be evenly spread between the two different areas. And what you get is a watershed that shows exactly this specific line. Basically, you flood all the pixels that have the same intensity as your seed pixel, and once you go a notch higher or lower in intensity to flood those, you have to do that for all the other basins too. This is kind of like the level of water in your basin: it has to be the same level all over the picture. And at some point, where the floods meet, you will have the watershed. This is the basic idea behind the watershed transformation. And what you do for image segmentation is a watershed transformation of the gradient. So you look at the image, you just see the intensity function through the image, and you record the gradient. So you have low gradients over here. But once you get into this blotch, you will have a high gradient. If you are inside the blotch, you will have a low gradient again; nothing much changes. You go out of the blotch, you have a change in the gradient again.
You will have a high gradient and after that, you will again have no gradient. This is basically what you do. So, you go from the original image to the gradient image and now do a watershed transformation in the image. So, what you do is you take the points with the lowest gradient and you start flooding the areas of the same gradient. Okay? And as soon as you move up a level in the intensity, so if you enter this area or if you enter this area, you have to do that for all the other basins too. So, also, this can move up an area. And at some point they meet and this is what actually defines the watershed. Good? Clear? Okay. And then you segment the image by just putting the watershed separation delimiter on top of the original image. This is what you get segmented. And you can see that the blurry edges around here are kind of taken away and only the contour of the watershed is kept. One possible way or different possible way of doing it actually. Advantage, you have enclosed and correct bordering. However, the point is really how do you implement it efficiently? Because this change in gradient and how do you kind of like fill the basin is not really trivial. You have to have the right seed points and then it's an iterative procedure to you always kind of like add a grayscale level, which can be pretty annoying. And what happens actually very often if you have two blurry images is that you start with very many points. And since the watershed transformation always ends when you reach the boundary of some other point, you can end up with totally over segmented images. So choosing good starting points for the watershed transformation is very important. And since you choose starting points usually automatically, over segmentation is one of the known problems of watershed segmentation. Okay. Second possibility is active contours. So the idea here is that regions are kind of like limited by a closed curve, which is the salient boundary. And the closed curve could always be a circle. And of course it usually is not a circle. So the way that the image information, that the intensity information along the circle changes gives you an idea of how the circle should be deformed to actually get a good contour. So if I start with a single boundary and you call it the snake, because it kind of snakes it way into some things, and you have the image behind it, like an image that is here dark and here also dark and then goes here, then you can see that the contour in these parts is really on a boundary. But in this part it is not. And the idea of the active contour is that you minimize the energy of the snake curve. So you have two kinds of energy, the internal energy, that is kind of like you want it to be a circle. And as soon as you have to deform it, as soon as you have to do spikes, because at some point you should of course make it like this, as soon as you have these spikes you lose energy, you lose internal energy. It's hard for a circle to be deformed into spikes. But you have the external energy, which is the image energy given by the gradient. If the external energy is very strong in places like here, where there is really a strong gradient, then you can't deform the internal energy. But if the external energy like here is very low because the gradient is basically zero, then the deformation the internal energy can take over. 
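(A quick aside, coming back to the watershed transformation from a moment ago: in MATLAB, the watershed of the gradient image can be sketched roughly like this. It assumes the Image Processing Toolbox; 'photo.jpg' and the 0.05 minima depth are placeholders, and the imhmin step is one common way to fight the over-segmentation problem mentioned above.)

```matlab
I = im2double(rgb2gray(imread('photo.jpg')));     % placeholder image

% Gradient magnitude via Sobel masks: this is the "topographic surface" to be flooded.
h    = fspecial('sobel');
gmag = sqrt(imfilter(I, h, 'replicate').^2 + imfilter(I, h', 'replicate').^2);

% Suppress shallow local minima so that fewer basins get seeded; without this,
% almost every noisy pixel starts its own basin and the result is over-segmented.
gmag2 = imhmin(gmag, 0.05);

L = watershed(gmag2);             % label matrix; L == 0 marks the watershed ridge lines
out = I;
out(L == 0) = 1;                  % paint the ridges on top of the original image
imshow(out);
```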
So you model the segmentation problem as two energies working against each other, the internal energy trying to build a circle like curve, and the external energy pressing on this curve with the power of the gradient. The stronger the gradient, the more pressure is applied to this kind of deforms the curve. That is the basic idea behind it. So what you do is you start with a circle and then the internal energy will try to broaden up the circle. But as soon as you hit some area with a high gradient, you need lots of energy to push against it. At some points you don't manage. So you expand and you expand and you expand some further. But here you cannot expand anymore because the external energy gets too strong. Stronger than the internal energy, stronger than the expansion energy. Same here, very high gradient, so this spike in the curve cannot be filled. The internal energy tries to make it a circle, tries to really smooth it. Try to get like this. But the external energy is pressing down here through the gradient, through the great change of the gradient. And this is kind of what gives the spike. And then you end up finally at some curve that has a good curvature. So the internal energy has tried everything to make it a circle, to make it some good shape. And the external energy in all the places where it is very strong has deformed the circle. Okay, yes. So you mean something like a ring shaped? No. Like that. And there's kind of a spike here or something like that? Because it's a closed contour. It always depends on the external forces. So if your external force is really of the kind that it looks like that, you know, but very specifically there's nothing within the spike, then the internal energy can press into the spike. And basically you again have the balance of power. So the external energy presses back in all the directions. And the internal energy wants to do a circle so it will probably not go all the way. But it will look at a good size. So probably you end up with something being slightly smaller than what you need. Okay, so this balance of energy does not really do very spiky parts. Okay. Yes? The balance? Well, you could, but of course that's not a sensible thing to do. I mean you will have to do it automatically for segmenting your image. And of course the curve that you finally get, the active contour, that is basically what you end up with as a segmentation. And of course you probably do want basic shapes. You don't want very spiky things where every little kind of like shadow or whatever it may be is taken into account but you want the basic shape of things. That is why you use active contours. So if you kind of like have human shapes, you don't want every detail, but you want kind of like probably something for the head and something for the arms. Oh, that's bad. Something like that. So with active contours you will probably end up with a gingerbread man. But still, I mean this is the basic contour that you get because it has a good internal energy. It's kind of like almost circle shaped. And if there really was a person behind that, like in the picture, the changes in the gradient kind of put the pressure on the curve as an external energy. Okay? Good. So what you can do is you can also work with fuzzy edges because you're kind of like balancing this energy and if the gradient is slow in some area, the curvature takes over and makes it a nice curve, so to say. 
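A rough sketch of trying this in MATLAB, assuming the Image Processing Toolbox; note that the built-in activecontour is not exactly the classical snake described above, but its 'edge' mode is driven by image gradients in the same spirit. The file name and the circular start mask are placeholders.

```matlab
I = im2double(rgb2gray(imread('photo.jpg')));     % placeholder image

% Initial "snake": a coarse circular mask roughly centred on the object of interest.
[rows, cols] = size(I);
[X, Y] = meshgrid(1:cols, 1:rows);
mask = (X - cols/2).^2 + (Y - rows/2).^2 < (min(rows, cols)/4)^2;

% Let the contour evolve; the 'edge' method uses image gradients as the external force.
bw = activecontour(I, mask, 300, 'edge');

imshow(I); hold on;
visboundaries(bw, 'Color', 'r');                  % draw the final contour on the image
```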
The disadvantage is of course that with the accuracy of the contour, so with what you actually want from the contour, the complexity of the curve increases. And this of course means that the energy has to be calculated in a different way. I mean, the easiest way is to always keep it a circle. But if you want more detailed shapes, or precisely spiky shapes or something like that, defining the inner energy is very difficult. And of course you have this initial snake curve that you paint and that then starts expanding. But this is a semi-automatic process. So usually you draw that, and then it fits itself to the shape that you have, which is very helpful. But you have to draw the initial curve; you usually cannot get it automatically. And that brings us to our last detour today: another way of actually segmenting things, morphological operators. Okay, so we said we want to segment in order to get shapes out of the images. But this kind of operation might be difficult when we have noise. Or imagine, in the production environment, when we said we had nuts and bolts, and there is an area where you have some kind of shading. The light doesn't fall that well and you have, for example, something like this: something where, in the gray values, the shading actually becomes gray, and you can't really tell the difference, where does this bolt start and the other one end. When you have something like this and you start performing segmentation, then what you come up with is something like this, probably also with another circle inside; you can't recognize this kind of shape. So what you actually want to do is to make these shapes recognizable and also easy to describe. And in order to perform this, you apply a pre-processing step that cleans the image from artifacts like this, or this, or this. Or, as I've said, this connection here. And this is what morphological operators can do. So the idea is that morphological operators are operations that work on neighborhoods. So I don't take the whole image, but I look around a pixel, in four or eight directions, and see how I can actually improve the surface. And I can do this in a binary way, by either adding new pixels or removing pixels. And of course I first have to establish what my neighborhood is, what the shape of my neighborhood is. In the case of morphological operators this is usually provided by the so-called structuring element. The basic operations that morphological operators can perform are dilation, which you should imagine as actually inflating the surface, so I'm making the surface a bit bigger by adding pixels to the area, and its opposite, erosion, so I'm shrinking the surface by removing some of the pixels. And now we have to talk about the structuring element, so the neighborhood to which I'm applying these operations. Two of the most used such structuring elements are the cross, so symmetrical: this is the pixel I'm considering, and its neighborhood is represented by these four pixels. Or a square neighborhood, the same but in eight directions. So if here my neighborhood is given in four directions, up, down, left, right, here I'm also considering the diagonals. Okay, so the dilation. I'm adding pixels to the image. The idea here is that I have my image as a matrix of intensities, and I'm going over each pixel with my neighborhood shape, either the cross or the square.
And I apply it this way to each pixel, look at my neighborhood and say: if in my neighborhood there is a black pixel, then this pixel should also be black. That's everything. So I basically want to increase the black area by looking at the neighborhood of each pixel. If I don't have any black pixel in the neighborhood given by the structuring element, I leave the pixel as it is, unchanged. And the effect of this dilation process is that, for one, I increase the area, I create more black pixels, I change white pixels to black pixels, and the other one is that I'm connecting small objects, or filling the small holes you've seen in the first bolt. But let me show this on an example. So I have here an original image given by these gray pixels here. And what I'm doing in the dilation process is taking one of these neighborhood shapes, this structuring element, and putting it onto each pixel. I'm starting from the first pixel: up and down I'm getting out of the image, so nothing interesting happens. On the right I have a white pixel, not a black pixel, so I'm not going to do anything. Below I have again a white pixel, so nothing is going to happen. Here. So I continue in this direction, and here. And then I reach this pixel here. Well, no, sorry: something happens when I reach this pixel here. Applying the cross again, when I look down, I see that there is a gray pixel, so a black pixel. And then I change this pixel here by setting it to black, and I say I'm inflating the area. The same goes for these pixels here: below them there is a gray intensity pixel. And for this one, and for all the pixels that have been marked here in the image in black. So these are the inflated pixels. This is how my shape grows; what you see here in black is the growth. And if I apply the second structuring element to the same image, then I get a bigger inflation, because I consider two additional directions, the diagonals. Just imagine taking this pixel here: if I put the structuring element, this square, here, then in this direction I have this black pixel, so this one also becomes black. Is it clear? Okay. Now the erosion is exactly the opposite. I again go over the whole image, pixel by pixel, again choosing one of the two structuring elements. Only this time, I turn black pixels to white, based on the same idea about the neighborhood: I'm on a pixel and I look in the neighborhood, is there a white pixel? If yes, then make this pixel also white. The effect of this is that thin spots disappear. You have a thin spot as a black pixel, you look around in the neighborhood and you see that upwards there is a white pixel; then this pixel should also be white. The other effect is that you separate areas that are connected only by thin links, exactly as we had with the two bolts. If we come back again to the original image and we perform the dilation, then this pixel here will disappear: it will be transformed into black. Again, this happens also here. But if you take a look here, what happens? This connection here grows, it inflates. On the other side, performing erosion on the same original image: it erodes around this hole here, so the hole grows. But it also destroys these connections here, because of the small hole that was originally there. So if you consider the two operations separately, there is kind of a problem: on one side, you want to fill in the holes, so you want to perform a dilation, but on the other side, you want to perform an erosion here.
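(To try this yourself: a minimal sketch of the two operations in MATLAB, Image Processing Toolbox assumed. 'bolts.png' is a placeholder for any grayscale test image; note that MATLAB treats the white pixels, the ones, as the foreground, while the slides draw the objects in black. The compositions imopen and imclose are exactly the openings and closings discussed next.)

```matlab
BW = imread('bolts.png') > 128;      % placeholder: threshold a grayscale image, objects become 1

se4 = strel('diamond', 1);           % the "cross": 4-neighbourhood structuring element
se8 = strel('square', 3);            % the square: 8-neighbourhood structuring element

dil = imdilate(BW, se4);             % inflate: a pixel becomes 1 if any neighbour is 1
ero = imerode(BW, se4);              % shrink:  a pixel stays 1 only if all neighbours are 1

opened = imopen(BW, se8);            % erosion then dilation: removes thin bridges and specks
closed = imclose(BW, se8);           % dilation then erosion: fills small holes and gaps

subplot(2,3,1), imshow(BW),     title('original');
subplot(2,3,2), imshow(dil),    title('dilation');
subplot(2,3,3), imshow(ero),    title('erosion');
subplot(2,3,4), imshow(opened), title('opening');
subplot(2,3,5), imshow(closed), title('closing');
```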
You want to cut such connections. Actually, what morphological operators do is perform some kind of composed operations, the so-called openings and closings. The idea by openings is that you first perform an erosion, so you cut those connections, and then you perform a dilation. You fill some of the holes that were created, you reconstruct, you inflate a bit. So you shrink, then inflate a bit. You eliminate thin and small objects, some artifacts that you have in the image. You also break the areas, the thin areas, and you smooth the edges. This is what opening does. On the other side, you can do this the other way around. You perform first an inflation, the dilation, and then the erosion. You first fill the holes, then you join the closed objects and smooth the edges. And this looks something like this. So if you perform an opening first to our image, then the screws remain of the same size. So you've performed first erosion, then dilation. You've broken this connection here, but you still have this hole here. On the other side, by performing a closing, again, you keep the size, the area intact. You don't destroy it by inflating it or eroding. You've filled all the holes, so you don't have any hole here, you don't have any hole here. And again, you create some new connections. So actually, you have some advantages. The idea of using some morphological operators for image processing is to obtain good shapes, shapes that you can further segment with new filters, something like the Sobo or Zero-Crossing, whatever you can name it and use it on the new image. But the disadvantage is that the gray values of the images you use as input for these morphological operators, they must be rather uniform. If you have jumps in the gray levels, then you have a problem. You can't use them. And there's another issue. You've seen the control is rather difficult. If you have holes and connections, then it's rather difficult to decide what am I going to apply in which sequence. How can I make sure that what comes out of this is actually much better and can be easily segmented? So in order to get this image the precision, the control is rather difficult to perform with morphological operators. And it seems we are at the end of the lecture. So today we've spoken about multi-resolution analysis. We've seen actually how we can reduce an image through different resolutions, through averaging of pixels, folding, and folding all the vertical, folding on the horizontal, and I'm getting up to the smallest representation of an image, the average, one pixel, representing the average color. And the most important is that I don't lose any information. So at each step I have to store the difference so that I can reconstruct the image back. The biggest advantage is that the difference contains a lot of zeroes, so it's great for compression. Then we've discussed about shape-based features. The first possibility is using some kind of threshold-based procedures. We've spoken about the ISO data and the triangular approach. The second possibility here is that with these algorithms you can differentiate quite fast between the foreground and the background in an image, just by choosing a dynamical threshold in the histogram of the image. We've spoken about edge detection, about the Sobel filter, and the zero crossing, about how we can use the intensity information in order to detect edges in images. And we've spoken about some pre-processing possibilities in order to clean up images for artifacts and other problems. 
These are the morphological operators. Next lecture. Next lecture we'll discuss about query by visual example. We'll continue with the shape-based features. We'll discuss about chain codes, Fourier description, and about the more complex moment invariance. That's it for today. Thank you for the attention.
In this course, we examine the aspects of building multimedia database systems and give an insight into the techniques used. The course deals with content-specific retrieval of multimedia data. The basic issue is the efficient storage and subsequent retrieval of multimedia documents. The general structure of the course is: - Basic characteristics of multimedia databases - Evaluation of retrieval effectiveness, Precision-Recall Analysis - Semantic content in image-content search - Image representation, low-level and high-level features - Texture features, random-field models - Audio formats, sampling, metadata - Thematic search within music tracks - Query formulation in music databases - Media representation for video - Frame/Shot Detection, Event Detection - Video segmentation and video summarization - Video Indexing, MPEG-7 - Extraction of low- and high-level features - Integration of features and efficient similarity comparison - Indexing via inverted file index, GEMINI indexing, R*-trees
10.5446/342 (DOI)
Okay, so hello everyone, welcome to the lecture of multimedia databases and as Tilo said last time I will hold this lecture today. Previous lecture we've started speaking about the basics of audio. We've started speaking about how to represent sound. We've spoken about the time domain, how this is represented through waves, about the frequency domain, about the Fourier transformation we can use to transform sounds into the frequency domain, how we can represent the sounds then through this Fourier coefficients or wavelet coefficients, and we've introduced the idea of audio retrieval. In this lecture we'll go further, we'll recapitulate a bit of the low-level features, we'll go deeper into them, we'll speak about the difference between, the smallest difference between sounds human ear can perceive, and we'll also speak about methods to recognize the pitch. Before I go further with the lecture I just want to say I'm sorry for the technical problems, we have experienced with the last lecture, so we can, I have only recovered just a part of the video, I think only the first 40 minutes, it seems we have some technical problems with the multimedia PC we are using for our recordings, and it seems also for the information retrieval lecture, the sound is missing from the lecture from yesterday, so we'll try to resolve these technical problems as soon as possible. Hopefully this lecture will be complete in the video. As we have spoken last time there are some typical low-level features which can be used to describe audio. The first one is the loudness of a sound, so how loud is the respective sound? We call this the mean amplitude, another low-level feature is the distribution of the frequency, the sound contains, so the bandwidth, the lower frequencies up to the higher frequencies. As you remember the human ear can hear somewhere between 50 and 20,000 Hz, and of course the bandwidth will be somewhere in between. Sounds can be also above for example 20 kHz, but it doesn't make any sense to keep them, so I have already spoken about last time such unperceivable sounds can already be removed, improving the compression rates. Another low-level feature is the brightness of a sound, this actually tells us how many high frequencies are there in our sound. Where is the center of the gravity of our frequency spectrum in the sound? If I have music for example I would expect that the frequencies are quite spread all over this frequency spectrum with a relatively high brightness, so a lot of higher frequencies. Then there are the harmonics as we have said last time, besides the synthesized sound, the standard tone that I've created with Matlab last time. That one had just a 440 frequency, but no harmonics. However, in real life sounds have harmonics. There is a fundamental frequency and then there is an oscillation, which represents actually the harmonic of this fundamental frequency which repeats and decreases in energy. Then there is a pitch, the pitch that actually represents what the human ear and the human brain understands as this frequency. Where does the human ear think is the height of the sound? What frequency was that sound? Was it a 440? Was it a 5,000? This is the pitch. These low-level features can be measured of course in the time domain as we have seen this sine representation. For each time you have an amplitude, the loudness, or in the frequency domain. In the frequency domain you can see what kind of frequencies are there in the sound and what amount of that frequency. 
If you have a pretty high bar at 440, this means that the 440 Hz frequency has a lot of energy, or that there is a lot of this frequency in that sound. We already see that we have two things, time domain and frequency domain. How can we travel from one to the other? Of course, we've spoken about the Fourier analysis. It's simple to perform, and it also helps us characterize the sound through the coefficients of the Fourier transformation. The issue is that these two representations, the time domain and the frequency domain, don't separately show the whole truth about our signal. The time domain doesn't show any frequencies. On the other side, the frequency domain doesn't really show when these frequencies occur: it shows how much of each frequency there is, but not when. So there was the idea of using a new kind of representation which shows both the frequency and the time when it happened. This is nothing else than the so-called spectrogram. The spectrogram combines the representation of the time domain with the frequency domain. The idea is to use just an image which has time on one axis and the frequency component on the other. The gray values, as you have seen in images, show the intensity, the energy of that frequency: how much of that frequency is present in the sound at that time. Nothing more. Practically, this is used, for example, in pattern recognition to analyze the regularity of occurring frequencies: how often do certain frequencies occur, so that I can compare sounds based on frequencies that recur at certain points in time. I have here the spectrogram for the spoken word 'Durst', thirst. As one can see, here we have the time domain and here we have the frequency domain, the frequency representation. As you can see, at the beginning of the word we have a lot of low frequencies: 'Durst', the 'd'. Somewhere here we have probably already reached the 's', so we have less of the low frequencies; I mean, in this spectrum here we have a lot of white in this region, so we don't have a lot of low and high frequencies. Then comes something like a pause, and then comes the 't', as you can see. So this would be a way of representing just the sound. And it's quite intuitive, because then you can see: okay, I have lows here at the beginning, then I have a break, no frequencies, then I have again some frequencies, but higher. And we're at the first detour already. Let's play with some spectrograms. I'm already here. Okay, so I've prepared a sound for you. And the corresponding spectrogram (go away) looks, as you would expect, something like this. So again, time domain, frequency domain. We are starting with some low frequencies, but then we are going up in the frequency domain as time grows, going up, up to 500 Hz, and then going down again. So if you correlate the sound with this image, it's kind of intuitive, right? You get a graphical representation, with frequency and time, of what you've just heard. It fits. Good. But then I found something which I find rather interesting. Let's hear another sound. This is actually a music piece from a CD by a band called Venetian Snares, and the album is called Songs About My Cats. This song is called 'Look'. Not 'Listen': so, look. But we should start with listening. Can I stop it somehow? Probably not... so I don't think that... ah, it's over. So this is what they are selling. This is music. But there is more to it. The idea is actually quite interesting: what they've done is they've encoded images of cats in the spectrograms. Look.
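(As a side note before we look at what is hidden in this track: a spectrogram like the ones above can be produced with a few lines of MATLAB. This is only a sketch; it assumes the Signal Processing Toolbox, and 'durst.wav' is a placeholder file name.)

```matlab
[x, fs] = audioread('durst.wav');    % placeholder file; x = samples, fs = sampling rate
x = mean(x, 2);                      % fold stereo down to mono if necessary

% Short-time Fourier transform: 25 ms Hamming windows with 50% overlap.
win = round(0.025 * fs);
spectrogram(x, hamming(win), round(win/2), 1024, fs, 'yaxis');
title('Time on the x-axis, frequency on the y-axis, energy as colour');
```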
Back to the picture: I think you can best recognize probably this part here, the shape of a cat. What they've actually done is they've encoded, like frames in a video, pictures of cats in gray intensities, and then transformed this into sound. So it's actually quite a nice idea. This is why it's called 'Look'. Actually, they've started a trend; there is a Wikipedia page where this is described. This is the last song on their CD. I have no idea if the rest of the songs are something like this or if they're real music. I didn't hear them, and I'm actually also not interested in the rest, but the idea was nice. Then I checked YouTube yesterday. Let me see: spectrogram face. A lot of people have played with these effects, turning their face or pictures into sound. So this is how this guy sounds. Nice ideas. If you want to get rich fast, just make an album of your face or your favorite pictures, and who knows, maybe someone will buy it. Yeah. That would be everything about spectrograms. Okay, so now we have the means of bringing sound into the time and frequency domain, and we can maybe compare the images visually. But what we actually want to do is to differentiate between sounds based on these low-level features. So we actually want to perform some kind of classification. And then we want to build different audio classes with different properties of our low-level features, which, you might say, could be pretty helpful in describing these classes. This is exactly what we want to do. And we want to store these low-level features, per sound or per class, in a feature vector as a representation of that class. The most basic example is of course to take two different classes of sounds: this would be speech and music. The characteristics in each case are relatively hard to predict, so you can't say out of the blue, well, I would expect that for music the brightness or the bandwidth is so and so. But there are general trends. Usually music, for example, contains many more frequencies, higher frequencies, than voice. And we've already spoken about this, the standard for telephony: it goes up to, I don't know, maybe 8 or 9 kilohertz, so 8,000 to 9,000 hertz. On the other side, for music, the upper end goes up to the limit of what the human ear can hear. So of course you may expect that the brightness and bandwidth for music are higher. But the secret is actually that if you want to differentiate between different classes of sounds, like speech or music, you may be forced to use not only these features, but the whole spectrum of available features. And we have dependent and independent features. Let me repeat a bit about the bandwidth. For speech, the bandwidth is rather low; in music, as you can see, it covers a broader spectrum. For the brightness: the brightness represents nothing more than the central point of the bandwidth. In speech, of course, it should be concentrated somewhere between 100 and 7,000 hertz, for a male voice probably somewhere around 250 to 300 hertz, and for a female voice probably a bit higher. For music, on the other side, you have a lot of high instruments, high-frequency instruments, and the high instruments will pull the brightness up. One other deciding factor is the proportion of silence. As we have, I think, also spoken about last time, when you speak you make a lot of pauses between the words and also between the sentences. But this doesn't really happen in music, at least not in pop music, or what one hears today.
Maybe in classical music, an opera or something like that, you would have someone singing, then a bit of pause, then someone else singing. But usually this proportion of silence speaks for a differentiation between human speech and music: in music you always have the instruments in the background, they make some noise, so there is no silence, or at least not that much silence. Another factor is the variation of the zero crossing rate. Again, music tends to be rather constant when you look at the zero crossings over time. This means you get to see a regularity in the zero crossing moments, a pattern. For the voice, on the other side, this tends to be rather irregular, so the variance is higher. So we already see that if we consider all of this together, we might be able to differentiate between voice, in speech, and music. Actually, this idea was used in 1998 by Lu and Hankinson, and their idea was: okay, why not build a system? An audio file comes as input, and we start with the brightness. We measure the brightness. If the brightness is higher than a given threshold, it must be music, because humans don't really speak with that high a voice. Well, maybe, I don't know, if you've had a bad day and you want to scream at someone, you might raise the tone of your voice and speak with rather high frequencies, but I wouldn't expect that to be normal. So as a heuristic, this check is quite good: if the brightness is above a threshold, this is music. If it's low, we need to look at something else. They next considered the proportion of silence. So let's look at the proportion of silence. If this one is low, again under the assumption that music comes with instruments in the background, not only with the voice, then we have the low proportion of silence typical for music. If it's high, there is a high probability that it is voice, but let's do another check: the variance of the zero crossing rate. If this variance of the zero crossing rate is low, again, this is music, but solo music, music without instruments; because, as we've said, solo singing also has a large proportion of silence, so we can't separate it from speech by silence alone, and this last differentiation is made by looking at the variance of the zero crossing rate. And if this is also high, so if you have a low, high, high, then the audio file is speech. It's a rather simple classifier, and actually the idea is not that bad. Of course, the highs and lows, as I've said, depend on a threshold. Somewhere you need to say: yes, the brightness is high, but what does high mean? Above 7,000? Above 10,000? So it depends on a threshold. And this threshold, again, depends on the collection. If you have, I don't know, pop music and speech recorded in a large room, you may have one threshold; if you have other conditions, then you have other thresholds, which define your highs and lows. Of course, these thresholds can also be used to differentiate between a typical male voice, a typical female voice, and so on. But usually these thresholds are learned. They are collection specific, and they are learned through training examples, where I can say: okay, I have here five speeches, I have here five music pieces, and just by comparing, I will tell you this is speech, this is music, and cut somewhere in the middle to make the differentiation. So this is how you can, by training, find out these kinds of highs and lows.
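A rough sketch of how the three measurements in this cascade can be computed for a short recording; this is my own simplified reading of the Lu/Hankinson idea, not their implementation. The file name, the 0.01 energy floor and the thresholds T1 to T3 are placeholders, standing in for exactly those collection-specific values that would have to be learned.

```matlab
[x, fs] = audioread('clip.wav');                         % placeholder file
x = mean(x, 2);                                          % mono

frameLen = round(0.02 * fs);                             % 20 ms frames
nFrames  = floor(numel(x) / frameLen);
frames   = reshape(x(1:nFrames*frameLen), frameLen, nFrames);

% Brightness: centre of gravity of the magnitude spectrum (spectral centroid).
X = abs(fft(x));  X = X(1:floor(end/2));
f = (0:numel(X)-1)' * fs / numel(x);
brightness = sum(f .* X) / sum(X);

% Proportion of silence: fraction of frames whose RMS energy is very low.
rmsE    = sqrt(mean(frames.^2));
silence = mean(rmsE < 0.01);                             % 0.01 is a placeholder energy floor

% Variance of the zero-crossing rate over the frames.
zcr    = sum(abs(diff(sign(frames))) > 0) / frameLen;
zcrVar = var(zcr);

% The cascade, with made-up thresholds standing in for the learned ones.
T1 = 4000; T2 = 0.2; T3 = 0.01;
if brightness > T1
    label = 'music';                 % very bright: assume music
elseif silence < T2
    label = 'music';                 % little silence: instruments in the background
elseif zcrVar > T3
    label = 'speech';                % much silence and irregular zero crossings
else
    label = 'solo music (a cappella)';
end
disp(label)
```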
However, if you have something which doesn't really fit into what you have trained, so some kind of data which is not similar to what you've trained, then you have a problem. You might have misclassification. So what you really need to do is to choose a rather large spectrum of sounds which will cover your query information, will be similar to the environment you are going to use your query for and train on that. Of course, after you've performed this kind of training, you can compute future vectors, again, brightness, bandwidth, portion of zero, variance of the zero crossing, they can be seen as a feature vector also. And then you can compare a query sound to the classes, maybe to the centroids or to the sounds that are most representative for a class, and compare their feature vectors. It's just like we've done in images. You can compare the distance, any distance you want, and by computing this distance, you classify a new sound as being more similar to one or the other class. It's really not difficult. And we start with another detour, which shows you exactly this approach of Lou and Hankinson. I have here speech and music. First we'll hear Mr. Bush. Yeah, so an interesting speech. I've just extracted a part from it, and I'm going to compare it to this. Okay, let's see some graphical representation of what we've just heard. First we have the speech in time domain. Then we have the music piece, again in time domain, and then in the frequency domain, each of them. One could look and say, okay, for the speech, I have somewhere up to maybe, I don't know how much is here, 7,000, 7,500. And if one would look, this is a factor of 10 minus 3, so you can imagine that these values are quite small compared to this one. This one doesn't have any multiplier, but this one goes a bit above 10,000, so maybe 12,000, 12,500. This already gives us maybe some hint, and you say, okay, the bandwidth is much larger for the second sound. It's about the double of it. Then maybe you can also see here from the loudness something, this is much louder than this one. These are just some, yeah. Sorry. Microphone, where is the microphone? How would it be if you compare the speech to... Out instruments, only voice? And we might have solo music. You can compare the portion of silence. You can compare also the zero crossing, and probably the zero crossing is what will make the difference. This is what Lou and Hankinson have said, that speech is much more irregular than song. And in song, then you'll have these zero crossing moments that repeat themselves. So the variance of the zero crossing would tell you that for a cappella, this should be smaller than the variance of normal speech. But this is just what Lou and Hankinson have said and what I would expect. Actually, I've tested this further, and just to see the portion of silence. So it's quite a difference of what you see. I've split the histograms in 50 bins. And what you see is the low frequencies, so the null frequencies, nothing, the silence. And for the speech, this is quite large. It's with the factor of 10 of the power of 5, which is already a factor in power more than what you get in the music. So this is a clear hint that this is speech and this is not speech. So it's a clear difference. But of course, as I've said, if you would have something like the case you have mentioned, then you can calculate also this variance of the zero crossing. Actually, I've calculated here and it's again a factor bigger, but it doesn't show me that much. 
So actually, I'm not that happy about this crossing rate and according to their algorithm, I would stop probably after the portion of silence and say, OK, this is clearly a factor more silence in the speech. So this is speech. But it's not the best algorithm as you can see. OK, let's continue with the lecture. So we have some ideas about how we could classify this kind of stuff, but it's not enough. We need to get better. So why not divide the audio pieces into time slots and try to characterize these pieces of music through some features. Maybe the features we've already discussed about. And then what we really want to do is compute a vector for each of these windows. This vector will represent our low level features we've just spoken about. And then why not take also some statistical characteristics, things we have already done in images to characterize this kind of features and compare them for two sounds. And this should bring me a perceptional comparison of audio files. At least this is the goal. I want to use some coefficients in a vector to compare, perceptionally compare sounds. OK, the features again. What kind of features? An example was taken, was implemented by Wolff in 1996 based on the aesthetic coefficients. Well, the most important features that would allow me to differentiate between these are the loudness. And I could measure this based on the root mean square of the amplitude values in the time domain, the decibel. And of course, maybe I should consider more sophisticated values that respect. I've said I want this classification to be humanly, to respect the human perception. So I should not take care about sounds which are coming after a very loud sound. So if you have, again, the construction yard example with someone with the stone breaking device, which makes a lot of very powerful sound. And at the same time, someone speaks on the street, you won't hear the one speaking on the street. So you shouldn't take that one in the calculation of the loudness. You should just take in the calculation of the loudness what humans hear, because you want the loudness to respect the human perception. Again, don't take the loudness for frequencies which humans don't hear. Under 50 hertz, above 20,000, they might influence the loudness if you compute them to this root mean square. But then it won't be perceptional anymore, because you will get a value which humans don't really hear. Again, he took the brightness. He defined this as the center of gravity of the Fourier spectrum, and he defined it as a logarithmic scale, considering that, well, you have some, a few low frequencies and a lot of high frequencies. So if you consider above 7000, the spectrum is much more comprising. This is why a logarithmic scale would fit something like this. And this actually should describe the amount of frequencies, of high frequencies in the signal. So if someone speaks with a higher voice, then the brightness would be higher. Again, for music, if you have a lot of high frequency instruments, then the brightness of the sound will be higher. And then the bandwidth, or the frequency bandwidth, which is nothing more than the weighted average of the difference of the Fourier coefficients for the center of gravity of the spectrum. The whole idea for the bandwidth is that you need to weight it by the amplitude. So actually you are going to provide more importance to frequencies which are lower or higher, which will define your spectrum, but which also are hearable, so are louder. 
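(For a single analysis window, the loudness, brightness and bandwidth just described can be sketched like this. This is my own rough reading of those definitions, not the original implementation; rms and hamming come from the Signal Processing Toolbox, and the function would be saved as windowFeatures.m.)

```matlab
function feat = windowFeatures(w, fs)
% w = one analysis window of samples, fs = sampling rate in Hz.
    X = abs(fft(w(:) .* hamming(numel(w))));
    X = X(2:floor(end/2));                        % drop DC and the mirrored half
    f = (1:numel(X))' * fs / numel(w);            % frequency of each remaining bin

    % Loudness: RMS level of the window in dB (eps avoids log of zero in silence).
    feat.loudness = 20 * log10(rms(w) + eps);

    % Brightness: amplitude-weighted centre of gravity on a logarithmic frequency scale.
    feat.brightness = exp(sum(X .* log(f)) / sum(X));

    % Bandwidth: amplitude-weighted spread of the spectrum around that centre.
    feat.bandwidth = sqrt(sum(X .* (f - feat.brightness).^2) / sum(X));
end
```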
This will define your bandwidth margins. And as the fourth feature, he has used the perceived pitch. And what this actually tells me is how the human ear and brain perceives the frequency height of a tone in a certain interval. So for example, if you synthesize a 440, maybe the human ear hears the standard tone A. But there is a difference between the sound you can see in the frequency domain and what the human brain synthesizes. We'll speak about theoretical models and how the human ear actually functions and how humans actually perceive pitch. But what is important to know is that you need some kind of a way to estimate, as we've done in the in the spectrogram, what's the frequency currently being played at a certain moment. And this is the pitch. And the pitch tracking, as I've said, is rather difficult to perform automatically. We'll speak about three algorithms to perform this, but it's quite related to something we call the fundamental frequency. The fundamental frequency being the lowest, highest frequency. So with the most presence, most intensity in the sound. And if you can't manage a software that is able to perform pitch tracking as humans can perceive it, then just relate to the fundamental frequency. This was the idea of the static coefficients. Okay, now, as I've said, the signal is split into windows. And for example, this is the representation of laughter. As you can see, it starts quite loud. It's ha ha ha ha ha. So it goes up and it's a bit in steps. So ha ha ha. And this is also what you see in the loudness, a peak in the loudness, then another one, then another one, and then the sound terminates. Also given by this trend here, this is what the loudness describes in the time domain. Then you have the brightness. It starts quite high. And then with some variances at the end, you have some peaks and then it goes down up to the nothing to no sound. Again, the bandwidth here shows you that the first portion of the sound, the ha, is quite limited, high. So you probably have somewhere between 5,000 and 6,000 Hz. And then it's compactor and compactor. The pitch, again, the first ha, is perceived quite high. And then the frequencies go down. Now, the idea is I want to describe this kind of sound with these four features statistically. How can I do this? Well, I get the statistical moments. One would be the average value, because as you have seen in these windows, I have some irregularities. I won't be able to get the whole window and do matching on windows, but I'm going to describe each of these windows through the loudness and all these four features. And then each of them with the expected value, the variance, and the autocorrelation. The idea of the autocorrelation is that actually when you have time domain, signals may be quite similar. You have sinusoid, yeah? So if you shift them by a certain phase, you get the same signal. And using autocorrelation is then a very good idea to characterize the self-similarity of the signal compared to other signals' self-similarity. This is what the cubic system from IBM has done. So then you have three statistical values, and then you have four features. This means a 12-dimensional feature vector for each sound. Yeah? And, well, this is a scanned image. Unfortunately, we don't have it in digital form, but these are the results for the laughter sound. We have previously heard, and we have the loudness property with the mean, the variance, and the autocorrelation. As you can see, well, here the loudness is, has a quite... 
I'm wondering right now why this is minus. Oh, I think that the decibels are on a different scale. They are measured on a different scale from minus 60 up to zero, where zero, I think, is the loudest. I don't know if you have any receivers at home. Sound, Versterker? No? If you play with something like this, you can see that the loudness takes negative values up to positive values. And, again, for the laughter going back, the variance is quite high. Why is that? Because we have very powerful sounds, then we have some portions of silence. Yeah? So quite large variance. Again, then we have the repetition of the sound. So a quite good autocorrelation. So we have ha, ha, ha. These parts, they correlate with themselves. So we have to shift the signal. The first part correlates with the second part, with the third part. So this is the autocorrelation coefficient, which is quite large. Afterwards, we do the same for the pitch, for the brightness and for the bandwidth. And this is the description. This is the feature vector for the laughter. Now, each sound has its typical values. So then, by comparing this feature vector, you can see which sounds are quite similar. You compare just the distances of two vectors. It's nothing more than this. Okay. So, in order to perform this, the idea is to use a training set of sounds for a class, which leads to a perceptional model for the class. So, for example, you have a collection of laughter. And then, you compute the feature vectors and the means for each of these values. That one would be the representation of that class. Again, you can also calculate the covariance matrix. The covariance matrix, if you remember, we have discussed it in colors. It uses principal component analysis to put together dimensions or values, which vary together. So, I'm going to reduce these dimensions only to those that fundamentally differentiate the class. So, I'm going to use the covariance matrix additionally to the vector mean. And then, I'm going only to calculate the Mahalanobis distance. It's exactly as in the idea of colors. I take the query, which is my sound. I calculate its 12 feature vector. I look at the representation of each class. And I calculate the distance to each class. And I do this with the Mahalanobis distance. Okay. So, now, I have to decide for one class. And one solution, again, collection-specific threshold. Or take the minimum distance. Of course, I could have that this is a new sound. Let's say that I would have laughter in one side, music, and I get speech. Where do I order it? I would probably expect that the distance, this Mahalanobis distance, between speech and laughter, as well as between speech and sound and music, is quite large. So, then, I might have to decide, okay, it's above a certain threshold, so I don't care that it might be closer to laughter. It's not laughter. It's some kind of new sound I've not trained for. It's something like an outlier. This is why I have to use a threshold. The problem here is that it's a probabilistic way to decide what kind of sound I have. So, this might lead also to misclassification. And this misclassification is exponential in depending on the distance. So, actually, if the distance in similarity between the query sound and the two classes, or the class where I'm going to assign it, grows, shows the probability that I'm doing some mistake, which is quite obvious, right? If it's near laughter, then, yeah, I'm sure it's laughter. It's maybe a bit different, but it's still laughter. 
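(In code, the 12-dimensional description and the Mahalanobis comparison look roughly like this; a sketch under assumptions. F is a per-window feature matrix as sketched earlier, T is a matrix of training vectors for one class such as laughter, and classThreshold is the collection-specific cut-off; all three are placeholders.)

```matlab
% F: (numWindows x 4) matrix with loudness, pitch, brightness, bandwidth per window.
ac = zeros(1, 4);                               % lag-1 autocorrelation of each feature
for k = 1:4
    c = corrcoef(F(1:end-1, k), F(2:end, k));
    ac(k) = c(1, 2);
end
q = [mean(F), var(F), ac];                      % the 12-dimensional descriptor of the sound

% Class model learned from training files: T is (numFiles x 12), one row per file.
mu    = mean(T);
Sigma = cov(T);
d     = sqrt((q - mu) / Sigma * (q - mu)');     % Mahalanobis distance to the class
isLaughter = d < classThreshold;                % accept only if close enough to the class
```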
However, if I have quite a big distance, then it might be laughter, but I'm not that sure anymore. So, the error is higher. Proportions are exponentially to this distance. Okay. This is an extract from what Wilt has done for the three, for the four features. He has calculated the statistical moment, and he has gotten the mean, the variance, and the importance, where the importance, as I've told you, has been extracted from this principal component analysis. There, you can decide which of your features are fundamentally important for making a difference between the object in your class, yeah? And as one can see, for example, here, the auto-correlation of the loudness has a very high weight, a very high importance. This means that if this is correct, it's a high probability that the query sound for which I'm calculating also the auto-correlation of the loudness is laughter, yeah? And this auto-correlation of the loudness is much more important than, for example, the variance of the pitch, which is not that important because as one has established, again, with this analysis, the pitch doesn't really vary between the laughter. They are quite similar, yeah? So this would be the idea of their experiment. Of course, this works great for short audio data, but this and these parameters should represent a human perception and it's quite easy to use for indexing in databases. So query, by example, would work great. You are just laughing in the microphone. He calculates this 12-feature vector, calculates the Mahalanobis distances to laughter and other classes I have already trained for in my database and says, okay, you are laughing. Or returns similar laughter, most similar laughter. Ranking with documents from audios from the laughter class, yeah? In commercial databases, this has been implemented in the DB2 extender, and in Oracle multimedia. But the problem, however, is that it's okay for differentiating between speak, music, laughter, but not between musical pieces. Just using purely statistical values on complex sounds like music, which is not that short, it's rather difficult. Just imagine that with the length of the signal you might find autocorrelations, the variance may grow. It's difficult just to use only these constants. So what we actually need in order to go the extra step, in order to be able to differentiate between musical pieces, it's what Shazam does, yeah? So you are going to sing or maybe record on the radio music piece, and it's going to return you the title of the song and so on. So it's able to differentiate between what you input and other pieces it has in the database. This is now our goal. You can't do this with the statistical features we have just described. So what you need is some kind of definition of the melody. You need to know what happens here. You need to know if the melody changes. If the singer goes into higher tones or lower tones, you need to compare somehow what you have in the database with the query based on the same melody. For this, of course, we need to define this melody term. And the problem here, so you can say melody is a second, a sequential of notes, and the problem here is how do I recognize the melody? How do I recognize notes from a signal? And yeah, it might be easy, for example, for midis like you've heard from Alfaville, you have them already in a document, the notes that are going to be played. But what do you do with a music piece where there are a lot of instruments that they overlap to build the sound overlapping of instruments and voice? 
It's difficult. You can't really extract good quality notes if you have such complex sounds. So what we actually want to perform, on a melody that represents a sequence of musical notes, is querying for a melody, as in the case of Shazam, but we want to remain invariant under pitch shifts. One example: we are doing query by humming and we are going to sing into the cell phone, but we differ as humans. I might sing lower, some other human may sing higher, so our pitches for the same melody are different. Soprano versus bass, this is exactly what happens: it's the same song, but at different pitch heights. So if I'm going to do such a query, I want my system to be able to recognize the melody without considering this difference in pitch, as long as it's the same melody. Something else we want our system to be able to do is to be invariant under time shifts. We have a three-minute song. If you are going to do a query, you are not going to sing the whole three-minute song. You are not going to stand there with the microphone, like at Deutschland sucht den Superstar, and sing the whole three-minute song just so that the database returns me the song. If I had the whole song in my mind, I probably wouldn't need to listen to it. But I may know only the beginning or the refrain, the main part. I'm going to sing that part, but I want the database to compare what I've just sung with the songs in the database and return me the correct music piece. So time shifts: I want the system to be invariant to time shifts, to go into the music and see, aha, okay, you are singing this piece here from this melody, and this is what you're going to get back. What I also want is to be invariant under slight variations. As you may expect, I'm not quite a genius in music, and I'm not that talented either; I don't even really sing in the shower. So the problem is, even if I try to sing, I will most probably make a lot of errors: maybe a bit higher somewhere, a bit lower somewhere, I won't get the pitch right, and so on. These variations, when performing the query, are exactly the same story as when we performed query by sketch for images: you're not going to draw perfectly, and you're not going to sing the refrain perfectly, but you still want the database to return the correct piece. So it has to be invariant under slight variations. Of course, if I'm singing a totally different song, no database can help me, but slight variations should be tolerated. Okay. So for coping with these three problems and requirements, it's actually not the sequence of notes that should represent the melody, but the sequence of differences. So, for example, I'm going up in pitch or I'm going down. It doesn't really matter at which octave I'm singing, because it might be a bass, or I might sing it as a soprano and go quite high, but I'm still respecting the melody: I'm going up, I'm repeating some notes, then I'm going down. So the differences are what I'm going to need for solving this problem. Okay, we've spoken a bit about the pitch. The pitch has something to do with the frequency. I've said that simple systems don't really care about humans perceiving a different pitch than the fundamental frequency; they just say, okay, the pitch is the fundamental frequency. Well, this is a problem, because the human brain actually synthesizes pitch differently from what we see in the Fourier spectrum.
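As a small illustration of this "sequence of differences" idea, here is a sketch that encodes a pitch sequence into up/down/repeat symbols; this is essentially the Parsons code that comes back at the end of the lecture. The 3% tolerance is an arbitrary choice of mine, roughly half a semitone, and the example melodies are invented:

```python
def contour(pitches_hz, tolerance=0.03):
    """Encode a pitch sequence as U/D/R symbols (up/down/repeat).

    Working on relative changes makes the encoding invariant under a global
    pitch shift: transposing every note up an octave gives the same string.
    The 3% tolerance (roughly half a semitone) is an illustrative choice.
    """
    symbols = []
    for prev, cur in zip(pitches_hz, pitches_hz[1:]):
        ratio = cur / prev
        if ratio > 1 + tolerance:
            symbols.append("U")
        elif ratio < 1 - tolerance:
            symbols.append("D")
        else:
            symbols.append("R")
    return "".join(symbols)

# The same melody sung by a bass and by a soprano (one octave apart):
bass    = [220.0, 246.9, 261.6, 261.6, 220.0]
soprano = [440.0, 493.9, 523.3, 523.3, 440.0]
assert contour(bass) == contour(soprano) == "UURD"
```

The time-shift and slight-variation requirements are then handled by the matching step on such strings, which is exactly what the later part of the lecture series turns to.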
So identifying pitch is quite useful when we have, of course, periodic frequencies. So if you have noise, then you have just probably one frequency, or a periodic, non-periodic signal. But pitch has something to do with frequency related to harmonics. So you have the main oscillation, like for example, in this signal here. This is the main oscillation. You have maybe some noise here, as you can see. Then you have harmonics, which usually decrease in energy. There are multipliers of this frequency here. I don't know, this probably is, again, the noise, not the 4. It might be the 440, somewhere near to 500. Then this would be 880, and so on. These are the harmonics of this frequency. Harmonics. Let's play with harmonics for a bit. No, I don't want to. Okay, so last time we've discussed a bit about the fundamental frequency. I've introduced some sounds so that you can get a feeling about how their frequency spectrum looks like. We'll come back to these sounds and their frequency spectrum. We'll also hear them. The first sound was the standard tone A. As you can remember, the time domain was full, since it was pretty compressed together. But I've took last time the first 300 oscillations, and then you could see a sinus wave. It shows here a very powerful clean frequency at 440 with no harmonics. It's a synthesized sound. Then we've taken the flute. It looks something like this in the time domain. Can I draw on this? Probably not. It has again the fundamental frequency at 440 with corresponding harmonics, with decreasing energy as they increase in frequency. Of course, there is also a bit of noise. Maybe if I increase the size of the window. As you can see near these frequencies, there is some noise. It's not a clear sound. It's synthesized or clear as some other instruments. Then we had the piano. A more clear sound. Again, the same fundamental frequency with very, very clear harmonics. These are the harmonics. Then we had the human voice as I remember. What's interesting to note about the human voice, again, it has harmonics, but it's a lot of noise. As you can see here, compare it to the piano. The piano is clean. The harmonics are clean. Here are the basics. You see small shifts of this frequency being also present in the sound. This is not missing as you probably have suspected. This is someone who has been trained to sing. Imagine that the human voice also produces such noise. When performing pitch detection, something like this has to be considered. It's not only the clear harmonics which is present, but also some noise near it. Then we have the tuning fork. Very long sound, but quite clean sound. Very similar to the synthesized sound. Almost no harmonics. Maybe I don't hear them, but I also don't see them in the frequency spectrum. And last we had the violin. Again, a very powerful fundamental frequency with relatively low harmonics. As you can see, you could probably easily identify the pitch, but here, for example, there's a problem. This frequency here, although it's lower, it's also less intense than its second harmonic. You see that this one is very intense. This has to be considered when performing pitch tracking, because actually, even if this frequency here, the second harmonics, is much more present in the frequency spectrum, what the human brain establishes as pitch, is this frequency here. So this has to be considered when performing pitch tracking. Okay. Let's continue with the lecture. So the automatic detection of this dominant pitch in the sound is rather difficult. 
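Before continuing: for anyone who wants to reproduce plots like the ones just shown, here is a rough sketch of the analysis behind them. The synthetic test tone below (a 440 Hz fundamental, two weaker harmonics and a little noise) only stands in for the recordings played in the lecture, and the sampling rate is an assumption.

```python
import numpy as np

fs = 44100                       # sampling rate in Hz (CD quality, an assumption)
t = np.arange(0, 0.5, 1 / fs)    # half a second of signal

# A synthetic "instrument": fundamental at 440 Hz plus two weaker harmonics and noise.
signal = (1.0 * np.sin(2 * np.pi * 440 * t)
          + 0.5 * np.sin(2 * np.pi * 880 * t)
          + 0.25 * np.sin(2 * np.pi * 1320 * t)
          + 0.05 * np.random.randn(t.size))

spectrum = np.abs(np.fft.rfft(signal))          # magnitude spectrum
freqs = np.fft.rfftfreq(signal.size, 1 / fs)    # frequency axis in Hz

peak = freqs[np.argmax(spectrum)]
print(f"strongest frequency: {peak:.1f} Hz")    # ~440 Hz, with smaller peaks at 880 and 1320
```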
These interferences make the problem rather harder. What's even more complicated is that human perception differs from the measurement you have seen in the Fourier spectrum. It would be easy to say, okay, I'm going to calculate only the fundamental frequency and take this as the pitch. However, if you consider experiments that have been performed on human subjects, one has observed that even if there is no fundamental frequency, if I cut the fundamental frequency out of a signal, humans are still able to establish what the pitch is. So pitch is not the same as the fundamental frequency. But we still need to extract the pitch in order to extract the melody line. We still need to establish, for a small time frame, half a second for example, what the height of the signal was, what its frequency is, so that I can decide whether the next one is lower or higher. I need this in order to extract the melody and compare two songs. Remember query by humming, remember the invariance problems that we want to solve. Okay, but before going into extracting this pitch, let's speak about the difference limen. The difference limen is the smallest change in the signal which is reliably perceived by the human ear. It is the so-called just noticeable difference, the smallest variation in the sound that humans already perceive as something having happened. This is, as we will see, actually dependent on the pitch: in the low frequency range it might be harder than in the medium range, and going into higher frequencies it again becomes more difficult; if I'm asked to distinguish between, for example, 19,000 Hz and 20,000 Hz, it's clear there's a problem. It also varies with the duration, how long the sound is, and with the volume; if it's just a low volume that I'm barely hearing, then it's quite difficult to make this differentiation. The difference limen has been experimentally established by Jesteadt in 1977. What he has basically done is psychoacoustic testing with half-second tones played one after the other, and the subjects have to establish whether the second tone was higher or lower. Then he has measured the subjects' precision and said, okay, if my subjects have classified more than 70% of my sounds correctly, then the difference is considered reliably perceivable; the difference between the two tones I have played is then the difference limen, it can be heard by humans, it can be noticed. If not, if it's rather 50%, then it's like guessing, not much better than a random generator saying, yeah, it's lower or it's higher. So this is what he has done: he has collected human subjects, played two sounds, and they have said whether the second one is lower or higher. He has then played with different frequencies and observed that the difference limen is somewhere around 0.2%. It's what you can see here in this region. And this holds for a reasonable loudness and for a frequency range which is somewhere between 400 Hz and 2,000 or 2,500 Hz. If you look into the lower regions, this limen is quite big. This means that the human ear isn't able to perceive small differences in the low frequency range, and the same happens in higher frequency ranges: there, a bigger difference between two consecutive tones is needed to establish whether the second one goes upwards or downwards.
So what this difference actually tells us is that most people can distinguish between 1,000 Hz and 1,002 Hz reliably. This is actually great; the human ear is much more accurate than I would have thought. To be able to notice a 0.2% difference... well, in music we usually work with semitones, and a semitone is 100 cents, a step of roughly 6% in frequency. So it's clear that the human ear can differentiate between, for example, C and C sharp. As you have probably already observed from the graphical representation, this separation, this 0.2%, is not uniform over the whole frequency band. It's worse at high and low frequencies, where it goes up to 4 or 5%, so quite a big difference. Another important thing is the tone duration. Starting from around 10 milliseconds up to 100 milliseconds, the limen improves, so for very short sounds it's quite difficult to establish what kind of tone that was, whether it went upwards or downwards. Above 100 milliseconds, however, the result stays constant and reliable. The third factor that is important is the volume. Above roughly 40 dB the volume is loud enough that the human ear can make the distinction well; with lower volumes there is a problem and this difference becomes bigger. We'll take a short break of 10 minutes before going into the third topic, about pitch recognition. I'll see you after the ten minutes. So, we've entered the third part of our lecture, the pitch. ANSI has already taken care of building a definition of what pitch should actually be understood as. Pitch, they've said, is that attribute of auditory sensation in terms of which sounds may be ordered on a scale extending from low to high. So you characterize sound from low to high. It depends mainly on the frequency content of the sound stimulus, but it also depends on the sound pressure and the waveform of the stimulus. This is what they've basically said the pitch is. Typically we limit ourselves to the melody line, to distinguish pitch from timbre. For example, there are sounds like "s" or "sh" that have a different timbre but the same pitch: if you order them on a scale from low to high, they land on the same place, just with a different color, a different timbre of the sound. But we're not going to speak about the timbre; we just care about how we determine the pitch. And there have been experiments, for example performed by Fletcher in 1934, who has tried to order sounds on a frequency scale by matching them to sine waves at a loudness of 40 decibels. What he has actually tried to do is to build a histogram for a sound, where the subjects that received the sound had to order it as being higher or lower, again the same idea. Performing some analysis on this histogram, so looking at what the subjects have said the pitch of the sound is, indicates that several pitches have been recognized when the distribution is multimodal. And this is usually due to multiple pitches, for example in polyphonic music. You may remember that somewhere around five years ago the first cell phones with polyphonic ringtones appeared on the market; what they basically do is play different instruments at the same time. The problem with this experiment is that humans may concentrate on different instruments and therefore report different pitches.
For example, one hears the same sound but reports the pitch of, I don't know, a violin playing in the orchestra. Another one concentrates on the bass or something like that, and this is how you get different distributions. So pitch determination is actually not that easy to perform. And I want to do a little experiment with you so that we can try this together. Let me see. Okay. What happened? So what I have here is a series of sounds, and I'm going to play them one after the other. What I want you to tell me is whether the second sound has a lower pitch, the same pitch, or a higher pitch. And we'll respect majority voting, so the majority decides. Let me write this down so that I know these are the first two sounds. How is B compared to A? Higher, lower, the same? Everybody of the same opinion. How is C compared to B? Higher. Like this? B is low, yeah. So from B I'm going up to C. Correct? We have the same opinion. Boring, I know. Right? I knew it. And we'll do this until your ears hurt. If you have another opinion, just tell me. And since I don't have any blackboard left, I'll stop here. So A goes up to B, so B is higher in pitch, a higher tone than A. This is how you've perceived it. C is higher, D is higher, E. So if I were to represent this, I would have something like: A goes up to B, up to C, D and so on, all the way to I, right? This is what we've all heard. Now I'm going to play A, and I'm going to play I. I'm going to play them again. A. How are they? Wow. How's that? A and I are the same sound. This is what happens with the brain. The brain tries to identify the pitch, it performs some kind of automatic recognition, but due to the way the brain works, one can fool it. These are the so-called Shepard experiments, and Shepard concentrated his work on fooling the brain in the way it perceives sounds. The idea is: this is the A sound. You can see this is the envelope of the sound over the frequencies that are present in it, so frequencies and a curve over the frequencies. For B, the curve remains the same, but all the frequencies are slightly shifted to the right. You can see here just a slight shift of all the frequencies to the right. So A, B, C with such shifts, small steps of the frequencies to the right. Each frequency is shifted a bit to the right, but the shape remains the same: the components are damped so that they always fall on this same shape here. And this is done repeatedly. Through this shift, the brain gets the sensation that the sound gets higher and higher. At the end you've reached a point where, through your shifts, this frequency here has moved over the curve and replaced this frequency here; it grows to fill this peak. So I'm pulling the frequencies around inside the same shape, and I actually get the same sound as at the beginning. Only the brain synthesizes just the last comparison and builds the sensation of a higher pitch. And I have some more experiments we should play with, because they're fun. Let's just hear this one. So we're going deeper and deeper. Yeah, that's exactly what happens. If you really concentrate, you notice it; without saying anything, without doing this experiment first, nobody notices it. We've done this in a previous year and they didn't know that there can be this kind of tricking of the brain. They said, yeah, clearly, it's going deeper and deeper. The same with this one. Faster and faster. And it seems to go on like this to infinity, but it actually repeats a sequence.
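For the curious, here is a rough sketch of how such an endlessly rising Shepard sequence can be synthesized. All constants (the envelope centred around 500 Hz, its width, the sampling rate) are my own illustrative choices; the point is only the construction itself: octave-spaced components whose weights are pinned to a fixed bell curve over log-frequency, so the overall spectral shape never changes while the components slide underneath it.

```python
import numpy as np

fs = 22050  # sampling rate; all constants here are illustrative choices

def shepard_tone(base_hz, duration=0.5):
    """One Shepard tone: all octave transpositions of base_hz that fall into
    the audible band, weighted by a fixed bell curve over log-frequency.
    Because the envelope never moves, only the components slide underneath it."""
    t = np.arange(0, duration, 1 / fs)
    tone = np.zeros_like(t)
    for k in range(-6, 8):
        f = base_hz * 2.0 ** k
        if not (20.0 <= f <= fs / 2):
            continue
        weight = np.exp(-0.5 * ((np.log2(f) - np.log2(500.0)) / 1.5) ** 2)
        tone += weight * np.sin(2 * np.pi * f * t)
    return tone / np.max(np.abs(tone))

# Twelve semitone steps upward: every tone sounds a bit higher than the one
# before, yet the 13th tone consists of exactly the same components with the
# same weights as the 1st, so the sequence can loop forever.
scale = np.concatenate([shepard_tone(261.63 * 2 ** (i / 12)) for i in range(13)])
```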
It's what happens here. And if you scroll through this, it's the same. So it's exactly what I've done here. If I played this separately and not continuously, the brain would compare large chunks and jumps and would say, yeah, it's clear. If you leave it running continuously, it's tricked. So we can see that it's not that difficult to trick our brain into extracting a false pitch. But what we need to do is to extract the actual pitch that the brain perceives. So we don't need the raw frequencies from the frequency spectrum; we try to build up the pitch that the brain understands. For this, we need to rely on some theoretical models that come from how the hearing system is built. We go back to the basics of audio. As you remember, here there's the eardrum. Sound comes in, there are some bones transmitting the vibration, and the sound arrives here in the cochlea. Through small hairs, which play the role of receivers and are stimulated by the sound waves, the sound is transmitted as electrical impulses to the brain, through the neurons connected to the base of these small receivers. What's interesting to note is that the cochlea perceives the different frequencies at different places. High frequencies are perceived here at the entrance. You should imagine high frequencies as having very short wavelengths, so they are pressed together: short wavelength, high frequency. They will be picked up by the receivers at the entrance. Low frequencies, which have longer wavelengths, travel further, up to the end of the cochlea, and they are received in these regions here. Those receivers will be stimulated and send the impulse to the brain. Then, based on this principle of locality, the brain decides which neurons have sent the impulse and builds up lower or higher frequencies. So it's actually location dependent: it says which part of my cochlea has just been excited, ah, it's a high frequency. This is the idea behind location dependent pitch detection. Building on something like this, one has measured that the cochlea, if you just take it and unroll it, has about 3.5 centimeters in length. And based on this length, a typical place-to-frequency mapping has been built, and the pitch can then be read off from this mapping. The idea is: you have here the length of the cochlea, and you have here the place where it is excited. If it is excited here, at some position along this length, then you say, aha, it must be this frequency here. So it's nothing more than stretching out the cochlea and building a function that, based on the place where the cochlea is excited, extracts the pitch. This function has been estimated by Greenwood in 1990. It looks something like this, a logarithmic relation out of which one can extract the frequency. Right now we have the frequency on the X axis, and with increasing frequency the distance from the apex also increases. So you have to imagine the cochlea the other way around, because high frequencies are perceived at the entrance, as I've said. And this is the function that estimates this distance in millimeters; turning it around, you can detect the frequency. But this is not the only model. There have also been other theoretical models which say, OK, it has something to do with time, with how these receivers synchronize themselves.
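Before going on to the time-based models, Greenwood's place-frequency function can be written out directly. The sketch below uses the commonly cited constants for the human cochlea; treat the exact numbers, and the 3.5 cm length used for the millimetre conversion, as assumptions of this illustration rather than values from the slides.

```python
import math

# Greenwood's place-frequency map for the human cochlea, using commonly cited
# constants; treat the exact values as assumptions of this sketch.
A, a, k = 165.4, 2.1, 0.88

def place_to_frequency(x):
    """x: relative distance from the apex (0.0) to the base (1.0)."""
    return A * (10.0 ** (a * x) - k)

def frequency_to_place(f_hz):
    """Inverse map: where along the unrolled cochlea a frequency is received."""
    return math.log10(f_hz / A + k) / a

for f in (100, 440, 1000, 4000, 15000):
    x = frequency_to_place(f)
    print(f"{f:>6} Hz -> {x:0.2f} of the way from apex to base "
          f"(~{35 * x:0.1f} mm of a ~3.5 cm cochlea)")
```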
So the sound is not only location dependent, only the place where the cochlea is being excited, but the temporal synchronization of these neurons has to play a role. The idea is that all neurons, they are fired spontaneously in the random sequence, based on their characteristics. But when a sound with a certain frequency starts, some neurons start to fire synchronously. For you to imagine what I'm speaking about, are you familiar with fireflies? When they fly and they blink, and when they are in large groups, they start automatically to blink together in the same frequency, it's kind of like the same principle here. A sound of a certain frequency stimulates a synchronic reaction from the receivers. So the time dependent model is based on the idea of these neurons firing synchronously. So then the brain determines the pitch based on the autocorrelation function. Now he explains the idea that autocorrelation plays an important role. So on this autocorrelation function of the pattern. Later we have also some noise. Hopefully we can continue with the noise on the background. So both models address recognizing the pitch. One is location dependent, one is time dependent of individual sounds. The problem is now easy if you have simple sounds, individual sounds, but it's complicated if you have complex tones. Because look, in the time domain, groups of neurons are excited in several locations with different synchronization patterns. You have multiple pitches. Which of the neuron excitement actually represents the pitch? It's difficult to say. We come back to the fundamental frequency. One of the first assumptions we have started with is that the lowest frequency that generates the harmonics should be the pitch. So the pitch could be considered the fundamental frequency. But again, psychological experiments show that even a sound that doesn't contain the fundamental frequency. So you artificially remove the fundamental frequency from a sound. In the same note, they are still rated the same as from human subjects. Meaning that humans perceive somehow the same pitch even though the fundamental frequency is not there. So if it would be location dependent... That's not my beer. I don't have a reason to drink it tomorrow. What's important to me is that everything stays calm and that I can continue. Everything is clear. Sorry for that. I didn't know that something like this would happen, but I will cut it at the remake of the video. I was talking about this removing of the fundamental frequency and how actually human brain works. If it would be just a location dependent theoretical model, then the fundamental frequency... without the fundamental frequency, it won't be the same. Because the fundamental frequency excites some area of the cochlea. If it wouldn't be there, that area wouldn't get excited. So the human brain wouldn't be able to recognize the pitch. On the other side, the time dependent model makes a bit more sense. Because the synchrony remains there due to the harmonics. I don't care where it happened, in which part of the cochlea, because that was bound to highs or lows. I care about the synchrony, the harmonics. They give me the same synchrony, so the brain still reacts to it and is still able to recognize the pitch. In this context, actually the time dependent model makes more sense. So we should go on that. But how do we evaluate this synchrony? We go deeper on this problem. The hearing actually analyzes complex sounds in different frequency bands. 
It's rather complex how the human brain works. Because the hearing organizes different impressions of the sounds. And tries to perform an integration between different impressions. What one can do, what Goldstein has done, actually, is he has built a series of this harmonic templates. And tried to decide the pitch based on these templates. So had templates for different harmonics and then said, OK, this kind of match together, so this is the pitch here. Another important factor is that pitch is actually not established in our receivers, in our ears. Very important experiment shows that if you play on each ear a disjoint sound, so you cut a sound into disjoint parts, components of frequency. And you play it on left and right ear, the pitch is still correctly constructed, even though the sound is split. This means that my left ear is not independently extracting some pitch, my right ear is not independent. So this happens centralized in the brain. Ears are just the receivers. Of course, the listeners, so when building this process, the listeners may be disled by ambient noise, as you've seen. Or perceive a false template relating to the Goldstein experiment. A nice experiment with the missing of the fundamental frequency is done here when the brain actually manages to synthesize the pitch at 296 Hz. Although this frequency is not here in the sound. What happens is that 296 would be somewhere here. What happens is that there are harmonics, like this one, this would be 2 times 296. Like this one, which is almost 3 times 296, so the factor is almost 3. Here the factor is almost 4. So what the brain actually does, it says, I see some harmonics. These waves here are harmonics of some frequency, which is not there, but the brain synthesizes. And then the brain says the pitch is 296. This is what his experiments show. So although the pitch doesn't occur in our frequency spectrum, the brain recognizes it somehow and synthesizes it somehow based on the frequency. This is what I've said with the time domain. The harmonics stimulate the synchrony, so it's still detected. So if we go through what we've just spoken in the theoretical models, the pitch is actually a feature of the frequency at a particular time. What we need to do is to perform pitch tracking once in the frequency domain and in the time domain. Of course, an important factor to establish is the length of the window for the frequency spectrum. And this is usually chosen as twice the length of the estimated period. This has to do with the autocorrelation that I've just mentioned is very important. In order to be able to calculate autocorrelation in the time domain, you need a length of a signal that allows you to shift the same signal with a certain phase so that you can calculate this autocorrelation. This is why the frequency spectrum you calculate your pitch on should be at least of this estimated size of this estimated period. But what's our goal yet? Now, the pitch tracking algorithms should perform the frequency resolution in the range of a semitone with the correct octave. So I'm on the correct octave and I'm going to get the correct frequency resolution. I have to be able to detect different instruments which have well-defined harmonies like the cello or the flute. You can already think about ideas of how to capture patterns of these kind of instruments and say, okay, these are the patterns of a harmonic for a flute. I'm going to compare my signal to these patterns and establish if this is a flute pitch. 
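Coming back to the missing-fundamental example from a moment ago, here is a toy sketch of how a pitch near 296 Hz can be inferred even though there is no energy at 296 Hz itself: try candidate fundamentals and keep the one whose integer multiples best explain the measured partials. The partial values, the candidate range and the scoring are all my own illustrative choices, not Goldstein's actual model.

```python
import numpy as np

# Measured partials of the example: roughly 2x, 3x and 4x of 296 Hz,
# but no energy at 296 Hz itself (values invented for illustration).
partials = [593.0, 887.0, 1185.0]

def fit_error(f0, partials):
    """Average relative distance of each partial to its nearest multiple of f0."""
    errors = []
    for p in partials:
        n = max(1, round(p / f0))
        errors.append(abs(p - n * f0) / p)
    return sum(errors) / len(errors)

# The search range is deliberately kept above ~250 Hz: subharmonics such as
# 148 Hz or 98.7 Hz would explain the partials just as well, which is exactly
# the octave ambiguity that comes up again for the algorithms below.
candidates = np.arange(250.0, 600.0, 0.5)
best = min(candidates, key=lambda f0: fit_error(f0, partials))
print(f"perceived pitch estimate: {best:.1f} Hz")   # close to 296 Hz
```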
And what's also important for some systems is that the recognition of this pitch has to be performed and transformed into symbolic notation in real time, for interactive systems. Imagine I have something like Google's web search, but for a music database, where I'm singing and, while I'm singing, the system already builds candidates which are then dynamically adjusted by my singing. What this has to do is extract the pitch from my voice, the audio query, in real time, transform it into a melody sequence that says, okay, this is the reference pitch, then it goes up, then it repeats the same frequency, repeats, repeats, repeats, goes down, goes up again, and then compare it dynamically to the same kind of representation, such up-down encodings of the melodies in my database. So this would be another requirement for our pitch tracking algorithms which we need to consider. Okay, let's go into the last part of our lecture. How do we do this? How do we recognize pitch automatically? How do we implement a system that is able to perform this? The first solutions came from Schroeder in 1968 and Noll in 1969, and the approach is called the harmonic product spectrum. It's one of the simplest methods, and the idea is to start with a range of frequencies and analyze it in the hope of detecting a pitch. For the human voice, for male voices, this has been tested, for example, with a range between 50 and 250 Hz. What happens with the harmonic product spectrum is that, going in small steps through the frequencies in this window, we analyze the harmonics of each frequency. For example, for 50 Hz we multiply by 2, get 100, and analyze the value at this frequency: how much of 100 Hz is there? Is it a real harmonic? Is 50 my pitch? If it were my pitch, then it would have a harmonic at 100, a harmonic at 150, another one at 200, and maybe more; it depends on the instrument, as you have seen. For example, the violin has more harmonics, human singing has a lot of harmonics, but there are also sounds like the tuning fork, which has a rather clean sound, or the synthesized sound, which doesn't have any harmonics. So I'm going to detect, between 50 and 250 Hz, the frequency that has harmonics, and that one is my pitch. I can do this basically by using a product over the spectrum. I'm calculating the value of a product by analyzing the first R harmonics. I'm going with the candidate frequency from 50 upwards, and for an R of 5, for example, I'm going to examine 50, 100, 150, 200, 250, and multiply the values of these frequencies, their presence in the signal. So if, for example, 50 has a certain amplitude, a certain value, I will multiply that with the value at 100, at 150, and so on. It would look something like this. And if I get something like this, then I can establish: this is my pitch. So it's a rather simple idea: I'm going between 50 and 250, shifting the candidate, I take the product which has the highest value, and the base frequency to which this highest product corresponds is the pitch. As a process, you should imagine it something like this: you have your window from the audio signal, you transform it with the fast Fourier transformation into the frequency spectrum, and you can perform downsampling.
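Written out as a formula, in my own notation rather than the lecture's: with |X(f)| the magnitude spectrum of the analysis window and R the number of harmonics considered, the harmonic product spectrum and the resulting pitch estimate are

$$ Y(f) = \prod_{r=1}^{R} \bigl|X(r f)\bigr|, \qquad \hat{f}_0 = \arg\max_{f_{\min} \le f \le f_{\max}} Y(f), $$

with R = 5, f_min = 50 Hz and f_max = 250 Hz in the example from the lecture.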
On the Fourier spectrum, what you actually want to do is take, for example, some interval from here to here, and for each candidate frequency start computing its harmonics and building the product. The hope and the idea is that when you multiply this point with its harmonics, the product is the maximum product, because the other products look like multiplying this one with this one, with this one, with this one, which is clearly lower in value than these peaks here multiplied together. Is that clear? And a great way to do this efficiently is by downsampling the spectrum. I have the original spectrum here. Then I downsample it by a factor of 2, which presses the frequency axis together by a factor of 2. This means that each point of the compressed spectrum now lines up with the second harmonic of the corresponding point in the original, so multiplying the two means multiplying each point with its second harmonic, including this highest peak. Then I perform another downsampling, a third-order one, compressing the spectrum one more time by a factor of 3, then by 4 and so on, and just multiply the values. I take the product values, represented as a function, take the maximum, and that maximum corresponds to the pitch, the fundamental frequency. Do you have any questions right now? Okay. Now, as we have seen, we may have some noise. We've seen the voice of the singer: it had some noise near the fundamental frequency and near the harmonics, some small frequency components. For the harmonic product spectrum this is fine, because those small components, multiplied into the product, decrease in power; they won't be that influential. The other problem the harmonic product spectrum is confronted with is something we've seen, I think, for the cello: that the fundamental frequency is actually smaller than the second harmonic. You remember, the second harmonic was bigger. The harmonic product spectrum suffers from that, because it will pick that one as the frequency with the highest product over its harmonics. In order to establish which the true pitch candidate is, one needs to look back one octave: halve the frequency once, look behind and see whether there is also another harmonic lower in frequency than me, with at least half my size. So this is actually a threshold-based solution: if there is such a lower harmonic, then take that one. In practice, this is usually sufficient to detect the pitch. Well, this is the theory, and I said, why not see it in practice for the sounds we've heard throughout the lecture. So I've added the harmonic product spectrum to the previous time and frequency domain representations of the five sounds. For example here, for the synthesized sound, I would actually have expected a line at 440, which is this one. I'm currently wondering why this other one appears; it shouldn't be there, maybe I've made some error in plotting it, I don't know, I will look into it. But anyway, it doesn't really count, because it's at the order of 10 to the minus 10, so basically everything other than this one is noise. For the second sound, we already have the issue that I've mentioned about the harmonic product spectrum: this one here is smaller than its second harmonic. So I'm going to multiply this with this, with this, with this, with this, and I get a peak here.
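A compact sketch of this whole procedure, assuming a single analysis window. The toy test signal, the Hann window, R = 5 and the 50 to 250 Hz search band are my own choices for illustration:

```python
import numpy as np

def harmonic_product_spectrum(frame, fs, R=5, fmin=50.0, fmax=250.0):
    """Pitch of one frame via the harmonic product spectrum: multiply the
    magnitude spectrum with its 2-, 3-, ... R-fold downsampled (compressed)
    copies, then pick the strongest candidate in the search band."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(frame.size)))
    hps = spectrum.copy()
    for r in range(2, R + 1):
        compressed = spectrum[::r]              # r-fold compression of the axis
        hps[:compressed.size] *= compressed     # aligns the r-th harmonic with f
    freqs = np.fft.rfftfreq(frame.size, 1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)    # only look where we expect pitch
    return freqs[band][np.argmax(hps[band])]

# Synthetic "voice": 120 Hz fundamental with five harmonics of falling energy.
fs = 8000
t = np.arange(0, 0.2, 1 / fs)
harmonics = [(1, 1.0), (2, 0.6), (3, 0.4), (4, 0.25), (5, 0.15)]
frame = sum(a * np.sin(2 * np.pi * 120 * k * t) for k, a in harmonics)
print(harmonic_product_spectrum(frame, fs))     # ~120.0
```

The octave check just described, looking one octave down and accepting a lower peak of at least half the winner's size, would be a small extension of the last line.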
Actually, I would say, okay, this is my fundamental frequency, but then I go one octave to the left, and I see that there is another harmonic, lower in frequency than the one I've detected, with a presence of at least half its size. So this is how I detect the pitch here, and the pitch is correctly detected as well. Here, this is a very clean sound, as you can see, so the product shows no noise or anything like that. I think this one was the soprano, and as you can see there is some noise, but the noise gets lost due to the magnitude of the product: the order of magnitude of the product is much bigger compared with the noise. For the violin the case is also clear and quite easy to detect, and for the last instrument as well. But, I've tried it on more sounds, and for some it gets very good results, while here, if you look, this is quite a low value. These values are actually very near to the noise, which doesn't really give you great confidence in detecting the real pitch. There is a clear difference between here and here, 10 to the power of minus 4 and 10 to the power of minus 10, but still the values are quite low. So you would probably again need to work with some kind of learning method, where you learn a threshold and decide, for your collection, whether the difference is relevant or not. Okay, now the second method is the maximum likelihood approach, the so-called maximum likelihood estimator, and the idea here was to consider another kind of approach, based on shape. We are still in the frequency domain, but now we work with the shape of the frequency representation. The idea was to build ideal spectra of these harmonics and compare them with the frequency representation of a new sound. The template, the ideal spectrum, which is closest to the new sound will be considered as giving the pitch. In order to get close to real sounds, these ideal spectra are a chain of pulses which are dampened by a function so that they look like real sounds; if one wants to compare with real sounds, then these patterns have to be close to real sounds. The signal window is chosen at a length of around 40 milliseconds and is usually dampened at the edges. This dampening function removes the artifacts, which are mostly false high frequencies. Let's see how this looks. For example, when generating an ideal spectrum, one starts with a signal with a series of harmonics, the dampening function looks something like this, and you lay this dampening function over each of the harmonics and get such a convolved signal. This is your pattern, a pattern which says, okay, this is my ideal spectral representation of a certain pitch. Now what I need to do is compute the difference between my pattern collection and the new sound in the same representation, and the one with the smallest difference gives the pitch. Okay, so I'm going to build this difference here. This difference can be mathematically rewritten as this expression here, a squared difference. Since these two terms, one of them from the pattern and one from the signal, can be considered as constants, what we're actually interested in is this product here: it enters with a minus sign, so as this product grows, the error gets smaller and the similarity gets bigger.
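To make the template idea concrete, here is a toy Python version. The shape of the ideal spectra (Gaussian bumps with a geometric decay), the number of harmonics, the bump width and the candidate grid are all assumptions of this sketch; what matters is the decision rule, the largest inner product wins, together with the matrix-of-templates form that the lecture turns to next.

```python
import numpy as np

def ideal_spectrum(f0, freqs, n_harmonics=6, width=15.0, decay=0.7):
    """One 'ideal spectrum': a comb of harmonics of f0, each a narrow bump,
    with geometrically decaying height (the dampening of the pulse chain)."""
    template = np.zeros_like(freqs)
    for h in range(1, n_harmonics + 1):
        template += decay ** (h - 1) * np.exp(-0.5 * ((freqs - h * f0) / width) ** 2)
    return template / np.linalg.norm(template)

def ml_pitch(spectrum, freqs, candidates):
    """Return the candidate whose template has the largest inner product
    with the observed magnitude spectrum."""
    templates = np.stack([ideal_spectrum(f0, freqs) for f0 in candidates])
    scores = templates @ spectrum        # one matrix-vector product
    return float(candidates[int(np.argmax(scores))])

# Synthetic test tone: 196 Hz with a few decaying harmonics.
fs = 8000
t = np.arange(0, 0.25, 1 / fs)
signal = sum(0.8 ** k * np.sin(2 * np.pi * 196 * (k + 1) * t) for k in range(5))
spectrum = np.abs(np.fft.rfft(signal * np.hanning(signal.size)))
freqs = np.fft.rfftfreq(signal.size, 1 / fs)

candidates = np.arange(80.0, 400.0, 1.0)  # finer grid = finer estimate, more work
print(ml_pitch(spectrum, freqs, candidates))   # ~196.0
```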
So the maximum likelihood estimator does nothing more than take a collection of patterns, make them look like real sounds, and then estimate the identity, the similarity, based on the difference between the patterns and the input sound. And again, this can be reduced to calculating a product: I want to maximize that product between the two elements, and the pattern for which the product is maximal is the one most similar to my original signal. So I have reduced my difference operation to a multiplication of a vector with a matrix of ideal spectra; the ideal spectra are my patterns. I'll show you what I mean right now. This row here would be, for example, a lower pitch, because as you can see I'm starting with my first frequency in a quite low area and then I have my five harmonics; this one would be a higher pitch, and so on upwards, yeah? And then I have my signal, and I'm multiplying the first row, the first pattern of a pitch, with my signal, then the second and so on. Where this product is maximal, that's where I have the greatest similarity, and then I say, uh-huh, okay, this is my result. It's not the exact pitch as you would find it in the signal itself, but it's the best estimate based on the ideal spectra. And now the trick is: the finer the resolution I build my database of spectra with, the higher the accuracy of my approach. If I have a lot of spectra in my matrix, in very small steps, I'm more likely to find the correct pitch, but of course I then have to do a lot of multiplications. So the size of the matrix is again a trade-off between how precisely I get the pitch from the input signal and how much computation I need. So, we've spoken about the harmonic product spectrum, and we've spoken about the maximum likelihood approach based on patterns. But we've only spoken about approaches dealing with the frequency domain. What about tracking the pitch in the time domain, so directly on the waveform? For this there is another possibility, using the so-called autocorrelation function. The idea here is that we measure how similar the signal is to itself when we shift it; we use the harmonics idea, so somehow the periodicity needs to be found in the signal. If I shift the signal by a certain phase which happens to be exactly one period of the fundamental, then the shifted signal matches itself and we get a very big autocorrelation. Let me draw this for you. Say this is the signal. Shifting the signal in this direction, it will continue like this, but I've shifted the signal by this much, by this phase. And now what do I observe? The signal looks quite similar to itself, so the autocorrelation coefficient is quite big. This must be because I've hit the period. You get it? Having hit the period, I can say, okay, how much did I move? I moved this distance here, which is nothing other than this phase here. And when this autocorrelation value hits its maximum, I say, uh-huh, I've got the period, this is my pitch: the shift gives me the period and with it the pitch. So it's nothing more than building the autocorrelation, a sum of products, for each shift and maximizing it. Again, look, here is the signal. I'm going to shift it progressively, starting from the fully overlapping signal, until I hit the maximum correlation value. Then I say, okay, this is what interests me, this is my pitch. And exactly as I was saying, since the signal is strongly correlated with itself due to the harmonics, there is a peak in the autocorrelation function for the good pitch candidates.
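And a minimal sketch of the autocorrelation-based detector itself, assuming we already have a single frame of samples; the 50 to 500 Hz search band and the synthetic test tone are my own choices:

```python
import numpy as np

def autocorrelation_pitch(frame, fs, fmin=50.0, fmax=500.0):
    """Pitch from the lag at which the frame best matches a shifted copy of
    itself; that lag corresponds to one period of the fundamental."""
    frame = frame - frame.mean()
    acf = np.correlate(frame, frame, mode="full")[frame.size - 1:]
    lag_min = int(fs / fmax)              # shortest period we accept
    lag_max = int(fs / fmin)              # longest period we accept
    best_lag = lag_min + int(np.argmax(acf[lag_min:lag_max]))
    return fs / best_lag

# A 200 Hz tone with one harmonic: the strongest peak sits at a lag of
# fs / 200 = 40 samples, so the estimate comes out at ~200 Hz.
fs = 8000
t = np.arange(0, 0.1, 1 / fs)
frame = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)
print(autocorrelation_pitch(frame, fs))
```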
Of course, this was the first approach, so they've said, okay, we can reduce also this to the multiplication problem, but there are also other approaches when one can use the difference from the two signals rather than the product. And so basically now I'm summing a series of differences. The idea is that when the autocorrelation is maximum, so when I hit the phase, the difference is minimal. So if I subtract 1 minus 1, I will get 0, right? This is the idea. When the correlation is maximum, then I hit the harmonic, both signals are high. Substracting them from one another leads to a drop in the difference. So then with the average magnitude difference function, the second approach, this function is minimum where I've hit such a phase, so where the correlation function has peaks. The advantage is that this one is faster to compute. So then based on the same signal, the difference, for example, here at the first harmonic would be close to 0, here and here. And this difference would increase with the loss in energy of the harmonics, because as you have seen, usually the spectrum looks something like this. So I have my pitch, my first harmonic and so on. The differences will increase, because I have the original signal and I'm shifting this one, I'm shifting this one, which is quite big, over the signal with its end harmonic, which doesn't have the energy as the original one, so the difference will increase. But I can still see this tendency here, you can already see it. They are both independent functions, both the correlation function as a product and both the differentiation approach. So Kobayashi and Shinamura in 2000, they've increased their fault tolerance against the noise by using both of them. So they've just built a composed function based on a ratio of both the product and the differentiation. So then pitch can be relatively robust for a time slot. So it's great if I want to recognize the pitch in a certain time slot, I can use the algorithms we have just spoken about. But this doesn't really make a melody. In order to recognize the melody, I need a sequence of pitches. As I've said, I'm going up, I'm going down, what am I doing? This sequence of pitches is what gives my melody. So I'm going to consider the melody as this continuous change in the pitch. Of course, here I need to consider also the envelope that we've spoken last week when projecting sounds. When starting a sound, one has an attack, then a small decay to reach the pitch I'm interested in, then a sustain, and then a decrease. I'm ending my sound. So when recognizing the pitch, I have to be aware that when the sound starts, it has a powerful attack. A short, short, short time span when the frequency reaches its maximum and probably overshoots. I'm going a bit upward, then I'm toning my voice down, I'm decaying a bit to the tone I want to hit. I'm sustaining the tone, I'm making it hearable, and then I decrease, I end my tone. So this is something which one needs to take into consideration when performing this melody recognition, because if I'm going to take, for example, the attack period or the decay period as two different pitches, what can happen is that I have... Wow. I can't sing and I can't draw. This is a bad combination. So I would have something like this. Attack, decay, sustain, release. If I falsely misinterpret this one here as a pitch, this one here as a different pitch, this one as another pitch, then I would say that my melody is what? 
Up, I'm going up, then I'm going down, then I'm repeating something in the sustain, and then I'm releasing, which is actually not correct. What I have to consider is that sounds come... Wow, I didn't want to do that. That sounds come in such a configuration, and that what I'm interested in when reading off the pitch is this area here, the sustain. That one is my pitch; I get sequences of these envelope curves, I take the sustains, each of those is a pitch, and together they make up my melody. I have here an example of how pitch detection performed with the harmonic product spectrum looks for a cello. Again, we have time and frequency. As you can see, there are some artifacts. These artifacts come exactly from the noise one picks up and from badly chosen window lengths. So if I measure the pitch in a release phase, I'm going to get a pitch somewhere here. You understand what I mean? Which is actually atypical, because if you perform an analysis on this result, you can clearly see it can't be right, it's an outlier. And this is how you can actually clean up, post-process, this pitch recognition phase: you clean the outliers, and then you can filter the track into a more understandable representation. So what we actually need to do is monitor the size of the peaks and perform filtering for spontaneous octave jumps; if the estimate jumps all over the spectrum, something is wrong here. Then I monitor the size of the peaks at all locations over multiple windows, and the better the resolution is for the special cases, the cases where I'm uncertain, the better the precision will be. This is the post-processed signal of the pitch detection for a flute. MATLAB also has some typical filters for removing such outliers. They are just outlier filters, which basically ask nothing more than: what is wrong with this picture? If I have something here, a jump from here to here that isn't justified by anything around it which would continue it, then this is an outlier, something I can prune. So, our goal now: we know how to extract pitch, our goal now is to extract the melody. The problem here is, well, we have a lot of instruments, and they overlap. If I have at the same time a cello, a flute, a violin, they also have different patterns; you've seen that some create more noise, some less, some have powerful harmonics, some not. I somehow need to go through all this polyphony and really extract the pitch that the human perceives from this composite sound. The second point is, I could say I'm building my own database of shapes, like the maximum likelihood algorithm does. So I'm just building ideal spectra for the violin, for the cello, for all of them, and I'm trying to lay them over the signal and see where the product is maximal, and I've got my pitch. The problem is that a violin played, for example, in a room with higher humidity, when it warms up, starts to produce slightly different harmonics, so it shifts the pitch a bit. Then my algorithm has to adapt. My algorithm has to know, okay, something happened here, I'm not able to recognize anything anymore, I need to rebuild my database and shift my ideal spectra according to the shift that happens with my instruments. And of course, even the smallest changes, changes of just a second, can lead to a false detection of two notes. So I'm going to look at a place where one violin performs a sustain, it plays the same note, and I'm going to detect two different notes.
If I'm just going to perform the false recognition. And we are in the last detour where we will see what the state of the art is and what the industry can. And I was surprised to see that it actually can a lot. So this one here, this is MaziMuzipedia.org. And they allow a lot of musical queries. For example, one can play this script for Adobe Flash, Pianin, and perform search on what one has played. Is anyone that talented to know what is playing? Anybody? Music? I'm going to try something. But then you need to jump in and help me. So how was it? I'll imagine an Indian. Search. Let's see what he found. Maybe also want to try. Am I good or what? Let's see what he played. So I've just played the beginning part and obviously they have a large database of party tours that they are going to play in the MIDI style, the note associated with the clear sound synthesized. And then they are comparing what notes I'm pressing with a certain soft similarity, I suspect. And by playing the similarity they bring back the most similar sounds. But let's see what was the first sound they thought would be the most similar. The planet's something. Doesn't play? Doesn't do anything? Let's see the YouTube video. Doesn't work. Too bad. Okay, so this would be the Flash Piano query, which is nice. One can search something like this. The other solution would be a contour search. The melodic contour can be given as a series of codes as I've spoken throughout the lecture and we'll see this next lecture, Parsons Coding. Parsons Coding just registers the difference between two pitches, two following pitches. And then I can do something like up, up, down, down, down, repeat, repeat, repeat. I don't know what I'm doing, but let's see what he says. The curve is obviously something important. Georg, Friedrich, something, concert, something. It's exactly what I've done. So I can do music. So actually someone knowing what he's doing could probably perform some kind of some search that makes sense. And some other nice tool is the rhythm search. So one can tap in based on the frequency, so how fast I'm tapping and recognize the song. Jingle bells, jingle. Okay, let's see. Start tapping. Jingle bells, jingle bells, jingle. Okay. Search. It obviously was Mozart. Says here, look, I'm good. So if he says it was Mozart, I believe him. He doesn't want to play with me. Yeah. So if any of you have a better sense of rhythm, feel free to come here and try. Anybody? No? Okay, so this was fun, but I found something better, something which I've promised you during the break. I tested it for Forever Young with me myself, the one. So what one can do with this tool is just try to sing the refrain and it will try to find it. So please don't be scared. Forever young, I want to be forever young. Do you really want to live forever, forever or never? Let's see. No? And he says, I'm right. The trick here is, I was wondering why, how come is he able to search my voice even if I sing miserably? I have to say it. The idea is they use a database of other people who have sung the same song. And they have such a database for a lot of songs. And the advantage is that most of the people don't really sing that well and make for typically the same mistakes as I've done right now. So they sing usually the refrain and they sing a bit higher at some points where the pitch is a bit higher, then they sing faster maybe, or they make a pause where it shouldn't be. 
But the advantage is that, comparing me with other people of similar musical qualities, the similarity is great. And if those have already been tagged as singing Forever Young, here's the result. Right? So the idea is quite cool. Let me see. There are two people. This is the original. This is the part it found as similar to what I was doing. And now it found that I've sung quite similarly to these two guys, to Mr. Friendly and Adler. Let's see what they've done. Yeah. And he sang better than me. Let's see the next one, from Sweden. He sang really well, in my opinion at least. Okay. My battery is dying. So let's do the same. Have you thought about the next song? We can also find them. Everything you want to say, you've tested something. At that time, but I don't remember the lyrics from the album. I don't know. Does anything come to your mind? As you wish. Whatever you want. It's your decision. But with the refrain, we have more chances of getting it right. Just a second. I want to see if it's got the microphone. Okay. We have to repeat. So, it's okay. We have already 14 seconds. Let's see if we find something. Sunday Morning. I'll cut it from the recording. I'm cutting it out, so it's just for us. What if Dieter hears it? Maybe you're the next superstar. Okay. So, Jingle Bells. You can try Jingle Bells. I don't know if they have the data for that. It can't be. Whatever. Jingle bells, jingle bells, jingle all the way. Oh what fun. Hey, Jingle Bells, Jingle Bells. Let's see. Sure. No, I need the other track. Oh, okay. Okay. One second. I think so. It's on. Okay. So then let's try it again. I suspect it's because of the curve also. You have to hum a bit longer. You can sing something in French. Show us how it's done. Do it. Okay. You just try what you want. You can sing in English. You can sing in German if you want. You choose. Yeah. I think we will need a bit longer periods. So as I've said, yeah, it won't find anything because we need a bit longer. So if you can hum a bit, if you don't know the words, you just go hmm, hmm, hmm, and maybe it can find something. So let's try again. For some reason, the microphone... No. Interested? Good. The last one gets excused. If she doesn't want to, she doesn't want to. No. No, no, no. Due to data privacy, what you actually need to do is register and then allow them to record you. Yeah. As I've said, Dieter will hear you and then you have a contract on your hands. Okay, I'll switch. For the last part, I will switch microphones because it seems the battery is empty. So, this was the detour. I also tried the systems at home. What we've just seen is midomi, and as I've said, I think it does quite good recognition, and this idea of using other people's singing is great because it compensates for the small errors that we make when singing. So, we've discussed today the low-level audio features. We've discussed in more depth the brightness, the bandwidth, the zero-crossing rate and the silence in the signal. We've discussed the smallest difference in frequency that humans can perceive, the difference limen. We've also seen some nice experiments about how to trick the brain into thinking that we've actually played another pitch; this experiment here and the Shepard experiments are quite nice. And we've also discussed pitch tracking algorithms. I can only mention the harmonic product spectrum, and the maximum likelihood approach that uses the idea of patterns for different pitches, where the more patterns you have, the better the estimation is.
And we've spoken also about the pitch tracking algorithm in the time domain based on the auto correlation function. The idea is that harmonic signals have a powerful auto correlation in the time domain with themselves so I can either build a sum of products or the differences and where the difference is minimal than there is where I have a phase so harmonic and therefore also pitch. We have also established the examination dates and the first examination period will be in August between 16th and 19th, including these days. So you can already register for these dates at our secretary. The second periods are between 26th and 30th in September. We've decided for periodically split months so that you may choose based on other exams that you have. If you want to take this first then you'll probably choose somewhere in August. If you want to leave this as the last exam you have then you'll just do it in September. If there are major problems and you can't do it in any of these dates, we can fix you with a special date but it has to be discussed with the professor and me so that we're both in branch flag. So if there is a major problem I don't know if you maybe want to or have to go back to France before this then just stop by our secretary and we can set up a special examination date. Again for you I don't know exactly what the plans are for the 80 students but mainly these are the dates. If there are exceptional cases we can discuss about them. Good, next lecture we'll discuss more in detail about query by humming, whistling also and it's actually basically similar to what these systems, midomi does. They extract the melody then they try to set up a trend of the melody and basically build similarities between the query, the melody of the query and the melody of songs from the database. Then we'll discuss about how to represent this melody, how to perform the matching. Typical solutions for representation are the parsons coding. This is the idea with up, down, repeat. So how is the previous pitch compared to this one? How is this one compared to the next and so on? So as a sequence. We'll speak about dynamic time warping which is very important for performing the matching. Since some humans may tend to be faster when they sing a melody so they kind of increase the sampling rate so to say one needs to stretch or to compress time in order to be able to compare the melodies. So to bring them to the same speed if you want. And then we'll discuss about hidden Markov models useful for establishing again similarities for melodies and we'll do all this in the next lecture. That's it for today. Thanks for the attention.
In this course, we examine the aspects of building multimedia database systems and give an insight into the techniques used. The course deals with content-specific retrieval of multimedia data. The basic issue is the efficient storage and subsequent retrieval of multimedia documents. The general structure of the course is:
- Basic characteristics of multimedia databases
- Evaluation of retrieval effectiveness, precision-recall analysis
- Semantic content in image-content search
- Image representation, low-level and high-level features
- Texture features, random-field models
- Audio formats, sampling, metadata
- Thematic search within music tracks
- Query formulation in music databases
- Media representation for video
- Frame/shot detection, event detection
- Video segmentation and video summarization
- Video indexing, MPEG-7
- Extraction of low- and high-level features
- Integration of features and efficient similarity comparison
- Indexing via inverted file index, GEMINI indexing, R*-trees
10.5446/339 (DOI)
Hello everyone and welcome to the wonderful world of multimedia databases. And last time we were beginning to talk a little bit about colors, about images. And we had, well, we saw color as the first and primary impression towards perception. So what you immediately notice, if you're not colorblind, is the contrast and the colors that are in an image, and that makes an immediate impression on you. And of course this can be formalized, so we were talking about a couple of color spaces, the RGB space, for example, or CMYK, usually used for printing. But also other spaces for building the actual histograms, for building the actual features; we reflected on HSV. Does anybody still know why we reflected on HSV? Exactly, it's not really a perceptual color space, but usually the distances, or the measurements of distances, in HSV give a pretty good notion of how humans distinguish between colors. And the basic form of HSV was this cylindric form, so you have the hue, which is the angle around the cylinder, and then you have the saturation going from the inside to the outside of the cylinder, and you have the brightness beginning very bright on top and then going down all the way to dark areas. So we were talking about how colors can be mixed and how colors can be subtracted or added or something. But in the end, we were interested in what we could actually do with this color. How could we compare images based on color? And the one idea that really makes it great is the color histogram, where we say, well, how much, what percentage of each color is in the picture? And then you can do all kinds of tricks with the layout where you say, no, you also have to consider where the color actually is. So the location of the color. And you can do a lot of tricks there, and it gets more complicated. But in the end, what you get is a feature vector, and there are different ways to compute similarities between these feature vectors, beginning from simple histogram distances, so just subtracting the different columns from each other, up to quadratic measures or the Mahalanobis distance, which takes the correlation between different colors and the similarity between different colors in the spectrum into consideration. And today we will be moving on from the simple colors to something that is also very interesting in recognizing images or describing images. And that is textures. What is a texture? Anybody want to venture a definition? Why is the surface of this table not like the surface of the carpet? Where's the difference? The material that is used, yeah, but I mean, you cannot see materials. And abstract from the color. So this is brown, this is gray, yes, I noticed. But still, also in terms of visual impression. Yes, meaning? Different reflections for the... Different, yeah. The light reflection is somewhat different. But could you describe it? Take a look at the table before you, and how would you describe it? Seems pretty smooth to me, too. Any ideas? It's hard, isn't it? So this is kind of stripey in a way, you know, like with the wooden stripes in it, you know? This is kind of pointy, I don't know. It's very hard to describe what it actually is and what makes it so different. But we can immediately recognize the difference.
Even if I would do the same in the same kind of colors or in the same types of, you know, like smoothness of the surface, if I would take exactly the same reflection properties, you know, I will immediately see that this surface looks somehow different from the surface over here. And the idea of this lecture is to describe this. How is it different? And this is what we call the texture, right? So we will be onto texture-based image retrieval today. We will just go into the basics of textures and find out what makes a texture, a structure, a surface pattern, or however you may call it, a texture. We will then talk about some features that could be used to measure or to describe such textures. And as in this time, we will also introduce the idea of low-level features and high-level features. Low-level features are very basic descriptions of something, high-level features are, well, basically intrinsic descriptions that are built by mathematical models. So can be pretty complicated, but usually give you a better impression than a low-level feature. They don't abstract as much. But both have their uses. So let's hop into that. So textures describe the nature of typical recurrent patterns in pictures. So if I look at the surface here of the table, there are the stripes, and there's not just a single stripe, but there's kind of a layer of stripes on top of each other. And they're not really regular, so I wouldn't go that far that I say this is shaded somehow. But the stripes are of different strengths, and some are perpendicular, no perpendicular, probably not, but some are vertical, some are a little bit angled or a little bit skewed. So there is a regularity, though it's not as regular as if I would go consider the blackboard and have the real vertical edge here. But the idea, why I know that this is a pattern is that it is reoccurring. It repeats several times. Otherwise, I wouldn't recognize it as something of a pattern. And that is the basic idea, that it is recurrent. A shading is only a shading if there are several lines. If it's just a single line, nobody would call it a shading. And the same goes for the pattern of the carpet here. That's rather pointy, or I don't know how to call it actually, and that is part of the problem. Because if I don't know how to call it, you will not understand what I mean or what I'm talking about without actually seeing it. So I try to describe the pattern of the carpet over the phone to somebody. That would be very difficult. It's like a grayish, pointy, carpet-y thing. Got an impression? Probably not. What we really need is a good description for those images, for those patterns. And I could use, on one hand, the objects. So this is a wood structure. And everybody knows what I'm talking about. It's pattern like wood. Because the different wood kinds are different in a way, but they are kind of similar to each other. It's all these stripes, it's all these big loops that they have, not holes probably. But everybody knows what I'm talking about. Not exactly, but everybody has an idea. I can only do that with things that are somehow natural, like the grass. Everybody has an idea of what's the pattern of a meadow. Just the leaf of grass beside each other. Or gravel, a heap of gravel. How does it look like? Well, it will be pebbles, different sizes. So it will be a little bit coarse. But also artificial things, like a brick wall. How does a brick wall look like? Well, usually it's kind of like bricks. And the next brick, you know, you have an idea. 
Though, I don't really say what it is, what makes a brick wall. Or how big the bricks are. That is the basic idea that we are looking at. Come on. Yeah, here's for the saving again. And the idea of this lecture, or our quest for today, whoops, our quest for today is to order and somehow describe random textures that may occur in images. This is kind of very regular, though it is a natural thing. It's bamboo. So how do we describe it? It's kind of parallel lines. Maybe that is easier. Kind of knotted, parallel and perpendicular pieces of something. It's really hard to do. Whoever tried it, it's kind of clear what I'm getting at. But it's totally unclear how to represent it in a computer. Because even talking to you and talking is a natural language. Speech is one of the most effective and efficient ways of transporting information. I go a lot kind of. A computer doesn't know kind of. A computer knows one and zero. And this is something that we have to consider. And actually, this is not only useful for multimedia databases. But the description of textures is very important in many areas of computer science. So we will also revisit a couple of techniques that you may well know from other lectures, like for example Fourier transformation as a typical high-level feature. Because textures are used in many other applications. So one problem is always the segmentation of textures. If I talk about a certain texture, I talk about a certain location in the image. If I talk about a wooden texture, not all that I see here has a wooden texture, but only this table. If I step a step aside, the wooden texture is gone. So it has something to do with segmentation. And talking about the texture of an entire image would need the entire image to be kind of totally covered in the texture. Which doesn't make too much sense. Because most pictures, I mean, take any photo that you did recently, it doesn't show a single texture. It will show maybe happy people, and maybe a tree that has a leaf pattern, and maybe there's sea sand or whatever. But there are certain elements in the picture. Trying to figure out which element is which is very important, called texture segmentation. Then we need to classify the texture. We need to know what we are talking about. OK, the area A over here is of the wooden texture. The area B over here of the image is of the carpet-y texture. The area C, is that a texture? Is it? Can you describe it? It's not really texture, is it? Because it's not regular enough. Because here are some words, and this is white, and this is green, you know? That's not really a texture as such. So some parts of the image is maybe very hard to describe in terms of texture. Also that is something that we have to think about. So this is basically the classification of the texture. And these two parts are the parts we definitely need for multimedia databases. We have to investigate incoming pictures, so pictures that are put into the database, or pictures that are put up as a query picture, what textures are contained, which needs segmentation. And we need to classify those textures to compare them between different images. So we need the classification of the texture. The third part that is very important, but that we will not go into is so-called texture synthesis. And this is one of the major features of, for example, computer graphics. Think about gaming. 3D engines. What is the trick there? The trick is to project textures on surfaces. And that makes it pseudo-realistic. Texture mapping. 
And for those kinds of techniques, it's the same problem over and over again. You need to classify the textures. You need to see how the textures look. You need to do ray tracing and whatever, you know, very complex algorithms to get a good visual impression of the texture. An impression that could fool the observer into believing this texture is real. This is a wooden wall in the computer game. Or this is a wooden table. You will immediately recognize that if you see it, just because the texture seems woody. Okay? So creating textures, that is a big part of texture research, but we will not go into that here. So for the texture segmentation, we want to find regions in the image which have a certain texture. And one often calls this scene decomposition. For example, here is a grape-y texture, whatever that may mean. And the texture over here in the rest of the picture is leafy. Very hard to describe, you know, like a lot of colors. Nothing really regular. Okay? But finding out the difference between the two leads to understanding what the image shows. What does the image show? It shows a bunch of grapes and some vine leaves. With the leaves, I can have recollections of how leaves look. What texture do they have? Maybe this classical one here with the stem and then the little branches over here. Okay? This is kind of how leaves look, and then the branches even branch out further. Okay? This is what we would expect of leaves. With the grapes, it's clear. Kind of like, oh, let's take red. Wee, wee, wee, wee. Kind of like all this bunch of grapes. So a very regular texture with little circles, basically spheres. Okay? This is what we would expect. And color and texture are usually related. So what you often do in texture segmentation is look at the colors of the image. So for example, you find the brown color here in the sandy part of the image. That gives you a certain texture, or, well, there's basically no real regularity there. But as soon as we look at the green part over here, we find that there's a certain pattern, which is kind of the change between light green and dark green parts. That has something to do with the color, but it's not true that textures always have to come in the same color; they can be very colorful or a mixture of different colors. Still, the periodicity of the pattern, the reoccurrence of certain colors, that might be a very good hint at what a texture really is or what area of a picture is really textured. If we can determine the segmented regions with a predominant texture, that might also have a benefit besides just being able to focus on a single texture, because very often in images, areas with a certain texture belong to the same entity in the real world. Think about sonograms or X-rays to some degree. The idea basically is to make things visible that are inside the body. Very often they are kind of color coded and you see, oh, this is the liver over here. It's this area that has the same typical form usually, but also a certain texture that reflects the sound waves in a certain way if it's a sonogram, or if you have tomography, it will be other kinds of rays that penetrate the body or the outer layers of the body and are reflected in a certain way. This way of reflection will be a certain texture that is imposed on this area or on the specific point in your body. Doctors actually can see things.
For example, in oncology, if you're looking at cancer, it's very often possible to see tumors just because there's some change in texture or there's some change in the way the rays are reflected. Same goes for satellite images. If you look at satellite images, you can immediately see what is water and what is land because the water has, besides the color, has a different texture. It has long, acute stripes which are kind of waves in the area close to coasts and a very flat, no texture area in the middle of the oceans. Whereas on landmasses, you usually have some mountains, you have some cities, you have some, I don't know, like forests or something that will change the texture very quickly. If you look at images of densely populated areas where you have agriculture, for example, you will have this carpet pattern, like with different cornfields and whatnot. What we have to do now for the classification is we have to describe the corresponding texture with some features or words or whatever that can be used by computers to compare the textures of different images, whether it's the same texture or it's a different texture and how close matching textures are. The classification on one hand can be semantic, so I can say, well, if something is textured like that in a medical image, it is the liver. Or if something looks rather bubbly in X-rays, it might be the lung. So there's a semantic meaning to the things. But this is very strongly dependent on the application. In medical imaging, one can do that. In many other kinds of remote sensing, it's just not possible to say what it actually is. It's like seeing something on the radar, like there is something, but you have no idea what it actually will be. But you can figure out it's not the background. It's not the background noise. Same happens if you look at photos, photos of friends. You will immediately recognize where the person is in the image and where the background is. You get an idea of that just by looking at the clues, because there's something that is hair textured around the head of a person. Well, very often, or more often than not. There's something that is kind of like textured here. Sylvia is a very good example of texture today, where you would expect the shirt. Yeah, exactly. You find regular textures and you will immediately focus on these areas that are more interesting to you. And the good thing is about the textures that you can actually skip those parts that are of a texture because you recognize it and then you go, oh, this is the shirt, no interest. This is the face. I have to look at it to see who it is. So also by segmenting the images, this classification is also very helpful. If we consider the classification not to be in real world terms, this is the shirt, this is the hair of a person or something like that. But if we rather say, well, this is a striped area, this is a wood textured area, whether it's a table or chair or wall panel or whatever, this is just a pattern area. Then it allows us to compare between images. You can say, oh, this is an image that has a wood covered or wood pattern area and this is another image that has a wood pattern images. It doesn't care if one shows a table or the other shows a chair. It's not semantic anymore, but it's just the visual impression that it puts on us. And that actually allows us to compare between images, compare, describe the visual impression. This is kind of the same trick that we did with the colors last time. We didn't care about what the colors actually depicted. 
Was it an elephant that was shown there, or was it a carpet that was shown there, or was it Sylvia that was shown there? Well, that's hard with the elephant, but sorry, we just focus on the colors. We can do the same here. And we can also do something that is called query by example. We just say, okay, if I'm looking for a wooden table, then I will just give you a piece of wooden pattern, whatever it may be. I want all the pictures having this wooden pattern in them. Frees me from having to do something like annotating every image. What is a table? Can be very helpful. Anyway, so for the classification of images, one of the classical examples is satellite images, where you do it semantically. So you look at the things and find out this here is a river with a very smooth texture. This here is sand, which has a very light and coarse texture. And then you can use it for later segmentations of what you're looking at. The question is really how to describe textures for the measurement. And there are, on one hand, low-level features that just say, well, what are the building blocks of the texture? What makes the texture the texture? So also a shading starts with a single line. And then you add this parallel line, and then you add a third line, and then you probably have a shading at some point. Or you can have high-level features that are kind of a mathematical interpretation of how things, different patterns, different statistical characteristics of patterns, reflect on the viewer. So typical examples here are Gabor filters or the Fourier transformation. We will go into that during the course of this lecture. The interesting question that always remains, or that is always there once you want to build a system that is really useful to humans, is how do people distinguish textures? How do we find out that this is a regular pattern? So how do we, what would you say? Any ideas? Yes? Some little black lines here, as you can see. So there's no black line here. So if we walk through this image, pixel by pixel, then at some point we will hit a black line. And once we are over it, we have kind of the same distribution of pixels, or of colors, of intensities as before. What happens if we walk in this direction? We will not hit black lines. So the change of light intensity or color when walking in different directions of the image seems to be something that could be used. Other things? Well, would you say that the texture is different in different parts of the image? Maybe I just showed the lower part here of the image. It's much harder to see the texture there than looking at this part of the image. Okay? Why is that? So looking at this part, one can see it. Looking at this part, I would say one could see it better. Why is that? Yes? So if we do what you exactly said, we walk that way, and we do the same over here. In the upper corner, we find that we hit very fine lines of black. Whereas, as you say, in the red part of the picture, we find that there's a lot of darkness. So we had longer stretches of black that might belong to a shading or might not. So also this is kind of what characterizes a texture. And this is kind of what gave the first idea. So there are basically three main criteria. One is the repetitiveness. Something is not a texture if it does not repeat in some interval. Hitting one black line is not enough. You have to hit black lines at regular intervals. It also has something to do with the way you walk visually through the image.
Because in one way, you may hit black lines periodically. In the other, you may hit nothing at all, which makes it a shading, basically. So this is orientation. And the other thing is the complexity of the pattern. There are very simple patterns and very complex ones. It's like carpet weaving. You have the very simple carpets that are just striped, very easy. Or you have the ones with elaborate floral patterns and everything. This is also a pattern, but very hard to describe because it's more complex. So Rao and Lohse actually define three criteria, the repetition, the orientation, and the complexity of some pattern, that actually make for the possibility to discover its nature or to describe its nature. And the question, of course, is can we measure that? Do we have any chance to find it out? And the idea of describing such patterns is actually older than computer science; psychologists were already interested in it in the 60s and 70s, long before computer science adopted the problem because it wanted wonderful 3D ego shooters. The 60s and 70s basically focused on gray level analysis and said, it has something to do with going over the picture in different directions and finding out when you hit a black line. So what we do is we take the gray values of the pixels, so we have the intensity of each pixel. We build a histogram on that and just count how many light pixels are there, how many dark pixels are there, and so on. We get these histograms. These histograms could be compared to each other like we did with the color histograms. And we can use some statistical information about these histograms. Do they have just a single peak? Are there several peaks? Something periodic, maybe? What's the standard deviation if you have peaks? What's the expected value or the mean or the median or whatever? And the idea, of course, was that similar patterns would produce similar kinds of histograms, similar types of histograms. And since you abstract from the color by just taking the gray values, by just taking the intensity values, similar patterns should look similar in the histograms. And if you then take moments of the first order, which is basically the expected value, you throw away all the information of where each pixel is located. So if I say there's 50% black pixels and 25% gray pixels and 25% white pixels, it could be that it's just a bar of white pixels, a bar of gray pixels, and a bigger bar of black pixels. And it could also be that it's kind of shaded, that they are well mixed, which makes it very hard to see the periodicity. So if I look at this picture here, there's basically no periodicity, and we get a color histogram that looks like that. What does it tell us? Well, there's very little black here, then it goes up here. Something like that, with an expected value somewhere in the middle gray area. This is what you would expect of your usual picture. But you've thrown away all the information about where each pixel is located by using this histogram. And the solution to that is the gray level co-occurrence. You want to find out where the different intensities of pixels co-occur with other pixels. If you have a pixel at some certain position, it has an intensity, say q. And one of the first approaches, or one of the first investigations of that, was also done in psychology. It was Julesz in 1961. And he said, well, basically what I have to do is I have to look at each pixel in some image, here for example, a picture.
And then I have to see, taking its gray value, which is here the blue one, what happens if I move it in some direction? What is the expectation that this gray value changes? And if I have a pattern, for example, like this, and I take a pixel, a white one this time, and I move in this direction, the expectation that the intensity changes is zero. If I move in this direction, the expectation that it changes is very high. And that is the same for all the pixels here. That is something that is interesting now, because now, not really looking at the exact point where a pixel is located, I can still describe some characteristics of it. For each pixel anywhere in the picture, if it is a regular texture, shifting it in different directions should result in the same probability distributions of changing its color value or changing its intensity value. Is this a clever idea? Psychologists, hey. So, what he did is he calculated the empirical probability distribution for intensity changes of the value at pixel shifts. And he just used shifts to the right. So he said, well, basically, my pixel has the intensity q. And if I shift it d positions to the right, then its intensity is m. And I want to know the probability. And with that, I get a probability distribution over the whole picture. And if the probability distribution for two pictures is identical, the texture is identical. Wow. Well, of course, this is also only true if you have longitudinal changes. If it's a texture like that, it doesn't help you, because it's kind of not changing. So then a couple of years later, Julesz said, well, yeah, I see that, and generalized it to shifts in different directions. So in any direction that we walk through the picture, we need the probability distribution. And then, as a two-dimensional distribution function for every single picture, we get our probability estimation. And this gives us actually the gray level co-occurrence matrix. So for any direction, I just say, what is the expected pixel change? Yes. The gray level co-occurrence matrix basically considers all the pixel pairs within a Euclidean distance of d. And for all these pixel pairs, the entry in the matrix is the probability of the shift. So if a point (x1, y1) has a gray value of i and a point (x2, y2) has a gray value of j, and that's, I mean, gray values are usually 0 to 255, it's easy, you will define in the matrix, in the gray level co-occurrence matrix, for the field (i, j), the number of pixel pairs that have exactly that shift within the distance in any direction. I just count them. How many pixel pairs are there in the picture such that if I move d pixels in any direction, I get a shift from intensity i to intensity j? Big matrix. What can you do with this matrix? Well, what you can always do with matrices, you can compare them to other kinds of matrices, very efficiently, actually. If you have one of these matrices for every picture, and they are the same, then the texture should be the same. And actually, it's the thesis of Julesz, that was done in 1973, stating that if two pictures show the same, nearly the same, gray level co-occurrence matrix, then it is not possible for humans to distinguish between the patterns in them. Well, nice thesis, but wrong. Interestingly, newer perception psychology shows that it's not like that. And actually, one of the psychologists showing it was Julesz himself, a couple of years later, seeing, well, the theory is nice, but the tests don't work out so well.
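To make the matrix itself concrete, here is a minimal Python sketch of how such a gray level co-occurrence matrix could be counted. The eight unit directions scaled by a distance d follow the description above; the function name and the normalization to probabilities are choices of mine, not a prescribed implementation.

```python
import numpy as np

def cooccurrence_matrix(gray, d=1, levels=256):
    """For every pixel and every one of the eight directions scaled by the
    distance d, count how often gray value i occurs next to gray value j.
    Normalizing the counts gives the empirical probability distribution."""
    gray = np.asarray(gray, dtype=np.intp)
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    offsets = [(0, d), (0, -d), (d, 0), (-d, 0), (d, d), (d, -d), (-d, d), (-d, -d)]
    glcm = np.zeros((levels, levels), dtype=np.float64)
    for dy, dx in offsets:
        ys2, xs2 = ys + dy, xs + dx
        valid = (ys2 >= 0) & (ys2 < h) & (xs2 >= 0) & (xs2 < w)
        i = gray[ys[valid], xs[valid]]
        j = gray[ys2[valid], xs2[valid]]
        np.add.at(glcm, (i, j), 1)     # accumulate pair counts
    return glcm / glcm.sum()
```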
But humans don't behave like they're told to, usually. Especially not if they're told to perceive something by a gray level co-occurrence matrix. So as a rule of thumb, similar co-occurrence matrices indeed do point to the same textures, but it's not really true that it has to be the same. You can trick the system. Okay? But as a rule of thumb, interesting enough. So this was the first idea that was actually along the lines of, yeah, if I go through the picture and hit the black line, like, so I have the pixel, gray intensity change, yeah, this is what I'm doing. It's exactly the idea that the people had in the 60s. So you seem to be kind of a 60s man. And it was a good idea. I mean, it was the basis of color management. But then it became clear that this is not the whole truth, and on one hand, these gray level co-occurrence matrices are very hard to compute. It's not very efficient, because you have to shift every pixel. Then you have to look at the different color values that you get from the shift. Then you have to count them. Then you have to put it into the matrix. And you have to repeat that for all the pixels in the picture. This is rather tedious work and will, even in the age of computer science and very quick computers, need a lot of computation time to prepare the images and also to prepare the query image. And this is the time that really is needed. So Tamura and some other fellows in 1978 said, well, basically, maybe we take it down a notch. We don't go from every pixel in every direction and look at the thing. Maybe there are some basic characteristics that more or less describe the image. And we can have a smaller feature vector than this huge quadratic gray value matrix, which at the lowest resolution is a 256 x 256 matrix. If you consider more gray values, it will grow. And they said, well, basically, what we can see is that the granularity, the coarseness of the image, has something to do with the perception. So if you consider gravel versus sand: sand is very, very fine granular looking at it. It's a smooth kind of texture. Gravel is kind of, you see the individual pebbles and they differ in color. So it has a coarse impression. Then the contrast: there are areas of light, there are areas of shade. The more shade there is, the more the areas dissolve into each other. The less contrast you have, the more different will be the perception of the texture. Directionality: if I go in some directions, the intensities of the pixels change very quickly. If I go in others, think of the bamboo, going along the cane of the bamboo, they don't change at all. It is the same color all over the picture. Line-likeness: does it look rather pointy and elongated, or is it rather bubbly? Like pebbles. I can immediately discern that. The regularity of the pattern: does it really repeat, is there a periodicity? And finally the roughness, that is the impression that, whereas this wooden structure seems very smooth to us, the carpet structure does not seem smooth. It seems rather smooth, but if I look directly here, I see some irregularities which do not make it smooth. So this could also be used. And they were actually measuring, or trying to figure out how to measure, these things and found out that basically the last three seem to be correlated with the first three. If you've got the first three, the other ones are more or less linear combinations of the ones before. They seem to be dependent on the other ones. If you have a strong directionality, you will also have a strong regularity.
Or if you have a strong directionality, the line-likeness is going to increase. So they looked at the different correlations between them and crossed out the last three. So this is what we want to do. For the granularity, it has something to do with the image resolution. So for example, if you look at aerial photographs from different heights, you will find that here you can see the buildings on the left-hand side. Whereas this is the same picture, just a different resolution. This is both Manhattan, is it? Yes, seems to be Manhattan. So this is the tip of Manhattan, Statue of Liberty here. And this is a housing block in Manhattan. And you can actually see the different houses here, different skyscrapers. Okay? It gives you a totally different impression. But how do you measure it? Any ideas? Scaling. Scaling. And then? I mean, if you scale each image enough, you will end up with the individual picture. It doesn't really help you. You have to take the picture as it is. Yes? The size of the small picture area. Yes! There we go for the 80s man. So why don't we look at different sized pictures, or frames in the image, and look how regular the colors are in there. So basically the idea is I take a rectangle of a certain size and I move it over the picture. And if I do that with the same sized rectangle here and there, assuming this is the same sized rectangle, I will find out that this rectangle here very often hits houses of the same color. Here it will not. There's always a mixture of different colors. This is one way of describing the granularity, and this is actually what you do. You examine the neighborhood of each pixel for brightness changes. Not actually the color, but the brightness is enough. So you work for each pixel. You have a window of size 1 x 1. You start with 1 x 1 and go up to 32 x 32 pixels. So different sizes. And you just record for every pixel in the image what the brightness change within this area is. These are, for example, the typical values of IBM's QBIC, Query By Image Content, which was one of the first running systems for multimedia databases. And then for each size of the window, you record the average gray level in the corresponding window. So you get one for each pixel, for this, for this, for this, for this size. A distribution of gray values for the different sizes. Good. Then you compute the difference of means of gray levels between this window and the window next to it and the window on the other side. What does that help you? Well, if you have three windows directly adjacent and of roughly the same size, and there's a change in the gray level distribution, then it means that it's not a regular area. If there's no change or very little change, the three values, the three windows, may belong to the same area. So going back to our example here, if I take any three adjacent windows, I will find in this case that the gray value has not changed very much. So this seems to be one area, making this area of the image very coarse. Okay? If I do the same over here, it changes very much, showing that this area is very fine granular. So these three different windows do not belong to the same area, to the same object. So this seems to be rather fine granular. And I can do the same over the whole image with different sized windows. And for each pixel, I determine the maximum window size where it has the maximum difference from its neighbors. Okay?
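A minimal Python sketch of this window-growing idea, under a few assumptions of mine: window sizes are powers of two up to 32, borders are handled crudely via wrap-around, and the final image value is the mean of the winning window sizes, as recapped next.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coarseness(gray, max_k=5):
    """For window sizes 2^k (2 .. 32 pixels), compute the average gray level
    around every pixel, compare neighbouring windows in horizontal and
    vertical direction, and pick per pixel the window size with the largest
    difference. The image coarseness is the mean of those window sizes."""
    gray = gray.astype(np.float64)
    best_diff = np.zeros_like(gray)
    best_size = np.ones_like(gray)
    for k in range(1, max_k + 1):
        size = 2 ** k
        avg = uniform_filter(gray, size=size, mode="reflect")
        # differences between window means centred `size` pixels apart
        # (np.roll wraps around at the borders, a crude simplification)
        diff_h = np.abs(np.roll(avg, -size, axis=1) - np.roll(avg, size, axis=1))
        diff_v = np.abs(np.roll(avg, -size, axis=0) - np.roll(avg, size, axis=0))
        diff = np.maximum(diff_h, diff_v)
        better = diff > best_diff
        best_diff = np.where(better, diff, best_diff)
        best_size = np.where(better, float(size), best_size)
    return best_size.mean()
```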
So basically what I'm trying to do is, if I have an image, I work with these little windows of different sizes and I basically try to blow them up until they fit the coarseness of the pattern. And if I have a very fine granular image, this will not be possible, because already at small windows the distribution between adjacent windows will change. If I have stretches of the same texture, of the same color, it will be possible to blow them up, because the adjacent windows still have the same texture, meaning the same gray level distribution. Clear? Yeah. That is basically the idea of what you would do. And the granularity of the entire image is the mean of the maximum window sizes of all pixels. So if you have one area that is very coarse and one area that is very fine granular, you basically just take the mean between them. And you can also use a histogram mapping the number of pixels corresponding to each window size. Then you would have kind of how much coarse texture or how little coarse texture there is in the image. Or you could just use a single coarseness value for the entire image, just as the expected value of that histogram. Very well. So there's one problem with that, and that is that the image selections whose granularity you need to determine might themselves be very small if you segmented your total image into different texture areas before. So you may be left with very small places where you have to find a texture. So consider an image: I'm just sitting here on this table and for the most part you see me, and you see some wooden texture here right beside my knee. In the image that would be just a very small part. Most of the table structure would be covered by me. And this is one of the problems that you have to deal with. Well, there are actually some ways then to estimate the maximum delta, the maximum difference, from smaller values. So if you can't blow up the pixel windows that you move over the picture to a certain size, there are still ways to do that in a probabilistic fashion. You might look it up on the web page. The second part is a little bit easier. It's the contrast. So we have focused on the coarseness of the picture. Now we want to focus on the contrast of the picture. And the contrast is kind of the clarity or the sharpness of the color transitions. Also the shadows that are there. So for example, again looking at Manhattan here, we find that this is very much a grey area. You know, you can hardly see the different skyscrapers in some areas of the picture. You can distinguish some here, yes, but it gets difficult because the contrast is very low. Whereas here you can see clear cuts between parts of the image. A very high contrast image. This is something that you can easily measure. So for example, the contrast value is derived from the grey level histogram distribution. You just build the grey level histogram: for each pixel, you record the intensity and add to the column of the histogram for that intensity. And then you look at the expected value of the histogram. And the contrast of some picture is actually described by this histogram. It's the standard deviation divided by the kurtosis, which is the standardized fourth central moment. Everybody knows about statistical moments? The first statistical moment of some distribution is, ah, you need to polish up your statistics at some point, the expected value. Basically, moments are statistical values that you get from distributions to describe the kind of distribution.
So how can I describe such a distribution versus such a distribution? Well, to distinguish between them, I could use just the expected value. Okay? This doesn't tell me the whole story, but this is the first moment. How can I distinguish between such a distribution and such a distribution? Well, they have the same expected value. So the first moment is exactly the same here. So we can derive the second moment, which is the variance. That clearly distinguishes them. Then versus something like that. Third moment, the skewness. Okay? And so on. So these are different statistical measures taken on the probability distribution. And if you look at those distributions, you could also imagine them as histograms, as a gray level histogram. And this is what we use here for describing the contrast. Okay? We could use the expected value. We could use the variance of the histogram. We could use the skewness of the histogram. But what we use is the kurtosis, which is basically the fourth central moment divided by the fourth power of the standard deviation. So please look it up in your statistics books if you don't know it. It's not too interesting. And the kurtosis is not there to discover the mass of the distribution or the variance of the distribution or the skewness of the distribution, but actually the number of modalities of the distribution. So that means what we can distinguish with the fourth statistical moment is such a distribution from such a distribution. Okay? Bimodal distribution, unimodal distribution. This can distinguish between them. Okay, this is what we do. Still, directionality. How do we deal with that? Well, it's kind of the predominant direction of elements. And seeing that, as I walk through the image, I will hit black lines here very quickly, in that direction not as quickly. In this direction, always very quickly. So this is the way of distinguishing between the images. And what you look at is the gradient of the color change. If I walk through this image in this direction, which is highly directional, I will find that the gray level values go like this. Because here's black, I go to light, I go back to black, I go to light, I go back to black. Okay? This is exactly what happens here. Black, go up to light, go back to black, and so on. What happens if I go in that direction? It looks like that. I have some color and it never changes as I go through the image. The gradient here is zero. The gradient here is quite high. Okay? So the gradient is a good measure, when walking over the picture, for the change of colors. Same goes here. So here, in both directions, the color changes very quickly. Okay? High gradients in all directions. This is a way to distinguish between the two images. Yes? It's a problem. So what do we do? Perfect! Typical computer science solution. Pragmatic, and it works. Yes, there are arbitrarily many directions, but what we will do is we will just stick to eight of them. Like in the good old nautical charts, there's east, west, north and south and the things in between, and that's it. And this is what we do. So for directionality, we just determine the strength, the magnitude and the direction of the gradient in each pixel. So we can, for example, use a Sobel edge detector or whatever. So we fix a pixel, then walk in the different directions and look at the gradient. The magnitude of the gradient determines whether there is a big change or not a big change. Okay? Same here. Good? What do we do?
We build histograms over the directions: the number of pixels that have a big gradient in that direction. Okay? So we create the histograms, for each angle the number of pixels with gradients above a certain threshold. Okay? And if there's a dominant direction in the image, there will be a peak in the histogram, because many pixels will have a high gradient there. Okay? Easy. What could happen now is that I'm not interested in the direction, so, you know, if something is shaded this way or shaded that way, is this the same texture or not? Hmm? Depends. It's a very good answer. Yes? Exactly. So it is kind of the same texture. But one could argue that in certain semantic occurrences, if you have a horizon, for example, it might not be the same texture. So what you can decide is, you could say, well, I want my measure for directionality to be invariant with respect to rotation. I want to say, well, basically, it's just the way the photo has been taken of this wooden structure, whether I photograph it this way or whether I photograph it this way. It's still the same texture, and it's rather random what kind of photo I got. It's unfair to kind of punish the pattern for the photographer. Then I should vote for something that is rotation invariant. But if it really matters that something is horizontally striped or vertically striped, then I should probably not. So this is a design decision that we can make. Good thing, our directionality measure, our histograms, can be made both ways. If I note the different directions, 8 or 16 or however many you want, and leave it like that, it's not rotationally independent. If you have a strong directionality in north-south direction, this is a different texture from if you have a strong directionality in east-west direction. On the other hand, I can also say, well, do not look at the different columns. Just count how many columns there are with different directionalities. Is there a single predominant directionality? Are there two? If I do that, it becomes rotation independent, because I don't look at the exact histogram or the exact places of the histogram columns, but just look at the structure of the histogram as such. Good. Tamura then went on to show that these first three measures, the coarseness, the contrast, and the directionality, are not correlated. So they are independent with respect to each other. So the distance measure between two images with respect to the texture could just be the coarseness value plus the contrast value plus the directionality value, a Euclidean distance, just a simple thing, divided by scaling factors, which are basically the standard deviations, to kind of normalize between the three features. That's it. Our first texture measure. So now we can take images apart. We can measure three aspects of the image and compare images with respect to each other in a digital computer system. Ain't that wonderful? You don't seem so happy. It's great. Anyway, and it actually works. So if you do it, this is the IBM QBIC system, and I took here a couple of coats of arms. So this is an ermine pattern, and I try to find it in different pictures, and you see here that with rising distances, things get more and more different. So of course, the first one is an exact match, but also these inverse-colored, slightly different patterned images are immediately recognized, and the more you go towards something that is rather striped here, the higher the difference measure will be. Good.
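Here is a hedged Python sketch of the remaining two measures and of the combined distance. The exponent 1/4 on the kurtosis is the form commonly used for this contrast measure (the lecture only says the standard deviation is normalized by the kurtosis), and the gradient threshold, the bin count and the scalar summary of the directionality histogram are assumptions of mine.

```python
import numpy as np
from scipy.ndimage import sobel

def contrast(gray):
    """Standard deviation of the gray values, normalized by the kurtosis of
    their distribution (fourth central moment over sigma^4)."""
    g = gray.astype(np.float64).ravel()
    sigma = g.std()
    if sigma == 0:
        return 0.0
    kurtosis = np.mean((g - g.mean()) ** 4) / sigma ** 4
    return sigma / kurtosis ** 0.25     # commonly used exponent, an assumption

def directionality_histogram(gray, n_bins=8, threshold=10.0):
    """Histogram over eight gradient directions, counting only pixels whose
    Sobel gradient magnitude exceeds a threshold."""
    gx = sobel(gray.astype(np.float64), axis=1)
    gy = sobel(gray.astype(np.float64), axis=0)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)[mag > threshold]   # keep strong edges only
    hist, _ = np.histogram(ang, bins=n_bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)

def texture_distance(f1, f2, scale):
    """Distance between two feature triples (coarseness, contrast, a scalar
    summary of the directionality histogram such as its peak value), each
    normalized by a scaling factor, e.g. the standard deviation of that
    feature over the whole collection."""
    return sum(abs(a - b) / s for a, b, s in zip(f1, f2, scale))
```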
The second possibility of computing the similarity is so-called random field models. So you could also say that basically your image is a random variable, the random part being, for each pixel, the intensity. And if you set the intensity of some pixel, then it will, via the pattern, influence the intensity of other pixels. So for example, if I have one of my nicely striped patterns, then knowing that this pixel here is red influences the probability of the color of this pixel. Because if it is a pattern, this pixel should be white. The same goes for this pixel. It would not be regular if this pixel were not red too. So knowing this pixel gives me at least an impression, if it is a pattern, given that it is a regular pattern, of the surrounding area of this pixel. And this is basically the idea. So the basic idea is: textures repeat periodically. If something is not regular, it's not a sensible pattern. So what you have to do when you do the synthesis of patterns is basically you have a small sample of the pattern and then you just repeat it, put it together. And what you do to make a realistic pattern, what do the game industry people do to make it look realistic? They introduce some errors, because if it would be a perfect pattern, it would look artificial. In real-world textures, there are no perfect patterns. There's always a little noise, and this is like, you have the tree with a lot of leaves, but they are not side by side. One may be slightly behind the other. Some may be missing for some reason or other. And the same with the brick wall. The brick wall is very regular. But there is a chip taken off some of the bricks. There is some smear of the mortar somewhere that makes it look realistic. So introducing a certain irregularity is basically a good way of making a texture look realistic. Why don't we do just the opposite thing? So from looking at a somewhat flawed pattern, we could find out what was the model that generated this pattern, the statistical model with which this pattern was synthesized. And if we decide on a certain class of models, then we look for the parameters of this model that, with high probability, were used to produce the texture. These parameters are a perfect description of our pattern. Everybody understood the idea? So in texture synthesis, you use statistical models to change a texture a little bit, just slightly, to introduce some noise, to take the edge off some of the artificiality, if you want to put it like that. Using this model will result in similar textures. Now also the opposite is true. Similar textures will result in the same model. Knowing what model very probably, or with the highest likelihood, is behind the synthesis of some texture helps us to describe the texture. Good? Actually a very simple idea. So if you have a good model, you create different but very similar textures. And we do this the other way around. So which model, which parameters for a certain model class, generate the textures occurring in an image in the best way? Okay? Well, how does a model generate a texture? Let's just assume we have some model, call it X because I don't know its name. And using this model, what are the expected intensity values of the pixels? Fix a pixel at some point, okay? And look at the surrounding. Based on the intensity value of the pixel and based on your statistical model, I can predict with a certain probability the intensity of all the other places. Okay? For example, if my model created this texture, it is obviously a model that creates stripes.
But then knowing that this pixel is black should increase the probability, using this model, that also this pixel is black and this pixel is black. And at the same time it should increase the probability that this pixel is white and this pixel is white. Okay? And of course in the created texture, this could also be black, sure, yes, because this would be an error. But in the long run, with the highest probability, it's white. Now we do it the other way around. We take the pixel, look at its surrounding, and see this surrounding as an observation of a statistical model in action. Try to figure out what the parameters for this model are. This is our feature vector for the image, for the texture. Okay? Good. So, if we do the same here, it's a more complex pattern. So also the model has to have different parameters. If it would be the same, the upper and the lower neighbor should be white if our pixel is black, and the right and the left should be black. It would also be some kind of stripey pattern. But it has to be different here. For irregular patterns, it's a very complex model and the parameters definitely have to look different. So let's go. We describe the image by some matrix. Basically, this is the image with the different pixels, look, inside. We take the intensity value of each pixel and put it into a matrix. Okay? Same entries as pixels in the picture. Okay? This is our matrix F. Nothing happened yet. Now we assume a model where this matrix is a random variable. Of course, a matrix is two-dimensional. This is why they call it a random field. Okay? It's a two-dimensional random variable. Nothing more. If I know the distribution class of F, by just assuming some kind of model, I still have to look for the parameters that resulted in this observation. So we have an image. We assume that it is an observation of this model creating those matrices. What are the parameters for the corresponding distribution? And this leads us basically to a maximum likelihood estimation. So what are the most likely values creating this specific matrix and therefore the specific picture? Okay? So a picture is seen as a matrix with intensity values as the entries. We assume there is a common model always producing the matrices. We take all the textures that we have as input values, as observations of our model, and then do a maximum likelihood estimation. What are the parameters of the model that created these observations with the highest probability? And this is what it's called. The problem is the dependency. So if I look at a certain pixel, will it influence the color of its neighbors? Well, if it's a pattern, yes. Otherwise it would not result in a pattern if I use the probability distribution. How about the neighbors that are a little bit farther off? Surely also yes. How about things over here? Well, transitively, yes. But on the other hand, if I look at some pattern, locally I can see a very strong connection. Over long distances, things might have changed. It might not be as regular. Look at this wooden structure here. Yes, it is striped. But I can see a very good and very regular striping in small areas of the wooden part. Just saying this goes on until the edge of the table over here is not true, because there are some irregularities in between that would change it.
So what we can do is we can have some idea of locality and just say, well, basically, if the neighbors to the left and right are white and the up and down neighbors are black and we have some striped thing, the pixel considered here has a very high probability of being white. And this is not influenced by some pixel over here. Just look at the immediate surrounding of any pixel to determine its value, or its most probable value. Anybody see it? Well, how many of you have actually heard statistics? So many. Interesting. Then nobody obviously sees what this characteristic is. This is the Markov property. So probably many of you will have heard the terms Markov chains or hidden Markov models. So just as a name, this is actually one of the properties of Markov chains or Markov stochastic processes, that you always restrict yourself to locality. So the probability of some event occurring only depends on the probability of events occurring in the neighborhood. So immediately before, two time steps or whatever before, but not 50 years before. Makes it easier to calculate. So we just assume that the value of some pixel does not depend on the values of all the other pixels in the image, but just on the values of the pixels in the neighborhood of this pixel. And this is called the Markov property. Again, like in the directionality idea, we say the neighborhood is basically shifts. Five pixels to the left, five pixels to the right, five pixels up, five pixels down, and the diagonals. So this is it. The neighborhood of some pixel s is: start from the pixel s and go t in some direction. And what we basically do is we will just go one pixel in every direction possible. Enough for us at the moment. So (0, 1), (0, -1), (1, 0), (-1, 0), and the diagonals (1, 1), (1, -1), (-1, 1), (-1, -1). Okay. See, these are the neighborhood pixels here, for pixel s. Now we have to define a model that reproduces the observed distribution, our images in the collections, our textures, with the best value, with the best parameters. And actually, for everybody who has heard lectures on or worked in computer gaming, there are a lot of different texture models. They all have their drawbacks and their advantages, but basically there are a lot of them. And of course, if we want to compare images from different collections, we cannot say, well, let's just assume different models and different parameters and different everything; we have to restrict ourselves to one class of models and then just look at the parameters, and the difference in the parameters will be kind of a measurement for the difference in the texture quality of the images. So a popular class of models for texture representation are so-called simultaneous autoregressive models. Basically what they do is they take the different intensity values from the pixels of the neighborhood, each weighted with a certain parameter. This kind of parameter is what we later need for our feature vector, because it is characteristic for each different texture. The neighborhood is the same for every texture, just moving, but the way the neighborhood is influenced by the color of some pixel, that is different. In a striped thing, the neighborhood in these directions is influenced towards being white. The neighborhood in this direction is influenced towards being black. If you have some, I don't know, pebble-type pattern, then the neighborhood is influenced towards having kind of the same characteristics, or the same intensity, as our pixel. And this is basically encoded in this factor.
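One common way to write down such a simultaneous autoregressive model is the following; the notation is mine, not the lecturer's, and the beta-scaled noise term is exactly the factor discussed next:

$$ g(s) \;=\; \mu \;+\; \sum_{r \in N} \theta(r)\, g(s+r) \;+\; \beta\,\varepsilon(s), \qquad \varepsilon(s) \sim \mathcal{N}(0,1), $$

where $g(s)$ is the intensity at pixel $s$, $N$ is the set of neighborhood offsets, $\theta(r)$ are the direction weights, $\mu$ is the mean gray level, and $(\theta, \beta)$ are the parameters estimated per texture.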
Here, this factor is noise: just a random variable with mean 0 and variance 1, so a distribution spread over the whole probability space. Basically this adds white noise, totally at random, to account for small errors in the textures given by the observations. So this parameter, how much noise is in the image, how much noise belongs to a certain texture, is also characteristic. These are therefore the two kinds of parameters we want to estimate with maximum likelihood: the neighborhood weights, the thetas, and the noise strength, beta. This is what describes our texture. The other parts are the same for every image in the collection: we look at the same size of neighborhood and at the same kind of Gaussian noise; only the strength of the noise and the way the color is influenced in the different directions differ between textures. That's it. The problem is that restricting ourselves to some fixed neighborhood also means that textures of different periodicities may or may not be detected. For example, I have a texture like this and a texture like that, both striped but with different spacing. If I look at the same size of environment around a pixel, in one case I clearly detect a pattern, in the other case I detect nothing at all, because within that window everything looks the same to me. This is difficult, and unfortunately it is not a trivial problem. What is often done to solve it are so-called multi-resolution simultaneous auto-regressive models: I don't use a single model, but several models of the same type, simultaneous auto-regressive models, with different neighborhood sizes. Then the feature vector is not only theta and beta for one neighborhood size, but theta and beta for a neighborhood of 1 pixel, theta and beta for 2 pixels, theta and beta for 4 pixels, and so on. This gives me the image at different resolutions, corresponding to the different neighborhood sizes. So this is what I can do, and what I end up with is a feature vector consisting just of these parameters. That is a high compression of the image's texture, isn't it? It's wonderful: we had a complete image before, and now we have two values, or, if we use several neighborhood sizes, maybe 8 or 16 values. We compressed the texture information of an image into 16 values. That's great. The only assumptions we made are that the Markov condition is valid, that textures are of a local and repetitive nature, and that we chose the neighborhood size well for the pictures in the collection, either manually or by trying several sizes and seeing which works best. That is what you have to do. Good. Should we go for a short break? Break time. What's the time? Half past eleven. Ten minutes. Very well. So what I showed you just now were two basic ways of describing textures with very little effort. One was coarseness, directionality, contrast: measure them, determine the values for each picture, that's your texture measure, and you can compare it. The second one was random field models: do a maximum likelihood estimation of the model parameters, and that's your feature vector for the picture.
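The lecture leaves open how theta and beta are computed in practice. A common shortcut, instead of a full maximum likelihood fit, is an ordinary least-squares fit of the auto-regressive equation; the following MATLAB sketch assumes that shortcut and reuses the nbrs matrix from the previous sketch, so take it as an illustration of the idea rather than a reference implementation.

    mu = mean(F(:));
    y  = F(:)  - mu;               % centered pixel intensities
    X  = nbrs  - mu;               % centered neighbor intensities

    theta = X \ y;                 % least-squares estimate of the neighbor weights
    beta  = std(y - X*theta);      % spread of the residual, i.e. the noise strength

    feature = [theta; beta];       % texture feature vector for this neighborhood size

    % Multi-resolution variant: repeat the whole thing with shifts of 2, 4, ...
    % pixels and concatenate the resulting [theta; beta] blocks.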
But there are also other ways of describing textures and they are ways that are derived from signal processing actually because, haha, interesting isn't it? Because what is a texture? Well, a texture is a periodical pattern in the continuous intensity information of an image. So if I see the line of each image as a sequence of intensity information to detect a periodical pattern in that, I can use the tools of... No, not for you, the tools of signal processing. Right, that was what I was looking for. So and this is what often is called transform domain features. So I go from the signal as such, the intensity information as such, into some other domain, talk about frequencies, how often does something occur, talk about the amplitudes, how high is the change or what is the rate of change, and I can talk about these things rather in a different domain, not in the image domain anymore. And this is something totally different than the low level features that we had before, which kind of abstract from the feature, from the actual picture. So if I tell you there's a certain granularity or a certain shading in the picture, you can't really reconstruct the picture from that. The granularity, it will give you an idea how it looks like, but you can't paint it. On the other hand, if I tell you, well, there's a signal of intensities and going along the system and measuring the amplitudes and measuring the periods and the frequencies and whatnot, you know, change rate, I can reconstruct the signal. So I can reconstruct the actual image. This is the idea here. We don't focus on certain aspects, we focus on the whole image and we can reconstruct the complete picture from that if it's lossless. And this is what is called a high level feature. It takes the whole picture into account and the whole picture can be reconstructed. So transformation or transform is basically the conversion of something or the signal in a different representation. I can transform it into the domain, I can transform it back from the domain and it preserves all the information. So for example, if I have the picture of a straight line, I can describe it by two points, A, B. Just storing these two points will always allow me to reconstruct the line, this line, not some other line, exactly this line and therefore also the picture of this line. I could also say, well, actually what I do, I don't know point B, I just need point A and a gradient here. Also these two pieces of information will allow me to draw exactly that line and therefore reconstruct the picture of it. It's totally different, the gradient and point information is totally different to the two point information but they encode the same thing uniquely. And this is the same for all the transformations. For example, for images we often use Fourier transformation, so this is an image and this is a representation of this Fourier transformation. We don't see anything anymore. It's a different way of looking at the image and it's reversible. We can get it back from Fourier space. We will see in a moment how that actually works and how that helps us. So the idea is we gain information by transforming to some other representations, to see other things, to see frequencies, to see periodicities that we know about but that we can't really put our fingers on. Now we can quantify it. We can say how much of each frequency is in the picture, for example. And of course this is also a good measure. Good. Let's start with some algebra. Before that we had statistics, now we go for algebra. 
We take points in space. If I have n points in space, there is a theorem in linear algebra saying that I can construct a polynomial of degree n minus 1 passing through all of them. It does not really matter where they lie; I will always find a polynomial going through them. For example, with a single point I can always construct a polynomial through that point. With two points I can always construct a line going through the two points; it does not depend on where they are. The equation of the line will look different, yes, but it is always a line. With three points I can always fit a parabola through them, no matter where they lie. The line is a polynomial of degree 1 and the parabola a polynomial of degree 2, and with more points the polynomials get more interesting, to say the least, but the degree of the polynomial is always n minus 1. Good. So to describe the points I could also say: given the n points, I give you the formula of the polynomial and the positions where you have to evaluate it to get the points back. For example, if I have this polynomial and I say the positions are 0, 1, 2, 3, 4, 5, then just knowing the blue curve I can recover the points. The representation is totally different from the representation of the actual points, and it is the same information. Good. Let us now say that an image is just a discrete function that assigns an intensity value to each pixel on a two-dimensional plane. So I take the location of each pixel and ask, for a certain intensity dimension, what its degree of intensity is. This pixel here is light, the pixel next to it maybe darker or even lighter, and so I get a kind of flying-carpet surface: each image is a two-dimensional function that is somehow contorted in space. You could also use color images and talk about colors rather than intensities, but that makes things more complicated. Anyway, each row of an image can be interpreted as a sequence of real numbers, and each row, being just a sequence of real numbers, can then be described by some polynomial function. Easy to do. But for textures this is kind of strange, isn't it? Because we know something about textures, and that is exactly it: they are regular, and they repeat forever and ever. So it is not just some polynomial that does something and then wanders off in some direction; it is a type of function that does the same thing over and over again, forever. Does anybody know functions that do that? The sine and the cosine, exactly. And this insight is not very new; it goes back to the French mathematician Joseph Fourier in the early nineteenth century, who showed that basically any periodic signal can be built from sine and cosine functions. This was his great idea: any periodic signal can be decomposed into a sum, a series, of such oscillating functions, the sines and the cosines. And why are we mentioning Fourier in this context? We have just said that images can be represented as signals based on the gray value, going from dark, from black, to white. This is our signal right now, real numbers. It is two-dimensional, yes, but it is our signal.
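Here is a tiny MATLAB sketch of that interpolation statement, with made-up values: six points always admit a polynomial of degree five that passes exactly through them, so the coefficient vector is just another representation of the same information.

    x = 0:5;                          % six sample positions
    y = [1 0 -3 2 5 -1];              % six intensity values (made up)

    p    = polyfit(x, y, numel(x)-1); % polynomial of degree n-1 = 5
    yhat = polyval(p, x);             % evaluating it at 0..5 gives the points back

    max(abs(y - yhat))                % practically zero: same information, new form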
The idea is since polynomial representation don't repeat themselves, but sines and cosines do, and Fourier says that any periodic signal can be decomposed in a sum of such series, why not represent patterns, so images containing patterns, in such a Fourier series, such a sum of oscillating functions? This is what Fourier said. He said that any such function can be decomposed in an infinite, probably infinite, sum of such functions. And let's start with the basics. Let's start with a one-dimensional signal. You can imagine, for example, radio waves or voice, for example. So sound has the amplitude and the frequency in time. So if you have, well, this is not quite a good example for a sound wave. It's something like a step function, but anyway. It's perfect for exactly, for intensity information. So the value would be then our intensity, and then we have the time axis. And discretizing this time axis, you can obtain some real numbers, different moments in time. For example, this one, this one, this one, this one, this one, and so on. Well, I'll leave one out. This one I didn't represent. Anyway, and then our signal can be described through this series of real numbers. And according to Fourier, the signal, the series of real numbers can then be decomposed into a series of sine or cosine functions. So then let's start with a simple sine function and overlap it over the signal. With the whole time spectrum, considering the whole time spectrum, so not a locality, for example, the first seconds or so, but the whole time spectrum. And this looks something like this with the sine with the lowest frequency. And then we go further, we increase the frequency and add additional sine or cosine functions to better approximate our original signal. Well, the idea here is actually that, and so on, I can't draw on this thing. A lower frequency and a different amplitude give me a higher frequency and a different amplitude give me another sine representation, which added. So then I add this representation here and this one and get such a curve like this one. So I'm just basically adding sines together with some frequency and some amplitude to get a better approximation of my signal. And I do this further with higher frequencies. You see, for example, here, the third addition and then the fourth and so on. And at some moment in time, the quality would be higher and higher and higher with the higher the frequency. And what's important to notice here is that I'm happy with the quality of my estimating the original signal, then I can just stop and cut and say, OK, I'm going up to this frequency and I'm already happy with the representation. Otherwise, I can go up to infinity and get a perfect match. So this would be the idea. I can then decompose the original signal in the sum of oscillations, as you've seen here, going up to infinity by increasing the frequency and depending on some amplitude. OK, so then we have, as I've said, some signal here. And what Fourier says is that I have some amplitude, a coefficient, some frequency, the sine or cosine, plus another signal with some amplitude and the frequency and so on. And then I represent this signal as this sum here. What this helps me for is to translate my original signal from this time domain, the intensity over time, into the frequency domain, where I say, OK, let me represent the frequencies I've just built. 
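As a toy illustration of this idea of adding more and more oscillations, here is a small MATLAB sketch of my own that approximates a step-like periodic signal by the first few terms of its sine series; the more terms you add, the closer the sum follows the steps.

    t = linspace(0, 1, 1000);
    f = sign(sin(2*pi*t));                 % step-like periodic test signal

    approx = zeros(size(t));
    for k = 1:2:9                          % add sine terms of increasing frequency
        approx = approx + (4/pi)*(1/k)*sin(2*pi*k*t);   % series of a square wave
    end

    plot(t, f, t, approx);                 % more terms give a better approximation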
This frequency here, this frequency here, this frequency here, I get them all in my frequency domain, as for example, in a histogram, with the amplitudes described by these coefficients here. Yeah? So I am building this histogram. This is the representation in the frequency domain of the signal in the time domain. And as I've said, the further I go with higher frequencies, the better quality I get. If you want some intuitive comparison, think of sound. The higher frequency you get, the lower information the higher frequency contains. So if you cut somewhere at 22,000 hertz, you won't lose too much, because the human ear doesn't hear that high frequencies, and they also don't contain that much information. OK, more formally, what Mr. Fourier says is that every sequence of real numbers, in our case the intensities, can be transformed into a sequence of coefficients. The coefficients we've just seen for the cosine and sine functions. Yeah? And then, of course, we have to override the frequency of these sine and cosine functions. So then we need practically, in order to transform this real, the sequence of real numbers, only to establish these coefficients here. And for calculating these coefficients, what we basically need to do is to project the signal, the original signal we have, onto each cosine or sine wave we compose our signal with. Yeah? So in order to project this, for example, for the coefficient of the cosine function of a certain real number, we just multiply the intensity the signal has in the corresponding point with the corresponding cosine value. And we do the same for the sine coefficient. This is how we calculate the coefficients, and then this is how we then represent the signal. This was great for the one dimension. Of course, images are two dimensional, so then things become a bit more complicated. Of course, there is a generalization for Fourier, and of course, we have the possibility to do this also two dimensional. The idea here is, for example, that we describe the intensities in images as sums of sine and cosine, considering also directionality. So we have also directionality to consider. But for example, here, these pixels here have the same intensity. The idea here is that the intensity of pixels vary only with this direction. Maybe I should choose another color. Yeah? These have the same intensity. Here I'm going, for example, from white to black. And if you would be to compare something like this with a pattern in an image, then you probably should imagine you have something like this. I'll draw red for white. Well, not quite. Black background with white stripes in this direction. Why doesn't delete this thing? Yeah, doesn't want to. Okay, so this would be the idea. I would have such a pattern. Okay, so then the formula becomes a bit more complicated because I need to take into consideration the two dimensions. The two dimensions are actually the coordinates of the pixels based on the width and height of the image. And then I have, again, the same cosine and sine oscillations, which describe my function, so each of the pixel intensities together, again, with the amplitudes of these oscillations. Then the pixel intensity can be represented as this A and B coefficients, which now are matrices. Yes? Yeah? You mean the amplitude, not the frequency, because the frequency goes over the whole spectrum. Well, actually for the amplitude, you project, you perform projections of the signal over the sine base, so to say. 
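This projection idea, multiply the signal with the corresponding cosine or sine and sum up, is essentially the discrete Fourier transform written out by hand. A small MATLAB sketch, purely for illustration and with a made-up signal:

    x = [1 0 -3 2 5 -1 0 2];          % some intensity sequence
    N = numel(x);
    n = 0:N-1;

    a = zeros(1,N); b = zeros(1,N);
    for k = 0:N-1
        a(k+1) = sum(x .* cos(2*pi*k*n/N));   % projection onto the k-th cosine
        b(k+1) = sum(x .* sin(2*pi*k*n/N));   % projection onto the k-th sine
    end

    X = fft(x);                       % the built-in transform gives the same numbers:
                                      % a matches real(X), b matches -imag(X)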
And this is how you calculate the amplitude for that sine base. If that sine base is not fit to represent that amplitude, then the coefficient will be zero. So then it's out of the picture, so to say. The step signal, yeah? Yeah? Yeah, you override the frequency, yeah? Yeah, you override the frequency, yeah? Yeah. And then the frequency of the more frequent sine function over that and the frequency of the signal. The frequency of the signal is independent. No, no. Yeah? Yeah? Yeah? Okay, take all you can get from the first, then do small error corrections with the second, then do small error corrections or smaller error corrections still with the third quickness. In any case, you need to... Okay. So then again, the amplitudes can be calculated based on the same projection principle. Only this time, I have a two-dimensional function and I multiply it with the corresponding cosine and sine functions. Okay, now the purpose, the goal of this Fourier transformation was to compare different images. So I want actually to be able to compare an image with another one in the frequency domain. One possibility would be to compare these A and B coefficient matrices for the sine and cosine of one image with the A and B matrices of the sine and cosine for the second image. Well, of course, this is not the best solution. One reason would be that actually Fourier coefficients are complex. They have a real component and an imaginary component. And the second would be that actually we need an image that shows us the data in a different perspective. And this is the Fourier representation. This is what the Fourier representation, how the Fourier representation shows the frequency spectrum. The idea here, just a second, the idea here is that we show the frequencies as, well, points with different intensities, light intensities where we have, for example, the fundamental frequency. Then we have some lower and higher harmonics of this frequency showing us the directionality, together with how much information does this frequency actually contain. Let me show you a better example that I hope I brought. So the properties of these images here are that they are centered on the fundamental frequency, the lowest frequency of my sine and cosine. This is this one here. And they are then symmetrically towards the origin. So what happens upwards happens lower and left and right, symmetrical. Then I have the harmonics, as for example in audio signal. They are the upper frequencies, which are multiples of the fundamental frequency. Like, for example, what you see here, they, for example, become weaker and weaker. I get the strength of a frequency through the light intensity in this representation. So the lighter it is, the brighter it is, the more strength lies in the image for the pattern with that frequency. So, for example, for this image here, I have again this signal here. And I should imagine that the amplitude of, so it lies such, the information that I get from this pattern here is in this main frequency. So then I have such a sine, for example, so that you can get an image in the time domain, which repeats itself over this directionality, which is given also in the Fourier domain, with the strength of how much from this pattern is in my image here. Of course, I can have many more patterns, and these patterns, if I would imagine of having, for example, also these stripes here, could be present, something like this. Yeah? 
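To see this directionality in practice, here is a small MATLAB sketch of my own that builds two synthetic stripe patterns and looks at their centered Fourier magnitude spectra; the stripe period of 16 pixels is an arbitrary choice.

    [X, Y] = meshgrid(1:256, 1:256);

    vert    = sin(2*pi*X/16);          % vertical stripes, period 16 pixels
    slanted = sin(2*pi*(X+Y)/16);      % the same stripes rotated by 45 degrees

    Fv = fftshift(fft2(vert));
    Fs = fftshift(fft2(slanted));

    subplot(2,2,1); imagesc(vert);            axis image
    subplot(2,2,2); imagesc(log(1+abs(Fv)));  axis image  % bright dots on the horizontal axis
    subplot(2,2,3); imagesc(slanted);         axis image
    subplot(2,2,4); imagesc(log(1+abs(Fs)));  axis image  % the dots move onto the diagonal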
Of course, this image gives me not only the direction of the pattern, but also the size of the period, that is, how big the distance between two stripes of a pattern is. If we start from the image domain and imagine a pattern like this, then we can already see in the frequency domain that something is happening on the vertical: I am varying from bright to black, bright to black, and so on, the period is not that big, and the intensity is quite high. If you compare it with the Fourier representation of the next image, you can see that the directionality of the pattern is the same, the same vertical variation from bright to black, but the frequency is higher. In both images you can also see that nothing, or almost nothing, happens on the horizontal. And if you rotate the image we previously saw, you get a diagonal pattern, and you can see this immediately in the Fourier transformation as well. So: clearly recognizable directionality, visible differences in amplitude, and this is how you can differentiate between this pattern, this pattern, and this pattern just by looking at their Fourier representations. Now, I said that in order to get this, we must first calculate the amplitude coefficients of the sine and cosine waves. The formula I showed you has rather bad complexity: if you were to program it directly you would have two nested loops, which means a complexity on the order of N squared. That is not great if you have a big database of images and want to extract the Fourier representation of each image and then compare them. Imagine you have hundreds of thousands or millions of images: you have to extract these coefficients for every image, then extract the coefficients of the query image at query time, and then compare them. For this reason there is a more efficient implementation, the Cooley-Tukey algorithm, which implements a variant of the discrete Fourier transformation, the so-called fast Fourier transformation. The idea is to reduce the problem following the divide-and-conquer paradigm: the domain is repeatedly split into smaller problems, and through this reduction the Cooley-Tukey algorithm achieves a complexity of N log N. Of course, for our exercises I won't require you to implement this in Java or something like that, because it is rather laborious and not that trivial to implement. And this is the great advantage of MATLAB: it has libraries with an efficient implementation of the Fourier transformation. The two-dimensional Fourier transformation, which is what you need for images, is just a function call. What you have to do beforehand is get the gray levels of the image: you have a color image, you convert it to gray-level intensities. Then, simply by calling fft2, you get the Fourier representation of that image. It's that simple. I have brought an example here; you will probably need this for the next homework.
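A minimal MATLAB sketch of that recipe could look as follows; the file name is hypothetical, and the logarithm is only there so that anything besides the center of the spectrum is visible at all.

    img  = imread('texture.jpg');      % hypothetical image file
    gray = rgb2gray(img);              % color image to gray-level intensities

    F  = fft2(double(gray));           % two-dimensional discrete Fourier transform
    Fc = fftshift(F);                  % center the spectrum on the fundamental frequency

    imagesc(log(1 + abs(Fc)));         % log magnitude spectrum
    axis image; colormap gray;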
What you need to know is just this chain: image to gray levels, gray levels to Fourier transformation, and, in order to be able to see and compare something, you center the Fourier coefficients, as I said; the image should be centered on the fundamental frequency. That is everything about the Fourier transformation. So, is everything clear about the Fourier transformation? It is basically like what we did with the polynomial: given some points in the intensity domain, we can calculate a unique representation in terms of sine and cosine curves, and taking the coefficients of each frequency as a feature vector, we can distinguish between different patterns. This is not only true for the Fourier transformation; there are many transformations of this kind. There is, for example, the discrete cosine transformation, which restricts itself to cosine terms only, and there is another one that I want to go into very briefly, which is called the wavelet transformation. The basic ideas are the same, but they lead to slightly different domains, and different domains are sensible for different purposes. The frequency domain is very interesting for signal processing, whereas wavelets are very interesting in slightly different application areas, such as image processing, and that is where they are actually used. The idea behind the cosine transformation is just the same as for the Fourier transform, only restricted to cosine functions. One application area where this is very practical is the encoding of JPEG images: in JPEG you also go through the picture block by block for compression, and the intensity function over a block is compressed using cosine terms. That is basically what you do. Since both are based on sine and cosine waves, what you compare is the power spectrum, that is, these coefficient matrices. The images that Sylvia showed you were just to visualize how different patterns show up in the frequency spectrum; for the comparison you take the matrices of coefficients, and usually you restrict yourself to only the first few coefficients, not the tail, because that is good enough. With the wavelet transformation you approximate the intensity function again, but with a different class of basis functions. You don't use sine and cosine functions anymore, but functions that exist only in very localized spots and are zero on the rest of the interval. The idea is that such functions, which can have different shapes, we will see a couple of them in a minute, can be used as a new basis system for building up your intensity function. Because for every point of the intensity function you have to say where it is in the sequence and what its value is, you take as a basis all the functions that exist in this interval and put them together uniquely to reach this value. That is the basic idea of the wavelet transformation. And the way you reach the value is exactly the same as in the Fourier case: you start with the biggest wavelet, take as much of it as you can, and then add on the smaller wavelets that exist in this interval to shape the curve.
The functions we may use can look quite different: some are like polynomials, others have spikes, so not really polynomials at all. But they are local, they are locally integrable, and the integral over each function is zero. They carry no mass, because this peak up here cancels out the area down here, exactly the same mass, and this is true for all the different wavelet functions. We will only consider the simplest kind of wavelet, and what might that look like? Well, basically a step function: this area here cancels this area here. Just a step function, and this is what is called the Haar wavelet. As soon as we have any such wavelet, whatever it may be, just call it psi, we can generate a basis by shifting the wavelet around and by scaling it in size. The functions exist only locally; for the rest they are zero. Shifting them around puts the mass of the function where we need it to reach the desired intensity value. But each wavelet has a certain shape, and our intensity curve also has a certain shape, which is usually not the shape of the wavelet. So we have to even out the discrepancies in the shape: we scale the wavelet down and add or subtract it where we need it, and this then gives us a representation of the curve. So basically there is a scaling factor, we squeeze the wavelet, and there is a shifting factor, we move it left or right. For wavelet bases one usually uses powers of two: I have one wavelet that lives on the whole interval, then I consider two wavelets covering the halves, four wavelets covering the quarters, eight wavelets covering the eighths of the whole space, and so on, making them smaller by a factor of two each time. The factor does not have to be two, but using two is what is called critical sampling. So the simplest example is the Haar wavelet: it is one on half of the interval and minus one on the other half, and as you can easily see, the integral of this function is zero. Using this wavelet, what do we do to scale it? Easy: we compress it by half, and we have a wavelet that lives on half the interval; again the areas cancel. But since it lives only on half the interval, we take a second copy and shift it to the other half. In the next step we take the quarter-length wavelet and shift it three times, so we get four of these quarter-sized wavelets. And now we also want the basis to be not just orthogonal but orthonormal, so the smaller wavelets are rescaled by a normalization factor; with a factor of the square root of two per level we can make the basis orthonormal. So, starting from the mother wavelet, the biggest wavelet we have, we get two smaller wavelets at the next level, shifted and rescaled by this orthonormalization factor. The next generation consists of four wavelets, shifted four times across the different parts of the interval and scaled down again by a factor of two in width, with the corresponding orthonormalization factor in amplitude.
The mother wavelet has an amplitude of 1, the child wavelets an amplitude of the square root of 2, and the grandchild wavelets an amplitude of 2; each generation is rescaled by the orthonormalization factor. Good. We can complement this basis with a so-called scaling function, and for the Haar wavelets the scaling function is simply the characteristic function of the interval from 0 to 1: it is 1 on this interval and 0 everywhere else. And if I have a data set of cardinality 2 to the power of n, there is a theorem stating that I can represent it on this normalized interval from 0 to 1 by a piecewise constant function, using exactly these scaled and shifted wavelets together with the scaling function that tells me I am inside the interval. The step functions we get from our intensity values are obviously finite, the image ends at some point, and they have a limited number of points, so I can represent them by the scaling function and the Haar wavelets: I take the different Haar wavelets with different scalings and different shifts, plus the scaling function, for each part of the interval. So if I have a function here, where this is the intensity and this is the row of the image, pixel 1, pixel 2, pixel 3, pixel 4 and so on, then I consider all these little intervals individually, and for the values on them I can build a unique representation with respect to my wavelets. So let me show you an example so that we can see it. I have the step function given by these values; these are intensity values from one row of my image. The first pixel has intensity 1, the second pixel intensity 0, the third pixel intensity minus 3, or whatever. And I want a resolution of 3 for my basis: a mother wavelet, the child wavelets, and the grandchild wavelets, and that is all I want. Then I need to calculate the orthonormalization factors, which are 1 for the mother wavelet, the square root of 2 for the children, and 2 for the grandchildren. What happens now is that I build myself the characteristic function of the interval, which is just 1 over the whole interval. Then I build the mother wavelet, which is 1 on the first half of the interval and minus 1 on the second half. Then I build two child wavelets: the first one lives on the first half of the interval, it is the square root of 2 on the first quarter and minus the square root of 2 on the second quarter and zero elsewhere, and the second one is the same function shifted to the other half of the interval; otherwise they are identical. Then we have the grandchildren covering the first quarter, the second quarter, the third quarter, and the last quarter. So we have one mother wavelet, two child wavelets, and four grandchild wavelets. And why do we have eight sub-intervals? That comes from the data behind the scaling: we have eight data points, 1, 2, 3, 4, 5, 6, 7, 8, and they define how many pieces we have to divide our interval into.
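As an illustration of my own, not the lecturer's code, one can sample these seven wavelets plus the scaling function on the eight sub-intervals, write them as the columns of an 8 by 8 matrix, and obtain the wavelet coefficients of an 8-point row by solving a linear system, which is exactly the step described next. Only the first three signal values are from the example; the rest are made up.

    f = [1 0 -3 2 5 -1 0 2]';                % one image row, eight intensity values

    s  = ones(1,8);                          % scaling (characteristic) function
    m  = [1 1 1 1 -1 -1 -1 -1];              % mother wavelet
    c1 = sqrt(2)*[1 1 -1 -1 0 0 0 0];        % child wavelets
    c2 = sqrt(2)*[0 0 0 0 1 1 -1 -1];
    g1 = 2*[1 -1 0 0 0 0 0 0];               % grandchild wavelets
    g2 = 2*[0 0 1 -1 0 0 0 0];
    g3 = 2*[0 0 0 0 1 -1 0 0];
    g4 = 2*[0 0 0 0 0 0 1 -1];

    B = [s; m; c1; c2; g1; g2; g3; g4]';     % columns are the sampled basis functions

    coeff = B \ f;                           % coefficients of the linear combination
    fback = B * coeff;                       % reconstruction, equal to f again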
So if we want to build anything with respect to some basis, we just make it a linear combination of these basis functions. Making something a linear combination means I just need the factors: a coefficient for the scaling function, that is the characteristic function, a coefficient for the mother wavelet, two coefficients for the child wavelets, and four coefficients for the grandchild wavelets. And what I want to represent by this is the intensity function: intensity of pixel 1, intensity of pixel 2, intensity of pixel 3, and so on. Yes? Clear? This is basically what it does. So if I solve this linear system of equations, and I can do that, I get these coefficients. And what do they mean? Well, they mean I have to take one half of the characteristic function at each point, minus one half of the mother wavelet at each point, such and such of the first child wavelet at each point, and so on. And with the wavelets and the factors derived this way, we can reconstruct each point of our intensity function. For example, let's reconstruct the first point. This is the interval from 0 to one eighth of the interval length; we determined that our interval has eight buckets, 1, 2, 3, 4, 5, 6, 7, 8. What is the value there? Let's try it out. On the interval from 0 to one eighth, which basis functions live there? The characteristic function does, so we take one half of it. The mother wavelet exists on this area too, and it contributes minus one half. The first child wavelet exists there; the second child does not live on this interval, so we don't take it. Of the grandchild wavelets, only the first one exists there; the others don't. So we simply evaluate, on our interval, every function that exists there, weight it with its coefficient, and add everything up, and we get one. Looking back at our original function, the first value was indeed one. So we can go from the function to the wavelet representation and back to the function again. That is how it works, and it is actually quite easy once you look at it. As a summary for today, I showed you some low-level texture features, where we considered things like coarseness or granularity, or statistical models and their parameters, and I showed you some high-level features, which embed the image information into a different domain: the frequency domain, or a description in a wavelet basis, whatever it is. The low-level features allow us to describe the textures, but not to reconstruct the image; the high-level features even allow us to reconstruct the image and to do some interesting things with the feature values, like comparing them to each other. If there are no more questions today, are there? Everybody happy? Good. Then I would say happy Easter, and see you again next week when we will continue with texture analysis, a little bit of multi-resolution analysis, and then start on the interesting part, the shape features.
In this course, we examine the aspects regarding building multimedia database systems and give an insight into the used techniques. The course deals with content-specific retrieval of multimedia data. Basic issue is the efficient storage and subsequent retrieval of multimedia documents. The general structure of the course is: - Basic characteristics of multimedia databases - Evaluation of retrieval effectiveness, Precision-Recall Analysis - Semantic content of image-content search - Image representation, low-level and high-level features - Texture features, random-field models - Audio formats, sampling, metadata - Thematic search within music tracks - Query formulation in music databases - Media representation for video - Frame / Shot Detection, Event Detection - Video segmentation and video summarization - Video Indexing, MPEG-7 - Extraction of low-and high-level features - Integration of features and efficient similarity comparison - Indexing over inverted file index, indexing Gemini, R *- trees
It's my pleasure to welcome everybody to the multimedia databases lecture today. And it's a beautiful morning because we are diving into new areas, unexplored before we're leaving the images and we're going directly into audio today. And this is, of course, an interesting thing. Short recap of what we did last lecture. We were dealing with images, we were dealing with shapes, we were kind of like asking ourselves how to represent shapes. And then we were discussing several ways of edge histograms, to chain codes where you just kind of like go around the shape and then try to figure out what pixel in what direction is next. We were talking a little bit about area-based retrieval, kind of like using statistical features to describe the area of the shape that is enclosed by the contour. And we were giving you a short outlook on query by example. We were just sketching something and then the sketches somehow a little bit distorted and normalized so that it can be compared to edge histograms gained from normal images from the image database. And the interesting part, of course, is always the matching. I mean, what kind of feature vector do you get? And with the chain codes, it can be a pretty big feature vector because you really have to record every pixel around the contour. However, if you refer to moment invariance or something like that, you will end up with eight or seventeen different features in the feature vector. So this is definitely something that can be done and that is easy to compute. But as I said today, it will be audio retrieval. And audio retrieval is something that is getting more important nowadays. I mean, with the images, most of the big databases actually featured extenders or cartridges or however they called it, blades, where you kind of had the possibility to get the image retrieval functionality, the image segmentation functionality, the image matching functionality directly into the database. And by user defined functions and user defined types, you were able to manipulate image objects. So this is now kind of a standard and there's interesting algorithms to be seen. There's interesting always the question how it works. But there's not too much work on image retrieval recently. People have moved to either video retrieval, which still is a big problem on the web. As is the image retrieval, if you look at Google images or something like that, where you really get the encompassing text passed and that is basically it. So no multimedia features there. But people have moved to video retrieval, to audio retrieval because of the entertainment market, obviously. And there's a lot of applications that focus specifically on these things. And today I want to talk a little bit about the basics of audio data and how it is actually stored and how it is actually, how you can work with it. I will then do the connection to databases or what does it all have to do with databases and then start on the retrieval of audio files. So we have something in stock for you today. And for the basics of the audio, it applies to what we said in the very beginning of the lecture, in the first instance of this lecture. And it is just a different medium because audio is a transportation of information, not by visual means, but by auditory means. And audio comes from the Latin means I hear. And this is actually what it is all about. So the medium that you have is the air and the sound waves directly going into your ear and are interpreted within your ear. We will come to that in a minute. 
And then you can extract the information from what you hear. And there's different things. There's on one hand music, which very often does not carry information, but rather carries emotions or carries feelings, things that are very hard to express in language very often. Then there's spoken text. This is normal information. This is kind of language. And language is kind of like what is interesting for transporting abstract thoughts. I mean, you can always do an expressive dance or something like that to dance your name like they do on Waldorfschule. But it's not very effective. But telling somebody something is very effective indeed. And then there's a different kind of audio, which is usually referred to as noise. I mean, it's what you hear. And it's definitely sensible that you hear it. I mean, you might be shocked by the constructors amending the street right next to your sleeping chamber. And wake you up at six in the morning with the air pressure driven hammer or something. But still, noise can definitely give you warnings. So for example, a cry or something. It's typical noise that is neither language nor is it music. But it warns you about something. So it has its merits and it's definitely important for some things. And in modern music, noise is kind of like integrated as being all the rage nowadays. So that might also become part of music in your course. And on one hand, the noise is made. So the sound is produced. On the other hand, we have to recognize the sound and we have to interpret the sound. And of course, like we did with the visual images, you know, like with the images and the visual perception, the same applies to the audio. If we know how we perceive sounds, we can model a good representation for sound or we can model a good matching for sound. Because what we perceive as being similar should also be perceived by a matching function or by a scoring function as being similar. So I want you to know a little bit about the basics of auditory perception. And basically it works all by pressure fluctuations in the ear. So you have the eardrum over here. This is where the sound goes in. And the eardrum moves. It vibrates synchronously with the sound waves. And this is directly taken up by the ear bones. You have three inner ear bones that are connected and that kind of amplify the signal taken by the eardrum onto this little membrane over here. And this membrane is really interesting because it gives on the amplified sound to what we call the cochlea, kind of this spiral thing over here. And at the same time, it has something to do with these bows over there. And these bows over there, they don't have anything to do with the auditory perception. But they have something to do with your sense of balance. So if you're upright or if you're lying down or something, this is what you can and also if you're accelerated. This is what you can feel, sense in this part. But the interesting thing is really in the cochlea because the cochlea is a spiral. But it's hollow. It's filled with a fluid. And in the cochlea, there are little hairs. And the hairs are connected to neurons and the skin of the cochlea. And as this membrane in front of the cochlea moves, the sound wave is transported or transposed into a water wave. So it is given on to the fluid. And in the fluid, the hairs start to be moved. And depending on where the hair moves or in what sequence the hairs move, the neurons fire and that gives us the sense of hearing. So this is basically what happens. 
We create an electrical impulse out of the sound waves coming into your ear, via the neurons. That is basically how it works. And I brought a 3D model so that we can see it, if it works. No? Then we have it in the folder. Here we go. Okay, so what is happening here: this is the eardrum. The eardrum is connected to these little ear bones, there are three of them, and they are used to amplify the signal coming from the eardrum. These are the bows, which are for balance, and this is the cochlea, the spiral thing, also with a membrane that is directly connected to the last ear bone. All the impulses arriving at the eardrum are passed on via the ear bones to a small window into the cochlea. And these are some nerves and some blood vessels that distribute everything, and this is roughly how it lies in the skull. So we have seen how the ear is built up, and what our brain actually gets from the sounds is an electrical signal. This electrical signal does not carry the full sound wave; it carries two definite characteristics of the sound. One is called the pitch, which is how high we perceive the sound; there are low sounds and there are very high sounds, which can sometimes even be painful. And the other is the volume: every sound can come at different degrees of loudness, from very loud to very quiet. If we look at a scale here, I got this scale from Switzerland, they have a lot of fun things to do there, we can relate the decibel values, the loudness, to everyday sounds. Zero decibels is basically the minimum loudness we need for perceiving a sound at all. Up to about 60 decibels covers everything we do at home: the office, your living room, a radio studio, places that are supposed to be quiet. Everything we experience every day in connection with other things, like talking to somebody or the traffic going on outside, is about 70 to 80 decibels. Now comes the favorite of my parents, the disco, at 100 decibels, which is far more than anybody should take; nevertheless, you do it. And then come things that are really annoying, as I said, the construction workers with a very nice Swiss term, the "pneumatischer Bohrjumbo", a pneumatic drill jumbo, whatever that may be. At 130 decibels it gets painful, because the nerves fire to such a degree that pain is signalled to the brain, and you see that the auditory system can actually suffer. There are related illnesses, like tinnitus, which can be very unpleasant and long-lasting, and you can even go deaf if you are exposed to very loud noise over a certain time. Planes reach about 140 decibels, a pistol shot is 160 decibels, and a rifle shot is 170 decibels. It is a very bad idea to fire a rifle without covering your ears somehow, because it is already painful. Not that you should fire rifles, but if you have to, for some reason. So again, it is only these two characteristics that we get: how high in the scale we perceive the sound, and how loud, in terms of decibels, we perceive the sound. And this is basically the information that we need.
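The decibel numbers on such a chart are a logarithmic measure of sound pressure. As a small illustration with a made-up pressure value, the conversion in MATLAB looks like this:

    p0  = 20e-6;                % reference pressure in pascal, the threshold of hearing
    p   = 2;                    % some measured sound pressure in pascal (made up)

    spl = 20 * log10(p / p0)    % sound pressure level in decibels, here 100 dB

    20 * log10(2)               % about 6: doubling the pressure adds roughly 6 dB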
And if we then look at the sound wave as such, then we have two distinct characteristics of every wave. And one of them is the amplitude of the wave. The other one is the frequency of the wave. And interestingly enough, these are the two things that we can perceive. The amplitude is the volume of the sound, the higher amplitude, the louder is the sound. And the frequency is the pitch of the sound. The higher the frequency, the higher we perceive the pitch in the scale. So if you have Maria Callas or something, you will perceive her singing in the soprano very high. If you have some Luciano Pavarotti or something, you will perceive him much lower. This is basically the idea. And as in the amplitude, we have a logarithmic perception. So as the amplitude doubles, we perceive the loudness, the noise of the tone, tenfold more. So the ear is very sensitive towards loudness. And even if it gets just a little bit louder, if the amplitude just grows a little, we perceive it as being much, much louder. With the frequency of the pitch, it's, well, frequency is a number of perints per unit time, usually measured in hertz. And our hearing range is between 20 hertz and 20 kilohertz. So depending on your age, once you're young, you perceive higher pitches. When you grow old, you will not perceive the higher pitches, the highest spectrum anymore. That's the way it is. And some people hear better than others, so that's also a very individual measurement. But usually you can say between 20 and 20 kilohertz, that is what you hear. Well, a complex audio signal, however, does not consist of one sound wave, but it's basically a mixture of different sound waves that make what you perceive. So you know that with musical instruments like the piano, if you hit different keys at the same time, you will hear different tones, different notes at the same time. And they somehow merge together. And if you look at the frequency spectrum, this is what it is. So it's not your usual wave, but it's a mixture of waves that happens somehow, and that basically gives you the spectrum. And how does it work? Well, we can have a complex frequency that however is built by different sound waves. So the frequency over there, which is very, very regular frequency, is actually built out of five different pure cosine waves. And they have different frequency. So this red one is a long-running one. The blue one has double the frequency, okay, or even more than double the frequency. And the orange ones, yeah, the green one has double the frequency, obviously. So here, if we go here, we have one wave for the green one. The blue one has actually tripled the frequency. So here, okay. And then there's the orange one and the violet one that have even smaller frequencies. And the interesting part, what you can see is that in the first part of the signals, all the different waves add up because they're all going up at the moment. And this is making for this large ascent here. And at some point, the first starts to go down again, okay, which also causes the sum of the frequencies to go down a little bit, to decline a little bit. And at some part, you know, like more and more start going to go down. So also here, the wave goes down, okay. And then they're picking up again. And then you just sum up the amount that is in the signal. So if I want to have this point in time, oh, no, this is easy. I mean, all the waves are zero, so that's not very interesting. If I want to have this amount in time, I can just go down here. 
So I have to take this, this is negative, and this is also negative, also negative. And then I get two small positive parts of it. And this is the point over here, okay. So kind of all the negative parts multiply, and then you take part of the positive parts, add it, and that's kind of like what you finally get. And you can basically modulate with the simple frequencies of cosine sound waves, every modulation that you would ever want. So you can build them all whatever they may express. And of course, as a sound, this one is perceived totally different from pure wave like this one over here. But we see it as a mixture, and to some degree, we can get the waves apart. We can see what tones are part of the sound. Good. Basically, what happens is interference. And there's constructive interference where you all have positive signals like here and here and here and here and here. This is all positive, okay, and we'll add up to something that is positive. But it can also be destructive. So if you kind of like have the positive signal here and the positive signal here, but have a negative signal here, a negative signal here, a negative signal here, it will already give you a negative signal which kind of like leads to this point in the wave, okay, the combined wave. That's basically the idea. And this is a physical phenomenon, constructive, deconstructive interference, and this is how the sound forms. So now for the audio examples. Thank you. Then let's get back to Matlab. Okay, so for some for you to get a feeling about how audio really feels like, I've brought a synthesizer in Matlab and let's start with the standard pitch, the 440 frequency. And synthesize that. No, go away. This should be a five seconds long 440 standard pitch A. Nothing special, but let's see how old we are. Let's go to 1000 frequency and just see what happens with our sound. I want to test also the 2000. Let's move this here. It's already a bit high for my taste, but we can go up to 20,000. So let me go slow to 5000. Let's see 10,000. I can hear that. Let's go up to 18,000. Did you hear something? I didn't hear anything. Let me try it again. Yeah, it's there. Not nice, but it's there. Let me see 20,000. Although I don't know if our sound card... Oh yeah, I can hear it. Is there someone that doesn't hear the sound? Okay, so... Well, why not try above the spectrum? Humans can hear. I can hear that also. Okay, so we can hear high pitch sound. Let's go to some lower pitch sound. Let's go to the half of the standard pitch. And 50 should be the limit. It was 50 or 40. Yeah, it's a hardware limit. Okay, let me plot this for you so that you can... Yeah, what happens here is that I have five seconds of a clean frequency, so you can't see anything because all the waves are compressed together. But if we take a sample out of it... Oh, I also have it here. But the sample is already too big, so... Let me take the first 200 oscillations. Yeah, this is how the wave looks like for the standard pitch A. So we already see the wave with amplitude and frequency. And of course we've discussed about Fourier transformation, and we've said that with Fourier we have the possibility to transform a signal into the frequency space. And this is exactly what we can also use here, so that we can see how the frequency of the sound we have discussed about looks like. And this is the 440. Well, what you have here is the frequency in megahertz, so multiplied by 10 to the power of 4. So 440 is probably somewhere here. As you can see, there are no harmonics. 
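For completeness, here is a sketch of what such a little synthesizer could look like in MATLAB; the sampling rate and duration are my own choices, not necessarily the ones used in the demo.

    fs  = 8192;                       % sampling rate in hertz
    dur = 5;                          % five seconds of sound
    f0  = 440;                        % standard pitch A

    t = 0:1/fs:dur;
    y = sin(2*pi*f0*t);               % pure sine tone

    sound(y, fs);                     % play it
    plot(t(1:200), y(1:200));         % the first few oscillations of the wave

    Y    = abs(fft(y));               % magnitude spectrum
    fr   = (0:numel(Y)-1) * fs / numel(Y);
    half = floor(numel(Y)/2);
    plot(fr(1:half), Y(1:half));      % a single peak at 440 Hz, no harmonics

    % a mixture of two pitches, as discussed for complex signals
    y2 = 0.6*sin(2*pi*440*t) + 0.4*sin(2*pi*660*t);
    sound(y2, fs);                    % now two peaks would appear in the spectrum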
It's a synthetical sound, so there are no oscillations of this... No harmonics of this standard frequency. Let me see if I position this to 5000. How will the representation in the frequency space, yeah? As you can see, the frequency is exactly at 5000. At 5000 hertz. But this is a synthetical sound, so let's take something more natural, something like a guitar sound. The guitar sound that I have brought is... sounds something like this. And its representation in sound waves looks like this. Again, a bit compressed together, but as you can see, it's not as constant as the synthetical one anymore. And if we go into the frequency space, you can also see why. There are a lot of different frequencies in this sound. So you have a lot of different pitches. It's not like the synthetical sound anymore. Okay, we can go further with the lecture then. So now we understand what all this waving is about. Every sound is characterized by the frequency and by the amplitude. And for creating those sounds, we have a couple of interesting tools. So one is our voice. If you have a good singer, she will be able to create the sounds in a very clear way. But from early on, people were actually considering ways of creating sounds that were sometimes a little bit more difficult than just singing, but a little bit more, let's say, foreseeable than singing. And that is what they did with musical instruments. And the basic idea of musical instruments is getting the vibration, getting the sound wave into the air in a specific way. And the easiest thing, of course, is kind of like you take a drum and you hit it, and of course the membrane begins to swing, and this swing creates the waves of the air that is then perceived. And there are different musical instruments that we know about. So there are string instruments like the guitar sound we just hear. There are the blowing instruments where you have the flutes and the trumpets and stuff like that. Where kind of like a flow of air creates a certain compression or a certain wave form. And there are percussion instruments like the drum that is just hit and the membrane directly creates the sound wave. And also the acoustic of what you do depends on the vibration generator. So for example, for the strings, this is why it's an own kind of instrument.
In this course, we examine the aspects of building multimedia database systems and give an insight into the techniques used. The course deals with content-based retrieval of multimedia data. The basic issue is the efficient storage and subsequent retrieval of multimedia documents. The general structure of the course is:
- Basic characteristics of multimedia databases
- Evaluation of retrieval effectiveness, precision-recall analysis
- Semantic image content, content-based search
- Image representation, low-level and high-level features
- Texture features, random-field models
- Audio formats, sampling, metadata
- Thematic search within music tracks
- Query formulation in music databases
- Media representation for video
- Frame/shot detection, event detection
- Video segmentation and video summarization
- Video indexing, MPEG-7
- Extraction of low- and high-level features
- Integration of features and efficient similarity comparison
- Indexing via inverted file index, GEMINI indexing, R*-trees
10.5446/334 (DOI)
Welcome everyone to the lecture on data warehousing and data mining. This is our last installment, so next week there will be no further lectures. And for the last lecture I thought I would bring a little bit of practice into the theory. I have invited Toma Wachinsky, who heads the German branch of the Adastra company, one of the big companies for data cleaning and data warehousing, working with a lot of big customers like Volkswagen. And I'm very excited to hear a little bit about the practical implications of data warehousing, how it is actually used and where the real problems in the field are. So, welcome, Toma Wachinsky. Okay, thank you very much. Am I talking loud enough? Okay, so. Some words about me. I got my education at the Technical University of Sofia in Bulgaria; that was in 1994 — I don't count too carefully how many years ago that is. My experience: in the first years, as usual, I worked as a programmer and database developer; in the past 12 years mainly as a BI and data warehouse consultant, business analyst and of course project manager, and for roughly the last five years I have been the local manager here in Germany for Adastra. My experience is mainly in the banking sector and automotive, but also insurance, aircraft production here in Germany and some other industries. Two words about Adastra. Our focus is data management and information management. Under this terminology we understand of course data integration, data warehousing and business intelligence, but also master data management and data quality. Of course we also do application development — I see that some of the words on this slide are in German. Currently we are over 700 employees all over the world. Okay, the number maybe does not sound so impressive, but compared to the other big consulting companies, if you look only at this area, data management, even the other big names do not have more than 1000 specialists worldwide in it. Our headquarters is in Toronto, Canada; we also have big offices in Eastern Europe, in the Czech Republic and Slovakia, which work mainly for the local customers. We also have offices here in Germany and in Great Britain, and in Bulgaria there is our new offshore center. Here you see again the offices in the rectangles, but also the countries where our consultants are doing business; usually we work for customers in the western world, but also in the eastern part of the world, wherever our western customers do business. Within the Adastra group there are also other entities, like Ataccama, a pure software development company specialized in data quality and master data management. We have business consulting units specialized in the banking sector — credit scoring, credit risk issues, money laundering detection — and a further specialized entity for business performance management with strong expertise in Cognos and Cognos Planning. So I will start with a case study, because actually I suppose this is mainly what you would like to hear. I heard that you already know a lot about data warehousing and database modeling techniques like star and snowflake schemas, that you know the reference architecture and the layers of a typical data warehouse. If you have any questions, if something is not clear, just ask, do not hesitate. I will talk first about one big, very big data warehouse system at one of the biggest German banks.
This is a huge project, maybe one of the biggest data warehouses in Germany with the technology used there; some say it may even be the biggest with these technologies anywhere in the world. The project was initiated around the year 2000, development started shortly afterwards, and the first production release was in 2003. The business case is credit risk, Basel II and regulatory reporting. Within the bank, the user groups of this data warehouse come from internal risk reporting and external risk reporting — the people who report to the regulatory institutions, which in Germany are BaFin and the German central bank. There are different users from retail banking, different users from investment banking, internal audit of course, and external auditors. Here is the initial architecture, somewhat simplified. As usual for a data warehouse, there are a lot of source systems delivering data in different formats. The source systems here are very different: some use relational databases, some are applications on the mainframe, some are newly developed Java-based and C-based applications. But the first rule was that each source system delivers flat files. That means the data warehouse does not have to care whether the source system runs on Oracle or DB2 or on the mainframe; there was a clearly defined standard for delivering flat files. There are two types of staging areas, because there were different requirements and challenges. For example, this part of the sources comes from the main banking unit — this big bank acting as a retail bank in Germany — but the bank is also the head office for other legal entities, like the investment bank, which has the biggest part of its business in the UK, or other small banks all over Europe. What was interesting: for the central banking unit, the source systems deliver just raw data without any calculated measures. That's why within the data warehouse there has to be an engine calculating the first measures in the area of credit risk. On the other side, the other legal entities — because they are also stand-alone entities and have to report to their own local central banks — deliver the same type of information, but with already calculated measures or facts on top. In theory, all of the calculations in a data warehouse are done centrally. But in a very complex environment the business departments are huge; maybe one department calculates only one part of the risk measures and another department calculates another part. These departments have their own IT people, who are very close to the people who define the logic. And this is very complex logic — for example how to calculate the expected loss of a credit, or how to calculate the risk-weighted assets. So this raw data comes from the source systems — credit accounts, savings accounts, credit cards; from the investment bank there are totally different types of business entities. Maybe some first calculations are done here within the data warehouse; after that the data is sent to these business departments, where they really do the complex calculations — they can calibrate them, make a lot of runs — and once everything is calculated at the right time, they push it back to the data warehouse, or to the next layer, where further calculations are done.
Everything is getting back to the central data warehouse and of course depending on the users group there are different data margins. That's what usually happens in one big bank. Somebody in the top management is receiving regular reports and they are receiving one report from the internal risk reporting and other reports from the totally independent department for the external reporting, but they are reporting the same numbers. They are reporting for example what is the total risk exposure for the bank, for example divided to the private customers, business customers, and suddenly becoming these different reports and it was supposed that they should deliver the same measures, but this big manager said the internal reporting reports 10 billion of euro, the other reporting 12 billion of euro, where is the true? The reality was that these all different departments, they had their own data warehouses, they calculated in their own ways almost the same facts and this is what the typical, maybe you have learned this Syllos approach, that means each department implemented their own applications. So that means usually in our time often happens data mass consolidation because the top manager says, guys we could not go in this way, our numbers should be equal no matter whether we report for our internal controllers, whether we report for the external authorities. So of course the reality is a little bit more complex. During the time also the tea practitioners, the business people said, okay the pressure is big, they are constantly coming new and new requirements, so the big bank was in the year of 2008, that means all banks in Germany should start it in 2008 reporting to the new Basel Tour regulations and somewhere in 2007 or early 2007 they said, ooh, but our original architecture is somehow not prepared. Somebody said, okay we are receiving the same type of data, for example, the same credit accounts, but from very different sources. Then before we start with the calculations we should make first initial consolidation to put everything on one table. That somebody said, okay let's make a new risk information consolidation layer, that means this is typical stage area where you have one entity from each delivery object, that means no matter whether you receive from five source systems credit accounts data, you put all five delivery files in five tables. But before you start calculating you should consolidate this data and really make one entity to keep this simple. That means somewhere one year before the big bank was introduced a new layer. Suddenly the people who started to design here the data maps for the internal reporting, they realized the feeding of these data maps became totally complex, that means here you have not a consolidated layer, that means again he is also consolidated but the view is more what is the work in the source systems. Here you have also need the consolidation but the view is already what is need for the analysis. And having first delivering crawl data after that many other engines delivering result data, and this is a parallel table with the same granularity, but each engine is delivering just new facts and it's usually in one data warehouse. 
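As a rough, hedged illustration of such a consolidation layer — a toy schema, not the bank's real model — the sketch below takes credit-account deliveries staged in separate tables per source system and consolidates them into one entity while keeping track of the originating source and its key:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# one staging table per delivery object / source system (toy schema)
cur.execute("CREATE TABLE stage_retail_accounts (src_account_id TEXT, balance REAL)")
cur.execute("CREATE TABLE stage_invest_accounts (src_account_id TEXT, balance REAL)")
cur.executemany("INSERT INTO stage_retail_accounts VALUES (?, ?)",
                [("R-1001", 5000.0), ("R-1002", 120.5)])
cur.executemany("INSERT INTO stage_invest_accounts VALUES (?, ?)",
                [("IB/77", 1.5e6)])

# consolidation layer: one table for the entity, plus the originating source system,
# so the different primary keys from the sources can still be traced back
cur.execute("""CREATE TABLE cons_accounts (
                   source_system TEXT, src_account_id TEXT, balance REAL)""")
cur.execute("""INSERT INTO cons_accounts
               SELECT 'RETAIL', src_account_id, balance FROM stage_retail_accounts
               UNION ALL
               SELECT 'INVEST', src_account_id, balance FROM stage_invest_accounts""")

print(cur.execute("SELECT COUNT(*) FROM cons_accounts").fetchone())  # (3,)
```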
By default, by rules, the people are using inserts but not updates, that means it will be too expensive with millions of records just to get the facts, the calculated facts and to update the already existing tables, that means they started, we must load the data very fast and that's why, for example, let me say credit exposures, we have certainly five versions of credit exposures, the same granularity, the same volume of data but each table with some additional fields and measures. Somebody said, okay, we need also one additional layer, let's first consolidate this data, chaotic data and after that we defeat the data maps, that's appeared a new layer. So this is the reality in one big complex data warehouse, let's go further about this. So what kind of technologies are used in this data warehouse? The first place for the data integration, this is Informatica Power Center, I don't know whether you know the name but this is one of the leading vendors when it's about CETL or data integration software. As a database, it is used IBM DB2 Universal database, very powerful products, not so widely used like the others like Oracle, for example, the market leader is Oracle. For the business analysis, it was used three BI tools and this is some other part of the reality. On the first place was business objects but after that came Cognos, some other departments used Technic. And this is part of the reality also in the big data warehousing system. You see IBM DB2 database, Informatica, these are one of the market leaders, but what happens and what really cause immense problems, so the database is not available and this happens even within the hottest phase when the data should be delivered after two days and reported to the Bundesbank. And this is not only reality here but also in the real IT life, that's maybe the biggest company all over the world. That means this is one of these BI tools, it's another reality in the real world. Usually the software vendors, they are very keen to sell, they have very good experienced sales managers, they come going to the customers and say, guys, we need these tools, we are the best. Of course, they are approaching the business departments, one business department, one business department. For example, business objects and other Cognos, third business department buys micro-strategies. At some point in time, everything should go to the IT and should be managed by IT people and they say, okay, now we should train people in all these tools. There are political issues, even maybe wars sometimes, what tool should be used as a first tool. But on the other side, there is also not perfect BI tool to say, okay, we could do all of the tasks, we could cover all of the requirements only with SAP Business Objects. All of these are really great tools, I could not say something negative. Of course, for test management, there was a special software from HP, Test Director, where the test manager could define the test plans. During the test phase, the business departments could register the defects, which should be fixed. As an operating system, it is used by IBM IEX, UNIX, and of course, there are a lot of shell scripts written all over the data warehouse. That means it is not only informatic or business object or database, but there are a lot of shell scripts. Small part, there was a small task for data quality management, it was also implemented with informatic data quality. Then, of course, many different small or sometimes big modules were implemented with Java or C++. 
So, in reality, for very complex calculations, maybe the best solution is still to develop applications with C++ or Java or, today, C#. So what were the challenges in this project? First of all, there are special regulatory requirements for data management and the development process. That means there are requirements about the availability of historical data. The general rule is: what was officially reported by law has to stay frozen. That means if you see two months later that there was some calculation problem or that some of the data was not correct, you cannot just go and change the data and say, okay, now I have the right data. What was reported stays frozen. This implies further complications with versioning, but the law says: you reported some problematic data, and it has to remain frozen — because, especially nowadays, many big bank managers can be sued when the bank gets into bad condition, and then the auditors go to the bank and start to analyze the raw data. Why did this bank report this? Why did this bank report that the risk exposure was only 10 billion euro when the risk exposure was really 15 billion euro? So everything reported has to stay frozen by law. But on the other side somebody may say: okay, now that you have changed this calculation engine, please run it on the old data again to see what the real exposure at that time in the past should have been. Also, the data processing must be auditable. So there is a high level of formality; this is typical for the banks. In another company — an automotive producer or some other industry — the only controlling authority is maybe the top management or maybe the internal auditors. But in banking, the auditors also look at how the development process is organized and how the data flows look. I will come back to this. And the documentation is very complex. Typically — and this is actually good for us as consulting companies — the banks have had the biggest budgets for data warehousing systems, because they must invest in this area. Telecommunications companies or Volkswagen or Daimler invest when they have enough profit; they start investing in analysis for optimizing the business, but when everything goes down, when there is a crisis, they say, okay, we can stop for two years, we are not investing. But the banks must invest, regardless of whether it is a crisis or not. There are regulations, and they say: if you want to be a bank, you have to implement everything as it should be. So, data historization and data versioning. As I said, maybe you have learned what a slowly changing dimension of type 2 is — the MBA students should know this as well. Historization typically happens in the customer table: the customer lived in Frankfurt until yesterday, now he is living in Berlin. For analysis purposes, the record for this customer is historized. That means you now have two versions, one with the old address, valid until yesterday, and a new one starting today. Versioning is something else: until yesterday this customer was recorded as living in Frankfurt, but that was wrong — in reality he was living in Berlin. That is not historization, because the data was simply wrong. That means you should correct it, but as I said, the old reported data must stay frozen. So what is left? You cannot just go and update the wrong record's town to Berlin.
You just have to create a new record and this is a new version. This is the implications especially for the data mass, for the star, schemas. Typically you have effect tables with a lot of dimensions. One of the dimensions for example is here is other customers. What is another very important rule in the data warehouse world? The developer should avoid updates or deletes. Everything should be done mainly with inserts. Because updates, deletes, decide very expensive operations, usually you should have activated indexes to make the right updates. That means you should first look for the right records and update the records which should be updated. If this is about only one or two records, it is not a big problem, but in the data warehouse world, your dimensions, customer dimensions could be mainly, maybe could have, let me say, 10 million of records. This is so-called very large dimensions. That means instead to implement some complex logic for the versioning and historicization to update on a daily basis these dimensions, maybe the practice showed that the best approach is always to have a full load. That means you have your source data, you know what is, what are the right data, what are the false data, and what is left to do is just to make the right filter. You filter in the source system a lot, as a bulk load, for example, the right data. Of course you create a new version, you don't delete the old data. High complexity, the complexity is not only that the logic is complex, but also that you have immense amount of source systems, almost the same systems, sometimes for the same entity, let me say accounts, you have two or three types of different primary keys. So how you could manage this? That means the real dimension of model is not exactly this. Of course you introduce surrogate keys, but your loading process is a little bit complex because you have, depending on the sources and different lookup tables, to help to find whether you have already this record or not. High political pressure, of course deadlines are fixed, that means the law says from 1st of January 2008 you should report according to these new standards, that means there is no postponing. Of course also the internal pressure, the standing fights between IT, business always say, we need these changes tomorrow. The IT said we could deliver this in six months because we have released planning. And this makes, especially in such big project where the budgets are about over 100 millions of euro, so you could imagine what is the pressure of the IT managers, what is the pressure of the business people, and everybody's thinking of this pressure. So back to this issues, challenges and startability, auditability, ability to be auditable. For example, the typical question is when the auditors are coming, you are loading, for example, the best case could be once a month, but typically now the frequency is going down to every day. Sometimes you have for the same day several deliveries because the floor slots just crash and because they're deleting the wrong data is expensive operation, you don't care that there is wrong data or not consistent, you just want to restart the whole process and load the data again. And some days you have for the same business, there are several versions, and when the auditors come they would like to say, okay, what are the right data with which you calculated the reported results. 
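Here is a minimal sketch of this insert-only historization and versioning, using an invented toy customer dimension (a real model would add surrogate keys, validity dates and load metadata):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# toy customer dimension: every change and every correction becomes a new row
cur.execute("""CREATE TABLE dim_customer (
                   customer_id INTEGER, town TEXT,
                   version INTEGER, valid_from TEXT)""")

def new_version(customer_id, town, valid_from):
    """Insert-only: never UPDATE or DELETE reported data, just add the next version."""
    cur.execute("SELECT COALESCE(MAX(version), 0) FROM dim_customer WHERE customer_id = ?",
                (customer_id,))
    v = cur.fetchone()[0] + 1
    cur.execute("INSERT INTO dim_customer VALUES (?, ?, ?, ?)",
                (customer_id, town, v, valid_from))

new_version(42, "Frankfurt", "2007-01-01")   # original record
new_version(42, "Berlin",    "2008-01-01")   # historization: the customer really moved
new_version(42, "Berlin",    "2007-01-01")   # versioning: the Frankfurt entry was simply wrong

for row in cur.execute("SELECT * FROM dim_customer ORDER BY version"):
    print(row)   # all three rows are still there; the reported state stays frozen
```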
Or restartability: as I said, you have loaded 100 tables, most of them with 100 million records, the whole process takes maybe a day or so, and suddenly, in the middle of the process, the system crashes — what do you do? Of course you now have inconsistent data in the database. It would be a very complex issue and a very costly way to go and start deleting this incomplete data, to first make the database clean and only then restart. That would be very expensive, so people simply leave this data; it stays in the database and you start the process again. And this has implications for the whole physical data model: you must have metadata. You know what metadata is — data about the data, data that does not come from the source systems. A must-have column for each business entity table is the load number, and of course the business date. You also introduce additional entities, several new tables. Some of them are the so-called static metadata, which are defined once and are more or less stable over time: you define all of your sources, all of your targets and tables, all of the jobs — for example loading credit accounts, loading securities, applications. And there is dynamic metadata — the run statistics, what exactly the jobs did. Here is an example of these tables. The first table is a table with the real business data coming from the source system; for example these are accounts, and you have account ID, customer ID and balance. But for the purposes of the data warehouse you add two additional columns: business day and load number. And what you can see, as I said: for example, we started our job on the 21st of December to load this data for the last day of the year, but suddenly the system crashed and only the first record was loaded — you don't see the second and the third records. That's why we have the load number here, and in another table this load number is the primary key; when the system crashed, of course, somebody had to set this status to failed. We restart the system; we don't care anymore that there is now a dirty record in the table, we just start the whole process again. At the end we have a clean data set — these are our real records, and they have load number 16. That means our primary key for this table is now the account ID plus the load number, and maybe plus the business day. And you see here: for load number 16 the status is OK, together with the number of loaded records — rows in the millions. And here is another example: some other job is running with load number 17, and its status is "running". So this makes the whole story a little bit more complex, and this table does not have only these five columns; in this case, at this bank, the control table had something like 30 or 40 columns with totally different statistics. Also the other metadata tables — sources, targets and so on — were very complex and very hard to maintain during development, because most of the static metadata is maintained manually; you have Excel sheets, and okay, if you have enough time you could develop some application for it. But through this metadata you provide auditability, and you also provide reloadability.
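A small sketch of this load-control pattern follows, with toy tables and column names that are far simpler than the 30-40 columns mentioned above: a failed load is only marked as failed, the dirty rows stay where they are, and the restart simply runs under the next load number.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("""CREATE TABLE accounts (account_id TEXT, customer_id TEXT, balance REAL,
                                      business_day TEXT, load_number INTEGER)""")
cur.execute("""CREATE TABLE load_control (load_number INTEGER PRIMARY KEY,
                                          status TEXT, loaded_rows INTEGER)""")

def run_load(load_number, business_day, rows, crash_after=None):
    cur.execute("INSERT INTO load_control VALUES (?, 'RUNNING', 0)", (load_number,))
    loaded = 0
    for i, (acc, cust, bal) in enumerate(rows):
        if crash_after is not None and i >= crash_after:
            # simulate a crash: dirty rows stay in the table, only the status is set
            cur.execute("UPDATE load_control SET status='FAILED', loaded_rows=? "
                        "WHERE load_number=?", (loaded, load_number))
            return
        cur.execute("INSERT INTO accounts VALUES (?, ?, ?, ?, ?)",
                    (acc, cust, bal, business_day, load_number))
        loaded += 1
    cur.execute("UPDATE load_control SET status='OK', loaded_rows=? WHERE load_number=?",
                (loaded, load_number))

delivery = [("A1", "C1", 100.0), ("A2", "C2", 250.0), ("A3", "C1", -40.0)]
run_load(15, "2007-12-31", delivery, crash_after=1)  # crashes: load 15 stays FAILED
run_load(16, "2007-12-31", delivery)                 # restart under the next load number

# consumers only ever read data whose load finished with status OK;
# a separate housekeeping job can later bulk-delete rows of FAILED load numbers
print(cur.execute("""SELECT COUNT(*) FROM accounts a
                     JOIN load_control c ON c.load_number = a.load_number
                     WHERE c.status = 'OK'""").fetchone())   # (3,)
```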
That means if somebody can't and say OK we have in our latest data map tables also this load number and we know that we reported exactly the data with the load number 16 and if two years later comes the auditors and say let's see what you reported they could go start with the data maps and analyze, exactly go back and then to see what exactly the data used for these reports. And again for the reloadability we, the system crash we just don't care that there is a dirty record we just started loading. Of course the modern databases they care for also means for deleting in bulk mode. So that means some days this record will be deleted but this will be not maybe once a month. There will be another job independent from the whole process starting and looking at this table. I have load number 15, this is failed. Nobody needs this data because this data was not reported. And the modern tables, databases like DB2, like Oracle, they provide the operation truncate or the DB2 if you define the table like a cluster that means you define that your cluster is built on the load number and business day. When you say delete load number 15 the database go and now that this all of the five million records with the load number 15 are saved in the same place and just remove the, define this space as a free. Nobody is making that physical delete or why to do this just from now on this space is free. And nobody is making... So this is the main issues. You work with huge amount of data. Your load process are very complex. The whole process that means the most important reporting data are the data from the end last day of the month. Usually at this bank the whole calculation process is still taking about 12 days. That means now is third of February. They are running now the calculation for the 31st of January and the whole process will be ready something like 10th or 12th working date, one quick day in the following month. Do you understand what I am talking? Do I talk too fast or... So what's... So we have 20 minutes. Actually this was all about this huge big data warehouse. As I said it was implemented informatica and many people are saying this was one of the biggest informatica project worldwide. Let me say with something like more than 1000 mappings, mappings means this is the smallest piece of programming code for transforming data from one or more tables to another tables. This is the analog in the programming case function. The reality in the other companies, other type of companies like Volkswagen, Daimler, they also have such data warehouses. They don't have this strong political pressure. Also not this complexity and the communications between the IT and the business department. It's not so tensed. Of course when the IT people have arguments say okay we could not deliver to this terminal, let's make a smaller release and we could deliver this in the next release. It's going everything easier. So I will talk just some words about data quality. Dust is also specialized in this area. We have our know-how methodology, how what we understand on the data quality management, what should be done and to our customer, what is master data management, what is data governance. We combine these three words or both words in one. The term above this is data management. But we have 20 minutes more. So why do you need data quality management? 
Typically these issues arise in the data warehousing systems because in one operational system, the particular operator who is dealing with the particular records, if you see that some of the records, the data wrong or something is missing, he could fix this on the way. That means some customers is coming in the bank office and doing something and both of them realize something is wrong, they could update this. It's not a huge issue. There is no need for some special measurements. But in the data warehousing world, people are analyzing. They are analyzing based on the geography or based on the region data, based on the sex male status, whether this is female or male, based on the part of the town. For example, when you calculate the risk, create risk measures or create scoring, it's important when the customer is getting to the bank and that's for the credit, the credit employee should know whether this customer will pay the money back. And of course, in the modern world, there is so-called operational BI, that means collecting the data from this customer named, birth dates, marriage status, where is he living automatically, the banking employee is getting a scoring. So the typical issue with such age with the people who are living in this part of the big city, is that the probability that this guy will not pay back the credit or will have problems is 20%. And this is already big. That means this has impact on what will be the interest rates and what should be the amount of the, which the customer should bring himself. That means in order to this all analysis that is to be reliable, all of the data must have the good quality. That means it is also important some small details, part of the town, marriage status, female or male. And typically for the normal bank employee, maybe this is not important data. When the people start to analyze or make data mining models, it's very important to know what is the data quality. Are all of the fields really filled or some of the are nulls? So maybe these fields are not important for the operation or business. They are in the application some fields. There is no constraints to check whether this is null or not null. But for the people who make analysis, it's important. And where usually where the people starting with the data quality, first you have to know the data. And what is usually missing in one big data warehouse is the profiling stage. That means you have a new delivery object or several delivery objects or you start to integrate a new source systems. And the first job is to get the data learned. And we are doing this with profiling. That means we are starting as the first place understanding the data. After that we are validating the data and whether we could define some business rules or whether according to the business rules. So after data validation you could define for your data warehouse some data quality rules. How to clean the data, what should be done if some records, some fields are missing, whether you should use default values. Whether if some part of the source system does not deliver these fields, but these fields are delivered from the other systems. And if you have to enrich this data, what should be the rules? And who is responsible for this data quality? Usually who is the data owner? That's in this way we are coming to the data governance. In the data governance the company should have established processes. Who is the owner of the information? Who should, what analyze? What should be the steering committee? 
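As a toy illustration of such data quality rules — invented rules and field names, not the speaker's actual methodology — each rule below says which field it checks, what counts as a violation, and whether a default value may be applied or the problem can only be reported:

```python
from datetime import date

# toy data quality rules: field, check, and what to do on violation (all invented)
RULES = [
    {"field": "sex",        "ok": lambda v: v in ("M", "F"), "on_fail": "report"},
    {"field": "birth_date", "ok": lambda v: v is not None,   "on_fail": "report"},
    {"field": "town",       "ok": lambda v: bool(v),         "on_fail": ("default", "UNKNOWN")},
]

def apply_rules(record):
    issues = []
    for rule in RULES:
        if rule["ok"](record.get(rule["field"])):
            continue
        action = rule["on_fail"]
        if isinstance(action, tuple) and action[0] == "default":
            record[rule["field"]] = action[1]      # cleansing with a default value
        else:
            issues.append(rule["field"])           # no sensible default (e.g. sex), only report
    return record, issues

customer = {"customer_id": 7, "sex": None, "birth_date": date(1980, 5, 1), "town": ""}
print(apply_rules(customer))
# -> town is defaulted to 'UNKNOWN'; 'sex' is listed as an issue for the data owner
```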
Because the most difficult thing is to say to a source system: you deliver dirty data, please change this, because we cannot calculate with it. Of course that is a totally different department, a totally different manager. They have their budget for the whole year, they have a lot of other things to do, and now somebody from the business analysis department comes and says: your data is dirty, please change this. But if this system is a 30-year-old system — and this is the reality in Germany and in North America, where the core banking systems still run on mainframes developed 20 or 30 years ago — there is nobody left who could change it, and even if you could change it, it would be very, very costly. That means you have to implement this so-called data cleansing in the data warehouse. Unfortunately I have not brought it, but there was a very good video that we shot at one of our conferences. The case was a small municipality in the United States. One of the employees in the municipality made an error when estimating the value of a house: instead of 500,000 dollars she somehow put three additional zeros at the end, and suddenly this house was worth 500 million US dollars. But this is a big town — the total value of the houses is maybe several billion US dollars — and nobody just notices the difference when some 500 million US dollars suddenly appear. So this employee made the error, nobody noticed it, and when they calculated the total value of the houses in this area they said: oh, the prices have increased, we will now get additional taxes. And they made their planning based on this price increase and said: we will get that much more tax revenue next year. They planned infrastructure changes, to build a new kindergarten, to renovate a school. And suddenly the lady who owned this house received a notice to pay something like one million US dollars in tax. This was crazy; she started to sue the municipality, which of course took years, and what happened in the end: the municipality had planned with money it would never get, because the lady with the 500,000-dollar house could not pay one million in taxes. This was a disaster, and this is the reason you have data quality measures. Especially before you start to calculate, there are special monitoring approaches: you receive the daily data, and if you see peaks — yesterday the total amount was, let me say, 10 billion US dollars, today it is 11 billion, while in the past year the change on a daily basis was always small, something like 100,000 or 200,000 US dollars — then suddenly there is a big jump, and somebody has to see this somehow. That's why there are special reports for monitoring this; when the system sees it, the corresponding people should see it and report it, and the first question to ask is: is this true, is it possible? And the right people should check this and say: oh, this is an error.
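The monitoring idea behind this story can be sketched very simply (invented thresholds and numbers; a real system would use proper statistical checks and alerting): compare today's total against the recent history and flag a jump far outside the usual daily change.

```python
def check_daily_total(history, today, max_factor=10.0):
    """Flag today's total if its change is far above the typical daily change.

    history: previous daily totals, today: the new total.
    max_factor is an invented threshold, not a real tuning value.
    """
    changes = [abs(b - a) for a, b in zip(history, history[1:])]
    typical = sum(changes) / len(changes)
    jump = abs(today - history[-1])
    if jump > max_factor * typical:
        return f"SUSPICIOUS: total jumped by {jump:,.0f} (typical daily change {typical:,.0f})"
    return "OK"

# totals in dollars: small daily movements, then a sudden 500 million jump
history = [10_000_000_000 + i * 150_000 for i in range(30)]
print(check_daily_total(history, history[-1] + 120_000))      # OK
print(check_daily_total(history, history[-1] + 500_000_000))  # SUSPICIOUS: ...
```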
But this is the typical data quality problem, and I could give also, these are typical results from the data profiling, that means when you start to analyze a typical data profiling tool, it's giving some statistics of, let me say this one table, or this one field from one table, of type day voucher, maybe the next example is more, that means you get statistics, this field of type double, what are the values of this field, how many are there, what are the typical values, what are the average statistics, what are the variances, and sometimes you get very interesting information, you analyze a field which is for the analysis very important, and suddenly you see, for example, male or female, you see from 5 million customers, 5% males, 5% females, the rest now, that means you could do nothing with these data, but you realize this in the early stage and start to do something, either go to the source system and request a change. This is how we do data profiling, that means when you start doing with new data source, get production data, manage security issues, this means if these are customer data, usually the IT people are not allowed to know the real names, it could happen that within this customer they are employees of the bank, and perlos you are not allowed just to have access to these data, that means the first step is always anonymization, and to run the profile, it could be with a tool, and it could be also with simple SQL statements, but nowadays they are very good tools. You analyze what identify out layers, and get the steward data, which are usually from the business department to validate this information to say this is an error, we could do this, we could define the new master data, or you could define a rule, how to update, or how to define a default value. Okay, for female or male, it will be difficult to define a default value. So I will skip, so what is left? I could talk a little bit about how the projects are organized, so maybe what are the roles there? We have roles, so how the projects are. So system development life cycle, you see what are the phases in one project, so of course there is an initial phase, somebody says we need this analysis, so far we have not make customer segmentations, because of the marketing processes, the customer should be segmented to know how many are young people, how many are old people, how many are living in North Germany, how many are South Germany. Let's start such a project, of course the next step is high level requirements analysis, data analysis and profiling, what are the role data, what is the reality actually, could we make this analysis at all, or we should first start changing the source systems. Data modeling and so on, so we could go quickly through these different phases. So during the initiation planning, there is this stage, maybe the deliverable is some kind of high level project plan, which is of course some status report, but there are a lot of meetings, usually on the higher level, managers, the EICLAF Bank, where we have budget for this, at this stage there is some bar off cost estimation, of course the next phase is some detailed requirements analysis, and somebody said, okay we could do this, let's do this, we will plan the budget, let's start with the real analysis, what we need, what should be the requirements. 
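Since data profiling comes up again in the analysis phase, here is a tiny sketch of the kind of statistics a profiling step produces — toy records and fields, not the output of a real profiling tool:

```python
from collections import Counter

# toy customer records; in reality you would profile the real (anonymized) source data
customers = [
    {"customer_id": 1, "sex": "M",  "town": "Berlin"},
    {"customer_id": 2, "sex": None, "town": "Hamburg"},
    {"customer_id": 3, "sex": None, "town": None},
    {"customer_id": 4, "sex": "F",  "town": "Berlin"},
]

def profile(rows, field):
    """Very small profiling step: null rate and value distribution of one field."""
    values = [r.get(field) for r in rows]
    nulls = sum(v is None for v in values)
    return {
        "field": field,
        "null_rate": nulls / len(values),
        "distribution": Counter(v for v in values if v is not None),
    }

for field in ("sex", "town"):
    print(profile(customers, field))
# e.g. sex: 50% null -> the field may be unusable for a male/female analysis
```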
And here coming the, the previous steps were more for managers, for business department people, and data analysis are starting to, the mixture between IT and business people, who could make for example data profiling. If you have a good profiling tool, maybe this could be used from somebody from the business department, but if you don't have, you need somebody who knows SQL, who could go to the real role data and say, yeah the data are so, all the necessary data. Data modeling, I don't think behind the data warehouse words, during the data modeling you first creates the logical model, and this is the star of snowflake, but the real physical models is totally different, and the data modeling is also made from the people who are between the business department and IT's, so when you have your models defined, that means you know what you will be analysed, you know what will be the information, you know what will be your target models, you know what is your source models, and you could define what should be the calculations between both of them. And usually the people in the data warehousing world, they are defining mappings, for example you have target fields, or target table, you have source table, or tables, and this should be the calculations, and the best ways to do this is in a table way. And usually it is recommended to use some pseudo languages understandable for the business people, but also understandable for the IT peoples. A real, total, a real, only text is not working. So of course if it is a totally new project, you need somebody to define what should be the architecture, do you need first high level architecture, what kind of layers do you need, how many layers, of course you start with as simple as possible, what could be the database, what is the tradition in this company, is there some reference already running projects, could we reuse this, it's a typical question, and what is recommended, first try to see whether you could reuse something, somewhere there is already somebody made the data warehousing, for example you are in the marketing department, but the risk people already have something, could you reuse this, this is the first question, and this is in the part during the solution architecture. Okay, the IT people they prefer to make all this everything new from scratch, but the business people say, okay we already paid for similar stuff, please go and look what the other department is doing and check whether it is reusable or not. Yeah, detailed process design, development, data integration, maybe this is one of the big parts in a data warehousing development and usually the biggest part is the backend development or the data integration process. So, and this takes a lot of time, the people are engaging almost during the whole life cycle phases, that means okay in the early stage you start with design and analysis, but you always start with prototypes, you always start with to check whether what you have as idea, whether it will work at all. And the developers should stay till the very end because there's test phases they should be ready to fix if something is necessary to fix, and what is usually happening in the big companies when they are ready with one development phase, they start already working on the next release in a big project. And usually they have separated backend developers and front-end developers or the reporting. And usually this part is relatively small. As I said, the big part is the backend development of the data integration process. 
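The source-to-target mapping specifications mentioned above are usually kept in table form; as a hedged illustration with invented target fields and pseudo-language rules, such a specification might be captured like this:

```python
# toy source-to-target mapping specification: one row per target field,
# readable for business people and precise enough for the ETL developer
MAPPING_SPEC = [
    {"target": "EXPOSURE.customer_key",
     "sources": ["ACCOUNTS.customer_id"],
     "rule": "lookup surrogate key in DIM_CUSTOMER; create new key if missing"},
    {"target": "EXPOSURE.exposure_amount",
     "sources": ["ACCOUNTS.balance", "COLLATERAL.value"],
     "rule": "max(balance - value, 0)"},
    {"target": "EXPOSURE.business_day",
     "sources": [],
     "rule": "constant: business day of the delivery"},
]

for m in MAPPING_SPEC:
    print(f"{m['target']:30} <- {', '.join(m['sources']) or '(none)':35} {m['rule']}")
```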
In the previous part, usually in the most of the cases, this is now there is ETL tool like Informatica or data stage from IBM or there is data integrated from business object, Oracle has also tools, but there are also many projects and where the ETL is done with SQL scripts. And these are many huge projects. The reporting development usually you have reporting tools like business objects or micro-strategy. And you also have two layers. The first layer is for example in business object, these are so-called universes, so you define some meta data layer. That means you define what are the business requirements, you know what are the physical models, and you define this in this meta data layer. And usually the typical, the ETL people just create several complex reports or initial test reports and after that the real reporting development is done by the business departments. That's why they are using BI tools. So quality assurance is, especially in the finance sector, is a very important part and very strictly controlled by the authorities. Usually this starts from the very beginning. You have a standard how to define a high level design. Usually they also push to use so-called CMI approaches where it's defined for each phase of the project, what should be the documentation. Of course it's very costly, but it's a must. You have standards for data modeling design. You do check this and the data quality assurance team is checking whether the standards are conformed. That means during the quality assurance phase, or in this phase on one side you control the quality of the whole process and there is another part just testing the results. And the testing of the results is also starting during the development phase, during the design phase. At this time the business in IT agreed what should be the test plans for the products during the development phase. You have in parallel the developers, when the quality assurance team is working also in parallel and is generating usually test data. In the most case this is artificial data based on the business rules. And during the tests, of course they are working closely with the developers in the business department, during the test phases, the test people just loading the test data in the database and the process started and they are analyzing, in most cases automatically whether the results are matching the tests what should be the result, what is the expected result. So integration tests you have at some point of time you should test whether the whole system is good work. That means whether all the versions of Informatica is confirmed with the current version of the database where the Unix machine codes or the database codes cope with the amount of the data. And during this integration test you try to really to load the system as much as possible and mostly with the best with real data. And at some point of time there is also a user acceptance test where either based on the artificial data or part of the real data the users, the end users also define what should be the expectation either they accept this model or we do not accept it. That's why it's very important how to plan the whole phases that you should have really conformed time. If you see something is not really developed you should have time to fix this to deploy again, to install again. Deployment is also a very important part usually maybe in some small project neglected. 
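And a tiny sketch of the automated result checking described above — a made-up transformation with made-up expected values, not a real test-management tool: generated test data is fed in, the transformation runs, and the result is compared against the expected outcome.

```python
# toy transformation under test: aggregate account balances per customer
def total_exposure(accounts):
    totals = {}
    for acc in accounts:
        totals[acc["customer_id"]] = totals.get(acc["customer_id"], 0.0) + acc["balance"]
    return totals

# generated test data covering a positive and a negative case
test_accounts = [
    {"customer_id": "C1", "balance": 100.0},
    {"customer_id": "C1", "balance": 50.0},
    {"customer_id": "C2", "balance": -25.0},   # negative balance must not be dropped
]
expected = {"C1": 150.0, "C2": -25.0}

actual = total_exposure(test_accounts)
assert actual == expected, f"test failed: {actual} != {expected}"
print("test passed")
```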
That means in a small project you say ok we developed we have some test machine now everything is ok, ok let's start just copying all the source code to the production machine and what usually happens you just forget something and suddenly you test it, the whole system but on the test machine you start to work this in production and suddenly it does not work because somebody just forget to copy something from the test to the production machine. That's why the deployment is also, let me say also mission critical parts and these are usually independent teams, they are trained especially how to control first deployment on the test machine and they say people when what you deploy on the test machine they should be the same package, you give us this package and now that we should deploy the whole package to the production machine. So project teams, maybe this is the interesting part also for you and what I have from my experience running interviews especially with university, fresh people coming fresh from the university of course you have a project manager, you want big projects when I'm talking about projects I forget to say the consulting company usually work for project for their customers we are not a software company that's to have our production life cycle we are working at customer sites in the most of the cases of course they are offshore or on your short parts but usually in the big companies the projects are from the customers they have the budgets, they build a team and you have usually a project manager, the project manager is responsible for the project first it is planned in the right way, the budget is provided that means they should go to the other managers or the business department say we need for this budget and to try to convince them why this is so of course he should control the whole team but he is very important the communication skills and also the skills to take responsibility is also to be accountable that means for such people it is very important to go and fairly to say to the top manager or to the business guy people will have a problem, so this is sometimes difficult part it's difficult now, sometimes to say you see that the team is behind the agenda but somehow the managers are afraid to say we are behind the agenda let's see what we could do of course these are special requirements for the soft skills for the project manager typically these people should be open in communication and when there is a problem in the early stage to say maybe we will not keep this deadline let's see what we could do so business analysts these are people who stay between the business departments and the IT, that means the people who should really be able to run interviews to ask the right questions but also describe the information in the way which is good for the business departments to say that because they should confirm that these are the requirements but also for the IT people that they could understand what should be done so of course the requirements here are also good communications but good analytical thinking data architect usually these are experienced IT practitioners they of course talk a lot with the business department with the source distance to define what the real data, what should be the data models they should know the different types of data modeling normalized, denormalized, snowflake, style schema so solution architects as I said this is the there is such phase, there is such people who should really define the system as a whole like layers, also what could be 
the technologies and this should be somebody also who is experienced when we talk about project team this does not mean that you should have physical persons for each of this role usually what happens you have one physical person covering two or three of the roles for example somebody could be solution architect but he also could be the, for example on the next stage the data integration designer, the elite data integration developer because on the other side the big part of the data warehouse is the data integration so you have a lot of data integration developers who really are doing with the corresponding ETL tool usually by default in the data warehouse world this should be very good database specialist people who really understand now what is SQL how to analyze very quickly the data and now the tools and the concepts, the ETL concepts the core journalists or testers, people who really should very quickly understand what are the business cases, what are the test plans to define the right test data covering all of the cases for example you should have positive negative test for example you define a test data where there is a rule that some field should be filled and you generate test data where this field is not filled and to see how the system will react so this is also some special parts you need also special trainings and skills DBAs, these are people who are more technical they should have very good knowledge about the particular database whether this is DB2 or Oracle, how to create maintain the DDA scripts they also define the physical data model for example they define the index policies whether it should be used to materialize views for performance optimization what should be done in the database if you see all people who insert in this table a lot of data but we have index on this and we call it the same time when you load somebody is reading from this table what should be the solution and it's also a very important part of the team repose developer, these are people who also must have good communication skills because they are also working with the dealing with the people who are working in the business departments and develop the complex reports and should have somehow some understanding what is the business about so now I will talk some words for example we are a Dastra as a search company also needs smart people we are recruiting a lot of junior people people who are coming from the universities usually we work with people yeah so what we provide and what's the typical in Germany what is the typical intentions for young people is to start in one company to stay till the end of the lifetime there maybe this is not anymore the case with you but at least it was for several years ago everybody is looking for lifetime employment but nowadays it's not important to have lifetime employment because you could start in a big company and after three years they say we move half of the company to India and half of you are on the streets that means the most important to have lifetime employability that means you start somewhere and the important is what are you learning there what you could say is this word could you apply this independently whether you are still working in this company or what is the industry and actually this is what we provide and the typical people coming to us they say ok I want to work with the top technologies really to work something that is independent from the crisis and if you need to also look for fantastic organization or culture we are in a truly mean of the 
words, a multi-cultural company. First of all, a big part of the company works in Canada, and Canada is typically an immigrants' country, so we have people, not only in Germany, from all parts of the world: Africa, Asia, the Far East, Eastern Europe, but also South America and North America. I don't know how much time we have left, so let's talk very quickly about the trends, about the issues the CIOs are dealing with nowadays. From the business perspective, in the first places you find process improvement and cost optimization. From the IT side, for the technical stuff, there are some buzzwords now like virtualization, cloud computing, web services, social networks. Cloud computing as a topic is maybe not older than two or three years, but business intelligence has been a constant, and now mobile technologies as well. And what we see is that business intelligence stays relatively constant among the top-priority issues of the top IT managers.

Our business is really spread over all industries. The companies with the highest budgets are the banks, especially investment banking, because they make the highest profits, and they are maybe the companies that really invest in cutting-edge computational technology. Maybe you have heard of the American private investment companies that trade on the stock exchanges all over the world: they are not trading with real persons, they are trading with computer systems. That means they are buying the most expensive computers and investing in the most expensive software, and this software has to analyze in real time the statistics from the past years and predict what the development of a price will be in the next seconds, and buy or not buy a stock. In any case, the whole banking sector will continue to invest; there was a big crisis, of course, and now there are new laws, which is welcome for us IT people. Insurance is similar, they have a lot of regulations they have to conform to. Telecommunications companies are typical users of data warehouses; they usually have the biggest amounts of data, because they collect the data on the call level. Just imagine: in Germany alone, within one day, there are maybe several hundred million calls, and you have all the statistical data and have to analyze where things happen. The same goes for retail companies like Metro or Rewe in Germany or Walmart in the US; they also have data warehouses with huge amounts of data. For telecommunications and retail the data warehouses are typically not so complex, and the data is not as rich as in banking, but they really deal with huge volumes. In other sectors, for example automotive, the main use of data warehouses is the supply chain: how could you optimize the supply chain of a global company? Volkswagen, for example, has something like over 100 billion euros of revenue in a year, and of that, 80 to 90 billion euros go to suppliers all over the world, hundreds of thousands of suppliers delivering billions of parts. So the question is how you can control this, how you can optimize this, and how you can say, if a purchaser negotiated and the company is making savings, whether that is because the purchaser was very good and negotiated good prices, or whether the prices on the market simply went down. When companies control and calculate this exactly, they know what their purchasers are doing all over the world, what the scoring is, and what the quality of the suppliers is.

In the past the typical, traditional users of business intelligence were the top managers or some small group of analysts, but BI keeps moving further down in the organization. Now we are talking about operational BI: as I said previously, a simple employee, maybe not highly educated, who has to grant a credit should automatically get the credit score for this type of customer; this is calculated somewhere in a data warehouse and sent back to the operational systems, the so-called closed loop. Our business is also multidisciplinary: most of the job is done by IT people, but it is not just programming and not just management, it is business analysis, quality assurance, data quality, front-end development, data integration. All of these terms appeared in the past years, and typically the vendors try to play with them: now the buzzwords are mobile BI and operational BI, and each year new ones are coming, and for you the question is where you should go, in which direction. It is not as complex as it seems to be.

Leading-edge technologies, then: let me say a few words about that part. You know what a star schema is, and I suppose you also know rollup, ROLAP and MOLAP. The traditional data warehouse or data mart is developed with a star schema, and what usually happens is that the fact table grows and grows until one day it reaches, let me say, one billion records. The typical dimensions, say account types or factories, could have 100 or 200 entries, but you also have customers, and the customer dimension could have 100 million records. The traditional data mart is still widely applied on relational databases, implemented with a star schema, whether on Oracle or DB2, and the typical problems are performance problems; this is what the business users mostly complain about: the reports are too slow. Of course the database producers started to invest in optimizing the software; Oracle, for example, introduced new types of indexes, the so-called bitmap indexes, and the so-called materialized views. Does everybody know what a materialized view is? So: you have your fact table, let me say one billion records, and you have dimensions, customers, regions, date and time, where the customer dimension alone has 100 million records. But for the typical analysis, the people who analyze don't care about particular persons; you usually have a hierarchy in the customer dimension, for example towns and regions, or another classification like male or female, and they want to analyze, let's say, what we have sold in the region of north Germany. If you have denormalized dimensions, as in the typical star schema, the hierarchy is right there: towns, regions, regions within the country, continents, business regions all over the world. So if you want to analyze sales amounts in north or south Germany, the typical BI tool will generate a select statement that joins these huge tables, one billion records with 100 million, and this will be very slow, because it works on the real physical data, although all you actually need are two values: whether a sale belongs to north or to south Germany.

So what can you do? In earlier times you would go and define an aggregated table, facts on the region level, so that you have, let me say, 5 million records instead of one billion, and you tell people: okay, you now have one additional table, you can point your reports directly at this table and it will be very performant. On the next day there is another requirement, another aggregation level, so the DBA goes and creates a new aggregated table, and so on and so on; it gets bigger and bigger, and at some point they say: now we have a hundred of these tables, we cannot manage this any more. So what did the database vendors do? They made the database intelligent enough that when the front end sends the select statement against the fact table and the dimension tables with a group by, the database thinks: I already have aggregated data, why should I group and calculate again when I can go directly to the small table? This is the so-called query rewrite: the database rewrites the query, the query goes against the small aggregate table and gets the data back, because at the end the user usually has maybe 10, 20 or 100 rows on the report, not more, and then the answer comes in milliseconds. This is the investment the database vendors made. What is the problem? The complexity of the database becomes very high; most database administrators don't even know these features, and even the consultants of the software producers were not always up to date about them. I will come back to this pre-aggregation and query-rewrite idea with a small sketch below.

That is why the so-called MOLAP databases appeared, where you really have cubes and the data is already stored in a pre-aggregated way; this is the know-how of the database itself. That means you don't need a DBA to analyze which kinds of aggregates you need, because that is also an issue: should we have 5 materialized views or 10? The most important thing these MOLAP tools bring is that the database is intelligent enough to determine itself what should be pre-aggregated across all dimensions, groups and hierarchies. Of course you have the means to control this, because on the one side it costs space, and on the other side loading these cubes and creating all the aggregations also takes time. Typical software vendors in this area: Oracle Essbase is a typical multidimensional database; IBM bought Cognos, and Cognos had in turn bought another company producing the so-called TM1 database, which is also a truly multidimensional database. But, as I said, this also has its costs: you need additional knowledge, so besides the relational DBAs you now need somebody in your data warehouse team who understands this technology as well, and you should also calculate that the load times can be immense; loading such a cube can take days.
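To make the pre-aggregation and query-rewrite idea concrete, here is a minimal, illustrative Python sketch. It is my own example, not something shown in the talk: the table, the region values and the row counts are invented, and a real database would do all of this internally with a materialized view. The sketch builds a small region-level aggregate from the detail rows once, and then answers the report from that aggregate instead of scanning and re-grouping the detail rows, which is essentially what query rewrite does.

```python
from collections import defaultdict
import random

# Hypothetical detail data: (customer_id, region, amount) rows of a sales fact
# table. In a real warehouse this would be the billion-row table from the talk.
random.seed(0)
REGIONS = ["north Germany", "south Germany"]
fact_table = [(i, random.choice(REGIONS), random.uniform(5.0, 500.0))
              for i in range(100_000)]

# "Materialized view": aggregate once, on the region level only.
sales_by_region = defaultdict(float)
for _customer, region, amount in fact_table:
    sales_by_region[region] += amount

def report_sales(region):
    """Answer the regional report from the small aggregate ("query rewrite")
    instead of re-scanning and re-grouping the detail rows every time."""
    return sales_by_region[region]

print(report_sales("north Germany"))
```

The trade-off mentioned in the talk shows up here as well: the aggregate has to be refreshed whenever new detail rows are loaded, and every additional reporting level means yet another aggregate to maintain, which is exactly why MOLAP engines that manage the pre-aggregation themselves became attractive.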
Another tendency is so-called in-memory BI, which again comes from this performance issue that the reports are too slow. These are analysis tools that take the data and work in RAM. Previously we saw one billion fact records and dimensions with 10 or 100 million customers, but nowadays memory is very cheap: my notebook has 5 gigabytes of RAM, you can easily buy notebooks with 8 gigabytes, and a workstation with 16 gigabytes costs maybe 2,000 or 3,000 euros, which is nothing. So what these tools do is load the whole data set into main memory for the analysis, and from that point on it is very, very fast, because you don't have the very expensive disk access operations. A typical software producer here is QlikView, a small company from Sweden but very popular recently; Volkswagen has started to use QlikView as a tool, and of course SAP BusinessObjects announced that they will also go in this direction of in-memory BI analysis.

But the reality is that the amount of data will grow and grow and grow. Here is a small example: counters for electric power usage, smart meters. Let's say you have one million customers and each has such a counter, and the utility reads the data once a month, so you already get a million records a month. But now the technology is very advanced, everything happens wirelessly, and the companies can collect this data on an hourly basis; and you usually have not only one such counter in the house, you can have several devices. Take the same example, one million customers, but everybody has 10 devices in his house and the company reads them once every minute: then per month you end up with something on the order of 400 billion records (one million customers times ten devices times one reading per minute over thirty days is roughly 430 billion rows). So you see how the data volume is changing. There was another statistic along these lines saying that in the next three years the world will generate as much data as was generated in the whole time before. And it is not only this example with the power counters: you now have RFIDs. When you buy a ticket for the stadium, everybody knows who is getting into the stadium; train tickets have such RFIDs, and then you will know where a person is; there are readers all over, and these readers generate statistical data. Okay, this is collected, and some day somebody will say: let's optimize the business based on this data. And this is a huge amount of data.

How is the industry addressing this issue? With so-called data warehouse appliances; this slide is again from the leading-edge technologies part. With data warehouse appliances we come back to the performance problem: one solution can be MOLAP databases, another one in-memory BI, and another solution is data warehouse appliances. That means you buy a box, and this box is a database machine. On the picture this is one producer of such databases; another big producer is Teradata, traditionally the most used database for really very, very large data warehouses, usually at the telcos and the retail companies. What they say is: we sell you a total solution, the hardware is specially designed for this database. Oracle also has its own solution, called Exadata, where they say the disks are so intelligent that the disk itself can resolve the WHERE clause of the SQL query, so the filtering already happens on that level. And the tendency is: if you have such a powerful machine, the vendor says that no matter how stupid the SQL you send to the machine, the performance will always be better. Then we had a first case at a telco where a customer said: why do I need a data mart at all? We will just have one central data warehouse, everything in third normal form; why should I develop data marts with star or snowflake schemas?
I have a very powerful database, and no matter how stupid the select statement is, the performance will always be better. So this is another tendency, and so far in Germany it is still not widely used, except for the traditional cases with Teradata at the telecommunications companies, but many companies are checking right now whether this could be a solution for them, whether they could really save themselves the development of data marts by just having such a database box and putting everything in it.

Let me finish my presentation. Our company was one of the first to go in this direction, into information management, and information management is a great choice, in my view the best choice, to go into. We are a very dynamic organization; you could really start with us and get your first experience with cutting-edge technology in great areas like data integration, data quality, master data management or BI. We also have a special program for students in their last year, or people who have just finished their studies: we send such people from the German universities to Canada to get their first experience there, usually in areas where we in Germany still do not have much experience yet.

Some final advice, from my experience running interviews with people like you. Even at the university I have had this experience: a master's student in computer science said, "Oh, I don't want to program, I don't want to write SQL, I don't want to work with the database, this sounds too technical." I asked, what did you study? Computer science. "Well, in my practical semester I was the project lead, because the professor gave me this role, and I did it very well, and I want to be a project manager right away when I start the job." And it is not so easy. You should not be afraid of programming. You are young people; in Germany I personally will retire at 67, I am now 42, and maybe you will only be allowed to retire at 70. That means if you are now 25 or 30 years old, you have at least 40 years to gather experience. To start right away as a project manager, to fail, to be disappointed and to fall into a depression is not worth it. Just start with programming, start with something technical; you should not be ashamed of doing this, you should not be ashamed of doing a database job or being an ETL developer. You can collect experience for two or three years, and if somebody has the skills to be a project manager, this will be recognized very quickly, without waiting for him to announce "I want to be a project manager"; he will simply be given the task of leading. Of course, not everybody has these skills. For some people it is better to go deep into the technology and become an expert in the database area or in ETL, or to be a technical consultant; somebody else goes in the direction of business analyst, which means he communicates very easily, understands what the people need and can describe it very well, but says "I don't want to be a manager, because I don't want to take care of other people." So my advice is to start with the simple and smart thing: just do something technical. Programming is really a lot of fun, and there you see what the real world is, you see how your colleagues behave and what the problems are; you are one personality, the colleague beside you is a totally different personality, and having this experience you can later be a better manager, because you know what the reality is.

Starting directly as a manager and being disappointed can really set you back, and I can give you a negative example. At this bank we at some point needed to make some of our juniors team leaders, or at least have them help the project manager as team leads for part of the team. Suddenly the project was in a hot phase, the people had to do overtime and work on the weekends, and we had a consultant from our Canadian office who made a big scandal and complained to the company owners that she was being forced to work on Sundays. At the same time, she belongs to a particular Christian denomination in Canada in which it is simply not allowed to work on Sundays; for her this is not just some external rule, it is her own internal law, and she complained and made a big scandal, and this young guy could not manage the situation, because all he could say was "we must work, because we are behind schedule." There are such cases, and instead of going in and being disappointed yourself, first collect experience and see what the reality is; with this experience you can be a very good senior consultant and can define a good concept. For example, it is a real illusion that you could come straight from the university and do strategy consulting, meaning that you go to some top manager and say "you need to build your data warehouse now, like this, which means you need a 500 million euro budget" and he will believe you. What happens with such people is that they end up just writing the PowerPoint slides, while somebody experienced defines the contents; you think you are doing strategy consulting, but what you are actually doing is being the assistant who writes the slides. That is the reality: when it is about consulting, the top managers need to rely on somebody's experience, somebody who really knows the real world, what the problems could be and what the risks are.

Something else usually happens with junior, inexperienced people. You get your first task, and okay, you have a master's degree, so it is supposed to go well, but there is this typical fear of disappointing the manager. You receive a task that should be done in five days and you say, "okay, I can do this." The five days pass, the manager comes, and you are not ready, even though you already saw on the first day that you could not cope with this timetable. The typical rule is that within the first 20% of the time you will have a good feeling for whether you are going to make it or not, and then it is easy: just go and communicate. "I think I cannot keep this deadline; what can we do? Should we take another resource, somebody to help me, or should we give the task to somebody more experienced?" This is what is expected, this is the right thing to do. That means you should be proactive and not wait; the typical junior just waits for somebody to ask "are you ready or not?" instead of saying early "I will not be ready in five days". Don't wait until the fifth day.

And learn SQL. This is typical: we have people who studied here in Braunschweig, and one of them is among our best guys, but we have also had experience with people coming from other universities. They had their lectures on SQL and relational databases, and we have a test that everybody must take, and we saw people who really could not understand a simple select statement and could not tell the difference between WHERE and HAVING, and this is really simple stuff. I can say from my personal experience: I was in the same situation. I also studied databases and some SQL at university, then spent two years programming only C++, and then decided to apply for a job at a company in Bulgaria that was the Oracle specialist. They gave me a test and I failed it, because I had not made the effort to go and read up on the SQL again. It takes just a few hours; even if you don't have books, there are thousands of examples out there, and you can really learn it. For data warehousing and business intelligence this is the basics: if you don't know what aggregation is, how you group, how you filter the groups, you are really lost there. Of course it is not that we don't employ such people; they still have the chance to learn, and we of course look at the potential in the person and know that this can be learned quickly. But it is really disappointing when somebody, even with a master's degree, mixes up WHERE and HAVING, or writes a WHERE for a condition like "balance greater than one thousand or two thousand" where the groups have to be filtered. That is just my recommendation, my advice. Okay, I am actually done; we took more time than planned, I hope it was interesting. Thank you very much.

Thank you so much. Other questions? The floor is open for questions.

Actually, this is independent of the industry. The best data is maybe found at a middle-sized insurance company or bank that has only one source system; the bank or the insurance company is not so big, it has only one SAP solution or one home-grown solution that covers all the cases, so you have a really integrated environment, only one source system, and you get all the data from there. There the data is typically clean, because even if there are problems, they can be corrected very quickly. The dirty-data problems happen in the big companies, when they have hundreds of systems and hundreds of independent subsidiaries, companies like Volkswagen, which keeps buying: now it's Porsche, tomorrow maybe it will be something else. They are integrating the systems, but they cannot do this all at once; it happens over the years, if at all. And it is just normal: if you take the same type of data from different legal entities, they really have different standards, and when they developed their systems they had different requirements, so it is natural that you have dirty data there. But today you could not say that there is one industry where the data is the best. Of course, banking is very strictly regulated; they must do this, and they invest in such projects in data quality and master data management. But even there it is not clean, because you cannot change the operational systems; you do the cleaning in the data warehousing systems, and the operational world stays not so good...
In this course, we examine the aspects of building, maintaining, and operating data warehouses, and give an insight into the main knowledge discovery techniques. The course deals with basic issues like storage of the data, execution of analytical queries, and data mining procedures. The course will be taught completely in English. The general structure of the course is:
Typical DW use case scenarios
Basic architecture of a DW
Data modelling on a conceptual, logical and physical level
Multidimensional E/R modelling
Cubes, dimensions, measures
Query processing, OLAP queries (OLAP vs. OLTP), roll-up, drill-down, slice, dice, pivot
MOLAP, ROLAP, HOLAP
SQL99 OLAP operators, MDX
Snowflake, star and starflake schemas for relational storage
Multidimensional physical storage (linearization)
DW indexing as a search optimization means: R-trees, UB-trees, bitmap indexes
Other optimization procedures: data partitioning, star join optimization, materialized views
ETL
Association rule mining, sequence patterns, time series
Classification: decision trees, naive Bayes classification, SVM
Cluster analysis: k-means, hierarchical clustering, agglomerative clustering, outlier analysis
10.5446/333 (DOI)
So, hello everyone. Welcome to the lecture, data warehousing and data mining. And we kind of are through the data warehousing part, so we can kind of like get rid of that. And for the rest of the term, we will deal with data mining issues and some very exciting algorithms and interesting ways to get more out of your data. And so, today we will start with a short introduction into what business intelligence is and then present the first algorithm which will be frequent item set mining and association rule mining today. So, last time we were talking about how to build data warehouses. That concluded our idea of, well, what a data warehouse actually is in terms of the software that is done. We were talking a little bit about the ETL process, which is the most important process in the life cycle of a data warehouse because the old crap in, crap out paradigm is still valid. So, if the data in your data warehouse is bad, then all the decisions based on that data and all what you can mine from this data is of low quality too. And that's kind of very annoying. So, what you have to do is you have to consider how do you get the data, how is the data transformed into some global schema that you can really use for a lot of applications, which is extensible also for new and upcoming applications. And then you have to look in the transformation phase a little bit into the topic of data cleaning. How do you make sure that the data is really correct and that the data is complete? Because that determines the major quality issues of your data warehouse. And interestingly enough, if you look at some of the big companies today that have huge data warehouses, terabytes and terabytes of data, and you look at the quality of the data that is actually in there, you will more often than not find that the data quality is actually pretty poor. And of course, that is a problem in today's data warehouses in bigger companies that cost a lot of money to clean. Finally, you have to load it. That is usually a process that is done in bulk mode. So you load all the data that has been cleaned, that has been transformed, that has been extracted from the underlying productive sources, and put it overnight or whenever into the data warehouse, and the data warehouse is ready for all the olapqueries that you could think of the next day. Another thing that we were talking about briefly last time was metadata. So how do you describe what data actually is in the data warehouse? How do you keep up with the metadata? And how do you get an understanding about the nature of the data that comprises your data warehouse? Another thing that is very often done these days in terms of metadata is the so-called data lineage or data provenance. So information about where does the data come from? That can be very interesting to see, you know, like if there are some oddities or some things that you just don't understand, looking at the trail of data may very often help to get a feeling how this data was aggregated, and maybe there is something wrong. So this can point you really to interesting insights about your data warehouse. But as I said, today we want to go into data mining rather, and the major term that is needed if we talk about data mining and data warehouses in one lecture is business intelligence, because that is what you want. You want to find information that you don't know yet about your business, about your company, about your customers, about whatever you have in your data warehouse. 
Since you have all the data in the data warehouse, that's a good place to look for it. On the other hand, it's quite a difficult thing to look for that, because you just dump the data in the data warehouse, and there's no way that you find out some hidden relationships or something, but you just have your schema and loads and tons of data. So we will briefly provide an overview of what business intelligence challenges are today, and then start with data mining, and today will be one of the most important algorithms for data mining that is very often employed, especially for marketing purposes, association rule mining. Good. So what is business intelligence? Business intelligence is kind of insights about your company, insights about what you do in your company. And it's very often said that what in your data warehouse actually resides is data, and what you need to make decisions is information. And those are two totally different things. You can generate information from data. Well, actually you have to generate information from data, otherwise it would be guesswork. And the data gives you the support to have some statements about what's going on, and the information makes the data digestible in a way. So you can easily grasp the concepts underlying your things. And there's another step. Many people talk about this pyramid kind of shape. You have a lot of data in your company. You extract information, out of the data, and what is the peak of our little pyramid here? No, decisions is the actions that you take based on what is in the pyramid. Knowledge. Knowledge, exactly. So this is the most refined analysis data that you will have. It's built up on information and gives you some time perspective into it, and some decision support. The decisions that you then take is basically the actions that are supported by your knowledge. So what you basically do is you take the data warehouse, this is here, where all the data resides, the data warehouse, and to get from here to there, you use analytic tools. So this is where OLAP resides, and this is where data mining algorithms reside, working on the base data and generating information from that. And this is basically a manual step, a manual step, putting together information into knowledge, which is basically the work of analysts that just go through all the information and then build strategies from it and find out what you should do to improve the processes of your company or to increase revenue of your company or to increase the happiness of your customers or increase the creativity of your workforce or whatever. You know, like, can be all kinds of stuff. But it's found it, securely found it, by all the data that we have stored in our data warehouse. That is a basic idea. Okay, with that we're going to store again, I see. Same old problem as every year. There we go. Come on, business intelligence, hello. Yeah. So what are typical applications for business intelligence? One very important thing is segmentation, mostly customer segmentation or product segmentation, where you find different groups of customers that you could address by certain, well, advertising campaigns or whatever. You find out how the market is actually segmented and whether your products cater for specific segments or for the market as a whole. And then you have decisions like, well, if I don't cover the whole market yet, why don't I address some other segments or something like that? Then there's the propensity to buy. 
So who are the customers that are willing to buy, that have the money, that are willing to spend the money, if you just give them the right idea, what they want? Also has something to do with advertising. Profitability, of course. Is it worthwhile catering for certain customers or are these customers demanding extremely high quality products for very cheap prices? So your revenues are actually marginal and eaten away by the taxes anyway. So you might not be willing to cater for these customers anymore. Very central thing, fraud detection. Think about a credit card company or something like that. That needs to find out whether the transactions taken on a credit card are possible. Figure for example, you have where the card was used and you find out that the card was used in three hours in three different countries. Maybe that is possible. It's very improbable. So with these things and typical data mining algorithm, you can detect fraud or at least you can generate candidates for fraud detection that you then can look over and find out whether this really was fraud. Same thing, customer attrition. So are the customers leaving? Are things slowing down and the channel optimization? So how do you get the products to the customer? Are you addressing the segments through the right channels? So probably emails is not a very good way of advertising anymore because everybody has spam filters these days. Maybe making some postal things would be much nicer but you can't afford it so much and so on. So find out what your customers really want. Customer segmentation, as I just said, is it basically about the market segments? So are the young people, are the older people, does your product cater for them all? You might need some flashy products for the younger people and some products that are very reliable for the older people or something like that. Also the personalization of customer relationships, which is usually referred to as customer relationship management, you will all have heard of CRM tools that are derigueur in industry today, which just means customer relationship management. So follow up on purchases, find out whether the customer is satisfied with the product. Maybe you can do something to make them even more satisfied. You can handle warranty claims and stuff like that over customer relationship management. Very important part of these days. And of course, via customer relationship management and data mining, you can find out whether somebody always uses the warranty or hardly ever uses the warranty. And then in fulfilling the claims, if somebody hardly ever uses the warranty, well, why would you care? Just fulfill the claim. But if somebody always uses the warranty and has returned some product five times already, there might be something wrong. So this is what you find out why customer relationship management. Second, some propensity to buy. So which customers are most likely to respond to a promotion? How can I target certain customer groups, especially campaign profitability? So if you're planning advertising campaigns, where should I put television commercials? If I have certain customer groups, it might be more interesting to put my advertisement in the evening program. If I want to cater for kids and they should go to their parents nagging about what they want for Christmas. No, I want this new product from Tito's wonderful company. Then put it, well, during SpongeBob or whatever. So the children's television program. Profitability. 
What is the lifetime profitability of my customer? Does it pay off to send him or her a Christmas card? Are they going to buy some car in new cars or did they just buy a car? And I couldn't be interested less because the next five years they will definitely buy no new car or something like that. That's kind of interesting to follow up on customers and find out about the overall profitability. So you can really focus on the profitable customers. It's very often done in banks these days. So you have these key account management where really if some customers, especially profitable, you open up a key account for them and have some people that exactly cater for this customer. Very important in today's business and today's industry. Fraud detection, as I tell, what transactions are likely to be fraudulent. So credit card companies use a lot of that and have very clever ways these days to finding out what is actually fraud and what is a normal use of a card. It has also something to do with a little bit of... What was I going to say? Outlier analysis, yes, and profiling. So find out what somebody normally does and then look for the outliers that are not typical for his or her profile. These outliers could be fraud. You hear the messages on the news every five minutes, you know, like be it Nigeria scam or things like here, like 400,000 pound insurance scam that can be found out. And the good thing about data mining algorithms or business intelligence algorithms and that kind, is that they work automatically. So you can run them overnight and react very quickly to what's happening. Sometimes while it's happening, but at least immediately after it has happened. And that could cut your losses and could make things easily addressable also for police and stuff. Customer attrition, channel optimization. So what customers should I take care for because they are bound to leave or they're kind of behaving in a way that seems they're considering other alternatives to my products or to my services. And the idea is to prevent the loss of high value customers. And if somebody is not profitable anyway, well, your competition is absolutely welcome to those customers, of course. Channel optimization, whether you do TV advertisements, email spamming, or like fanciful banners in the city center or whatever you do, you know, like, I mean, it has to fit to the customer. It has to fit the product. And this is what you find out why business intelligence. So what you basically do is you have the data warehouse at the center of your company. You have some data sources, which is basically productive systems, but can also be customer relationship management systems or key account systems. And the ETL process, ETL, is basically bringing all that data into your data warehouse. And from this data, you have the business users, mainly analysts, that work with the data, that work on the data. And the most important point here are dashboards where you can see things that are automatically generated. Some things are reports like we did with our all up queries, but more often than not, you will find that there are some data mining algorithms that give you some hint what is going on or what is interesting, what are trends you should look at, and what are trends that you can safely forget about, because they are not interesting or they are not. And finally, it goes to the managers or to the decision makers that will develop the strategies or adopt strategies as shown by the analysts. 
And the interesting thing is here in the future component, intelligent systems. So this is where mainly data mining algorithms work on their own directly on the data warehouse and also deliver the data what is normal, what is exciting, what is new to the user interface so the analysts can really see what is happening. The one thing to find out what is happening is automated decision tools. So for example, fraud detection mainly happens automatically. Very often you don't have a human in the loop anymore, but the algorithms run and immediately when your credit card seems fishy, you get a letter, your credit card has been or the account has been closed or has been at least put on ice at the moment. Did you really do these three transactions that look fishy to us? And then you respond, yes, I was buying something at Greenland yesterday or something, or you respond, no, definitely there was not me. And then the process is involved for credit card misuse. But all these closing down the account or freezing the account, sending out the letter for the potential fraud victim, all that can be done automatic. And this is just kind of interesting. You have just rule-based systems that have a certain solution if the case arises fraud or possible fraud, probable fraud. At this point you don't know whether it is fraud or whether it will be perfectly harmless, but better safe than sorry. So you should really consider freezing the account at a very early stage before all the money is gone and then the customer says, well, I wasn't that and you should have detected it. So you have to be reliable for the revenue that was kind of done here. Same goes for loan approvals. Also that is mainly automatically these days, you just hand in what you earn and what you want and what you basically have in terms of securities. And then the decision can be made automatically. Same goes for business performance management. So just getting the indicators of how your business is running is a very important thing. And mostly today based on so-called balanced scorecards, which is an economic system that shows you the key performance indicators of your business. And that can be done automatically. So that can be performed in an automated way directly on the data in your data warehouse. And then you get the balance scorecards and you can see what performance indicators are up and running and what performance indicator should need a little bit of attention. Very good. One of the possibilities to get this to the people, to the decision makers are so-called dashboards or business cockpits. Because they look like the dashboard in your car a little bit. You may have some meters showing whether you're in the green area or whether you're in the red area with your revenues or what you gained or some regression analysis showing whether the trend is upwards or downwards or stagnation at the moment. And you can see all these pictures and derive the information and the knowledge from it that you might need to find out what is happening in your business. Well, so how to do that? That's the kind of interesting thing. How do we get all this wonderful data, all this wonderful information that we want to visualize in our dashboard or in our business cockpit? And the answer, of course, is data mining. Also known as knowledge discovery and databases. There was an early term for that. The life sciences then thought, well, knowledge discovery and databases sounds a little bit too bizarre. Let's just call it data mining. 
You go into the mine and you pick out all the data or the information that you need from your data mountain. Kind of interesting. And this is exactly what it is. You find the interesting information or patterns that occur in your database. Given that the database is so large that you can't do it by hand, that you just don't see it. I mean, if I can look through my account books, it's very easy to find out whether I've been profitable or not. If I'm a multinational company with thousands of holdings, it's not so easy anymore. And you have to look very closely. What you need for having interesting information is that it is non-trivial. So it's obvious that when you earn a lot of money, that things are good. I don't need somebody to tell me that. But maybe somebody should tell me about a certain segment of customers being more and more dissatisfied with my services. Because that is not trivial. It's mostly implicit. So I mean, it's not just a single number, but it's information that puts a lot of numbers together. That I, in some factual connection with each other. And it should, of course, be previously unknown. Because I don't like a system telling me what I already know more of the same. I could have Mackenzie for that. But that is basically what I mean when I'm talking about interesting information. And of course, there are other ways to get to that information. So for example, there's deductive query processing, expert systems, statistical programs that also run. We have a different lecture this semester, which is about expert systems. Actually, it's a knowledge-based systems lecture. And there are a lot of techniques, mostly rule-based or, I don't know, there are a lot of logical ways of addressing information, of mining out interesting information. But that's not what we want. We want purely statistical algorithms running over the data. And then we want to find out what's interesting in our data. The applications of data mining are then, on one hand, database analysis. So find out what is in the data. And of course, decision support. Find out what to do when something happens in the data. And as in the case of business intelligences, all applications that are valuable. There are a lot of more applications, especially in the data, in the life sciences. There's a lot of data around, you know, like they have to find some characteristics or some patterns in this data. They use a lot of data mining. Other things go for text mining. So there are a lot of products at the moment exploiting blog posts or finding out how people think about a certain product by looking at posts on the internet. Or email analysis or whatever. You see a lot of new services at Google every five minutes to find something out that you've done previously. So these are typical data mining applications. The one that we are focusing on is mainly market analysis. You want to know how the market is cemented. How your customers fit into the profiles you address with your products. You want to find purchasing patterns. Is it worthwhile to close down the shop over the weekend? Because nobody buys something over the weekend? Especially your product or whatever. Do you just want to get into the Christmas business and forget about the rest of the year? Cross market analysis. So what happens between different products? How they are related? So maybe upselling, downselling, cross selling would be a good idea. If a customer buys something, maybe he wants the extension or something that fits very well to this product. 
That would be a nice thing to have. Also summary information. So again the topic is reporting. I want to know about what's happening in my business. I want to have the reports. And that of course includes all the statistics. Second thing is corporate analysis and risk management. Finance planning. What is my company worth? What is the cash flow that's happening at the moment? Can I predict some interesting trends? Are the sales going up or down? Is some product running out? Or is some product rather getting hip at the moment? Also resource planning. So what do I have to put into resources? Do I need more of something? Or should I slow down production a little bit? Because the market doesn't take it at the moment. Also very good to know what your competitors are doing. How's their market? How's their segmentation? How's their revenues and their sales? That's kind of interesting. Pricing strategies of course. What is the stuff worth that I produce? Which is not only dependent on what I pay for it. But rather dependent on what the market is willing to give for it. There might be high quality product that the market is just not taking. If they are too expensive. And there might be some, which happens very often actually. Some amazingly cheap product that the market is prepared to pay everything for. So why not take it then? That's basically what you do. The architecture of data mining systems is basically, as I just showed you for the business intelligence, you have the data warehouse that is basically built from the productive systems and resident data in the data warehouse. And on top of that you put a data mining engine. That is the main component. That runs all the algorithms for data mining that you are interested in. Finds the pattern. And then you need something to evaluate the pattern. Very often this is rule based. Very often this is based on some constraints that you can fiddle with. Which is part of the knowledge base. And then you have a graphical user interface to, well get the information across or build nice charts that you can show to your boss. And say, well here's a trend channel that is showing there. Sloping down, you know, like, and we don't want that, do we? We have to do something. Good. The major parts in data mining that we will cover in this lecture are association. We will do that today. Association means that things may be correlated with each other because they are causal. So if one thing depends on some other, well, product that is bought, or some other event that has happened, then those should be considered together, obviously. But how do I know which things are dependent on each other? That is association mining. Second thing, classification and prediction. How do the segments fall apart? How can I predict which segment is stronger or is getting weaker? Very important to find models that kind of allow me to predict a lot of what is going on in my business. And there are a lot of ways to present all that information. So decision-free classification rules, we will go into some of them during the next lectures. And of course, we will also look a little bit into the future with prediction algorithms and find out what will happen next. Cluster analysis is a very big topic in data mining. So you might have different classes and you have to label them. You have to find out how to group your products or your services together that you can manage them more efficiently. 
For example, produce similar products together, or outsource a certain kind of service to some subcontractor, or whatever. The same goes for advertisement: if I know what cluster of people is buying a certain product, I can advertise directly. If I want to address people that are rich, I don't go into the poor parts of the city and distribute leaflets; I will go to the hip suburbs where all the people that are rich and beautiful live, and try my advertising there. Then there is outlier analysis, which I already talked about briefly. Mostly fraud detection and stuff like that. So I just look for the events that are out of the normal. Maybe a Black Friday, some market crash, or something that is definitely out of the normal, better detected early. Maybe some fraudulent use of a credit card, again better detected early. That's rather useful if you have rare events.

Then there's association rule mining: as I said, associations between objects. And the idea is that if objects or products or customers or whatever it is that you're looking at occur together, there has to be some hidden relationship between them. Why are exactly these five people buying that product? What common idea do they have? And the classical application for that is so-called market basket analysis, very often done by the big vendors, the big grocers. Walmart, for example, was one of the very big proponents of market basket analysis. For what reason, actually? So what it actually does is: you have a supermarket and you have your market basket, like this little basket, and you put in stuff and you buy that stuff. And then you find out, for example, some association rule: everybody who buys cheese also buys wine. It's very probable that if somebody buys cheese, he will also buy wine. It's obvious where the rule comes from, because very many people like to have wine and cheese, like as a dessert or something like that. And you will have a certain support for the rule, for example 10% of the customers buy cheese and wine together, and 80% of those who buy cheese will also buy wine. This is a rule that is intuitively clear. The interesting thing is: why should I, being Walmart, want to know that? Special advertising: my beer and wine, no, cheese and wine special. Buy the things together and get 15% off or something like that. Other reasons? Silly as it sounds, that's one of the major reasons: planning where to put your products. If I know there's a relationship between two products, it's much more probable that I get customers to buy exactly these two products, or to look at promotions that I'm doing on these products, if I put them together, very close to each other. Because then people will think: well, here I bought the wine, oh, there's cheese, wonderful, that goes very well together, let's also grab some cheese. This is one of the reasons. With cheese and wine it's kind of a classical example that is very obvious. For example, Walmart has found out that nappies, so Pampers and stuff like that, are very often bought together with? Exactly, with beer. Six-packs. It's not obvious at all. If you think about it, it's kind of like: it's the dads that go Saturday afternoon shopping, and they have toddlers at home, so they can't go out for a beer; they have to care for their toddlers, they have to stay home with their wife. So: I need a lot of beer. Well, put good cold beer next to the Pampers and you're fine, because people will buy it. So this is kind of like just getting things more efficient.
Let's go into the algorithms of association rule mining. Given a large set of data, how do you find out, efficiently and correctly, what the associations are that are valid for your data? We have two basic components. One: the items that I'm selling, the products. Everything that is in my supermarket is a product, so I have I1 as a product, I2 as a product, and so on. And two: what is bought, and I consider each market basket as one transaction. So somebody takes his market basket, his little shopping cart, puts everything on the till, and all the stuff that is bought together by a single customer is one such transaction. The data is very easy to get, because all the supermarkets now have electronic cash registers and scanners for everything that is bought. So at the end of the day, you know exactly that there was a customer. We don't care about the identity of the customer, as long as the customer doesn't use one of these feedback cards, the Deutschland card or whatever they're called, you know, these profiling tools. You know that some customer, whoever he may be, bought these items together today.

An association rule is an implication where you say: somebody who bought these items also bought those items, and the two sides should be different. I mean, I don't want tautologies; somebody who bought wine also bought wine, ha ha, that's not very novel. But typical things are: somebody who bought wine also bought cheese; somebody who bought Pampers also bought beer. This is what we're trying to find out. And if we take the items sold in the store, I1 might be beef, I2 chicken, I3 cheese and so on, everything that you sell. And then you look into the baskets of people at the cash register and say: well, there was one guy who bought beef, chicken and milk together; and there was a second guy who bought beef and cheese; and the third one bought cheese and wine, and so on. Okay, that's the basic idea behind it. And now you can find an association rule: if somebody buys beef and chicken, then it's very probable that he will also buy milk. Rules of that type allow me to put the fridge with the milk next to the beef and chicken. Good.

Of course, I could make thousands of such rules, because people will buy anything they need, you know. And I might come up with: oh, this guy bought shoe polish or something like that, and he bought wine, so wine and shoe polish must have something in common, there must be a hidden connection. Well, the hidden connection might just have been that he needed both things, which can always happen. So the rules can be rather weak, like the shoe polish and wine example, or they can be strong. And what makes them strong? Basically, that it occurs very often that people buy these things together. If I consider something like shoe polish, people will buy that when they need it, when they need a new can; it doesn't really matter what they buy it together with, it's just that every two months you will buy a box of shoe shine, and that's about it. There are two basic measures for the strength of a rule, which are the support and the confidence that you have in the rule. The support deals with the data: how strong is the data? And the confidence deals with the semantics. Support basically means: does it happen very often that people buy shoe polish, or that people buy wine, or is it just a one-off? I have this one guy: I've never sold shoe polish in my shop, and there comes one guy and he buys shoe polish and a pineapple. Wow, there's a new rule.
No, obviously there's no new rule. It has to happen a certain number of times that people buy shoe polish before I can state some sensible information about what is often bought together with shoe polish. That is basically the support. So the support of a rule is the percentage of transactions that contain both parts of the rule; then I can say it happens together. Basically this is the probability that all these items occur in a single transaction: how often does it happen? Well, the support for some rule is just: you take all the items in X and in Y, you count the number of transactions where all the items in X and Y are bought together, and you divide it by the number of transactions that you did during the whole day, or that you did in your shop. Then you find out how often people buy these products together, normalized by how often people buy in your shop anyway, and you find out whether something is interesting or not.

The confidence of a rule deals with the semantics: X is somehow the cause of buying Y. So: if somebody buys cheese, it's very probable that he also buys wine. How about the other way around? If somebody buys wine, is he also bound to buy cheese? Well, it could differ, couldn't it? They could be bought together very often, but one might be the cause for the other. So if I buy wine, cheese would go very well with it; but there might be a lot of teetotalers that don't like wine at all but rather like cheese, so they're bound to buy cheese and not buy wine. So the rule seems to work rather in the direction that somebody who buys wine is also bound to buy cheese, than in the direction that somebody who buys cheese is also bound to buy wine. How do I find out? I count the occurrences where wine and cheese have been bought together, and look how many times the product on the left-hand side has been bought at all. So if I wanted to state the rule "if somebody buys cheese, he's bound to buy wine", I count the number of times wine and cheese were bought together, and divide that by the number of times cheese was bought at all. Then I find out how often it really is that they are bought together, as opposed to the cheese being bought on its own. So the confidence of the rule is basically: take the number of times everything is bought together, so basically the support, and divide it by the number of times that the cause, the left-hand side, has been bought. Yes? Clear? Good.

Well, the rationale behind support and confidence is basically this: if the support is too low, we have no way of saying this is a typical rule, that it really has statistical relevance. It may be just like our shoe shine and pineapple idea, you know; that happened once and nobody ever bought shoe shine again. Then this is not a rule, this is stupid. So low support should be avoided. The same goes for low confidence: if the confidence is low and people buy anything with cheese, then putting the wine next to the cheese is as random as putting shoe shine next to the cheese. It doesn't help you, because you don't have confidence in your rule, obviously. Association rule mining as an algorithm now has to discover all associations in the transactions that you have, with a minimum support and a minimum confidence. So you set some values and say: well, I'm only interested in item combinations that show up in at least 10% of my business, I'm not considering shoe shine, get me to the big players; but I only want rules that are reasonably believable, maybe with a confidence over 80% or something. Well, that's what we do. So let's try it.
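Written out compactly (this is just a restatement of the two definitions given above, with T denoting the set of all transactions), the two measures are:

```latex
\mathrm{support}(X \Rightarrow Y) \;=\; \frac{\left|\{\, t \in T \;:\; X \cup Y \subseteq t \,\}\right|}{|T|},
\qquad
\mathrm{confidence}(X \Rightarrow Y) \;=\; \frac{\mathrm{support}(X \cup Y)}{\mathrm{support}(X)} .
```

A rule X => Y is reported only if its support reaches the chosen minimum support and its confidence reaches the chosen minimum confidence.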
I have my transactions here: seven customers came in today. The first one bought beef, chicken and milk. The second one bought beef and cheese. The third one bought cheese and boots. And now you see it's not too easy to see all the associations in these transactions, is it? Because it could be possible that somebody who buys beef always also buys chicken and milk, one possible rule. It could also be possible that somebody who buys beef and chicken always buys milk. How about: somebody who buys beef and milk always buys chicken? There are lots of possibilities, and I could put all the items that were sold together in any arbitrary order. That is an exponential explosion of possible rules. So what do I do, calculate the support and the confidence for every one of these combinations? Madness. I have cheese, boots, beef, chicken and milk here, and clothes; six things. Walmart has 20,000 in a single supermarket. Try building all the rules out of 20,000 objects. Madness.

So let's say we have a minimum support and a minimum confidence. We're only interested in things that occur in more than 30% of our sales, and we're only interested in rules that have a minimum confidence of 80%, so that they are rather believable. One example would be a rule I just invented: whoever buys chicken and clothes also buys milk. How often does that occur? Well, how often do chicken, clothes and milk occur together in my transactions? Once, twice, three times. This one is only chicken, this one is only milk, this one is chicken and clothes, and that's it. So these do not count, because they don't contain all the items that I'm interested in, but those three do count, and I have three out of seven transactions dealing with it. Three out of seven is 42%, which is definitely higher than my minimum support of 30%. So this obviously beats the minimum support. What is my confidence? Well, the rule states that whoever bought chicken and clothes will also buy milk. So let's find out who buys chicken and clothes: chicken and clothes here, here, here, and that's it. In how many of these cases where somebody bought chicken and clothes did they also buy milk? One, two, three. Three out of three. This is 100% confidence: it never happened that somebody bought chicken and clothes without buying milk. So the rule seems to be causal. It is not, because I invented it, but still: it has a confidence of 100% and a support of 42% in my little supermarket here. Everybody clear? Good.

Now, we could have all kinds of rules: somebody who buys clothes also buys milk and chicken; somebody who buys milk also buys clothes and chicken; blah, blah, blah. I can make thousands of these rules. How do I find out efficiently which association rules are above the minimum support and which association rules beat the minimum confidence? Can anybody think of a way? Hmm. Well, if you had the idea, you would have invented one of the most prestigious algorithms in data mining, a very valuable algorithm. We will see the algorithm in a minute, and it's actually quite simple once you have seen it; it's actually very believable. Of course, this kind of association rule mining is a rather simplistic view of the shopping basket. We're not considering the quantities in which things are bought or what price is paid, and we would be affected by special offers, where you say, okay, this is amazingly cheap today, so it will be bought with everything this week, or something like that.
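To make these numbers checkable, here is a small Python sketch; it is my own illustration, not code from the lecture. The first three baskets are the ones read out above; the remaining four are filled in from the standard textbook version of this example (an assumption on my part), chosen so that the counts match the ones just derived. It prints a support of 3/7 ≈ 0.43 and a confidence of 1.0 for the rule {chicken, clothes} => {milk}.

```python
# The three baskets read out in the lecture, plus four more assumed from the
# standard textbook version of this example (not read out verbatim above).
transactions = [
    {"beef", "chicken", "milk"},
    {"beef", "cheese"},
    {"cheese", "boots"},
    {"beef", "chicken", "cheese"},
    {"beef", "chicken", "clothes", "cheese", "milk"},
    {"chicken", "clothes", "milk"},
    {"chicken", "milk", "clothes"},
]

def support(itemset, transactions):
    """Fraction of transactions that contain every item of the itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(lhs, rhs, transactions):
    """support(lhs union rhs) divided by support(lhs)."""
    return support(lhs | rhs, transactions) / support(lhs, transactions)

lhs, rhs = {"chicken", "clothes"}, {"milk"}
print(support(lhs | rhs, transactions))    # 3/7 = 0.428..., above minsup 0.30
print(confidence(lhs, rhs, transactions))  # 3/3 = 1.0, above minconf 0.80
```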
There are lots of problems with the basic thing. But once you define your transactions and your items in the right way, you will find that it is definitely valid. It doesn't matter how you mine out these rules. You can always compute the minimum confidence and the minimum support. It doesn't matter what algorithms you use. The rules are the same. There is a certain set of rules that is above minimum support and minimum confidence. The major algorithms we will look at today are these two: the Apriori algorithm and mining with multiple minimum supports. There are also so-called class association rules, which we will go into very briefly in the detour at the end. The best known algorithm of all definitely is the Apriori algorithm. A very efficient way of getting things done properly. It basically consists of two steps. The first one is so-called frequent item set mining. So I have to find out all the item sets that are above the minimum support. And the second step is generating, from these frequent item sets, candidate rules that have a chance of being above the confidence. These are the candidate rules that will be generated and only these. We will not look at all rules that are possible and still get the correct result. So we will never miss a rule. The basic idea is that when somebody has a frequent item set that is above the minimum support, all the items contained in the set have to be frequent themselves. So if I have a minimum support of, I don't know, 10% or something, then I can say if milk, cheese and bread is a frequent item set, then all of these three items individually have to occur at least the number of times they occur together. On top of that, they can occur individually. But their support is definitely higher than or equal to the support of the bigger set, and thus above the minimum support. Okay? Understood? So this is what's called the downward closure property. Any subset of a frequent item set is also a frequent item set. If I take chicken, clothes and milk, then all parts of it occur at least in these three instances and can occur some more times. Okay? So this occurs three times. What about the others? How about chicken and clothes? Chicken and milk? Again, it occurs three times, but chicken and milk even occurs a fourth time without the clothes here. But it definitely occurs at least as many times as the big set occurred. And this property is amazingly useful because it allows us to steer the candidate generation from frequent item sets of low cardinality to frequent item sets of high cardinality. And that means we won't have to try all the different rules that are possible, but only those that definitely have a frequent item set that works. So how do we find frequent items? We start by looking at one element sets. Okay? Once we have found all one element sets, how can two element sets be built? Well, basically by putting together one element sets that are frequent. Because if part of it were not frequent, the downward closure property wouldn't hold. Okay? So the bigger set cannot be frequent. That's the basic idea. For each iteration, I look at the generated candidates of the last iteration and put them together such that one more new item gets into the set. That means I go from a k-1 item frequent set to a k item frequent set. Good? And as a brief optimization, I will assume that the items are sorted in a lexicographic order. So I have a total ordering of the items such that I can very easily find out what the intersection of item sets is.
Because if they would be kind of like all twisted round, I will always have to look when comparing two item sets what is contained in both of them. If I order them alphabetically or lexicographically, I will just have to look at the beginning. And once they start diverging, then they cannot have a large intersection anymore. Good. Let's start with finding all the item sets of size one, which basically is all products that are sold more often than the minimum support. Easy. Just go through your transaction list and find them. Then I need the next step. I need to build two-dimensional, three-dimensional, four-dimensional, five-dimensional frequent item sets. How do I do that? Well, I take the one element item sets, put them together to two element item sets. There's a candidate. I still have to check in the data whether this really is a two element item set. Because consider I have an item I1 and an item I2. Okay. And these are my transactions. It doesn't really matter what they were bought together with, you know, like could have been bought with all different things. Consider now I have a minimum support of two. Then both I1 and I2 are frequent items. But is I1 and I2 a frequent item set? In my little example here. Is it? Exactly. It's not. Those items individually fulfill the minimum support, but the intersection in the transactions is empty. So they are never bought together. Good. So we find the candidates that are possible and now we have to see which of them are actually valid. How do we find the candidates? This is the joint step. We put together all the sets of cardinal gk-1 such that one new item enters the set. And it's a very efficient thing to do because we have ordered the things lexicographically. So they have to have the same head and in the end one single item should be differing. Because if I take those two sets and put them together, then I get a set of cardinal gk. The head has to be the same. The last element may differ. Putting these two together results in, of course, the same head. But then two different items here. Again, sorted lexicographically. And the cardinality of Ik is k. Everybody sees that? Very efficient to do. And that is important for this algorithm, of course. Good. Now I have a candidate. I have to look whether this candidate is valid. Just go through the data and look if it fulfills the minimum support. So all the candidates that do not respect the downward closure property, what does it mean? That all its subsets are frequent items. Do I know the subsets? Well, yes. Because in the last step, I calculated the k-1 frequent sets. If there is a k-1 set in any of my candidates that have k products, then I can immediately prune it. I can immediately cut it out, because then the downward closure property would be heard. Clear? Let's consider an example. It makes it more easy to see. So, assume we have worked our way up to three item sets. And these item sets are 1, 2, 3, 1, 2, 4, 1, 3, 4, 1, 3, 5 and 2, 3, 4. And now I want to put together four items, item set. So I want to go from F3 to F4. I have to join the candidates. How do I join candidates? Well, I look at the first items, and those with the same first items can be joined together, because having different items in the last place would create a new item set with four. Entries. So what I have to do, I start here. And look, what tuples could this be joined with? This is 1, 2, this is 1, 2. So definitely, here's a join. This is 1, 2, this is 1, 3, no join. 1, 3, 2, 3, no join. So these are invalid. 
Putting this together results in a four tuple set. Clear? Good. Let's go to the next. 1, 2, 4, 1, 2, no, no, no. Doesn't create something new. Next, 1, 3. Oh yes, 1, 3 here, 1, 3 here. So this creates 1, 3, 4, 5. Okay? Doesn't create anything. Next, 1, 3, or it did that. 2, 3 cannot be joined with anything, because there's no 2, 3. Okay? So I get two new candidates from my 5, 3 item candidates. Right? And those are the only candidates that could be frequent item sets. Or there could be no item set, I don't know, having a 6 in it or something like that, because 6 is not a frequent item. Good? Well, now I have to see whether those actually do the trick. Well, how can I prove they do the trick? I look at all the possibilities to get 3 numbers out of them. So I could have 1, 2, 3, 1, 2, 4, 1, a, m, damn it. 1, 3, 4, 2, 3, 4. These are all 3 element sets that are part of the 4 element sets. And I have them here. Do they exist? Are they frequent items? Well, let's look about it. 1, 2, 3, 1, 2, 4, 1, 3, 4, 2, 3, 4. We have them here. And for your convenience, I put them here. Okay. 1, 2, 3, got it. 1, 2, 4, got it. 1, 3, 4, got it. 2, 3, 4, got it. All the subsets of my bigger frequent item set are itself frequent item sets. The downward closure property holds. So this is definitely a frequent item set. Let's look at our second candidate. 1, 3, 4, 1, 3, 5, 3, 4, 5, 1, 5, 4. Can all be done. Look at it. 1, 3, 4 is there. 1, 3, 5 is there. Those two are not there. They are not frequent item sets. Downward closure property is heard. Prune it. It's not a valid frequent item set. Okay? So the candidates for the frequent item sets is just a single item. Of course, if we do that, we have to scan through the list times and again. Let's look at an example with a minimum support of 0.5. We have four transactions here, which means I need two occurrences to make 50%. Okay? Well, let's look at the individual items. Here's item 1. Here's item 1. So I noted two occurrences, which is minimum support of 50%. Item 2. Here's item 2. Here's item 2. Here's item 2. So I noted 3 for item 2. Minimum support of 50% done. Item 3. Here's item 3. Here's item 3. Here's item 3. Here's item 3. Note 3. Now it's kind of getting bizarre. Item 4. Oh, only one occurrence. Good. Item 5. 1, 2, 3. Okay? Now, these are the candidates, which actually has the minimum support of 50%. Everything that occurs two times or more. So the 4 is cut out. Clear? Good. Now, I have to join them together. Well, joining one element sets to two element sets is very easy because they fit all together. They have all the same beginning. 1, 2, 1, 3, 1, 5, 2, 3, 2, 5, 3, 5. The 4 does not occur anymore because it in itself is not a frequent item set. I'm actually closing down the space here, the search space. Still getting the correct result. Makes the algorithm efficient. Good. Now I have to look at what actually happens. Well, in this case, everything belongs to F1. So there's nothing pruned yet because all these things are frequent item sets. I don't have the 4 in it. And now I have to scan through the things, which of these candidates actually have a minimum support of 50%. It's all possible that they have a minimum support of 50%, but where does it actually occur? So let's look at 1, 2. 1, 2 happens once. Okay, no to 1. 1, 3. Here's 1, 3. Here's 1, 3. Happens twice. 1, 5. Here's 1, 5. Happens only once. Okay? You see how it's working. Calculate the minimum support. You need 2 or more. 1, 5. Definitely not in the game. 1, 2. Definitely not in the game. 
All the others are frequent item sets of size 2. Now, how do we join them to item sets of size 3? Well, can I join something with the 1? No, I don't have any 1 anymore. Okay? Can I join anything with the 2? Yes. Here's a second 2. Join them. 2, 3, 5. Okay? Can I join anything with the 3? No, there's no 3 to join. Good? Only single candidate for 3 items, frequent item sets that I have to look through. Okay. How about the subsets? 2, 3, 5. 2, 3 is here. 3, 5. Oh, yeah. 2, 3 is here. 3, 5 is here. 2, 3, 3, 5. 2, 5 is here. Okay? All there. No need to prune it. Is it really supporting? Well, let's look about it. We have 2, 3, 5. It's a valid candidate, as we said. 2, 3, 5 happens here. Happens here. That's it. 2 occurrences, support of 50% holds. Definitely is in the set. Can we join that with something? Well, no, we don't have anything anymore. I considered 5, 1 element item sets. I considered 5, 2 element item sets. I considered a single 3 element item set. That's much less as the combination of the 5 items I have here and all the subsets you can gain from that. Okay? Very easy, very efficient. Everybody clear about the step 1 of the apiory algorithm? Good. Then let's make it a little bit more difficult. No, not at all. Let's go to step 2. In step 2, we will generate rules from these frequent item sets and evaluate the confidence that we have. Well, the frequent item sets have to be distributed because a frequent item set is kind of 1, 2, 3. An association rule looks like who bought 1 and 2 also bought 3. So there are several ways to split the items in the frequent item set to gain association rules. So what we basically do is we distribute the items in the frequent item set such that we need to support how often do the items occur together by divided by how many transactions do I have. We need a confidence which is basically how often do the items occur together divided by how often does the body of the rule occur on its own because if it occurs on its own, it's not the cause for why. And this is what our confidence wants to measure. Basically, this is support y by support x. This is the confidence that we want to calculate. So what can we do? We have frequent item set 2, 3, 5 and it has a support of 50% as we calculated. What does it have in terms of subsets? We have 2, 3, 2, 5, 3, 5, 2, 3, 5. All of those could be the body of the rule and the head of the rule would follow from what is missing. If the body of the rule is 2, 3, then of course the 5 is missing, would be put in the head of the rule. We can calculate the supports for all the different things like we did in the last step. For example, 2, 3 has a support of 50%, 2, 5 has a support of 75%, and so on. Now we generate all the rules that are possible. 2, 3 makes 5, 2, 5 makes 3, 3, 5 makes 2, 2 makes 3 and 5, 3 makes 2 and 5, and 5 makes 3 and 2. Now we need to know the confidence. The confidence is basically the occurrence of all items divided by the occurrence of the items that make the body of the rule. Easy to calculate. For example, how often are 2, 3 and 5 bought together? 2, 3 and 5, 2, 3 and 5, 2 times. How often are 2 and 3 bought together? Also 2 times, meaning we have a confidence of 100% that this rule is correct. 2 and 3 are never bought unless also 5 is bought. It might be a cause. Very high confidence. 100%. Do this for all the others and we find we have actually two rules with 100% and all the other rules have a confidence of 2 third, which for our toy example here is very high. 
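Putting the two steps together, here is a compact, deliberately unoptimized sketch of the Apriori idea in Python. The four transactions are reconstructed from the counts used in the example above; the minimum support of 50% is the one from the example, while the minimum confidence of 100% is my choice so that exactly the fully confident rules get printed.

```python
from itertools import combinations

def apriori(transactions, minsup):
    """Level-wise frequent item set mining (plain Apriori, no optimizations)."""
    n = len(transactions)
    sup = lambda items: sum(1 for t in transactions if items <= t) / n

    # Level 1: frequent single items.
    items = sorted({i for t in transactions for i in t})
    frequent = [frozenset([i]) for i in items if sup(frozenset([i])) >= minsup]
    all_frequent = {fs: sup(fs) for fs in frequent}

    k = 2
    while frequent:
        # Join step: two frequent (k-1)-sets that agree on everything except
        # the last item (items kept sorted) are merged into a k-item candidate.
        prev = sorted(tuple(sorted(fs)) for fs in frequent)
        candidates = set()
        for i in range(len(prev)):
            for j in range(i + 1, len(prev)):
                if prev[i][:-1] == prev[j][:-1]:
                    candidates.add(frozenset(prev[i] + prev[j][-1:]))
        # Prune step: every (k-1)-subset must itself be frequent.
        candidates = {c for c in candidates
                      if all(frozenset(s) in all_frequent for s in combinations(c, k - 1))}
        # Count step: keep only the candidates that reach the minimum support.
        frequent = [c for c in candidates if sup(c) >= minsup]
        all_frequent.update({c: sup(c) for c in frequent})
        k += 1
    return all_frequent

def rules(all_frequent, minconf):
    """Step 2: split each frequent item set into body -> head, keep confident rules."""
    out = []
    for fs, s in all_frequent.items():
        for r in range(1, len(fs)):
            for body in map(frozenset, combinations(fs, r)):
                conf = s / all_frequent[body]
                if conf >= minconf:
                    out.append((set(body), set(fs - body), s, conf))
    return out

transactions = [{1, 3, 4}, {2, 3, 5}, {1, 2, 3, 5}, {2, 5}]
for body, head, s, conf in rules(apriori(transactions, minsup=0.5), minconf=1.0):
    print(body, "->", head, f"support={s:.2f}", f"confidence={conf:.2f}")
```

Besides the two 100% rules from the set 2, 3, 5 discussed above, the smaller frequent item sets contribute a few fully confident rules of their own, for example 2 implies 5; with a minimum confidence of 80% the two-thirds rules would still be filtered out.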
Usually you won't get such good rules in association rule mining. The confidence is usually much lower. And the support for all the rules is kind of the same, of course, because they are all generated from this single set. Clear? Easy to calculate. You do that for all the different sets that you have, all the frequent item sets, you get your association rules, only considering a minimal number of candidates. Good. Break? Okay, let's reconvene at, well, 5 past. Good. So, let's cut to the chase. To summarize the algorithm, we want association rules of that type and we need to know the support for the body of the rule and the support for the whole frequent item set. And of course, we already got these values when calculating the frequent item sets. So testing the confidence is not a problem, and the support we know anyway. For every step of the algorithm, for every extension of the frequent item sets, we just made one pass through the data. So if our largest item set is of size k, we pass over the data only k times. Given that there are exponentially many association rules, that's quite efficient. It can be done in linear time. That's kind of nice. The mining that we're using exploits the sparseness of data and high minimum support and minimum confidence thresholds. It works especially well because the frequent item sets will break down nicely and the candidate generation can be restricted to only a few ones. On the other hand, it's kind of interesting sometimes to focus on rare items. So what about the shoe polish? I will never get any rules with shoe polish because it's not bought too often in the supermarket. At least when compared to milk or to yogurt or whatever it is, you know, like the things that people buy every day in the supermarket. They buy shoe polish once a month or something. That's different. So all my rules will be considering milk and yogurt and bread and whatever. And no rule will consider shoe polish, shoe shine, because it's just a rare item. What can we do about that? That is a variant of the Apriori algorithm. So basically it's mining with multiple minimum supports. So you say, well, the minimum support is not bound globally to the whole set of items in my shop. I don't have a global minimum support, but for every product, I have a special minimum support that reflects how often this product as a whole is bought. Good. Sounds easy. Poses some problems. Well, the basic point is that it really introduces the rare items into the association rules. If you have, for example, cooking pans or frying pans or something like that, and you are sure that they are bought much less frequently in your supermarket than bread or milk, just lower the minimum support for these products, and they have a chance of getting into the frequent item sets, and then you're done. This is exactly the rare item problem. If the minimum support is set too high, you will never find rules involving these items. If you need a rule that involves both frequent and rare items, and you use a global minimum support, then you would have to set it very low, generating a lot of rules about bread, sugar, milk, blah, you know, like everything. Because everything is over that minimum support basically. It doesn't help you either. A global support cannot do the trick. You need individual supports. That is what I'm stating here. If you have a minimal support for each item, then you need to find out what the rules are. Making a rule of shoe shine and bread is not very reasonable.
Because one is a frequently bought item that will very often occur without the shoe shine. The confidence of rules like that is definitely very low. You don't want them. How about things that are also rare? Well, that could work. So maybe people buy shoe shine with laces or with shoes. Yes. Right. Very rarely. But the confidence could be very high. So what do you basically do? You restrict the minimum of the frequent item sets to those where objects have not too far diverging minimal support. You don't want bread and shoe shine, but you might want shoe shine and shoes. Or you might want shoe shine and caviar. Both things are very rarely bought. There could have something to do with each other. Probably don't, but still. Both rare items, and this is what you do. You basically look at the maximum support for any item in your set. And you look at the minimum support for any item. If they diverge too much, if you have one that has a minimum support of 50%, and one that has a minimum support of 10%, just don't consider. It's not a sensible rule. Good. This is basically what you do. Once you have chosen your frequent item sets, every item in the frequent item set may have a different minimum support. What do you do for the rules that you generate? The basic idea is take the minimum of support values that you have, and that is the support value for the whole rule. For example, if you have the user specified minimum values for bread, shoes and clothes, 2%, 0.1%, 0.2%, and we consider it works. Like 2% is not too far from 0.1% or 0.2%, it doesn't diverge too much. And I could have a rule, closes, everybody who buys closes also will buy bread, has support of 0.15% and a confidence of 17%. But how about the minimum support? Closes and bread have 0.2% and 2%. This rule doesn't get the minimum support value. On the other hand, if you have closes and shoes taken from the same frequent item set, so the support and the confidence are the same. But this time, the 0.15% work because it's larger than 0.1%. Depending on what items are in your rule, you define the minimum support for the rule as the minimum support of all the items in the rule. The problem with multiple minimal supports is that the downward closure property breaks. And why is this? Consider we have four items in the database, 1-3-4, and have different minimum supports, 10%, 20%, 5%, 6%. Then we could again join them. For example, 1 and 2 has a support of 9%. The minimum of 10% and 20% is 10%. So this is not a frequent item anymore. It doesn't cut the mustard. What is this 1-2-3? Well, if we take 1-2-3, we get 10%, 20%, and 5%. Now the minimal support for this set is 5%, not 10% as was the case here. So this set has a minimal support of 10%. This has a min-sub of 5%. What I know from the downward closure property is that the frequency of 1-2-3 definitely is smaller or equal than the frequency of 1-2. But now that I also have a smaller min support, it could be sufficient. And if we wouldn't have created the 1-2, we would never have thought of the 1-2-3 with our a priori algorithm. We have to work on that. That's a little bit difficult. Basically, we will this time sort all the items again, but not according to their lexicographic order, but according to their minimal support values. So we have a total order of items. The smallest minimal support starts and then the minimal support grows. What is the idea behind that? If we do so and then take the beginnings of the lists of items and frequent item sets, the minimum will be the same. 
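In code, the two checks introduced above — the rule-level minimum support as the smallest MIS among the items involved, and the support-difference constraint — could look roughly like this. The MIS values for bread, shoes and clothes are the ones from the example; the 10% spread threshold and the actual support numbers are my reading of the figures used later, so treat them as assumptions.

```python
# Per-item minimum supports (MIS) from the example above.
MIS = {"bread": 0.02, "shoes": 0.001, "clothes": 0.002}
PHI = 0.10  # assumed maximum allowed spread between actual item supports

def meets_rule_minsup(itemset, rule_support):
    """A rule (or item set) must reach the smallest MIS among its items."""
    return rule_support >= min(MIS[i] for i in itemset)

def within_spread(itemset, actual_support):
    """Reject combinations of items whose actual supports diverge too much."""
    sups = [actual_support[i] for i in itemset]
    return max(sups) - min(sups) <= PHI

print(meets_rule_minsup({"clothes", "bread"}, 0.0015))  # False: needs min(0.2%, 2%) = 0.2%
print(meets_rule_minsup({"clothes", "shoes"}, 0.0015))  # True:  needs min(0.2%, 0.1%) = 0.1%

actual = {"bread": 0.21, "shoes": 0.0015, "clothes": 0.002}  # assumed actual supports
print(within_spread({"clothes", "shoes"}, actual))  # True:  difference of 0.05%
print(within_spread({"clothes", "bread"}, actual))  # False: difference of about 21%
```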
It can only happen with one item that is taken out that will change it. The multiple minimal support algorithm is a straightforward extension of the a priori algorithm. Again, we have as a step one frequent item set generation. Only that the candidates are now generated with respect to the new multiple minimal supports. And we need a slightly different pruning step because we still have to consider some of the sets that don't make the minimal support. But maybe they could be useful in some other sets, in some bigger sets where they will meet the minimal support. In step two, the rule generation works exactly as in the a priori algorithm. So what do we do? For the frequent item set generation, we take the set and we take the minimum support for each. 10%, 20%, 5%, 6%. Now we order the item such starting with the lowest minimal support going to the highest. So 5% here, item 3 is the first one. Item 4 is the second one. Item 1, 10% and item 2, the 20% comes last. Just ordered. Nothing happened. Now I scan the data and count each item. 3 occurs 6 times, 4 occurs 3 times, 1 occurs 9 times and 2 occurs 25 times. So this is a very frequent item. Assume we have 100 transactions. So the support for 2 is already 25%. Support for 3 is 6%, 4 is 3%, 1 is 9%. Good. Having said that, we go through the items and find the first item that meets its minimal support. And this item is the seed for generating joint partners. Let's do it. So we said start with 3 and the minimal support for item 3 is 5%. But we have 6 occurrences in 100 transactions makes 6%. So that is valid. Okay. Next one. 4. We have 3 occurrences in 100 transactions. The minimal support for 4 was 6%. Arc. Doesn't work with 4. Okay. The 1. For the 1, we need 10%. We have 9. So it doesn't meet its minimal support. But it meets the minimal support of the first item. So it's a valid partner. It's not in itself a frequent item. But it's a valid partner together with a rare item. Okay. So we use it here to build a new frequent item set. Okay. We do the same with the 2. Has a minimal support of 20%. Happens 25 times. Not only beats its own support, but also specifically beats the 5% here. Good. So these are 3 candidates. 3, 1 and 2. And now we will find out that the item set 1 really has a support that is lower than itself and should be excluded. So this is not a candidate for 1 element set. So if it's not a candidate for 1 element set, why do I consider here the minimum support of the first item that I started with? Well, because the 1 could be interesting for building higher order frequent item set. Though in itself it is not a frequent item set. And how would it be interesting? Well, I could combine it with the 3. It has 9%. The 3 has a minimum support of 5%. Okay. So it's above the minimum support for the 3 if it's put together with it. Right? This is why we keep it. Okay. Well, now we have to do 2 element sets. We have found out that item 2 and 3 meet their own minimum support. So they can be mixed with others. And at least 1 is kind of like not a good candidate, but still a valid candidate. And fear for was definitely out of the race. Okay. It doesn't even meet the lowest standards. Let us assume we don't want any rules where the spread is too far. Okay. Then we take the first evidence and now use the candidate set. So 1, 2, 3 as elements and not those meeting their actual minimum support because of the downward closure property. Okay. So we know the 4 is definitely out of the game. Those 2 are good contenders. 
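Before finishing the example, here is a rough sketch of just this first pass of the multiple-minimum-support variant in Python, using the counts above: order the items by MIS, find the seed item, and keep near-miss items such as item 1 around as join partners.

```python
MIS    = {3: 0.05, 4: 0.06, 1: 0.10, 2: 0.20}  # per-item minimum supports
counts = {3: 6,    4: 3,    1: 9,    2: 25}     # occurrences in n = 100 transactions
n = 100

order = sorted(MIS, key=MIS.get)                 # ascending MIS: 3, 4, 1, 2
sup = {i: counts[i] / n for i in order}

# The seed is the first item (in MIS order) that meets its own minimum support.
seed = next(i for i in order if sup[i] >= MIS[i])                     # item 3
# L keeps every item that at least reaches the seed's minimum support.
L = [i for i in order[order.index(seed):] if sup[i] >= MIS[seed]]
# Only items that meet their own minimum support are 1-element frequent item sets.
F1 = [i for i in L if sup[i] >= MIS[i]]

print(L)       # [3, 1, 2] -> item 1 misses its own 10% but stays as a join partner
print(F1)      # [3, 2]
print(4 in L)  # False -> item 4 does not even reach the seed's 5%
```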
And this one is not a frequent item set item. But is a contender for joint. Okay. Still I don't have to join with the 4. So what do we do? We have the support of 3, which is obviously larger than the minimum support. And we could form level 2 candidates with the 3. 2 possibilities, 3, 1, 3, 2. Okay. 3, 1 is a candidate. The support of 1 is larger than the minimum support. And the support of 3 minus the support of 1 is smaller than the 5. Because this has a support of 9. This has occurrence of 6. That's quite close together. Okay. Let's look at 3, 2. Well, 25 and 6. It's very far apart. It differs by 10%. More than 10%. So cut it out. Okay. Doesn't have to do anything with the support or the minimum support or stuff like that. It's only because of the divergence rule. But see that we have created a candidate matching all the minimum support requirements by some item that in itself is not a frequent item set. Okay. Understood. We do the things, we take the next seat from L. So 1, 2 is the next probability. The support of 1 is smaller than the minimum support. So we cannot use that as a seat anyway. And after the candidate generation is completed, we have only a single candidate, 3, 1. Now we look over the data again, read the transaction list and find out that the support for 3, 1 is 6, which is larger than the minimum support of 3 and 2. Minimum support of 3 was 5%. Okay. So this is 5% here. This is a valid candidate. This is a valid frequent item set. Okay. That's basically how it works. If we want to have bigger sets, it works exactly like before. But we have to look at this clause, the divergence clause. Okay. This is another check. Otherwise we will do exactly the same thing, which means look at the front. If the heads of the two lists are identical up to the last thing, these make the new joint partners. Okay. Good. Then we need the prune part. So now the point is really that we cannot go to the k-1 subsets to find out whether this is a valid set, because they might have been pruned away due to a higher minimum support. The downward closure property doesn't hold anymore. Okay. Still, if we find all the subset, everything is good. We're still valid. But it could be that the head of the rule, the first item, having the smallest support, is missing in the rule. That would increase the minimum support, because we have ordered the list so that the smallest minimum support is in the front. And that would break it. So this is the exception that we definitely need here. Look at the head item. If it does include the head item, then it has to be in the candidate list that has been calculated before of the smaller sets. If it does not contain the first item in the list, it has to be in the smaller candidates. And if it does not contain it, it does not have to be in the smaller candidates list. So this is what we do. So for example, if we have a couple of three element sets here and we join them, here's 1, 2, here's 1, 2, makes 1, 2, 3, 5. Here's 1, 3, here's 1, 3, makes 1, 2, 3, 5. Here's 1, 4, here's 1, 4, makes 1, 2, 5, 6. Okay. Only joins we can do. Then after pruning, we might get 1, 2, 3, 5, 1, 3, 5, 4, 5. Okay. Let's try to figure out what's in the set. So we need to have all the sets that can be built out of all the three element sets that can be built out of this four element set. 1, 2, 3 is there. 1, 2, 5 is there. 1, 3, 5 there. And 2, 3, 5, 2, 3, 5 there. Yeah. Or there, valid candidate. Okay. Let's look at the second one. Okay. 1, 3, 4. 1, 3, 4. Let's do it in a different color. 
1, 3, 4 is there. 3, 4, 5. Oh, it's missing. Does it matter that it's missing? No, it does not matter, because 3, 4, 5 does not include the minimum support item. This is the one that presses down the minimum support. Okay. So we don't need it in the list before. All the other things, this is 1, 4, 5. 1, 4, 5 is there. And 1, 3, 5. 1, 3, 5 is there. So they are both there. And thus, we have that. Okay. Then for the last one, 1, 4, 5, 6. Okay. How can we build things out of that? Let me get rid of my little drawings here. We can have 1, 4, 5 is there. We could have 4, 5, 6. It doesn't have to be there, because the first item is not in the set. We could have 1, 5, 6. Oops. That is missing. And the first item is included in the list. So the whole thing goes down the drain. Okay. Clear? Similar as before. Very easy. Just, you don't have to find those sets where the first item is not in the list. And that's all. Okay. Good. Well, for the rule generation, we found that the downward closure property is not valid anymore. That means that we have frequent k-order items that contain k-1 non-frequent items. So we may have built something whose subgroups are not frequent items in itself. And, well, that is a problem, because we don't have the support values. We have to compute the support values for those. It's not a problem, but we have to go over the data again. It's an efficiency problem, not the effectivity. Well, that is called the so-called head item problem. So we might end up with some sets where we have to look at supports, but don't have the support values. Because the set of smaller cardinality is not a frequent item set in itself. Well, easiest way to deal with that is just if something is missing, go over the data, find out what it is. Of course, also a very inefficient way to deal with it. There are some other ways to deal with it. We will come to that in a minute. So to give you an example, if we have frequent item sets, shoe closes bread, then we could have the minimum support is the minimum of all the three of them. So if we have bread, closes shoes, then 0.1 definitely is the minimum. So this is here. Now we have, for example, closes bread, has a support of 0.15, shoe close bread has support of 0.12. And the minimum support holds, but what about closes bread? Well, for closes bread, the minimum here is 0.2. So the minimum support for this set does not hold anymore. I don't have the confidence for that. All the rules I can build that have the closes and bread, I cannot compute. They might be true. I just cannot compute them without accessing the data. Good? Clear? Okay. Not too difficult. So this is what I call the head item problem. Basically, I can calculate the support of some item or the confidence in some rules and in some, I cannot do it without reading the data again. And well, basically, there is a possible solution that at least heightens the probability without reading the data again. If I take the non, or all the probabilities where one non-frequent item set is taken out. So if I take those and take out one non-frequent item set, then I will get to this. And that was the probability that I didn't record in the first call. But if I record all the probabilities also for those items, it's not guaranteed that I get all the things I need, but it's probable. So also calculating them when you run through the data is a good idea. Anyway. Good. Advantages of multiple minimal support. 
It's very realistic for practical application because bread and shoe shine are sold together, but are not the same thing. One is a convenience article that is bought every day. The other is an article that is bought, a luxury article basically that is bought once a month or something. You cannot deal with them with a global threshold. If we use multiple minimal support and consider the divergence, we will not end up with all rules, you know, like people like bread because they like shoe shine or something like that. But we'll rather have rules of the type. People buy laces because they buy shoe shine because they think at that point that they're also new laces or something like that. So this is kind of like rare item rules and that is what we set out to do. Basically, if we set the support values to 100%, we can prevent items to occur in rules anyway. Also a nice idea. I'm definitely not interested in anything that has something to do with bread. Well, I just ask for a minimum support for bread of 100%. If there's a single transaction where bread was not bought, that's it for the bread. So I can effectively exclude items from frequent items at a time. Maybe a good idea. Yeah, and that's it for today in terms of algorithms. Yes? This is basically how many, it's kind of like a steering wheel setting the minimal support for the granularity of rules that you are interested in. If you really only want rules that have something to do with the top products you set, raise the bar. If you're interested in all your products and their relationships, you will get thousands of rules, use a small minimum support. That's basically it. It sets the granularity of how you look at your data, how you look at your products. In the detail, we will look at another algorithm, the class association rule mining, and that will do a silvue. Okay, so we've seen the classical approaches, we've seen the cases where we are going to perform some market basket analysis, but this kind of analysis is not targeted. So I'm not expecting something specific in the right side of my rules. Therefore, there was the idea of class association rule mining to introduce specific semantics to these kind of rules by exploiting classes. The idea was we will define a set of classes and we try to identify those rules which explain the important items co-occurring inside those classes. And for this algorithm, I've taken an example from the area of text mining. It's, I think, the best field for class association rules. Okay, so let's imagine our data as a set of documents. Here I've taken seven documents, each speaking about different topics. The first three documents speak about education and the last four about sports. We have then the nouns. Well, of course, for the sake of an example, I've taken a very small number of nouns for each document. You should imagine about 300 to 400 unique nouns. I've taken here three nouns for education for the first document and so on. Then we apply classical a priori algorithm and we calculate, of course, rules with support and confidence as we've seen before. So, for example, for the education, for student school, we find the support of two divided by seven. And this has to fulfill the minimum support condition and the minimum confidence condition. Clearly, the minimum confidence condition is not such a problematic situation. The association rules that I am getting out of here, as you can observe, have a classical structure. Similarly, the words are in the left side, the class at the right side. 
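To make the targeted form of these rules concrete, here is a minimal sketch. Only the rule student, school implies education with support 2 out of 7 is stated above, so the full document contents below are an assumption modelled on the usual textbook version of this example.

```python
# Each "transaction" is the set of nouns of one document plus its class label.
docs = [
    ({"student", "teach", "school"},         "education"),
    ({"student", "school"},                  "education"),
    ({"teach", "school", "city", "game"},    "education"),
    ({"baseball", "basketball"},             "sport"),
    ({"basketball", "player", "spectator"},  "sport"),
    ({"baseball", "coach", "game", "team"},  "sport"),
    ({"basketball", "team", "city", "game"}, "sport"),
]

def car_support_confidence(condset, label, docs):
    """Support and confidence of the targeted rule: condset -> class label."""
    both = sum(1 for words, cls in docs if condset <= words and cls == label)
    body = sum(1 for words, _ in docs if condset <= words)
    return both / len(docs), both / body

sup, conf = car_support_confidence({"student", "school"}, "education", docs)
print(sup, conf)   # 2/7, about 0.29, and 2/2 = 1.0
```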
This is the whole idea of class association rule mining. This is why the advantage here is that association rule mining with classes can be performed in just one step. I'm going to generate the candidates through join and prune, identify the frequent item sets, but then I don't need to perform any combinations to extract the rules, because I already know that the classes are at the right side of my rules. So then I have each transaction as before, resulting in rules of the type condition set implies class label, where the condition set contains my words or items, and the labels are my classes. It's pretty simple. Therefore, the Apriori algorithm can be further used to calculate class association rules just as before, just adding the class semantics. Of course, the idea of multiple minimum supports can also be used here. I can assign different minimum supports to different classes. A classical example for this is when I define two classes, one positive, one negative, with the corresponding items, and then I say, okay, I'm really interested in what makes the positive class. So I'm going to use a lower minimum support for it, and I'm going to use a higher minimum support for the negative class, since I'm not that interested in it, or it occurs a lot in my data anyway. If I want, I can also exclude it entirely from my rules by using a minimum class support of 100%. Good. Some classical tools for performing association rule mining: all three algorithms, or variations of them, that we've discussed today are offered both in open source projects and in commercial products. The best known open source projects for association rule mining and data mining as a whole are RapidMiner and Weka. You can download them and play with them, test them and see what kind of functionality they offer. They are completely free for student use. The commercial solutions are a bit more powerful and scale very well with data; there is the Intelligent Miner offered by IBM, there's a product from SPSS, and of course Oracle also has ODM, Oracle Data Mining, for the Oracle database. I wanted to bring you a practical example, so I've tested association rule mining with the Apriori algorithm on a dataset which I have obtained from the UCI machine learning repository of the University of California, Irvine. Of course, you can find there a lot of datasets for performing exactly this kind of association rule mining. Here I've downloaded a dataset with regard to customer preferences when it comes to buying a car. The classes are unacceptable, so the client found a certain car unacceptable, then acceptable, good, up to very good. And the attributes which the customers had to consider when deciding upon the class value were the cost of the car, which varied from very high to very low, the maintenance cost, the number of doors, the luggage room and so on. There were more attributes for considering the acceptability of a car. I've performed this study with Weka, and in the Weka interface one can set how many rules one wants to obtain. I've set here, I think, 100 rules. One can set the support intervals, the confidence, and the class index in case one wants to use class association rule mining. In this case, I've performed the classical Apriori algorithm with classical association rule mining, so no class involved. I wanted to see what comes out, and I've observed that the first rule, for example, was that a coupe, so a two-person car, was found as unacceptable by most of the clients. This is a pretty convincing rule. It has a confidence of 100%.
The same can be said, for example, about cars for two persons with a small luggage compartment, again unacceptable, and so on. These are pretty simple rules. Weka displays at first the rules with just a small number of items in the left part of the rule. So I waited a bit longer and received also somewhat more comprehensive rules, like, for example, this one: a sedan, a four-seat car, which was found unacceptable due to its low safety. So if a four-seat car is found unacceptable, then it is most surely because it's unsafe. And if you wait some more or define a larger threshold of allowed rules, then you'll get even more complex rules, which also decrease in confidence due to the large number of records in your dataset. Good. I then wanted to try something larger. And I've taken a car accidents database, which actually is pretty interesting. It is quite big, so 350,000 rows and 54 attributes. And I was also curious to see what kind of attributes there are. There's information about the hour the accident took place, where the accident took place, what the intersection, if it is an intersection, looked like, what the meteorological conditions were. All this information is there in the attributes. So I wanted to see, okay, how the rules look in order to detect what kind of conditions are more frequent for such accidents. I started Weka, and for only 54 attributes and 350,000 rows, it died pretty fast. So you can imagine the complexity of the algorithm can go up pretty fast, and four gigabytes of memory are quickly not enough. Of course, there are improvements of this algorithm which achieve better performance and can deal with larger amounts of data. And I then also was interested to see how Walmart can do such analysis. And I found out that actually what they are doing is kill it with iron. So they just throw a lot of hardware at it when performing something like this. They try to prune as much as possible. They have a tree-based algorithm and then it works. So it's not a good idea to do this on a laptop as I've tried it. Good. So, what I would like you to take from today's lecture: we've spoken about business intelligence. We've spoken about the importance of segmenting the customers, about the propensity of the customers to buy. How and what are they disposed to buy? What is the profitability of the customer and how may I keep the customer with the company? We've spoken about an overview regarding data mining. What is data mining in general? What are the interesting algorithms? And we've started with association rule mining. We've introduced the Apriori algorithm, in which we can measure the strength or weakness of rules through support and confidence. The most important part of the algorithm is the very intuitive downward closure property, which reduces the intermediate results by orders of magnitude. We've discussed association rule mining with multiple minimum supports to solve the rare item problem. And we've discussed the head item problem, which derives from using multiple minimum supports. After the holiday, we'll continue through the data mining field as applied to data warehouses, with time series, trend and similarity search analysis, as well as sequence patterns. Thank you and have a nice holiday.
In this course, we examine the aspects of building, maintaining and operating data warehouses, as well as give an insight into the main knowledge discovery techniques. The course deals with basic issues like storage of the data, execution of the analytical queries and data mining procedures. The course will be taught completely in English. The general structure of the course is:
Typical DW use case scenarios
Basic architecture of DW
Data modelling on a conceptual, logical and physical level
Multidimensional E/R modelling
Cubes, dimensions, measures
Query processing, OLAP queries (OLAP vs. OLTP), roll-up, drill-down, slice, dice, pivot
MOLAP, ROLAP, HOLAP
SQL99 OLAP operators, MDX
Snowflake, star and starflake schemas for relational storage
Multidimensional physical storage (linearization)
DW indexing as a search optimization means: R-trees, UB-trees, bitmap indexes
Other optimization procedures: data partitioning, star join optimization, materialized views
ETL
Association rule mining, sequence patterns, time series
Classification: decision trees, naive Bayes classification, SVM
Cluster analysis: k-means, hierarchical clustering, agglomerative clustering, outlier analysis
10.5446/328 (DOI)
So hi everyone and welcome to the next installment of our lecture data warehousing and data mining techniques and we started off last week to look into data warehousing as a whole to look at applications and how people use data warehouses what are typical queries that you would try to answer with data warehouses and of course how the life cycle of the data warehouse what do you do with the data evolves this lecture will cover architectural issues mostly a little bit of modeling so how do you deal with the data what is the basic data model how is the data warehouse actually built and what parts of the data warehouse can be identified as architectural components and we'll start with some some basic architectural constraints and look at the very course architecture then go into the storage structures to see what is stored where and how this stored data can be used we will then discuss a little bit about the architectures in terms of tiers the stages that data goes through until it's finally used for querying or for reporting issues we will very briefly talk about distribution so what can distribute data warehouses to and then as a larger part we'll go into data modeling find out how data is related to each other and what is the important part of the data and what should be stored and what can be calculated on query on use anyway considering the basic area the basic architecture of a data warehouse allows us to clearly identify to clearly identify a couple of stages first we have the raw data the raw data basically comes from operational systems from well the data where it actually occurs naturally yeah for example production data sales data so whenever something is done or whenever something is accounted for the people in accounting or in the production department or the salespeople put it into their computers it somehow transferred to a central data storage that is an operational system as we pointed out last week the main focus in terms of queries and in terms of operations on this data is on OLTP an online transaction processing kind so what you basically do is you take the data you put it into the database you update the data you work with the data and so on so this is fully operational systems sometimes it's even so-called flat files so if you have some XML or something was just stored and appended you know like especially some of the big players in e-commerce like Amazon come they do things with flat files these days they don't employ relational databases as operational databases anymore but rather user service oriented approach that will have own storage structures in the case of Amazon that is Dynamo which will be considered in a different lecture so that would be out of the scope of this lecture but if you're very interested in these kinds of architectures and how to deal with them that is our distributed databases architecture that will cover that area if you're interested in the basic storage and how to search and flat files in service oriented environments then we also offer a lecture on XML databases very nice to see there for now this is basically the data source the operational system then we have a second area here which is called the staging area the staging area is you where you perform operations usually costly operations on the data prepare the data for the final storage in the actual warehouse this is basically where the data is stored and kept and from all that data that is kept in the warehouse you go to the presentation area and the presentation area prepares the data 
for its use and its use is basically analysis very often the OLAP process and of course reporting and then the second part of our course we will see data mining algorithms so this is also whether actual mining takes place so we can clearly identify four basic architectural components the operational source systems the staging area the warehouse area and the data marks or presentation layer starting with the first one once you get your data from the original sources you enter the staging area what is the use of the staging area the use of the staging area is that the operational data cannot just be stored but often has to be transformed last week we already talked a little bit about the ETL process extraction transformation loading okay and this is exactly what happens here the whole ETL process is focused on the staging area that I was extracted from operational databases or flat files then it's transformed into whatever format you want it it's cleaned so there are no inconsistencies and you know like redundant data and all that kind of stuff you take that out and then after all that is done you load it into the warehouse so what you see is basically the extraction step is here from the data sources into the staging area the loading step is from the data from the staging area into the actual data warehouse and within the staging area the transformation process takes part we will cover the transformation in two lectures I think on next lecture I think in two lectures it's later lecture seven okay for building the warehouse so transformation as we will then see is quite a complex task and and very not so much storage consuming but but very CPU consuming so very time-consuming such that this area or mapping this area as an area in its own right is actually a very good idea the interesting thing is that why are there is in the staging area it should not be touched in any way by any of the users neither should users be allowed from the operational systems to influence data in the staging area nor should users be allowed from the analyze from the analyzing part from the data presentation part into the staging area it's actually a little bit like like you know in in a restaurant's kitchen you wouldn't want customers in your kitchen because you're cooking there and they were there they're doing they're taking the lids of the Potsam they're looking at it you know like and you go like oh now the souffle is gone and it's kind of the same thing here every change even if it's only weeks that happens in the staging area can affect the data can affect your statistics about the data and thus might hamper the the the cleaning process and the process that you have so data cleaning is a very interesting and and very skilled profession there are lots of interesting algorithms that we will go through in a couple of lectures and you will find that the less people are concerned with that and the less people touch the data and somehow also the data in the staging area the better it is for the entire process because you will have to control it the second big part after the TL process in the staging area has been done is the presentation area after the thing was loaded it's simply stored we will come to that in a moment how it is stored but what do you do with the stored data you have a ton of very heavy weight queries that aggregate something that join something and very often or more often than not you will find that many people are interested in asking the same queries over and over again you know for example 
salespeople how did the sales in northern America develop three months later how did the sales in northern America develop what do you do with this kind of query you can either give it directly onto the data you have to optimize it you have to aggregate the values blah blah blah or you could say well basically let's pre-aggregate something store that and then work on that data and this is exactly what the presentation area does we will also go a little bit deeper into that when we come to data mods and how they are built but basically the data that is in the presentation layer and the data that is in the warehouse is the same kind of data it's only a different view on it it's a materialized pre-aggregated version of the data in the warehouse that's prepared for certain kinds of queries that are often occurring okay good what you do with the data basically what you perform is online analytics queries reports are very important so some of these data mods could be prefabricated reports that are just taken by users whenever they want them and it is basically something the the abstraction is something like happens in the normal 3 tier database architecture we have this view layer or the external layer that basically says this is what the user sees the user is not so much concerned with the warehouse the users definitely not concerned with the original operational data sources the users just wants a certain view on the data all the data that he or she needs all the data that he or she is allowed to see so also security reasons apply here and all the data is somehow pre-optimized to make the the access quicker and this is basically everything the users are interested in and and if the presentation area is up to date the users are happy and then everything's fine okay so this is basically what we are concerned with the ETL process getting a good structure in our warehouse and prefabricating the right presentations the right data mods for the users okay in terms of the the storage that is actually used to store the data the database or the the central part can be done in several ways the easiest way is just putting a database management system normal relational database management system in the middle just have some oracle system or some some large-scale system running there put the data there and the data mods should be extracted from that the form of the database can be either relational or it can be multi-dimensional and now who has ever heard of a multi-dimensional database nobody and that is because this is not a commercial product as such it's a paradigm it's like a relational database or an object oriented database it can be mapped onto a relational kernel can be mapped onto a object based kernel it's just the way of presenting the data in a multi-dimensional fashion we'll go into that in a minute so what I want you to take is that in the staging area you usually have some relational database system can even be a smaller one but you need to do a lot of calculations so that should be strong machines strong mainframe or something user creators are not allowed on that it's just for the cleaning process it's just for preparing the data for the actual warehouse in the presentation area you need a certain way of preparing the data you need aggregations different levels do you want the sales data on a day-to-day level do you want it on a monthly level do you want it on the whole fiscal year it depends whatever you try to do with it if you want to find out whether you should close down your shops in 
in northern America over the weekend because there's not too much being sold there and you can save some severe costs you need a day-to-day level it doesn't it doesn't help you if you have monthly averages because then you will not see whether whether the weekend is good for business or bad for business on the other hand if you're interested in in how much taxes you have to pay you need the fiscal year it doesn't matter how much you sold on one day it evens out it averages somehow with all the other days in the fiscal year you need different different structures and this is why in the presentation area you definitely need something that is multi-dimensional okay that does not just have this relational structure but a multi-dimensional structure that's just that was a relational model you all know relational database you all had to relational databases one and basically a database is a collection of predicates over a finite set of variables so a certain number of variables whatever you need and you declare some truth function in a way over it where you say okay this is like sales the sales is the income that you get by products so the sales could be a variable that you choose and then you can have a predicate that counts or somehow averages all the prices for all the products that have been stored okay what you do basically in the relational model is you put relationships between entities the entities that you're concerned with for example if you're a bookseller a book is a typical entity it will have a title it will have publisher might have certain category or genre and what you do now is you have relationship between the publisher and all the publishers there are all the publisher houses so publishing houses exist whether you sell some books from it or you don't so you might consider this a different entity and what you do in the relational model you put a relationship between it a foreign key relationship usually and this foreign key references whether the publisher is existent or whether certain genre of the book is existing okay and once you have a query you want a publisher that publishes books about certain in certain categories about certain genres of what for example is a typical fantasy book publisher what you do basically is you get the description fantasy you take the foreign key to some of the books you take the foreign keys to some of the publisher and then you do the drawing okay you join together the information that is in the book table in the publisher table in the book category table and then you know what publishers are prevalent in fantasy novels or something that's easy on the other hand that involves a lot of joints and as we know from from query optimization who has had databases to relational database to not too many so not too many have heard about clear optimization where should it's kind of interesting course so you probably should take it if you're interested in information systems as a whole and then you just have to believe me joint are one of the most costly operation you can do inquiries because you have to scan through the tables and try to match all the books in the book table with all the publishers in the publisher table and once you found a match you can take that that couple but you have to scan through them all which is quite annoying task a relation basically is usually described as a table but has a certain name table name and a couple is a row in this table and the row consists of several attributes or columns in this table okay so all the tuples 
that are all the objects sorry all the objects that are in the table share a certain set of attributes a certain set of characteristics that describe the objects okay it's like like your typical index card you know put something on top and each index cards for some entity of the real world and then you just put the values for certain characteristics into that index card same here when you do multi-dimensional databases and the basic idea is the same you have a table but instead of now putting all the data in a normalized fashion in a non redundant fashion you prepare all the redundant information that you might use later for certain types of queries what you basically do is in the in the staging area when cleaning the data and when preparing the data you of course have to store that data somehow relational databases fine but what if you don't have the daily data but need monthly data or yearly data then you would have to aggregate all that data you have to do all the joints which is pretty annoying which is pretty time-consuming so what you basically do in a multi-dimensional database is you define dimensions that allow for different that allow for different aggregation levels so for different degrees of aggregation so for example if you have the time dimension you can have daily data monthly data yearly data can have everything in between and the more you go into the more you the more you you are interested in details the deeper you have to dig for the data you can always imagine it as tabular data like in the relational model okay and in the tabular data you have for example the sales on a daily value and then you put the same table but here you do a weekly scale and what you do is you group all the other attributes and aggregate all the data for the same week and then you do the same table again and do it on a monthly base okay and this is basically the table that are in here we have the take a different color the time dimension down here so the first table would correspond to the first row in the time dimension the second table would correspond to the second row in the time dimension third would correspond to the third row in the time dimension and then can be as many dimension as you want to usually it's called a cube because it's not just two-dimensional a table but multi-dimensional there's a basic idea behind that and to get a little more intuition about multi-dimensional data we'll enter our first detour and Sylvia will tell us something about multi-dimensions thank you so let's take an example so that you better you can better understand this multi-dimensional paradigm we'll take our classical entrepreneur the car manufacturer who wants to increase the sales of course in order to increase the sales he must first see how he is doing where he's at how are they his sales doing so he is interested in the volume of sales he wants to see this by a model color color dealer and maybe over time so probably he wants to see the sales for last year or something like this actually so that you can get the feeling this is this can be shown displayed at something like this yeah so you have an excel like tabular form with many dimensions graphic representations and so on but we're more interested in one what happens under the lower layer so if you take the classical relational database which is powerful normalized then you would have a table for the corresponding model the IDs as primary keys colors again in another table IDs product linked together to the model and to the color true foreign keys 
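To make that relational starting point concrete, here is a minimal Python sketch using an in-memory SQLite database. The layout (model, color, product, and the sales table she mentions next) and the numbers simply mirror the example used in the lecture; every identifier is invented for illustration and is not taken from any real system.

```python
# Minimal sketch of the relational schema described here (illustrative names only).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE model   (model_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE color   (color_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE product (product_id INTEGER PRIMARY KEY,
                          model_id INTEGER REFERENCES model(model_id),
                          color_id INTEGER REFERENCES color(color_id));
    CREATE TABLE sales   (sale_id INTEGER PRIMARY KEY,
                          product_id INTEGER REFERENCES product(product_id),
                          units INTEGER);
""")
con.executemany("INSERT INTO model VALUES (?, ?)", [(1, "minivan"), (2, "sedan")])
con.executemany("INSERT INTO color VALUES (?, ?)", [(1, "black"), (2, "blue")])
con.executemany("INSERT INTO product VALUES (?, ?, ?)",
                [(1, 1, 1), (2, 1, 2), (3, 2, 1)])
con.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                [(1, 1, 113), (2, 2, 324), (3, 3, 50)])

# "Sales by model and color" only falls out after joining everything back together.
rows = con.execute("""
    SELECT m.name, c.name, SUM(s.units)
    FROM sales s
    JOIN product p ON s.product_id = p.product_id
    JOIN model   m ON p.model_id   = m.model_id
    JOIN color   c ON p.color_id   = c.color_id
    GROUP BY m.name, c.name
""").fetchall()
print(rows)   # e.g. [('minivan', 'black', 113), ('minivan', 'blue', 324), ...]
```

The point of the sketch is only to show how many joins even this simple aggregated question already needs.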
And sales is again linked to the product through foreign keys. If you join them all together you get such a table. For a simple query, say: okay, what are my sales by model and color? As I already mentioned, about four joins are needed to reach this table, and in the end I only need aggregated data, namely the sales volume. What can I do so that this task becomes more intuitive? I can represent the data like this: I say, okay, I have two interesting dimensions, the model and the color, and then I have my sales numbers. For example, 113 minivans were sold in black, 324 in blue, and so on. Then I also have the aggregated data, which has been previously calculated during the extract, transform, load process, because the next step usually is: okay, I don't care right now about the model, I'm just interested in all the black cars I've sold, or I'm just interested in all the minivans I've sold. And then I have the pre-aggregated value; I don't have to search for it, it's already there. This kind of structure is the multi-dimensional structure with two dimensions and the sales fact. This is a classical multi-dimensional database which respects this paradigm. Now of course this was easy for two dimensions. What do we do if we have more dimensions? We've seen here only three values, three colors and three models, so let's take an example with three dimensions and ten values each. In a relational database this leads to 1000 rows; the same thing represented in a multi-dimensional paradigm just leads to three dimensions and the corresponding fact information, represented in a cube. Let's take another example: I want to search for a blue sedan sold by a certain dealer. I'm a car manufacturer, I have dealers all over Germany. (Could you go back one slide, please? I don't know if that became clear: what is the basic amount of data that you have to store for the three dimensions with ten values each? Yeah, exactly: for each object, each of the three dimensions has to be filled, which makes three values per row, and then it just depends on the number of objects that you store. If my database contains a thousand objects, it will be 3000 values that I store. And what happens if you do it multi-dimensionally? Exactly, you have to store all the aggregations together with the original data. That is the basic point here.) Well, as I've said, storing the data is not really the problem; dealing with data warehouses we already have to consider large hard drives and so on. What we are trying to spare here is the computing power: I don't want to wait too long when it comes to querying, and this is also why I don't care about how much I store during the ETL process. Now, when it comes to performance: of those 1000 rows, for a relational database I need to read all the records, no index considered. Of course one could say, yeah, I will lay an index on it, we could index everything; we'll see what happens when using indexes later. So in the case of relational databases I would need to go through the whole 1000 records in the worst case, when the information is exactly at the end of the table. In the case of multi-dimensional databases I have, however, additional knowledge: I have these three dimensions.
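In Python terms, the cube she is describing can be sketched with plain dictionaries standing in for a real multi-dimensional engine. This is a hypothetical illustration only, not how any particular product stores its cubes; the key idea is that the margins are filled in once, at load time.

```python
# Small sketch of the sales data held multi-dimensionally: a model x color grid
# plus pre-computed totals, the kind of thing the ETL step fills in up front.
models = ["minivan", "sedan", "coupe"]
colors = ["black", "blue", "red"]

# fact cells: units sold per (model, color); missing cells mean nothing was sold
cube = {("minivan", "black"): 113, ("minivan", "blue"): 324,
        ("sedan", "black"): 50}

# pre-aggregated margins, computed once at load time, not at query time
total_by_model = {m: sum(cube.get((m, c), 0) for c in colors) for m in models}
total_by_color = {c: sum(cube.get((m, c), 0) for m in models) for c in colors}

# "all black cars I've sold" is now a single lookup: no scan, no join
print(total_by_color["black"])     # 163
print(total_by_model["minivan"])   # 437

# With three dimensions of ten members each, locating one cell means at most
# 10 + 10 + 10 = 30 dimension lookups, versus scanning up to 1000 relational rows.
```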
So I just need to say, okay, I take the first dimension and search through its 10 values. For the color I'm searching for black; it's easy, I find black (or blue, whatever it is) in the color dimension, I find sedan in the next 10 values, and I find the dealer I want in the dealer dimension. In the worst case, 10 plus 10 plus 10 means 30 lookups, and if you compare 30 to 1000 that is already a very big improvement in terms of search. If you consider even the average case, for a relational database this would mean about 500 searches (one plus 1000, divided by two), where in the case of multi-dimensional databases it is 18: one plus 10 divided by two makes 5.5, round that up to 6 per dimension, multiply by three dimensions and you get the 18 searches you need to perform. Again a great improvement in this case. Well, if the query is even more relaxed, so I'm just going to search for all the sedans, I don't care about the color, I don't care about the dealer, again relational databases need to search the whole 1000 records, while in the case of multi-dimensional databases I just take a slice, the slice of data which corresponds to the sedans. It's actually also humanly intuitive: I'm just interested in the sedans in this cube, so I take a hundred cells of raw data by going through just 10 records, the types of cars, the models. So I reach the data I'm interested in much faster. This clearly speaks for the performance advantage that multi-dimensional databases have: they are an order of magnitude better than relational database management systems for such queries, and such queries are exactly what we need for data warehouses. This is why multi-dimensionality is the best paradigm for OLAP queries, which of course we can implement either relationally or multi-dimensionally as the lower layer; we will see this at the physical storage level. Let's also do a short comparison between the two of them, some advantages and disadvantages of the two technologies. I will start with the presentation, so the ease of data presentation. Maybe you can remember the long query which resulted from the Colgate campaign in the last lecture: Walmart wanted to perform a Colgate promotion and needed to compare the sales from this year with last year in certain regions. When doing something like this based on a relational database, something like this has to be written; when doing it with multi-dimensional databases I just come out with a cube. This is much more intuitive, it's not that complex to write, and I already have it there, so I can present it also to the business user. Again, there are some issues with the query language which one can use on relational databases. A very good example is when I am doing top-k queries: I want to find out the five cheapest hotels in Frankfurt, for example, a classical top-k query. If I want to do something like this I need to write a complicated SQL query. If I'm using newer SQL, from the SQL:1999 standard onwards, I can probably use some function like, for example, STOP AFTER, which I know for a fact is not a standard in all of the relational database systems; for example MySQL uses something like LIMIT 5, DB2 uses something like FETCH FIRST and row numbers. So it's practically a very tedious task to do something like this in SQL. Something like this in multi-dimensional databases is actually natural: I just take the first values provided by the multi-dimensional database. Another advantage is the ease of maintenance: I don't
need to perform this translation between what the user wants and what the data looks like I have my queries based on dimensions and facts I have them stored in the multi-dimensional database I don't need to translate through joints and stuff like that another advantage I don't need those indexes which I need to tune my database with so that it performs better saving and maintaining indexes is very costly I can save myself this cost by using multi-dimensional databases of course the performance is also very important factor as I've said I'm going to spare myself the database tuning I can't do tuning on all the possible ad hoc queries which I would certainly need if I would use a relational database system and I can't aggregate on everything multi-dimensional databases have the information pre-aggregated through the ETL process while in relational databases I should use additional components like aggregate navigators or some other mechanism that would provide me this information of course your question would automatically be why not use multi-dimensional databases for everything if they are so great why not replace relational databases forever and use just that well they are not suited for everything it's great if we have data which can be split it into dimensions and facts and so on but if you have something like this where you have a name a person and an ID then we'd have a lot of sparse a sparse cube or a lot of information which actually doesn't need to be stored and performing operation or such multi-dimensional database just loses time so when our multi-dimensional databases appropriate they are appropriate when we have highly interrelated data sets so not like ID and person or so they are recommended for expressing the user view of the data they provide therefore ease of access for the analysis tasks and classical examples for Zovas for something like this are financial analysis reporting budgeting promotion tracking and so on good so basically what a multi-dimensional model does is it trades space storage space for query time for the complex queries that are that are warehousing queries this is a very good idea and of course one could argue this would also be a very good idea for all the relational databases would it not any ideas why don't do normal relational database why why doesn't Oracle come with a multi-dimensional add-on and pre-aggregates all the kind of stuff yes yes exactly that is a basic problem updates and inserts because what happens if you have to do an update yes the aggregates exactly so for the aggregates is you have to keep in your model when something changes also the aggregate might be changed the average or something you have to recalculate that you have to put it into the database on the right spot inserted or update the value and that costs of course a lot of time so there are clear trade-offs between the time that you need for building the multi-dimensional model in face of updates and inserts and the time that you need for querying the multi-dimensional model but if that is so why can data warehouses get away with it and always rely on a multi-dimensional model yes exactly the basic point that if you remember last week's sector is that the data in a in a data warehouse is hardly ever updated it's loaded yes and large volumes of data are loaded and then aggregated but it's not like in all TP systems where every second a couple of updates come in and the data has changed in the data warehouse you keep all the historical data and that has not to be updated so 
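That trade-off, cheap reads bought with expensive writes, is easy to see in a toy Python sketch (all names here are hypothetical): every correction has to ripple through every pre-computed level, which is why pre-aggregation only pays off when data arrives in occasional bulk loads instead of a constant stream of OLTP updates.

```python
# Illustrative sketch: why frequent updates and pre-aggregation don't mix.
daily_sales  = {("2010-03-01", "minivan"): 113}
weekly_total = {("2010-W09", "minivan"): 113}   # pre-computed during loading
yearly_total = {("2010", "minivan"): 113}

def oltp_style_update(day, week, year, model, delta):
    """Every single correction must touch every aggregation level."""
    daily_sales[(day, model)]   = daily_sales.get((day, model), 0) + delta
    weekly_total[(week, model)] = weekly_total.get((week, model), 0) + delta
    yearly_total[(year, model)] = yearly_total.get((year, model), 0) + delta

def warehouse_style_load(batch):
    """One nightly bulk load: append the facts, then rebuild aggregates once."""
    for (day, _week, _year, model), units in batch.items():
        daily_sales[(day, model)] = daily_sales.get((day, model), 0) + units
    # recomputing the weekly and yearly levels happens once per load here,
    # not once per change (left out of the sketch for brevity)
```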
Going the extra mile, pre-calculating all the aggregates and doing all the sums, is a good idea for fast querying. Okay, everybody with me? Good. Well, basically there are a couple of popular architectures for data warehouses that are concerned with the tiers that you have; architectures are always kind of like layers of a cake, you know, which component comes where. Generically there is a two-tier architecture where you have the staging area and the data presentation area; then you can have independent data marts that add to the presentation area, you can have dependent data marts and operational data stores, you can have logical data marts and the active warehouse, and three-tier architectures. We will go into them in a minute, and there are a lot of other variants that have been discussed, but basically it comes down to this. If you have a generic two-tier architecture, you extract the data into the staging area, put it into the data warehouse, and on this data warehouse you perform all the queries that you need to do. So basically the queries are posed directly to the data warehouse, and this is how you get the two tiers. What you do is you extract the data every night, or once a month, or something like that, clean it, reconcile inconsistencies, do whatever you want to, and in the next step get it into the data warehouse: one big loading operation where you aggregate all the data that you need for your dimensions, and then it's queried. The data analysis on that comes basically in two kinds. It either happens on the client, where you get the data from the data warehouse and then do all the calculations that you need on the client, or, what is very popular, all the calculations, all the query processing, all the aggregations that are needed on top of what is in the data warehouse are done on the server side. In that case we're talking about thin clients. A thin client means I just have a client, basically some laptop or whatever mobile device, that is only used as a query interface, and what is shipped from the server is the prepared query result, not the raw data. So for example all the people involved with customer care take their laptop, and of course shipping all the data from the data warehouse to the laptop over the net costs a lot of bandwidth and therefore a lot of time, and then computing all the complicated things on the laptop costs a lot of runtime. That results in a long query time, not what you would want in a customer interaction: oh, let's wait half an hour, then we have the results. Very bad idea. So with thin clients you just use a query interface on the client side, you ship the query to the server, the server computes everything and ships only the results, the final results for presentation, to the client's laptop. The other thing is exactly the opposite, the fat client. The fat client is a very powerful device, and the server really delivers the raw data; all the analytics, all the aggregations, all the things that you work with are executed on the client, and all the data that is not needed is just, well, swept out of the system. The problem is that the communication between the server and the client is very heavy, very bandwidth-consuming. On the other hand, the server is not too loaded, because it doesn't have to do all the query processing.
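As a rough sketch of the two variants just described (invented function names, no real networking), the only thing that changes is where the aggregation runs and how much data has to cross the wire:

```python
# Toy contrast of thin-client vs. fat-client query handling in a two-tier setup.
RAW_FACTS = [("minivan", "black", 113), ("minivan", "blue", 324),
             ("sedan", "black", 50)]          # imagine millions of these rows

def thin_client_query(model):
    # server side: aggregate inside the warehouse, ship only the finished result
    result = sum(units for m, _, units in RAW_FACTS if m == model)
    return result                              # a single number travels to the laptop

def fat_client_query(model):
    # server side: just ship the raw rows that might be relevant (lots of bandwidth)
    shipped = [row for row in RAW_FACTS if row[0] == model]
    # client side: the laptop does the aggregation itself
    return sum(units for _, _, units in shipped)

print(thin_client_query("minivan"))   # 437 either way; only the cost profile differs
print(fat_client_query("minivan"))    # 437
```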
With the fat client, then, the parallel access to the server is much higher, because the server just gets the query, sees what data is needed, ships it out, which can be done in a very efficient fashion, and then gets to the next request. That is something totally different from: okay, where is the data, and now I need to aggregate it, and now I need to perform joins, and now I need to do whatever, and maybe prepare it for presentation so that all the Excel slides are there, or whatever you want on the presentation layer, and ship that out; much more difficult. So what you're trading here, fat client versus thin client, is parallel access on the server and server load versus the bandwidth that you need for data shipping. It depends on your application scenario, whatever you need. Once you add an independent data mart, or several independent data marts, to your data warehouse, you get to a three-layered architecture, because you still have the staging area and the extraction (you have to store the data somehow, usually as part of the staging area), and from that data you make several snapshots. So you don't have one big data warehouse anymore that caters for all queries, but you have queries that specifically work on these different snapshots, which can of course be distributed to different servers. You're solving two problems: the databases get smaller, you don't have a single big data warehouse but smaller data marts, and the access of the clients can be parallelized by putting different data marts on different servers. Why don't you always do that? What is the basic disadvantage of that architecture? Yes, redundancy, yes, on one hand; of course also in the big data warehouse, doing all the pre-aggregates, you have a lot of redundancy, but here you introduce even more, that's right. Other problems that one can see? That's right: some data marts may have different dimensions, different facts and different aggregation levels, so how are they supposed to work with each other? That was also exactly it: it might be that some report really needs to access several of these data marts and then join the results, and this is something that you basically wanted to avoid when building the big data warehouse; you reintroduce it through the back door if you work with independent data marts. Okay, so that is a little bit difficult. If we don't want independent data marts, because you might need joins over them, how about dependent data marts? Still the same here: the staging area stays as it is, but what you do now is you load the data into one big data warehouse, not part of the staging area anymore but really part of the data presentation area, and from this you extract all the little data marts that you need. All the queries either work on one of the data marts, or, if they need more information, if they need to combine several data marts, they can also work directly on the enterprise warehouse. Advantages, disadvantages? Any ideas? No? That's pretty obvious: the computation, getting it into the data warehouse and then splitting the big data warehouse into several data marts, takes somewhat longer, but then you can handle those queries that only rely on a specific data mart directly on that data mart, so you can again distribute some of the workload over several servers. Still, for all the queries that involve many data marts, where we would have join operations in the independent data mart scenario, we can work directly on the enterprise data warehouse and everything would be fine. Okay, a little bit
more overhead in the loading phase better clearing good then we have the logical data mart and so-called active warehouse what you do here is that you put the state store the staging area and the actual storage and data mart creation into one architectural component you load the data from the operational systems near real-time into the big data warehouse you add a transformation layer to the data warehouse that immediately after receiving the data transform it transforms it into several data marts and the data marts are just logical views of the data warehouse so that kind of materialized views that can be used but by not materializing them on different hardware or on different parts of the software but keeping them as logical views over the data warehouse you get very fresh data on one hand and you don't have the overhead of shipping the data to the data marts if your queries are well conserved with very very fresh data so for example if you have stock market applications or something like that then this basically is is a must and on the other hand you have a central server that does the staging plus the data warehouse so this is a big disadvantage you have one big mainframe that has to handle everything okay one big component on the other hand amazingly fast what is the active part in that well the active part is that you have notification whenever something happens so whenever something changes in the in the source system you immediately extract it put it into the data warehouse okay and feed it to the user presentation tools I'm talking about data marts all the time so basically what is the difference between the data warehouse and the data mart who has got an intuition about it now everybody knows what a data mart actually is anybody wants to risk a definition of a data mart yes that's right it's a subset of the data warehouse that is specific to the application that means basically it's pre-aggregated for the application so whatever the application needs whether it be yearly data or data with a certain geographical scope it's prepared for that okay so in terms of the scope you have the data warehouse that is one centralized component okay that has been planned by all the system engineers whereas the data my mart is decentralized okay and only applies to a certain application and it just happens you know an application very often poses a certain query the data warehouse may decide this is a good idea getting a data mart for it how about the data in the data warehouse that's all the data that you have all the historical data all the detailed data all the aggregation levels in the data mart you take only those aggregation levels or only this the data that is needed by the application okay and in talking about normalization in the data warehouse you have a slight normalization of the data in the data mart you have very high denormalization because you just store the aggregates that you need you may even skip all the basic data on this data mart then some queries are not possible on this data mart but the data mart is created for some application so this type of query may not be needed in terms of subjects the data mart is always concerned with a single central subject that is concerned by the application data warehouses may allow several multiple subjects sources the data warehouse gets all the external sources the data mart just gets basically a snapshot from the data warehouse and this is basically what what is the main difference between a data mart and the data warehouse so it's 
really a view on the big data warehouse that is specifically tailored to the needs of a certain application and of course extracting the data mart costs time keeping the data mart up to date costs time so data warehouse should have good reasons for making a data mart that means the query rate on that data mart should be high the application that uses data mart should be well very often used or something like that okay good and then a couple of more higher-layered architecture so for example there's a three tier architecture where you say well basically I have the operational data I have the data warehouse here is the ETL process through the staging area and there's an independent data mart that has been derived from the basic warehouse and the queries are only put on the data mart and whenever I need a different query difficult query that cannot be handled by one data mart I can either build a new data mart for that query if it occurs very often or allow joints over the data mart but I will not post the query to reconcile data to the actual data warehouse okay this is the clean data in the middle that is not touched by anyone that is just loaded from the staging area has been cleaned has been reconciled has been worked on and the data mart is what is derived the data mart is what is worked on the reconciled data can be exploited in several instances so you could do it query based just log the queries that come in find out which queries are as opposed to the database to the data warehouse and then create the data mart accordingly or you could have a fixed process for that and have a well an architect basically that decides who needs what and who gets what so that also has to be would depend on your hardware or kind of work with your hardware then there are other architectures so for example there's the one tier architecture that is rather theoretical in type so basically you work directly on the presentation interfaces work directly on the cleaned data in the staging area that is possible but the big difficulty here is that everything is on a single server for big data volumes for complicated query queries and complex cleaning algorithms working on one server is a challenge to say the least so it is sometimes considered for mobile applications where I say okay I want a real-syn client you know like I just want my my my blackberry or my my my palm pilot or you know what as a very thin client and and everything is done on the server so why not do it in the staging area well you will never see that in practice unless you work for Nokia probably then there's the n-tier architecture so you could add layers as you go it's very often done with the so-called self-ex approaches so whenever your data warehouse has active components that create more data mods or that create more levels you might need several levels that is in there the complexity of controlling all that data keeping the redundancy in check of course grows with the number of layers that are concerned so probably a very bad idea and then there's a web-based architecture the interesting thing here is that you distribute the data warehouse and do it rather in a service-oriented way so your queries are considered as being answered by certain services that are employed over central or decentralized data sources you put the data where it exists or where it is basically comes into assistance or an operational systems and then just work with the services on the data on the other hand if you use these web-based architectures you have to make 
sure that the services that work on the data really do what they're supposed to do if a service is not supposed to access some data the service is not supposed to tamper with some data then you have to keep it out of your system in a web-based system this is or you pay for the flexibility basically you know like you need a lot of authentication you need a lot of authorization issues to to find out who actually can do what it's one of the big problems in in service-oriented architectures that you will find nonetheless I mean there are a lot of companies that actually work fully service-oriented these days Amazon being one of the prime examples they even have the the post of a service orientation evangelist which is really an official job title who goes through the business and tries to find out where more services could be useful and war where some components that are still not yet service-oriented could be put to use in the service-oriented manner so basically there's the architectures that the data warehouse could have any questions no then I would say we take a short break can relax a little bit and then we'll go into the modeling issues so we will reconvene at the quarter past quarter past five so let's get on I think so yes so before before going into the modeling of the data and how do you do that just a short note on distributed data warehouses the the basic point is that for all the analytical queries that you have to do a lot of data is needed and most companies really rely on a centralized data warehouse so one big place where all the data is put together and you can do everything you want to on top of it this is of course the basic idea the basic vision of the data warehouse you know you don't have different storage houses distributed all over the place but one big warehouse where you can find everything and then when you have to assemble products you know you always take it from the same warehouse once your company becomes quite big or is globally distributed as most companies today are of course doing a centralized warehouse comes with its own problems on one hand if you replicate the data warehouse to different geographical locations it's actually a good thing because you are disaster safe so if you have a part of the company here and a part of company in New York or something like that and you just copy your data warehouse you keep replicas of your data warehouse then whether New York burns down is no concern of yours because you still have the data here if you have a single big data warehouse and your company burns or there's an earthquake or whatever you know like I mean many people in the Silicon Valley are just waiting for the next big earthquake exactly when St. 
Andreas Fault goes blinky again, that could destroy a lot of the assets you had, because one of the most important things for strategic decisions in your company is the data stored in your data warehouse, the historical data. It is basically, or should be, the basis for all the decisions that you make, and your decision support systems can only work if they have good, high-quality data. On the other hand, keeping the replication in check is a problem, because what do you do with inconsistent data in the different warehouses? That can happen through some mistakes that people make, or because some updates were lost, or one day the ETL process in one of the data warehouses doesn't work properly, or something like that, and then you have inconsistencies over your data, and your American branch tells you, well, we sold a fortune, and you look into your data warehouse and say, no you didn't, you actually made debts. Difficult to deal with all these things. Basically there are three types of distributed warehouses. The one that I was talking about was a geographically distributed data warehouse, where you basically have local instances of a global warehouse. You could have technologically distributed warehouses: they are all one warehouse, but there are several servers that serve different parts of it. Especially if you have independent data marts this is a wonderful thing to have for performance reasons, because different servers could serve different parts of your business. So there might be one server for the accountants, where you do all your accounting stuff and basically all the reports or analytical queries that you might need there; there might be one for the sales people, where they get their decisions from, what they should advertise or what they should push in promotions, that kind of stuff; and of course you will need some of the data marts for the management, which somehow aggregate all the data and again show a different view of it. And then you have the independently evolving distributed data warehouse. This is the most tricky kind of distributed data warehouse, where all the data marts are left to their own devices. What a data mart does is basically depend on some big data warehouse or on the extraction process, sometimes directly on the extraction process, but then the development of the data marts becomes, well, in a way unforeseeable, or chaotic. You have uncontrolled growth: every data mart that needs something more takes it from the basic transactions, from the basic operational data, and builds on top of it, which might be a little bit difficult because you're again replicating widely over the area. As for geographically distributed: I already said most corporations nowadays are big, they spread over the world, they have a global attitude, and the information is needed both locally and globally. If you have your headquarters in Germany, the top management will sit in Germany, but still there is the American branch and, I don't know, the UK branch and the Africa branch or whatever, and you put all the data together; and of course the local branches don't need to see all the other branches' data, they will just focus on the data that they need. Distributed data warehouses always make sense in the scenario where most of the processing really stays on the local level. If you really have executives that are only concerned with North American sales, not really depending on the European markets, why should you put all the European data in a warehouse in America? It doesn't make sense.
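A toy sketch of that geographic split, with made-up regions and figures: routine branch queries stay on the regional warehouse, and only the headquarters-level roll-up has to touch both.

```python
# Illustrative only: two regional warehouses, local queries answered locally,
# cross-regional aggregation done only when headquarters asks for it.
frankfurt_warehouse = {("2009", "sedan"): 1200, ("2009", "minivan"): 800}
new_york_warehouse  = {("2009", "sedan"): 2100, ("2009", "minivan"): 400}

def local_sales(warehouse, year, model):
    # routine branch reporting never leaves the regional warehouse
    return warehouse.get((year, model), 0)

def global_sales(year, model):
    # only top management needs the roll-up across regions
    return sum(local_sales(w, year, model)
               for w in (frankfurt_warehouse, new_york_warehouse))

print(local_sales(new_york_warehouse, "2009", "sedan"))  # 2100, answered in New York
print(global_sales("2009", "sedan"))                     # 3300, merged for headquarters
```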
Instead, put one warehouse for the American data, maybe New York-based, and one for the European data, maybe Frankfurt-based, and all the queries that come from the European branch will be routed to the European warehouse, and all the queries from the American branch will be routed to the New York warehouse. That's kind of interesting. Sometimes you will have to aggregate across them, so you can always decide on replication versus really distributing the data, really dividing it up. Replication is very good in terms of disaster recovery, and very good for top management that needs all the data: if you have larger decisions involving the other markets as well, then you might need this data. A data warehouse distributed in a technological way means, for example, that you have a Europe warehouse that is IBM, an Asia warehouse that is Sybase because they offer very good rates, and a US data warehouse that is maybe a mixture between IBM and Teradata; all the data is stored in one global data warehouse, and different snapshots are done for the different sites, so whatever you can perform locally, you should perform locally. The good thing is you can decide and stick to certain technological decisions that you have: if your Asian branch uses Sybase, only the data is shipped, not the technology, so they can communicate with almost everything. The technologically distributed data warehouse is basically introduced by the vendors. There are many kinds of distributed databases, we offer a lecture on that if you're interested, and the advantage is that distribution is part of the database management system, so you don't have to care what you replicate, you don't have to care where the queries are shipped, because that is handled by the database management system, and it virtually comes at no cost. Even the basic enterprise versions of DB2 or Oracle have different nodes where you can place your data and distribute your data. It gets a little bit difficult if the thing grows too big, because for large-scale solutions the classical database systems kind of break down, and most of the companies that are concerned with really big data, like for example Google with the Google index, or Amazon with the worldwide sales, build their own kind of database management system that caters only to their specific needs. The point with this technologically distributed data warehouse is that the bigger the warehouse gets, the more these replication features cost in terms of communication. If I have to replicate data that comes from operational systems into different locations, and over different, probably technologically heterogeneous platforms, the communication costs increase. So for example, consider that we have four nodes holding the data of the last four years: we could keep the individual data on each of the nodes, but then we would have to join over all the nodes, or we could take the data of the four years and put it, replicated, on every server; then we could answer the queries directly, just addressing one node, but on the other hand we would need much more storage space, and we would need the replication step, which is very time-consuming and a lot of communication, because you do bulk loading, and bulk loading over the network is a very bad idea. Okay, so this is kind of difficult if you work with the normal vendor-specified database distribution. The worst case in terms of distribution is really the
independently evolving distributed data warehouse the problem is that happens very often because how is your organization built does the accounting really care what the production does or does a production care what the marketing does and the big answer is no usually in a perfect world there would all be one big team you know like and really really doing what they can for each other to use all the synergies that are possible between the different departments and so on the reality is this is simply not happening so marketing couldn't care less what production does as long as something is good for them they will do it production doesn't care what accounting does as long as they get away with it they will do whatever they want to and yeah that is basically how these independently evolving warehouses originated because some people were just thinking you know like a warehouse might be a good idea and I got the money for this more warehouse that would cover my production department so I introduced one and I decided for the right the right schema for me and I don't care what schema might be helpful for other departments because I'm paying for it I get it and I do what I want with it and production may be the first that do that then financial departments come into the game and do the same and now you have different schemas so the whole point the whole idea of the data warehouse is gone having this integrated schema having that big view the birds I view on all your operational data or your company data and bring that back together integrating independently evolving warehouses is a very big problem in data integration and we will talk about that later when we do the ETL process because it's very similar in terms of how to integrate data from different sources and how to query data over differently evolving warehouses so if something like that happens in a company and you are the data warehousing architect or the analyst do everything you can to get to one of the other solutions for distributing data warehouses doesn't matter whether it's a distributed database basically or if it's globally so somehow geographically distributed but not independently evolving okay good and with that we start with the modeling part and go into the next detour to see a little bit about how modeling works in normal databases good so we're finished with the architecture we come to the new theme of today the data modeling data modeling is practically creating a data model by analyzing the requirements of our business processes of course this is also caused called database design this is called database design because into it to intuitively we save the data in a database talking about some basics regarding data modeling what is important to know is that data models provide the definition and format of our data of course this can be specific for an area of interest just imagine a big enterprise we've just spoken about data mart and departments and so on and you have accounting again and marketing if you want to perform data modeling now accounting will regard or sales will regard the client as just an ID they don't really care where the client lives what kind of income does he have or what kind of interest he has if on the other side we consider marketing they are very well interested in what he buys what he can buy what's his income where he lives what kind of relationship does do we have with that customer if you can consider the enterprise data model you have to consider then also this contradictory views over the data 
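Those contradictory departmental views are easy to picture as data structures; this is a purely hypothetical sketch with invented field names, just to show what the enterprise model has to reconcile.

```python
# Two departmental views of the same "customer" and the reconciled enterprise view.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SalesCustomer:          # sales just needs a key to hang orders on
    customer_id: int

@dataclass
class MarketingCustomer:      # marketing cares who the person actually is
    customer_id: int
    city: str
    income: float
    interests: List[str]

@dataclass
class EnterpriseCustomer:     # the enterprise data model carries the union of both views
    customer_id: int
    city: str = ""
    income: float = 0.0
    interests: Optional[List[str]] = None
```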
and model them together on the other side of course this is we also can perform the subject area data model which represents only the data on one of these departments point of views considering the general software development lifecycle everything starts with the requirement analysis in a classical software development then you have the functional requirements which lead to this nice part here this is the program development well this belongs to the software engineering so I won't go into this part in detail but after performing the requirement analysis a data requirement has to analysis has to be performed and this results in developing the first model respectively the conceptual model this regards only the abstract information about my my business processes what kind of entities do they speak about do I need the client do I need the sales what am I interested in after modeling the conceptual design logical design comes in place if the conceptual design is very abstract concept logical design makes the it connects this abstract view to the database management system which we are using so the software the underlying software software if this one is independent of the software this one makes the transition after having modeled our data on a logical level we can go to the physical design which not only takes into consideration the relational the database management system we are using but also the hardware this is why it's called the physical design good so as I have already mentioned the conceptual model is high level does not contain any implementation details and is independent of hardware and software it describes the data entities which are need we are need which are needed for the business processes in the company the relations between the entities and constraints attributes and so on the logical design adapt the independent conceptual models to the database management software so it makes basically a transition the physical design is describes how the data is actually stored for example if I have relational database management system I'm interested I'm interested in the table spaces in the indexes in how access parts are performed if I have a multi-dimensional database which is physically stored then I'm discussing about array matrixes and so on everything is dependent of course also the database management system now going from a phase to the next we must first compute the result the prerequisites for each phase this means that going from the conceptual design to the logical design I must have finished the conceptual design otherwise I have problems building the logical part the same goes also for the physical design so finish first the conceptual design then give it as input to the logical design and so on of course this these phases are also accompanied by feedback for early detection of errors and so on if I have a business process which is not fulfilled by the conceptual design and I see I correct it and this this modification propagates to the logical and physical design the conceptual model is the highest level as we have already seen it is a group of ideas as I've said very abstract it naturally clusters the information which is belongs to similar categories of course they must be relevant to the organization so everything starts with requirement analysis on the business processes it describes the major relationships between the subjects we have selected and it contains the least amount of detail more detailed information you can obtain on the conceptual model in the 
relational databases one lecture which is actually parallel to this lecture the conceptual model contains entities one example would be a car account product client and so on attributes color for example is an attribute of a car and relationships between entities an example person owns cars more complex model can be seen in this example when if if we want to describe management of the lectures for an university then we would have a lecture entity instance also as an entity students professors course of study we have attributes for these entities like for example when does this lecture take place day of the week room semester so on a student has a registration number name and then we have relations between these entities for example students attend lectures and roles a course of study and so on I think this is pretty intuitive for the ones of you that have followed the relational database management relational database one lecture for the ones that are not acquainted with entity relationship diagrams please take a look at the lecture the slides are on the web on our website good another possibility to describe the conceptual model is the unified modeling language and they are usually described through class diagrams the entity becomes a class I think those of you who used to program in Java or C++ and describe classes objects so on understand this better like this so an entity becomes a class you then of course have the attributes also very intuitive and then you have relationships which here become associations of course you have special associations like generalization composition aggregation we've seen an example before as a class lecture and then the instance one special lecture now the logical model makes the connection between the abstract conceptual level and the database management system for a relational database management system this is the case of tables which are described through attributes the columns and the rows the tuples of course very intuitive for relational database management systems regarding the physical model as I previously mentioned this refers to how the data is stored for example on the house a hard drive it comprises data types indexing options access paths and other parameters I've just printed here an image of how this would look like in a mysql database when you just said data description language good now that we've seen how this can be classically done in relational databases let's see what happens with data modeling in data warehouses well as we have seen the basic steps of how do you define what you need what are your entities of interest what is the basic relationship between of your entities of interest is exactly the same in the data warehouse than it is in relational databases but the problem with the data warehouse is that the environment oopsie the environment is rather complex so there's not only the single relationship okay a person owns a car or something but there's a different aggregation levels how many blue cars does a person own how do we deal with that that is one of the major problems of databases also the relationship that exists may change over time we have a whole historical rep up of the data and the relationship that used to be there five years ago may not be present or not be needed any longer in a certain company so your model kind of evolves and that is an interesting thing that is also not really present in relational databases that is something that is new and that needs to be addressed in data warehouses there is one part the 
modeling gets more complex the other part is that the modeling involves more steps because how do you deal with a normal database well you basically decide for some model you say this is my entity it's the customer and the customer owns a car this is two entities and one relationship between them in the data warehouse you get the data from some operational databases that offer a certain schema that has a structure and you have to respect that structure in some regards so you can on one hand not model freely but on the other hand you need different information the information that is needed for decision support so you have to get that data from the operational data basis into the form into the schema you want it to be in your data warehouse which again refers to the extraction transformation loading step that has to be maintained that has to be worked out and of course you want the user on the presentation side of the data to understand what the data is about how the data is organized so the modeling of your data warehouse also has to be some explanatory information it has to explain users or application designers at least how this data can be used and what the semantics of this data really is so what you basically want to do is you want to model business query and of course when you have a business query there's always a certain paradigm always a certain vision that your company and compasses something that the company is interested in and of course your data that is needed for decision support should also focus on these visions on these ideas that you have and basically on the business model that is on the center of it all because the most decisions that you have in the data warehouse the most strategic decisions that you need for your company are concerned with earning more money retaining more customers producing more products getting more money for the company for the good of the company basically that is at the heart of it so you need to identify the questions the entities that you're interested in and you need to define the purpose and the subject of the data warehouse okay once your company is a service company what should be of the center of your company what should you do that a lot of focus on you want to provide services hmm no ideas well probably the customer should have a role in it shouldn't he because he's a service user and the more comfortable the customers was the services offered the better your business case if you're a production company you're producing some goods the customer may not be too interesting there should be a market for your goods sure but who exactly buys your products probably couldn't care less okay so the customer is not the central part of the model this basically the the kind of queries that you have to address when you think about the goal of your data modeling for the warehouse then you of course have the subjects that you need to consider as I said for example the customers who bought product the dealers who sold the products what was actually sold where was it sold when was it sold how many were sold all these questions may arise and maybe entities in your data warehouse as you had entities in your relational database when doing conceptual database model okay quite easy so the for the conceptual design in data warehouses those simple entity relationship or you ML techniques are not important appropriate because you also need this multi-dimensional data and with ER or you ML you're only modeling the data on the lowest level on the least 
aggregation level, and everything that is aggregated and transported upwards, but may be vital information for your company, cannot be expressed in those models. That is the basic idea: the ER model has been built to remove redundancy and to get down to individual records, and this is something we do not want in data warehouses. We embrace redundancy, because it makes our analysis quicker, as long as it's controlled, as long as all the data is reconciled; we don't even care about stale data, I mean, sales of today, sales of yesterday, for the big picture it's the same thing. This is why with the conceptual design in databases you basically optimize for online transaction processing: very quick inserts, very quick updates, lots of them. In the case of data warehouses you have to model the processes, you have to consider materialized views for different stages of aggregation, you have to consider how data is related at different stages of aggregation. Maybe for the taxes only the yearly data is interesting; how can that be reflected in the model? Maybe for the sales or the promotion that you want to do for some certain product, only certain geographical regions are interesting: if you're promoting fridges, Antarctica will probably not be one of your concerns; if you're promoting heating systems, Antarctica might be quite an interesting market for you, not a big market, but still quite interesting. So that's a difficulty that you have to deal with. Basically, in the conceptual model for data warehouses you focus on facts, measures, dimensions and hierarchies. What are the facts? The facts are the basic entities that you're interested in; so for example the sales could be a very good fact, or the customer satisfaction could be a very good fact. Everything that you are interested in, that you need for strategic decisions, is a fact. And of course, if I have a fact, if I have sales, I also have to see how I measure it. This is where the measures come in: the measures are attributes that describe the facts. Sales is not interesting if I don't know how much I earned, or how much product I shipped, or where I did the sales; these are all attributes that depend on the fact and that make up the fact. For the different measures describing the fact you might have more information. For example, if I say when something was sold, this is a point in time, this is a date; but the date is not only a day, it also belongs to some week, or to some period, some fiscal year, whatever it is. This information is called a dimension: it's basically a new axis that allows different aggregations, different ways of focusing on that specific date. And there the hierarchies come into the game: if you have a dimension, you can have a hierarchy of information. Of course all the daily data could be aggregated into weekly data, the weekly data could be aggregated into yearly data, and then you have the same data three times, in a certain hierarchy; a year is made up of 12 months, and each month is made up of about 30 days. So when you have to conceptually design a data warehouse, you use a multi-dimensional entity relationship model, or multi-dimensional UML, who would have guessed. You need to put this multi-dimensional information directly into the model. There are some other methods, the dimensional fact model and related approaches, which we will not discuss here, because, I mean, the different paradigms are getting kind of, well, out of hand. In any
case the multi-dimensional entity relational model is an intuitive representation of the data with the multi-dimensional information so that you can pick the distinct aggregation step that you need for getting a good information quality and fast access to your information so this multi-dimensional model also transports what data is available in your data warehouse what data can be used for analysis or reporting easily in a pre-aggregated fashion that's kind of an explanatory component already that shows you how it works basically it's built on the ER model so you also have the basic entities and the relationship between things but you have a specific possibility to specify also multi-dimensional semantics so what is the dimension and what is the hierarchy that is given by the dimension since it's a specialization of the ER model you basically use all the art ER part all the old ER parts you have the entities in small boxes you have the attributes in those little oval circle and then you have all the relationships between them the good thing is that basically taking the least aggregation step into account it's a normal ER model which you would find but then you have to find some additional elements to express multi-dimensionality and of course that should be a rather little number of attributes that you add and that makes it still easy to understand and there's the biggest problem how to get the multi-dimensional semantics into the game basically you have three main constructs the first one is the fact note the fact note is the entity that you are interested in and as opposed to the normal box that entities are made of in ER model you basically do it as a little cube to show the multi-dimensional idea behind it and each fact is described by some attributes of some characteristics and then you need a special special classification edge it looks like this that tells you what classification level you are on so each of your characteristics could be connected to some dimension that on a certain classification level or a certain number of classification levels allows you to aggregate the data or to access pre-aggregated data make a short example let's take a store scenario which we have in the normal relational ER model so we have a store and the store is in a city two entities store is an entity real-world existing entity the city is an entity real-world existing entity and I have a relationship between them store is in city okay and probably I describe cities by some characteristics attributes here city as a name city is in some district huh so for example I don't know San Francisco California or something like that would be a perfect description for some city and each stores sells some articles so we have another relationship here to the entity article maybe it's a ternary relationship so you also record the date that something is sold on an article is packed in a certain package and then send off an article belongs to a certain product group okay this can all be expressed by a basic ER model where's the difficulty now if we think about where something is sold then our major concern is or probably the article probably the product in question but to get to where an article is sold we need one join two joins to get to the information and aggregation levels all the products that were sold in the US I can find out whether a city is in the US but where's the information about the US cannot be captured in that model so what we can do is we can say well what we are interested in as a fact is sales we 
So what we can do is say: what we are interested in as a fact is sales. We are concerned with where and how many articles are sold, and at what price, and these are the characteristics; they are like attributes of the sale. For example, one of the characteristics I had in the original model was the date an article was sold on. I want to transport that information into my multi-dimensional model, because it is a defining characteristic, so I say: a certain sale of some certain product happens on a certain day in a certain store. And now I have these edges with the little fins at the tail that show me there is a hierarchy of aggregations behind that basic information. So a sale is not only "a product is sold in a store at some date", which is basically the same information I have in the relational model; in addition, the store lies in a certain city, the city lies in a certain district, the district lies in a certain region, and the region lies in a certain country. So this is a dimension: geography. And now I don't have to do all the joins for queries about what was sold in the United States, but I have that level of aggregation already pre-computed, so whenever I refer to some sale and want to know where it was sold, in terms of country, I can directly get this value from the multi-dimensional model. The same happens with time, and the same happens with the product, which I can put into product groups, product families and product categories.
As we see, this is strongly hierarchical: a day has to belong to some year, a city has to belong to some district or to some country. But there can of course also be alternatives. For example, a day belongs to a week and a day belongs to a month, but I don't want to specify whether a week belongs to a certain month; then I just have alternative paths in my multi-dimensional model. And these are basically the three main types of information that I convey: the fact node, the characteristics that are needed to describe the fact, and the hierarchy of information that these characteristics are part of, which is basically this little tail-finned arrow. Is that getting clear? Does everybody see how this works, and why it has quite a bit of explanatory quality? The basic idea is that from this model I can directly see which facts and which characteristics are actually pre-computed, so they can be accessed directly, without doing any joins, without doing any complex operations, simply because they are in the hierarchy. And I can see what the basic ideas behind sales are for my company: what do I consider important about sales? The location, the type of article, the type of product, and the time. The price, for example, has not been specified here; it is not important for my business. If it were important, it should be in the model somewhere. Just like in the entity relationship case: if I model some product, I can model the price as an attribute or not; that is my decision. I have to work with the model, I have to live with the model, and all the different applications that query the model can only rely on what I do model.
Good. Sales was selected as the fact node, and the dimensions that I have are product, geographical area and time. The dimensions are represented through basic classification levels, so, for example, there is no level smaller than the product itself; I cannot buy a part of a product. If I want to have spare parts or something in my model, I have to add a hierarchy level.
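One way to picture what the multi-dimensional model adds is to sketch how such a sales fact with its three dimension hierarchies could be laid down in tables. This is only an illustrative sketch with invented names, not the notation of the mE/R model itself: the fact node becomes a fact table, and each dimension carries its whole classification hierarchy as columns.

```sql
-- Illustrative sketch: fact node plus the three dimension hierarchies.
CREATE TABLE dim_geography (
  store_id INTEGER PRIMARY KEY,
  city     VARCHAR(50),
  district VARCHAR(50),
  region   VARCHAR(50),
  country  VARCHAR(50)
);

CREATE TABLE dim_time (
  day_id DATE PRIMARY KEY,
  week   INTEGER,   -- alternative path: day -> week
  month  INTEGER,   -- path: day -> month -> year
  year   INTEGER
);

CREATE TABLE dim_product (
  article_id       INTEGER PRIMARY KEY,
  product_group    VARCHAR(50),
  product_family   VARCHAR(50),
  product_category VARCHAR(50)
);

CREATE TABLE fact_sale (
  store_id   INTEGER REFERENCES dim_geography(store_id),
  day_id     DATE    REFERENCES dim_time(day_id),
  article_id INTEGER REFERENCES dim_product(article_id),
  units      INTEGER,   -- price was deliberately not modelled in the example
  PRIMARY KEY (store_id, day_id, article_id)
);
```

With the country column sitting right next to the store, the question about sales in the US no longer needs the chain of joins from the plain E/R version.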
This model does not allow me to talk about spare parts, and it does not allow me to talk about the hour that a sale was done. It just talks about the day, the store and the article, and I am responsible for that. And there can be alternative paths in the classification levels, where I say something has been done in a week, or something has been done in a month, and there is no connection between them; that does not matter.
The same thing I have just done with entity relationship modelling can be done with UML. There are also these multi-dimensional core ideas to expand UML so you can actually use it, and basically there are some domain-specific mechanisms that can be used for building multi-dimensional UML models. One is the stereotypes, where you can build new elements from existing elements, which is kind of the idea of aggregation; then there are the tagged values, which are new properties; and there are constraints, where you can add new semantics on top of what you are interested in. Going into the stereotypes: in multi-dimensional UML you grant a special semantics to some construct without actually modifying the original construct. There are basically a couple of representations of a stereotype: there is the icon, then you can talk about the basic fact in what is called a decoration, which is kind of like an annotation, then you can show the fact with its label, and of course there could also be none. These are different types of representation for the stereotypes. With the tagged value you can just say: I have a tag that describes what the value is about, and a certain data value for that. So the tag could be, for example, the unit price given by some price, or you could talk about some formula which computes the data value that is used for aggregation. If I need sales for a year or something, then I have to pre-aggregate how many units were sold in the time intervals making up that year, and this aggregation can be specified in multi-dimensional UML, so it explains how the different aggregation levels come into being.
So if you go into the UML, you have, for example, the dimension of sold products, which refers to the product; you do a roll-up into the product group, and you do a roll-up into the product category. This is basically the idea of exploiting the hierarchy. Then you have the geography: store, city, region and then the country, and you can also do a roll-up from the product into the country, and so on. Basically it is the same thing: the fact table is here, the dimensions are there; it is just a different way of showing it. Doing it with UML takes a little bit of getting used to, but it is really a simple and very self-explanatory way of seeing what your data warehouse is about and what the basic items of your concern are.
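The roll-up edges of the multi-dimensional UML correspond quite directly to the ROLLUP operator of SQL:1999, which we will meet again later in the course. Using the illustrative tables sketched a moment ago (again, invented names), the whole geography hierarchy can be aggregated in one pass:

```sql
-- One pass produces the aggregates per city, per region, per country,
-- plus the grand total: exactly the hierarchy that geography rolls up along.
SELECT g.country, g.region, g.city, SUM(f.units) AS units_sold
FROM   fact_sale f
JOIN   dim_geography g ON g.store_id = f.store_id
GROUP  BY ROLLUP (g.country, g.region, g.city);
```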
Good, what I want you to take away today: considering storage structures, we need an extension of what is present in relational databases. Multi-dimensional databases, whether they are mapped to relational kernels or whether they are column stores or flat files or whatnot, does not matter; the multi-dimensional structure of the database, the multi-dimensional database paradigm, is what is important. As for the architectures, there is basically the one-tier architecture, which is of rather theoretical value; it is the one that is often promoted in typical literature, for example Inmon often talks about one-tier architectures for the data flow, but it is rather theoretical. Usually you have n-tier architectures, and you can safely say that the most popular are two-tier and three-tier architectures, depending on whether you have independent data marts or dependent data marts. Then you have the web-based architecture, which leads to a reduction of cost, because you can ship the data to different areas, you can keep up technologically advanced solutions that are managed by your database management system, or you can use heterogeneous environments where, depending on the area where your data warehouse is built, you use a specific software landscape. On the other hand, with web-based architectures you get security issues: you will have to think about authentication, you will have to control data access. But it may be a good idea to go into web-based architectures, because service-oriented processes are very popular nowadays in most companies, so it is actually quite a good idea to prepare for that.
Usually a data warehouse will grow very large, and once the company is not really your next-door carpenter but rather an international company like Volkswagen or IBM or some banks, you name them, then the data warehouse becomes on the one hand too big to be kept locally, and on the other hand too dangerous to keep locally, because you could lose data and you need a certain amount of replication. So what you do is distribute it geographically, and usually you don't want to care about the exact way of distribution, so you let the database software take care of that and use a distributed database management system.
The data modelling, the conceptual modelling, is, as we have seen, in a way close to entity relationship modelling or UML modelling, but you can't use the original relational entity relationship model or the plain UML, because you have to add hierarchies for the additional values you store, for the additional levels of aggregation, and you have to allow for facts that are the basic entities of your concern but are composed of characteristics that are in turn measured in some dimensions. This is something different, and proper methods are multi-dimensional entity relationship modelling, of which I gave you a short overview, and multi-dimensional UML, which also exists; you can read in any book about data warehousing how it actually works. It is basically just notation, but the basic concepts, I hope, have become clear: we get the different levels of aggregation, the different dimensions, the facts, and the characteristics describing the facts. Okay, questions? Well, that's it for today. In the next lecture we will go a little bit deeper into the logical model and into the physical model, that means how to actually build the data warehouse. If you are at some point in a position to design your own little data warehouse or big data warehouse, these are skills that might come in useful. And if there are no more questions, which doesn't seem to be the case, I thank you for your attention.
In this course, we examine the aspects regarding building, maintaining and operating data warehouses, as well as give an insight into the main knowledge discovery techniques. The course deals with basic issues like storage of the data, execution of the analytical queries and data mining procedures. The course will be taught completely in English. The general structure of the course is:
Typical dw use case scenarios
Basic architecture of dw
Data modelling on a conceptual, logical and physical level
Multidimensional E/R modelling
Cubes, dimensions, measures
Query processing, OLAP queries (OLAP vs OLTP), roll-up, drill down, slice, dice, pivot
MOLAP, ROLAP, HOLAP
SQL99 OLAP operators, MDX
Snowflake, star and starflake schemas for relational storage
Multidimensional physical storage (linearization)
DW indexing as search optimization means: R-Trees, UB-Trees, Bitmap indexes
Other optimization procedures: data partitioning, star join optimization, materialized views
ETL
Association rule mining, sequence patterns, time series
Classification: decision trees, naive Bayes classification, SVM
Cluster analysis: k-means, hierarchical clustering, agglomerative clustering, outlier analysis
10.5446/325 (DOI)
Hello everyone. It may come as a surprise, but I hope it doesn't. This lecture will be held entirely in English language. So if that is a problem for somebody, raise your hands or don't say anything at all. The topic is data warehousing and data mining technique and the reason why it's in English. First of all, because it's important for you to know proper English and to be able to grasp concepts even in English. Second of all, this topic in all the database curriculum is the easiest. So concepts are really easy to grasp even in English and most of the technical terms are English anyway. And third and the actual reason is that this lecture is part of an international course of study. The Master of Information Technology and Information Systems that is a master course that is held conjoint with the universities of Hanover, Klausthal and Göttingen. And so we are very proud to be part of this curriculum and I hope this helps you to get some of the concepts we have here in English. It's a good exercise for later job applications or stuff anyway. First of all, this is a lecture that is very important for computer scientists since it's database is sure, but it's a lecture that is very important for business computer scientists too. So enterprise information systems, business information systems, that is a course of study that should definitely know about data warehousing because it's the reality out there in almost every organization. Who's from business information systems? One, two, three. One, two, three. Oh, that's not too many. Interesting. Good. So tell your colleagues about it. This is really important to know. Anyway, what we will be doing, I will start with some organizational issues and then go directly into the lecture. So the lecture is from the 28th of October until beginning of February every time from 3pm to quarter past five. So that makes three lecture hours if you count correctly and the interesting thing is why do we do this in such a large block with a short intermediate break? The answer is because we try to integrate everything. We try to get the exercises within the lecture, within the coursework and have some of the background, some stories about it, some historical remarks, how it came to be and what is important and what is not disguised as kind of detours. So where you can just lie back, relax and listen to the things that are happening. This is not really for thinking, this is not really for understanding, but it's rather meant to be background information for you so that you can see some of the issues that are involved. We also have the solution for the exercises integrated into the lecture and discussion about homework that will be homeworked for I think every two weeks or something? Every week? Oh dear. So there will be homework every week now. And finally there will be exams, oral exams and we used to have as a prerequisite for these exams that you have to get 50 pence of the homework score, of the total available scores. We cannot do that anymore due to some regulations of the modules and the ministry told us that it's not possible to have two issues in one exam, so having 50% of the total score and doing the oral exam would be two parts and this is no longer a viable option. So I still very strongly advise you to try to get the 50%. It's not difficult. Most of the lectures are really repeating what you have read on the slide, understanding, thinking about a little, which you will have to do for your exams anyway. 
So it's a good exercise for the exam and everybody who's regularly done the homework will have no problems whatsoever during the exam. So please, I really want to encourage you, try to top up the lectures with some of the homework at least. The credit points for this lecture is four or five, depending on whatever Studienordnung you study in. The English people don't have Studienordnung, so this is a typical German term that is exported in some country. The basic point is if you have the old one, then it's four credits. If you have the new one, it's five credits. But it's not automatically five credits, you have to change your course of study. You have to make sure you do that, then you get five credits. Not a problem here. The next part of the introduction is always why you should be here. So what is the interesting part of this lecture? What are the things you will learn and what is the knowledge you will gain from this lecture? The interesting thing is that if you have bad business decisions, as everybody can tell you, bad management decisions, you will be suboptimal in what your organization performs. And this is to say the least. There have been some outright disasters in industry. For example, the crash of Lehman Brothers. That was very bad, strategical decisions. And they were not inventing these decisions on a whim, but going out on a leave. Let's do something that ruins all the economy today. But they thought they were doing the right thing. How did they come to think this? And that is where the data warehousing comes into the game. The data warehousing is one of the major sources of information that management draws the grounds for strategic decisions. And that really goes through all areas of organizations, be it companies or also in the private sector. There is a lot of thing to do. The interesting part is for the data warehousing, more technical perspective, what is the data that your organization owns? And on the other hand, what is called online analytical processing or OLAP for short. This is how this data is kind of analyzed, how you get the information that we really want to have. And the information is not out in the open. I mean, I can use a database and put in some SQL query and then I know everything that is in the database. But most of the information that is really important for strategic decision is hidden somewhere in the data. Or it can be extracted by putting some pieces of data together in a very complex manner. And this is what you have to do. You have to see the data from a different angle. You have to look at it in a different way. So what about specific products that I sell? Is it worse while selling these products, producing these products or is it kind of like not taking off? What about the clients? Do I have clients that never pay? What about the time? So the Christmas market or something that is very important for most companies. And all they will have an increase in sales during Christmas time. Or the geographical area. Does it really pay off to sell fridges to the Arctic Circle? Probably not. This data that is not open, that is not out in the open, but data that is rather somewhere hidden in your organizational data, in the sales data, in the customers, in whatever. So what you have to do is you have to get statistics over all this big amount of data. And then putting these statistics in a nice Excel slide and bring it to the next board meeting. Obviously, the sales in Northern America are not worse while. Helps you to get your point across. 
Because who can argue with statistics? Only somebody who understands statistics, and that is not management. So that is basically one point. And the second point is that having good statistics is always the basic building block for making good predictions. So you can, to some degree, predict future developments: are the sales going up or down, is the stock market at a turning point, stuff like that. You will not always be right, but you get the idea. And that is what this course is basically about. This is what you should know, because it's interesting. But it's not only interesting; we all love databases, and we all love being paid for what we do. And if you go to some of these job sites, Monster or however it's called, you will find a lot of job offers in this technology, like analysts, data analysts, where you see: okay, five to ten years SQL Server experience, working with .NET architectures, data warehouse experience, OLAP analysis services, experience with data marts, staging, metadata, extraction-transformation-loading experience, and so on. And that will pay you about 150,000 bucks a year. You find loads of these; this is really a market, this is really an article in demand: data analyst. And this is what you can get if you follow this course. And don't come too late for this course.
I also want to point out some literature that I like very much and that is kind of standard in this area. For data warehousing and data mining there are some very good textbooks. Probably the most well-known is the so-called data warehousing Bible, or the Inmon Bible, by William Inmon: Building the Data Warehouse. A very good book that basically deals with all the aspects of data warehousing, where you really get to know what you're doing, why you're doing it, and what data warehousing should or shouldn't do. And there's a second one along the same lines, The Data Warehouse Toolkit, which is more on a technical level, by Ralph Kimball. These two are very good books for a general view on data warehousing. I also point out one German book, Data-Warehouse-Systeme by Andreas Bauer and Holger Günzel. It may be a little bit superficial, but it's kind of nice, and if you want something in German to accompany the course, to read about some of the concepts in German, that might help you. It's kind of a nice book. If we go into more technical directions, specifically the building of the warehouse, then I can very warmly recommend The Data Warehouse ETL Toolkit, which is kind of like a second volume to the Data Warehouse Toolkit, again by Ralph Kimball. It specifically concerns the extraction-transformation-loading process of the data warehouse, so how it's really built up, not how it's used: how to get the data into the warehouse from the productive databases, how to clean it, how to normalize it, and all that kind of stuff is in that book, very well written. As for the analytical processing part, there's a very good one by Erik Thomsen, OLAP Solutions, and one by Robert Wrembel, Data Warehouses and OLAP. Interesting books, I can recommend them very well. And that basically covers most of what we do here in this lecture concerning the warehouse. Yeah. No, yes, that was too much. Okay.
So what are we going to do today? Today I want to give you a basic introduction to the whole problem. So what is a data warehouse? How do you use a data warehouse, and what do you use it for?
And then we'll go briefly into the life cycle, into the phases that warehouses need to fulfill its full potential. Basically, a Data Warehouse is a very big data byte. So we all know about databases. I mean, we all had relational databases. One, who had other database courses except for the basic? Okay. And the others all only had relational databases one? Probably, yes, no. Who didn't have databases at all? Yes. So everywhere had at least relational databases one. Well, that was suffice for our purposes. Basically, we can see a Data Warehouse is a very large database. And now, of course, the question occurs, then every large database is a Data Warehouse? And the answer is no, that's not true. So we have the typical relational system, the Oracle systems or whatever, you know. They are not Data Warehouses. They are just large. And every Data Warehouse is also large, but it has some characteristics that are totally different from databases. And we will go into them in just a minute. If I say large, I mean terabytes. I mean, really, really large. My SQL is not a fit. 800 gigabyte at least, but most do really have several terabytes. And of course, several terabytes always implies that doing that on one server is a very difficult thing. Most of the data is globally distributed for security reasons. What if your computing center burns down? And the data of your organization is lost. Very bad idea. And on the other hand, because the data is needed in different facilities, so usually big companies own a couple of facilities and all will need to access the data. You can well imagine that this may cause severe bottlenecks if every... Well, if the whole database runs on a single server. But mostly, Data Warehouses are distributed. Well, then every distributed database is a Data Warehouse. Now, that's not true either. A Data Warehouse is something really specific. It's a collective data repository. And that means that all the operational data that is produced by your company is stored in the Data Warehouse at any point in time. So you have a complete history of your data in the Data Warehouse. You don't change it. You know, in normal databases, as large as they may be, you update data, you put new data in, you delete data, you don't do that with Data Warehouses. You keep all the data with a timestamp. Okay? And why do you do that? Why do you need the whole history of your company? Well, the interesting part is that if you want to do proper statistics, they will have to cover some time to see trends emerging, to make predictions for the future, the models that you used for getting your predictions are true, are valid. Then you, of course, need a historical perspective on the data. That is what you're doing. The process, how you get your operational data into your Data Warehouse, because that's not a single system, that is several systems, is called ETL or Extract Transform Load. And that means that from the normal databases you have in your company, you have some kind of a process that will be investigated during the next couple of lectures that cleans the data, that normalizes the data, that works a little bit with the data to get more information out of it, and then it is put into the central Data Warehouse. That is the real big database, I'll say. Those are normal databases. These are the ones that are covered by relational databases, one and two. This is the one that we're dealing with in this lecture, in this course. And then you can do some analytics on that, covering so-called data maps. 
We will do that during, I think, fourth or fifth, something like that, six lecture, and then comes the interesting part that will be covered at the end of the lecture, like the last four or five lectures, the analytic part. So how do you do OLAP on one hand, and how do you do data mining? What are these algorithms to get out the hidden information? How do I know which products to put together on the same shelf in my supermarket so people will buy more? These are the interesting parts. We will cover these algorithms, and you will know about it. If you own a supermarket at one stage, you're able to make sure that everything is in order, that cheese is very close to the red wine, because they're always bought together. This is basically what we're doing, and this is the same way the data flows. The data flows from the transactional systems, from the everyday use systems, where all the production data, the customer data, the financial data, all the kind of data that is collected by your organization is hosted and is processed. The data flows with the help of the ETL process into the data warehouse, which is just like a real-world warehouse, you know, like you have lots of shelves and you just stuff the data in, and then at some point, somebody goes through this data warehouse and visualizes some connections, does statistics, makes predictions, or just creates reports for management, like, was it a good year or was it a bad year? All this can be done with data warehouse. Why don't we use databases for that? I said a database, a data warehouse, basically a very large database, which is true. Well, if we compare it to a database, I said that the data in a database does not have any historic impact. It's changed and then there may be a change date, but that's about it. You don't know about the old values or the history of some data item. You do know in a data warehouse because you never change anything. And that is, of course, interesting if you do strategic, tactical decision, that you need to know some of the development to predict what is coming or what is good, what is bad. It also implies that for management decision, you only have very small number of transaction because management board meetings take like most of the time of the day, and they discuss three, four, five things that have to be prepared by analyzing the data warehouse. If you're selling things, products, or have customer contact, that will be three, four, five things you do per second because you're selling thousands and thousands of your products. And that is, of course, a very interesting thing. You have a very small number of transaction, those transaction may be arbitrarily complex because they need a lot of data to sift through. They may need a lot of different angles to look at the data. So maybe grouped by certain countries, grouped by certain time spans or the development over certain time span. So these are usually very long-running transaction. And they don't have to be online all the time. But you get your analytics and then the system is kind of free to use for other people or for the ETL process or whatever. Whereas a database that is used for operational purposes has to be online 24 by 7 because you cannot afford not to sell something because your system is down somehow. You need to be available. 
So if I compare them directly to each other, the online transaction processing, that is basically the traditional database and the data warehouse, on the other hand, then the business focus is that the database is operational. It's really for sales data. You just put that in whenever a unit is sold. And that happens very often. The data warehouse is for tactical, strategical decisions. It happens very rarely that you need those. The transactions that you have to do is very large in a normal database, very small in a data warehouse. How many strategic decisions do I do a day? How many units of product do I sell a day? Totally different. The transaction time is short in databases. How long can it take to update some record? And very long in data warehouses. How long does it take to group everything by country and who bought what and why and the time he needed and blah, blah, blah. Lots of joints, lots of orderings, lots of aggregation and some other data functions. So this is basically the major differences that we see between databases and data warehouses. Again, saving. Kind of annoying, but I have no idea if that can be... Very important information. Ah, we should use an ETL process for that. Okay, come on. This is ridiculous. Okay. Good. Some definitions of data warehousing. So what the expert said, I talked about the Kimbell book and the Inmon Bible just a minute ago. Now, Kimbell says it's basically a copy of transaction data that is specifically structured for query and others. That's a very simple definition. You have the transaction, the business data, that is operational. You just copy that into the data warehouse and you copy it in a very clever way so you can use it for later and others. So it's normalized and all kind of that. Inmon is a little bit more to the point. Basically, he says that a data warehouse is subject-oriented, integrated, non-volatile, time-variant, a collection of data in support of management decision. What do these four points need? I mean, subject-oriented means it's organized in a way that all the data is about the major focus of your company. So if you're selling computers, then the focus of your data in the data warehouse should be the individual computer that you're selling. And all the data that is related to it should be grouped around the central product, the customer, the cost that you have in production, the orders that you have, warranty claims, or the accounting data that you need that is grouped around the central entity of your data warehouse. That means subject-oriented. To give you a short example, if we take the customer as a subject in the data warehouse, then we could have base customer data from different periods. And the activity that all these customers did. So we have the base data, like addresses, and maybe some demographic data, the age, the gender, or if you buy something at some of the big markets, like Saturn or MediaMarkt or something like that, they very probably will ask you what your postal code is, your zip code is. Anyone ever experience that? That when paying for something, they ask, what is your zip code? Did it make you think, or did you just hand it over? What are they doing with your zip code? I mean, it's totally uninteresting for the transaction, isn't it? You buy it, doesn't matter where you come from. Unless you had an idea? They analyzed this zip code from their customers. Exactly. They analyzed all the stuff they had. Exactly. So they analyzed with the zip code where the customers come from. 
So they find out what the basic area a market serves is, and of course that is very important information. For what? I mean, I sell the stuff, so I couldn't care less where the customers come from, couldn't I? Yeah? Advertisement? Yes, right. So if a large number of my customers come from a certain area, then I should probably invest in some ad campaigns in other areas that I want to address. Yes? That's right. So basically you can find out which markets are performing well and which are kind of underperformers, and you can identify areas that are obviously in need of some new markets. Where should I put up the next market? Well, just order your customers by the distance they have to travel, and the customers who are the farthest away and are quite large in number obviously need a new MediaMarkt. And that's a strategic decision. A strategic decision that is not obvious from the sales data; it is only obvious after you ask people for their zip codes when selling things. And all that information really lands in one big data warehouse that is focused around the customer. Usually there are lots of tables in a data warehouse, a hundred tables or whatever, and they are all related; they are all concerned with the object of desire, the customer or the product or whatever it may be, whatever you focus on. That is the center of your data warehouse. That means subject-oriented.
Second, it has to be integrated, Inmon tells us. The data warehouse contains all the information from multiple transactional systems that are spread throughout your organization. That means, of course, that all the data that gets into the data warehouse has to be consistent across all of these base systems, otherwise you will have inconsistencies in your data warehouse. Every inconsistency that you experience in the underlying systems will be moved directly into your data warehouse, which is not a good thing if you are thinking about strategic decisions, because, well, that is the garbage-in, garbage-out principle: if I have bad data, I get bad decisions. And of course we want to deal with that somehow, so all the data from the underlying systems has to be made consistent. And that means integrated: you integrate the data from the different operational systems. So whatever you have, you may have the gender, or some measurements, or some conflicting keys, or something like that. What if somebody stores the data as male/female? Other systems just use M and F for male and female, some use 0 and 1, some use woman and man, whatever it is. How to get that into the data warehouse? Well, you have to think about one common format that fits all. Just decide on one, and then, while taking the information out of the other systems, just transform it. That's actually the "transform" in extraction, transformation, loading; that's one part of the ETL process: transform your data so that it is in a consistent representation. But of course that's not all. What happens if you have some customer buying a product, and the sales department says he is male, and the warranty department that has some claims says she is female? Same customer. It can happen. None of the underlying sources will notice; how should they? They are all different worlds. But your data warehouse will notice, because there are two entries, one saying she is female, the other saying he is male. Obviously impossible. So you have to clean your data. You have to decide which one of the alternatives is true, or best say nothing at all: gender unknown. That is basically the integration part.
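As a minimal sketch of that transform step, assuming a hypothetical staging table with the raw gender codes from the source systems, the mapping to one common format could look like this:

```sql
SELECT customer_id,
       CASE
         WHEN gender IN ('M', 'm', '0', 'male')   THEN 'male'
         WHEN gender IN ('F', 'f', '1', 'female') THEN 'female'
         ELSE 'unknown'                  -- conflicting or missing values
       END AS gender
FROM   staging_customer;                 -- hypothetical staging table
```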
So if you have non-volatile, that refers to the historical dimension of the data warehouse. So the data in the data warehouse is written, but not updated, not deleted. You don't get anything out of the data warehouse. You just put it in. It's not like a real world warehouse, like where you trade goods, where you store goods and then sell them off at some point. And the analogy breaks it to some degree. You just stuff data in and you keep it. At some point you might delete data that is older than 10 years, because nobody is interested anymore, or stuff like that. But still, you don't do the regular update for everything that happens. Somebody buys a computer, you note that in the data warehouse, somebody returns the computer because he didn't really want it. You note that in the data warehouse too? No, you don't. You just keep the old records and you put some new records into it. So you put a new record, it was returned, and then it was sold again. So you have three different records for the same machine. In the product database, you will have a single record that has been updated for a new customer who eventually bought the item. If some changes occur and you snapshot, the record is written that is exactly something was returned, or something is no longer valid. But you note that as a new record, you don't delete the old record or update the old record. That means non-volatile. It's basically stable. It's just growing. And it's definitely time-varying. The changes to the data warehouse are tracked and recorded. So you have a history over time. You can see how some entity changed. So if your central concern are customers, for example, then you will see all the things a customer bought during his life cycle with your company. And you will see, oh, this is a good customer because he buys every five weeks. This is a bad customer because he bought once and never came back. And that was three years ago or something like that. So maybe he needs another advertisement because he's interesting still. Maybe the other customer who buys every five weeks needs a customer card so he can have big savings when shopping with us. So for all these kinds of things, for all these customer processing or the customer relationship management, you need the historic data. What was the development of a customer? Is the customer happy? Well, if he buys repeatedly, he's probably happy. If he used to buy repeatedly, but then at some point suddenly stopped, he may be unhappy. He may have moved places, you know, like you never know. But still, there's some reason that this had to stop. And you can find out about that. Of course, you have different time horizons. So if I look at customers, there will be probably a couple of years that are interesting for me. If I look at products, the sales, I will not cover them in years. I will rather cover them in quarters or even in days. I say, you know, like, what happened today with my company? So you need, you rather need a, as the freshness of data is concerned, a large time frame, 10 years, 5 years, 10 years, that you might consider having the data, because predictions for the next year that you do on grounds of 3 months are worthless. You cannot predict with a very short time frame. Well, you will probably need 5 to 10 years. In operational systems that are somehow different, you know, like, I mean, product data, once a product is solved, it goes to the warranty department and maybe comes back at some point. But basically for the production, it's not interesting anymore. 
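To picture the difference with the sold-and-returned computer from a moment ago, here is a small sketch, with all table and column names invented: the operational system overwrites one record, while the warehouse only ever appends new rows with their dates.

```sql
-- Operational system: one record, updated in place.
UPDATE product
SET    status = 'sold', customer_id = 4711
WHERE  serial_no = 'PC-001';

-- Data warehouse: one row per event, nothing is overwritten.
INSERT INTO fact_product_event (serial_no, event, event_date)
VALUES ('PC-001', 'sold',     DATE '2009-10-01'),
       ('PC-001', 'returned', DATE '2009-10-05'),
       ('PC-001', 'sold',     DATE '2009-10-12');
```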
It's not interesting what was produced 30 days ago. Okay? Well, so let's come to the general definition. A data warehouse is a repository of all the organization's electronically stored data. Whatever the organization needs, whatever the organization is about, is stored in the data warehouse, and it's specifically designed to facilitate reporting and analysis. So we need a normal database for the storage capabilities, we need a model that somehow groups the data around a certain subject, we need a lot of storage space, because the data that we're collecting has a ten-year time horizon, we need ways to extract reports and to analyze the data, and we need ways to prepare the data in our organization for being analyzed and for giving good quality reports. That is basically what data warehousing is all about. And of course you usually do that on custom-made hardware that is specifically built for this function. You will not have a data warehouse running alongside your transactional systems, or running on some computer that somebody works with in some other scenario, because if the data is destroyed by some unfortunate event, that would be a very heavy blow for your organization. And you very often have database management systems, typically Oracle or Teradata, Microsoft SQL Server, IBM DB2, which are basically relational underlying systems for the actual storage part. You retain the data for long periods of time, so you have to think about backups and maybe near-line or offline data, because you have large storage devices. You have to consolidate the data that you get from a variety of sources, and you have to figure out what the data model of your organization is. What should the central entity be? Is it the product? Is it the customer? Is it the earnings? Is it the sales? What are you really interested in? How does the organizational data fit the subject? That is something a warehouse architect has to think about first. So let's go into the first detail and see a short use case of some real-world data warehouse.
Yeah, so we've seen what the experts say about it. We've seen what Kimball or Inmon say about the data warehouse, but I still didn't find it intuitive enough. So I said I will search for some use case, to see what people who've worked with them in big companies say about them. And I found a joke, actually. I don't know if it's real; it seems some guy from Sybase wrote it on the internet, about how they sold their first data warehouse. So it's about Walmart. In the 90s, Walmart started selling a lot of products. They were pretty big, had a lot of data. So they said, okay, what do we do with this data? We need a database, we need to store it somewhere, otherwise we are not really efficient. So they called Sybase, a database technology vendor, and said: look, we want to track our sales, we need something from you, sell us some relational database system. The Sybase consultants gladly obliged, and Walmart paid a huge amount of money for it. Sure, with the underlying hardware architecture, as Tilo said, you don't usually use a small computer for it, you really need a cluster for something like this; they bought something from Sun with a lot of computing power. Everything was great for the first few months. But then they had a promotion, and they said, okay, we'll be offering some Colgate toothpaste in some small towns for a few days. After the promotion, the management said: how did the promotion go? How are the sales? Were they good?
Were they good compared with normal sales days? Were they good compared with last year's sales during the promotion? So they called the technical guy. He wrote some nice SQL statement involving, of course, the product, and involving the city; the city should be small, because we sold this toothpaste only in small cities. And the time constraints: how were the sales yesterday? How were the sales compared to last year's promotion? How were the sales compared to last summer? Whatever analytical query you can imagine. What happened when they executed this? This is a pretty big select. It's not a problem if you have, I don't know, 100 rows of data, that's nothing. But imagine they sold a lot of toothpaste, and you have to do six-way joins, right? This is a lot of data that needs to be extracted from the database, joined together and calculated. It seems that it took about 20 minutes for the query to be executed. Well, the manager had time, that was not a problem. The only issue was that after five minutes he started receiving calls from all over the cash registers, from all over the Walmart shops. They started saying: I can't sell anything. The cash register freezes. Nothing happens. The database is out of order. Nothing functions anymore. So sure, what could he do? He just called Sybase. And he said: look, I bought your system, I typed my toothpaste query in, and everything breaks down. I can't sell my toothpaste anymore, I can't sell anything anymore. And then the Sybase guy said: yeah, look, you have a big query, it takes 20 minutes. What we've sold you is a transactional system. This means you have a 20-minute transaction, and in these 20 minutes no other transaction can use the database. So the registers can't update the database; they can't operate any sales for 20 minutes. A sale is an update on the database, so this doesn't go. Well, of course the Walmart technical department was not happy about it. They said: I don't care, I want this to work. I want my 20-minute query, but I also want to do some sales in this period of time. Well, the Sybase support's answer was: with this system you can't do anything about it. This is pessimistic locking, so you can't execute such queries and expect that during your transaction another transaction can come in. That would break the ACID rules; for example, the atomicity of a transaction would not be respected anymore. So you can't do it. You need something else, you need a data warehouse. You can put your query in there and find out your answer in 20 minutes, and on the other system, on the transactional system, you can do your sales as before. So this is how Sybase sold their first data warehouse to Walmart.
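Nobody knows the exact statement, of course, but a query of that flavour, joining half a dozen tables and comparing the promotion week against the same week a year earlier, might have looked roughly like this. Every table, column and date here is invented purely for illustration.

```sql
SELECT ci.name AS city,
       SUM(CASE WHEN s.sale_date BETWEEN DATE '1992-06-01' AND DATE '1992-06-07'
                THEN s.units ELSE 0 END) AS promo_units,
       SUM(CASE WHEN s.sale_date BETWEEN DATE '1991-06-01' AND DATE '1991-06-07'
                THEN s.units ELSE 0 END) AS last_year_units
FROM   sales      s
JOIN   products   p  ON p.product_id = s.product_id
JOIN   brands     b  ON b.brand_id   = p.brand_id
JOIN   stores     st ON st.store_id  = s.store_id
JOIN   cities     ci ON ci.city_id   = st.city_id
JOIN   city_sizes cs ON cs.city_id   = ci.city_id
WHERE  b.name = 'Colgate'
  AND  cs.category = 'small'
GROUP  BY ci.name;
```

On a transactional system with pessimistic locking, a scan like this keeps its locks for the whole run, which is exactly why the registers froze.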
We can already probably imagine a bit better how things look. So we have these transaction processing systems, the ones with the cash registers. We use them for operational data: we operate a sale, we update the database from the cash register. You see it every day, in real life, everywhere. This is not a data warehouse, obviously. A data warehouse is fit for analytical processing, for the big query you have seen, for promotions, for comparative analysis between what the state is today and what the state was last year. How much did we sell last year in those cities? This helps for resource planning, for budgeting, for marketing, as we've previously seen. So it is decision-oriented; it's something else. As major properties, we can distinguish between the updates: the operational databases, the relational databases in transaction processing, are mostly fit for updates. This doesn't happen in the field of data warehouses. You have the first step of the ETL, the loading process, but the rest is mostly read: you do a lot of selects, you do a lot of comparative analysis. In the transactional field, you have a lot of small transactions; they take under a second. I do an update, I've done a sale, the scanner reads the product and then updates the database. This does not happen in the data warehouse; there you have large queries. As for the size: from megabytes to terabytes. Sure, you can also have big OLTP databases, but there you can, for example, after each month move the data into an archive. You can't do this in data warehouses. If you have only the last month of your data, you can't do any prediction; you need years of data, and this leads to petabyte databases. One example: Walmart had in '97 a data warehouse of about 50 terabytes, and eBay announced last year they had four petabytes. This is how it grows. You have raw data in OLTP systems; you have summarized, aggregated data in data warehouses. I'm not interested in how much this specific client bought at 12 o'clock in the afternoon in, I don't know, what city; I'm interested in how much we sold today in North America. I'm interested in aggregated amounts of data. It's for a different purpose, it's for decision-making. Of course, the data does not have to be the very latest state; it does not have to include what happened up to when the registers close today at 6 o'clock. I'm not really interested in that kind of precision, but I'm happy if I have the data up to last week, for the last five years. So it might be slightly out of date; at least I have enough data to make my predictions on it.
You all probably know how the data is stored in normal relational database systems: you have the third normal form or some other way to avoid redundancy of the data. So you may have your invoice table, your customer table, some status, product, sales, and some other tables, which are linked together through different keys. In the data warehouse, we don't really care about the normalization; we allow some redundancy. What is interesting for us is that when this fact table, which contains the interesting data for me, my sales, is to be related to the customer, I just need to do one join for a certain customer. If I'm going for a time period, it doesn't matter if it's a year, a week or a day: for different granularities, I still just need one join. Although there is a lot of redundancy here, so I repeat myself, storage hardware is cheap, it doesn't matter; I just don't want to wait a week.
Some other basic insights: how do I do this, on which hardware do I do this, should it be a separate machine, a separate installation? Of course, this is the way to go; of course I should have different hardware. If I want an operational system and I want a data warehouse, they should be separated. You can do it also on the same machine, but then you really need some computing power; you really need to make sure that the machine on which you run both of them has enough power to solve both of the tasks. You can say, okay, the Sybase guys were stupid, they were at the beginning, they didn't know what they were talking about; maybe pessimistic locking is not the solution, maybe working with something like optimistic locking would allow us to use the same system. It goes. But it goes for, I don't know, hopefully a 1 TB database.
Afterwards, it doesn't function anymore. And this is exactly because they work differently. If we consider the hardware utilization of the operational system, we observe, this is the cash register behavior, right? Making sales, continuous making sales, the CPU looks like this. It's always faster at its peak. The data warehouse had three big analytical queries today. They were at their peak. If you take these two hardware representations and put them one over the other, what do you think would happen here, here, and here? It wouldn't work. They wouldn't be able to make sales. The hardware wouldn't be able to manage both of them. And I think this is kind of the story I wanted to tell you about how practical data warehouse has this work. I think this is the perfect... So, let's go on with our lecture, yes? Sound us up. In the last part of the lecture, we want to show you some applications of data warehouse. So, what are these typical queries that would need a data warehouse and online analytical processing for answering? And typical queries are, for example, if you focus on some certain unit and on some certain time span, how much did they sell? You can do that with all kinds of different intervals, with all kinds of aggregations. So, in a month, in a week, in a year, four different units. And then, what was combined sales for the first quarter? So, you aggregate over queries that already use aggregations. And what you see now is that the six-way join that the Walmart CIO had in mind is exactly what you need. If you would put that in simple SQL over a normalized SQL data schema, then you would need a lot of joins, a lot of aggregations, a lot of sub-queries that may be correlated with the other query. And for every database analyst, for every database administrator or somebody who has set up a database and controlled a database at some point, this spells disaster because those queries are virtually impossible to optimize properly. And then you have these 20-minute monsters that will run forever and block your system. So, apart from the complex formulation of the query, it really needs complex joins, multiple scans over the whole dataset, for performing the different aggregations, and this is very time-consuming. So, first, you have a very big chance of getting the query wrong, so it doesn't really say what you wanted to say. And once you notice that or once you get the error from the database, you will have lost a sizable amount of time. And then, even if you do it right on the first shot, it will still need a sizable amount of time anyway. What can a data warehouse do to answer these queries in a more efficient way more quickly? And the idea is basically that you do not rely on normalization, but that you rearrange the data and pre-aggregate some data that is always important. If I'm interested in sales for a day, I might be interested in sales for a week, for a quarter, for a month, for a year. So, time is something that is naturally ordered or naturally aggregated. I cannot predict every exact query, but I can point out that some aggregations might be more probable than others. Consider, for example, geographic locations. Is it very probable that somebody will ask a query combining the sales of Braunschweig and Kiel and Lübeck? Well, he might be interested in some old Hansen city things or whatever. Yes, the query exists, but it's probably not a very interesting point of view. 
On the other hand, having all your markets in lower Saxony or in northern Germany intuitively would make more sense, because that is a strategic decision that you can really think about. Like, okay, we need more markets in lower Saxony, or we need to put some advertising campaigns in the big cities of somewhere. So, we know already some of the very probable aggregations that will happen. And what we do in the data warehouse is we pre-aggregate them. Here, for example, the time period. If the basic period is a day, then we can, for example, see the fiscal week, the fiscal period, the fiscal year. Those are typical periods of time that occur in accounting. If somebody has to do a tax return, the fiscal year is all that is important. So, once collecting the days, I can already, after the year is done, pre-compute the fiscal year. And whoever does a tax return or whatever statement can rely on these pre-aggregated data. Okay? This is basically the basic idea. This is basically the trick that databases use. And the concept is called a cube. So, you don't have the table of the data, but you have different dimensions, like 3D, like what enlarges the table into a cube, where you have different time spans, where every field in your database can be pre-aggregated along the dimension of time, or along geographic dimensions, or along customer groups, all people who live in big cities, or all people who are male, or all people are female, or something like that. And this leads to the very typical star schema that we will discuss next lecture in detail, where you really have the time key, which is connected to a whole table of the time with different degrees of pre-aggregation. And now, putting out a query on one of these time spans may lead to a direct hit. I want all the data of the fiscal year. No, don't aggregate it, just take it from the pre-computer database. That's quick. Still, if I need to aggregate Lübeck, Keel and Braunschweig, I can do that, because I have all the data for the day and for the city. But again, then I have a six-fold join. In a database, it would be deadly to post such queries on a data warehouse. It's still possible. Good. The data warehouse is basically the repository for analytics that are put up in front of it. So what happens is that all these tools, data mining tools, visualization tools, work on the data that is provided by the data warehouse. And the most renowned is online analytical processing. There are a lot of tools for that. And every company has its own tool set that works on the very data. A second thing that is very common is called KDD, Knowledge Discovery and Databases, which uses different data mining processes or data mining algorithms on the data to detect hidden connections between some things. So what if you find out that a certain customer group always buys a certain product? You can do targeted advertisements, things like that. You can make big sales at points where these people are around. And you can set some, well, maybe special advertising campaigns for people who don't buy a certain product. So everything that you don't know about your customers, your sales, your products, but that you can find out by analyzing what is happening in the stores is interesting for you, is interesting for strategic decision. Then we have, of course, the data visualization and reporting, which is very interesting for getting ideas into management. 
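Before we come to those tools, here is the pre-aggregation idea from above in its simplest possible form, as a sketch with invented names; a real installation would typically use materialized views maintained by the database rather than a hand-built table.

```sql
-- Materialize a probable aggregation level once ...
CREATE TABLE sales_by_fiscal_year AS
SELECT d.fiscal_year, f.product_key, SUM(f.amount) AS total_amount
FROM   sales_fact f
JOIN   date_dim   d ON d.date_key = f.date_key
GROUP  BY d.fiscal_year, f.product_key;

-- ... and the fiscal-year question becomes a direct hit instead of a full scan.
SELECT total_amount
FROM   sales_by_fiscal_year
WHERE  fiscal_year = 2008 AND product_key = 42;
```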
If I can show how something behaves or if I can predict how something behaves, then my related comments or my proposals for management decisions will, of course, be much more believable and much more obvious for the management. So the chance of succeeding will be higher. This is why all these consulting companies, McKinsey or you name them, always have these wonderful, colorful slides with all kinds of Excel graphs, like how things behave and how the world works, because it makes them credible. And this is something that we can do too. All of the analytical processing is, as I said, a form of information processing. And what it needs is basically timely information. So the information has to be accessible. It has to be accurate. As I said, we need to clean data. We need to normalize data. We need to transform data sometimes. And it has to be understandable. And, well, timely, yes, for a management decision, you can't take a week to get the data, obviously. But still, it's not like at the cash register, where you put in the customer card and then it should go, okay, here's the amount, here's your bill and move on. And not, well, let's wait for the server. Nice weather today, isn't it? Oh, now it comes. This is not a good customer experience. So you need to be quick. You need to be on the spot. At the cash register that means seconds. In OLAP, minutes or even hours are not unheard of. And sometimes it may even take a week. It's a very important and very complex thing to derive. And OLAP comes in several flavors, some of which we will discuss in depth. There's ROLAP, there's DOLAP, there's MOLAP, there's HOLAP. So a lot of things you can do with OLAP. And I will not go through all the acronyms. So for example, ROLAP is on relational databases, MOLAP is for multi-dimensional databases and so on. So these are different flavors that are somehow concerned with how the data is stored and how the data is reported. We will go through some of them in time. Second thing, KDD, data mining. The basic idea is to find mathematical models, statistical models of the data in question. And a model always summarizes what is happening. And a model can be used to predict trends, which is a very important thing. So if you have a database like with customers here, where you get the zip code for everybody who bought something and maybe the gender and the income and all kinds of demographic data, you might find that somebody who has children rather buys a minivan than a sports car or coupé or something else, which might not come as too surprising. But then there are many things that are very surprising indeed. So especially if you go shopping and you have a retail market like grocery shopping, what stuff is bought together? Are there things that are typically sold together? You will find, besides the obvious, you know, yeah, spaghetti and tomatoes, you will find some things that you would never have thought possible. For example, wine is very often sold with diapers. It's because the people stay at home and rather have a bottle of wine than going out in bars. If you think about it, it makes sense. You wouldn't have thought about it beforehand, but you can dig it out of your data. And that is what data mining does. So all the important and all the hidden stuff that is inside your data is there if you just go looking for it. And with that you can have a family special, two bottles of wine per one extra diaper pack that you buy or something like that, you know, like everybody will be happy. 
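The wine-and-diapers kind of finding can already be hinted at with a crude SQL co-occurrence count over a hypothetical basket_items table with one row per transaction and product; proper association rule mining with support and confidence comes later in the course:

  SELECT a.product AS product_a,
         b.product AS product_b,
         COUNT(*)  AS baskets_together
  FROM   basket_items a
         JOIN basket_items b ON b.transaction_id = a.transaction_id
                            AND b.product > a.product   -- count each pair only once
  GROUP  BY a.product, b.product
  ORDER  BY baskets_together DESC;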
As I said for the previous thing, if you can say something like, well, select stuff from customer where total spend is bigger than 100 euro, you can build some models that for example show if somebody buys a minivan and the age is larger than 35, then he has a family. He probably has a good job and the total spending will be rather large. If somebody is a male and comes from Braunschweig, then he might have money because this is the zip code of the Eastern Ring area. And there are all the, I don't know, professors and Volkswagen managers and teachers and whoever lives there, you know, like they might have money for some reason or the other. Well, not the professors but the Volkswagen managers at least. These are typical rules that you can deduce using your data. And knowing that is a business advantage because your competitors may not know them and then you can target these customers, you can target the interesting market segments, and how to do market segments will also be part of this lecture. The questions you can answer with that are which products or customers are more profitable, what markets, what outlets have sold most over the last years, and the decisions you can take on the grounds of these results are where should you open more shops, should you close down shops, which customers should be targeted for promotions and so on. Okay? So you need some reason to do something. Should you increase production, should you decrease production? Producing more is good but not if the market is saturated. How do you find out whether the market gets saturated? Either you can do market research or you can look at your own data. What happens? If the sales stay on the same level for some time, the market may be saturated already. Second thing is who is the user? The data warehouse is online analytical processing so it's kind of like a business cockpit. But still, it's not the managers that do it. It's too complex for the managers. What you need is analysts. You need decision support analysts for the specific questions that the management may raise. When the management raises a question of what should we do strategically, should we go into the far east market, the decision support analyst goes down to the data, drills down to everything that could be implied by the data or not. You never know what's in it. You define the information that is needed. You discover the information that is needed by every tool that is applicable. And then you have all the nice charts, you have all the nice visualization to get a well-founded decision for the management. Then the management may follow you or not. If you're a successful decision support analyst, your word will count for something. And the problem is really that even if you're a decision support specialist, you will go through different levels of analysis. You will very probably not pick the right query or not take the right algorithm in your first try. You will certainly look at the data, find out how to cluster it, find out what makes sense, what makes less sense, and then draw some conclusions from that. And with every conclusion, you will find out that you really want something different, something more specific or something altogether different, that you've bet on the wrong horse at some point. And in the end, you will find what you're looking for, hopefully. That's typical of explorative analysis. You explore the data. You look at some aspects of the data just to see what you're really looking for. 
So you're building up your mindset, which is somehow steered by the data that you find. And the more you dig for something, in the end, you will probably find it. That's basically how you're working. And the typical explorative line is this: now I know what is possible, and from that, I can deduce what I really want to have. But if I don't see what I want, I don't see what I need. That's basically what you do, and this is exactly how the data warehouse is actually built. If you have a decision support system, working with the data is totally different from working with the database. In the database, you have the crisp SQL statement, and it tells you what you asked for, not more and not less. In the decision support system, you detect trends. You detect connections. And you, as the support specialist, are in control, which path to follow, which information to ignore, what is relevant, what is irrelevant. And this is basically what you do. It's like requirements engineering, you know? Only that you don't know all the requirements at first, but factor in new requirements every time you learn something new. If the thing that you learned is something important, factor it into your requirements; if the thing is irrelevant, just leave it out and stick with the old thing, explore somewhere else. Okay? This is basically who the user is. And for the lifecycle of the data warehouse itself, there's a system development lifecycle that is very renowned. It starts, of course, with the design. And the design is a typical software engineering requirements analysis. You talk to the users. You talk to the management. What are they interested in? What are their primary goals? What is the subject the data should be centered on? You see what systems are there. What data sources do you have? So what do you have to factor into your data warehouse? Then you look at the key performance indicators. So what really is interesting? Is it really the money you earn? Is it your customer base? That is the important thing. Is it the products you want to sell, the products you want to market? What are the key performance indicators? Then you find out how the management works, you know, like you try to find out how are they making decisions? How are they coming to conclusions? And this is a process you have to support. It's not always the correct process, but you know, you can't change humans too much. If somebody is a good manager, then probably his way of managing is successful. Then you can't rush in like McKinsey and say, oh, everything has to change. I mean, there was the great failure of the ISO 9000 norms. The people came rushing in, auditing, auditing, auditing, and everything has to change. And after the audit, nothing worked anymore. Because nobody felt comfortable with it. Nobody knew what to do. Well, it's more efficient. Yes, it's more efficient, but it doesn't work. And that is kind of the problem here. Try to map the decision-making process underlying the information needs, and then finally design the schema. So you have to talk to a lot of people before you're ready to define the schema. And that is really necessary. So there's no standard way. If somebody tells you there's a standard way, just use this data, this data, this data, and this table, this table, this table, and that's it, then it's wrong. It doesn't work that way. Okay, the next link is to get to a prototype. And the prototype has to constrain, and in some cases reframe, the end-user requirements. 
So, of course, everybody wants a system that has totally wonderful decisions for me, that are 100% correct and will grant large, large benefits for everything I do. That doesn't work either. You just have to constrain it a little to what is possible, and what is sensible. And sometimes you can work with people and say, well, I know you want that, but it's not really what you need. Let me tell you. And I can change it a little. Then try deploying it. Rollout process is a very big problem in most companies, especially in big companies, because you have to, again, talk with a lot of people. You have to train people working with that. And you have to be very patient before people start using the thing in the intended way. And sometimes you have to listen to people, because they may have reasons for not using it in the intended way. Maybe that is a good thing. Maybe that is a change that you should work with. And after the deployment phase is over, it's day-to-day operation. And what is needed in day-to-day operation, you have to control the data warehouse, you have to monitor the extraction transformation loading process, you have to see what happens in your data warehouse and control the whole thing. Then, basically, you need to enhance what you're doing. As I said, all these different parts of the life cycle are connected to each other. Even if you have the initial requirements, the deployment process will tell you what you need. Can people work with that? Are people comfortable working with that? So in every step from the design to the prototype, you have to feed back some information. From the prototype to the deployment, you have to feed back some information. For every step, you really have to feed back information into the step before, until in the end you are again at the design phase, where you rearrange logical schema or get in new data, get in new data sources or new ways of exploring the data that you have. The classical software design life cycle and the data warehouse system design life cycle is a little bit different, because in the classical domain, the requirements are broken down to all the components of the system. You know what the system is going to do, which is not true for a data warehouse. In the data warehouse, what you do with the data depends on what you find out from the data. So what else you need in terms of information sources depends on what you explored before. It's kind of linear, it's kind of reversed in a way. You don't start with the requirements and then build the program and build the database and run the application on top of it. But you start with a data warehouse where you put everything together, and then you use some algorithms to find out what the data is about and how the data is connected. And from that, you derive certain rules and you derive certain possibilities of using the thing. And this is basically what the requirements are about, what it should use. And then it comes back to, okay, the requirements are I need new data sources, that should be part of the data warehouse. Or I should normalize the data in a different way. Maybe the subject that I chose was wrong. And back it goes to designing the data warehouse. It's kind of reversed. It's kind of top down. So the classical life cycle starts with the requirements gathering, analysis, design phase, programming, testing, integration, implementation. That's quite normal. That's what we all know from software engineering. 
Once you install a data warehouse in a, or once you deploy a data warehouse in any organization, you will probably start differently. You will first implement it. After having the basic infrastructure, you need to integrate the data because it comes from different sources. You have to test for bias what is in there. Are you taking data that is incomplete or data that is inconsistent with what you have? You have to program against the data. You have to find out what algorithms you want to run, what interesting structures in the data you want to discover. Then you design this decision support system on top of that. And then you look at the results of the decision support system. And you try to figure out whether what they predict is good and viable or is unhelpful. Depending on whether it's unhelpful or really already good, you understand what the requirements actually are. And you go back to implementing the warehouse. That's a little bit different from what we already know. Some people call the system development life cycle of data warehouses cycle life development system, which is kind of like just the thing reversed, which is kind of a bad joke. But some computer scientists like it. The development cycle of a data warehouse is usually data driven. Whereas the normal software engineering development life cycle is systems driven. In software engineering, you design a system that does something. In data warehousing, the data asks you what you need. Kind of what you find in the data, what you explore in the data sets the requirements for what you actually need and what is to be implemented. Once the data is integrated and tested, you can write programs and the results show what correlations are in the data. What hidden connections are in the data. Because that is a trick, the connections are hidden in the data. If you would have known them upfront, you could have had requirements. But you don't know them. You have to find them. Once you find them and once you understand them, you have to make adjustments to the design of the whole system. Then the cycle starts all over, which is why it's often called a spiral technology. You go around in circles and your data warehouse or your decision support system becomes better and better and better. The decisions become of more quality, but it takes some time. It's an explorative thing. Finally, you have to operate the data warehouse. You have to do the everyday's job, which can be broken down into the monitoring. You need to see that all the systems are running, that everything is up. Then this magic extraction transformation loading process, that will be a major part of the next two weeks. And finally, the analyzing phase, where you really work with the data. What happens in monitoring is basically that you do a normal surveillance of the data sources. You find out whether they are productive, whether they are responding, whether they are giving you the data that you want. You find out which data modifications are in the operational systems, how can they be reflected in the data warehouse. This basically sets the stage for the next steps. The monitoring techniques can be active mechanisms within the basic systems. It can be event-condition action rules. If a payment of over 10,000 euro is recorded, transfer it to an economy account or something like that. Or it can be on every update, do something. What you do for the replication is basically you take snapshots of the operational data. Which is basically a view, for example, Oracle does that. 
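As a concrete illustration of such an event-condition-action rule, here is a minimal trigger sketch in PostgreSQL-flavoured syntax; the payments table, the 10,000 threshold and the delta table are all made up for the example and stand in for whatever the operational system actually uses:

  CREATE TABLE payments_delta (LIKE payments);    -- change table read later by the ETL job

  CREATE OR REPLACE FUNCTION capture_large_payment() RETURNS trigger AS $$
  BEGIN
    IF NEW.amount > 10000 THEN                    -- condition
      INSERT INTO payments_delta SELECT NEW.*;    -- action: copy the row for the warehouse
    END IF;
    RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER trg_capture_large_payment
  AFTER INSERT ON payments                        -- event
  FOR EACH ROW EXECUTE FUNCTION capture_large_payment();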
Or you immediately replicate data. IBM, for example, the DB2 products always use direct replication and write the data that is important, or the updated data, for example, into a different table. And this table is then transported into the data warehouse. There are also protocol-based mechanisms where you just have logs of what happened to the operational data. And you use these logs for updating the data warehouse. The problem is that this protocol format, this log format, may be ambiguous. So it's sometimes hard to see what really happened. And of course there are some application-managed mechanisms which basically are very hard to implement for legacy systems. So if you have some old system, you need an application that knows exactly what the software does and how to get the data out of there. But it also can be done. So if you have to integrate a couple of systems, then data comparisons or time-stamping, who did the update first? Or is it the same update? Or what happened in the update can be done. But that's really handcrafted. Then comes the extraction step in which you take all the data that you need to put into the data warehouse from the operational system, whether it be from the view or from the logs or wherever you got it from. And usually this data is quite a lot, because if you take every single update and put it into the data warehouse, then it will put a large stress on the data warehouse and it will put a large stress on the operational system, because the operational system has to perform the update or has to perform moving the data. This is of course not what you want; you want something that overnight, maybe from 12 to 1, when it's kind of getting very quiet in production anyway, does that work. All in a batch. And then the operational system is kind of ready for the next day and that's it. Or between Christmas and New Year, so typical periods of inactivity. This is what you use for the extraction process, take the data from the operational system. But of course you have to be sure that you pick certain ways of making sure that everything really gets into the data warehouse at some point. You can do that either periodically. So for example, if you have weather information or stock market information for brokers, then you need to update it every, I don't know, stock market probably every second or every 5 seconds, weather information probably every hour. If it's the marriage status of people, updating it once a year should be enough for most of us. There are very few people that marry twice, right, or 4 times a year. That is what you have to think about. You can also do it actively, so on request, a new item comes in, so please update. Or event driven, so if something strange happens, immediately make a snapshot. For example, a big sale or something like that, immediately make a snapshot, put it into the data warehouse. Or the Christmas run, like where everybody buys the last Christmas presents, probably shorter update periods are needed during the Christmas time or pre-Christmas sales than for the rest of the year. Or of course you can do it immediately. So whenever something happens, report it. This is a very rare case. It's really only applicable in the financial sector if you have stock market news. If something starts to slump you should sell immediately, not in an hour when it's down at its all-time low. So basically immediate is only for time critical applications, for real time applications that you need. 
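A sketch of what such a periodic pull could look like, assuming the operational rows carry a last_modified timestamp and we keep a small bookmark table for the ETL job (all names invented for the example):

  -- Nightly batch: fetch only what changed since the last successful run
  SELECT o.*
  FROM   orders o
  WHERE  o.last_modified > (SELECT last_run
                            FROM   etl_bookmark
                            WHERE  source_table = 'orders');

  -- After the load succeeded, move the bookmark forward
  UPDATE etl_bookmark
  SET    last_run = CURRENT_TIMESTAMP
  WHERE  source_table = 'orders';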
For the other things, it's usually done periodically or after a certain number of transactions that run through. This of course also depends on the hardware and the software used for the data warehouse and the data source. If you're still under the limit, then you can do it immediately, obviously. But if your system is stressed, doing it immediately is a bad thing and you should do it periodically, every night when the system is under less stress. So it depends on your software and hardware capabilities, what you find in the company. The transformation process is basically about adapting the data and finding out how it relates to data in the other information sources. Also the data quality is a big concern here. So how to get the data consistent? Can it be that somebody lives at a certain address? So everybody knows these forms that you find all through the internet where you have to register for something. What kind of address do you type there if it's not an important system? You type in something made up, and how should the system know that it's not a real address? But I don't want to tell my address to some system that starts sending me some spam mail. Do you want to put that into the data warehouse? Probably not. You want real addresses. So build some rules for how real addresses look. Real addresses are like street names, followed by some number for the house. And simple rules can very easily find you mistakes in the data or bad quality data. Then you need to integrate the data. So if you have different data sources, they might store things differently. So for example, if you have to encode buy and sell and some systems just say one or two and others B and S for buy and sell, you have to kind of carry it over into the same format. It doesn't matter what the representation format of the target system is. But you have to decide for one, you have to transform it. Then how do you deal with keys? Make sure that a key is really unique. Make sure that if you have a foreign key, the reference actually exists in the referenced system. What about the data types? Is the data of the right type, stuff like that? So pretty simple stuff that can severely hamper the data quality if you don't do it. Normalization: do you write Michael Schumacher or M. Schumacher or Schumacher, Michael? So you have to find some way to store it. Doesn't matter what it is, but in the target system, it should be the same. Date handling, typical American date, typical European date, or at least middle European, central European date. Decide for one, doesn't matter which one, but keep it the same over the whole system. Measurements, inch to centimeter or something like that, happens very often. Spacecraft have failed for having the wrong measures. So if something drops a couple of centimeters close to the orbit and something drops a couple of inches, that can be fatal. It has proven to be. Make sure you have that. Calculated values. So one price including the value added tax and one excluding the value added tax. Basically the same price, just multiply it by the value added tax rate and it's the same. Decide for one. Same with aggregation. If you have daily information, pre-aggregate it into weeks, pre-aggregate it into months, pre-aggregate it into years, because you might need that at some point. And finally, cleaning. So consistency checks can be very easy, like the delivery date should be after the order date. Nothing can be delivered before it is ordered. 
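A few of these transformations, sketched as a single staging query; the codes, column names and the 19% VAT rate are illustrative assumptions, not prescriptions:

  SELECT CASE trade_code                            -- harmonize the encodings
           WHEN 'B' THEN 'BUY'  WHEN '1' THEN 'BUY'
           WHEN 'S' THEN 'SELL' WHEN '2' THEN 'SELL'
         END                                      AS trade_type,
         TRIM(surname) || ', ' || TRIM(firstname) AS customer_name,  -- one name format
         CAST(order_date AS DATE)                 AS order_date,     -- one date representation
         net_price * 1.19                         AS gross_price     -- add VAT once, consistently
  FROM   staging_trades;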
Or if you have some missing values, there's no address field for some customer. You have null values. How do you deal with that? Do you exclude the customer or do you say, well, let's try to find out what the address is? Maybe some other system has a valid address. All things that have to do with data cleaning. And I can tell you that data cleaning is a very challenging topic. And we will also deal with that in the next couple of lectures. It's not as easy as it sounds. Finally, the last step of the ETL process is the loading step. So what you do is basically you take it offline. You do it during the night or during the weekend when the system is not under stress. And then you batch everything together in a single batch and you let it run, because most databases have high performance loaders. The individual update is quite a costly operation. You basically split between the initial load, which is the initialization of the data warehouse, and then periodical loads to keep the data warehouse updated. Of course, the batches for keeping the data warehouse updated, all the data that amounted for one day or for one week, are much smaller than the initial load. So if the initial load takes a week or months, it doesn't matter. But then the everyday work should be fairly quick. And for this reason, because the initial loading can be very big, there are usually bulk loaders that will take all the information that is in the source systems and put it in some format that can be worked on very quickly. So usually it's something like comma-separated values that are just flushed into the system. And the actual loading of the rest of the data, of the new data, that is just of aggregated nature. So you need some partitioning, you need some incremental actualization of the data, and the increments are usually fairly small. So that can be done during the night or on the weekend. Well, then you have to analyze the data, the data access, what do you need? So how many iPhones were sold in Braunschweig stores in the last three calendar weeks of 2008? You can do that in OLTP systems, it takes a lot of time. If I have the information pre-arranged and pre-aggregated for weeks and areas, like the Braunschweig stores and the companies, probably only T-Mobile, then I can do it very quickly. So the district should be interesting, the aggregation of time should be interesting. Everything that can be pre-aggregated in a sensible, in a semantically meaningful manner should be pre-aggregated, because it will save you a lot of time during the online processing phase. And basically it's called online analytical processing, which means now and directly in front of me, not offline analytical processing, which means I come back tomorrow and want a report. It very often amounts to that, but still, it shouldn't. Really pre-aggregate wherever it's possible. Now, the way you do that is basically with a multi-dimensional data model. So you split a dimension like time into multiple reference frames. You say this is the day, this is the week, this is the month, this is the year. Same with geographical regions. This is the city, this is the state, this is the country, this is the area, or the continent, or whatever. And then you have certain operations which we will also discuss in detail, like roll-up, drill-down, slice and dice, rotation. You don't have to understand all these terms right now. We will come to that when we do OLAP. 
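As a sketch, the iPhone question from above against a weekly pre-aggregate might look like this; all table and column names are invented, and the point is only that the query touches a handful of pre-summed rows instead of every single sales transaction in the OLTP system:

  SELECT w.calendar_week,
         SUM(f.units_sold) AS units
  FROM   fact_sales_week f
         JOIN dim_store   s ON s.store_key   = f.store_key
         JOIN dim_week    w ON w.week_key    = f.week_key
         JOIN dim_product p ON p.product_key = f.product_key
  WHERE  s.city = 'Braunschweig'
    AND  p.product_name = 'iPhone'
    AND  w.year = 2008
    AND  w.calendar_week BETWEEN 50 AND 52
  GROUP  BY w.calendar_week;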
Basically, if you have this multi-dimensional data, you just slice through it to get the right information, in the right aggregation step. And of course, you have the data mining step, where you can have hidden patterns that you need to find. Basically, knowledge discovery in databases. So people who buy wine also buy diapers very often, something like that. And of course, the prediction. I sold so many units of some product during the last four years. How many am I going to sell in two years? Okay, so trend channels, things like that. And it's very useful for answering how do sales evolve, what happens in the near future. And the techniques that you use for that is basically clustering, classification, regression, association, rule-learning. There are lots of them. We will discuss some basic algorithms that already do quite a lot. So this is basically everything I want to do to introduce data warehousing. Next lecture, we will talk about the architecture of the data warehouse. So what are the couple of basic architectures that you will find in practice? What are storage models? How do you store the data? What layers do you have in the data? What aggregation layers? And of course, a little bit about the middleware that you need for making the connection for your operative systems to the data warehouse. Any questions concerning data warehouses? No? Good. Then see you next time.
In this course, we examine the aspects regarding building, maintaining and operating data warehouses, as well as give an insight into the main knowledge discovery techniques. The course deals with basic issues like storage of the data, execution of the analytical queries and data mining procedures. The course will be taught completely in English. The general structure of the course is:
- Typical DW use case scenarios
- Basic architecture of a DW
- Data modelling on a conceptual, logical and physical level
- Multidimensional E/R modelling
- Cubes, dimensions, measures
- Query processing, OLAP queries (OLAP vs. OLTP), roll-up, drill-down, slice, dice, pivot
- MOLAP, ROLAP, HOLAP
- SQL99 OLAP operators, MDX
- Snowflake, star and starflake schemas for relational storage
- Multidimensional physical storage (linearization)
- DW indexing as a search optimization means: R-trees, UB-trees, bitmap indexes
- Other optimization procedures: data partitioning, star join optimization, materialized views
- ETL
- Association rule mining, sequence patterns, time series
- Classification: decision trees, naive Bayes classification, SVM
- Cluster analysis: k-means, hierarchical clustering, agglomerative clustering, outlier analysis
So, it's a pleasure to welcome you all back for the data warehousing and data mining lecture cycle in this winter term and in the last lecture we were talking about partitioning of data. So how should we partition data between different servers between different resources also like like hard disks and hard drives and we were coming up with two possible ways to partition basically a horizontal partitioning or vertical partitioning That's a very easy view of the topic For everybody who wants to get a deeper insight into that. It's a typical topic of distributed databases in which we do a lecture probably next semester or no not next semester after the next one I think so basically there are two ways to partition data one is a horizontal partitioning So a row based partitioning the attributes stay together and you can read rows very easily The other is a vertical partitioning where you group those attributes that occur very often together in queries and Whenever a query is posed it can work on hopefully on on on one on one table only And that is exactly what happens You either need to join the tables for Yeah, product queries or queries that are That just involve more attributes than you put in the partition table Which of course is a very costly operation Joins are very expensive And so you should always look what records are really used together Do you have something like like records that can be distributed by year where you say nobody attach addresses the the Records that are more than a year old or something or you only work with The records of the last three months that would be a good idea to partition horizontally If you say well, it's basically department based Some departments only address the customers and are very interested in all the information about the customers and some departments only address Like like sales figures or something and are only interested in those Perfect then group and partition Vertically We were also concerned last week with materialized views So of course a materialized view is always a wonderful thing because it immediately Gives you all the aggregated results that you might need for answering your query So it's a very quick a very fast way of answering your queries On the other hand, how do you materialize a view you have to calculate it And as data changes in the data warehouse Materializing everything becomes prohibitive because The combination the possible combination of all the dimensions and all the different Aggregation granularities Is exponential And that will lead to a large number of views that you could possibly materialize basically it will Lead to a to a knapsack problem where we have to decide which ones are important and which ones are not as important So you need a good cost function here to decide which one should be materialized which one should be maintained How often do you maintain them so every time something is queried or every time something is updated or Once a day or something like that But we were talking last week always about queries and how they influence What should be grouped together or should what should be joined together or what should be materialized What are those queries and that is something that we are going to talk about today So this lecture is about queries and talking about queries in data warehousing is Somewhat different than talking from from simple SQL style queries like we had in relational databases one Because what we really need is the power to pose analytical queries that are executed very quickly So 
that you can base your decisions on solid data from your data warehouse And that's called online analytical processing so hopefully well it doesn't quite work in real time yet But but many vendors like like SAP or some of the big data warehousing companies work on real time interaction With the data that your company company owns So today we will talk about the typical operations that happen in all of queries And we will talk about two ways to implement those queries which will be SQL 99 and a multi-dimensional model That is somewhat proprietary but very often used because it's a Microsoft product So we want we would just want to introduce them a little bit in the following Thinking about data warehousing queries we can first say that they are definitely big queries It's not your typical insert into table or select from table where name is John Smith or something like that It's rather you look through a whole lot of data so you really address many data records in your data warehouse And you address many attributes many columns in your data warehouse But you usually only read so updates of a rare and are basically done through the ETL process We will go into the details of the ETL process in two weeks I think or next week next week actually cool So we will go into details about the ETL process next week And still those queries are only read queries so how difficult can they be? Well actually they can be pretty difficult concerning all the joints that are involved And how many of you have heard relational databases too? Okay yeah quite a lot. Then we have the big problem of query optimization And joints was one of the things that always cost most time when evaluating queries So this is really an interesting factor and therefore the redundancy of data in a data warehouse is a necessity If you want to get close to real time interaction with your data If you want to see the results of a query maybe three seconds, five seconds or something like that later And not wait for hours until a report is finally finished So you need materialized views, you need some indexes that speed up certain kind of queries We were talking last week about denormalized schemas, so star schema, snowflake schema Which one is better for, for which type of query These are the typical tricks that are done when working with your data As I said the purpose is really to analyze the data and the key word here is online analytical processing So directly in front of the computer you interact with your data You create the reports you're interested in and you use that as a firm foundation for your later decisions And there's actually a big market for data analysis So some people that are just drilling down the data of companies or using data mining algorithms To find out what is happening in a company or what's going haywire in a company And mostly highly paid jobs, so it's kind of interesting to know about these things Anyway, my good old friend the automatic saving procedure Anyway, so data warehousing queries are kind of big and involve some operations that are not typical in normal SQL So that normal SQL is implemented in most commercial databases at least until a couple of years ago We're not able to handle So what is basically the interesting thing about all of queries, who needs them The one that we're always referring to is resaving or what is this? Yes, resaving! 
Well, I had another slide so that was already worth saving obviously I have to do something against this saving Okay, no idea Hmm, come on Here we go Hmm, no Ah, that's not good Hmm, no Ah, goodness, so this is rather annoying at the moment I'm going to give you that, we'll be here all the weekend if it goes on like that Yeah, sure Yes, sure Uh-huh Okay, here we go, something happens I don't know what, but something obviously happens So, the field we were referring always to was management information, so real decision support within a company But also of course government is very interested in all the data that can be derived either from things like taxes, incomes, demographic information That allows to separate different groups of people to find out what parts of the population are concerned about it And there are scientific databases that are very big and kind of unstructured They also use databases and run analytical queries, especially for regression analysis and stuff like that In any case, you can see that in most of these information, well, consumers, the time it takes to build the report to answer the query is a crucial thing So for a scientific evaluation, you might have a day or two to wait for the results, you know But of course, you want the results at some point to be ahead of the crowd Same goes for the governmental issues and especially in management, where you really have to react very quickly Because if the market does something that's strange or something goes haywire, the sooner you detect it, the greater will be the possibility to cut your losses And this is real money we're talking about here, so seeing things first or seeing things immediately is a very good idea So what are typical analytical requests that might occur in your company? One typical type is comparisons. You have to decide, I can either invest in a business in North America or I can invest in markets in Asia or whatever So you need to compare them to get a feeling of which investment would offer more return on the investment So for example, things like show me the sales per region for this year and compare it to that of the previous year to identify what happened there So market growing, if the market shrinking, how are the competitors, are they gaining on me and stuff like that Second thing are multi-dimensional ratios, so you could have something like, okay, I'm not considered about certain items I'm just considered in all the items that I have for some geographic location and what has been sold between 1st of May and 7th of May So I can put some ranges for all the dimensions that I have and just interested to see what happens there Then of course you have ranking and statistical profiles, so for example you could say I want the means or the averages or the standard deviations or typical statistical aggregation functions And of course I can also introduce ranking and say, okay, what are my 10 most profitable salesmen or 10 most profitable stores? 
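Such a ranking is easy to write down once the sales facts sit in the warehouse; here is a minimal sketch with a standard window function, where the fact and dimension names and the year column are just illustrative:

  SELECT store_name, revenue
  FROM  (SELECT s.store_name,
                SUM(f.amount)                             AS revenue,
                RANK() OVER (ORDER BY SUM(f.amount) DESC) AS pos
         FROM   fact_sales f
                JOIN dim_store s ON s.store_key = f.store_key
         WHERE  f.sales_year = 2010
         GROUP  BY s.store_name) ranked
  WHERE  pos <= 10;              -- the ten most profitable stores

Flipping DESC to ASC gives the ten least profitable ones instead.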
So I can invest more in them or the 10 least profitable stores so I can close them down or start an advertising campaign for these stores or whatever it is that I have in mind But I need the information and I need this information Other thing is custom consolidation, show me the income statement per quarter of the last four quarters for the Northeast Region operations which just aggregates it in different ways And then you get a nice spreadsheet or a bar chart or some of these nice pie charts and then you can see what happens or at least get some data to find out whether your suspicions are correct To do that you need some operations that are typical for OLAP but not typical for SQL And basically there are four operations that are used very often First one is roll up which means to move from more detail to less detail The opposite of rowing up is drilling down, moving from less detail to more detail You can slice and dice your data cube so you're not interested in some things I'm just interested in a single product, I don't need the cube for all the products, I'm just interested in a single So I slice it or a couple of products or I dice it And the last one is pivoting or rotating which basically means that you make spreadsheets out of the data for easy consumption And easy perception of what happens actually There are some other operations like aggregate functions as I said So statistical aggregations like regression analysis or variances or means or median computation or something like that Can be pretty annoying in SQL They can be built very often but it's kind of complex to do that and time consuming So having aggregate function beyond what is obvious, sum and count and average is a good idea Same goes for ranking functions, so give me the top 10 or the top 20 or something like that, always a good idea And then two less important drill across or through, I don't want to go too deeply into them So let's focus on the first four very important operations that happen in every data warehouse First one is called a roll up Some people also say drill up which is kind of bizarre because you can drill into something but you can't drill out of it But you hear that so I put it there In any case, the roll up takes the current aggregation level, the current level of detail for the data that you're interested in And just moves to a higher level of detail or skips some of the dimension So there are basically two ways of summarizing the data that you're interested in One is really climbing up the hierarchy I move from days or daily information to weekly information to yearly information I'm losing detail basically, this is what's called a hierarchical roll up Or I could just say, well I'm not really interested in all the different products and having each value per product But whatever the product is just tell me what the sales were So I lose a whole dimension, this is called a dimensional reduction And there could of course be any mix between them So at the same time lose dimensions and lose some level of detail in some other dimension And basically it's always a summarization of data getting the bigger picture So what are the two possibilities? 
As I said a hierarchical roll up will consider some of the dimension that I've chosen Oh there's a different dimension, so hey hey, like that So for example here the location dimension And then my query may be concerned with all the cities that are involved in what I've sold in every city And what I do doing a hierarchical roll up, I move to the district information Or I move to the region information so it could be possible And then I have aggregated data I can't see what happens in every city but I have fewer values because I'm only interested in the regions now Can happen Second thing that can happen is the dimensional roll up What happens here is that for all the dimensions that you have you just drop one of them So for example the client dimension, I'm not interested in the client anymore, just aggregate all the clients Doesn't matter which client it actually was Again I'm losing information getting the bigger picture Clear? Everybody understood? No, it's not too difficult So what happens when you roll up over the top? So you are on the highest level on a yearly basis, what happens if you kind of start rolling up again? That will basically reduce your data warehouse to a single number Which means the single fact, if you affect the sales Everything you sold everywhere for every product at every time is just aggregated into one big number, the all number And this is the single number that describes your business in a way You can't get too much from that but this is basically what happens So if you kind of start at some product level and then aggregate, aggregate, aggregate And end up in the all section, so in the APEX Then basically you have lost that dimension So that's what happens The opposite of the roll up is the drill down So you drill down into the data, you look at more detail level And it's very often also called roll down Because roll up, roll down, drill up, drill down Just a de-aggregation operation Of course you have to see that when introducing new dimensions You're not interested in the cities anymore, you're interested in each individual store You've got to have the data to do that So you cannot drill down below the level of your dimension If your dimension is built on days, then you can't have hourly data If your dimension is built on weeks, then you can't have daily data It really depends on how you build, how you plant your warehouse And it's a very costly matter If at some point you will notice that you need more detailed data And you go like, okay, where do I get the customer data of the last 20 years from... Very difficult Of course the more detail you add in your warehouse, the bigger the warehouse will be So it will cost you analysis time It's a trade-off again, you know, you have to decide as a builder of the data warehouse You have to decide what level of detail is needed for strategical decisions And how much will it cost to actually get this level of detail On one hand in storage space, on the other hand in time for query evaluation The bigger the database, the harder it usually is to do a proper query aggregation So to show you how those two principles rolling up and drilling down can be combined We could look, for example, at some alcohol business So we have a table that says for a set of bars, here's Joe's bars and the Salitos and the Roots What did they sell per week? 
So for example the last three weeks or something like that I find out, for example, in the first week the Salitos was really well performing And kept on for the whole week, for the last three weeks, it was always the top performer But on the other hand, Joe's seems to be going down So I can see a lot of things in this data So for example, Joe's might need an advertising campaign to get more customers Okay, with this information I'm going on And now I say, well, in any case, how stable are my sales over the different bars that I own I want to roll up the bar information This is a dimensional roll up because I'm totally losing the bar information All the bars are here, aggregated, all the data And I can see that, well, my business over the weeks is kind of rather stable Not depending on the locations too much So the information that Joe's has sold less week per week is lost I can only see what happens in an aggregated manner And now I might be interested in, well, so what types of drinks did I sell? What, well, producers should I talk to to get more rebate and thus have bigger profits I will do a drill down by the brand, that is, add another dimension here, brand Stay with the time dimension here And now this number is kind of split into the different numbers Walter's, Beck's, Krumbacher, so it's all about beer here And I can see that I, for example, well, probably should talk to Walter's to get a better deal on what I'm doing And the sum of all that is happening here Is exactly the rolled up number from above I just see the number in a more detailed level per beer brand Everybody understood? That's a typical way you analyze data You have a question in mind, you have a task in mind You know, like planning an advertising campaign for one of your bars Which one should it be? Well, probably Joe's And I should talk to some producer of beer to get a better deal Which one should it be? Well, probably Walter's or something, yeah, that's basically what you do The next very often occurring operation is slicing And slicing means that you cut the cube This is where the slice comes from, you take slices Because you're only interested in so many values If you consider your cube, not a very beautiful cube, but anyway And you have the dimensions here Then cutting out single values basically means a selection So you're cutting out some slice of the cube And you aggregate everything that is in that slice And the rest is just projected away So all these parts that are not interesting for you are just projected away That's basically what you do So for example, you could project on the geographic location or on the times Because you're only interested in laptops, a single product Not the whole product dimension, but a single product, which means a slice You still keep the four dimensions of geo-diamond And the whole dimensions of geographic location and time In every detail level you might want to But still, it's kind of like a projection on the store and the time You need the full thing here and the amount You need the amount as the sales fact here And then you do a selection on a certain article Okay? Well, it's basically what it amounts to You have the product dimension, which spans the whole cube So here, for example, cell phones and here are laptops and here are whatever And you cut out this slice You project the rest away I don't need this, I don't want this And you keep all the other dimensions This is what slicing means Okay? 
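Spelled out in plain SQL over hypothetical flattened tables (bar_sales with bar, week, brand and amount; sales with store, week, article and amount), the operations just described look roughly like this:

  -- Sales per bar and week (the spreadsheet above)
  SELECT bar, week, SUM(amount)    FROM bar_sales GROUP BY bar, week;

  -- Dimensional roll-up: drop the bar dimension, keep the weeks
  SELECT week, SUM(amount)         FROM bar_sales GROUP BY week;

  -- Drill-down by brand: add the brand dimension back in
  SELECT brand, week, SUM(amount)  FROM bar_sales GROUP BY brand, week;

  -- Slice: fix one member of the product dimension, keep store and time
  SELECT store, week, SUM(amount)
  FROM   sales
  WHERE  article = 'Laptop'
  GROUP  BY store, week;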
Basically, it corresponds to the where clause in SQL Where you say where name is Maya or something like that Same thing here, only multi-dimensional Dice is very similar, but dice is basically a slice over a range If I say I don't want only the laptops, but I want the laptops and the cell phones This is two slices, which makes basically a small cube or a dice So basically you cut this out and again project all the rest away But now what you cut out is not, doesn't lose the dimension But still keeps the dimension here, some range Basically it is a range, a select and the selection has to be a conjunction of values You can say something is in a certain set of values or something is smaller than 5 Or bigger than 10 or something, or between 5 and 10 or something So in any way you can define a range You can use the selection here, that's the basic idea The other thing, all the other dimensions stay intact And again you have the fact measure as the central entity here stays exactly the same as for the slicing So dicing is multi-dimensional slicing When slicing you lose the dimension, when dicing you keep that dimension And just use a range So for example you could do an equality select on two-dimension products in time So you want the article ID to be a laptop and you want the month ID to be December I'm interested in the laptop sales in December Then I will slice the laptops And I will slice the time, this is December And the intersection of that, so basically Yeah It gets clear, these are two slices The intersection of that is a dice There we go, stating the fact, stating the amount of units sold of type laptop for the month of December So this is typical dicing Good, last thing I want to talk about is a pivoting, pivoting or rotating Means that you rearrange data for viewing purposes The data stays exactly the same, so you don't lose dimension, you don't gain any detail You are stuck with the same data Only sometimes when you do spreadsheets it can be very comfortable to be able to rotate it in a way And say, no, I don't want the bars by time, I want time by bars Same information as a way of presenting that information So pivoting is just for information presentation for nothing else So basically the simplest view of pivoting is taking two dimensions And then aggregate the measures, what you get is a spreadsheet Typical Excel type spreadsheet, we have one dimension, second dimension And all the table fields are just filled with the aggregated values of the measure That is because people cannot read three-dimensional data or four-dimensional data too well There are some ways to visualize that, but still most reports that you will find Are strictly concerned with two-dimensional spreadsheets And this basically is the typical way of doing it, which is called cross-tabulation So you have one dimension cross, a second dimension, then you build the table From that was all the measures If I have, for example, the fact table sales here and dimension one location, dimension two time Then I have the key relationship between the dimension, location and the facts And the same goes for the time And in this dimension I am not really interested in Now I want to have cross-tabulation of the locations per time aggregating the amount Which is basically the measure that I am interested in the sales fact What can I do? 
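One common way to get such a cross-tabulation out of a relational store is conditional aggregation; this is only a sketch with invented names, and SQL:1999 grouping extensions or a vendor pivot operator would do the same job:

  SELECT l.city,
         SUM(CASE WHEN t.weekday = 'Mon' THEN f.amount ELSE 0 END) AS monday,
         SUM(CASE WHEN t.weekday = 'Tue' THEN f.amount ELSE 0 END) AS tuesday,
         SUM(CASE WHEN t.weekday = 'Wed' THEN f.amount ELSE 0 END) AS wednesday,
         SUM(f.amount)                                             AS total
  FROM   fact_sales f
         JOIN dim_location l ON l.location_key = f.location_key
         JOIN dim_time     t ON t.time_key     = f.time_key
  GROUP  BY l.city;

Rotating the table then just means swapping which dimension feeds the CASE expressions and which one goes into the GROUP BY.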
I can pivot on city and day, which basically puts the city down the axis here and the day down this axis here, and gives you all the sums that you need for aggregation. These are the basic amounts that have been sold on Monday in Auckland or something like that, and you can see that in that way. If you now say, I want to pivot it on day and city, what happens basically is that this spreadsheet is transformed into this spreadsheet. The data is exactly the same. So the 60 stays the 60 here. It's just that the axes have been exchanged. And sometimes it's easier, either for printing when doing the report or for perusing the information, so for getting the information across, to do it one way or the other. And you should be able to do that in an OLAP scenario without having to do it manually or on the basis of something. Therefore you need this pivot statement. So nothing happens to the data, same subtotals, same things happening, same data, just a different presentation of the data. Nothing more, nothing less. Good. So when you express OLAP operations, it's often very hard to do that in a query language. I was talking about SQL because all you guys know SQL. You all had Relational Databases 1, some even Relational Databases 2. So you can write SQL statements. Well, it may take some effort, but still, it's possible. If you consider our friends from the economical sciences, guess how many of them know how to write SQL statements. Next to nil, they have no idea what it means. So how could they do the business intelligence, how could they gather the business intelligence that would be interesting? Very difficult. So something like that doesn't happen. This is why OLAP is very often done via client interfaces, graphical user interfaces, that hide away the actual implementation of how the query is posed to the data warehouse, the actual SQL statement or whatever it might be, from the user. And then what you get is kind of something like that, just a report where you can see the numbers and the figures, and you just click through and use all the slicing and dicing that I've been talking about, or drill up, roll down, by commands rather than by writing the SQL statement. So they are very convenient tools already. And in the next demo we want to show you some of the tools so you can see how to interact with them, if you forgot about your SQL or if you're an economist. Okay, so we've seen some of the basic operations one needs to perform when it comes to analytical processing. I've selected here a list of software solutions which show you how such operations can be performed and how the graphical user interface looks. The first example is from Crystal Decisions. They have an Excel-like interface, a tabular interface where they present their dimensions. For example here we have the product dimension and the store dimension. They are presented as expected, one on the column side, one on the row side. It's pretty intuitive to perform something like this, we probably all know it from Excel. When it comes to three dimensions things become a bit more complicated. Their solution was okay, we'll add the third dimension like different pages, and we'll have the first page with the first supplier with the product and the stores, the second, so this will be supplier one, supplier two, supplier three, and this way we can still navigate through our data although it is three dimensional. We can reduce it to two dimensions, again pretty intuitive. What do we do if we have more dimensions? 
Let's for example consider a cube which has four dimensions, the supplier, product, store and time If you want to reduce something like this again to a two dimensional space then you have to use some tricks Like for example you'll represent the time in days as a slider and you'll choose the page you are currently working on For example we are seeing the data from Monday for store, product and I can also vary my measure to see by volume of sales, by turnover and so on So different kind of controllers to reduce this to two dimensional space again, just easy and cheap tricks Another solution which is also used by IBM here again the crystal decisions using nesting by some axis For example I am going to hierarchically organize my products, under store, measures, by revenue and cost and then describe the products Again I can also slide through the time dimension here with another controller Good, let's go to the IBM InfoSphere, they offer a web interface Again you can choose the dimensions you are interested in, you can choose your cube for example then say ok I am interested in these three dimensions And I can also drill down through these dimensions and I am interested in the time dimension by quarter level, in the store level and product type Granularity, I want exactly these two measures to be shown and then I get my nice report here Again it's a nested solution and I can already press on controllers like this one and sort my information and maybe get additional knowledge about how my data is distributed Again not very complicated A drill down operation can then be performed by going into the dimensional box and choosing different degrees of granularity I am going from year to quarter or month, I am going to sort the information if I want to, pretty easy stuff An interesting to note tool again for the IBM is that during this reporting one can also perform trend analysis, they can sort or they can see ok this is how my data evolves over time If I want to and I can gain knowledge about, I don't know some events that have happened in quarter two, for example something happened there, some big sales I have opened a new shop and I can consider this when I am looking on my sales Another interesting reporting tool is the Palo Technologies tool, completely multi-dimensional and integrated into Microsoft Excel They offer dynamical reports, this is an Excel sheet where you can click and select your measures, your dimensions, your granularities Dynamically all the data you see here will change, all the graphics will be adjusted This is why you actually, this only works with multi-dimensional saved data warehouses Because if I would have a roll up on a physical level and I am going to press on the year and say I want the sales for 2010 And this is not materialized in the right way, then I need to wait And then I will see what happens when for us the power point is saving and we will have to wait for one or two minutes until Excel wakes up and shows me that sales This is not nice, with Palo Technologies it works because they have everything pre-aggregated but they can do it only up to 50 giga But as I have said it is great, integrated in Excel and for this reason you can see the results by second You can again perform drill downs, for example here I have tried showing what it means to drill down through the product Here are here, I can for example choose here from their example PCs, portables, monitors and so on and stop myself with a certain product, performance lines for the TFT monitors for example 
and so on. This is how OLAP graphical interfaces look today. It is not something extraordinary, but this is how the interface looks and this is what the business analysts see. Yeah, and the next thing is: how does it work? We were talking about the operations just now, but of course it is very important to see that this has to be brought down to the actual underlying system. And we have basically two underlying systems that are possible: you can either store the data in a multidimensional database, just store it as a bunch of lines that you can skip through with a Z-curve or a Hilbert curve or whatever, or in a relational database. If you have a multidimensional database, the good thing is that the OLAP interface can immediately work on the multidimensional representation, because everything with the aggregations and so on is already there and you can address it directly. If you are working on a relational database, things are slightly more difficult, because you need a ROLAP server, relational OLAP, that actually does the calculation, that basically translates the OLAP queries you have into the relational way of expressing these queries, which basically is SQL. And then at the front end you have the presentation interface, which is something like Sylvio just showed a moment ago, where you can interact with the data or see the reports, build statistics out of them, graphs or some pie charts or whatever you need. And of course we were also talking about HOLAP, the hybrid approach, where you basically build on a relational database system but also keep a small multidimensional representation of at least the aggregates and some of the interesting data, and then do a kind of mixture between the two: if you need to drill down you get the data directly from the relational store via SQL commands, and if you are working on aggregates you get them directly from the multidimensional store. The HOLAP server basically has to coordinate what is going on, which data goes where and comes from where. In any case the user just sees the presentation layer and interacts only on the presentation layer. What happens underneath with the OLAP interfaces, or how the queries are performed, is usually not what concerns the user. OLAP systems therefore build on a typical client-server architecture: you have the big databases, you have the big warehouses, and you do not want everybody to work directly on these warehouses, because if everybody did that you would not get any information out of them, it would just cost too much time. So you have two choices. You can have a client that is thin: ship the query, get back the data, prepare the data for presentation. Or you can have a client device that is thick: ship general queries, get a lot of data back, aggregate the data on the client device and then prepare it for presentation. The advantage is that if you do the aggregations on the client device, the server is somewhat less loaded, which is a good idea if many people use the server concurrently. On the other hand you have to ship a lot of data, so if the network is the bottleneck then having thick clients is definitely not a very good idea. Again it is a trade-off, and you have to figure out in your environment, in your company, what actually works best. In any case the client device is where the manipulation of the data and the work with the data is done, because as I said it is an interactive process where you really address the data and then start drilling down or take out dimensions, just as your information
needs changes, as the information that you are interested in changes and this is kind of done on the client device by invoking the OLAP operations The server on the other hand is responsible for providing the data and either immediately aggregating them or just chipping them so they can be aggregated on a thick client device How the actual data is provided depends on what you are using, if you are using multidimensional OLAP, relational OLAP, hybrid OLAP or whatever The OLAP server is definitely a big engine so it should not be the bottleneck otherwise your decision-making or your decision support systems will get into serious trouble That is specifically designed to support and operate on multidimensional data structures The point here is that sometimes the structures that we know from databases don't work So for example there is a big discussion at the moment going on whether column stores or row stores, traditional databases are row stores You have the table and you just store every row If it wouldn't be more sensible to store the columns instead So all the data for one attribute should be stored together and then the next which is kind of the largest denormalization that you might find So basically every attribute is its own table that is linked to the other data Of course if you do it like that, if you have the column stores you can aggregate very quickly because it will be just a scan through the data and then you know the average or the median or whatever you are interested in Whereas if you store it row wise you have to jump over the records and then find out what the data are It depends on what you are actually doing but there is a lot of interest and a lot of new developments at the moment in server technology for OLAP And I think we will go into column and row stores probably in a little detour one of the next weeks Anyway the OLAP server is optimized for flexible calculation and transformation of the raw data based on the relationship that are given by the OLAP client So if you want to drill down then it's a denormalization, you go into more detail If you want to roll up then you just aggregate it and kind of sum it up or use products or whatever you need The OLAP server may either physically stage the multidimensional information which would be MOLAP It has a multidimensional database, it may store the data in relational database and simulate the multidimensional access to the data, OLAP or basically mix it somehow with the hybrid OLAP So when asking the query how does it work we have to distinguish between basically two things relational on one hand and multidimensional on the other hand In any way what you present to the users is always multidimensional, so these nice and nifty interfaces that we just saw are always multidimensional Sometimes a little bit more complex, basically it comes down to how do you visualize three dimensional, four dimensional, five dimensional data You build spreadsheets with different dimensions and it's kind of difficult to work with these data, but still it's all multidimensional information Basically what it comes down to is this kind of cube down here, of course you can't visualize it in this cube This is kind of, these are slices, you just see the dice So for the analyzing purposes the data should be presented in a form that makes it easy to understand And easy to see what the basic point is or what really happens If you want to query the data you need a query language And what is the query language for OLAP comes immediately into mind As I 
said there are two ways, there's the multidimensional way and there's the relational way So in the relational way of course SQL is the ruling query language And SQL92 is not able to deal with most of the OLAP problems or at least only in a very restricted sense We will see a couple of examples in a minute But the SQL standard has been extended for SQL3 or SQL99 So in 99 most of the concepts of OLAP became part of the SQL standard So all the different vendors, be it Oracle or DB2 or Sybase or whatever Now offer multidimensional functionality for SQL queries Second possibility is MDX That is a language that has been designed by Microsoft For expressing multidimensional formula for molap or rollup stores So this is kind of a proper language But in general there is no the query language for OLAP It's just a mix of what is there already If we look, well let's go a couple of slides still So once we introduce the new functions that should be a good time to break probably So looking at typical queries we can say that if they are in SQL We find it very difficult to see what they actually do So what does this query mean? You have to find ok, here is the fact table, these are obviously dimensions And here is one dimension that is kind of restricted by some values So this would be a what operation? Yes, this would be dicing, exactly And we have a second one here that is set to some value which means, yes, slicing, exactly And we have group by operations here And we have some aggregation functions here So it's very difficult to understand what the thing actually does So it's much easier to say, ok, I want to slice it, I want to dice it I don't want to extract it from the SQL So the idea is to select by attributes of dimensions or group attributes of dimensions Aggregate some of the functions that you need And this can be pretty difficult functions So for example statistical functions like the median or standard deviations are definitely not part of the SQL standard That are not yet provided for And it was one of the tasks of SQL99 to prepare SQL for all up queries So they introduced some new commands, grouping sets, roll up, cube, typical commands that you would find in a all up environment They also added some new aggregate function like the median and the mode and whatever And last but not least the top K type queries Very important for SQL3 If we look at SQL92 or SQL2 for a minute We will find that it's hard or impossible to express multiple aggregation, comparison or the reporting features that are needed for all up queries And when you try to do it, you will immediately get a performance penalty Because building these functions needs joints, needs set operations And a lot of correlated sub-queries or view queries that have to be built beforehand So all the things that we learned about in relational databases to make up for the most time on one hand query Optimization on one hand query, execution So most of these things really have terrible performance And of course you don't have the statistical functions that you need in the normal standard So looking at multiple aggregation in SQL2 We want to create a two-dimensional spreadsheet that shows the sum of sales by maker or in car model for example Then every subtotal that we have to do And also here, needs a separate aggregation query So to get this, there has to be an aggregation query here To get this, there has to be an aggregation query here To get this, there has to be an aggregation query here Same here and then one for the APEX for everything 
that you need So this is four different queries I have just written them here, select model, make some amount from sales group by model and make Select model, then union, one of the functions with a very big performance penalty Then I do the same for the model only, for the make only And always union the results to get the... Sorry, to get the final answers Really huge performance penalty Same goes for comparisons, what happens if you have comparisons? If you compare something, it's basically different than selections Compare the results of one year for another First query for the first year, then query for the other year And then compare the differences or whatever you do So basically it's a self-join, you create a view It basically gives you the possibility to get the aggregated information for all the years And then you query that view by saying Okay, I want this year and I want the next year And this is how you get the averages between the years So what happens here is a self-join between two instances of the year, the current year and the last year Where you basically relate the sales from the current year to the last year That's the self-join, over view, that is if not materialized Evaluate every time you ask query We already get a feeling this again has terrible performance costs Very bad idea, same goes for the reporting features in SQL 92 How do you do ranking? Just not there How do you do entals? So the top 10%, the top 20%, the top 50% Top half bottom half, something like that You just cannot do it, quantiles, quartiles, very important in economic applications Just not easy to express, median I've been doing a query in one of the relational databases once, exams Where I ask people to build a query, to build a normal SQL query that will calculate the median Very little people actually got the right statement that does it And evaluating that statement took for ages Very bad idea to do that over a terabyte of data Same goes for running totals, moving averages, cumulative totals All these signs are not supported by SQL 92 For example, if you wanted to have a moving average of a 3-day window of totals for each product Again, you would need to build a view first that does the basic aggregations And then you would have to consider a self-join over the view Where you basically take the end time from one view smaller than the start time from the other view And the end time from the other view is smaller than the start time plus two So you get the 3-days window basically and grouped by the time that you do it That will calculate you the 3-day window Basically, again, it's a self-join over here So v-sales and v-sales and you have to create the view either materialize it or calculate it every time a query like this is post Bad, terrible performance again So what do people do? Well, there are some in-built functionalities in databases that can do reporting And some of these functions at least very quickly, store procedures and stuff like that And this is actually what has been done in SQL 99 There have been a set of grouping operators that have been added to the set, to the mix And these are basically extensions to the group by operator that are very often used, the grouping set, the roll-up and the cube And with that we make a short break, 10 minutes, so 20 passed, we'll meet here again Are there any questions? Good So what are the extensions that are valuable for doing the actual roll-up aggregations that we need to do? 
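For reference, the union-of-four-group-bys workaround described before the break might look roughly like this. It is only a sketch, assuming a table sales(model, make, amount); the CAST(NULL ...) placeholders stand for the "all models" or "all makes" value of the column that was aggregated away.

  SELECT model, make, SUM(amount) AS total
  FROM sales
  GROUP BY model, make
  UNION ALL
  SELECT model, CAST(NULL AS VARCHAR(20)), SUM(amount)   -- subtotal per model
  FROM sales
  GROUP BY model
  UNION ALL
  SELECT CAST(NULL AS VARCHAR(20)), make, SUM(amount)    -- subtotal per make
  FROM sales
  GROUP BY make
  UNION ALL
  SELECT CAST(NULL AS VARCHAR(20)), CAST(NULL AS VARCHAR(20)), SUM(amount)  -- apex, grand total
  FROM sales;

Four scans over the fact table plus the unions is exactly the performance penalty being criticized; the GROUPING SETS, ROLLUP and CUBE extensions discussed next express the same result in a single statement.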
And as I said, they are basically extensions of normal group by operator of SQL The grouping set, the roll-up and the actual cube So the grouping set replaces the series of union queries that are only used to aggregate the subtotals For example, if you have something like a select department name and then I want the job title and count star So I want to count how many of each job title are in my department And a union with the job titles that I actually have I could also use a grouping set that takes both of the input parameters So this is basically the job title counted by department name as a grouping set And the union all is a very inefficient thing to do because it's a set-based operation This statement with the grouping set can be very efficiently evaluated One thing that happens, however, is that I introduce some null values here Cast null as character 10, for example, for the job titles What are these null values that I introduce? Basically, they are the subtotals or placeholders for the subtotals that can be calculated The problem is, as soon as there is one null value in the data So I don't know how many of some job titles there are in the department What is the total? How do I compute a total over null values? Do I just take the one that I know of or do I take a possible maximum or what do I do? That is one of the problems that cannot really be solved by just looking at the null value So there could be a null value that is generated by the grouping function as a subtotal Or it could be in the data anyway And to be able to tell the difference, there is a return value of the grouping function We just state grouping of job titles, for example, which returns a zero for null in the data And a one for a generated null So the computation has either not yet been done, if there is one Or it's impossible because there are some null values in the data and you can't aggregate over that Okay? So there is a possibility to distinguish between aggregated and generated, between data nulls and generated nulls Newly generated nulls Basically what happens? With these grouping sets, you can basically do all kinds of multi-dimensional aggregations And one of the operations that we were concerned about was the roll-up What is a roll-up? A roll-up is basically, I skip dimensions, I aggregate entire dimensions And I want the subtotals over these dimensions in my data So I prepare the table by using grouping sets that add additional lines for the subtotals And then, if I do roll-up, I can in a certain direction lose details So for example, if I have ABC as dimensions The next step would be to lose the information or to aggregate all the dimensions about the C Leaving me with AB as a grouping set The next step would be losing all the dimensions about the B Leaving me with A And if I also lose the information about the dimension A, I will be at the apex That will be the one number that describes everything that is my company So basically, if I say group by roll-up ABC, it means I take the following grouping sets into account And just always lose a level of information So if I put n elements in the roll-up set, that means n plus 1 grouping sets All the different possibilities plus the apex And note that the order here is really important Because if I say ABC, I get this grouping set If I say CBA, the first dimension to lose is A So I get the CB, I get the C and then nothing Okay? 
Or the apex It's a different way of losing the data The dimensionality that I want to roll up always has to be the last in the list for the roll-up statement And the direction that I want to go So the information that I want to keep longest is the first one in the grouping statement Okay? Everybody clear? Well, let's look at it in an example So we could have a roll-up operation For example, I want the sum of the quantity Quantity is my measure in the fact table And I'm interested in years and brands So how many cars of a certain brand did I sell per year? And I do a roll-up over year and brand What does a grouping set basically do? It introduces some new rows for considering the subtotals So the basic data here per year and brand is this one 2008, I sold 250 Mercedes And I sold 300 BMW And I sold 450 Volkswagen And 2009, I sold only 50 Mercedes And so on. This is a real data. This is uncompressed The grouping set introduces rows that are predefined with null values Or kind of like defaulted with null values To keep these different subtotals So for example, if I skip the brand, which is the last in my roll-up statement I will add a row that is only concerned with the year In 2008, I sold a thousand cars Not looking at the brand In 2009, I sold 400 cars And if I also lose the year, I'm at the apex Which means total sales of cars Every year, every brand is 1400 So these green rows here are introduced by this somewhat clumsy union statement That we saw and have a default for the different subtotals of zero That is then calculated to the actual subtotal By using the roll-up here, I state that the first thing I'm interested in is the year Not looking at the brand, and the year and the brand And not looking at everything, the apex The year and the brand, only the year and the apex This is basically how roll-up works in SQL 99 Yes? It's this table, it's the whole table The presentation layer has to work on something and an aggregation of the subtotal data for presentation purposes Getting the subtotals from this table is a single query As I said, here is the null values that have been integrated If I use a reporting function, this will come back as one, because this is a created null For example, here, a null, because I just don't have the data for that This will be a null in the data Everybody understood? These nulls that I have are different, maybe I should write this one in blue then This is a different null that returns a zero in the reporting function The red nulls return a one in the reporting function So I can understand that this line is a subtotal line, the green one, and the blue ones are data lines, though they contain nulls This is also different for the presentation tools, because they can ask for what exactly they want to know Once I have to aggregate, I do a roll-up of a brand I don't want to do a dimensional roll-up, just lose the brand information Single query to this table And I lost all the dimensional information about the brand So this is basically here and here Good Is there a list of the people who are in the company? Null No, no, it can be at the end, it can be anywhere, so there's no roll Well, basically a relation in SQL means a set, which is basically unordered It really depends on the implementation, on the vendor, where these lines are put Usually they are put at the end of the table You never know, so going by row IDs would be a bad idea Good More questions? Everybody clear? 
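To see the year and brand example as an actual statement, here is a minimal sketch, assuming a fact table sales(year, brand, quantity) holding the car sales numbers just shown.

  SELECT year,
         brand,
         SUM(quantity)   AS total_quantity,
         GROUPING(year)  AS year_is_generated,   -- 1 for a NULL generated as a subtotal, 0 for a data value
         GROUPING(brand) AS brand_is_generated
  FROM sales
  GROUP BY ROLLUP (year, brand);

With the data from the example this returns the detail rows (2008, Mercedes, 250, ...), one subtotal row per year, (2008, NULL, 1000) and (2009, NULL, 400), and the apex row (NULL, NULL, 1400); the GROUPING columns let the presentation layer tell these generated NULLs apart from NULLs that were already in the data.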
That's not too difficult So, the next operator, or the last operator that we have is the cube operator The cube operator is for building the whole cube So what we did now was basically taking the data of the spreadsheet And getting the subtotals for my roll-up to or down information By roll-up For cubing, I want the whole data Everything aggregated in any dimension Being able to roll-up and roll-down into any dimension Okay? What do I have to do with that? Basically, I need all subtotals, rolls of roll-up And in addition, the cross-tabulation rolls It's basically all combinations of grouping sets that can be performed on the dimensions If I start a cube with n elements, with n dimensions That means I have 2 to the power of n There should be a large n 2 to the power of n different grouping sets A lot of grouping sets actually So for example, if I have cube ABC, 3 dimensions Time, location, store or whatever Then that means I have the detailed data I have all possibilities to do a dimensional roll-up in any dimension A B loses C, A C loses B, B C loses A I have all possibilities to lose 2 dimensions ABC and I get the A things Okay? Detailed data And highly aggregated Okay? All the possibilities, that gives me the complete cube To show you the image If you have the data cube What is basically done by the cube operator You create all possible spreadsheets 2 dimensional You create all possible 1 dimensional aggregations And finally, running out of colors You create the apex node That aggregates everything So 2 dimensional, 1 dimensional, apex This is a roll-up 1 dimensional, 2 dimensional Is it real down? The cube prepares all the data in a single table Good? Everybody understood? Not too difficult to use And because it's really made in the, or really put into the center Of the database kernel It's an efficient operation So cubing is much more efficient Than building all the aggregate columns With select statements and doing the unions And the self-joins and everything that is needed to do it Okay? Good Well So what happens if I use a cube operator on my example from before So I have the year and the brand And I want the sum of the quantity Quantity is my measure from the fact table I want a cube by year and brand Again I get the blocks of data Having all the years and brands together And here Second one Now I need all two-dimensional data I can lose either the year or the brand Losing the brand Is exactly what we have by the grouping set here brand So the roll-up operation But now I could also use, lose the year and take the brand So what is new are these rows down here They aggregate by brand And again the apex aggregates everything Okay? All possibilities to get the information and to get more detail With a table like that Any presentation tool Can immediately return All the roll-up operations All the drill-down operations that I needed Because it's all in the table Okay? 
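And the corresponding statement for the full cube over the same two dimensions, again only a sketch on the assumed sales(year, brand, quantity) table:

  -- grouping sets produced: (year, brand), (year), (brand), ()  that is 2 to the power of 2 = 4 sets
  SELECT year, brand, SUM(quantity) AS total_quantity
  FROM sales
  GROUP BY CUBE (year, brand);

Compared with the ROLLUP result this adds the per-brand subtotal rows such as (NULL, Mercedes, ...) on top of the per-year subtotals and the apex, so a presentation tool can answer any roll-up or drill-down over these two dimensions from this one result table.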
Very efficient, well, quite efficient to calculate, and very efficient to use. Good. Well, that is basically that, and with that we are at the next detour and look at some of the advanced features, for example moving averages. Okay, so we have seen the main functionality of SQL 99, but we have said, for example, that SQL 92 is not able to compute something like a moving average efficiently. Let's see how SQL 99 does something like this. A new clause which I am going to speak about is the window clause, which allows me to select the moving data for my average. The idea is that I want to specify a function which has to be performed over a set of rows, and the keyword here is OVER, because in SQL 99 this is exactly the keyword for the window clause. So one has to specify the aggregate function, the OVER clause, and then there is the possibility to specify some sub-clauses, like PARTITION BY to specify different groups, an ORDER BY clause, or the window frame for the aggregation. The example for the moving average then translates into the average of the sales, and the window: OVER with a PARTITION BY. The PARTITION BY is not a must, but in this case I have used it to obtain a moving average per region, considered by month, and here I have built a moving average of three, so the last three months. What this query basically does is compute the moving average for the different regions over the last three months, smoothing our sales results. It also smooths away some spikes, like for example summer sales, when school starts, or holiday shopping, which is a great example of when you really want to perform a moving average, because otherwise you see the spikes; you know why they are there, but you are interested in the big picture, not in the locality of the event. So you can do this with the window clause. Another difficult task is ranking. As we have said, in SQL 92 it is a laborious task; MySQL has introduced some specially implemented functions, DB2 doesn't have something like this, and in SQL 99 ranking has become standard. But let's first look at old-style ranking, just to understand what kind of problems this function poses. The old style was using the ROW_NUMBER function, which basically returns the number of the row in the table, something intuitive. You can see here an example of using ROW_NUMBER, again with the OVER clause as a window. The problem: it works great if you do it on something like the sales number, and you have the feeling that the function did the job you wanted. However, if you rank on values that are equal, for example here the sale or the order id is the same twice, you get different ranks. You would say, okay, it's not an issue, it doesn't bother me. Well, if you have a distributed data warehouse, technologically distributed, one part on Oracle and one on something else, like InfoSphere for DB2 in Europe and something else in North America, and you run a ranking like this with ROW_NUMBER, you get results that depend on the implementation. The results are also non-deterministic within the same implementation, because they depend on the physical row order. So then you have issues, you can't compare. This is why you need something which results in deterministic ranking. The solution is the RANK function in SQL 99, with a particular variant of it, the DENSE_RANK function. We'll see both of them. The syntax is pretty clear: again RANK, the window clause, and again, if I want to, I can introduce some PARTITION BY or ORDER BY; DENSE_RANK has the same syntax. What basically happens when
I'm going to perform ranking Over the sales Is that with the rank close I get of course for the biggest sales Number the first rank The second rank for 9000 The third rank twice For the same number of sales And the fifth rank So you can see that here's a gap Four is missing because three appear twice And the fifth rank Is missing because three appear twice For the last number This is exactly what the dense rank Cops with if I'm not going If I'm not interested in something like this I'm going to Want to have consecutive ranks Then I can use the dense rank And it will cope with Such situations Which can again create Null values And then they can raise questions At the interface level Another possibility due to the window close Is group ranking I can perform ranking By regions Or by some other product groups Or so Which allows me to For example To identify the most profitable Channels Of sales For example if I want to see how good I have my sales Direct sales been doing I can see okay In this month of 2009 Direct sales have been great In February Have ranked second In the month of March On the internet The result has remained constant Again I have the first rank Again because I'm going to Partition by the channel So my group is the channel This is my first group This is my second group This is my third group And between each of these groups My rank is resetted to 1 To 0 Clear? How ranking functions? The issue of the null values Is a bit special So in data warehousing One doesn't know At the beginning If the data wasn't there So the null becomes A flag for not having enough data Or the null has been created through aggregation Ranking Copes with null values As treating them The same value So null is equal to null Is the same thing And it also allows For me to see And rank the null values first If I'm interested in seeing Where am I missing any data Or is this aggregated data Or I'm not interested in it Then I can use the null last Close and rank them at the end For example If I'm going to use the null first Close I can see They are ranked I don't have any values here They are ranked at the beginning Of course the ascending Has then an influence Over the amount sold Null's last Has influence again over the null values Ascending Remains the same With the difference The one Previously for the null values Is now four The same happens also for the descending parameter Just for the ranking Accordingly to the sold values So it's actually not that Difficult If I'm going to do a top ranking A top K ranking Then I still need to perform The ranking function here In a sub query with The window close That I'm interested in For example the sum Here is a more complicated query And then I have The outside select Which says Okay I'm only interested in the top five So this has not Become standard to extract the top count In SQL 1999 Another issue Is the person tile Function the end tile As it is called in SQL 99 Which is still Not standard in all the implementation So for example DB2 has introduced it But I think Oracle doesn't support it The basic idea behind The end tile Is that it splits Set of data into equal groups It divides the order partitions Into buckets assigning them Numbers And the buckets differ in size Only by a maximum of one row Let's see an example So if we are going to Perform an end tile Of three over the sales Then we have three buckets One, two and three And they are equally spread Over the data This means that this is my first bucket This is my second bucket And the third bucket has With 
a maximum of one And an element less in rows For example Could have here Another three thousand Which belong to three And of course it would also be possible To have something like this It's still valid Having another record here Wouldn't be valid because It goes against the Entile definition of the difference In cardinality of one record Okay this was SQL 99 With its functions That it supports In order to Offer analytical processing Functionality And further Should I continue We will go to multi-dimensional expressions Which is basically The Microsoft solution For supporting such multi-dimensional operations At the time MDX has become Spread in the Data warehousing community Microsoft had A pretty big Market share And it Was able to impose Its standard It's not a standard But it was able to impose it And it was able to Use the most used Query language in the Data warehousing Well it's not brilliant But it is widespread So this is why we should Speak about MDX also I just want to mention that Oracle also has Something like this Of course they said Why should I use Microsoft technology I want to develop my own Only and only they use it The idea with MDX Is It was developed As an interface Between the graphical User interface So what users see And the data for the mall app system Of course If I want to develop new functionality In a data warehouse For example I want a button To hold the mall app Server is proprietary technology So I can't start programming Into the code of some vendor The idea was Okay we'll send our Consultants And they Will write some MDX queries To support this moving average Link it to the Graphical user interface And voila The functionality for the data warehouse This is why they added flexibility Through this query language The relational databases Have said okay If this is becoming such a standard Why not allow it also in roll up So they've said okay We'll input our MDX Edit box and we'll allow Also MDX in roll up But then we just need another engine Which adds a bit of complexity And translates MDX into classical SQL 99 Stupid but it was needed Because people started working with MDX In order to add new functionality It was easier to understand And they said okay We are losing clients We should implement it too So let's see How MDX looks like The syntax is similar As expected to the SQL We still have select from where One example of an MDX query Looks like this select Country, area, some other areas City We can already see That it interferes with how the data Is displayed I want to see all this Information on a column Then I'm going to represent I'm going to represent the time dimension On rows You remember nested interfaces With two dimensions, three dimensions And so on Where I have columns and rows This is exactly what MDX Is able to express I want data on columns I want data on rows And I want to see How MDX looks like And I want to see How MDX looks like I want data on rows I want this data from the sales cube And I'm going to impose Some Slice restrictions For example On the time I want the 2008 year data And I want all the products It's pretty intuitive So I don't perceive it as difficult This is why we'll go through The basic elements Which link it to Query languages like SQL It supports enumerations The elementary nodes From the classifications level Can therefore be enumerated We've seen in the previous Query I've enumerated the country Area And something which could appear Different Frankfurt am Main Is a collection of strings If I want for it to interpret 
it As one string I have to use The square clammers Square Claimers Brackets I am also able to generate elements For example I don't want to write All the children of the Countries level I am also able to write All the areas I use the children Generative element This generates Neither Zaksan Bayer And everything Which is in the children Of the country level I can do the same With parent reaching Again the country level And this is also very useful For the time dimensions For example I can see All the members of the quarter Classification level I can also Generate, functionally generate Sets, for example I want all the descendants Of the United States On a city level based I get all the cities I can generate The descendants of a certain geography I can enumerate All the cities For example of the United States And France just by generating The descendants of this geography By city So It's closer to the human language But it still is classical Square Something which you have Already seen today In the graphical user interfaces For All operations Is these Nested dimensions For example here In order to create nested dimensions I'm going to use a cross join Between this set And this set And I basically obtain This here I obtain the sales On columns Based on the country Some regions Also some cities So you can see there's a mix It's not by countries It's by country regions and some cities For different Producers Or retailers And I can also use Another dimension Like for example time on rows This is exactly the functionality Which we need In order to build such interfaces Easily Again it allows also Relative selection I'm going to If I'm interested in the last quarter Of 2008 I just say time dimension 2008 As a representant And I want the last child of 2008 I get the last quarter Last quarter because The quarter is the smaller granularity Of the year Representant of the time dimension If I'm going for the next year I can just say 2009 In the hierarchy I just want the next member I can also say Okay I want to jump two months I'm in the fourth quarter In November I want to jump two months And again January 2009 I want all the representatives Between 2006 and 2009 As expected It will enumerate all these representatives This is very important For creating the interface It helps creating The interface easily If I'm going for Information extraction In my hierarchy I want to know For a certain node What is it in my hierarchy I can say Okay, point level Tell me what it is It's a country Time, the first level So the biggest granularity Of the time dimension It's here And I have of course Some syntax elements For the language I don't know how these are called In English Set brackets Okay With which I can define my sets For representation The text interpretation Is in the case of Frankfurt So that it doesn't interpret That as separate strings And of course The round brackets For Classical usage as in SQL So if I'm going to Have multiple where closes I'm going to use these round brackets Besides the functionality That SQL99 offers Index also offers something Like for example top count Top percent and top sum Which are actually very interesting I've said that if I want to do a top K in SQL I have to do the rank In a separate query I do my rank And then I do a select And say okay I'm interested From that rank in just the first five elements You can do this in MDX Just by using the top count function You can use top count And then say okay In what I'm interested I'm for example interested In the turnover In the 
geography dimension As the regions from Germany And I'm just interested in the top five It's pretty easy The same can be performed With percentage or the sums I can also perform filter functions Again this filter function Is very interesting If I want to compare turnover For different years For example here there's a select function Where I can compare the turnover Again in the regions of the region And I can compare the turnover Again in the regions of Germany Between 2008 and 2007 To see if there was an increase Or a decrease in the turnover Time series are also possible They are also very important For moving in the time dimension Time dimension has a special place In the MDX language I can for example convert With periods to date Between the beginning of the quarter And the current date For example and then I get Okay I'm interested in this date Here and I'm going to set As the beginning date The first day of the quarter Which holds this date for example I can also calculate the time For example here For example here For example here I can also calculate For as is as was type of queries A parallel period Where I say okay I'm interested In your granularity I want to jump three years back And to see how the data Was doing three years ago It can be done with just a function It also supports numerical functions Like covariance, linear regression And correlation These are a bit more complicated examples So they will be probably present In the homework Okay Another interesting evolution Was an interesting attempt Of developing an API Around OLAP based on XML The idea was to Develop an XML for analysis Or it is later was known As MDXML Multidimensional XML Which is actually a wrapper Around XML For MDX It is based on the classical technologies For web services So XML, SOAP and HTTP And it is Again it allows the classical Primitives for something like this To discover and see How my data looks like And to execute In order to execute the queries And get the result At the end of our lecture Let's briefly remember What we have been talking about today So we have started By enumerating The OLAP operations The classical OLAP operations Are the roll-up Which can come in different flavors Like for example If I am going to do a hierarchical roll-up Or a dimensional roll-up A hierarchical roll-up I am going to Aggregate based on the granularity For example in the time dimension I am jumping from days To months, years and so on And a dimensional roll-up If I am going to disregard The whole dimension And say well I don't care about the product I just want the data aggregated Again, drill-down is the reverse Of roll-up And it's pretty intuitive You go from a higher aggregation level To a lower aggregation level To more detailed data What is very important here Is that you can't drill What you don't have So if you don't have the data You can't dig any deeper Other classical operations For data warehousing Are the classic Classical operations for data warehousing Were slice, dice and pivot Slice again in the ware close Of the SQL Dice with ranges And pivot classical For cross-tabulation operations And you probably see in Excel We've seen some example Of graphical user interfaces And then how these operations Actually get to the data The solution for this is As expected through query languages Only that the classical SQL 92 Is not able to cope with the With the multidimensionality nature Of our data So for this An expansion of the query language Was needed So this is why we have spoken About the SQL 99 And 
the MDX query language. In SQL 99 we have spoken about the GROUPING SETS, ROLLUP and CUBE operators, which allow us to go from the detailed data to higher levels of granularity based on combinations of the dimensions. And then we have spoken about MDX, which is an SQL-like language. It is especially used for MOLAP, in order to develop new functionality and bring it to the interface without interfering with the proprietary storage technology, and it was also implemented into ROLAP, by mapping it to SQL 99, for compatibility reasons and for marketing strategies. A small sketch of the SQL 99 window-function features, the moving averages, ranking and NTILE we discussed, follows below for reference. Are there any questions regarding what we have discussed today? Thank you for your attention.
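As referenced in the summary, here is a small closing sketch of the SQL 99 window-function features that were only described verbally: the moving average, the rank variants and NTILE. All table and column names, sales_by_month(region, month, sales_amount) and orders(order_id, amount), are assumptions for illustration, not the lecture's actual schema, and NTILE support still varies between systems as mentioned above.

  -- Moving average over the current and the two preceding months, computed per region
  SELECT region, month,
         AVG(sales_amount) OVER (PARTITION BY region
                                 ORDER BY month
                                 ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS moving_avg_3m
  FROM sales_by_month;

  -- Deterministic ranking: ties get the same rank; DENSE_RANK leaves no gaps;
  -- NTILE(3) splits the ordered rows into three roughly equal buckets
  SELECT order_id, amount,
         RANK()       OVER (ORDER BY amount DESC) AS rnk,
         DENSE_RANK() OVER (ORDER BY amount DESC) AS dense_rnk,
         NTILE(3)     OVER (ORDER BY amount DESC) AS bucket
  FROM orders;

A top-five query, as discussed for the comparison with MDX's top count, would compute such a rank in a subquery and then filter on rnk <= 5 in the outer SELECT.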
In this course, we examine the aspects of building, maintaining and operating data warehouses, as well as give an insight into the main knowledge discovery techniques. The course deals with basic issues like storage of the data, execution of the analytical queries and data mining procedures. The course will be taught completely in English. The general structure of the course is:
Typical DW use case scenarios
Basic architecture of DW
Data modelling on a conceptual, logical and physical level
Multidimensional E/R modelling
Cubes, dimensions, measures
Query processing, OLAP queries (OLAP vs. OLTP), roll-up, drill-down, slice, dice, pivot
MOLAP, ROLAP, HOLAP
SQL99 OLAP operators, MDX
Snowflake, star and starflake schemas for relational storage
Multidimensional physical storage (linearization)
DW indexing as a search optimization means: R-trees, UB-trees, bitmap indexes
Other optimization procedures: data partitioning, star join optimization, materialized views
ETL
Association rule mining, sequence patterns, time series
Classification: decision trees, naive Bayes classification, SVM
Cluster analysis: k-means, hierarchical clustering, agglomerative clustering, outlier analysis