10.5446/16315 (DOI)
Thanks to the organizers for inviting me; it's a pleasure to be here. I'm talking about Fredholm theory and Deligne–Mumford spaces for something you haven't heard of called witch balls. All of this is joint work with Katrin Wehrheim, and you can see what we've written so far in these two preprints. Here's the plan: I'll give you some motivation, I'll tell you what quilts and correspondences are, and I'll tell you the blueprint for the algebra we've come up with for this two-category-like thing called Symp. Finally, in the last section, I'll tell you a new result about Fredholm theory for quilts with certain kinds of singularities, so section four is the PDE part of the talk. Let me recall that if we have a symplectic manifold M, then under favorable circumstances we can define the Fukaya category associated to it. This is an A-infinity category. The objects are certain kinds of Lagrangians inside of M; depending on your situation they'll be required to satisfy some hypotheses. If you take two Lagrangians, then as long as they intersect transversely, the morphism space between them is the free vector space generated by their intersection points, and in general it is the Floer cochain complex. The last thing I want to tell you about the Fukaya category is that there are composition operations that eat d inputs for every d at least one, which is something any A-infinity category is required to have. The d-ary composition operation is defined by counting pseudoholomorphic (d+1)-gons, whose images in M look something like this. This is the main thing we care about in the talk; before I move on, any questions about this? All right. Several years ago, Wehrheim and Woodward defined something they called quilted Floer theory. The point of quilted Floer theory is to build functoriality into the Fukaya category, that is, to relate the Fukaya categories of different symplectic manifolds. In particular, they had this idea that a Lagrangian in a product of symplectic manifolds, L01 sitting inside M0-minus times M1 (the minus just means flip the sign of the symplectic form on M0), should give rise to an A-infinity functor F_L01 from Fuk(M0) to Fuk(M1). If this looks funny to you, I can at least tell you some motivation for why the right notion of morphism from M0 to M1 should be a Lagrangian like this, which is Weinstein's symplectic creed: everything is a Lagrangian, and therefore morphisms between symplectic manifolds should be Lagrangians. I'll tell you more about these Lagrangian correspondences in the next section, but for now you can keep in mind the graph of a symplectomorphism from M0 to M1. Nate, can I ask just a real stupid question? The Floer chain complex here for the morphisms, how is that different if I switch L and L prime? A morphism has to go between two objects, so it's directed. So what's the difference between CF(L, L') and CF(L', L)? They're closely related; in good circumstances there's a duality between them. Okay. Now, it turned out when they tried to carry out this goal that it's a really non-trivial thing to do. On the object level it's not so hard to understand how this functor should work.
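To fix notation for what was just described (this is my transcription, not the speaker's board; the coefficient ring Lambda and the exact hypotheses on the Lagrangians are placeholders), the Floer cochain space and the A-infinity compositions look schematically like

    CF(L, L') = \bigoplus_{x \in L \cap L'} \Lambda \cdot x   (for L and L' transverse),

    \mu^d : CF(L_{d-1}, L_d) \otimes \cdots \otimes CF(L_0, L_1) \to CF(L_0, L_d),
    \qquad
    \mu^d(x_d, \dots, x_1) = \sum_{y \in L_0 \cap L_d} \#\, \mathcal{M}_0(x_1, \dots, x_d; y)\, y,

where \mathcal{M}_0(x_1, \dots, x_d; y) denotes the zero-dimensional moduli space of pseudoholomorphic (d+1)-gons with boundary on L_0, \dots, L_d passing through the given intersection points.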
Well, it's not trivial, but it's not as difficult as what happens on the morphism level, where you're counting something pseudoholomorphic, but it's going to be these funny objects with singularities that are the subject of today's talk. Wehrheim and Woodward's resolution was to study some related objects where the analysis was not so hard, but where the result was not quite this. So the goals of Katrin and me are, first of all, to actually do the analysis for these singular objects; let me call this goal star. The second goal is to enlarge this algebraic framework. There, the algebraic picture is that a Lagrangian correspondence should induce an A-infinity functor; the bigger algebraic framework is that we actually expect there to be an A-infinity 2-category. Just think of this as some kind of A-infinity version of a 2-category, where the objects are some large class of symplectic manifolds, the morphisms are Lagrangian correspondences, and the 2-morphisms are Floer cochains. So the picture is: M0 and M1 are objects, Lagrangian correspondences give you your morphisms, your 1-morphisms anyway, and your 2-morphisms are given by Floer cochains. I'll say more about this algebraic picture in the third section. What I'll do today is, if I have time, recap the progress we've made, and the thing I'll definitely do is explain a new result about Fredholmness of the pseudoholomorphic objects used to define the structure maps. Before I move on to section two, I want to mention the result that Wehrheim and Woodward were able to prove. Like I said, they worked with some pseudoholomorphic things which are easier to deal with analytically, and they got a result which was a little bit different than what they originally aimed at. If you assume you're in either the monotone or the exact setting, then they showed that a Lagrangian correspondence gives rise to an A-infinity functor, but not between Fukaya categories: between these things called extended Fukaya categories, which they invented for this purpose. The objects of an extended Fukaya category are not Lagrangians but formally composable sequences of Lagrangian correspondences. Okay. One final thing: are there any algebraic geometers in the audience? Well, anyway, for the algebraic geometers, something that may have come to mind is Fourier–Mukai transforms. There's something that at least formally seems very analogous in algebraic geometry: if I take an object in D^b Coh of a product of smooth projective varieties X and Y, then it's a totally well-known construction that you can produce from it a functor going from D^b Coh(X) to D^b Coh(Y). Really useful. So the hope is that eventually we'll be able to understand how mirror symmetry intertwines these functors F_L01 with Fourier–Mukai transforms. That's a really quite distant goal. All right. Before I move to the next section, any questions? Does anyone know when I started? I think it said about 9:38 on that clock. Okay, great, thank you. So let me tell you a little bit more about Lagrangian correspondences and give you some examples. A bit of notation: from now on, instead of writing a Lagrangian correspondence as a subset, I'm going to write it as an arrow. All right, let me give you some examples.
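For the record (this formula was not written on the board, but it is the standard construction being alluded to): the Fourier–Mukai transform associated to a kernel K in D^b Coh(X x Y) is

    \Phi_K : D^b\mathrm{Coh}(X) \to D^b\mathrm{Coh}(Y), \qquad
    \Phi_K(F) = R{p_Y}_{*}\bigl( p_X^{*} F \otimes^{L} K \bigr),

where p_X and p_Y are the projections from X x Y; this is the algebro-geometric analogue of the hoped-for functor F_L01.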
If I take a Lagrangian in M0 and a Lagrangian in M1, then I can form their product, and that will be a Lagrangian in M0-minus times M1, so it'll be a Lagrangian correspondence. In particular, if M0 is a point, this tells you that Lagrangians induce Lagrangian correspondences from the point to whatever manifold they live inside. Okay. Something I mentioned before is that if phi is a map from M0 to M1, let's say a symplectomorphism, then its graph is a Lagrangian correspondence from M0 to M1. The last example, which is the most non-trivial of these, is that Hamiltonian actions by Lie groups give rise to Lagrangian correspondences. Say I have G acting on M in a Hamiltonian fashion; the Hamiltonian-ness means it comes with a moment map mu from M to the dual of the Lie algebra. There's this construction called the symplectic quotient, where you take the zero level set of mu and quotient by G, and you get another symplectic manifold; there are some hypotheses. It turns out that this zero level set is a Lagrangian correspondence from M to the symplectic quotient. Okay. Now, there's an important feature of Lagrangian correspondences, which is that you can compose them. Say I take L01 from M0 to M1 and L12 from M1 to M2. Their composition is defined by pretending they're graphs of functions and doing what we would do to compose those functions. That is to say, we first form the fiber product of these guys over M1 and then project down to M0 times M2. An example: the graph of phi and the graph of psi compose to give the graph of psi composed with phi. Now, in order for this to be sensible, we need some results saying that compositions of Lagrangian correspondences are again Lagrangian, and it turns out that's true under pretty general hypotheses. This is a theorem of Guillemin and Sternberg, which says that if the fiber product is cut out transversely, which is to say that the intersection of the product L01 times L12 with the diagonal in M1 is transverse, then pi02 defines a Lagrangian immersion of the fiber product into M0-minus times M2. Important note: there is no similarly general hypothesis you can write down with the property that you'll get a Lagrangian embedding. So if you're going to be in the business of composing Lagrangian correspondences, you have to do stuff with immersed Lagrangians. All right. Before I move to the next section, any questions? Okay, sorry, I guess this is the second half of the second section. I mentioned these pseudoholomorphic gadgets in the introduction, and now I'll tell you what they are: they're called pseudoholomorphic quilts. The way you form these is by understanding what role Lagrangian correspondences should play when they interact with holomorphic curves. We're all used to saying that if you have a Lagrangian, then that defines a natural boundary condition for pseudoholomorphic curves. What the theory of quilts tells us is that Lagrangian correspondences define seam conditions. I'll tell you what a seam condition is, and I'll simultaneously define what a pseudoholomorphic quilt is. So here's a Riemann surface; let's give it some genus. Now let's divide it into two pieces by drawing this circle here, and let's label the chunks it's been divided into, the patches, by symplectic manifolds.
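Before the quilt picture gets going, let me record the composition construction just described in symbols (my rendering, with hypotheses as in the Guillemin–Sternberg statement):

    L_{01} \circ L_{12} := \pi_{02}\bigl( (L_{01} \times L_{12}) \cap (M_0 \times \Delta_{M_1} \times M_2) \bigr) \subset M_0^- \times M_2,

where \Delta_{M_1} is the diagonal and \pi_{02} is the projection forgetting the two middle factors. For graphs this recovers ordinary composition, \Gamma_\phi \circ \Gamma_\psi = \Gamma_{\psi \circ \phi}, and in the Hamiltonian example the correspondence from M to the symplectic quotient M//G = \mu^{-1}(0)/G is the image of \mu^{-1}(0) under x \mapsto (x, [x]) inside M^- \times (M//G).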
Let's label the boundary components by Lagrangians, and then let's label these one-dimensional submanifolds, the seams, by Lagrangian correspondences. Just as when you draw an unquilted, normal Riemann surface and label it with M's and L's that represents a PDE with boundary conditions, this represents a system of coupled PDEs. It represents the following system: we require a map u0 which goes from this left patch into M0, and a map u1 going from the right patch into M1. All points in the boundary circle here are supposed to be mapped by u1 to L1. Finally, the new, interesting condition is that if we take any point on this seam and pair up its image under u0 and under u1 (note that both are defined at that point), then we land in L01. That's what I mean by a seam condition. Is everyone okay with that? An important thing I didn't say is that u0 and u1 are supposed to be J-holomorphic, of course. Can the seam L01 go in any way on this surface, or are there conditions? There are conditions, and for a non-singular quilt what you mean is, let's just say, that you don't allow any intersections of seams. But for the purposes of this talk I'm only going to be considering really specific quilts, so we don't need to think about the question of what a general well-behaved quilt is; I can tell you more afterward. Okay, great. So now it's time to talk about figure eights and witch balls. A figure eight bubble is a new kind of singularity that Wehrheim and Woodward discovered when they were studying strip shrinking. The situation they were looking at is sequences of maps whose domain looks like this, defining a quilt problem. They noticed that if you have a family of quilts with this as the domain, and the width of the middle strip is shrinking, then this funky thing can happen. Like usual, you can have bubbling wherever you like on this quilt, and in particular it could happen in the middle of the shrinking strip. Let me try to draw what happens at an intermediate step. We're puffing out a bubble, but we'd better be carrying the seams along with us when we blow this bubble out. So on the base quilt the seams now look something like that, but the bubble is puffing them out. And if the rate of bubbling is proportional to the rate at which the width delta goes to zero, you're going to get something interesting in the limit: you'd expect to see a quilted sphere sitting on top of a double strip, where the seams on the sphere look like this. This sphere has two circles as seams, this one here and this one here, and they meet at the south pole tangentially. Does that make sense to everyone? All right. Okay, and now it's time for the philosophy portion of this talk. In the late 80s, Floer was studying Floer strips, though I suppose he just called them strips. He noticed that this funky thing can happen where bubbles occur; in particular, you can have disc bubbling on the boundary. And he said, whoa, let's see what hypotheses we can put on our situation so that we can avoid that. But then Kenji Fukaya comes along and says, well, Floer strips are pseudoholomorphic discs, more or less, or inhomogeneous ones, with one input and one output, and disc bubbles are also discs, with zero inputs and one output.
So instead of regarding them as a bugaboo, let's study pseudoholomorphic discs with arbitrarily many inputs and one output. Out of that he got this amazing algebraic structure called the Fukaya category. Then, as a naive graduate student, I thought: let's try to do this with the figure eight bubble. And it turned out that an interesting algebraic structure seems to emerge from that. By the way, the reason this is called the figure eight bubble is that if you look at it from the south pole, the two seams look like a figure eight. All right, great. So the idea that comes out of that philosophy is to count quilts like the figure eight bubble, though let's call them figure eight quilts, since they're no longer bubbles; they're really the primary object of study. Just like Fukaya did, why don't we put some marked points on the seams? And since the south pole is what attaches to the base quilt in the situation where the figure eight emerged, let's regard the south pole as the output. Now, before we can do anything, we have to have some idea of what the inputs want to eat: what should this define a map between? The way you can understand that is to look at a little neighborhood of one of these marked points. We get a little disk mapping into M1 and M0 with these seam conditions, L01 and L01 prime. You can see more or less trivially that if you fold this thing across the center line, it's equivalent to a little half disk mapping into M0-minus times M1 with honest Lagrangian boundary conditions in L01 and L01 prime. Sorry, corresponding to the picture, shouldn't that just be M1 and M2? Oh, yeah, I probably should have reversed these; it doesn't matter, but I'll get confused otherwise. And then we know that this sort of input marked point should eat a Floer cochain in CF(L01, L01 prime). So now we know what sort of input this guy wants. And a removal of singularity theorem, which I might not talk about today, tells us what this output marked point should produce. I'd better give things names: this is L01 prime, and these are L12 prime prime, L12 prime, L12, L01. This guy should spit something out in CF(L01 composed with L12, L01 prime composed with L12 prime prime), which is a Floer cochain group in M0-minus times M2. At that point you've got four Lagrangians in play; how do you decide to group them one way rather than another, or does it not matter? I think you decide to group it like that because you can prove that that's where the limit will live, in that Floer cochain group. Does it have to do with the fact that there's this tangency condition? Yes, that's right. Maybe I should give a zoomed-in view near the south pole: let me cut out a little disk near the south pole and look at it from below, and I'll show you what it looks like. Yeah, the tangency is what makes the result be a cochain in here. I'll come back to this, but I'll also say that the reason the analysis of these figure eight quilts is difficult is exactly because of this local picture. If the seams were coming into the output point transversally, like that, then you'd have no trouble at all, and that's pretty much what Ma'u, Wehrheim and Woodward did. Okay, great. So what kind of algebraic structures should we get? Let's do it over here.
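Before moving on to the algebra, here is the analytic setup so far in symbols (my transcription, not the speaker's board; the correspondence names follow the labeling just introduced). The two-patch seam condition is the coupled system

    \bar\partial_{J_0} u_0 = 0, \qquad \bar\partial_{J_1} u_1 = 0, \qquad
    u_1(z) \in L_1 \text{ for } z \text{ on the boundary}, \qquad
    (u_0(z), u_1(z)) \in L_{01} \text{ for } z \text{ on the seam}.

Near an input marked point on a seam, folding across the seam replaces the pair (u_0, u_1) by the single half-disk map

    v(s,t) := \bigl( u_0(s,-t),\, u_1(s,t) \bigr) \in M_0^- \times M_1, \qquad t \ge 0,

with boundary condition v(s,0) \in L_{01} on one side of the marked point and v(s,0) \in L_{01}' on the other, so that marked point eats a Floer cochain in CF(L_{01}, L_{01}'), while the expected removal of singularity at the south pole says the output lives in CF(L_{01} \circ L_{12},\, L_{01}' \circ L_{12}'').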
There are some heuristics for gluing that tell us what we should expect. I don't think I'll get into the heuristics, but I'll tell you the result. Based on what I was saying over here, counting figure eight quilts of the precise type that I wrote down, with two marked points on the left seam and one on the right, should define a map from CF(L12 prime, L12 prime prime) tensor CF(L12, L12 prime) tensor CF(L01, L01 prime) to CF(L01 composed with L12, L01 prime composed with L12 prime prime). And just to make it really clear what I mean when I say that counting these quilts should give you this map: if I call this map C2, and if I feed y2, y1 into the left seam and x1 into the right seam, then I'm defining the output to be the sum over all z in the intersection of the composed Lagrangians of the count of the zero-dimensional stratum of figure eight quilts passing through those specified cochains; that count is the coefficient in front of z. Putting together all the different maps you get out of this process, for however many marked points on the left seam and however many on the right, what you expect to get is something called C2, which has as input these two Fukaya categories, Fuk(M1-minus times M2) and Fuk(M0-minus times M1), and maps to the Fukaya category of M0-minus times M2. What do I mean by this? On the object level it sends a pair of Lagrangians to a single Lagrangian, which, when L01 and L12 compose nicely, is just their geometric composition; if they don't, then you're going to have to do some work. And on the morphism level (your font size is getting smaller and smaller — okay, I'll try to remedy that, thank you), it inputs however many morphisms you like in each of the Fukaya categories and spits out a single morphism, defined exactly by this process. Right, and I said that we have some expected relations for these maps. In the case of C2, the relation is that C2 is expected to be an A-infinity bifunctor, which is essentially the same thing as saying it's expected to be an A-infinity functor from the tensor product of these two Fukaya categories to this Fukaya category. Let me draw the picture that gives us that expectation. These are all Fukaya categories of immersed Lagrangians? Yes. Does saying something is an A-infinity bifunctor mean that you have to know a whole lot of other higher C's to make that definition, or does all of this structure tell you what the thing is? You don't need to know any higher C's: being an A-infinity bifunctor is defined just in terms of the A-infinity operations on the three A-infinity categories involved, and the C's are not the same thing as those operations. I'll say it in a second, but the A-infinity operations are the same thing as C1. Right. Okay. So here's an example of that gluing heuristic that I mentioned. Let's consider a one-dimensional moduli space of figure eight quilts with two marked points on the left seam and zero marked points on the right seam, and let's think about how this thing can degenerate, i.e. what the boundary of this space should be. One thing that can happen is that the two marked points come together; when they do, we'll get a two-patch sphere sitting on a figure eight quilt, where the two-patch sphere is divided in two by this circle here. Okay. Now other stuff can happen. For instance, the two seams can come together and collide.
Depending on where the marked points are when that collision happens, you're going to get different bubbles. For instance, if the two marked points do not come together as the seams collide, then you'll get two figure eight quilts sitting on a two-patch sphere. Or the marked points could come together as the circles come together, in which case you'll have a single figure eight bubble sitting on a two-patch sphere. Okay, have I missed anything? Oh yes, I've missed two things, which is that you can have Floer breaking at either of these two marked points; that is to say, you can have energy concentration at either of these guys. Okay. And now let me write down the algebraic expressions that correspond to these configurations. If you count quilts like this (oh, and I didn't write the marked points on it): this sphere with two patches, if you fold across the seam, is just the same thing as a pseudoholomorphic disk mapping to M1-minus times M2. So this corresponds to applying C2 to the mu2 product of the two morphisms in M1-minus times M2; and this bar here divides the things you're feeding into the left seam from the things you're feeding into the right seam. Now what's this? This is going to be mu2 of C2(y2), C2(y1). This one here is mu1 of C2(y2, y1). This is C2 of y2, mu1(y1). And then this one is C2 of mu1(y2), y1. Anyway, the fact that these guys arise as the boundary of this one-dimensional moduli space, or are expected to, tells us that these algebraic expressions should sum to zero. And this is one of the infinitely many relations that must be satisfied in order for C2 to be an A-infinity bifunctor. Okay. So I put witch balls in the title: what's a witch ball? Well, figure eight quilts have two circles as seams, and the structure maps in the Fukaya category of a product can be represented as counts of spheres with one circle as a seam. So why don't we study spheres with arbitrarily many circles as seams? Those quilts are what we call witch balls, d-patch witch balls. Counting them should give rise to an operation called Cd going from Fuk(M_{d-1}-minus times M_d), through Fuk(M0-minus times M1), to Fuk(M0-minus times M_d). And I should say that it's only for d equal to 2 that this thing should be an A-infinity multifunctor, which is exactly analogous to the fact that the only A-infinity operation which is a chain map is mu2. Great. Now, for some d we have predicted relations, and we expect predicted relations for all d; we're just working on the algebra. The first relation is that if you sum up all ways of sticking C1 into itself, you should get zero. Now, C1 goes from Fuk(M0-minus times M1) to itself, and as it turns out, basically what I said before, C1 is exactly defined using the A-infinity structure maps in that Fukaya category. So the first relation is exactly equivalent to saying that Fuk(M0-minus times M1) is an A-infinity category; you see this by taking the sphere and folding it into a disk across that one circle. The second relation is equivalent to saying that C2 is an A-infinity bifunctor. You get it by sticking C1 into C2 either on the left or the right, and also adding in the result of sticking C2 into C1 however many times you like; the dot, dot, dot here means fill up C1 with C2's. So we know what R1 and R2 are, and we have an idea of what all the other expected relations should be, but exactly what the algebra should be, we're working out.
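Collecting the last two blackboards in formulas (my shorthand, with signs suppressed and with my best guess at which operations live in which Fukaya category): the count of figure eight quilts with y2, y1 fed into the left seam and x1 into the right seam is

    C^2(y_2, y_1 \mid x_1) = \sum_{z \in (L_{01}\circ L_{12}) \cap (L_{01}'\circ L_{12}'')} \#\, \mathcal{M}^0_8(y_2, y_1;\, x_1;\, z)\, z
    \in CF\bigl( L_{01}\circ L_{12},\, L_{01}'\circ L_{12}'' \bigr),

where \mathcal{M}^0_8 is the zero-dimensional moduli space of figure eight quilts with the indicated marked points. The boundary of the one-dimensional moduli space with two inputs on the left seam and none on the right should then give the relation

    \mu^1_{M_0^-\times M_2}\bigl( C^2(y_2, y_1) \bigr)
    + \mu^2_{M_0^-\times M_2}\bigl( C^2(y_2), C^2(y_1) \bigr)
    + C^2\bigl( \mu^2_{M_1^-\times M_2}(y_2, y_1) \bigr)
    + C^2\bigl( y_2, \mu^1_{M_1^-\times M_2}(y_1) \bigr)
    + C^2\bigl( \mu^1_{M_1^-\times M_2}(y_2), y_1 \bigr) = 0,

one instance of the A-infinity bifunctor equations for C^2.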
And the last thing I want to say before I move on is that this thing I mentioned in the title, this expected A-infinity 2-category, is something we plan to get by using these Cd's as structure maps. I'll tell you one special case of this whole construction, which is when d equals 2 and M0 is a point. In this case, the fact that C2 should be an A-infinity bifunctor means that if you fix L12, then as a formal corollary of that expectation you get an A-infinity functor from Fuk(M1) to Fuk(M2), defined by counting quilted disks that look like that. The reason I say disks and not spheres is that a priori it's defined using these figure eight quilts, these three-patch spheres, and one of the patches is mapping to a point; so you can just delete that patch and flatten it out to a disk, and this is what you get. And let me note now that what Ma'u, Wehrheim and Woodward did is they straightened out the seams in a neighborhood of this singular point. So they studied quilted disks where the seam is this teardrop thing, where the seams come into the output point in a transverse fashion. All right. And then really the last thing I'll say before I move on to the new result is that in the Fukaya category, you get the A-infinity operations because your spaces of pseudoholomorphic curves live over the associahedra operad, and the A-infinity operations reflect the algebraic structure of that operad. We expect these spaces of curves to live over another operad, and these relations come exactly from that operad structure; specifically, the operad where a typical space is the moduli space of d-patch witch balls. So I'll show you an example. One of these spaces, called P300, is the space of thrice-quilted disks with three boundary marked points. The example uses disks and not spheres because they're easier to draw. And it turns out that what you get is: take this picture, cut it out along the edges; there are certain edge identifications you have to make that I haven't indicated in this figure, but the point is that it glues up to a polytope inside of R3. What that polytope represents is a compactified Deligne–Mumford space of quilted disks, and the different faces represent codimension-one degenerations, which are indicated here; the edges are codimension-two degenerations, and so forth. And now it's time to play a game: would someone name two adjacent faces? Alpha, beta. Good choice. The fact that they're adjacent should mean that alpha and beta have a common codimension-one degeneration, so let's see if I can figure out what that is; I promise I haven't practiced. Okay. The common degeneration is: the seam in this component here gets larger and larger until it hits the boundary, which forces two-patch disks to bubble off. And here what's happening is that the two attachment points of these guys come together. So what you'll get is... I hope that's convincing. Any questions about... Can you just explain that again? You're blocking the view. Oh, yeah, I'm sorry. Okay, so what's happening in the degeneration of alpha is that this seam here, the border of this green disk, is expanding until it hits the boundary of the disk.
The marked points in this degeneration have not come together simultaneously, so by definition of this space, when that collision of the seam with the marked points occurs, you're forced to bubble off a quilted disk. That's what these guys are. And after this degeneration happens, this formerly quilted disk becomes an unquilted disk. The way that happens with beta is that these two attachment points have moved together, and so you're forced to bubble off a disk; that's those two guys. Okay. I don't have a proof, but I expect that it will always be a polytope, a convex polytope. And I can say that in particular you can specialize it in two different ways to the associahedra; you can also specialize it to the multiplihedra, and furthermore to the biassociahedra and bimultiplihedra. So it seems to be a pretty rich object; it's sort of like a two-categorical version of the associahedra. Okay, any more questions before I move on? In your picture here, it looks like you've got some vertices with valence higher than three; am I reading that right? Yes. Is there a simple example where you can see that? Because you said this is supposed to be a convex polytope in R3, so I was curious whether you could give us an example of a corner with more than three faces coming in to define it. There is a simple example, but I'd better show you afterward, because it'll take up too much of the time. But that even happens with the multiplihedra, and I claim these are generalizations of the multiplihedra, so you're forced into that. I have a question: usually the number of nodes gives you the degeneracy index, so why do we have three nodes on a boundary face? Right, okay, suggestive question. This has to do with these gluing heuristics that I alluded to earlier, and the answer is that there's a funny thing that happens with figure eight bubbling. Suppose we have this picture, a disk with two patches, and let's look at the degeneration where the seam gets larger and larger until it hits the boundary. We'll get two quilted disks stuck onto an unquilted disk, like that. You can ask what the codimension of that degeneration is, that is to say, how many gluing parameters there are for this situation. You might think there are two gluing parameters, one for this node and one for that node, and therefore this stratum should not show up in the codimension-one algebra. But as it turns out, that's wrong. The reason it's wrong is that there's no way to glue only one of these nodes, because of the seam structure. The right way to think about it is: in order to glue this picture up to something smooth, you first have to glue this unquilted disk to something quilted with a very thin outer patch, and then you glue the bubbles in. And it turns out there's a relation between those three gluing parameters, the two neck lengths at the nodes and the width of this very thin patch you've introduced; in fact, that width determines what the neck lengths must be. Satisfactory answer? Okay, great. So the codimension is: you have a sort of central component, and then the bubbles attached along the same seam share the same gluing parameter? Exactly right. Yeah, each seam has its own gluing parameter. Yeah, that's right.
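A schematic way to say what was just explained (my paraphrase of the heuristic, not a statement from the preprints): for the two quilted disks attached to the unquilted disk at two nodes, the naive parameter count would be two independent neck lengths R_1, R_2, but the actual gluing construction first introduces a thin outer patch of width w on the unquilted component, and then

    R_1 = R_1(w), \qquad R_2 = R_2(w),

i.e. both neck lengths are determined by the single parameter w. So the family of gluings is one-dimensional and the stratum is codimension one, not two, which is why it appears as a face of the polytope.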
So it's sort of like this: in the usual situation you're used to, if you have some nodal curve, the different nodes don't know about each other; you can do whatever you want to any one of them locally. But here, if you have a bunch of different bubbles sitting on the same seam, they all know about each other. That seems simple enough, at least in this example, but when you have many different seams and many marked points, it turns out you have to think carefully to figure out which bubbles know about which other bubbles. Any other questions? All right, so time to move on to the next section, which I had intended as a sort of... no, I thought I was going to have 20 minutes. You know, the theme of this conference is especially educational, so I wanted to do a little summary of a standard technique and then note how it gets adapted to this situation; I'll see if I have time to do all of that. Right, so this section is about Fredholmness. First of all, let me mention the general strategy for proving Fredholmness of a linearized del-bar operator, and let's say this linearized operator D_u goes from H1 to H0. You can prove the same Fredholmness results for lots of different function spaces, but this is the easiest situation. The procedure has three steps. The first step is to prove that D_u is semi-Fredholm, which by definition means that the kernel is finite-dimensional and the range is closed. The next step is to identify the cokernel of D_u with weak solutions of D_u-star equals zero, where D_u-star is the formal adjoint that Chris mentioned. And the final thing is that you have to prove a regularity result to show that you can identify weak solutions of D_u-star equals zero with strong solutions. Here's an example of step one. Say sigma is a closed Riemann surface. Then the standard elliptic estimates tell you that you have an inequality of the form: the H1 norm of zeta is bounded, up to a constant, by the H0 norm of D_u zeta plus the H0 norm of zeta itself. Okay, and now — I should have said this — we observe that there's a theorem saying that semi-Fredholmness follows from the condition that there exists a compact operator K from H1 to some Banach space E such that you can bound the H1 norm of zeta by the H0 norm of D_u zeta plus the norm of K zeta. (Sorry for the font size.) The elliptic estimate satisfies exactly this condition, because the embedding of H1 into H0 on sigma, which is closed, is compact. So that's great; that's how you prove semi-Fredholmness on a closed Riemann surface. Now, I'm going to skip some stuff because I don't have so much time, but let me just say that another example of proving semi-Fredholmness is the case where sigma is not closed: it's the cylinder, call it C. Then there's a problem. You still have elliptic estimates: the H1 norm of zeta is bounded by the H0 norm of D_u zeta plus the H0 norm of zeta, where these function spaces are now on C. But the problem is that Rellich's theorem no longer applies, because we're not on a compact Riemann surface, and therefore the embedding of H1 into H0 is not compact. So we don't yet know that this operator is semi-Fredholm. The reason it's not compact, if you haven't seen it before, is that you can just think of a sequence of bump functions scooting off to infinity.
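As formulas (my rendering of the blackboard; zeta is the section being estimated), the elliptic estimate and the semi-Fredholm criterion just used are

    \| \zeta \|_{H^1(\Sigma)} \le c \bigl( \| D_u \zeta \|_{H^0(\Sigma)} + \| \zeta \|_{H^0(\Sigma)} \bigr),

    \text{if } \| \zeta \|_{H^1} \le c \bigl( \| D_u \zeta \|_{H^0} + \| K \zeta \|_{E} \bigr) \text{ with } K : H^1 \to E \text{ compact, then } \ker D_u \text{ is finite-dimensional and } \operatorname{ran} D_u \text{ is closed.}

On a closed surface one takes K to be the compact inclusion H^1(\Sigma) \hookrightarrow H^0(\Sigma), which is exactly what fails on the cylinder.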
So let me just say that the resolution is to study the asymptotic operator, which is the limit as s goes to plus or minus infinity of the linearized operator. The asymptotic operator is, of course, s-invariant, and so you can argue that there's an injectivity estimate for it; it's actually an isomorphism. That's what allows you to fix this problem: you can replace the H0 norm on C with an H0 norm on a compact subcylinder. Rather than saying more about that resolution in the quilted case — it turns out you can make it work, but not trivially, you have to do some work — I think I'll just say what the analog of the elliptic estimates is in the quilted case. So here's what I mean by the quilted case (I apologize in advance for going about three minutes over). Say we want to understand the figure eight quilt; why don't we put some marked points on there. We would like to know that this defines a Fredholm problem, i.e. that the linearized del-bar operator in this case is Fredholm. In order for that to be true, we certainly need some kind of transversality conditions on the correspondences, but let me not say anything about that; it's exactly what you'd expect. To get the argument started, we need elliptic estimates, which you can get locally: away from this bad point, you get them just from the standard unquilted elliptic estimates. But then you have this bad point down here, so let's figure out what happens there. Let's cut out a disk centered at that point and go into cylindrical coordinates on that disk. What you get is a quilted cylinder with these non-straight seams. So how the heck are you going to get elliptic estimates on this thing, given this weird structure of the seams? And the answer is, well, look at my thesis. Let's look at these chunks: these are constant-width, constant-height rectangles, and we're looking at their translates. You can try straightening out the seams in each of these rectangles; what I mean by that is choosing diffeomorphisms of these rectangles with pictures that look like that. What you see is a sequence of quilted squares with three patches, where the width of the middle patch is shrinking to zero. Wehrheim and Woodward came up with an estimate for quilts with domains like that, and I upgraded it in my thesis to, in particular, include the case where the domain complex structure is not standard, which happens here because the diffeomorphisms you're using tweak the complex structure. So that's all I'll say about Fredholmness; I'll just finish by saying that we now know as a theorem that in this case, D_u is Fredholm as an operator from H1 to H0. Yay. No, I'm not done; I was yaying the Fredholmness. Right, so the last thing I want to do is mention the laundry list of analysis goals we started with, and I'll tell you which ones we've checked off. Anyone who can do the math: you've got analysis after this, so we can go. The first analysis goal was a removal of singularity theorem for figure eight quilts and, more generally, witch balls; that's done. The next thing is a compactness theorem for moduli spaces of such things. Okay, the next thing is Fredholmness. Are you working in a polyfold context when you say Fredholm? That's the goal, but we haven't achieved that yet.
So when I said Fredholm, I meant classical Fredholmness, from standard H1 to standard H0. We would really like something like Hk intersect W(k-1,4), mapping to the analogous space one level of differentiation down. So we would like to get that, and then there's still some gluing stuff we'll have to do in order to put this into the polyfold context. So this one is sort of partway done, just because of what I was talking about with Dusa: we'd like this in a polyfold context, and we'd like it with slightly different function spaces. And then the final thing that's really not done is gluing to undo strip shrinking. And is that something you expect the polyfold machinery to accomplish? Well, no, I'd say that we need to figure that out in order to be able to put it into a polyfold setting. Once we have four done, I think we're really ready to turn the polyfold crank. So anyway, four is next up on the docket. Thanks for your attention. Any further questions? Can you do it classically, and is there a way to turn what you can do in a hands-on way into the polyfold package? Something like that. So Wehrheim and Woodward did classical gluing in one situation; I don't know if you could generalize that classical gluing to witch balls, and I don't know how generally you could do it. But the thing we need is polyfold gluing, which we have not so much of an idea of how to do. And what was the importance of having the actual witch ball as opposed to this teardrop? If you use the teardrop, then what you'll get is maps between extended Fukaya categories. These are Fukaya categories where your objects are formal sequences of Lagrangian correspondences, which is formally somewhat easier to deal with, but is quite a bit further removed from the geometry. That's why we want these things between honest Fukaya categories. Furthermore, even if you work with a teardrop, if you have bubbling, the bubble is not going to have that teardrop structure; it's going to have this witch ball structure. So you would only be able to get away with teardrops when you could exclude every kind of bubbling that can happen while strip shrinking happens. If you use this teardrop map and then compose those things using the other map, do you get the same as using your map to begin with? Probably, but it's a little bit of a perverse thing to do, because in order to prove something like that, you'd have to completely understand the witch balls, and at that point, why are you thinking about the teardrops? Do you have a question, Dusa? Yeah, I had a question about that. I can't quite see the colors there, but it looks as though you had bubbling both at the level of blue and at the level of green; as far as I would understand, that would be codimension two. You're talking about sigma, for instance? It's something, as I say, I can't quite see. Yeah, or what's the simplest example of this? Well, okay, let's look at rho. Here we have a whole bunch of nodes, and they're not all nodes of the same type; you're right. I can't answer that very well in the time I have, but the idea is that the Deligne–Mumford space I wrote down is not the Deligne–Mumford space you would write down on your first try. The thing you'd write down on your first try would have some strata which it's not clear how to cast in terms of algebra.
So we've stuck on some additional cells in order to make every stratum correspond to an algebraic expression, and the ones you're confused about are exactly the new faces that you get. And a pleasing thing about this picture: the polytope I wrote down is exactly the one that tells you that C3 will define a homotopy between the functors corresponding to different Lagrangian correspondences. Katrin, Chris Woodward and Sikimeti Ma'u proved such homotopy statements, though they had to do some funny analysis with delay functions because they hadn't done this resolution. So a pleasing thing about this resolution is that you won't have to do such funny mucking around with delay functions. Do you have an idea of the flavor of geometric applications that this sort of machinery might allow you to tackle, as opposed to the usual A-infinity structure? What other things might you be able to do? I think an interesting goal would be to take one of these correspondences coming from a symplectic quotient, going from M to M//G. This is expected to give you a functor between the Fukaya categories of those two symplectic manifolds, so how can you relate Fuk(M) to Fuk(M//G)? My understanding is that the analogous question is interesting in algebraic geometry, so you'd think it would be interesting here too. More generally, I have some ideas, but I'd rather not speculate in public. For instance, Rouquier has endless two-categorical representations of this type of thing; do you have ideas about what symplectic manifolds you might use to get those out of symplectic geometry? Rouquier has what? I mean, he makes two-categories out of representation theory; he upgrades the representation theory on one category that we all love, so in place of the usual representations you've got categorical representations, two-categorical representations or whatever. I do not have anything sensible to say about that; we should talk afterward. I think we're a respectable, slightly-more-than-ten-minutes behind schedule, so let's have the next talk start at ten past eleven. Sure. And let's all express our appreciation for the Fredholm property and the speaker. Thank you.
In work-in-progress with Katrin Wehrheim, we aim to bind together the Fukaya categories of many different symplectic manifolds into a single algebraic object. This object is the "symplectic A-infinity-2-category", whose objects are symplectic manifolds, and where hom(M,N):=Fuk(M-xN). At the core of our project are witch balls - certain pseudoholomorphic quilts with figure eight singularity. I will discuss recent progress: toward the construction of the moduli space of domains on one hand, and toward establishing the Fredholm property on the other.
10.5446/16309 (DOI)
It was like when I was in, I forget where it was, some country where I wanted to get a Diet Coke, and there's a refrigerator with Diet Cokes right there, but I'm not allowed to touch it, so I have to ask this one person, who's also not allowed to touch it, but that person can call in a supervisor or something who is actually allowed to get it out and bring it; it's a very complicated process. Right, anyway. Okay, so today I'm going to finish with the Morse theory and then try to get onto some things which involve holomorphic curves and are related to the polyfold foundations of SFT, which we'll be hearing about next week. The Morse theory example I had was: (f_t, g_t) is a one-parameter family of pairs of a function and a metric on a manifold, and at time t equals zero this fails to be Morse–Smale. At time t equals zero there is an index-zero flow line from a critical point q to a critical point r, so these both have index i. We'll call this thing u_0, and we'll assume that we're in a sort of generic situation. When I say the word generic, that means satisfying various conditions which hold for data in a countable intersection of open dense sets, and the set of conditions may increase as I go along. So the linearized operator for this u_0 is not surjective, but in the generic case its cokernel is one-dimensional. The other generic thing is that you can look at the derivative of the gradient flow equation with respect to t at u_0. So if I look at d/dt of the gradient flow equation, del-s minus V, on u_0, then this derivative at t equals zero should have a non-zero projection onto the cokernel; so this projection is an isomorphism. This is the generic way in which an index-zero flow line can appear in a one-parameter family, and for t non-zero but small, the pair will actually be Morse–Smale. Is the function real-valued? It could be real-valued or circle-valued; it doesn't really make a difference for this discussion, so if you're more comfortable with real-valued, let's say it's real-valued. That's an assumption? What's an assumption, this derivative? Yeah, so these are assumptions; this whole board is assumptions. And then what I want to analyze is: what is the change in the chain complex as we go from negative t to positive t? Are you assuming that there's only one flow line like this? Yeah, just this one failure of Morse–Smale-ness arises at time zero, which is that this one index-zero flow line appears. Okay, and let's say I have another critical point p of index i plus one and a flow line u_+ from p to q. Then we have a configuration like this at time zero, and we expect that this can be glued to an actual flow line from p to r for small time, maybe positive, maybe negative; we want to analyze when that can happen. Last time, we wrote down the equations for this. We have gluing parameters R, which is very large, and t, where t is nonzero but its absolute value is small. And what we do is translate these flow lines; my maps are all parameterized, so u_+ is a map from the real line to X, and u_0 is a map from the real line to X. The first thing we do is translate: we move u_+ up by R over two, and we move u_0 down by R over two. And, to the great confusion of everyone last time, I'll continue to denote u_+ and u_0 by the same letters.
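To record the assumptions just stated in symbols (my transcription of the board; V_t denotes the upward gradient vector field of f_t with respect to g_t, and D_{u_0} the linearization of the flow equation at u_0):

    \partial_s u_0 - V_0(u_0) = 0, \qquad \dim \operatorname{coker} D_{u_0} = 1,

    \Pi_{\operatorname{coker} D_{u_0}} \Bigl( \tfrac{\partial}{\partial t}\Big|_{t=0} \bigl( \partial_s u_0 - V_t(u_0) \bigr) \Bigr) \ne 0,

so the t-derivative of the equation at u_0 gives an isomorphism from the reals onto the cokernel.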
So I'm translating them, but denoting them by the same letters: we translate u_+ up and u_0 down. Up and down — is zero close to q or far from q? The image of zero will be far from q. So this is breaking; I mean, the inverse of gluing is breaking. And what's breaking? You have a flow line like this for some nonzero time, and when it's close to breaking, what it does — actually, I'm going to have my things going up — is it goes up from r, gets near q and just sits there for a really long time, and then it goes up to p. So it's like a camel with two humps where there's some action happening, and these two humps are getting ripped apart; that's why you talk about breaking. It's very dramatic. The poor camel. What? The camel knows what to do; it's a tofu camel. No animals are harmed in this talk. Okay. All right, so we translate by this, and then we're going to change the time to t; in the equation I should put a t, so V_t means the upward gradient vector field at time t. Then we have psi_+ and psi_0, which are perturbations of u_+ and u_0: psi_+ is a section of u_+ pulled-back TX, and psi_0 is a section of u_0 pulled-back TX. Then we look at beta_+ times (u_+ plus psi_+) plus beta_0 times (u_0 plus psi_0), where beta_+ and beta_0 are cutoff functions whose derivatives have order one over R. This expression really means you start with u_+ and apply the exponential map to psi_+, and I choose some coordinate chart in a neighborhood of q, where both of these cutoff functions are nonzero, in which it makes sense to add them like this. Call this whole thing capital U; it's some map from the real line to X. Then we have the equation for this to be a flow line: del-s of U minus V_t of U has the form beta_+ times Theta_+(psi_+, psi_0, t) plus beta_0 times Theta_0(psi_+, psi_0, t). Here Theta_+ is the linearized operator for u_+ applied to psi_+, plus d-beta_0/ds times (u_0 plus psi_0), plus a term involving t times the derivative of V_t with respect to t (because the equation is changing as the vector field changes), plus some other error terms which don't really matter, so I won't write them. And Theta_0 is D_0 of psi_0, plus d-beta_+/ds times (u_+ plus psi_+), minus t times the partial derivative of V_t with respect to t, plus some error terms. And then there were some lemmas. The lemma: fix R and t, where R is sufficiently large and the absolute value of t is sufficiently small. Then there exist unique psi_+ and psi_0 — you have to work in some Banach space where these things are required to decay — such that psi_+ is L2-orthogonal to the kernel of D_+, psi_0 is L2-orthogonal to the kernel of D_0, and Theta_+ is equal to zero.
And Theta_0 I can't quite get to be zero, but I can get it to be L2-orthogonal to the image of the operator D_0. If I could actually get Theta_0 equal to zero, then I would have a gluing, because that whole expression over there would be equal to zero. I can't quite do it; there's a failure, which is this Theta_0. You can think of Theta_0 as living in the cokernel of D_0, which is a one-dimensional vector space. So I have an obstruction to gluing, which lives in a one-dimensional vector space, and when that obstruction is zero, this construction works and I can glue. Some people asked me last time: why do we want Theta_+ and Theta_0 both to equal zero, when we just need this linear combination of them to equal zero? The answer is: because then you get this uniqueness of psi_+ and psi_0. And as Helmut pointed out to me after the talk, this is quite analogous to the anti-gluing in polyfolds: you want to make the gluing unique, or make it an isomorphism, by keeping track of this anti-gluing. So making both of these equal to zero is analogous to saying that I'm setting the anti-gluing equal to zero. Okay, so now we have an obstruction bundle picture; this is the general picture. In our particular case, we have a bundle over the set of all pairs (R, t). It's a trivial bundle: the fiber over any point is the cokernel of the operator D_0, which is some one-dimensional vector space. And we have a section of this bundle: S(R, t) is defined to be Theta_0, the Theta_0 provided by the lemma. So we try to glue, and maybe it doesn't quite work. As I learned, the way mathematicians can be successful is to turn failure into success by calling it an obstruction and writing a paper about obstruction theory — so that's the obstruction. And then this construction defines a map from the zero set of S to the set of gluings, the set of flow lines that are close to breaking. And a non-trivial fact, which I won't attempt to explain here, is that this map is a homeomorphism. So all gluings can be obtained by this construction, and it's a bijection. Right. So that's nice, but then we have to understand what this section is, and the trick for that is to approximate it by a different section. So let's look at this expression for Theta_0. S(R, t), to be a little more precise, is the projection of Theta_0 onto the cokernel of D_0. This first term, D_0 psi_0, is in the image of D_0, so it doesn't come up at all. I've got this term which measures the derivative of the equation; that's going to stay there. And this psi_+ term — it turns out this psi_+ is very small, much smaller than everything else. The reason why: I didn't draw the picture of the cutoff functions. The cutoff functions look like this: here is s, here's R over two, here's minus R over two. Beta_0 is going like that, and beta_+ is going like this. We can arrange it so that the region where the cutting off actually happens goes from, say, R over six to R over three — and on the other side from minus R over six to minus R over three. And then if you look at the other equation, for Theta_+, that thing is actually equal to zero.
And the upshot is that away from the region where the derivative of beta_0 is nonzero, psi_+ satisfies an equation which forces it to decay exponentially as you go this way. So psi_+ is something over here, but then it has about R over three, which is a very large amount of time, to exponentially decay before it gets over there. So the psi_+ that you see here is heavily exponentially decayed, and it sort of doesn't matter much; let's ignore it. There's some other stuff here, and let's ignore that too. (This is awesome now. What? This is great for math.) So then we have an approximate section, and this u_+ we can also simplify a little bit, because we have to think about the asymptotics of u_+. Before translating, for s very negative, what is u_+(s)? Well, it's approximately... (this is supposed to be a fraktur e — actually, if I write it like this, it's going to give me an error and tell me I have to use a mathfrak. There aren't enough letters. I think with the Greek financial crisis they could sell us some letters for cheap. Seems like they already did that, and all the other letters look the same as Roman letters, so we're out of luck; I'll have to import some letters from somewhere else.) Okay. So this is going to look like the exponential at q of e to the minus lambda-minus times the absolute value of s, times this fraktur e. So — we're adding conditions to "generically" as we go — lambda-minus is the smallest positive eigenvalue of the Hessian at the point q; I put a minus here because it's associated with the negative end of the flow line, and I don't need an absolute value sign on lambda because it's positive. And fraktur e is some eigenvector. I'm also going to add to my assumptions that the eigenvalues of the Hessian are all distinct. (This is q, not p — the Hessian of f at q.) So that's what it looks like. And then after translating — we translated by a total translation distance of R — the upshot is that this d-beta_+/ds times u_+, when I project it to the cokernel, is approximately e to the minus lambda-minus R times some number; well, some fixed element of the cokernel, call it eta. So it decays like this. Is that eigenvector the one corresponding to the eigenvalue? I shouldn't have said eigenfunction, because it's just an eigenvector — this is a finite-dimensional vector space here. So fraktur e is an eigenvector corresponding to this eigenvalue; I said eigenfunction because I'm really thinking about holomorphic curves, and this is just a model case for holomorphic curves. All right. And then, to make the equation a little simpler: I know the derivative of the equation at u_0 gives an isomorphism onto the cokernel, so let's use this map to identify the cokernel with R. So this eta is actually just a real number now, and generically it's going to be nonzero. So I can define what in our paper — the paper with Taubes — is called the linearized section. "Linearized" is not really a very apt word, because it has nothing to do with linear or nonlinear; it's just a sort of leading-order approximation to the section.
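Concretely, the leading-order computation just described is, in symbols (my rendering; the fraktur e is the eigenvector of the Hessian of f at q with eigenvalue lambda-minus):

    u_+(s) \approx \exp_q\bigl( e^{-\lambda_- |s|}\, \mathfrak{e} \bigr) \quad \text{as } s \to -\infty,
    \qquad
    \Pi_{\operatorname{coker} D_0} \Bigl( \tfrac{d\beta_+}{ds}\, u_+ \Bigr) \approx e^{-\lambda_- R}\, \eta,

where eta is a fixed cokernel element, a real number after the identification of the cokernel with R, and generically nonzero.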
And I'll call it s0. So s0 of RT is, it's now e to the minus lambda minus r times eta minus t. That's my section. And then the next fact you need, let's put it this way. So for, for a fixed t, the number of zeros of the actual section counted with, say z mod 2, or counted with sines is equal to the number of zeros of this linearized section. And another fact, which I, there's a, if you want to do this carefully, there are a billion technical things you need to check. So another, another, you know, fact you can check is that s is smooth and generically s is transverse to the zero section. So if I want to, so the upshot is that if I want to count the number of gluings, I have to count the number of zeros of this thing. But this is now quite simple. This is just an exponential times a constant minus t, not very complicated. So we can put together the conclusion from this. So the conclusion is that, so if eta and t have opposite signs, well then you're in big trouble because there's no way this can ever be zero. So then you can't glue. And if eta and t have the same sign, then there's a unique solution, r, so to the equation s zero of r t equals zero. So the number of gluings is equal to one. And probably with a little more work you could show that there's actually just one gluing. There's no cancellation. Right? So this is sort of what we expected to get. We expected that you would be able to glue this configuration for either positive time or negative time or not both. And that's what we found. And the analysis actually tells us whether t will be positive or negative. To figure it out, you have to look at the asymptotics of u plus and you have to pair that with a, well, with the derivative of the equation. So you can work, so it tells us completely explicitly for which sign of t you will get a gluing. Any questions about this? It's about to make it harder. Can you say a little bit more about what goes into this second fact? Well, basically you need to show that the things that I casually crossed off are small. So basically there's some, you're sort of restricting attention to some set of r and t. And you want to show that as you deform s to s0, no zeros of the section can escape outside of the boundary of this region. And to do that, you need to show the boundary of this region. The s0 is very big. And the stuff that I threw away is very small, so it can never vanish. How much more complicated is it, yeah, if you have a code from a curing of s1? Well, I'm about to get to that. There is another generosity condition because you're assuming that the eigenvalues of hq somehow project the eigenvector corresponding to the eigenvalue has a non-zero projection to the cocoa, right? Because you're assuming that the projection of eta... Doesn't that follow from the tether method? Yes. Okay, so there's something I forgot to say. This is not a generosity assumption. It turns out that if you look at a co-curnal element, you can identify it with an element of the kernel of the formal adjoint. If you look at the asymptotics of an element of the kernel of the formal adjoint, it has exactly the same asymptotics as this flow line U+. So the leading eigen... Well, I guess that's something one needs to check. But I think one can just prove that probably. Well, anyway, if you look at the asymptotics of the co-curnal element, it has a leading term which has the same form as this leading term over here. So I guess we might need to... It might be a generosity assumption to say that that's non-zero. 
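To see where the sign conclusion comes from, here is the one-line computation (nothing beyond solving the displayed equation):

```latex
\[
s_0(R,T)\;=\;e^{-\lambda_- R}\,\eta \;-\; T\;=\;0
\quad\Longleftrightarrow\quad
e^{-\lambda_- R}\;=\;\frac{T}{\eta}
\quad\Longleftrightarrow\quad
R\;=\;-\frac{1}{\lambda_-}\,\ln\!\frac{T}{\eta},
\]
```

and since we need R to be large (in particular positive), this has a solution exactly when T/η is positive and small — that is, when η and T have the same sign and |T| is much smaller than |η| — and in that case the solution R is unique.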
In the Holmorf-Curve case, it actually... Yeah, you're right. We maybe need to assume that. I'm sure it works exactly the same. What? I'm sure it works exactly the same as in the Holmorf-Curve case. So, yeah. So there's an additional generosity assumption which probably most people in the audience got lost and don't know what I'm talking about. So there's some additional generosity assumption in there. So it's basically saying that the flow line is approaching the critical point with the slowest possible asymptotic behavior. I'm assuming that about U plus, but I'm also assuming something about the co-curnal if U is zero. Yeah, because you're assuming that the projection on the co-curnal if U is zero is not zero, right? So you get this number. Yes. You see that number. Right. So that'll be true if the sort of leading asymptotics behaves in a generic way. Right. I think I'm going to not go explain that point more at us too. But, yeah. You're absolutely right. I forgot to mention that. All right. Other questions about this? Yeah. There's some orientation of the picture because if you identify it with power and then the sign matters. Right. So I'm choosing the identification in which this thing is identified with one. Okay. So the derivative of the equation at U zero is some element of the non-zero element of the co-curnal. I'm just going to call it one. So I did that just to simplify the equations. Otherwise, I have to sort of, instead of thinking of these as real numbers, I'd have to think of them as co-curnal elements and use more notation. All right. Well, let's up the difficulty level just a little bit. Where's the hook? It's hiding. Need a hook to get the hook. Okay. Yeah. I probably did. I probably play on this poorly. It's too complicated. I was trying to do it so that the boards wouldn't have the shadow on them, but. Look at this board. Right. So the next example is let's say we're in this S one valued case. So there can be an index zero flow line from Q to itself. So here P is index I plus one Q is index I. That's a time T equals zero. That's a flow line from Q to itself. So I already know how to glue this because it's the same analysis I just did. But maybe what if we want to glue to two copies of the flow line from Q to itself? Okay. So, so time T equals zero. I have this configuration. And we could say if T is non-zero, but close to zero. Does there exist a flow line which is obtained by gluing these three things together? So maybe there is, maybe there isn't. We can use a similar analysis. So in this situation, there are three gluing parameters. As T is as before, I'm just going to replace time zero with time T. And then there's going to be going to be an R one and R two. So what I'm going to do is I'm going to translate everything. So I'm going to, I'm going to, the total translation distance between the upper two pieces will be R one. And the total translation distance between the lower two pieces will be R two. So I translate everything. I then do, I then do the same business. Let's call this U plus and U zero. And the gluing equations are going to look like this. Okay. So you're going to, you're going to be able to glue up to, up to two elements of the conchurnal of U zero. So you're going to have like a theta for here, which you can get to be zero. A theta here, which is, lives in a one dimensional vector space and a theta there, which lives in a one dimensional vector space. They continue to identify this one dimensional vector space with R. 
And the gluing equations are going to look like this. So the first one is: e to the minus lambda minus R one times eta — the same eta as before — plus e to the minus lambda plus R two times eta plus, minus T, equals zero. And then the second one is: e to the minus lambda minus R two times eta minus, minus T, equals zero. So where does this all come from? So what's lambda plus? So lambda plus is the largest negative eigenvalue of the Hessian at Q. And eta plus or minus are determined by the asymptotics of u zero at the positive and negative ends of u zero. So the second equation is the same kind of thing I had before, the equation you get for this. So here we're looking at the cokernel of this, and there's a term coming from the negative asymptotics of u zero. And this negative asymptotics is measured by this eta minus. And then this exponential here comes from the fact that I'm stretching these two things apart by distance R two. And then in the first gluing equation, this eta is determined by the negative asymptotics of u plus, and this eta plus is determined by the positive asymptotics of u zero over here. So those are the equations you get. So these are simple equations and now you can just analyze them. Why is there an equation with two exponential terms now when there wasn't one before? I didn't understand that. Okay, so the first equation comes from this first piece, this middle piece here. The thing is, for this middle piece we're gluing things to it on both sides, and that's why there are two exponential terms. And the second equation comes from this lower piece; we're only gluing one thing to it, which is why there's one term. So in general, each piece has a gluing equation where, for everything that's glued to it, there's a term involving the asymptotics of that other thing. Right? So if we're gluing these three chalkboards together, then the gluing equation for this chalkboard has a term for the asymptotics of this chalkboard and a term for the asymptotics of that chalkboard. Can you please just say exactly what the obstruction bundle is and where the section lives? Okay, so in this case, it's a bundle over the set of all triples R one, R two, and T. And the fiber over a point is the cokernel of D zero direct sum the cokernel of D zero. Because now we're gluing two pieces which have cokernel, so for each of those pieces you have a summand. This must be really fun to do for ECH. Oh yeah, it gets much worse. But I won't do the worst part. So we're going to see later an example where this bundle is not trivial anymore; it's actually an interesting vector bundle. Anyway, with these equations — maybe it was a little confusing how I got to them, but now that I've written them down, you can just solve them; it's completely elementary. So you can solve the gluing equations if and only if all of the following hold. So first of all, eta minus and T had better have the same sign. Okay, because that's just looking at the second equation. So we're fixing T, and the question is, can we find R one and R two solving these equations? So the second one can be solved if and only if these things have the same sign, and then there's a unique R two. Then you have to look at the first equation, which is a little more confusing. So, yeah.
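Since the two equations came out a bit scrambled in the spoken version, here they are in one place. One hedge: λ_+ was defined as the largest negative eigenvalue, so I write |λ_+| for the decay rate in the middle term (the spoken version literally says "e to the minus λ_+ R_2"); for the careful statement see the blog post mentioned just below.

```latex
\[
\begin{aligned}
e^{-\lambda_- R_1}\,\eta \;+\; e^{-|\lambda_+|\,R_2}\,\eta_+ \;-\; T \;&=\; 0
&&\text{(from the cokernel of the middle copy of } u_0\text{)},\\
e^{-\lambda_-\,R_2}\,\eta_- \;-\; T \;&=\; 0
&&\text{(from the cokernel of the lower copy of } u_0\text{)}.
\end{aligned}
\]
```

Here η is the leading asymptotic coefficient of u_+ at its negative end (the same η as in the previous example), and η_± are the leading asymptotic coefficients of u_0 at its positive and negative ends.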
So we're saying that that's one, doesn't that make D of VT minus one, which is what we're multiplying T by in the equation? This reminds me of some nightmare I had once. I was like, I had some math nightmare about this, and it's reminding me about this. So I sort of distracted everything out, so this minus sign is sort of long disappeared. So, I mean, you could just forget about this. Let's just call this, you know, like F or something. So this is the equation that I'm trying to solve. And then I'm identifying this with a Cochlearl. And then everything, so I sort of trivialized the Cochlearl and identified everything with Y. And then these are some real numbers determined by the asymptotics. I don't know if that helps or not. I mean, at a priori, I have no idea what the signs of these numbers are. It depends on the asymptotics of the flow lines. So basically, like you're saying, you know, am I approaching the critical point from this side or from that side? And then in some obscure way, by that identification over there, one of them is actually declared to be positive and the other is declared to be negative. Anyway, so you can figure this out. So, well, maybe it's not necessary to go through the whole thing. But there's some more conditions for the second, to be able to solve the first equation. In this case. And then when you look at this first equation, it's actually going to matter. So it's actually going to matter which of these two eigenvalues is bigger than the other. So let me spare you the whole thing because I'm just going to confuse myself trying to do it. But there are more conditions. So in some cases, these depend on the sign of the difference between these two eigenvalues, which I'll add to my list of generic genericity assumptions that this difference is not zero. So the question of whether or not you can glue, it depends on the relative signs of eta, eta minus eta plus and t, and also on the sign of this difference of eigenvalues. So it's like four different signs. So it's sort of 16 different cases and you can just sort of go through and check each one. But elementary methods. I worked out all the details on this blog posting. This is the one from July 2014. Or at least I worked out the algebraic details, the analytic details. If anybody likes analysis and wants to understand the stuff better, it could be a good exercise to do the analytic details. So sort of to justify all these facts and work it all out. So we did this in the paper with Tauss for holomorphic curves, but not in this Morse theory case. But I should also mention that the thesis of Jial-Yong Li does some polyfold version of this for certain holomorphic curves. I haven't actually seen the thesis, but there's some related polyfolding by Jial-Yong Li. Anyway, the upshot is we have this configuration we want to glue. We reduce the question of whether we can glue these to solving some elementary equations involving real numbers. And when you can just solve them. So you may or may not be able to solve them. In general, I can tell you what happens. So there's sort of three possible outcomes. This is just kind of curious. So let's suppose I want to glue two k copies of the flow line from q to itself. So case one is you can glue only for k equals one and only for sort of one of the possible signs of t. That's the simplest case. Case two is you can glue for all k for say, for one of the signs and none for the opposite. Case three is you can glue for k equals one for one sign and k equals two for the other sign. 
Those are the three possible outcomes. Remember, we were supposed to get a polynomial which was to be one plus t to the plus or minus one. In case one, you get the, well, I mean, you're sort of seeing a one plus t there. In case two, you're sort of seeing a one plus t plus t squared dot dot dot, which is the inverse of one plus t. And in case three, you're seeing something like one plus t squared divided by one plus t, which in zero to two coefficients is one plus t. So you are getting the correct polynomial. So this third case is quite weird, but it can happen depending on what these signs are. And you get this, so which of these three cases you get may depend on what flow line you're trying, what flow line you plus you're trying to glue, but you always get the same polynomial, the same power series. So, yes, I didn't explain, I didn't go through all the calculations, but I hope the base, I explained the basic idea of how you sort of, how you try to glue these things and you reduce it to some elementary equations, which you can then solve. Are there questions about this? So I think I have 10 more minutes. Is that correct? Yeah. All right. So let's just forget all this and start over. So if you're totally, if you got totally lost, we'll start over. Just forget about all this. There's one question. Oh, yeah. So you said it depends on whether it can come from one side or the other side. What does that mean? How does critical point have sides? Well, so here's the, here's the critical point q. Here is an eigen space, or this is an eigen space of the Hessian. And if this is the sort of the smallest possible eigen, smallest positive eigen value of the Hessian, then the flow on U plus generically is going to either come in like this or it's going to come in like that. So come in along this, along this, this eigen space plus some exponentially decaying error terms. So that those are the two sides. This is a little counter to your usual intuition and Morse theory because usually you, you, you think of a stable or unstable manifold, which is, has dimension bigger than one. But when these eigen values are distinct, there's a preferred direction, a preferred eigen space, a preferred line along which you approach the critical point. So that's actually the generic situation. Well, usually you make, you want to assume the opposite and more serious. Usually you want to assume all the eigen values are the same. It makes it easier, but it's not a generic at all. Why did you think this would work for ECH? What else are you going to do? It just seems like you'd have a lot of algebra equations to solve. I knew that d squared equals zero because it's the same as cyberwitten and d squared equals zero and cyberwitten. Sometimes you know something's true even though you don't have a proof. They just have to work out the proof and it comes out. So now let me, let's just switch gears and talk about the Holmorf-Curve problem I care about. So I'm going to introduce the problem and then we'll talk about it's how to solve it tomorrow. So we're going to look at a three-dimensional contact manifold. There's a three-manifold. Lamb is a non-degenerate contact form. R will denote the ray vector field. We'll choose J will be a suitably generic meaning satisfying an ever-increasing list of desired conditions, almost complex structure. What did you call it, suitably desired? Desirable, almost complex structure. Desired might not be generic but you have to make sure your desires are achievable. What? Come here. 
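If you want to see the sign bookkeeping from the case analysis above done mechanically rather than by hand, here is a toy numerical sketch. Everything in it is made up for illustration — the eigenvalues and the sizes of η, η_±, T are arbitrary choices — and it only implements the two displayed gluing equations (the leading-order analysis), none of the actual estimates; so treat it as a sanity check of the elementary algebra and nothing more.

```python
import math
from itertools import product

# Illustrative (made-up) decay rates: lam_minus is the smallest positive eigenvalue,
# lam_plus_abs is |largest negative eigenvalue|.  Swap 1.3 for 0.7 to flip which one is bigger.
lam_minus = 1.0
lam_plus_abs = 1.3

def glue(eta, eta_minus, eta_plus, T):
    """Solve the two gluing equations for R1, R2 > 0 if possible; return (R1, R2) or None."""
    # Second equation: exp(-lam_minus*R2)*eta_minus - T = 0.
    ratio2 = T / eta_minus
    if not (0.0 < ratio2 < 1.0):
        return None
    R2 = -math.log(ratio2) / lam_minus
    # First equation: exp(-lam_minus*R1)*eta + exp(-lam_plus_abs*R2)*eta_plus - T = 0.
    ratio1 = (T - math.exp(-lam_plus_abs * R2) * eta_plus) / eta
    if not (0.0 < ratio1 < 1.0):
        return None
    R1 = -math.log(ratio1) / lam_minus
    return R1, R2

# Enumerate the sign patterns of (eta, eta_minus, eta_plus) and both signs of a small T.
for eta, em, ep in product([1.0, -1.0], repeat=3):
    can = {s: glue(eta, em, ep, s * 1e-3) is not None for s in (+1, -1)}
    print(f"eta={eta:+.0f}  eta_-={em:+.0f}  eta_+={ep:+.0f}   glue for T>0: {can[+1]},  T<0: {can[-1]}")
```

And the power-series arithmetic behind "you always get the same answer" is just that, with Z/2 coefficients, 1/(1+T) = 1+T+T^2+... and (1+T^2)/(1+T) = (1+T)^2/(1+T) = 1+T, so each of the three cases reads off something of the form (1+T)^{±1}, as expected.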
Imagine it's like a dating website. I'm looking for a comeager partner. See how many responses you get to that. Right, anyway, so J satisfies the usual conditions — this is a special case of Chris Wendl's talk — but I want that J sends the derivative in the s direction to the Reeb vector field, J sends the contact structure to itself, rotating positively with respect to the orientation on it, and J is R-invariant. OK, then we're looking at holomorphic curves. Again, a special case of Chris Wendl's talk. So sigma is a punctured compact Riemann surface, and the ends are asymptotic to Reeb orbits as s goes to plus or minus infinity. So these are the kinds of things that one wants to count in the SFT differential. In my case, I count them in the embedded contact homology differential, but I'm not going to assume any knowledge about either of those things. I just want to talk about gluing these things in a particular situation. Yeah, I put that here. That's OK. Sorry. I could make a dating site for theorems and proofs. Oh, that's a good one. I could put a bunch of entries on that one. What? Theorems. I could put a bunch of theorems on that one looking for proofs. I don't really have so many proofs looking for theorems. OK, so here's the problem we've got. The problem is to glue along a branched cover of a trivial cylinder. So the trivial cylinder means R cross a Reeb orbit. So you could have a holomorphic curve like this. So this has index equal to 1, so it lives in a one-dimensional moduli space, which means that after you mod out by the R translation, it lives in a zero-dimensional moduli space, assuming transversality. So the usual kind of gluing would be: two of these things might break like this along some Reeb orbit and you want to glue them. Or maybe they break along multiple Reeb orbits like this, and you want to glue them. That's OK. You can glue these. So you can glue these two things to get an end of the moduli space of index 2 curves. But the problem that can come up is you could have a sort of three-part curve, where you have an index 1 curve on the top and an index 1 curve on the bottom. In the simplest case, this index 1 curve on top would have a negative end at gamma squared, where gamma squared means the double cover of some simple Reeb orbit gamma. And this lower curve has two positive ends, both at the Reeb orbit gamma. And this curve is index 1. So it looks like you can't glue these, because they have different kinds of ends; there's no way to put them together. However, you can insert in here a pair of pants, which is a 2-to-1 branched cover of the cylinder R cross gamma. And it turns out that 50% of the time, depending on some conditions which I'll tell you later, this pair of pants has index 0. In general, it has index either 0 or 2. So sometimes it has index 0, and then this whole configuration is index 2. And you could ask, can this whole configuration be glued to an end of the index 2 moduli space? And the answer is, yes, it can. And you could ask, how many ways can you do it? In this particular example, the answer is: one way. And you can see that by looking at an obstruction bundle. So I'm out of time for today. So what I'm going to do next time is discuss just this simple example of how to glue this using an obstruction bundle, which in that case will actually be a non-trivial bundle, where it's not just the same fiber over every point — it's actually interesting.
And then I'll mention the issues that this raises when you think about the definition of the SFT differential. Because this means that certain configurations involving branched covers of trivial cylinders must contribute something to the SFT differential. So it will be interesting to compare when we see the definition of the SFT differential next week. All right. So next time I'll talk about how to glue this. That will be my last example. Thanks. Thank you. More questions for Michael? Let's thank him again. Thank you.
There are easy examples showing that classical transversality methods cannot always succeed for multiply covered holomorphic curves, but the situation is not hopeless. In this talk I will describe two approaches that sometimes lead to interesting results: (1) analytic perturbation theory, and (2) splitting the normal Cauchy-Riemann operator of a curve along irreducible representations of its automorphism group. Both were pioneered by Taubes in his work on the Gromov invariant and Seiberg-Witten theory in the 1990's, and I will illustrate them by sketching two proofs that the multiply covered holomorphic tori counted by the Gromov invariant are regular for generic J. If time permits, I will discuss some ideas as to how both methods can be applied more generally.
10.5446/16308 (DOI)
Okay, thanks to the organizers. So I'm supposed to talk about obstruction bundle gluing, and the references are these two unreadable papers by myself and Taubes, gluing something, something, something, parts one and two, and the fun part begins in the second paper, section five. So there it explains the gory details of obstruction bundle gluing for holomorphic curves in four dimensions. But I'm going to start with the simplest non-trivial example I could think of, which is from Morse theory. And that example is on my blog, floerhomology.wordpress.com, in a posting from July of last year. And I'm not going to talk about this at all, but there's another nice example of similar techniques in a paper by Erkao Bao and Ko Honda on contact homology, 1412.0276. So you might find that interesting also. So what is obstruction bundle gluing? Obstruction bundle gluing is a way of gluing things when transversality fails, but it doesn't fail catastrophically. So you might not need polyfolds, but it kind of fails, so you need something. And it'll be interesting later to compare this with the polyfold stuff. So Helmut next week is going to talk about the definition of symplectic field theory by polyfolds. So the polyfolds say, well, you can get some numbers, and these numbers satisfy some properties so that you can define invariants. But maybe you don't know much about these numbers. But in some situations, you actually want to know what these numbers are; you want to know what's going on. For example, if you have a situation where you know what all the holomorphic curves are, but transversality fails, you want to know what are the numbers with which you're counting these things. And obstruction bundle gluing is relevant to this question, as we'll see. Okay, so here's the Morse theory example I want to start with, because it's easier. And then we'll talk about the holomorphic curve stuff later. So I want to consider circle-valued Morse theory — circle-valued just to allow this phenomenon that I want to tell you about to occur. So we have a finite-dimensional smooth manifold, and we have a Morse function. And you choose a metric that allows you to define the gradient vector field. And what does Morse mean here — I mean, if you have a circle-valued function, then locally its derivative is the derivative of a real-valued function, so locally it looks like a real-valued function. So it's the same definition of what a Morse function is. Okay. And then you can define a Morse complex where you do the usual thing: you count gradient flow lines between critical points. The only issue is you have to be a little careful, because there can actually be infinitely many flow lines between any two critical points. So I draw my manifold X like this. And f is this sort of circle direction in the picture. Maybe there's a critical point over here, and another critical point over there. The flow lines can sort of go around in the circle direction many times, so there can be infinitely many of them. So to get a finite count, you pick some level set Sigma that doesn't contain any critical points — say it has no critical points in it. And this theory is defined over a Novikov ring, which in this case is power series in a formal variable T with coefficients in Z. And the differential of a critical point — so P is an index i critical point — is the sum over critical points Q of index i minus one. And then we have the sum from k equals zero to infinity.
The form of variable T to the K times some count of flow lines from P to Q. Let's put a K here. And these are flow lines across the level set sigma K times. So the flow line like this is counted with T to the zero. If it goes across this three times, it's counted with T cubed. And once you do that, you get finite counts. So then you get circle value more theory, and it's kind of fun in ways which are completely relevant to my talk. Yeah. You could say, well, is this an invariant? If I do a homotopy of the function f, and if I change the metric G, replace it with some other metric, will I get the same homology? And the answer is yes. And you can prove this by following the usual strategy of defining continuation maps. But for certain obscure purposes which are beyond the scope of this course, sometimes you want to know in more detail what the continuation chain map actually is. So you want to define an explicit chain map. So you want to know sort of what, so you want to sort of deform f and g in a homotopy. And maybe at some times during this homotopy, it will fail to be generic. And you want to know as you cross that non-generic time what exactly happens to the chain complex. Stay? Is it staying? I can't see. Yes. Okay, good. Okay. So you have a... So about this circle value, why not just unwrap the circle? Yeah, you can do that. Although you still have to account with the nova covering because you'll have a non-compact manifold. And then if I know how usual most functions change. Ah, this is... You're about to see why it's not that simple. Okay. So now I have a homotopy. And let's say it's not generic at time t equals zero. And there are various ways it can fail to be generic. But the one I'm interested in is where I have a critical point Q of index i, say, and I have a flow line from Q to itself. Let's call this gamma zero. Or actually, you'll call it U zero. Okay, so U zero is a flow line from Q to itself. And so this... The reason why I'm doing circle-valued Morse theories for this to be possible, you can't... This can't happen in real-valued Morse theory. In circle-valued Morse theory, this can happen. Let's suppose that this intersects sigma just once. I want to know what's going to happen to the chain complex. And so you could have some other critical point P of index i plus one. So here's P. And there could be a flow line, let's call this U plus from P to Q. So this thing has index equals one, this fault line. And then sort of unwrapping the circle, as Vivek suggested, to draw the picture, we could have a bunch of copies of this flow line U zero from Q to itself. So then we could say, well, here's this pretty non-generic thing. It's a broken thing with many levels. And then if I perturb T a little bit away from zero, what's going to happen to this? So it could happen that for T slightly negative or slightly positive, there's a flow line like this obtained by gluing all this together. So the question is, how many ways to glue this to a flow line for T not equal to zero? And that's going to tell us how the chain complex changes. Like, if there's maybe there's no flow line here when T is negative and there is suddenly a flow line appears when T is positive. So then the differential of P is getting added to it T to the something times Q. So this is my, this is the simplest example of a problem I could think of for which we want to do obstruction bundle gluing. And I sort of know what the answer is for obscure reasons. 
So I'll tell you what the answer is, but then we're going to try to actually understand what's going on. So there's a lemma which is that there exists a power series equals one plus or minus T and then I don't know what. Such that, so the differential after slight, where T is slightly positive, so D plus and it's a differential for T slightly positive. And the differential for D slightly negative are obtained by conjugation by a map AQ. Where AQ of Q is this power series A times Q and AQ fixes all other critical points. And this A basically is a count of sort of flow lines that are created or destroyed in this bifurcation. And then it's a fact which follows by combining various obscure forgotten theorems that in fact this power series is one plus or minus T to the plus or minus one with four possibilities. These signs are not related. And I want to understand this directly by obstruction bundle gluing. So you're saying you take any generic parts that goes through this and then each such part will be one of those matrices which is part. Because I thought you were just going to ask you what's your model for going through because obviously if you could go through from T minus to Q plus you could reverse T across in the other direction. Opposite power series, yeah. Yeah, so we're going to see that this depends on some very subtle data. But the nice thing about the obstruction bundle gluing is the analysis will, if we look at the analysis carefully, you'll tell us exactly what's happening. It's very precise. But we have to look at it carefully. First of all, is it correct that T is used in two different ways? Oh shit. I just wanted to make sure that that was correct. Yeah. Right. Let's call this Q. No, use Q already. Okay, well, we're just going to have to learn to tolerate some ambiguity. We're never going to, but let's make this a capital T. How about that? Is that better? Yeah. All right. Any more? Okay. Sorry about that. Also this lemma, this is a fact I know of in the usual moist series and it's only involved with two terms. This lemma, yeah. So this lemma, it's proved by doing what you suggested and sort of unwrapping the circle and looking at what happens there. Okay. Other questions about this? Yeah. Is there a straightforward criterion for which version A is applied? There's a non-straightforward. So we're going to, there are like four or five different signs and putting them all together, it's going to tell us what A is. Oh, and by the way, from now on, so to avoid my head completely exploding, I'm going to replace this with Z mod 2 coefficients. So normally you count, you have to count flow lines with signs, but I think it's sort of indecent to discuss that in public. So this sign no longer matters. This sign still matters. So if using Z mod 2 coefficients, A is either 1 plus T or 1 plus T plus T squared plus dot dot dot. Okay. So how do we glue this stuff? Well, so I'm going to, let's do a warm up before we get to this. Let's see. Where is the hook? Ah. I'm really impressed with what it says audience numbers. Not yet. Wait. How do I get that one? Oh gosh. Here we go. I'm sort of afraid that I'm going to be like hanging onto the boards and then like lift it up. Okay. Right. So what's my warm up problem? So the warm up is let's look at the gluing needed to prove that D squared equals 0. Okay. So this is something which is supposed to be easy. I'm going to make it difficult. Okay. 
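Before the warm-up, here are the two formulas from the discussion above in display form, for reference. This is just a transcription of the board, with two caveats: the talk doesn't pin down on which side of the conjugation ∂_+ versus ∂_- sits, and over Z one would also keep track of signs in the count.

```latex
\[
\partial p \;=\; \sum_{\operatorname{ind}(q)=\operatorname{ind}(p)-1}\ \sum_{k\ge 0}\
\#\bigl\{\text{flow lines from }p\text{ to }q\text{ crossing }\Sigma\text{ exactly }k\text{ times}\bigr\}\;T^{k}\,q,
\]
\[
\partial_+ \;=\; A_Q\,\partial_-\,A_Q^{-1},
\qquad A_Q(Q)=A\cdot Q,\quad A_Q(p)=p\ \text{ for }p\neq Q,
\qquad A\;\in\;\{\,1+T,\ \ 1+T+T^2+\cdots\,\}\subset(\mathbb{Z}/2)[[T]],
\]
```

where capital Q is the critical point with the flow line to itself, and the two possibilities for A are the mod 2 reductions of (1 ± T)^{±1}.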
So I'm going to review this in a slightly strange way because the instruction bundle gluing setup looks a little different from the way people usually do gluing. But so I'm going to do this in a strange way which is then going to generalize conveniently to the non-transverse situations. Okay. So we have a flow line P of index I minus, index I plus 1, sorry, the flow line Q. So let's call this U plus to Q which is index I. I have another flow line U minus to critical point R of index I minus 1. I want to glue these things to get an end of the moduli space of curves from P to R. So these, let's write V for the gradient of F. I think V is the upward gradient and then these, these flow line U will satisfy the equation DSU plus V of U equals 0. So I'm thinking of, I really should draw the arrows going the other way. So they're parameterized like this. So my flow lines actually go up. Okay. So U is a map from R to X. And I want to glue these things. Also there's a, there's a, sorry. These signs are going to kill me. Okay. All right. And there's also a, there's a linearization of this equation which I'll write as DU. So this goes from sections of the, sorry, of U star TX. And we probably want to use some monochromatic space completion. It doesn't really matter which one as long as you have some decay conditions. So we could take L21 sections. So this is the derivative of this equation with respect to the deformation of U. So DU of sum C is the covariant derivative of C in the direction of the flow line minus the covariant derivative of the vector field V in the direction C with respect to, say, the Levy-Chavita connection. All right. Okay. So that's, so now, so how am I going to glue these things? So I'm going to choose a very large value of R. And I'm going to translate U plus up and U minus down. The total translation distance will be R. Okay. So I pull these things apart by distance R. And then I'm going to have some cutoff functions. So I guess U plus up by R over 2, let's say. And U minus down by R over 2. Okay. What does that mean? Okay. Mark, are you translating in the source R or the target X? Well, it's sort of both, but when U is a map, we'll define an R. So I'm going to compose that with a translation of R. So, right, and I'm going to, I need to choose some cutoff functions. So there'll be a cutoff function beta minus, which will look like this. So here's R over 2. Here's minus R over 2. So it's 1 over here. And somewhere between 0 and R over 2, it dies. And then there'll be a cutoff function beta plus like this. I'll call this parameter S. And the size of the derivative of these cutoff functions is on the order of 1 over R. Okay. And then what? So now, this translation by R is making them overlap more or less? So basically, you're pulling them apart so that sort of at time, at S equals 0, they're sort of close to the critic, they're both close to the critical point Q. Oh, no. And then we choose a coordinate chart on X in a neighborhood of Q. Okay. And then we can preglue these things. So this is a curve which is, well, let me not put the S in there. So it's beta minus times U minus plus beta plus times U plus. So for S less than minus R over 2, it's just U minus. For S bigger than R over 2, it's just U plus. And in between, we use these cutoff functions to interpolate between U minus and U plus. Okay. Now, the usual approach would be you start with this thing and then you perturb it and you argue that you can perturb it to get an actual flow line. Wait a minute. 
You haven't shifted though. Shouldn't you put the shift in by R? The U plus were shifted. I guess I'm using the same notation. Well, U minus is the, we're supposed to be shifted rather than be close, isn't it? Yeah. So these have already been shifted. So when I said shift, when I said translate them up and down, I didn't use a notation for that. So these have, these are the shifted, the translated U minus and U plus. Sorry about that. And the betas also have R. Is it changed the R to the beta plus and minus? No, I'm not doing anything to the betas. Betas stay the way they are. The betas do depend on R. Yes. Supposing I understood the two gluing from yesterday. Is this like the same kind of thing in one dimension down? Yeah, it's the same kind of thing. So I mean the picture, if you want a picture, then I had a picture and it's gone. So this is U plus and this is U minus. Then our flow line, so as S increases, so first we're following U minus, or because U minus has been translated, we're following it for a very long time. So we're very close to Q. And then these Cotto functions kick in and we start interpolating to U plus and then we follow U plus like that. And we have some coordinate chart here. So all the Cotto function stuff is taking place inside this coordinate chart. So it makes sense. Can you do the understanding on that side? Yes. So as S increases, so we're following U minus until we're very close to Q. And then these, these Cotto functions kick in and then we're interpolating to U plus and then we're following U plus. You cannot require that there's a relation between beta minus and beta plus like beta minus equals 1 minus. I don't, I don't need that. It could be, well, that you have two times. So what is the, you assume that the image of it could be 0? Right. Yes. So, so this addition is defined with respect to my coordinate chart in which the critical point corresponds to 0. If I wanted to say this a little more correctly, I would talk about exponential maps and so on. But I'm trying to make it as simple as possible. Okay. So that's the pre-glue curve and the usual, the usual way that's presented is to say you would now try to, you'd argue that this can be perturbed to an actual flowline. Then I'm going to do something a little different, which is I'm going to perturb the things before, before pre-glueing them and then pre-glue them. So it's going to look like this. So we're going to, so we're going to choose or let psi plus or minus be small sections, C0 small sections of pullback tangent bundle. And we're going to look at a curve. So let's consider beta minus times U minus plus psi minus plus beta plus times U plus plus psi plus. So this is, so this is my pre-glue curve, but it's modified using sections of the, well it's modified both on U minus and U plus. So on U minus, this is perturbed by psi minus. And up here on U plus, this is perturbed by psi plus. And in the middle, it's perturbed by some, some combination of psi minus and psi plus. Okay. Yes. So U plus and U minus are always shifted. Okay. And I want to say what, I want to solve for this to be a flowline. So we're going to solve, so, so when is this a flowline? So I'll just write the equation. And if you've done this before, the perturbation answer would have been, yeah. If the size is zero, then it won't be a flowline. So I want to choose the size to make it a flowline. So I want to solve for that. I'm going to write down the equation for this to be a flowline and then solve it. Okay. 
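Before the computation, here is the perturbed preglued curve in display form, with the translation convention made explicit (the talk leaves it implicit, so the ±R/2 below is my choice of convention):

```latex
\[
u_{\mathrm{pre}}(s)\;=\;\beta_-(s)\,\bigl(u_-(s)+\psi_-(s)\bigr)\;+\;\beta_+(s)\,\bigl(u_+(s)+\psi_+(s)\bigr),
\]
```

where u_- and u_+ already denote the translated flow lines (u_- shifted down by R/2 and u_+ shifted up by R/2), the sum is taken in the chosen coordinate chart around Q, and the point is to solve for small ψ_± making u_pre an honest flow line.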
So this is a little messy, but this is really the very simplest case I could think of. And also, let's, to make my life even easier, let's assume that, let's assume that V is linear in this coordinate chart. I don't assume that there are some extra terms. In general, in this business, well, I mean, so I wrote these gluing papers with TOVs and so this is like my analysis training. So what I've learned is basically you sort of write down an equation, then there's the stuff you want, and then there's a whole bunch of crap, and you have to estimate all that crap to show that it doesn't matter. That's, I tried to make, but I'm trying to make this so there's as little crap as possible. Okay. So let's just write the equation. So I have ds minus V of this thing, beta minus times u minus plus i minus plus beta plus times u plus plus i plus. So what is this? So there's a ds beta minus times u minus plus i minus plus beta minus times, I probably should be writing the covariant derivatives here, but I'm just going to not worry about that. So ds u minus plus ds side minus. And there's a similar thing with plus. And if you don't mind, I'm going to take the liberty of multiplying this term by beta plus and this term by beta minus, which I can do because the support of the derivative of beta minus is contained in the region where beta plus is equal to one. So I can, I can, I'm allowed to do that. It doesn't change anything. Then I have to put in the vector field. So I have minus beta minus times V of u minus plus the derivative of V in the direction side minus. And then in general, there's going to be some additional error term. I'll write this as q minus of side minus. So this is, basically this is the Taylor expansion of the vector field. So it's the vector field plus its first derivative plus some quadratic term. And then minus beta plus times V of u plus. I thought we agreed that the vector field is linear. Yes. So near the origin, this is all, this only comes up away for, away from outside of my coordinate chart. Outside of your coordinate chart, you just take exponentials instead of sums. So you put them by sums and it only makes sense in this coordinate chart? Yeah, yeah. So this is to be interpreted as the exponential map of u minus evaluated on side minus. So I'm sorry for my sloppy notation. I just tried to make it simple. Okay. And then we can cancel some stuff out because, so u minus is a flow line. So ds u minus plus V of u minus equals zero. So I can cross this out. And likewise, u plus is a flow line. So I can cancel this ds u plus with this V of u plus. And what do I get? So I get, so I get beta minus times, so, so another nice thing is this term, I really should make this a covariant derivative here. Let me do that to be a little more honest. So this nav s of psi minus, minus nav of psi minus of V is the deformation operator applied to psi minus. So these two terms here are d minus of psi minus. Well, this is d minus is the deformation, the linearized equation for u minus. And then what else is there? So then there's this q psi minus. And then there's this ds beta plus of times u plus plus psi plus. And then there's an analogous thing times beta plus. So beta plus times d plus psi plus plus q plus psi plus plus ds beta minus. I guess these q's I have to subtract, it doesn't matter. And then this times u minus plus psi minus. So this thing is the failure of my thing to be a flow line. So I want to make this thing equal to zero. 
In the more general, the more general situations like in the papers with the tobs, there's a bunch of extra crap in here, which you then have to estimate. Okay. How much time did I start? Or how much time do I have left? Okay. 22 minutes? Okay, sweet. I can actually get somewhere then. All right. So I'm going to rewrite this as beta minus times theta minus of psi minus psi plus plus beta plus times theta plus of psi minus psi plus. So this is my equation for the thing to be the flow line. And then the first lemma you need is that if r is sufficiently large, then there exists a unique pair, psi minus psi plus, such that I guess these, these are in what you have to assume some decay conditions like L21. So it's a unique pair of psi minus psi plus such that psi plus or minus is perpendicular to the kernel of d plus or minus. So this is perpendicular in the L2 sense. And I'm assuming that we're in a situation where d is defined, so these things are cut out transversely. So in this situation, d plus or minus is rejected. And its kernel is one-dimensional given by the derivative of the r translation of the flow line. So there's this unique pair such that both theta minus and theta plus are equal to zero. And then now if we have this lemma, then we've glued because if both of the theaters are zero, then certainly their sum is zero and so it's a flow line. And the idea of the proof, so I'm not going to go into too many analytic details because it will be too messy and take too long or to be a little more honest because I don't remember all this stuff from this paper which was written eight years ago. It's one of the reason why the invention of writing was such a milestone in human culture is that you no longer have to keep things in your head. You can write them to have them and forget them. Anyway, the idea of the proof is to, well let me first rewrite the equation a little bit. So let, what? You were ravens that were tying. Uh-oh. So what's my notation? Right. So let pi plus or minus. Is this really what I want? Sorry, hold on a sec. No, I actually don't need this yet. Okay. Sorry, never mind. Which term is guaranteeing that after the term, the end of one goes to the start of the other one. I mean these are decaying as you go to the ends. So these go to zero. So you don't, you're not going to escape this coordinate chart and it's going to be okay. I think the initial set of guarantees will tell us the zero and it's zero. So, I think so there's a, so there's an inverse. So d plus or minus inverse, which is going to go to the target. Use your favorite, favorite bonnet space completion. So we'll give you an isomorphism to the orthogonal complement of the kernel of d plus or minus. Okay. So I'm going to rewrite this equation. So the equation, so what is theta? Let's look at the equation theta minus equals zero. So this says that d minus psi minus is equal to Q of psi minus plus ds beta plus times U plus plus psi plus. So then I can apply this inverse. So that's equivalent to saying that psi minus, remember I assume that psi minus is orthogonal to the kernel. So psi minus is equal to d minus inverse of Q psi minus plus ds beta plus U plus plus psi plus. And likewise, theta plus is equal to zero if and only if psi plus is equal to d plus inverse of Q plus of psi plus plus ds beta minus U minus plus psi minus. So these are the, these are the equations we need to solve. And then you want to use, use the contraction mapping theorem in your favorite bonnet space completion. 
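Collecting the two fixed-point equations just written down, together with the one scale that matters for the contraction argument (the signs in front of the quadratic terms are left loose, as in the talk):

```latex
\[
\psi_-\;=\;D_-^{-1}\Bigl[\,Q_-(\psi_-)\;+\;(\partial_s\beta_+)\,\bigl(u_+ + \psi_+\bigr)\Bigr],
\qquad
\psi_+\;=\;D_+^{-1}\Bigl[\,Q_+(\psi_+)\;+\;(\partial_s\beta_-)\,\bigl(u_- + \psi_-\bigr)\Bigr],
\qquad
|\partial_s\beta_\pm|\;=\;O(1/R),
\]
```

which is exactly what makes the contraction argument described next work: take R much larger than the operator norms of D_±^{-1}, and take ψ_± small enough in C^0 that the quadratic terms really behave quadratically.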
So you pick, pick your favorite bonnet space completion and then you want to show that the right side of this equation is a contraction mapping on the set of pairs psi minus and psi plus. If you look at this, well this, this, this term of the U plus and U minus is a constant so that's fine. And I have the derivative of beta plus times psi plus. The derivative of beta plus is on the order of one over r. And I don't know what the operator norm of d minus inverse is, but if I choose r to be much larger than the operator norm of d minus inverse then, then this part of the equation will be a contraction. And also this quadratic term if you set things up so that psi minus is c zero small then this, then this part will also be a contraction. So you get a unique solution. So tada, we've got to include. And then you can, the fact to check. So this construction gives you a homeomorphism from the set of r sufficiently large to the set of gluings. So if you choose two different r's you're going to get two different glued curves this way. And any, any curve which is sort of close in the sense of the usual compactness to this broken curve is actually obtained by this construction. And it's a, it's a slightly weird construction because if you look at this thing here, so if you look at the psi minus then when s is large like up here in the picture then the psi minus doesn't matter at all. So the, so what we're doing to the curve up here doesn't depend on psi minus at all. And what we're doing to the curve down there doesn't depend on psi plus at all. But still I'm solving for psi minus and psi plus which are defined in the whole real line. So it's a little bit weird because the psi minus and psi plus contain various additional information which I don't care about. So it's a little weird but as we'll see this is a useful way to do things because it allows us to use the analysis of these operators d plus and d minus. Yeah. So where, I can't currently see why it matters if it is linear or not. Well I just used it in this equation so I had this, I had V evaluated on this whole expression and then I expanded that linearly which I couldn't do if you were nonlinear. If it's not linear then there's just some extra error terms and it's not a big deal. Question? That's right. Well I don't know what to say. Anyway there's a unique psi minus and psi plus satisfying these conditions. If I didn't put all these conditions on it wouldn't be unique and then it would be more confusing. Okay so now let's do an example where there's an obstruction. But are you saying that it would be possible maybe to do using different theta minus and theta plus to get that equation to be zero even though individually they're not zero? I mean you can change psi minus and psi plus. So in the region where they were both of the cota functions are nonzero you could add something to psi minus and subtract something to psi plus and you get the same curve. However then you no longer satisfy the equations theta minus and theta plus equals zero. Okay now here's warmup number two and it's going to get a little more interesting. So warmup number two I have a one parameter family of functions and at time t equals zero there's a flow line, an index zero flow line. So this goes from say q to r. So these both have index equal to i. And then maybe there's another flow critical point p of index i plus one and the flow line u plus. Okay and then I want to glue these to an actual flow line from p to r for t not equal to zero. 
Now the thing is you actually you can do this only for one sign of t. So either you can do it for t positive and not for t negative or vice versa. The analysis is actually going to tell us which. Okay so how does this work? So what's the construction? So again we choose r very large we translate as before. And then what's the data for the inputs? So there's no assumption that we have t, g, t for t to zero it's generic. Yeah I do need generic. I do need generic. So this I'm going to assume that this. Okay so the co-index of this would be the co-index of the co-index of the co-index of this would be one that you would prefer to go for. Where is the fact that you want in this situation appear in the previous. What? In four walls that you could align when you were gluing them. So what's different in this situation is that the linearized operator for d zero is no longer subjective. So this has a one dimensional co-kernel. And we can't glue these for fixing t. We have to change t to be able to glue. So the inputs for the gluing construction are a psi minus and psi plus as before plus t. So I'm going to change to a different small time t and perturb by psi minus and psi plus. Okay and then from these inputs we get. And what's the larger one? All of this discussion. We're trying to show that something about the homotopy invariance and this construction in the start or? That is one thing you could do with this. Right now I'm just doing it as an example of the gluing construction. But yeah if you want to understand how Morse complex has changed under bifurcations and this is part of what you have to do. Okay so then this you get a flow line if and only if again beta minus of beta minus times theta minus of psi minus psi plus t plus beta plus times theta plus, excuse me, there's no minus. This is zero. Okay so you can do a similar calculation to this. There's going to be an extra term because when you change t you change the vector field. So this theta is zero of psi minus psi plus t is equal to d zero, ah, zero. So it's equal to d zero of psi zero plus, there's going to be that ds beta plus of u plus psi plus and then an additional term which we'll write as t, t times v prime where v prime is the derivative of the vector field with respect to t. And then there may be some additional quadratic terms which I'm going to ignore. And a similar expression for theta plus. So theta plus is d plus psi plus plus ds beta zero u zero plus psi zero plus t times the derivative of v plus and other stuff. You said we used the index differences being one for surjectivity but we don't actually need surjectivity. We don't need the v plus minus two. So we could take orthogonal component. Right, so we're going to do something like that shortly. So here this inverse operator was defined in the whole target space. In general it will just be defined on the image of the operator. No, well it will work up to an error which I'm about to show you. So here's the lemma. Let me state the lemma and then we can discuss it. So the lemma is that for t sufficiently small and r sufficiently large there exists a unique pair psi zero psi plus such that so as before psi zero is perpendicular to the kernel of d zero. Psi plus is perpendicular to the kernel of d plus. Theta plus of psi zero psi plus t is equal to zero. And as for theta zero I can't necessarily get it to equal zero. All I can get is that this is perpendicular to the image of d zero. And the idea of the proof well I shouldn't have erased this. 
But the idea of the proof is, again, to use the contraction mapping theorem. So when you do that, the inverse operator is only going to be defined on the image. So, well, maybe I should write it down. So let pi zero be the projection onto the image of D zero. So then we have a D zero inverse, which goes from the image of D zero to the orthogonal complement of the kernel of D zero. Now we're going to solve an equation that looks like — well, what I want is that pi zero of theta zero is equal to zero. That's the equation I want to solve. So I can write this as pi zero of D zero psi zero equals blah blah blah. And now I can apply the operator D zero inverse to it. So I can write this as psi zero equals D zero inverse of blah blah blah — I guess there's a pi zero here also, so D zero inverse pi zero of blah blah blah. And then you can do the contraction mapping theorem as before. What's the punch line? So I didn't quite get as far as I was expecting to get, so I don't think I'm going to be able to completely explain this example today. But let me tell you what happens. So to glue, I really want theta plus and theta zero to both equal zero, and I can't get that. Well, I can get theta plus to equal zero, but theta zero lives in a one-dimensional vector space — it lives in the orthogonal complement of the image, in the cokernel of D zero. It's a one-dimensional vector space. I can't get that to equal zero. So now we have this picture over here. So M will be the set of all pairs t and r. And over this I have an obstruction bundle O. This is a trivial bundle; the fiber over any point is the cokernel of D zero. And I have a section s. And what is the section s? So s of t comma r is equal to theta zero. So the gluing construction says I can almost get the gluing to work, except theta zero might not equal zero. This theta zero is the obstruction to gluing. So when this is equal to zero, I can actually glue. Okay? So this is some function of t and r, and when this function is zero, I get a gluing. And I've run out of time. So in the next class I'll tell you how to actually compute what this function is, and how to figure out whether you can glue for t negative or t positive. And then I'll do the harder example, and then I'll do the holomorphic curve stuff. But I'll try to make the lectures modular — like, you know, the Titanic: it has compartments, so if one part of it fills with water they just seal it off so the ship won't sink. So I'm going to try to do that. So if you didn't understand anything, it'll sort of start over. Although if too many compartments fill with water, we'll still sink. But all right. So I'll be hanging around if anyone wants to talk about this stuff. Thanks. Are you and Chris having office hours tomorrow for this, officially? I have no idea. Anyway, I'll be hanging around after all of my talks to answer any questions — call it office hours. Any questions? In this example the D zero doesn't depend on where you are in the space. Right, so here it's a trivial bundle; other examples will be less trivial. But this one is, as I said, the easiest one I could think of. Any questions? Let's thank Michael again.
Obstruction bundle gluing is a method of calculating the number of ways of gluing certain configurations in which transversality fails, but not too badly. We will introduce this technique and show how it works in simple examples from Morse theory, contact homology, and embedded contact homology. (There might not be enough time to cover all of these examples.)
10.5446/16307 (DOI)
Yes, and thanks for all the fabulous questions. I do need to get to the point landing though, so Helmut can take off next week. So I would ask that if you have questions that are technical or academic, you ask me after the talk. Which means, what you should ask me about is: how in the world do I apply this to my favorite J-curve space? That's what we're really here for. So why don't you just, or if these are algebraic geometry questions, they are good for coffee. Good. All right. So here's our theorem, and today the goal is to actually explain every single word. So we want to regularize a moduli space, which we think of as being cut out as the compact zero set of some section. We know by now what an M-polyfold is. We know essentially what scale smooth is. I'm going to need to tell you what Fredholm is. We know what a section is. Right. So the reason we want to model everything on scale-Hilbert spaces is that we want cut-off functions. So. That's why you say Hilbert. Hilbert and not Banach. Right. Exactly. I mean, you can do it also inside the Banach space. Right. But we don't want to worry about that right now. So then really the key is that something is non-empty. And what is that? So it's a space of other sections that are sc-plus in some sense, which I'm going to need to define. So if you're seeing sc-plus, what you should think is compact perturbation. That are transverse and in general position to the boundary, if there is boundary. And I'm going to be allowing myself to fix a neighborhood of the zero set and a norm. And I'm allowing myself to force the section, the perturbation, to only happen in that neighborhood and to be bounded by that norm. Since I can scale, I'm going to be able to scale that norm down so I can have a one here. So this pretty much means I can make my perturbations as small as I want. And that is the reason that, I think, with these two riders you don't need any kind of genericity or comeagre sets. This gets you to make your perturbed zero set as close to the unperturbed one as you want. There is also the Obamacare rider: if you like your solutions, you can keep your solutions. So what I need to explain at some point is also what an auxiliary norm is, what Fredholm is, what exactly sc-plus is, right. So what do I mean by controlling compactness? So if you just add a perturbation in this infinite dimensional setting, it's totally not clear that the perturbed zero set, right, which we're going to want here, I mean, it's going to be smooth if it's cut out transversely, but why is it compact? And so we're just going to build this into the definition, and then somebody has to, you know, prove that there are norms and neighborhoods that control compactness. So controlling compactness means that, you know, if I have any sc-plus section that satisfies these two things, not necessarily transversality, then this set, however regular it is, is compact. All right. So check. Let's see, no dash. Good. What's what? Y prime. Y prime. Oh, right. Y one, good. Yes. Y1 is sort of the quality-one fibers, and I should say that again at some point. That's going to happen. So, right. So now the implicit function theorem that Joel already told us about gives us, you know, that the zero sets are nice, smooth manifolds. And since I'm asking for general position, actually, I know exactly what my boundary and corner strata are. That's important if you sort of do Floer theory or something, right. You do a one-dimensional moduli space.
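Before the Floer-theory aside continues, here is the statement just assembled, gathered into one display. This is a paraphrase in the notation of this lecture, not a verbatim quotation of the Hofer-Wysocki-Zehnder theorem.
\[
\begin{aligned}
&\text{Data: a strong, tame M-polyfold bundle } Y \to X,\ \text{an sc-Fredholm section } s \text{ with } s^{-1}(0) \text{ compact},\\
&\text{an auxiliary norm } N \text{ and a neighborhood } \mathcal U \supset s^{-1}(0) \text{ that together control compactness.}\\[2pt]
&\text{Conclusion: }\ \mathcal P \;=\; \big\{\, p \ \text{sc}^+ \ \text{section} \;:\; \operatorname{supp} p \subset \mathcal U,\ N(p)\le 1,\ s+p \ \text{transverse, in general position to } \partial X \,\big\} \;\neq\; \emptyset,
\end{aligned}
\]
and for every \(p \in \mathcal P\) the set \((s+p)^{-1}(0)\) is a compact manifold with boundary and corners, its corner strata induced by those of \(X\). Controlling compactness means: for every sc\(^+\) section \(p\) with \(\operatorname{supp} p \subset \mathcal U\) and \(N(p) \le 1\), transversality not required, the set \((s+p)^{-1}(0)\) is compact.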
You regularize it, and then you're going to get, you know, that the sum over the boundary terms is zero, because it's a one-manifold. Now you'd like to know that those are actually the once-broken trajectories and not the triply-broken trajectories or something. All right. So the actual boundary would be degeneracy one in my perturbed Floer space. And this regularization theorem says that that's actually exactly where my perturbed zero set hits the degeneracy-one locus in my ambient space, and in Floer space the degeneracy exactly comes from the number of breakings. So that's the only boundary you get. So this actually has applications. Right. So you might ask why in the world we talked about good position. So good position is something that happens when you're trying to force more coherence of your perturbations. You might not be able to achieve general position, but that's because you've already prescribed the section on boundaries, which is something I haven't done here. So that's where good position comes from. Let's see. What else do I need to say? Right. So in order to regularize now, I should say that somehow whatever count I get here is invariant under choices. And so in particular, well, I really need to think about what happens when I vary J or whatever in my setup. But mainly, in this theorem, I need to think about what happens if I take two different perturbations. And the claim, well, certainly most of these conditions, except for transversality, form sort of a convex set. So I can make a one-parameter family, and then I wiggle that one-parameter family again a little bit to get transversality. So I take a p_t that goes from p_0 to p_1, and I build this into a Fredholm section over [0,1] times X. So the neat thing is this is all now Fredholm. So I don't really have to sort of go and prove again that this is Fredholm. What I'm doing here is I'm adding one dimension in the domain. That should, you know, add one to my Fredholm index. And then I'm adding a smooth section that's also compact. So that shouldn't, you know, there should be Fredholm stability, so this shouldn't change my Fredholm index at all. And so, you know, all those sort of basic Fredholm facts survive in polyfolds. So this is going to be polyfold Fredholm. It's going to be transverse after I wiggle enough, by using that theorem again. And then I get a smooth cobordism out, which goes exactly, well, if there was no boundary before, then the only boundary I have here comes from the interval. And so then it's a cobordism between whatever happens at zero and whatever happens at one. So, all right. I wanted to write a box, except somehow statements of theorems nowadays are harder than proving theorems. Good, yes? Did you say it's not just, did you really go into that? Well, and then you have to wiggle a little to get transversality. Good. So, any questions about the statement? Any nonacademic questions? Good. So, right, I want to, well, I should say what a strong, tame M-polyfold bundle actually is. And in particular, I need to fix notation. I can actually tell you what a Fredholm section is, but I'm going to just tell you about this special case that so far has sufficed in all applications. And the nice thing is that this special case will be things that are automatically strong and tame. So, I want to not just talk about general retractions, I'm just going to talk about splicings. So, an M-polyfold bundle of splicing type is what?
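A sketch of the cobordism construction just described, in the same notation: given \(p_0, p_1 \in \mathcal P\), choose a path \(t \mapsto p_t\) of sc\(^+\) sections (still supported in \(\mathcal U\) with \(N(p_t) \le 1\)) and set
\[
\tilde s \colon [0,1] \times X \;\to\; Y, \qquad \tilde s(t,x) \;=\; s(x) + p_t(x).
\]
This is again sc-Fredholm, with \(\operatorname{ind}(\tilde s) = \operatorname{ind}(s) + 1\) since the sc\(^+\) family is a compact perturbation, and after a further small perturbation achieving transversality, \(\tilde s^{-1}(0)\) is a compact cobordism from \((s+p_0)^{-1}(0)\) to \((s+p_1)^{-1}(0)\).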
So, I start with something that should be a bundle. So, I need some kind of subjection between two polyfolds that are modeled on splicings, which you may not really know yet, but it's going to become evident once I write down the local trivializations. So, together with, so it like this to be a vector bundles, I'd better put a real vector bundle structure on each fiber vector space structure. So, that's going to be O goodness. Right. Y sub X, you put a superscript T here. If you were wondering, these, if you perturb, you might have to actually go into a different, no, this is, that's totally a lie. Bundle doesn't change when I change my perturbations. Sorry. It just does if I change J. Right. Okay. Good. So, these are actual fibers here. Right. And I need local trivializations. Right. So, you see the exercise in writing the polyfold book is to really take your differential geometry book and just sort of write SC or polyfold in front of everything. So, right. Yeah. At some point, you have to worry a little. So, right. So, there are splicing things that are just sort of the M polyfold models. And now for a bundle, I want a bundle splicing. So, what is that? Well, it happens over an open set in the base. I would like the local bundle to be SC defiomorphic linear on fibers to a certain model R, which is again a retract, but it's a specific retract. So, I'm going to at some point probably forget to specify open subsets of retracts or splicing. So, if I do that, you can fill them in or you can just ignore this. So, whatever this retract is, it's going to have a very nice structure. So, there's going to be one parameter that parameterizes families of projections which give you the base and the fiber. So, this whole thing. So, my parameter is allowed to, that's where all the boundary comes from. So, this is where V is some finite dimensional parameter and that's where all the boundary comes from. And then, I have a family of projections on a scale Hilbert space E and one on a scale Hilbert space F. So, these two are. So, in particular linear projections, which means also I don't have to worry about scale smoothness of them by themselves. I just have to worry about scale smoothness with respect to that parameter. These are usually the gluing parameters. So, right. And now, I'd only have to write this once because they don't know whether this is a small pie or a large pie. So, but there are two different families here. And on this, right, and this is, well, this would be the retraction actually. Let me write the retraction here. Drag and drop. So, the parameter space of V is here. So, then the retraction just goes to V projection on E capital Pi projection on F. This is not algebraic geometry. No, sorry. Good. No, I just wanted to know, you know, the way you set this up, suppose you take the projection and then you build a bundle out of things you projected and then you get the section. Why do you do that instead of taking an honest SC infinity vector bundle or whatever, then taking the section and then taking the projection at the end? What's the. Because that's what you have, really. If you think about it, well, ask me that again once I've given you the real example, I think. That's. Or even the toy example. So. So you're saying it's an SC slicing because these projections are linear, right? Yeah. So the nice thing is, right, so all the weirdness now comes from jumps in dimension of the fibers, right, of these images of the projections. So, right. Yes. 
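For reference, the local model just described, written out (a sketch; the restriction to open subsets is suppressed, as in the lecture):
\[
\mathcal R \;=\; \big\{\,(v,e,f) \in V \times E \times F \;:\; \pi_v e = e,\ \Pi_v f = f\,\big\},
\qquad
\rho(v,e,f) \;=\; \big(v,\ \pi_v e,\ \Pi_v f\big),
\]
where \(V\) is the finite dimensional parameter set (the only source of boundary and corners), \(E\) and \(F\) are sc-Hilbert spaces, and \((\pi_v)_{v \in V}\), \((\Pi_v)_{v \in V}\) are families of linear projections such that \((v,e) \mapsto \pi_v e\) and \((v,f) \mapsto \Pi_v f\) are sc-smooth. The base of the local bundle is the retract \(\{(v,e) : \pi_v e = e\}\), and the fiber over \((v,e)\) is \(\operatorname{im}\Pi_v\).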
So the thing is that what I'm asking to be scale smooth is this map. I'm not asking Pi V as an operator to depend continuously on V. It will not, in fact, well, it might, but then everything is boring. So if it varies continuously with V, then, sorry, if the operator topology is continuous, then the dimension of the fibers doesn't change. But in infinite dimensions, you can let the dimension of the fibers change. That was exactly the example that Nate worked out yesterday. And so you have this sort of bundle with varying dimensions of fibers. And really what happens is, see, these retracts, that is your space, right? That's where you have the sections. It's just like Kuranishi structures. The sections happen over the sort of small little things and you don't have a canonical extension to anything bigger. So the bigger stuff here, the sort of ambient spaces just exist locally and nobody says that they fit together in a nice way. So I don't usually have an extension of the section to the bigger stuff. This is not like what I'm talking about here. S is not just a restriction of some section in a Banach bundle to some weird subspace, right? So the local sort of Banach ambient spaces are just local. Does that make sense? Good. All right. So, yeah. Ah, yes, I should, okay. Did I say, I did not even say this. So, right, so this is the base and this is the fiber. So, really, right. So what I should say, search that fibers go to images in F. So, really, right. So this is the base. That's where boundary and corners happen and F for fiber. So N has no special meaning. N here, no, it's just, I have N gluing parameters here. So that's, yeah. It's just some gluing parameters give me boundary and some don't, yeah. Random choice of words. Right. I'm not even going to say why this is tame, but it's automatically strong because here's the Y1 bundle. So strong means I have a meaningful bundle of better quality fibers. And in this case, the fibers are simply given by, right, this is my local model. And I take everything in the base, but in the fiber, I only take the fiber of quality one. So this is, this, this one is that one. So, all right. Good. So, example, right. Toy example. Right. So, this is a, an M polyfoil. So what you see here is an open ball that I've attached sort of two dimensional Saturn ring two. Again, with open boundary here, and then I've attached just one interval. So it's sort of one dimensional, two dimensional, three dimensional pieces of my X. And in order to, for, you know, transverse section to cut out something smooth here, right. So what I would like to be able is to say, okay, I have a Fredholm index one section over this, you know, that goes into this two dimensional domain smoothly, and then maybe even into the three dimensional domain to cut out something smooth, right. So the goal is to not get a sub poly fold, but it's actual manifold. So if that is to be the case, I'd better have fibers of different dimensions. And the fibers better jump sort of at the same rate as the base dimension jumps. And that's sort of nice to explain with this pricing. I find, right. So here in this case, Y over X, right. If I want one dimension here, I should have sort of no vector space over that. Everything is zero. Over these points, sort of Y X should have one, and over these points, I should have a two dimensional fiber. And so really at some point, the condition is going to be really that in this bundle splicing, somehow the fibers jump at the same rate. 
But we're going to build that into our definition of Fredholm operator. That's not in here yet. No. Right now, right, these could be these two things that this jumps at zero and that jumps at one, right. That would not be good news. Homework, right, imagine how Nate's example gave you a little bit of transition map or a little bit of a chart for a piece here. Right. And then you might have to might want to put, you know, another chart that's just two dimensional. And then you'd have sort of an overlap, but it just goes from this two dimensional piece to that two dimensional pacing here. You then need somehow our definition of scale smoothness to say in what sense that scale smooth. What do I do right now? I do want that. Yeah. So Fredholm section is going to mean something that imposes. Jumps at the same time. Yeah, actually, let me write this down and then, right. So, so here's a little chart that I would like to do. So my chart map is going to be the more fake to some open subset of this ambient splicing. Right. So now, right, I better have a topology in which this thing is open. But that's how we built that. And what is this? Right. So this here, this ambient splicing was the union over R. So R is this direction here. And each fiber here was some line in L2. So the funny thing is just that these lines in L2 are not constant. They sort of turn into all the infinite dimensions because it's this family of bump functions that somehow weakly converges to zero because it just gets pushed out to infinity as you get closer to that point here. So, right, so here. So what I'm actually going to take here, right, is I'm going to take the projections to be the same projection on L2 to the same little bump function. That's just zero for V less or equal to zero. And it's a bump centered at something like e to the one over V when V is positive. So far, this is just here. This is just the image of the little pi because that's just my base. But now, my fiber over this is going to be the same thing except it uses the same base gluing parameter again. So over this, I'm going to have the same fibers. And here, at every point here, the fiber is zero. Right? So that's the same thing I'm going to use. Maybe I should say here, right? Over V, the fiber of my base splicing is zero. So the fiber of my bundle splicing is also going to be zero. And at a point here, this fiber is one dimensional. And so over that point here, my fiber is also going to be r beta V. So the local trivialization of the bundle y over u is simply going to be, well, again, this open subset, what is it? Right? I could take the points V comma x or V e that lie in this thing here, open subset O. Right? And over each of them, the fibers again are beta V. So these are my fibers. What about right? So what happens at the interesting point? Right? So the fiber of my splicing is still just zero. Right? And so the fiber of my bundle is also just going to be zero. It's just that then because the fiber of the base jumps, I'm going to have the fiber of the bundle, or I'm going to have the fibers also jump. So that way, the dimension jumps, you know, equally instead of my Fredholm index actually stays constant, which is a good thing. It's not built in here. No, it's going to be built into the definition of a, I should say, but one could say, okay, so let's actually do this here. So y is fillable if for all V, I think the kernel of pi V is isomorphic to the kernel of capital pi V. So that's the complementary. And we're going to have to fill at some point. 
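A sketch of the toy bundle splicing in formulas, following the description above (here \(\beta_v\) is the \(L^2\)-normalized bump centered near \(e^{1/v}\)):
\[
\pi_v \;=\; \Pi_v \;=\;
\begin{cases}
0, & v \le 0,\\[2pt]
\langle\,\cdot\,,\beta_v\rangle_{L^2}\;\beta_v, & v > 0,
\end{cases}
\]
so the base retract consists of the points \((v, c\,\beta_v)\) for \(v > 0\) and \((v, 0)\) for \(v \le 0\), with fiber \(\mathbb R\,\beta_v\) in the first case and \(\{0\}\) in the second. The fiber dimension jumps exactly where the base dimension jumps, so the index of a section stays constant; and this model is fillable in the sense just defined, since \(\ker \pi_v \cong \ker \Pi_v\) for every \(v\) (both equal all of \(L^2\) for \(v \le 0\), and the orthogonal complement of \(\beta_v\) for \(v > 0\)).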
Yeah. Strong just means that this nice bundle Y1 is defined, which in this case is sort of automatically defined because we have these nice fibers. So I could just take the fibers of quality one in the ambient space. It's a property of a general M-polyfold bundle. It is automatic for M-polyfold bundles of splicing type. Right. And what you want to think of is when you have this, right, where's my, right, the real example, right, it is time to go. What do I mean by this, right? Yes. In some sense, I'm going to need it. Can you just trust me that this is going to be built into the definition of Fredholm, the filling? Great. Yes, please complain. I guess you of all people have the right to complain. Anyway, so the way you should really think about this strong and Y1 is because you have H3 fibers over H3 maps. So really what you should think about strong is that H3 in a pullback tangent bundle makes sense when u is in H3. But so far, usually our fiber over a point of quality H3 is usually just H2. Roughly speaking. So there's sort of, yeah, better quality. Well, it makes sense and it's sort of invariant under coordinate changes. Except, right, if I wanted to make sense of H4 in an H3 pullback tangent bundle, then that is probably going to depend on some choices of the trivialization, right? But with H3, it's going to be independent. So that's, let's see. Good. Okay. So this was the baby example and this was already sort of halfway to the real example. So it's good to be very sloppy. So I want to think about the bundle near a k-fold broken Floer trajectory. Because I'm lazy and I don't want to say SFT. You could think, you know, k-fold building, right? This k-fold broken Floer trajectory is the same thing as a (k+1)-floor building, because if you want to break k times, you need k+1 trajectories. And the first place where I'm going to be totally sloppy is that of course Floer trajectories have to be modded out by R at least, and I'm not going to do that. So, yeah. Do the gluing parameters also correspond to the height of the building? The gluing parameters, right. Yes. So for every floor, when I glue together, I need one gluing parameter. Exactly. Yes. Yes. So k is in there. Well, no, actually, no, it's the right k, because these come from nodes, interior ones. So every actual breaking, every floor breaking, that's what gives me the boundary. And yes. Yeah. And then internal nodes are just going to give me, right, internal ones. I mean, I could have written C here, actually, right. Let me not do that, right. So, good. So what is my bundle splicing here, right? So I'm going to have k gluing parameters. I need some base space and some fiber space, and I'm just going to tell you what the projections are. But first, I should tell you what these scale spaces are, right. So what do I need to parameterize Floer trajectories, right? I need to vary these. So really, for every trajectory there, I need to have some section of a pullback tangent bundle, right. And then once I've varied them and I apply the Floer operator, the perturbed Cauchy-Riemann operator, to it, I should end up in the fiber. So these fibers then are just going to be the same kind of product, except H2 in the fiber, because I'm thinking about an operator of order 1. And then, right, I'm going to be lazy for now and just write (epsilon, e, f) goes to (pi_epsilon of e, Pi_epsilon of f), and now I need to tell you what the projections actually are, right.
So really, and each of these is a tuple, right, this K tuple and these are K plus 1 tuples. So pi epsilon is the projection in the sort of retract that Joel defined at length. So it should be the projection to the kernel of an anti-glueing along the kernel of the plus gluing, right. So whenever I see those of the Fleur trajectories, right, really I'm thinking, okay, I vary and then I glue it together, right. That's sort of my chart map. But that has ambiguity. So what I want to do is I want to sort of reduce the ambiguity. That's why I go to the kernel of the anti-glueing. But by reducing the ambiguity, I should somehow not change what the point actually is in X that I have in mind. That's why I go along the kernel of the gluing. So I don't actually want to change the point that I have in mind. Right, and now you're going to say, okay, there's evidently gluing parameters missing and I think it's just worthwhile to say this again. So what are these? So these are anti-glueing of the, right, of really the sections I'm going to, I think probably, well, there's a lot of fuzziness here, right. So you have to ask yourself, can I actually glue these? And then I should do this with some gluing parameters. And those I like to be somehow large. So there's always going to be some gluing profile in the picture that takes small parameters to large parameters. And the choice of this gluing profile is sort of, that is one of the foundational choices in this whole subject. There's pretty much just, well, there's one for which you get smoothness. And that's unfortunately not the sort of obvious one you choose from Dillion-Mumpford space. So eventually somebody needs to prove that the invariance that we get out of this are independent of the choice of gluing profile. So, yeah, yes. But then there's a question whether there's actually a total order on gluing profiles. But it's basically like if somebody gives you R, the real line, and you don't know what it is, and then you have to choose a chart. So you take the identity, or you take X goes to X-Q. So the difficulty is only at one point, basically, which is sort of far. Hopefully, yeah. So in particular, this is where you're seeing this. Like if you look at Gromov, we have modulized spaces for integral j, the smooth structure that each body of z will give you is different than that. Possibly, yes. We don't know that yet. Yeah. Yeah. I think that's a different smooth structure. But in Gromov, it clearly doesn't matter because you can avoid all this stuff. I mean, you can, the integration can be turned into intersection problem. For the intersection problem, you have a count zero dimension stuff, and it avoids the noted stuff. And the gluing profile only appears there. So that should be independent. Right. Right. Let's keep that for next week. Right. So let's see. What happens? Right. So strictly speaking, I told you how to glue in the base, but you can do exactly the same thing in the fiber. So for the fiber, and this again goes back to sort of Vivek's question of what do I actually have a section off, right? I don't have a section that just lives over sort of the broken flirt trajectories. I want to also sort of encode my section that lives over the unbroken flirt trajectories. Right. And that takes a glued flirt trajectory and applies the flirt operator to it. So you get just one section of one, you know, something that lives over a glued curve. And so in the fiber, I kind of need to have the same domains that I have or the sort of glued domains, right. 
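Collecting the splicing data of the last two paragraphs in one place, before the discussion of the fiber continues below (a sketch: exponential weights, the \(\mathbb R\)-actions, and the quotients the speaker is suppressing are suppressed here as well): for a \(k\)-fold broken Floer trajectory \((u_0,\dots,u_k)\),
\[
E \;=\; \bigoplus_{i=0}^{k} H^3\big(\mathbb R \times S^1,\ u_i^*TM\big),
\qquad
F \;=\; \bigoplus_{i=0}^{k} H^2\big(\mathbb R \times S^1,\ \Lambda^{0,1} \otimes u_i^*TM\big),
\]
with one gluing parameter \(\varepsilon_i\) per breaking, and
\[
\pi_\varepsilon \;=\; \text{projection onto } \ker \ominus_\varepsilon \ \text{ along } \ \ker \oplus_\varepsilon
\]
(similarly \(\Pi_\varepsilon\) on \(F\)), where \(\oplus_\varepsilon\) is the pre-gluing and \(\ominus_\varepsilon\) the anti-gluing, built using a fixed gluing profile \(\varphi\) that turns small gluing parameters into long necks (the choice for which sc-smoothness is known is the exponential profile, roughly \(\varphi(\varepsilon) = e^{1/\varepsilon} - e\)).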
I glue the cylinders. So I need the same domain in the fiber that I have in the base. So that's why somehow the Deligne-Mumford parameter, maybe you could also think of the epsilons here on the splicing, right, the parameters are sort of the Deligne-Mumford parameters, and they affect the maps and the sort of (0,1)-forms in the same way. So. I can't get confused about this: your section is like d-bar or something, right, so why is that not defined on the whole space? I could define it on the whole space, except it has nothing to do with Floer theory. But the full d-bar is just... So I should write, I should maybe write this down, right. So let's see. What do I have here? So my Floer theory, right. So what is my d-bar operator really in terms of this pre-gluing and things? See, it sort of takes a big pre-glued curve to d-bar of that pre-glued curve, right. On most of it, right, it's just one d-bar. It's not broken, right. So the open, or the main, stratum of X is the unbroken one. So this is the main stratum of my operator, right. Except when I'm near a broken one, I want to pull all of this back somehow to something here that should be in the fiber and something here that should be in the base. Right. So somehow this comes from a choice of gluing parameters and all these Floer trajectories, right, that I glue together to parameterize this point; then I apply the d-bar operator and then I need to go back. And so I kind of take the... What am I doing here, right, I'm going to the fiber. So to get to the fiber, I need to take (plus, minus) inverse of exactly this thing, and I force the minus gluing to be zero. So this then sits in that fiber. Yes. Right. So that way I can... This way I'm pulling back the d-bar operator that just acts on a single cylinder. I pull it back to this sort of tuple of cylinders here by sort of first gluing, applying d-bar, and then sort of undoing the gluing. So, right. However, as you can see, right, if this is a Fredholm operator, right, I'm losing a lot of dimension here, right. So I have a massive kernel; then this is Fredholm; and then I go back and I have a massive... Well, this is never going to be surjective; there's a lot of things I don't map to. So I have infinite kernel here and infinite cokernel here. And so the idea is going to be to cancel the kernel here against the cokernel there. So, right. And I'm not sure I'm going to get there. So, but this is my real-life example, and again, right, so here Y1 now comes from the local things that are (v, e) and then the f's that are actually already in H3 and not just in H2. So that's really what I should be thinking of as that bundle Y1. So, yes, right. I feared that would come, right. So, yes, someone could say that. But I haven't said what filling is. So, ask me that again once I've said filling. Let me just quickly say what I can say now, right. So, let me go down my laundry list actually, right. I needed to tell you what an auxiliary norm is. So, what this is is a continuous function on this better quality bundle that's a complete norm in each fiber. So, classically continuous, a norm in each fiber. And in the example, well, you take the norm at a point, what is this, (v, e, f), to be the H3-norm of f. Oh boy. And now, there's probably an exponential weight on here, because if I want this to be a scale space, right, if this is F0, then this is supposed to be F1, and so that needs to be compact in F0. There was an extra, there's an extra condition on the norms.
Yeah, so there is, because otherwise you can get some degenerate behavior. So you want: if you have a sequence of vectors whose base points converge to x, and the limsup of the norms goes to zero, then the vectors converge to the zero vector. Okay. Good. Okay. Yes, there's something like that. Okay. Good. Okay. Oops. Small print. There's also, right, so I should say, right: obviously you can see all the small print in the Hofer-Wysocki-Zehnder papers. And if you're trying to find where that specific small print is, I would recommend looking at what we used to call the user's guide, Polyfolds at a Glance, or two. That we also posted on the web page. That is lacking this small print; however, it always precisely cites the place in the publications where you can see all of it. Wait, now it's called Polyfolds: A First and Second Look. A first and second look. Right. Yes. Because it became too long. Right. So, I need to tell you what an sc-plus section is. So, a section, well, first of all, a section is something that gives you the identity when projected down. And it's sc-plus if it actually takes values in these better quality fibers and is sc-infinity as a section of that better quality bundle, which is exactly, you know, that extra order. So, what you should think of here is a zeroth order operator. Well, compact operator. Maybe compactly supported would be really nice. The note here is that the classical perturbations, if I change J, this is something like (J minus J prime) dt, that is evidently first order. So, this is not sc-plus. So, wiggling J is not what happens in this regularization theorem. However, a homotopy of J's fits into this cobordism argument. And so, that makes sort of changes of J built into the theory after all. Can you just say, sorry, I'm not sure I understood, why is that not sc-plus, what property does it fail? Well, it's first order. So, in particular, this would need to take H3 to H3. Okay. And it, yes. Right. Yes. Good. Okay. So, now I can, right. So, now I think the only thing left to say is what is Fredholm. Am I missing anything else? Right. But we know what compact means. And I explained what controlling compactness means. So, first things first. So, now this is the place where I think a grad student holed up in a basement would have a little bit of trouble making up the whole theory. So, Fredholm is not quite obvious. So, one thing I need is I need it to be regularizing, which means that if my section takes values in the quality-k fiber, then I actually want the base point to also be of quality k, which is like saying if d-bar u is in H^k, then u is in H^{k+1}. Also known as elliptic regularity. So, that's something that's sort of true for the operators that we usually look at. And then the key point is this filling that we've alluded to. And this, at last count, you needed a filling somehow strangely not just at solutions, but at everything that's sort of smooth, that's of quality infinity. So, near each point I need a local trivialization and a filling. And I'm going to say what a filling is. So that the filled section is Fredholm. So, this Fredholm, however, is going to be easier to define than that Fredholm, because this Fredholm happens on splicings and retracts, and this Fredholm is going to be a map, or a germ of maps, right, between Banach spaces, or sc-Banach spaces, or sc-Hilbert spaces in this case. So, I'm going to leave that for the moment undefined and tell you what the filled section is.
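Two of the items on the laundry list, written out as formulas before the filled section is defined below (a sketch in the notation above). First, the section near a broken trajectory: pre-glue, apply the Floer operator (written here as \(\bar\partial_{J,H}\)), then undo the gluing and land in the spliced fiber by forcing the anti-glued component to be zero,
\[
s\big(\varepsilon;\,u_0,\dots,u_k\big) \;=\;
\Big(\varepsilon;\ u_0,\dots,u_k;\ \ \big(\oplus_\varepsilon,\ominus_\varepsilon\big)^{-1}\big(\,\bar\partial_{J,H}\big(\oplus_\varepsilon(u_0,\dots,u_k)\big),\ 0\,\big)\Big).
\]
Second, the perturbation data: a section \(p\) is sc\(^+\) if \(p(x)\) lies in the better-quality fiber \(Y^1_x\) for every \(x\) and \(p\) is sc-smooth as a section of \(Y^1\); and an auxiliary norm is a continuous \(N \colon Y^1 \to [0,\infty)\) that restricts to a complete norm on each fiber and satisfies the condition just mentioned (base points converging to \(x\) together with norms going to zero forces convergence to \(0_x\)). In the Floer example one can take \(N(v,e,f) = \|f\|_{H^3,\delta}\) with an exponential weight \(\delta\).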
So, so in a local chart, right, I'm going to this bundle splicing, this sits in and now I'm going to forget that there are open sets. I don't want to write another one in here. So, that sits in a splicing that sits in here and this bundle splicing sits in the bundle splicing that sits in here. And right, so what I would really like to do is I would like to fill up my section here so that I have any chance of this being Fredholm. So, let's see, what could I do? Right, so first of all my problem is that this E is not, this VE is not necessarily in O but I can map it down with my retraction and then apply S which sits in here. So, that's not so bad. Let me, I should say what this S actually does. So, right now you're going to find the fill in. Yes. So, I want to write, so the section takes VE obviously to VE and then something happens in the fiber. So, I'm just going to write F for whatever F does in the fiber and so then I can apply F. F is only defined on O. So, I need to throw in a projection here. So, I can do that. But then I'm only ever going to hit the splicing within this fiber. I'm never going to hit the complement so I want to add something. And I'm going to let whatever happened there just depend on the complement of E. So, I can write each E as pi VE plus one minus pi VE. That's the splitting here and I'm going to use the same splitting in F. This is the splicing and the complementary splicing I'm going to have right this guy here. And what do I want right? So, F prime of V comma dot, right, maps the kernel of little pi V to the kernel of capital pi V. And I'd like this in an appropriate sense to be an SC infinity family of isomorphisms. The idea being that I do need to soup up kernel and co-kernel to make S twiddle Fredholm but I don't actually want to change the zero set. So, that's the main remark S twiddle inverse of zero is S inverse of zero. And also the kernel and the image of the linearizations of D S twiddle are isomorphic to the, that's, that is wrong. Kernel and co-kernel, image perp are isomorphic to the co-kernel and co-kernel of D S. So, that allows me to dream a Fredholm theory. So, that is the theory. So, that's why we're doing linear? Yes, isomorphism. Yeah. Okay. Yes. But, I mean, you put out the term of more general stuff except the children change the solutions and the Fredholm equation change. Yeah, this is, yeah. I mean, in my splice, I mean, this is a simplif, I mean, I think, so, well, so here's the example, right? So, in the example. Actually, listen, one example is as good as all. Yeah. You only have to know one example. Right. So, the filler is the linearized operator. Yeah, you may ask, right? It's on the anti-glowing. So, overall, the filled section, so, if you look up there, I wrote down what actually the section is in local coordinates. So, the section went from epsilon E to what was that? I just wrote down the fiber plus minus inverse of plus of E and zero. Right. So, that was my, that was the section. And in order to get the filled section, all that I'm doing is I'm writing down the minus gluing here. Oh, right. Complete breakdown. Of course, there was a D bar operator here. And so, the minus gluing of E always lives in, so, of H3 of R times S1. It comes from data that is very close to the breaking. So, it just lives over a fixed Hamiltonian orbit. Well, it's really a product, right, because there are various breakings. So, point being, so, there's no R dependence here. 
And so, when I take the linearized d-bar operator on this, there's a general theory that says that R-invariant operators here, you know, if they are useful, if they're Fredholm at all, then they're actually isomorphisms. So, this is of the form d/ds plus A. And then you read my favorite paper, by Robbin and Salamon, about, I think, spectral flow. All right. And I have successfully sidestepped the question of what a Fredholm map between scale-Hilbert spaces is. Which is evidently in my notes. So, I'm just going to hand wave for like one minute, and then you can ask me about details if you want them. So, the problem is really, when you're proving the implicit function theorem, that you need to do Newton iteration. And the Newton iteration sort of comes from a contraction. And when you write down the contraction that you get from a scale-smooth family, from a scale-smooth, well, what do you actually mean by a Fredholm section, right? That's nonlinear. Usually we say, well, the linearization had better be Fredholm, right? So, that's something that makes sense. But the other thing we need is that Newton iteration works. And that requires a certain continuity of the linearizations. And unfortunately, there's this scale shift. And so, you get a contraction property, but the contraction goes sort of down in levels by one. So, if you start iterating, you go further and further down in levels. That does not bode well for any convergence. So, you kind of need to sort of force yourself back up. So, we need to add something called a contraction germ property that sort of implements this contraction. So, really what this is is a contraction, a level-preserving contraction, in all but finitely many dimensions. And so, then you ask yourself, well, how do you ever prove this? Well, and that sort of comes if your Fredholm section is actually, right, it's scale-smooth, that means you always have that loss of levels, but if you have classical C1 in all but finitely many dimensions, then you can prove that that implies this contraction property, plus some small print. And so, that can be found in the Gromov-Witten paper, and somehow I independently figured, well, that there should be a better definition of Fredholmness. And I'm currently revising that paper, in which I also attempt to prove the Fredholm property for Floer trajectories. And I would end by saying that polyfolds are awesome because you actually get referee reports. So, if the referees are in the audience, thank you. You know, they did not complain of me failing to spread peace and harmony. They actually read the paper and, you know, found mistakes and pointed out things. And so, I'm revising, and that is, yes, a beautiful thing. So, I wish all referee reports were like this, and maybe I just need to keep doing polyfolds. On the polyfold mini course page, there's not an obvious lecture note going through this lecture. So, we're supposed to go to Sections 6...? Yeah, well, I'm going to scan these. Okay, so you will scan them and put them on. Yes, and then put them on. Yeah. Okay. Now, I mean, however, I would say, I mean, all of this is sort of essentially just skimmed from the user's guide. Okay, we go to Sections 6.6. Yes, yeah, yeah, if you want to. There's more than everything in there. Okay. Yeah. So, if I'm given a concrete case and I want the auxiliary norm, can I just use the norm you already have on the first level of the scale structure? I thought so. So, can you repeat it?
Well, I, right, he's asking me and I'm asking you because I don't know what the small print is. So, I thought you just use cutoff functions. You take local coordinates and take sort of the one, the normal, the one that, you take a partition of unity and put this together. That works, right. It satisfies whatever small print you have. Awesome. Good. The question is, is the definition of the almost totally defined performance? Ah, as the, as the index of the linearized FEDOM operator. Yeah. And then, in some way, it's, it's built in that that's actually constant. Yes. And then you could ask how do you define orientations, right? So, you need to construct the determinant line bundle and, yeah. So, we're convening at 3.35. Let's thank her again. Thank you. Thank you.
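To recap the filling construction from this lecture in formulas (a sketch; \(F\) denotes the principal part of the section \(s\) in the local trivialization, and the precise small print is in the references cited above): the filled section is
\[
\tilde s(v,e) \;=\; F\big(v,\ \pi_v e\big) \;+\; F'\big(v,\ (\mathbf 1 - \pi_v)\,e\big),
\]
where \(F'(v,\cdot) \colon \ker \pi_v \to \ker \Pi_v\) is an sc-smooth family of linear isomorphisms, so that \(\tilde s^{-1}(0) = s^{-1}(0)\) and, at zeros, the kernels and cokernels of the linearizations of \(\tilde s\) and \(s\) are isomorphic. In the Floer example the filler is the linearized operator on the anti-glued data; since those data live near the breaking orbit and have no \(s\)-dependence, the operator has the form
\[
D\xi \;=\; \partial_s \xi + A\,\xi
\]
with \(A\) independent of \(s\) and nondegenerate asymptotics, and such \(\mathbb R\)-invariant operators, when Fredholm, are isomorphisms (cf. the Robbin-Salamon spectral flow paper mentioned above).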
This lecture will state a rigorous version of this theorem, and explain the notion of an (sc-)Fredholm section. [Related literature: Sections 6.2 and 6.3 of Polyfolds: A First and Second Look.]
10.5446/16306 (DOI)
Awesome. Thank you. And thanks for putting this on. I believe the world will be a better place. I should also give you a renormalization instruction for any criticism that I might utter. If I don't tell you that a theorem is complete bullshit, I have not read it and I think it's not interesting. If I do tell you it's complete bullshit, it just means that you should actually understand the proof for yourself and not trust it. And by that sort of criterion, really, I think all proofs in mathematics should be complete bullshit, because, well, you should understand them. So I'm actually not going to talk about analysis pretty much at all. I'm going to talk to you about why we need to worry about analysis. So I'm also going to try and sort of ground us in a big picture of what we are actually doing here. So I'm very happy that somehow the term that I think I coined, regularization, has taken root. But I want to tell you what that actually means from my point of view. So when you regularize a moduli space, well, when I say moduli space, I'm usually going to mean a pseudoholomorphic curve moduli space. So if you're given a compact moduli space, which will always be called M-bar, usually of J-curves, I would like to assign to it, in some sort of unique way, well, one of two things. So in the Gromov-Witten world, I would like to actually give it a fundamental class. But then, of course, the question is, what do I mean by fundamental class in the homology of something that might just be a compact metric space? So if this was a manifold, it would have a fundamental class. If it is not a manifold, we don't know what its fundamental class is. But the nice thing with trying to actually construct that right in here is that from the moduli space we have evaluation maps, in Gromov-Witten. And so if we had a class in there, we could push it forward directly with evaluation maps. That's why I think that's the cleanest way of saying what we want: to make a homology class straight in there. So this is really the Gromov-Witten case. And pretty much in all other theories, Floer, SFT, anything A-infinity related, you're going to expect your moduli space to have boundary. And so really what you want to do is you want to somehow count things and then get a relation from the boundary. And so what we'd like there is some kind of cobordism class of closed, no, not closed, but compact things. So here you could get this from maybe a compact manifold. Well, at least that would have a homology, and how I get it into M-bar is not so clear. But here I'm not going to say closed, because they're going to have boundary: compact manifolds. Right, in general, they might be weighted branched. And they're going to come from some choices, nu, that Chris Wendl had up here, but then unfortunately somebody diligently removed it. So your task for this next hour is also to figure out what's completely different in the philosophy that I project here from what is on Chris's board about abstract perturbations. And I'm happy to discuss that in office hours. So really these manifolds are going to come from some kind of choices of perturbations, nu, that we'll talk a lot more about. The main thing is it's going to have boundary in general, and usually also corners. And the algebraic identities usually come from identifying the boundary with some kind of fiber product of products of the moduli space itself. So if this seems totally alien, maybe think about Floer theory.
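In symbols, the two desired outputs sketched here (schematic; \(\nu\) stands for the choice of perturbation, and the precise meaning of each ingredient is the subject of the rest of the lecture): either a class
\[
[\overline{\mathcal M}]^{\mathrm{vir}} \;\in\; H_*\big(\overline{\mathcal M}\big)
\]
in the homology of the compact moduli space itself, to be pushed forward by the evaluation maps (the Gromov-Witten case), or a cobordism class of compact, possibly weighted branched, manifolds with boundary and corners \(\overline{\mathcal M}{}^\nu\) whose boundary is identified with fiber products,
\[
\partial\,\overline{\mathcal M}{}^\nu \;\cong\; \bigcup \ \overline{\mathcal M}{}^\nu \,{}_{\mathrm{ev}}\!\times_{\mathrm{ev}}\, \overline{\mathcal M}{}^\nu
\]
(the Floer/SFT-type case).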
And what I'm thinking about here is I'm packaging all Floer trajectory spaces into this M-bar-nu. So no matter between which critical points, and no matter of what index. And then this just says that somehow the boundary is given by broken ones. So, but the whole thing is not going to be compact. Yes. Well, indeed. Yes. So that's why this is the philosophical overview. But I could do this up to a certain energy, and then this would be true. So good. So this is my goal. This is what I mean by regularizing a moduli space: assigning to it something that I can count, or that I could integrate over, and that has some kind of nice boundary structure that gives me the algebraic identities I would like. So, well, I don't need to write this. What are my tools here? Right? So in algebraic geometry, when I describe J-curves, my moduli spaces are really moduli spaces of subvarieties, not manifolds, but sort of subsets of an ambient algebraic variety that are cut out by some algebraic equation. And so then I can do algebra to describe my moduli space. And the thing that Gromov realized was that, well, if my J isn't integrable, I can't quite do algebra, but maybe I can do exactly the same things that algebraists do, with PDE methods. So that's why there's a lot of analysis here, because all the magic that you can do with algebra we now have to do with analysis. However, we always have somehow this intuition that everything that's true in algebraic geometry should be true in symplectic geometry. Because the general Gromov-Witten invariants, the algebraic connection with Gromov-Witten invariants, came later, based on the paper by Gromov. Right. OK. Yeah. So some things flow back. Yes. Good. Katrin, can you stay away from the last four inches? Yes. This. OK. Here. Awesome. Good. Yes, I can. So let's see. What is my intuition here? So, well, I should say something. So first of all, I'm going to do an analytic description of M-bar, namely in terms of solutions of a PDE, but then there are two other things. And so I'm just going to roughly think about having a fixed Riemann surface and a fixed almost complex structure on a symplectic manifold. But I have to do two more things. I have to mod out reparametrizations, because, as in algebraic geometry, I'd like to actually count the curves, not the maps. So an image in M-bar I could parameterize in different ways by maps, and so I want to quotient out by that ambiguity. And then we're always going to want something compact. So I'm adding Gromov compactifications. And so maybe just as a side note to the examples that Dusa gave: right, there were beautiful examples of non-compactness coming from marked points running together. However, that's not the worst of our problems. So usually, really, the hard problems with Gromov compactification come from energy concentrating. And that's where you get the sphere bubbles, which then have an unstable domain. And this is where really all analytic hell breaks loose. So that's really a PDE blow-up situation. So you really do need this Gromov compactification even if we have no marked points. So if that's how we described M-bar, then we can do, well, two things. So first of all, as you've already seen, I think, in both talks, there's a more or less, well, local or global Fredholm description, together with gluing maps, which we'll sort of start talking more and more about.
And so very roughly what that does is, and this is a lie, but the intuition is, that you're writing your moduli space as the zero set of some kind of section. Actually, mainly the section here is a lie. But let's pretend, right? So at least to some extent, that seemed to be the description. And then the reason you believe that these things should have a fundamental class is, I believe, really this finite dimensional regularization theorem, which says: well, let's pretend for the moment that my nice compact moduli space of J-curves is actually cut out by a section of a finite rank vector bundle over a finite dimensional base. And let's say that's even smooth. I guess let me make it an oriented vector bundle. Then, well, first of all, even without the section, these things have an Euler class, right? So that's just the Euler class of the bundle, but you can pull that back and let it sit in the base. And I guess the theorem here, well, you can construct that with algebraic topology, or you can construct it by taking a section and perturbing it. So there are various methods of constructing Euler classes, even. And I think the main one that we talk about in symplectic geometry is to do it by perturbations. So I would like to be able to just say that my Euler class is actually the fundamental class of the zero set of some perturbed section. So that would then sit in that homology. I believe I should actually add, oh right, so usually you need the base to be compact to get an Euler class, at least. But I believe you can do this as long as the zero set is compact. You mean as a homology class? Right, yeah. Right, otherwise it's just not unique as a homology class. There's no chance you'll ever have that. And also, right, so I'd like B and E to both be finite dimensional. But the finite dimensionality is actually not the problem. I think you could also take them infinite dimensional, and then you would just need the section to be Fredholm. And then the same theorem is going to hold with exactly, essentially, the same proof. Right. But what do I mean by this really? So how do I construct, what do I mean by, having perturbations that give me this Euler class? And this is where, right, this is where the word generic comes in. That's what I was going to strike out in big and red from Chris's board. Because really, people always say take a generic perturbation or a generic J. So if you give me a J, how do I know whether it's generic? Generic is not an adjective that goes with a single element of a set. Right. Somehow we say generic when there is a comeagre set and then the elements of that set are the generic elements, somehow. And so, yeah. So I think that's one danger, because people often say generic in situations where it's not even clear what the Banach space should be from which you're taking something. And also, I mean, I said somehow second category, that was wrong, but I actually don't quite understand why we're so focused on having sort of a comeagre set, or a large set, or a full measure set. So what I'm going to say is: really all you need is a set that's not empty. And in fact, sets of second category are non-empty. So in that sense, that's a perfectly fine thing to prove; everything goes through if you just prove that something is second category. I just don't know how you ever prove that something is second category without proving that it's comeagre. But anyways. So there is a non-empty set in the smooth sections.
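A sketch of the finite dimensional regularization statement being invoked, stated loosely as in the lecture: let \(E \to B\) be an oriented rank-\(n\) vector bundle over a smooth \(m\)-dimensional base and \(s \colon B \to E\) a smooth section with \(Z = s^{-1}(0)\) compact. Then for suitable perturbations \(p\) (small, supported near \(Z\), with \(s + p \pitchfork 0\) and \((s+p)^{-1}(0)\) still compact), the class
\[
\big[(s+p)^{-1}(0)\big] \;\in\; H_{m-n}(B)
\]
is independent of \(p\); it is the Euler class of \((E,s)\) seen in the base. Uniqueness comes from the cobordism \(\{(t,x) : s(x) + (1-t)\,p_0(x) + t\,p_1(x) = 0\}\), after a further small perturbation to achieve transversality.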
So that for any perturbation in this set, so these are the good perturbations. The perturbed section is not just transverse, but its zero set is still compact. Yes, we'll do. Thank you. So that tells me that it gets a smooth structure. And if that's compacted as a fundamental class, and then I can embed that into B, and so I get a homology or I get a cycle there. But now what if I take two different perturbations? I would like them to actually have the same fundamental class. And usually we prove that by proving that the two zero sets are co-bordant. So people often talk about the transversality problem in holomorphic curves. And transversality is really just this bit. Whereas somehow regularization for me, when I say regularization, I mean the whole package. I mean transversality, preserving compactness, and then having uniqueness up to co-bordism. So if all you care about is that this set p is non-empty, how do you know there are two sort of disjoints at it? Right. Yes. So it's sort of canonical. Right. I mean, I want to, so usually actually it's not, so usually the statement is not there exists the p, right? Usually the statement is p, which I define as something. So it's a specific set. And I don't need to know that that's commiga, I just need to know that that set is not empty. Right. So for example, in Chris's theorem, I think the statement is going to be that it's a set of j's in this curly j. And the j's are defined exactly by some transfer, by regularity, right? By subjectivity of the linearized operators. Can I make one comment about this? So it's not that I disagree, but when I define what conditions those j's have to satisfy, it's almost always going to be that it satisfies a countable list of conditions. Right. And so for each of those individual conditions in the list, it is pretty important to know that there's a commiga set, because I need to know that the intersection of those sets is not empty. At the end, I only care that it's not empty, but how else would I possibly know that? Yes. Is it true? Sometimes you have some kind of special, like if you want to prove like most people's homology kind of things, you have something that you know, and you want to deform that thing that you know to, and then you want to have some kind of complex, almost complex structure that's near a given possibly degenerate. Right. Sure. Sometimes you'd like to know that kind of density. Yeah. This is just for existence of a possible definition. Yeah. Yeah. But you could see you can write, I mean you can also, again, you don't need this to be dense. You just need to have it be dense near a given point. Right. So you could say, take a set and say, I fixed the support and I fixed some sort of size of the perturbation. And no matter how small the epsilon and delta are, the set is always non-empty. Right. So since which point, you don't know which point you're going to meet in the beginning of the course. Right. Yes. But I mean, so it's right, so the statement that given any J there isn't arbitrarily close by J that is regular is also different from proving that something is commiga. Right. So. I've been, well you're explaining philosophy. Yes. Why do you take a perturbative definition of the order class as opposed to like pull back from the grass, might end or. Right, because I'm going to want to generalize this in such a way that it applies to holomorphic groups. I mean that definition would not be anything, right? It could be a topological space for that definition. 
Let's discuss that once we're sort of deeper down in the actual application. And actually, right, the challenges are going to come momentarily, and then we can talk about whether different approaches will solve that. What was I going to say, P? Right. So, in this case here already, I think the Sard-Smale theorem would tell you that, you know, if I defined this set by the ones that are transverse to zero, I would indeed get a comeagre subset in there. However, it's not clear to me that for all the perturbations for which I have transversality, I'm also going to preserve compactness. Right. So, in fact, I think I could cook up a counterexample. And then it's also not clear that you're always going to have uniqueness up to cobordism. So often you have to sort of fiddle some extra things in here in order to get these two other items. And that's the magic of perturbing J, somehow. Right. No matter how you perturb J, you're always going to have Gromov compactness. And somehow between any two J's, there's always sort of a nice family for which you can run another Gromov moduli space problem and get this cobordism. So you already see that somehow perturbations of J are nicer even than perturbing finite rank bundles. So, right. So the history of the subject really is that people understood: okay, you have these local Fredholm descriptions. We have this perturbative description. In fact, we have a perturbative description of the Euler class of an orbibundle. That's what a lot of the papers took a long time to explain. And then how hard can it be to patch this thing together? And that's literally it: "how hard can it be", that's literally as much as there was written in the literature. If there's any justice in the world, then this directly generalizes in such a way that it applies to holomorphic curves. And I have to say, I believed that; you know, I really didn't think it would be that hard. However, there are a couple of challenges here. So one is sort of more a comment on the Kuranishi approach. So if you really sort of forget about the sort of big d-bar operator, and you just remember that you have local Fredholm descriptions, then you can write your moduli space as kind of a union of zero sets, well, let me drop any kind of isotropy here for a second, modulo some kind of transition maps. And so there's a whole lot of local Fredholm operators, right, whose zero sets will be some part of my moduli space. And once they're Fredholm sections, I can even take a finite dimensional reduction. So I could either make these Fredholm sections, or, if you don't know what a finite dimensional reduction of a Fredholm section is, you should ask someone this afternoon. It means you can replace, if this is Fredholm, you can replace the whole thing, for the purpose of the zero set, by a finite rank bundle over a finite dimensional manifold. And right, gluing maps also fall into sort of the finite rank sections picture. Right, so you have local nice descriptions and you have some kind of overlaps, right? And you would think that you could make the Euler class locally and then somehow patch the Euler classes together. Right, how hard can it possibly be? Or you can think a little bit more globally, and that's what both Chris and Dusa did: you can actually look at the zero set of the d-bar operator on some space of maps. Right?
So here the base was maybe maps from sigma to M, there's some bundle over it, and d-bar is a section that cuts out most of my moduli space, except, right, then I need to quotient by the isomorphisms of my base, of my domain. And then this is usually still not compact, so I should somehow take a compactification in some Gromov topology. Right, so again I'm seeing the zero set of a Fredholm section here, and with this sort of Sard-Smale theorem, you know, I can perturb, I can make this a manifold, but as someone asked in Chris's talk, that does not mean that this isomorphism group still acts on it. Right, if I put in some kind of random differential equation perturbation here, it's not clear that, you know, that's going to be invariant under reparametrization. And it's also not clear that after I've perturbed this, I still have a nice compactness result in the Gromov topology. Right, so you add some first order operator to this and who knows what might happen. Right, so the cool thing with sort of perturbing J is that no matter how you perturb J, it's always going to be equivariant. So this isomorphism group is always going to act on the zero set, and for every J you have the Gromov compactification. So it really only works when you perturb J, if you can get the transversality by perturbing J. So let's see, right, so I want to point out, it's important to say always that this is actually an equivariant Fredholm section. So that's this, right. So really what we would, you know, in our dreams, we can generalize this to get equivariant transversality while preserving compactness. And that's just simply wrong. There are sort of two dimensional counterexamples to that. So here, in our dreams, we can somehow get, you know, transverse perturbations in each of the little charts that fit together under the transition maps. But the transition maps are merely somehow continuous maps that factor through this totally irregular space. So it's not at all clear how you would iteratively construct that. So we simply cannot... you know, once you stare at this situation long enough, you realize that it's really a little bit of a fundamentalist religion to believe that that theorem should have a straightforward generalization to J curves. So a general M bar we can't expect to be regularized along the finite dimensional regularization approach. Can I ask why the equivariance was so important? Why what? The equivariance. Right, so my compact moduli space, what I expect to have a fundamental class, is really this, right? It's a zero set, modulo a finite dimensional group, and then compactified, right? So if I now perturb this by something to make it transverse, fine. Then I have a manifold here, but the group doesn't even act on it anymore, right? So this doesn't go through, and so... Fewer zeros then? I mean, it's like Morse theory: you perturb, you get the homology from the critical points. No, this has nothing to do with Morse theory. Yeah, yeah, it doesn't. So see, I would want to write... So I need to write down a prescription for how to get from this to a fundamental class. And I would like to write down a prescription that gives me the expected thing if the unperturbed one is by itself already a manifold. Right, so if the unperturbed one is a manifold, then d-bar inverse of zero, mod this group and then compactified, is a manifold, and I'm taking its fundamental class.
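In symbols, and only as a schematic transcription of the picture just described rather than notation from the lecture, the object that is supposed to carry a fundamental class is
$$\overline{\mathcal M}\;=\;\overline{\,\bar\partial_J^{-1}(0)\,/\operatorname{Aut}(\Sigma,j)\,}^{\;\mathrm{Gromov}},\qquad \bar\partial_J:\ \mathcal B=\operatorname{Map}(\Sigma,M)\ \longrightarrow\ \mathcal E,$$
and the point being made is that a perturbation of the section has to be $\operatorname{Aut}$-equivariant for the quotient to survive, and has to be compatible with Gromov compactness for the compactification to survive; an abstract Sard-Smale perturbation is not guaranteed to do either.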
So now if this isn't transverse, I'm supposed to, okay, add a perturbation, but if I now simply forget this quotient, I'm going up by like six dimensions, so I'm getting... Even if this somehow magically was compact, or I had a compactification, it would suddenly live in a different dimension. So it's clearly not the right answer. So the point is that there are some cases in which it's transverse. Right, yes. Right, right, and so whatever description, whatever definition of this regularization I write down, ought to generalize the transverse case. That should be one of my axioms for the fundamental class. So if M bar is already a manifold, it ought to actually give the fundamental class. So this is important, right? So if any of you don't understand why we need equivariance, squeeze people. Sorry? But the group acts on E and B, right? Yes, yeah. Right, so this is really... Right. There was a question here, yeah. Are you saying that by instead changing the J, the capital J, that the term that you get, like, in the difference, that makes it equivariant? Yes, right, that's the beauty. No matter how I change the capital J, right, the reparametrization group is always going to act on it. Right, yes. So if I understand correctly, the problem is, if you perturb in some arbitrary way, when you act by the reparametrization, you get some different perturbation. Right, well, I'm solving a different PDE, right? So in the Ken and Festello's treatment of gauge theory, you end up with some similar problem where you try and truncate at some energy, but then the gauge group's action doesn't preserve that. And so you construct some sort of sensor which is preserved by some group action up to homotopy. Is that hopeless here, or is that...? Yes. Yeah, let me leave it at that and we can talk more. Yeah. Maybe... right, so the main thing is, in gauge theory the gauge group actually acts smoothly, and so you can do some of the things I'm about to do; and then the reparametrization group does not act in any pleasant way. Right, so we're in deep doo doo. What are we going to do about that? Wow, I am in doo doo here. Okay, all right, let's see. Well, I mean, mainly this is to energize you, why we're spending these two weeks on this. So I guess we've already talked quite a bit about the geometric approach. So the intuitive way, it doesn't go through, we need to do something. And so there was the geometric approach that you've seen this morning, which has pros: it works. It does all this magic, so in particular it gives equivariant transversality and preserves compactness. Moreover, there's something really nice: somehow it has automatic coherence, which is, no matter what J I take, so if my boundary, or perceived boundary, of the moduli space is this fiber product, right, and that comes from some kind of bubble tree or breaking compactification, then no matter how I perturb J it's always going to be like that. So. Do you... just a terminological question: geometric approach means only perturb J? It means, well, it really for me means find some geometric way of perturbing your equations that does these two things. And really you're going to need geometry in order to preserve equivariance and compactness. So there are various aspects of this. And maybe I should say, right, I mean there's work by Cieliebak-Mohnke that makes sort of more general perturbations of J that to me falls squarely under the geometric approach. You call it Hamiltonian perturbations. Right, Hamiltonian perturbations exactly.
So the con of this is it just doesn't always work, right? So it requires some kind of geometric control of curves, namely some kind of injectivity. And right, another side note here was that, right, people were first thinking about closed curves, and there, loss of injectivity means you're actually dealing with a multiply covered curve, and then you get worried about its self-symmetries (Chris called it the automorphism group, I'm going to call it isotropy, I guess), which means you get into this orbifold world, and this is why people got worried about how do you actually make the Euler class of an orbibundle. However, we're not going to talk about isotropy this week in Joel's and my course at all, because that's really not the problem, right? So lack of injectivity is the problem, and if you still think that "not somewhere injective" always means multiply covered, ask someone about the lantern example. So for disks you have holomorphic curves that are nowhere injective but also not multiply covered, and for those transversality is really pretty much impossible to achieve. So it's not the self-symmetries that are the problem. It's really that you need some kind of magic in order to get this going. So right, and maybe, right... so really the paper to read for this, I would say, is Floer-Hofer-Salamon. And of course the... yes. Yeah, right. Yeah. And of course McDuff-Salamon, the book, the Bible I guess, the big book. But also, right, and I think, right... so Cieliebak-Mohnke. Because also, I'm only going to write things up here where I believe that if you have questions about it, I can answer them. So, and I'm going to write... that means I need to not write some names on the board right now. So I could not answer any questions about Fukaya's work or Tian's at all. I believe I can answer most questions about Siebert's version. And this is also the most readable of all these accounts. And because it was the most readable, it also was the only one in which mistakes were actually found. And that's why this is the one paper that didn't get published. However, I believe that's actually the one that's closest to being fixable. Or at least I understand more of it than... well, anyways. So there are essentially two versions of this. Fukaya et al. is somehow a Kuranishi approach. And these two do something that I would call... I think I've called it obstruction bundle before, but I really think one should call it a stabilization approach... which both somehow look easy. But the con is they either aren't easy, or they are easy but they don't apply to J curves. So there's a whole bunch of papers about abstract ways of: if you have a section of this kind of structure, then it has an Euler class. And I think those are all correct, except I don't think any moduli space of J curves ever gets cut out by a section like that. So that's where this piece of criticism applies. However, this stabilization here really, I think, is the right idea. And this is very close to sort of a more conceptually good definition of an Euler class, rather than by perturbing. So, the polyfold approach. I believe Siebert should have discovered polyfolds 20 years ago if he'd done things carefully at the point where the mistakes happened. However, the polyfold approach as of now talks a lot about perturbing. I believe one could make Siebert's approach rigorous by sort of using polyfolds and stabilizing, and then there are fewer perturbations in the picture. So what happens here?
We still are going to use this right idea, except we're actually going to do it, and dot every i. So the idea is to really get ourselves rigorously into the setting where we have just one section of one bundle. So that was the idea here. Really I would have liked to describe my moduli space by just one section, and then I would have liked to apply my regularization theorem. And the question is: what kind of bundle? And what do I then mean by Fredholm? And that's where language is going to come in. So these are things we're going to need to define. The key is that these things have a global smooth structure, whereas the stabilization approach of Siebert only had local smooth structures. And what else? Right. Just like before, I should assume that the zero set is compact. So if you have a topological space, you could give it local smooth structures by just putting charts, like local homeomorphisms to R^n. And if you don't assume that the transition maps are smooth, then you have local smooth structures, but they don't match. And that's exactly what happens in all these papers. They always say there are smooth structures locally in which the operator is smooth and it's Fredholm; they just don't match. Whereas probably there are a good number of compact metric spaces for which you can't even find local homeomorphisms to R^n. So once we have this set up as just one section of one bundle, there is the rough form of the theorem, that hopefully we will not prove, but... Now let me state this in a similar way as the regularization theorem up there. So pretty much we're just going to need to define things here in such a way that, A, they apply to holomorphic curves, and, B, I have a regularization theorem. That's the goal here. So this is the regularization theorem. So I'm going to write some set of perturbations, and roughly speaking this will be compact perturbations supported near the zero set, with some measure of size. And for that non-empty set of perturbations, again I have two properties. For all p I have some notion of transversality, and I still have compactness. And then for all pairs I have a cobordism. What should I... oops, right, so now I need to cheat. So I am going to only talk about the M-polyfold version of things. And when this is the M-polyfold version, then these two here imply that the zero set is in fact a compact finite dimensional manifold. It might have boundary and corners, but then now I could say what it means for two zero sets to be cobordant. So pretty much exactly the same theorem, except now generalized to some notion of bundle which hopefully applies to J curves. And so the obvious pro is that this seems to be a very clean way of actually getting regularized moduli spaces. And the obvious con is that there are words here that you are going to need to know. So there is some language overhead, but that is why you are here. So, yeah. Just so we have an example or two to keep in our heads, can you give like a sort of minimal example of something where you need polyfolds? Not too much of a question. Whoa. Yes, let's see... right, so if you have spherical Gromov-Witten invariants and there are negative Chern class spheres, then, do: explain why multiple covers, no matter what J you have, will always give you things that are cut out non-transversely. So also if you kind of don't want to worry about... the moment you don't know whether your curves are somewhere injective, really. Right, yes, I call that somewhere injective, right.
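Schematically, and suppressing the precise hypotheses (which are exactly the language overhead to be set up this week), the rough form of the theorem reads: if $\sigma$ is a Fredholm section of an M-polyfold bundle $\mathcal E\to\mathcal B$ whose zero set $\sigma^{-1}(0)$ is compact, then there is a non-empty set $\mathcal P$ of perturbations, supported near the zero set and controlled in size, such that
$$\text{(i)}\ \ \forall\, p\in\mathcal P:\ \ \sigma+p\ \text{is transverse and}\ (\sigma+p)^{-1}(0)\ \text{is compact},$$
hence a compact finite dimensional manifold, possibly with boundary and corners, and
$$\text{(ii)}\ \ \forall\, p_0,p_1\in\mathcal P:\ \ (\sigma+p_0)^{-1}(0)\ \text{and}\ (\sigma+p_1)^{-1}(0)\ \text{are cobordant}.$$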
So okay, yeah, right... so the moment there is bubbling, as soon as there is bubbling, you can't actually use domain-dependent almost complex structures anymore, because the bubble is not part of your original domain, so on the bubble somehow automatically J is not domain dependent. And so if you can't exclude, say, sphere bubbling, you're forced to deal with, well, the question of somewhere injectivity. You can use Cieliebak-Mohnke, but that's more sophisticated. Maybe, right, but then, but then you can only do that if you don't have boundaries, and it's not clear that you can... I mean, I don't think they can do SFT, so. I can't do it in general, right. Goodness, I am totally... all right, core points, hold on, yes. Oh yes, this is Hofer-Wysocki, it's in there, yes. And yeah, we'll have complete references by the end of this week. So the other con: that is, the coherence, so that the boundary of one perturbed thing is still some kind of fiber product of these moduli spaces, is no longer automatic. So that is going to require some hands-on construction. So the coherent identification is not automatic? Right, yes, right, so pretty much, right, so this kind of says that really the perturbation on the boundary of my polyfold should be sort of the product of other perturbations. But the thing is, when you're just allowed to somehow construct a perturbation just like a usual function, it doesn't mean that on some stratum it's given by the product of some lower strata. Somehow that's something you need to just do hands-on, and that means you kind of need to define these perturbations sort of iteratively, you know, complexity by complexity upwards, and sometimes that's possible and sometimes it's actually simply not possible, and then you need to sweat a little more. So interesting things happen. So that's the master approach, right, and so evidently right now I need to explain to you how this fits in with what we think is actually the moduli space. Right, we know how to describe the moduli space as a zero set, mod something, and then compactified; we don't know how to describe it as a zero set of something, right. So there's really a total paradigm shift happening here, in that, right, from describing the moduli space as this, to just one zero set. Right, what happens here is I first perturb J, then I quotient, and then I sort of add in bubbles, breaking, and the like. So here, right, we realized I can't get equivariant transversality, so the perturbation somehow needs to happen after the quotienting, and it actually also needs to happen after adding bubbles and breaking. So the order in polyfolds is that we first quotient, then add bubbles and breaking, and eventually we perturb. But that means there's something weird going on, right: so on this side I have a bundle, I have my d-bar operator, and the whole thing is equivariant under my isomorphism group. And so on this side, what I should sort of do is, I should divide first, hope that that gives me a bundle, hope that somehow my section still makes sense, but not just that, I kind of need to actually sort of add the broken things before I write my section. And now I shouldn't take this overline, I should maybe make it a wiggly line, so this is not a closure in any topology, because those are never going to be compact, right, they're going to be very infinite dimensional (this is a function space, though), but I need to add all the stuff that I eventually want in my compactified moduli space before I solve the equation. Do you have the order correct on the right side? I mean, we don't divide first, actually. Yes, we do.
Well in my interpretation of what you do you do. So well let me say what I'm. No. No. No. No. No. No. But never mind. So what happens here right and this is the core ideas of polyphol theory so my theory has always been that if I explain to some graduate some reasonably smart graduate student these two ideas then I can lock you up in a dark basement for a year and you're going to come up with polyphol theory. Yeah. So let's see so what do we need right so first of all we need a smooth structure on a space of maps modulo the isomorphism group of my surface right. Despite the fact that this action is pretty much never differentiable. So and that's what you'll hear from Joel this afternoon why this is never differentiable. So pre composition with maps is really really bad. So so this is pretty much so the only ways in which this is differentiable is what either your isomorphism group is actually finite which doesn't hold if sigma j is for example the s and s two or its r times s one with standard J or yeah a torus all of these will not have finite isotropic groups isomorphism groups and the other way is if map if this is actually a smooth if this is a finite dimensional manifold that sits in the smooth maps then it's also classically differentiable otherwise I challenge you to find any reasonable for the space topology on an infinite dimensional space of maps in which this is an even one point differentiable. So is this why algebraic geometry is easier because I can put map in exactly yeah this yeah this is exactly why algebraic geometry is easier. One of the reasons because you don't need to quotient right because instead of this you just take curves in M. You still take maps don't you? Maps is finite and how does it now maps is this inside infinity and finite dimensional. Oh because your analytic maps right yes yes maybe right oh right analytic maps might be yes yes right and so the idea is here whoops so the idea is defined so the idea is to pretty much define a notion of scale smoothness and prove that this action is scale smooth right which then gives me a scale smooth structure on this space so in that sense I would say we quotient out first. No that's not what we wrote because you guys should have quotienting out last. Okay goodness I'm glad we talked about it. Good okay. Boy do I have two minutes all right good so this is a reasonable idea to have the second idea is really not reasonable that sort of right so I ought to define to describe the neighborhood of a broken curve or nodal curve as if it was sort of in a Banach manifold right so somehow I want to I'm going to revert to Joel's example right two spheres right requires somehow maps from S2 times to M times maps from S2 to M that's the data I need here for my Banach manifold but somehow nearby I'm going to have smooth curves and really here I just need once maps from S2 to M and this is a different Banach manifold from that in particular really the neighborhood is described by something called free gluing and that should become a short map of the polyfold of the underlying polyfold or M polyfold B so which fortunately also gets me out of the problem of fix of matching notation with Joel's because I'm not going to give you notation. 
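A standard toy illustration of the kind of smoothness that does survive, stated from memory of the sc-calculus literature, so take the details as a sketch rather than as the lecture's own example: consider the rotation action
$$\Phi:\ \mathbb R\times L^2(S^1)\ \longrightarrow\ L^2(S^1),\qquad \Phi(s,f)\;=\;f(\cdot+s),$$
with the scale $E_m:=H^m(S^1)$. As a map of Banach spaces, $\Phi$ is continuous but not $C^1$, since differentiating in $s$ costs one derivative of $f$; yet it is scale smooth, because the candidate differential $(\sigma,g)\mapsto\sigma\,\partial_t f(\cdot+s)+g(\cdot+s)$ makes sense and is again sc$^0$ once the tangent space is taken with its levels shifted up by one. This loss-of-one-derivative pattern is exactly what happens for reparametrization acting on spaces of maps from a surface to $M$.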
So what happens here you're going to somehow describe nearby curves by a pre gluing map so I'm going to take right so how do I describe all the curves so let's see this is now a point in my big base space right and nearby there are some curves that are still nodal but I've perturbed both maps but there's also non-nodal curves nearby so how do I describe them I actually write there's a rotation parameter which I'm not going to introduce and then there's going to be some gluing parameters which could be infinity and that's not good let me forget about the S1 here for a second so I have a nearby map and another nearby map and some gluing parameter and I'm going to make a pre-glued curve nearby which I get essentially from writing V1 with a long neck and V2 with a long neck and then I'm going to forget V2 beyond here and V1 beyond here and I'm going to interpolate on this little strip and glue the two curves together so that's evidently not going to be geomorphic anymore but this is actually the answer to what is the topology so this tells you what a neighborhood is. You do have a rotation parameter. Right in this case there should be a rotation parameter indeed yes which I just end of lecture great good so the key observation here is that this map is supposed to be a chart map but it is clearly not injective right because I'm forgetting data here and so the idea is to say well it factors through something that is a really weird subset here but then this is it factors through something where I have a local homeo here and this here is something like a retraction so it's a subset in here and I have a controlled way of getting from a general point here into this set and that's really the key to happiness here is because while this is a weird space it has an ambient topology so this ambient topology is matrisable and it has a smooth structure and everything is good and unlike this microphone so I shall stop here. We have questions for Catherine? You mentioned before a big philosophical difference between this idea and the kind of abstract perturbation that I sketched. Right because what you sketched sort of said I perturb here but then I have to preserve the covariance and compactness. So the point is the first step here is figuring out how to describe the whole argyllite space literally as globally as zero section of something. Correct yes right which means I need to figure out what my ambient space is right I want an ambient space for my moduli space that the compact moduli space actually sits in and that's actually that's sort of a general problem that's exactly that problem the problem of an ambient space is what you end up having to deal with with coronishi structures what you also end up having to deal with the Tian and Zebert approach it's always what is the right ambient space in which I could describe things as just one zero set. Yeah so what exactly do we expect from smoothness in this picture? Like there should be some kind of implicit function here in this class but then. Sots mail. But then in the end what we use is this classification of one dimensional manifolds right. 
Oh, if you do one dimensional manifolds, yes, but how... even a one dimensional manifold: in order to get something from a Fredholm section that's one dimensional, you need an implicit function theorem, right, you need a notion of transversality. So you probably get away without smoothness, you could say maybe C1, just for the implicit function theorem, but okay, it's going to get technical now. So Sard-Smale is the reason I believe you really need smoothness and not C^k. More questions? Why... why do you think scale smoothness is such an obvious idea that a graduate student could come up with it? I refer to Joel. No, I don't think that's what she claimed, she claimed, I think... No, no, no, I think, if you explain to a student... my claim was, if you give somebody the definition of scale smoothness and the definition of what these retracts are, then... Yes, that's not quite what you said. No, I know. Well, my graduate students, of course. Alright. Why did I...? I didn't, I didn't want to have to deal with trying to explain the rotation, because also, right, so technically, if I... I shouldn't write S1 times this, because at infinity there's no S1 parameter. So what I do at infinity here is I just don't glue, and when I don't glue I don't have an S1 choice, and that in fact is the reason why this is not a boundary point, it's completely an interior point. So you're going to see... I think tomorrow Joel is going to explain to you that really for nodal curves the gluing parameters are just a disc, where the middle point is infinity. So here, pretty much, if you think of these as discs, right, then you don't have an S1 parameter, and then you see where the boundary comes from. But when there is an S1 parameter, then at infinity you shouldn't have that. And... we'll have... anyone with, like, philosophical questions of, like, "why can't you just...", I am most happy to talk to you all week, and, you know, particularly maybe we'll have a session this evening where we think about "why can't we just...". Any more questions? "Why can't we just"... Thank you. Yeah.
This lecture will discuss the overall ideas and challenges in regularizing moduli spaces, and introduce the two basic ideas behind polyfold theory: Making reparametrization actions "smooth" and making pregluing a "chart map". [related literature: Sections 2.1 and 3.3 of Polyfolds: A First and Second Look. Related videos: Lecture 8 and Lecture 20 from Wehrheim's special topics course.]
10.5446/16305 (DOI)
Right, so I guess I want to start by saying that I'm trying to, I think anyway here I'm trying to try to find line a bit. In the sense that my main goal, I think jointly with Catcher in this week is to sort of set up the analytic foundations that we need to make sense of what Helen's going to talk about next week. And the thing is that I could do that completely abstractly because sort of the polyphold theory is sort of completely abstract, but it's also designed to be completely abstract which also contains useful problems. So if I did it completely abstractly, I think everyone would get lost pretty quickly and not have it be founded or you wouldn't have much to connect it to. So I have to try and bring in sort of elements of things that are hopefully familiar and pair them in just the right way I think with certain ideas coming from the polyphold theory to try and illuminate why definitions are the way they are. And so yesterday I think it might have been sort of a good talk for if it was 80 minutes with questions or 60 minutes with no questions. So I think things got a little out of hand at the end. And so I just wanted to kind of review what I had sort of hoped to say yesterday and then build off of that today. So quickly then briefly, what did I try to do yesterday? So the first thing was that well we wanted to try and parametrize a neighborhood of a nodal map and we're not dealing nothing pseudo-holomorphic here at all. We're really trying to build this big ambient space of function. So we're trying to construct some sort of parametrization or be able to write down some sort of a chart for this. And so the first observation that we made was that this pre-gluing map basically yields us the right topology in this really big ambient space in which has this nice property that in fact our compact moduli space. So even if it has, you know, you're dealing with nodal curves or your broken trajectories or whatnot, this big ambient space has a topology in such a way that your compactified moduli space is a subset and that the correct topology on your moduli space, your compactified moduli space is induced from the ambient space, right? And so this pre-gluing map then was something rather useful. We defined it sort of like this. And so we said, okay, well if this gives us our neighborhood in the right space, well maybe we can use it as a parametrization but we can't because it's infinity to one. There's all this information loss, right? So well what do you do? Well I say, okay, HWZ, they sort of, they introduced this minus-gluing map or anti-pre-gluing map, however you might like to refer to it. And so they introduced this and again it was some sort of unpleasant formula but it had this nice property that when I pair it, right, when I pair it with the plus-gluing or the pre-gluing map, that suddenly for each fixed gluing parameter A, this map here became a linear bijection. And so then the idea is to say, well how are we going to make use of that? Well that's useful because what we can do is we can define this set script O sitting inside this subspace here, right? With the property that, well, we'll define it to be all those maps so that when you anti-pre-glu or when you minus-glu you get zero. And that's good because out of, what that means is that basically when we restrict our pre-gluing map, our plus-gluing map, precisely to this set O, it ends up necessarily being a bijection or at least an injection or bijection with its image. Now that kills this problem, right? 
We no longer have this problem as long as we're happy to sort of restrict our attention just to O. But now O might be some weird set and so now you might say, well, you know, so what? What have we actually gained? And so there was this notion and it's an idea because in the end it's going to work out, but the idea is, well, maybe in some way this set O supports the SC calculus. And maybe there's some way to treat it as if it were something like a Bonoq space or something like a scale Bonoq space. Maybe you can define a notion of smooth maps between them. Maybe you can do that. So how might you do that? Well, you have to do a little, you know, a bit of manipulation here to rewrite the problem. And so we did that by sort of writing down this function R. Again, its domain is the same, but its image isn't in the target space anymore. It's a map from sort of your domain of pre-glued or your domain of unglued maps, basically, together with gluing parameters back to itself again. And we define it in this following way. So we take a gluing parameter and a pair of maps. We apply this box gluing map, which is a bijection. Maybe I should say this here. This was asked yesterday. This is my projection onto the first factor, or sort of zeroing out the second factor at least. And then we take this box inverse gluing. And then we made some observations, or very rapidly I tried to make some observations. I would have spent some more time with if I had the opportunity. Was it this set O that we had a definition for right up here? Well, once we take this definition of R and we see how it's defined, we see that it's precisely the set of fixed points of R. Then we also sort of check really quickly that R composed with itself is just R again. It's a projection, basically. It's a nonlinear projection. That's easy to check here, basically, because this is a bijection. You have its inverse on the other side and a projection in between. And then a quick computation. It's a little one-line proof, basically, shows you that these two properties together guarantee that this set O is, in fact, the image of R. And so then there's this theorem. And this theorem says, after giving this space, C cross E, a suitable scale Bonoche space structure, this map R is SC infinity. So I didn't really give you any justification for that. I'm not going to give you any justification for that. You can look in the literature and you can sort of see, well, why is something like this true? And if I remember correctly, the proof of this essentially boils down to, well, you write out why we have an explicit formula for R here. And when you write everything down, you get some terrible sort of equation, some awful composition of additions and products and blah, blah, blah, you break it down into all the little pieces, though. And at the end of the day, you prove each little piece is SC smooth, essentially by going to that list that is in the lecture notes for day one. And that tells you where to look in the SC calculus book or the SC calculus paper to tell you how to prove each one of these components is SC smooth. And so consequently, you rapidly build back up from this that this is, in fact, SC smooth. So now that by itself doesn't really seem to bias anything. Sure, please do. You say you have a suitable SC Bonoche space structure, so that E itself is an SC Bonoche space. Yes. So what you're going to do is rescale me a zero by some gluing parameter, right? To change the structure me a zero. No. No. So what is this suitable? 
So I'm thinking of C as, I mean, effectively as R2, which is a nice Bonoche space, a finite dimension, and so I can take a… So it's just a standard product. Absolutely. I mean, when you say it's a suitable SC Bonoche space. Well, well, well. E. E. Well, suitable is really applying to E, not C. C is sort of… It was given a suitable structure, but it says C. Well, E, you gave this example to me. I mean, it was… It was a suitable… …dobbing a paper in Delphina. Yes. Yes. That's not suitable. That's not a suitable… It's not a mysterious thing. No, no, no, sorry. I mean, I think, I mean, it was a… I think it ended up being sort of a slight modification. I don't know if in my lecture notes I actually defined the scale Bonoche space structure on E. I gave you something very close to it, and I told you the base topology on E. And if you understand those two, you can obviously guess what this structure needs to be on E. I don't know if I said it explicitly, which is why I'm saying suitable now. Okay. But thank you. Are there any other questions? Because I don't want to blow through this. This is actually fairly important. Yes. So what was the… What was the purpose of this averaging function that you put in the anti-glue? Right. So, yes. So, let me see. I want to see where this… So here's how I would explain it. So, O is defined to be the set of all points where the minus gluing is zero, right? And we define it precisely this way so that the restriction of the plus gluing ends up being a bijection. So, then what you do is you say, okay, well, if you look at the maps, right, so… Let's go see. So script O then is going to essentially be the set of all points which are going to parameterize your neighborhood of your sort of non-nodal maps plus some nodal maps as well. Okay. So what happens is if you think about having this nodal map when you set up sort of a base problem, you typically want that nodal point to say you want to model it so it goes to say the origin in R2N, say, for instance, right? Okay. But then you start saying, okay, what are all my nearby nodal maps? And your nearby nodal maps, some of them are going to be sort of… Some of them are going to be nodal, some are going to be non-nodal. Those which are nodal, you're going to want to allow that nodal point to move around, right? So, now you allow this… So, now you say, okay, well, I have to allow sort of nodal points to move around. This forces you to add in these sort of… this extra constant C that I had in the definition over here of these corresponding scale Bonach space. Okay. That's all fine. You can do everything with a pre-gluing. There's no problem with the pre-gluing. But in the minus-gluing, right? Well, what happens? If you don't have those… if you're not subtracting off those averaging terms, then what happens is when you look at this set here, it turns out that without those averaging terms, it necessarily must be the case that the only way for this equation to hold is if those asymptotic constants are zero and therefore the node can't move around in the image. Think about it. If you take the nodal value and now you glue and you got this long cylinder, what is the best approximation for the nodal value? It's the integral on the middle loop. The average over the middle loop. That's the approximation for the nodal value. Yeah. I think that shows up. Sure. Any other questions? All right. So this is where we left off. 
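To record the bookkeeping from the last two boards in one place, schematically, with the rotation parameter and the precise scale structures suppressed and with notation that is only meant to mirror what was said: for a fixed gluing parameter $a$ write
$$\boxplus_a(u,v)\;:=\;\big(\oplus_a(u,v),\,\ominus_a(u,v)\big),$$
which for each $a$ is a linear bijection in $(u,v)$, and set
$$O\;:=\;\{(a,u,v)\ :\ \ominus_a(u,v)=0\},\qquad R(a,u,v)\;:=\;\Big(a,\ \boxplus_a^{-1}\big(\oplus_a(u,v),\,0\big)\Big).$$
Then $R\circ R=R$, since applying $R$ does not change $\oplus_a$ and sets $\ominus_a$ to zero; and $\operatorname{Fix}(R)=\operatorname{im}(R)=O$: a fixed point is trivially in the image, if $x=R(y)$ then $R(x)=R(R(y))=R(y)=x$, and $R(a,u,v)=(a,u,v)$ holds exactly when $\ominus_a(u,v)=0$.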
So remember, I guess what we're trying to do is we're trying to investigate maybe script O here supports the SC calculus, right? That's sort of the idea. And so then, I don't know how this was sort of developed in practice, but reading through the material, then you make this sort of observation. You say, well, if it's the case that I have this map R mapping essentially a scale bonox space to itself, which is SC smooth and R compose R equals R, well, then I could do this trick. I could do this trick. I could say that a function F defined from the image of R to the image of some other R in some other space is SCK, if and only if, this is my definition, if F pre-composed R is SCK as a map from this open set in a scale bonox space to this open set in a scale bonox space. See, we already have an SC calculus all defined on SC bonox spaces. We have that defined. But now the idea is, well, how about subsets? Well, if these subsets happen to live as images of these special, what we're going to call them, retractions, then it's the case that we can make this definition. O prime lives inside U prime. O prime is a subset of U prime. So there is sort of one good thing about the terrible HWZ notation is that it's fairly consistent. And so it always ends up being that O is always the image of U and U is supposed to be a subset in E and open until in, I don't know, five or 10 minutes, in which case we add in sort of this additional portion where we put in something like, I think it's called a partial cone. You can think of it as a partial quadrant in between here, in which case U is relatively open in a partial quadrant. But then this always ends up being the domain of R. And R always goes from U to U. It has to go from one set to itself, absolutely necessary as you would want for a retraction. Any further questions? OK. So I make this. Now the idea is we have this, we now have a definition for smooth functions between these weird images, these weird subsets. And then this is sort of the first bit of magic. And the second bit is that actually, that these strange subsets, and they really are, I mean, rather strange. I mean, even when you write down toy examples, I think there's a homework example on this where you can see these things might have finite dimension, but the dimension of the space might jump. And we had locally varying dimensions, locally varying co-dimensions. It might be sort of a full set of sort of, yeah, if you have an SC-Bonach space, for instance, which fibers over some other finite dimensional space, it might be sort of full dimension on one region and then sort of have infinite co-dimension on another. It's a very, it's a a priori, it's quite wild. But nevertheless, despite all the strangeness, it does have a tangent bundle and you can just sort of see what it has to be. And I have a map, R, which defines it, which maps U to U, and R compose R equals R. And so I say, well then, it must be the case of the tangent map of R maps T U to T U, and you apply the chain rule, and the chain rule says, well, T R compose T R is T R, which gives you again a map precisely of this form. And so then you just take as definition the tangent to this subset is equal to the image under this, the tangent map of R of the tangent of U. It's kind of functorial. It's sort of the only thing it could be. And I should say, it doesn't actually depend on how only on the set O. Right. 
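A short argument filling in the step just asserted, that this depends only on the set $O$ and not on the chosen retraction: suppose $r:U\to U$ and $r':U'\to U'$ are sc-smooth retractions with the same image $O$. Since $r\circ r=r$, the map $r$ fixes every point of $O$, and $r'$ takes values in $O$, so
$$r\circ r'\;=\;r'.$$
Hence $f\circ r'=(f\circ r)\circ r'$, so $f\circ r'$ is sc$^k$ whenever $f\circ r$ is, and the notion of sc$^k$ maps between retracts does not depend on the retraction. Likewise, for $x\in O$ the chain rule gives $\mathrm T_xr\circ\mathrm T_xr'=\mathrm T_xr'$, so $\operatorname{im}(\mathrm T_xr')\subset\operatorname{im}(\mathrm T_xr)$, and by symmetry the two images agree; hence
$$\mathrm T_xO\;:=\;\operatorname{im}(\mathrm T_xr)\ \subset\ \mathrm T_xE$$
is well defined.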
So we're not, I haven't even, I haven't officially defined scale, scale smooth retractions, but this is sort of, this is sort of the first observation. And so then, so then once you've done this, I mean, these are sort of, I mean, almost sort of stupid things. I mean, you just, you're tinkering and you find this sort of stuff, right? Once you see that, you should, then you should have this idea. This is the big conclusion then, should, should be, well, let's try to build manifolds locally modeled on subsets like O. So what are all the characteristics of O that we need in order to actually build some sort of, something like a manifold locally modeled on this? And so this is, this is what I would have liked to have conveyed, I guess, in my lecture last time. And I think it's important to sort of see this story completely laid out like this. To see how, so these things here end up being, so it is the case that this is doable. I'll say this in just a second, but these things are called SC smooth retracts. These are called SC smooth retractions. And these essentially are going to form the, are going to form local models for M polyfolds. And what I want, what I'd like you to be able to see is that, is that if you start with just pre-gluing, which is something that shows up in, you know, whatever framework you want, right? I mean, any framework where you have nodal or broken elements of your modularized space, you start with that in some sort of classical analysis. Then you do this trick. You introduce this minus gluing, repackage it into this, into this weird sort of nonlinear projection and show that it's SC smooth. Then you necessarily are led to this idea of having these SC smooth retracts, which are going to provide local models for our big ambient space, right? So any questions about this outline? Yes, sir. So like this, these retracts that's really, really different from the usual setting, right? Where you have close to pre-glued positions with smooth maps, or the map is always horrible, like in several other spaces. I'm sorry, what was that? I missed. So like this is really different from the normal, like the standard setup, this feature of R. Which, the SC infinity? No, no, no, no, that like post-composition with, or like this. Pre-composition, but. Yeah, a pre-composition with it is exactly preserving the features of F1. But if you think of solar spaces and then pre-composition with smooth maps. Well, I mean, I think, right, if I have a smooth, if I have a classically CK function F, and I pre-compose by a smooth function, I mean, my chain rule should tell me that I just have, that regularity in that sense should be preserved. And so again, that's happening here. So that's sort of not so surprising. I think the surprising thing is that, the surprising thing is that, and this was commented yesterday, the surprising thing is that if you reduce this from SC infinity, or rather increase it from SC infinity to classically C infinity, then the image can't have these weird properties. It sort of necessarily must be a Bonoc manifold. And so you then reduce to sort of the standard calculus in that case. Yeah? Am I supposed to think there's a difference between locally modeled on O and locally modeled on the pair is E comma R? OK, so that's a great question. I want to get to that right now, actually. Can you repeat the question? So there's this question of, when I say locally modeled, what actually constitutes a local model? Right? And I keep saying O. 
And the first time you see this, the natural question is to say, you don't mean O, you mean R in there somewhere, like R and E. So let's see. So OK, so I can say definition really quick. So E is an SC Monox space. Let's say U contained in E is open. And we recall my ambiguity. I really mean open in the Bayes topology. R is a map from U to itself, satisfying R compose R equals R, and R is SC infinity. And all of this says is the definition for an SC smooth retraction. And I'll say, here's another definition, which is that E is an SC Monox space. Which O subset of E is an SC infinity retract provided there exists an R, which is an SC retraction such that O is equal to the image of U under R. So my handwriting is a little sloppy, but everything here is basically just me writing this stuff down in, I don't know, some formalized version. There's no essential change. I'm not sure I entirely understand the second definition. So an SC Monox space comes with a whole lot of structure. How much of that structure is O supposed to remember? Right, so what do I want to say here? OK, so I'll point out that that's your second question. So I want to answer your first question first. And your first question is, well, what do we mean by sort of local? What is the local model? And so that was the previous question. So here's the answer to that question. The local model, our sets are of the following form. And if I want to be, yeah, yeah, not E, not this, and not say R U or U comma E, but we don't want R in here, which is strange. And it's also the case, this is also just in terms of notation. I can say that even recently I've been highly irritated by the fact that you can't just talk really honestly. If you want to really be as honest as possible, you can never just tell me O is the local model. If you're really being honest, you have to give me both O and the ambient space. But it turns out that you don't have to tell me R. And the reason for that is that if you look at this definition, if you look at this definition, you sort of say, OK, suppose I have one retraction, which makes this function here SC smooth, and I choose a different retraction which defines the same set, then again, it's going to have the same regularity. So regularity is independent of this retraction, and the retraction doesn't really tell you any other information. So all of the structure that O has, all of the differentiable structure, I mean, is essentially, I mean, all the scale structure is induced from E, and any differentiable structure is induced from the fact that there exists an R whose image is O. As long as there's one, you're fine. And so in the literature, then, this then becomes your local model. This is replacing, say, open sets in Rn or open sets in, you know, Banoch spaces. And I can make that even more precise now, unless there are further questions by anyone. Can we talk about O also as having a scale structure? Yes. It has a scale. It's a, I mean, the image of the appropriate levels of E. Yes. So O, remember, has to be a subset of E, and so anytime you have a subset, the natural thing to do is to say that the kth level here is just equal to O intersect the appropriate scale structure there. And you know, and... The subset would be zero. Well, remember, my ambiguity at the beginning of yesterday sort of says that, says that when anytime I have a set-wise statement like this, this is like, I always mean the base level. Right. Explain what it means to be an isomorphism of such objects. An isomorphism of such objects. 
Well... Like for instance, whether I have O, E included inside... Is that an isomorphism, or is that... E cross C. I think that it is O comma zero inside. Oh, I see. Well, I'll be honest and say that I don't quite know what you mean by isomorphism in this case. I mean, you know, it doesn't necessarily have... I don't think it necessarily has a linear... In general, it doesn't have a linear structure to it. So yeah, so in general, this doesn't have a linear structure to it, and so consequently, isomorphism would mean diffeomorphism, and that is a concept I'll define. But in fact, I mean, from just what's on the board, you should be able to conjecture what it would have to be. You would say, well, you know, I've got two such retracts, and we say they're SC diffeomorphism. The SC diffeomorphic provided there exists a function which is a bijection between the two, which is SC smooth, or SC one, whatever. I guess we'll stick to the smooth category, I think, for simplicity. Right? So in other words, the notion of isomorphism doesn't care about you. Well, I mean, E is sort of running the background because I can't talk about a local model unless I have both of them. I have to have the ambient space, and I have to have the subset. Because the definition of SC smooth depends on... This is of course... It passes through you. It passes through you. This of course is actually related to a Facebook post I had about whether or not it's acceptable to write statements like this, typographically. Most people hated statements like this, but writing down what these are subsets of is a useful thing. Vertically it's OK. Vertically it's slanted. It's terrible. OK. Right. I was going to online... I'm surprised he hasn't already. Yeah. I... So what you just said is that the obvious candidate for isomorphism is an F, like one definition which is bijected to SC infinity, but don't you need to require an inverse to SC infinity? Right. I mean, I said bijection and then I want function, and if I didn't say, and it's inverse, to suppose to be SC smooth, I wouldn't want just one direction for sure. Sorry if I missed that. I don't understand why you need to keep E in the local model. I mean, can't you just take some of the knowledge of theta or O plus the knowledge of what all the SC functions from it are? That's out of my pay grade. I don't... Yeah, I'm not sure. How can you define the SC smooth for something not between O's via U, which is an E? Yes, I mean, I can stick another sort of containment in here. I'm making my board work. I'm just saying like that is where E comes into the picture, right? Because you are always just composing all of these maps. Yes. Yeah. Why don't anyone say yes to it? I think it's right. Yes. I'm not entirely sure what the question is. You're just going to define smoothness unless you know E because it contains... Ah, yes. Yes. Thank you. I'm sorry. That was my fault. Use the audience doesn't make remarks. Okay. So now I'm going to do something which is sort of obvious. I mean, I think it's obvious, but I also think sometimes doing obvious things on the board solidifies how obvious they should be. So what I want to do is basically give you the definition of an M-polyfold minus some lying. So let X be topological space. X in X, a chart around X is a tuple, the phi O so that this here is open in X. This is an SC smooth retract. I mean, in this sense here, or it's a local model if you like, O is a retract sitting inside E and then phi is a map from V to O which is a homeomorphism. 
Because at this point we have no further structure on X. So, definition: an SC-infinity atlas on a topological space X consists of a set of charts of the following form that we've just seen, such that they are pairwise compatible (compatible is the word I still need to define) and this collection of V's covers X. Two atlases are equivalent if their union is an atlas. No, no, no, no, no. I'm sorry? E is fixed, or is it in the definition, or? No, E need not be fixed. So any chart of this form, where E is some scale Banach space, O is some SC retract sitting inside that E, V is any open set, and phi is any map. So they could a priori be different. This actually sort of... yeah, I mean, things like that are actually important, because when you want to consider maps from, say... your domain is a Riemann surface... you could ask questions like, well, what do we really mean by a Riemann surface, right? Someone might want S2 to be sort of the set of all points of unit distance away from zero in R3, but someone else might have some sort of slightly different... they want it to be sitting inside R4 or something stupid, right? So your domains are different, so strictly speaking those are different spaces, but you would want to allow that in such a definition, right? So: two charts are compatible if the associated transition maps are SC smooth. So here's an exercise: that the domain of definition is also a retract, but it's more. SC-infinity atlas... no, no, I've got it, good. Definition, important. An M-polyfold is a paracompact Hausdorff topological space equipped with an equivalence class of SC smooth atlases. Now, I claim this definition should be sort of obvious, but also sort of necessary for me to write down. There's a question over here, I think. No, I got it. Okay, good. Those are the easy questions to answer. Are there any questions about this? Why is it reasonable to ask for paracompact Hausdorff? Well, because I want to... I would like it to be the case that if I restrict to finite dimensions I recover the usual notion of a differentiable manifold. But it turns out to be the case that when you build these M-polyfolds, for instance for Gromov-Witten, SFT and stuff, they're paracompact Hausdorff. All the standard Banach manifolds are too. When you say that they're compatible, you're asking it to be a system, and you need to have, if I remember well, that the kth level goes to the kth level. So are you assuming that a priori, or... I mean, you know, you're just having a homeomorphism, or something you can lead to. Right, so you have... well, right, so what happens? That's a good point, actually, right? So what's happening here is that you're starting with something that's nothing more than a paracompact Hausdorff topological space. It seems to have no additional other structure. It has no level structure. It doesn't have... it has nothing else, right? And so what it turns out is, well, this is the structure that you need, so that once you equip it with an equivalence class of atlases, or even one atlas in particular... because it's the case that all of your associated transition maps have to be SC smooth, they have to preserve levels, they have this sort of differentiable structure. Any information that you see in the local model that you would like to see in the M-polyfold, that's also preserved in the transition maps, which is essentially everything we've discussed, we can then claim is just induced on M via the local models.
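Compressing the definitions just given into symbols, with the same content and nothing new: a chart around $x\in X$ is a tuple $(V,\varphi,(O,E))$ with $V\subset X$ open, $x\in V$, $(O,E)$ an sc-smooth retract, and $\varphi:V\to O$ a homeomorphism. Two charts $(V,\varphi,(O,E))$ and $(V',\varphi',(O',E'))$ are compatible if the transition maps
$$\varphi'\circ\varphi^{-1}:\ \varphi(V\cap V')\ \longrightarrow\ \varphi'(V\cap V')$$
are sc-smooth in the retract sense above (their domains are again retracts, by the exercise). An sc-smooth atlas is a set of pairwise compatible charts whose domains cover $X$; two atlases are equivalent if their union is an atlas; and an M-polyfold is a paracompact Hausdorff topological space together with an equivalence class of sc-smooth atlases.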
So yes, it is the case that after you have an atlas for it, this thing has a nice, you know, what I would call scale topology and variety of other structures. But the transition maps will be defined from a subset of theta to a subset of theta prime, right? Not a, yes. So what does it mean that that maps, that's not a retractable format? This was Helmut's exercise that he stated. So it turns out that it turns out that the restriction of two overlapping, the restriction of two retracts is also again a retract. An open subset of a retract is a retract. An open subset of a retract is a retract. Yes, that's a good point though. Any other questions? Is this all open? I mean, it seems like it's probably been accepted. It's divided by choice condition. Well, it can't be, I mean, in general, it's certainly not going to be open because to be open, it's going to sort of fill up an open set inside E. It's going to be much wilder. But, you know, it is the image of an SE retraction. Yeah, it's the image of an open set in a scale-Bonach space by a scale retraction, scales with retraction. And that's sort of enough to make this definition work. Yeah? And the topology on O is the substaste topology, right? Well, I mean, O, strictly speaking, has, you know, scaled topologies, but the base one, yes, is induced from the base topology. Yes. Yeah. Okay. So, now I can tell you that I'm also lying. In the following sense, so, you know, if you're studying, you know, if you're not studying Gromov-Witton, then pretty much any other sort of moduli problem that you're likely to come across is going to have some more rich algebraic structure, which relies on having met, relies on your moduli space having boundaries and corners. And currently, the way we have this definition of M polyfolds, nothing has boundary and corners. So we need to fix that. Uh-oh. Okay. I'm not lying on the ground. That's triple, triple vowel looks terrible to me. I don't know. Oh, snap. The predictionary.com says the Joel slobbing correctly. Oh, snap, all right. That's why it's free. Ha ha ha ha ha ha ha. Linear SC isomorphism is a linear SC zero map T. This is Helmut's notation. E to F, which is an ISO on all levels. Partial. All right, Helmut. Tell me which word goes here. Quadrant or cone is a closed set, closed convex set. Let me give it a name. C such that C equals T applied to, and T is a linear SC isomorphism. So this, I'm sure, is a partial quadrant. Are we calling these partial quadrants as well? You'll know. Fantastic. We'll call this quadrant. And this is our model, partial quadrant. And you might say, well, why am I doing this? I'm sorry. Yes, absolutely, just a finite number. And you can say, well, why am I doing this? Well, what I want to do is what I really want is to have a notion of boundaries and corners sitting inside a scale-bonox space. And so this is sort of your model for that, the same way you'd write down your obvious candidates sort of sitting even in finite dimensions where you're going to sort of have this shape where this is going to be R to the N, say, for instance. So you could replace this with a bonox space. That's fine. And then we allow this sort of isomorphism so we can move things around. So in particular, this region here is now partial quadrant. I'm sorry? W is just some other scale-bonox space. So your C law is an E. Pardon? C is an E and X is still having a use and T is something. Yes, yeah. So yeah. So C, like I said, over here somewhere, this notation is sort of always standard, it seems. 
C is going to be some partial quadrant inside some scale Banach space, and we want it to be the image of this model one under some linear isomorphism. So, is something like a box allowed, or just corners? Okay, so first of all, not the whole box, but the corner of a box is allowed; you can't have all four corners at once, because it has to be of this form. No, no, I understand; I'm asking which things are allowed. Is the following a finite-dimensional M-polyfold: a box? Yes. An octahedron? Probably not. So for instance, if I have a pyramid with four sides, that's not allowed, right? You could imagine allowing it, but the current theory doesn't. Is there any reason for not allowing that, actually? I would say that it's non-generic: if I think of my faces and move them around generically, then that is a non-generic position. Actually, you could take any convex set with nonempty interior and do everything with that, and that would cover your case; then you could have spaces whose boundary is modeled on the boundary of the convex set. Has anyone studied that? Well, you need an interior, because you need a cone condition, right, you need enough directions so that the linear map is determined. But I'm very curious; there would presumably be fairly direct extensions of polyfolds allowing for that. Yeah, I might believe that. Okay, so, observation: the sc-calculus extends to partial quadrants the same way differential calculus extends to manifolds with boundary and corners. It's the same story; this is not a surprising statement, just look at the definitions. And now I can make a slight generalization so that I can be less dishonest. An sc-infinity retract, and this will replace our previous definition, is really a triple (O, C, E). Previously we didn't have the C here. Here E is a scale Banach space, C is a partial quadrant, and O is, once again, the image of a retraction; but now U is a subset of C which is relatively open, meaning open in the subspace topology induced on C from E, and r is an sc-smooth retraction. C sitting inside E here? Yes, sorry. Okay. In the definition of linear sc-isomorphism, is it important that you wrote sc-zero instead of sc-infinity? It's linear, so sc-infinity should follow from sc-zero. So why bother with this T at all and not just take the standard quadrant? Take the standard what? Why not just take the standard quadrant all the time? That's a good question for Helmut. I just think it's useful to have this definition at your disposal when you need it, because otherwise you might come across a collection of spaces which are not literally in the standard form; I think it's convenience. So I didn't get the question. The question is: why have we bothered to define partial quadrants this way, rather than always working with the model partial quadrant? Because if you look at the tangent space, there is a natural notion of a partial quadrant in the tangent space at a point, and it is not literally the standard one. So maybe I didn't get it: why have we defined C instead of working with the model partial quadrant? Yes: for example, when I have an M-polyfold and I take the tangent space at a corner point, then there is a naturally defined partial quadrant in that tangent space. Yes, it is naturally defined, but it is not in standard form; you can only say it is isomorphic to a standard one.
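For reference, the upgraded local model in one display; this is my paraphrase of the board, with the retraction written as r and its domain as U:

\[
\text{An } sc^\infty\text{-retract (with boundary and corners) is a triple } (O, C, E): \qquad E \ \text{an sc-Banach space}, \qquad C \subset E \ \text{a partial quadrant},
\]
\[
U \subset C \ \text{relatively open}, \qquad r \colon U \to U \ \text{an sc-smooth retraction, i.e. } r \circ r = r, \qquad O = r(U).
\]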
I'm not convinced either way yet, but I have some more to talk about, I guess. So how do we know that taking finite k will be enough for applications? Because conveniently, our moduli spaces should be of finite dimension, and therefore, the dimension of their boundaries. I mean, the highest order corner you could possibly have is finite. So the redundant dimensions will always be actual full dimensions, not that's what we expect. No, no, no, not the full dimension. But the corner structure, we usually define as. Just in how many ways can the stuff break? And then usually the energy is fixed in the final model. Yeah, usually. I mean, in order to have an infinite dimensional corner, you need to have an infinite level building. And I mean, I don't know of any sort of standard analysis which covers it. Maybe the applications generalize it. I'm sure it's not difficult to generalize. For some. Right, so then what I would like to do is. But that's the case for when we make them holomorphic. When we make them holomorphic, there is energy lower bound on energy. But if it's not since nothing is holomorphic now, we don't have any lower bound on energy. Potentially there could be. Right, but what you're saying is, I mean, but what ends up happening is you're looking at some big union of spaces, B. So these are sort of non-nodal curves. And then I guess what I want here is some sort of fiber product here. So in other words, you're looking at B, union, B fiber product with itself, union, B fiber product three times. And so you take this big long union, and no point in here is ever a point in an infinite number of. I mean, you could have mentioned that if you look at arbitrary maps, if you can break them infinitely many, often even if you have a finite energy window. But no one map has infinite number of levels. I mean, you could think of. Oh, like 1 over x squared. No, no, look at the sum of 1 over x squared. But you're trying to connect it. But the thing is, you need an ambient space. It just contains this stuff. So in the ambient space, there's no need for it than having more corners built in than for your model aspects. But you could, of course, pursue in certain cases. OK, OK. So since this works, eventually we will pass the whole multiplication, this will work. Can I add a voice from the back? Please. So usually, your space B is all things that are in one homogeneity class. So if you define the energy as actually the sublactic area, you actually have finite energy on all of B. I mean, you could imagine, if you look at the function, R n, you put a lot of critical points in the vector infinity. You couldn't have it in such a way that the flow cannot escape. But you have infinitely many critical points to infinity, which sort of the difference of the energy goes close to 0. Then you could actually break things as often as you want. So that's possible. But for each space that you fix some points, it wouldn't happen on the level of the gradient flow. So we have a polyphol discussion this afternoon. So I think Joel should. Right, which Nate has generously let me take about 10 minutes of, I think. So let's see what would I like to do. So my notes are more detailed than what I'm going to write on the board here. But I'll draw some pictures just to present the idea. So what you want is to say, OK, if x is an m polyfold, let's see if this is what I say this. So OK, so once I have, OK, as I should say this. 
So once you have sc-retracts modeled on these more general triples, you run through the same collection of definitions in terms of charts and atlases and get a new definition of M-polyfold which allows these as your local models. That is the honest definition of an M-polyfold, and now we have boundary and corners. But yes, it will be the case that in such an M-polyfold you can have corners of whatever order you like; not infinite, but any finite order. Okay, so suppose we have an M-polyfold. What we'd like is a measure of cornerness, if that's a word. What do I mean by that? What we really want to do is define something Helmut will probably use a lot, called the degeneracy index. The degeneracy index is a map, let's call it d_X, from your M-polyfold into the natural numbers including zero. How do you define it? It's not too difficult. Since M-polyfolds are effectively defined through local charts and transition maps, let's first define it for the model case. In the model case, here is a nice partial quadrant, and if you want you can take the sum with another scale Banach space. The interior points are where the degeneracy index should be 0, these points here have degeneracy index 1, and these points have degeneracy index 2. Once you have it in this local model you can see how it generalizes: you make the definition for partial quadrants, then you can define it in a chart, so in the notes you will see a degeneracy index defined like this, some natural number, and then from the charts you want to define it for the M-polyfold itself. There is just one little trick, because presumably you can write down local models where the degeneracy index differs depending on the local model, so you take the minimum over all possible charts. That lets you define the degeneracy index, measuring which corner stratum you are in. After one defines this, it is important to state the following proposition. If X and Y are M-polyfolds, U is an open subset of X, V is an open subset of Y, and f from U to V is an sc... does it have to be smooth? Let's say sc-smooth... diffeomorphism, then the degeneracy index in X of a point x equals the degeneracy index in Y of f(x); here we assume x is in U and f(x) is in V. So the conclusion is that sc-smooth diffeomorphisms of M-polyfolds preserve the corner strata, which is exactly what you'd expect from finite-dimensional manifolds with boundary and corners. And one should also say that d_X agrees with d_C, the trivial index you define for a partial quadrant C: you can view an open subset of C as an M-polyfold, and when you look at a corner you just read off which stratum it lies in, as in this picture; it's the same definition.
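Before the question about the minimum comes up, here is the degeneracy index in one display, as I have it in my notes; the letters d_C and d_X and the way I split off the finite-dimensional factor are my own bookkeeping:

\[
\text{Model case } C = [0,\infty)^n \oplus W: \qquad d_C(x_1,\dots,x_n,w) \;=\; \#\{\, i \in \{1,\dots,n\} \ : \ x_i = 0 \,\},
\]
\[
\text{M-polyfold case:} \qquad d_X(x) \;=\; \min_{\varphi \ \text{chart near } x} \ d_C\bigl(\varphi(x)\bigr),
\]
and the proposition just stated says that an sc-smooth diffeomorphism f satisfies d_X(x) = d_Y(f(x)).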
Is the minimum really necessary? It's only there because you took the minimum over an infinite number of possible charts. You're saying that in that picture, every chart recognizes that corner as having d equal to 2. But do you really have to take a minimum? Yes, because you could have a retract which is a line going like this. Then the naive definition would read off from the chart that the degeneracy of that corner is 2, but the line can be put in a better position for the retract, in general position. Oh, I see, you can have a bad retract here. Yes, that is the sort of mistake: you take a big ambient space, put the same set somewhere else, and retract onto it. Right, this would retract onto that. So the point is: you might have a set like this, with a retract sitting inside it, and the image of the retraction, even in this two-dimensional case, might be this line. Then for this line you ask, what is the degeneracy index here? Well, d should be 0, of course. And what is it here: is d equal to 1 or 2? In this particular model it sits at a corner of the ambient space, where it looks like it should be 2, but if you take a different local model you can find one where it is 1. That's why you need the minimum. Any other questions? How do you know that you actually have a corner at all? You told me I can make all kinds of weird spaces as these M-polyfolds; maybe I can also produce the corner of the quadrant, or the quadrant itself, using only the earlier definition, the one which doesn't build in the boundary and corner structure. How do you know that's not possible? Presumably you have to know that, otherwise the degeneracy index breaks. No: you cannot make a real corner look better. You can over-count the corner structure in a bad chart, but you cannot make the corner structure better than it actually is; the minimum takes care of the rest. Can you say anything about the proof of that statement? Ask me later, I think; yeah, don't worry. I started late, so I'm only a couple of minutes over based on when I started, but I can go ahead and stop here. Well, if you have a final statement, the audience should maybe allow you to make it. Okay, so the last thing I wanted to state is that finite-dimensional manifolds with boundary and corners have the following nice property; here is a toy example. Along the boundary of your finite-dimensional manifold with boundary and corners, you can look at the set of all points of degeneracy index 1 and then take its closure, and that closure is again a manifold with boundary and corner structure. You can see that from this picture: it's just this segment here. That is a property finite-dimensional manifolds with boundary and corners have, and you would like to lift it to M-polyfolds as well. That can be done, but there is an additional condition one needs to place on the retractions that define your local models. This is called the taming condition; it's in the lecture notes, and I'm not going to present it here. The key thing to know is that the conclusion guarantees this nice property: the closures of your faces, the degeneracy-index-1 portions, are again M-polyfolds with boundary and corners. That's one thing. The other thing to remember is that essentially all retractions that occur in practice are of a special type, splicing-type retractions.
And in these cases, the additional taming condition is automatically satisfied, so it really ends up being something you don't have to check very much; it should be completely straightforward, and then you have this additional property. Okay, and that's where I'll finish. Thank you for letting me run over. Are there any more questions? When should people read the lecture notes, by the way? I assume they're done already, right? The lecture notes are online; read them at your leisure, and of course if you have any questions you can ask. And I will be speaking briefly in Nate's discussion session, I'm sorry for that, but I have to make sure enough material is covered for Katrin to start tomorrow. Says Katrin. Any other questions? Suppose I have some sc-manifold... sorry, just to interrupt: sc-manifold, or do you want M-polyfold? Because there is a difference. Okay, thank you. Suppose I take the algebra of sc-smooth functions; does that determine the M-polyfold? I'd have to think about that one. No, I mean, is this some Gelfand duality statement? But what would Gelfand duality say here? Okay, so I think there are Banach spaces where we don't even have smooth functions, so one has to be careful. I think this is a good one for the discussion session. Let's thank Joel again. So, Nate was very generous, and let me talk for a few additional minutes just to finish up the material I wanted to have presented before Katrin starts tomorrow. And as my voice is just about to give out, I think that's the universe telling me it's really time to stop talking, so I'll try to be as quick as possible. The main thing, and we'll start over here (strangely, that's just how the boards got laid out, so it goes one, two, three, four, five), the main thing I want to tell you about is the statement of this theorem. What I've spoken about mostly so far is M-polyfolds, which, as I said, are a generalization of manifolds. They are a generalization in such a way that they can have boundary and corners, and, both topologically and differentiably, they have enough structure to contain the compactified moduli space as a subset, with the Gromov topology induced from the ambient scale topology, or even just from the base topology. Great, so now you have these generalized manifolds. But of course, if we want to study pseudoholomorphic curves, or other moduli problems, we want the moduli spaces to arise as zero sets of a nonlinear Fredholm section, and that means we need some notion of a bundle. So the main theorem I want to point out here is this implicit function theorem. It is the first step, and it basically says what you would expect a finite-dimensional version to say. The finite-dimensional version, without boundary and corners, would say: I have a finite-dimensional manifold, a nice smooth bundle over it, and a section of that bundle, and assuming that along the zero set the linearization of the principal part of the section is surjective, the implicit function theorem guarantees that the zero set is in fact a manifold. And if the base of your bundle is a manifold with boundary and corners, you would like the corresponding solution set, the zero set, to have boundary and corners as well. That's essentially what this theorem says, from the top of this board down to this line here: it is just the M-polyfold version. Yes?
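For the record, here is the shape of the statement on the board, with every underlined term still to be defined in what follows; the hypotheses are paraphrased from memory, so check them against the lecture notes rather than quoting this:

\[
\text{Let } P \colon Y \to X \ \text{be a tame strong bundle over a tame M-polyfold and } f \colon X \to Y \ \text{an sc-Fredholm section.}
\]
\[
\text{If for every } x \in f^{-1}(0) \ \text{the linearization } f'(x) \ \text{is surjective and its kernel is in good position to the boundary and corner structure,}
\]
\[
\text{then } f^{-1}(0) \ \text{is a sub-M-polyfold of } X \ \text{which carries the structure of a finite-dimensional manifold with boundary and corners.}
\]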
There's x equal y. No, so y is the bundle and x is the little x and little y. Little x and little y. The solution set equals y. This is cap x. I mean, I could rename this. Well, x, if you like. Is that the issue? But x was a specific term for x, x, 0. What was x in the way of the equation set? It's OK. It's at every point x in the solution set. At every point x you want? I'm fine. OK. Thank you. Right, so that finite dimensional version of what you would sort of expect for this sort of an implicit function theorem is exactly what this says. But then I've underlined all the words that at this point in the course of lectures we don't know yet. So I wanted to kind of go over those somewhat quickly. And some of them I can go over quickly and some of them I can't. So for instance, a big one here is an S.C. Fredholm section. So this requires an entire lecture, but that's the point of tomorrow's lecture by Katrin. She'll be introducing S.C. Fredholm sections. And then there's a tame strong bundle, which also takes a minute, and that's essentially what takes up the bottom two boards here. So I want to pause on that for just a second. And the remaining two pieces are good position and sub-M polyfold. So let me go ahead and drag this top board down just so I can point a little bit easier. So good position has a precise meaning. It is in the lecture notes, as are essentially these pictures here. The point is that, I mean, I think the point is that in order to guarantee that you're sort of the sub-manifold that you get, this sort of cutout has a nice boundary and corner structure, then the behavior of that sub-manifold, before you've shown it, has a nice boundary corner structure, needs to interact nicely with the boundary of sort of the ambient space. The boundary and corners of the ambient space. And so general position would be sort of somehow it intersects transversely. And that's sort of ideal. That's what you would expect. I should probably mention this about HWZ sort of terminology. So when you read, when they say general, I think generic. The general case, in my opinion, covers all possible cases, but the generic one is the one that happens sort of most often, but not necessarily always. And then when HWZ say good, what they really mean is good enough. Right? Because in general, I would say this position is better than whatever I've drawn right here, which I'll explain in a second. But I mean good enough because it's good enough to get the result that you want, which is that your sort of sub-manifold has nice boundary and corners, a nice boundary corner structure induced from the ambient boundary corner structure on the M-polyfold, say, for instance. Right, so that's terminology. And so here's general position. And so what does good position mean? Well, basically what it means is that you have a tangent. And it's a little trickier because in general, of course, I've drawn everything with your subspaces being like one-dimensional. And so it's a little bit trickier to sort of make these statements, especially since your ambient space is not finite dimensional. But the point is that you want it to be the case that you have your tangent plane. So it's going to pass through the origin. And you want it to be the case that you can sort of wobble it a little bit. It takes sort of an open neighborhood of nearby planes. And then you want to make sure that that open neighborhood stays, or the corresponding rays, if you will, sort of stays on the interior of your ambient partial quadrant. 
And so that's why what you have this line right here is supposed to represent this finite dimensional subspace. And it's good because I can wobble it, passing everything still through the origin. And all those corresponding planes are going to pass through the interior here. That would be not the case here. This is a not good case where you have, say, this ray sort of travels precisely along the boundary. Because now you wobble a little bit, and suddenly it's gone. Suddenly it's outside that partial quadrant. I see. So your picture, though, is that it's not a two-dimensional cone coming into this two-dimensional. It's a one-dimensional thing. It's a sort of red thing. And you're a two-dimensional triangle, sort of very quick wobble. Yes. Oh, yeah. I think that's right. So it's really that when you say it's not good, it's because that one-dimensional thing is coming along the edge. Absolutely. Absolutely. Right. And then, of course, here's something particularly not good. When you have sort of a line that sort of, apparently this happens in the theory sometimes it has to be dealt with, you have some line which passes like this just through this point, and then you can sort of see even no perturbations of this line sort of bring you into the interior. So this is just sort of a picture definition. The precise definitions in the notes. But I wanted you to make sure that there at least be aware of something like that. Oh, and then a sub-M polyfold. So I can read this. Let X be an M polyfold. So we're thinking this is an ambient space. A is a subset in there. And we want to know when is A a sub-M polyfold? Well, that's going to be the case if, for every point in A, you can find an open neighborhood V around A and an SC infinity retraction R taking that open neighborhood to itself so that the image of that is A intersect with that neighborhood. I mean. How about your retraction hands? Of course. It just retracts right down. It's much easier. So I think if you think about this definition for just a moment, you should hopefully convince yourself that this is the only possible definition that it could be. In some sense, replace X with SC bond-ock space. And you should essentially get, you nearly get the definition of just a retraction in a model case. So all we're really doing is saying now you can also have retractions in M polyfolds. And that retractions in M polyfolds can cut out sub-M polyfolds. And that's good because this theorem tells you that that subset that you get in your M polyfold is a sub-M polyfold. And then the bottom end of this guarantees that that sub-M polyfold is essentially just a finite dimensional manifold with boundary and corners. Any questions about this? So having a retraction on your polyfold means that locally on charts, it is a retraction. Right. So I mean, a sub-M polyfold, because of the definition, it must be the case that you can find a local model in which it's actually an M polyfold as well. No, but I mean, because when you define your charts on this side, I'm just wrong. Your retraction is defined on an open set. Right. But you should see that as long as you have something which supports the SC calculus, as long as you have a domain which supports the SC calculus, then you should be able to have a notion of a scale retraction. So it's just a one-step generalization from that. Good. So the last thing that I haven't explained then that I need to is the notion of a tame-strong bundle. 
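Before the strong-bundle discussion, here is the sub-M-polyfold condition in one display, as a paraphrase of the definition just read out:

\[
A \subset X \ \text{is a sub-M-polyfold} \ \Longleftrightarrow \ \text{for every } a \in A \ \text{there exist an open neighborhood } V \subset X \ \text{of } a \ \text{and an sc-smooth retraction } r \colon V \to V \ \text{with } r(V) = A \cap V.
\]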
This tameness I'm not really going to talk about much, although it's discussed in the lecture notes; it basically says there are additional conditions on your M-polyfolds to make sure they have a nice boundary and corner structure. What I do want to talk about is this term strong bundle, which might seem a little strange the first time you see it, and I want to make you aware of something so that when you see the definition it doesn't seem like "what the heck is going on here". To do that, let me review some things that hopefully we all recall from the classical theory. The classical theory says: a linear Fredholm operator is a linear map between Banach spaces with closed image and finite-dimensional kernel and cokernel. Fine. And Fredholm operators are stable under compact perturbation: you can add a compact perturbation and you stay Fredholm, and the index does not change. Then, if you have a classically smooth nonlinear map, we say it is Fredholm provided that the linearization at any given point is a Fredholm operator. Again, this is classical. And then we come to this point. Let's not worry about the domains too much; I don't want to change the topology of my domains, I want to keep this as simple as possible. In that case the Cauchy-Riemann operator is Fredholm as a map taking maps of regularity k plus 1 to maps (sections of pullback bundles and so forth) of regularity k. The main thing I want to focus on is that it goes from regularity k plus 1 down to regularity k, and that is true for any k we choose, so the natural thing is to try to fit this into the sc-type calculus. So, even in toy cases, we think of the Cauchy-Riemann operator as acting from one scale space to another, where in the first scale space the k-th level has regularity k plus 1 and in the target the k-th level has regularity k. That is just this statement translated into sc-calculus language, so that's not surprising. But then something a little strange happens, namely: what are our compact perturbations? That's what we had right here in the classical sense. Now we remember that in scale Banach spaces the higher levels compactly embed into the lower levels, and as a consequence, if we are going to perturb by something compact, we expect the perturbation to be a map from the one scale Banach space to the other, except shifted by one: it moves one level up. If you write that down in this example, it just means we are tacking on a perturbation which maps H^{k+1} to H^{k+1}: a differential operator which drops you down a level, plus a compact perturbation which is just a lower-order term. So that is not so surprising either. But then there is one last complication, which is really in some sense a notational complication that you just have to push through, namely that in our applications the Cauchy-Riemann operator is not just a map between scale Banach spaces; we really have to think of it as a section.
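Before passing to the bundle picture, here is the level bookkeeping so far in one place, written schematically, suppressing the actual domains, bundles and exponential weights:

\[
E_k = H^{k+1}, \qquad F_k = H^k, \qquad \bar\partial \colon E \to F \ \text{level-preserving: } \ \bar\partial(E_k) \subset F_k \quad (H^{k+1} \to H^k),
\]
\[
\text{compact perturbation } s \colon E \to F \ \text{level-raising: } \ s(E_k) \subset F_{k+1} \quad (H^{k+1} \to H^{k+1}),
\]
so on each level the perturbation lands in a space that embeds compactly into the target level of the differential operator, which is what compactness of the perturbation should mean here.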
And so consequently, we have to think of it as a section from locally, at least, from e into e plus f. So this is our total bundle. And if this has regularity hk plus 1, then this has to have regularity hk, which is essentially what I've written right here. And then you have your perturbation. But your perturbation sort of lifts you one up in the target here. And so I've written that down here. So your sections have to go from ek to ek plus f1 plus k. And this is sort of frustrating. It's frustrating because that means in local coordinates, in a local chart, it's irrelevant. You say, OK, this is easy enough to do. But the problem is that you then have to build a bundle. And the thing is that what that means is you essentially have a bundle with two different structures. You have this structure that you have to keep track of, and you have this structure that you have to keep track of. And so the fact that you essentially have two scale structures that you have to keep track of in order to have a good notion of perturbations together with your differential operator motivates the word strong in strong bundle. So anywhere in the literature, in HWZ literature, any time you see the word strong, that means that you have two scale topologies, two scale structures that you have to pay attention to, essentially because you're dealing with bundles. Does that make sense? Damn. So this is to be able to have a notion of compact perturbations. Yes. But doesn't it automatically have two structures? If you have FK sitting over EK or something in bundle, and then you look at FK is governed by FK plus one in it. But in practice, it's actually always apparent. I mean, once you see one example, you know the more. But on the abstract level, the formula is S, G, or coincide. So if you couldn't hear Helmut, he said, it sounds so just to sort of highlight something about the polyphonic theory is that on one hand, it's built on this totally abstract level, and on the other hand, we make sure it contains all the theories that we have, all the theories that we want it to. And the downside is that in practice, when you actually write things down, you probably could get away with not having to keep track of these two structures because they would be fairly apparent. But in order to abstractify that and then have this abstract theory in which everything fits, you need to have this notion of having two different scale structures essentially on your bundle. Please. So just as, right, isn't she that you can always get a strong bundle structure on a bundle where the strongness comes from this shifting by one, like you've got in this example? I think that would be more for sense. You have to be careful that the transition that's preserved the scale thickness, but that's a difficulty. That's the taming of the tamed sort of the quality of the transition. I mean, taming the sense that it doesn't allow on stuff. It seems that it draws on comparably nice. For instance, I mean, for instance, it doesn't make sense, like you might say, well, maybe you're lucky and you've got a scale structure here and a scale structure here, so maybe instead of taking the usual diagonal filtration, we take a double filtration, keeping track of both simultaneously. But then you write down a transition map, and you see that you get some of that, but not all of that. 
There's this unfortunate fact that if I consider a map of regularity, classical regularity CK, it doesn't make sense to talk about a vector field along that map of regularity K plus 2 or something. That doesn't make sense. That doesn't hold in transition maps. And so then consequently, in your transition maps, you don't keep track of the double filtration. And if you look at what's the best you can possibly do in applications, well, the best you can possibly do is essentially to keep track. In applications, in general, is to keep track of precisely these two filtrations. And then from this, you can get a partial double filtration, which is the way HWZ do it. So in the end, what ends up happening is you end up with local charts that are written like this, where we use this left triangle here to denote the fact that this, well, HWZ will tell you that you have a double filtration, m, k, where m is, I'm going to forget these. Is this right, Helmut? Damn it. Hey, as best as you can, it does work. Right, yes, yes, of course. Yes, so you get this sort of double filtration, but it doesn't extend all the way up. And again, the reason is essentially because it doesn't make sense to talk about vector fields which have much higher regularity than the map along which they're defined. That's essentially what this problem boils down to. But also simultaneously needing to have a canonical class of compact perturbations in which to do the theory. So yes? So this is implicit function here, right? And this tamed, strong, bundle condition is a condition so that we have compact perturbations. Yes. So how is it related? We're not going to perturb it. It's already perturbed so that it's transverse. So for today, yes. Tomorrow, no. So tomorrow you will need to have a perturbation. And the point of me presenting this is that I have to tell you what the strong bundle is precisely so that you have a space in which you can make perturbations. Yes, precisely you can make perturbations. For this statement, I suppose I could remove, yeah, for this statement, I think I probably could remove this strong condition and work with something sort of less. I mean, you could probably work through the implicit function theorem. It's not clear to me where that would cause a problem. But how do you define phretom? I think it's sort of one project. But phretom actually requires that when you bring it into normal form, you use the strong bundle property. That's on the target. Yeah, that's right. OK, yeah. So inherent into the definition of an SC-Fredholm section is the notion of a strong bundle. It's inherently built into it, which you haven't seen the definition, so I'm just being vague at the moment. But I guess the one thing that I kind of wanted to clarify here, which I doesn't seem I was successful at, is that when you see strong bundles, they're going to have this weird new symbol where you're thinking of sort of this as being a total space, this is the total space over the base E. And everywhere you see strong, it's essentially because you have to keep track of two different SC structures. And the need for having these two different SC structures is the fact that you have both the Fredholm section and you need to have a compact perturbation. Hopefully I can get that message across, if nothing else well. Better luck next. It's a little bit stronger because the Fredholm power, as I said, the Fredholm power involves this and constrains the flexibility of your corner changes. 
So Fredholm is defined by looking good sufficiently nice in certain corners. And since in the target, you can only use things which preserves a double-fit fraction, you can't really completely wide sense. So it enters there a little bit. And then concerning your question, then having these perturbations comes in when you do the softness. But if you have surjectivity, I mean it seems like you really need it. Then surjectivity, but the Fredholm property involves something of discovery, in its definition. And if you don't have surjectivity, the perturbation that you can achieve this, generate. Generic, surjection, perturbation. I'm a little confused. So if you have a bundle, then you have a total space and you have a base. And if you have one filtration on the total space and one filtration on the base, why isn't that sufficient to recover the other filtration in this amount of farther direction? I think this was essentially your question earlier, right, Duzza? Yeah. And I think the answer is that if you write everything down in terms of the applications that you want to study, the problem becomes much more concrete. Why is it not formally positive? Because you have coordinate changes. So in one local chart, you have EK and FK. And the coordinate changes are, say, CK. And the fiber, it's things of quality, CK minus 1. And therefore, at level K. And so at level K plus 1, that's why you're allowed to go K up to N plus 1 at level K. And CK, the CK coordinate changes, because I have CK things at the bottom. And basically, over CK things, you have coordinate changes to CK. It's K changes. So the attention bundle is not a strong one, to put it this way. OK, so I went longer than I wanted to, and I really wanted to hand things over to Nate, so I'm sorry for running over.
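Since the talk ran out of time, here is the local picture of a strong bundle in symbols, as a summary of the m, k bookkeeping above; the triangle notation is HWZ's, but the exact indexing conventions should be checked against the lecture notes:

\[
\text{Local model: } \ U \triangleleft F, \qquad (U \triangleleft F)_{m,k} = U_m \oplus F_k, \qquad 0 \le k \le m+1,
\]
\[
\text{with the two induced sc-structures } \ k = m \ \text{(where the Fredholm section, e.g. } \bar\partial, \text{ lives)} \ \text{ and } \ k = m+1 \ \text{(where the compact, level-raising perturbations live).}
\]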
In this talk, we generalize the notion of sc-retracts to include cases with boundary and corner structure. In addition, we develop the notion of a strong bundle (of which the Cauchy-Riemann operator is a section) and state an implicit function theorem for transverse Fredholm sections with compact zero-set, which guarantees the zero set of the section is a manifold with boundary and corners, with boundary/corner structure induced from the ambient M-polyfold. [Related literature: Sections 5.2, 5.3, and 6.1 of Polyfolds: A First and Second Look.]
10.5446/16304 (DOI)
Very good, thank you. So first, right off the bat, I want to apologize for this juggling of the schedule; I have some personal obligations which are just making it difficult to schedule, so hopefully this will be the last of these switches, but who knows. Okay. A couple of people pointed out a couple of things to me after my talk yesterday that I wanted to bring up, and it was funny because they raised exactly the sort of point that I've raised with Helmut on a number of occasions, which is that oftentimes when you start talking about this material there is just ambiguity in the language; once you're brainwashed enough to understand what's going on, that ambiguity becomes natural, and then I committed the same sins yesterday. So I want to point out something about that. There is an ambiguity, in how I write and to some extent in how HWZ write, between E, which is meant to be a scale Banach space, and E0, which is just a level, namely the base level, the zero level, of that scale Banach space. In particular, this looks confusing: what does it really mean? If E is a scale Banach space, it is a whole sequence of Banach spaces, so what do I mean when I say there is an open set sitting inside it? The convention is that E means both: when I write E, it means the whole sequence of Banach spaces, but any time you see a set-wise statement like this, I'm talking about the zero level. So an open set in a scale Banach space always means: you take an open set in the base level, and it carries the scale structure induced by intersecting that open set with all the other Banach spaces in the scale. Does that make sense? Hopefully that clears things up. The more you go on with this material, the more you discover that you need more and more notation, and at some point it gets too difficult to keep track of it all, so you have to allow some amount of ambiguity to make anything understandable; that at least is my preference. So if at any point you get stuck even on these basic questions, and believe me, the first ten times I read this stuff this is exactly the sort of thing that drove me nuts, please ask. Okay? Yes? Being open in the base level E0: the higher levels embed compactly, right, so shouldn't this be equivalent to U intersect E_k being open for all k? That should also be... I mean, that will be the case. Is it equivalent? I'd have to think about whether it's equivalent or not; I'd need to see a precise definition. Any other questions? Okay. So the first thing I want to do today is recap what we did yesterday, because we're going to build on it. The first main point was that the action of the reparameterization group is not classically smooth. We had a toy problem, and we showed that this shows up in transition maps, and more generally in trying to set up Banach manifolds for Morse homology.
And then I said that the same sort of action shows up in Gromov-Witten theory and in essentially any moduli problem you want to consider: the action of reparameterization is not classically smooth. Then it was pointed out that on the total space, the ambient space of functions you want to work with, it is a smooth action if your moduli space is cut out transversely and you restrict the action to that space. So then we said, okay, what we would really like is for this action on the total ambient space to be smooth in some sense. So we introduced scale Banach spaces and scale differentiability, and as a consequence we had two key facts. One is that the reparameterization action is now sc-smooth, and the other is that the chain rule holds. The chain rule holding basically means we now actually have a new notion of differential calculus. And the reparameterization action being sc-smooth is nice because it means you can build transition charts with some notion of smoothness between them, so you can build something like a scale Banach manifold in some toy cases. The last thing I wanted to point out, and I did mention this last time: for me, if someone says a function is smooth, that always means C-infinity. I know there are people for whom smooth just means C1, but for all of my talks, smooth means C-infinity, and in particular sc-smooth means sc-infinity, not sc-1. Okay? Then you might ask: did we actually show that the reparameterization action is sc-smooth? Strictly speaking, no, but the proof carries over; you just iterate it, and then you can actually prove that the reparameterization action, at least in the toy case I presented, is in fact sc-smooth. Any questions about that? Okay, new material. Today we want to parameterize, let me say it this way: we want charts near nodal maps, and we are thinking of some sort of Gromov-Witten setting. We are not going to worry about quotienting out by automorphism groups; we just want to understand what charts near nodal maps should be. The image of the map we are thinking of looks something like this: two spheres, a map from a nodal sphere into some manifold, or into R2N, say. Even to state this, we are implicitly assuming something in the background, namely that we know what it means to be near a nodal map in a reasonable sense. So I'm already assuming we have some sense of what it means to put a topology on the space of nodal maps inside this larger space. In particular, you would like it to be the case that near this nodal map is, say, this map, which is not nodal, where you've glued it a little bit; we'd like this to be close to that. That's what we mean by near, and that in particular is what we want to find a chart for, assuming our chart is centered at this nodal map. So that's our main goal for today, and for how this appears in the polyfold framework. And the first step, I guess, is to take this picture here.
For simplicity, we want to cut away as much of the topology of the problem as we can, and if we do that it turns into a problem that looks like this; my drawing on the fly is a little poor, so my apologies for that, but I hope you can at least see it. So this is what I want to do: I want to chop away the interesting topology and turn it into this local problem here, and we will see, first with pictures and then with more precise statements, how this ends up being useful. What I would like to do first, though, is a little warm-up problem, which goes like this. Definition: for fixed delta greater than zero and k a natural number, define H^k_delta on R cross S1. I'm going to use subscripts to denote coordinates on these spaces when it's convenient to be precise. So H^k is W^{k,2}, and I define H^k_delta to be the set of all f in H^k_loc such that e^{delta |s|} D^alpha f is in L2 for all alpha with 0 <= |alpha| <= k. The norm in this case is the sum over |alpha| between 0 and k of the integral over R cross S1 of e^{2 delta |s|} |D^alpha f|^2 ds dt. And then, homework: for a strictly increasing sequence 0 < delta_0 < delta_1 < delta_2 < ..., all of them less than 2 pi, show that G, with k-th level G_k equal to H^k_{delta_k}, is an sc-Banach space. There are some hints in the lecture notes I have online, and of course you can ask Nate; he has worked through this, so that's an option as well. That is the key step. The hint I provide in the notes is that the exponential decay, in particular the fact that the weights strictly increase from level to level, is crucial to guaranteeing the compactness result. As an addendum, an open-ended question: explore what other possibilities one might have. For instance, it is important that the deltas strictly increase (they don't have to go to infinity, although that would also work), but do you need exponential decay? That also raises a natural question: why do I have 2 pi here? It seems completely strange, and you can choose a different cap there if you like. Because I'm talking about problems in Gromov-Witten theory, this 2 pi ends up being the relevant constant; for those who have done more analysis in the subject, this number has to do with the spectral gap of the appropriate asymptotic operator, and 2 pi is what shows up in the Gromov-Witten case. If you're working in SFT or Floer homology or something, these caps have to be different, and there will be corresponding changes. But in any case, this is the prototypical scale Banach space that occurs in a lot of the polyfold literature; I think everything is a modification of this. Good, okay. So now what are we going to do? Is there some intuition as to why that's the norm you write down? What else could it be? Starting from where? Starting from here, I would say this is the obvious norm. Okay, so why is this the choice?
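Before answering, here is the warm-up definition collected in one display, with the scale written out; the indexing of the weights delta_k is mine:

\[
H^k_\delta(\mathbb{R}\times S^1) = \bigl\{\, f \in H^k_{\mathrm{loc}} \ : \ e^{\delta |s|} D^\alpha f \in L^2 \ \text{for } 0 \le |\alpha| \le k \,\bigr\}, \qquad \|f\|_{k,\delta}^2 = \sum_{|\alpha| \le k} \int_{\mathbb{R}\times S^1} e^{2\delta |s|}\, |D^\alpha f|^2 \, ds\, dt,
\]
\[
\text{homework scale: } \ G_k = H^k_{\delta_k} \ \text{for a strictly increasing sequence } 0 < \delta_0 < \delta_1 < \dots < 2\pi,
\]
the point of raising both the regularity and the weight from one level to the next being the compactness of the embeddings of G_{k+1} into G_k.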
So I would say that the point is that you have a non-compact domain and if you want to have a scale bond arc space then you need compact inclusions of higher levels into lower levels and so you can't just, you need some sort of, you need some sort of, increasing the regularity alone won't do it, you need some additional information. So the exponential weights, the, like the exponential weights that we put on sort of heavily weight things on the outside and that, well, I mean you can tinker around with it, yeah. You can tinker around with it and then sort of see that that guarantees compact embeddings. Another way to think of it is that in some sense, morally, those exponential weights kind of force, in many ways, allows you to sort of treat the problem as if you're dealing with instead of an infinite domain, sort of a bounded domain. Why don't I do even more of a violet cutoff? Well, you wouldn't, I don't think you would want a cutoff. I mean, I think double exponential weights. Yes. So again, so that my, my, my addendum to my homework problem was explore the necessity of this sort of exponential, right? And so, I mean, so the solution is basically it doesn't have to be exponential, it could be a lot weaker, it could be a lot stronger, but there's an important, there's an important key in there, but you choose exponential, I think one of the reasons you choose exponential is that this plays very, very nicely with the corresponding asymptotic analysis. I mean, there's a presence. Yes. I mean, you want data to do presence. Sure. And then it's sort of nice for that sort of set. If you had like a double exponential, you'd be filling out some of the things that you wanted to calculate solutions. That's the solution to the exponential, but not the sort of double exponential. But if you just want to construct a space. But if you just want to construct a space, you have a lot of freedom. But if you want to construct a space that's useful for applications, then you have to be a little bit more careful, and they all tend to follow this form. But these are all good questions. So it's certainly reasonable though to be, so maybe that ends the pseudo homework with courage from here. What's this definition? For this definition, the most natural thing to think of is, well, let me draw the domains here in, in, right now, and then hopefully that'll start to clear things up. So, okay, so I'll draw, so let me draw what I erased. So just, you have to buy time. That's better, Joe. That's what I expect from you. It is supposed to be a cylinder. A half cylinder. Okay. So here's what we have, right? So I said with a, with a picture that I erased previously, we had a nodal curve that you might sort of see and grow a lot of witness. So then I said, okay, I want to forget about the part sort of the topology as much as possible, reduce it to sort of this, this pair, this total disc pair basically, right? So this is what I have. And so then what you do is you say, well, look, this is a disc, but now I have this nodal point. I want to treat it like a puncture. And so if I treat it like a puncture that I have a holomorphic coordinates, which take me to sort of a positive half cylinder. So this ends up being r plus cross s1. And then over here, this is going to be, you do the same thing, but in sort of the opposite direction, r minus cross s1. 
And so now, if this is your domain and you have a map defined on it, you can pull it back to maps on these two half-cylinders, and then in particular you'll have an asymptotic matching condition at the puncture. So then, because our goal for the day, although I erased it, is to find charts for neighborhoods of nodal curves, the right thing to do is going to involve pre-gluing maps, so I have to define those for you, and I like to start with the picture. Here's the picture: the idea is that if you have the nodal disc map and you pre-glue, you end up with a cylinder of finite modulus. That's the picture I'm drawing here, and in fact I'm going to give it a name: it's going to be called Z_a, and I'll define it precisely on the board. Z_a is the disjoint union of [0, R] cross S1, with coordinates (s, t), and [-R, 0] cross S1, with coordinates (s', t'), quotiented by the equivalence relation which identifies (s, t) with (s', t') whenever s = s' + R and t = t' + theta, where R = e^{1/|a|} - e and a = |a| e^{-2 pi i theta}. So what am I doing here? I don't think this is significantly different from what's done in the McDuff-Salamon book: the idea is that if you have a nodal pseudoholomorphic curve and you want to find nearby pseudoholomorphic curves, there is an argument you make. You find this pre-gluing map, which allows you to construct nearby maps from nodal ones, provided you give me this complex gluing parameter, which I'm calling a. Which thing is called the pre-gluing map? The pre-gluing map hasn't been written down yet; I'm about to do that. Okay. Why is it called pre-gluing? I'm using Katrin's terminology, or at least this is where I acquired it. The idea, my understanding of it (she can yell at me if I'm wrong), is that in something like Gromov-Witten theory, or in the various Floer homologies, there is a gluing map, and the gluing map should be understood as taking a broken solution of your problem, plus a gluing parameter, to another solution. Pre-gluing says: give me two a priori non-solutions, but put them together in a function space that is close to where the solutions should lie. It's called pre-gluing because usually you take solutions of the nodal or broken problem, you pre-glue them together, and then you run a Picard iteration or a Banach fixed-point argument to show that there is a genuine solution nearby. Is that roughly correct, Katrin, or am I abusing the terminology? No, that's correct, and it's not my terminology, I think. Well, that's where I learned it. It's fairly standard language, and there's a good reference for it. Okay, very good.
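For reference, the glued domain again in one display, together with how the gluing profile behaves; the profile e^{1/|a|} - e is what went on the board, and other profiles appear in the literature, so treat the exact formula as one choice among several:

\[
Z_a = \Bigl( [0,R]\times S^1 \ \sqcup\ [-R,0]\times S^1 \Bigr) \big/ \ \bigl( (s,t) \sim (s',t') \ \text{whenever } s = s' + R, \ t = t' + \theta \bigr),
\]
\[
a = |a|\, e^{-2\pi i \theta}, \qquad R = e^{1/|a|} - e,
\]
so that R tends to infinity as a tends to 0, the neck becomes infinitely long, and the a = 0 case is read as the original pair of half-cylinders, i.e. the nodal picture.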
So, right. What's going on here is that we have to define the pre-gluing map, but to define that map you need a domain for it, and to define the domain you have to do a bit of this pre-gluing on the domains themselves; then we will define maps on this new domain. So let me do that now, and I can be precise here. Given a in C, and we're really thinking of |a| near zero, and given u^+ and u^- as maps from R^+ cross S1 and R^- cross S1 into, for convenience again, R2N (we can change the target later if necessary, although with retracts you can do some interesting tricks), we have the pre-gluing map of u^+ and u^-. It's defined as a map from Z_a into R2N. So you give me a gluing parameter and two such maps, and I construct a map from this finite cylinder into R2N, given by (+)_a(u^+, u^-)(s, t) = beta(s - R/2) u^+(s, t) + (1 - beta(s - R/2)) u^-(s - R, t - theta). Here beta is a cutoff function, beta of s, such that: one, beta prime (that's the derivative) has compact support; two, beta prime is less than or equal to zero; and three, beta(s) + beta(-s) equals 1, identically. With this beta, the gluing map, or rather pre-gluing map, is well-defined. Of course, if you haven't done any gluing analysis before, something like this looks really unpleasant; if you have done gluing analysis, you can see what's going on fairly clearly, I think. Basically, on this side you have a cutoff function and on this side you have one minus a cutoff function, which means you are interpolating between u^+ and u^-, modulo these shifts in the corresponding domains. And in the picture, that's essentially what happens; Katrin actually drew a similar picture during her lecture. The idea is that you take your two domains, you shift them, you add in this relative twist, and then you identify this truncated region, which we defined to be Z_a. Then on the far left-hand side of Z_a it is u^+, which is defined on the top part, and on the right-hand side of Z_a it is u^-, which is defined on the bottom part. Does that make sense? So this is just pre-gluing, and everything here is rather smooth? This is smooth for a away from zero, I think? No, I'm not going to say that; I have to think about what smoothness even means in this case. Smoothness is a little strange here, because the domains of your maps are changing, so you already need to build a space in which it even makes sense to compare them. So I won't make any claims about smoothness. Good. And so... you mean the dependence on a, right? Well, no, I mean, how do you even compare them? So suppose I take...
Suppose you even fix U plus and U minus, and then I compare like A equals one and A equals one-half, right? The point is that if you look at the definition of Z, like your domain is changing, right? And in fact, the way we've carefully defined Z, even if we consider like A equals I and compare it to... Or let me write E to the I pi over two compared to E to the say I, say for instance, right? Even in this case, with this careful definition, your domains have changed, right? They're all diffeomorphic, of course, even in this case, they have the same modulus, but your domains have changed. And so all I meant to say was what does it mean to sort of say smooth at this point already? Because these things are... The domains of your function are changing. If you want to compare two functions, you'd want them to have the same domain, I think. Any other questions? Okay. So now... So now I want to say... I'm going to try and keep this. So... Sorry, can you repeat what the role of this gluing parameter A is? Pardon? Can you repeat what the role of this gluing parameter? So it's... I mean, so as A goes to... So when A equals zero, it's as if your maps haven't been glued at all. So it's the nodal map. And so then what A controls is, well, it controls in some sense how you construct... It essentially controls how you construct the ZA, which tells you how to construct, say, the modulus of this neck in between... More than that, I think the modulus of the neck at this nodal point. Why do you have to twist? Well, because... Well, for one reason, if you didn't twist, then you'd have boundary, and then your Delaney-Mumford space of Riemann services would have boundary, and you'd know you'd done something wrong. I mean, there's just... I think if you want to get all nearby maps, you have to include that twist parameter. So here's some ideas. So the first idea is the right idea. And, yeah, I guess I should say... Yeah, I should say... So Catrin's told me this about a million times, and it was only yesterday that the light bulb went off in my head that this was really the right idea. And that is... It takes a while for stuff to get in. Let me tell you. The idea was to use this preglue map to define a neighborhood of a nodal map. And I bring this up specifically because during Dues' talk, someone said, well, wait a second. What's this topology on this space of pseudoholomorphic spheres or something? Once you compactify, what's this topology? And the immediate answer was sort of like, oh, that's... I don't want to talk about that. And that's a perfectly valid reason because when you write down... Because the way you would define the topology is essentially via Gromov compactness. And you read Gromov, the definition of Gromov compactness anywhere, and it's complicated. And so you kind of don't want to do that. However, if you look at the preglue map, if you look at the preglue map on this ambient space that you're trying to build, then it essentially, when you restrict it to your moduli space, it gives you the Gromov topology. And it's not so difficult to see how this preglue map ought to give you sort of neighboring curves. You sort of say, okay, I take a nodal map and I give a gluing parameter, and then I find these sort of approximate non-nodal curves nearby. And that gives you a bunch of sets. And that bunch of sets defines a topology for you as long as everything's sort of open in the suitable... And it's sort of a suitable sense. 
And it's, I think, a very clean way to sort of define the topology or what you expect the topology to be in this sort of ambient space. So this is a good idea. So here's sort of... I have to be a little bit careful. It's right but wrong. This following idea, which is to use this preglue map to build a chart for these neighborhoods. And so let's see what I want to do. So I can even make that more precise what I mean with the second statement, as this will be useful to keep in mind and sort of explore. So what do I mean? Well, let's say via... I have a map from C cross E into capital Z. And so C takes this input A, which is a gluing parameter. And I'm going to use C. In reality, we're thinking just being an open neighborhood of zero. But just for convenience, let me say C. And then E consists of these pairs, U plus and U minus. And then we want to take these to their image by the preglue map. And then I want to define these function spaces for you. So E equals the set of all pairs, E, U plus minus, defined on R plus minus cross S1. Sorry. What is capital Z? So Z is another function space, which I haven't defined yet, as I'm busy defining the first one. Such that there exists a constant C in R to N, such that... Let me try to write this clearly. E, delta naught, absolute value S, D alpha U plus minus minus C is in corresponding L2 for alpha and absolute value 0 up to 3. And Z is the union of A in C, where again I'm being sloppy with my notation. H3 maps from ZA into R to N. Do I want to regard what? Oh, right. What kind of object is this by definition? I mean, so each one of these is a set and I can take a union over a set and I get another set. So it's a set at the moment, but you can see that what should Z be in this context? If we actually capped off our domains by these sorts of disks, well, then we would think of Z as being the function space from a bunch of different preglue... Yeah, a bunch of pre-glued Riemann services, basically. In particular, if you throw on some additional mark points, your modulus can change. And so now you have this very large function space of H3 maps from each one of these pre-glued Riemann services, but they each have a different gluing parameter, so their each thought of is completely different. Does that make sense? Just a disjoint union. Well, none of them are contained in each other, so yeah, I mean, I don't think it matters that much, but maybe I'm wrong. I can put a disjoint union. Maybe we can put a 5 over the parameter a. That's true, yeah. That's true also. Any further questions? Yeah? Why are you taking alpha to be bounded with 3? Right, because I want to try and follow the Gromov-Witton paper as closely as possible, and they do that because these functions here then end up being in H3, and then that guarantees that they're C1, and C1 is important once you want to put on transverse constraints, these sort of transverse hyper-services. Right. Further questions? Why don't I just take it to be much bigger and then not lower? How much bigger? I don't know. Like, you can't. Arbitrarily large in what sense? It's a H1 build. I see. It works this way. Oh, I see this number. Yeah. That's a minimum regularity. Yeah, this works. Although I would imagine that that might change things in terms of- It takes us k anyway, so one part of H1 build and it's just also- Yeah. I mean, I wouldn't be surprised though if somewhere along the lines making that choice forces you to do some extra work. Say for instance, proving convergence in terms of Gromov compactness. 
Now you have to make sure that things converge in sort of- It's not enough to even sort of converge in sort of low regularity. They have to converge in higher regularity. I mean, I don't know how the argument goes exactly in the polyphold framework. It's not clear to me, I think. But in general though, the point is you want this to be as low regularity as possible just to make sure you're capturing all your curves. Because otherwise you could run into the same mistake that you already suggested, which is put in a double exponential and now a priori or you're sort of excluding something. So keep things as- By making this be large, you're restricting your ambient space. And just in general, being cavalier about this, you might lose some information. In this particular case, yeah, you're right. Probably it's not a problem. Yeah, you have to make a choice. This way it's easier to write than it really is. Good. Wisdom, yes. So just to recap, these cylinders, right? So what was it again? You had two nodal disks included in the interior. And these represent folder coordinates near the nodes. So what happened was, right, so we started with a pair of nodal disks. And then from that you have, you have holomorphic coordinates around each one of them that you fix. And that gives rise to these two half cylinders. And then those two half cylinders plus a gluing parameter a gives you za, which is the sort of cylinder of finite modulus. And then from that, once we had this sort of domain, we also wanted to find if we had maps defined on that pair of nodal disks, we want to know what's the corresponding map be defined on this za. And we arrived at that with this formula right here. And then now, of course, we said that there's some ideas running around here. And so one is to use this preglue map to define a neighborhood of a nodal map. And so what I'm saying is, in some sense, this is sort of part of our neighborhood. This ends up being part of your neighborhood of a nodal map. And I wanted to be able to sort of make this precise here. And we had the second idea, which is sort of right but wrong, which for the most part we should think of as wrong, and I'll show you why, is to use this corresponding preglue map to sort of say that, I mean, if this preglue map defines a topology, that's essentially telling you that you're finding you're finding all nearby non-nodal curves to a given nodal one using the preglue map. So if this defines a topology for us, which it does, then you're finding all nearby maps. So why not try to make use of this preglue then to actually just parameterize all your nearby problems? Sorry, all your nearby non-nodal maps. I thought those, the preglue maps are not solutions. They're not. Correct. Correct. So which of my statements is now confusing? So why are those useful curves to consider? I think it's when you say a neighborhood of a nodal map, you really think still in the ambient space. Not a nodal Jacob morphic map. Right. Yeah. Also, you have to, I mean, together neighborhood, you have to continue. See, I just assigned it. You have to sort of add in the disks that you forgot about. That's true, although I would say that the most of today's talk is to sort of try and forget as much topology as possible and sort of, in sort of. When you're talking about the neighborhood of the maps, if you're a nodal map. I have a neighborhood of a map, of a nodal map, a nodal map. From what? Right? I mean, I could have changed my definition from a pair of, to a node, to a two disks. Okay. 
Is it completely obvious that I then want to discuss neighborhoods of things with many nodes? I'm not going to run into additional trouble with like different errors from the different ones, multiply. No. I mean, you can see from the, I mean, you can see from the, you can see from this, this definition of sort of pre-gluing here is that it's purely a local, it's purely a local phenomenon. And your nodes are always sort of separated in a, you know, in a sort of, yeah, your nodes are always sort of bounded distance away from each other. It's sort of a thick, thin type decomposition sort of tells you this. And when you make it holomorphic, it will not be so local. That's a different question. So at this point, all were, yeah, I mean, this issue sort of, this issue, right. They're not homomorphic maps. Yeah, nothing's holomorphic. I'm building ambient spaces, right? I mean, in some sense, all we're doing for this entire first week is building ambient spaces, right? And with, you know, appropriate bundles and structure and so forth. And so a benefit is that, yeah, we don't run into these sort of, you know, we don't run into these problems of what associativity in terms of gluing, for instance. Good. So my one last question. And the D is there to make, like, to account for the shift? Because it's only in the alpha equal to zero case. Right. Oh, right. So what's going on here? So that's actually a good point. So if you look at this, if I don't put this sort of C here, then this function space is essentially the same one as the practice one. I mean, modulo truncating your domain to sort of half cylinders. So why is it the case that we have this C in here? And what does that do for us? Well, what that does for us is that if you think about, if you think, okay, I've got a nodal map, sorry, a nodal, a pair of nodal disks as my domain, and I think about maps from there, for simplicity, once you're writing down problems, you're going to sort of assume, say, that you're basing your problem when the image of that node at a base map is going to go to zero. But then you're going to want that node to move around. And that's what this C allows for, basically. Right. And so, yes, I mean, if I were trying to construct purely toy problems for this lecture, I could have said, okay, let's kill this. But I really want to keep things as close as possible to Gromov-Witton, because that's sort of, I mean, I think that should enable Nate to recycle a lot of the stuff that I've done and even write down charts in that case. So now, I want to address this point up here, which is this second one, it's right but wrong, and I want to emphasize the moment that this idea is wrong for sure. And so why is this idea wrong for sure? Injectivity. Injectivity. So, yeah, that's exactly right. So, Katrin even mentioned this yesterday as well. If you, when you look at this problem here, when you would try to do this, well, if you'd want to use this pre-gluing map to build a chart or a parameterization or whatnot, you'd want it at least to be, you know, you, at the very least, you'd want it to be injective. And we have a problem that because of the way that this pre-gluing map is defined, it sort of truncates, you know, it truncates your domain, you sort of lose a ton of information. And so, consequently, this map ends up being infinity to one in general. General being A non-zero. Right. And so infinity, infinity to, and infinity to one map is a terrible idea then to use as a, as a parameterizing map. 
So, where's the, in this case too, you're saying? Yeah. Yeah. We lose data? Yeah, absolutely. Where's my hook? Hook. I guess I get to be a pirate for my talk too. All right. So, let me move this down so that we can still see this, this definition up here. You can see that what's, what's going on here, right? You know, so this bit here is U plus. So, U plus is defined, say, certainly on this region here and, and, and this, and beta in this region, or this region here I should really say, is one. So, U is getting one, but U minus, sorry, but, but one minus beta is zero in this region. So, your map defined on this region, on this region right here is just U plus. And the same argument sort of tells you that on this region here it's just U minus. But when it's just U minus, then U plus is defined, U plus is defined on all of this region here. So, all this information beyond here is just being killed. It's gone, right? It's being killed because this is a cutoff function. I erased it, but you know, it looks like this, right? It cuts off U plus after some finite amount of time and then we just, whatever, whatever the map is in this region, right? That make sense? Good. Okay. But, and, it's obvious to see that there is no way to not lose information when you do this. From this, from this setup what we've done so far, I mean, you're, you're absolutely forced to lose information. You always, you always have to truncate here. I mean, how, how can, you know, yes, it's always gonna be the case that you lose information from this setup. Actually, I, I, I, I can, crap. No, no, no, no, no, no, sorry. No, because I, I, I got really confused about this because there's another way of making pre-gluing maps by which you don't lose information, which is you rescale everything like that, that whole entire, in half infinite cylinder, you can rescale that to like half of the cylinder, right? And then just attach the two to each other. So, puzzle, what's wrong with that as a chart map? Ah, that's a good question. Hold my problem. Ah. Ah. That's excellent. I love this. Okay. So, I said, um, so I said right but wrong and now we've seen right. Oh, we've seen why is this idea here is wrong. So then the question here is why is it kind of right though? So, so the key, the key, the key thing that fails here is that you sort of, in this setup anyway, is that you lose a bunch of information and so then there's a, a fix that one would like to try to do and that idea is to find a way to keep track of lost information. And so, the way we're going to do that, and I want to write this. Okay. So, I'm going to define, okay, this should be good. I'm going to define another cylinder, right? And this is a doubly infinite cylinder and I'm trying to draw it sort of suitably parallel to the other ones because it's related. I'm going to call this C A. Actually, let me, let me fix that. If I were only considering this finite portion of the cylinder, this is then essentially going to be Z A, but if I want to consider sort of the whole thing, the whole thing is called C A, right? So, Z A is still my finite cylinder. C A is my doubly infinite cylinder. I can write a definition for C A and you can even sort of, you should be able to guess it from the, what we've seen so far. R plus S cross S1t, disjoint union, R S prime minus cross S1t prime, modularly equivalence relationship, which is the same as the one that we had before, ST is related to S prime plus RT prime plus theta. 
And then, and then, I can define theta minus gluing, sorry, the minus gluing, or rather minus pregluing. What is it anti-pregluing? That's what we're calling it as the following. Sure. Since we're not talking about holomorphic curves at all, they just ease a map which looks like the, you've broken that, namely the one that takes that central circle to a point. So, why are we mucking around with making the broken map? It's already there. What? I mean, we don't have a lot of classical looking maps. We don't have holomorphic maps. You are a lot of classical looking maps. Right. What I'm trying to do is motivate, I mean, what, here's what's, okay, so the goal of my talk, which I'm sort of running short on time here, is to sort of say, look, I want to introduce sort of a sequence of things. I mean, I want to introduce the pregluing map, which is sort of standard in this sort of analysis, right? So, you're not going to, you're not going to get away from doing some sort of pregluing map. Okay. And then what I want to do is say, okay, well, what we'd like to do is to use that map to make a parameterization sort of nearby something nodal, and then the reason that you can't do that is that there's information loss. So, now what I want to do is keep track of that information. And then, the punchline of this is that if I keep track of that information in kind of a clever way, and the clever way is sort of the way that HWZ defined it, what you can do is in this, in this, what ends up happening is in this weird subset of a scale-bonox basis, it turns out, it's an incredibly strange subset. I mean, it's very difficult, I mean, I think it's anyway, it's fairly difficult, jumping dimensions and so forth, but nevertheless, it has a straightforward smooth structure on it. And this provides, this then, with this, you can then build essentially something like a manifold, right, which is a smooth manifold, an SC smooth manifold in some sense, which has enough structure to, I mean, it has a differentiable structure, you can build a Fretholm theory on it, et cetera, et cetera. And so- Was there any way I can answer my question? Yes, because the, because the, what you're suggesting is I'm going to say, what next? How do you build a Fretholm theory on your problem? How do you prove a perturbation theory on this? How do you prove regularization with any other option that you're going to try to do? And the point is that I'm following HWZ, so I know all that stuff is going to appear. If you want to make a change at some point, then I'm going to say, okay, you've got another thousand pages to write. That's, I mean, that's, that's, that's, I mean, that's how, that's how I would think of it, but Dewey probably has a more polite answer. If you have a juxtaposition, you just sort of say, you know, one half views, views, you plus, and the other half views, you minus, and don't worry about, you know, that sort of converging to the same thing, and then you can go to the same thing, assuming they go to the same thing, then you lose control of the analysis. I think that's the problem. And the whole idea is you've got to stay in control of the analysis, and these cover functions allow you to do that. That gives you some kind of smooth transition you can, you can estimate from the one key note. I mean, we, you know, in our book, we use the, we use that's the plus gluing. I mean, the minus gluing you can use in polyphonic theory. 
But with the plus gluing, you can show that you start off with the polymorphic thing, then the plus gluing polymorphic thing is sufficiently close to being polymorphic, but you can process it next time. But if you need to, you need to process it. That applies to last minute, if you should. So you need to remain, I mean, what he's saying is you need to remain in some allogistic framework, and if you don't like the one they've done well, you're welcome to make the wrong decision. I think that's also minus. You have a tattoo? I think you subtract the mean, you know, in this case. No, so this minus should factor through, so this becomes a plus, and this should be a plus. This is a minus? No, because you've got a minus minus, so it's one minus beta times the A. No, you have to correct the end, you have to subtract the mean value from the end table. So this is minus from the last time you gave this thought. Oh, it's the beta, there's a beta here. Yeah, you're right, you're right, you want them to add. Yeah, thank you. I've lost track of your location. What is G? What is what? Pardon? I'm just worried to do each of these pieces. It's not obvious. I'm getting there, right. So, yeah, even if you've seen Prigling before, this should look terrible, and so what I want to do is try and make this a little bit more understandable. So this looks awful, and there's sort of no way around the fact that it looks awful, but once you kind of open it up and sort of see where it comes from, it makes a decent amount of sense. Yeah. The definition of VA, the first one is your class, right? Sorry, what have I made a mistake on? You've read the plus, you've registered that in the Aval. Thank you. And it should be R over 2, presumably not R over 2. Yeah, yeah, that's right too. There's like a hidden matrix. Yeah, yeah, there is, I'm showing that in a second. It makes life a lot easier, right? So, what do I want to say? All right, so I guess the first claim is that sort of this keeps track of sort of the lost information. I'll make that precise in a second. If you kind of want to forget about the fact, one big simplification, and we'll see this in just a second, is that if you're just seeing this for the first time, just kill these terms, these average value terms. Just pretend they're not there and write the same thing down, and that at least gives you a toy problem to tinker around with, right? I'll tell you why those terms are needed in a second. Is that any different from the thing that was there before? I mean, the one that's right above it, it's clearly different. And I would say by some sort of obvious symmetry. Yeah, there is a lot of symmetry. The idea is to keep track of precisely the lost information. So, whereas on one side you see a beta, you'd want to see a 1 minus beta here, and where you see a 1 minus beta, you'd want to see a beta. So, I mean, yeah, there is symmetry. It's designed precisely to keep track of lost information. I can show you how it does that. So, this is giving you the other side? It's giving you the, it's keeping track of the portions that were killed by your cutoff functions previously. So, you could have done like one interpolation or the other, and this is the other? The point is that you, the point is that you take, the point is that you interpolate between, you sort of, you have a, you have a, you have a domain, which you might think, you have two domains sort of split into two pieces. 
And your, and your pre-gluing map sort of keeps track of this information and this information and then interpolates nearby. Sort of the minus-gluing map keeps track of this information and this information and then interpolates in between. And my claim is that you can actually reconstruct the first two maps from the latter two. May I make a suggestion? We need to end promptly at 315. Why don't we let Joel finish what he had in mind and then, you know, we have our wonderful TA and answer questions. And, and of course I can, I can be assaulted with questions as well. Um, the voice upstairs. So, I don't know. Is that a hofer reference? Oh, the voice upstairs, right. Of course. The helmet's so tall, I just thought. Okay. So, so here's, so here's a trick and this trick is really, I think, sort of where this comes from. So I'm going to write this as a matrix. I want to keep track of both pieces simultaneously. And then if I do that, well, what do I have? I have beta sub a, one minus beta sub a. Beta a minus one, beta a. The identity map, zero, zero. This is the shift map corresponding to a, u plus, u minus, plus zero, one minus two beta a, u plus, u minus. So now I have to tell you, you can kind of guess beta a of s is equal to s minus r over two. And if the shift map of a applied to some map, u evaluated at st, is then u of s minus r t minus theta, assuming I have, are those pluses or minuses? The beta a line, you missed a beta. I did, I did. Thank you. Those are minuses. Good. Okay. Writing it this way, right? So, okay. So one, first step is to just sort of verify this sort of is true, that you had this sort of equivalence this way. Second, this, I should have an a, v in here, right? Second, I said, assume the a, v zero, the terms are zero for a toy problem. And then if you do that, you just have this matrix. And the first thing that I want to point out is that this thing is just, is just sort of obviously invertible, right? Because compute the determinant and it has to be one because these things are cutoff functions and the way these, and the properties of these cutoff functions have. So you invert this. And then this, and then this operator here, I've got an identity and I've got the shift operator, which is also clearly invertible. So writing it this way, it's clearly invertible, I think, for any fixed a. And then, okay, it's a touch more work and the, and the, and the, and the, and the lecture notes, there's a homework problem and I provide the hints which allow you to, allow you to walk through that this is still a bijection, even with these terms added in. In fact, it's a linear bijection. So, so this, so I think what this sort of, what this sort of does in some sense anyways, at least cleans up this mess. I, I, I prefer it to be written this way. And then, so then I can say, then I can say, yeah, sorry. I'm sorry. Through my lecture notes, by the way, this will run over for just a couple. Thank you. Just in case you're forgetting, you promised to say what the point of the averaging terms were. Right. So, I thought I said it at least, I thought I said it once. I can say it again. The purpose of the averaging terms, then, is to allow, is to allow, right, we have a question up front, why we had this constant C in probably the function spaces I just erased. Oh, good, right, I put them up here. We wanted to allow these, we wanted to allow this, this, this constant C in here because you'll want it to, you want it, you want it to be the case that your node can move around, right? 
And consequently, when you write down these, when you write down this map, you need these averaging terms to actually allow you to, allow it to be the case that you, you can, yeah, you still allow it for the case that, that node can move around. So, you'll see why and sort of hopefully. If you do the same for close theory, it would not be there. Right. Because essentially, your orbit there in the end is fixed. Yeah. So, so what I want to do is, I'm going to say, so, so here's, so here's basically what happens. Why is sort of any of this relevant for anything? So, we had this idea. We had this idea that we would like to use this plus gluing to build a chart for our neighborhoods. And the problem that we had was that it was infinity to one. So, then I said, well, let's keep track of this lost information. So, now you keep track of, of the, of the plus gluing and the minus gluing. And we said that that was a bijection. And that's good because what that means then is that if I set O to be equal to the set of all triples, say, A U plus U minus such that, such that O minus A U plus U minus is zero, then the plus gluing maps essentially O into, into, into Z bijectively. This is for A not equal to zero. And so, so now at least we have a bijective correspondence. And once you have a bijective correspondence, then the next natural question is to say, well, is it possible to give this space any sort of differentiable structure? And, you know, you look at this and you sort of say, okay, the zeros of this map, you know, the zeros of, of, of the minus gluing map, you know, that's going to be some set. Who knows what that looks like, right? And a priori it might be complicated. And so why should that have any sort of differentiable structure? And the magic is, is it, it has an, or rather it, let's say, it supports the Se calculus. And in just a brief second I can sort of, I can sort of tell you why this map here, we can, we can call this map here, um, say the box dot gluing of U plus U minus. And then what we said was, well, this thing is invertible. And so now what you can do is you can define this map, R of A U plus U minus equals, so it's basically, you take box dot gluing, you compose that with sort of the projection to the first factor composed with box dot inverse, right? And this, let's see, sorry, otherwise this is, um, A U plus U minus if A is equal to zero, A not equal to zero. So you define this map and it ends up being the case, just to go over this very quickly here, is that this map has a very nice property which is R composed, R is equal to R, and it's SC smooth. And a consequence of this, and it's a very fast consequence which I would go over if I wasn't already five minutes over my time, is that as a consequence of these two simple facts, you can then define a notion of, of an SC smooth function defined on, say, the image of R. And the image of R is precisely the set of points where the minus pre-gluing is zero. And you, and you do that simply by saying, well, F defined from, say, O into some other space Z, well, I don't want to say, well, into any other space say O prime, is SC smooth precisely, or say SCK, precisely if F composed with R on a slightly larger space, is SCK. And so the point is this, this R acts as a retraction, and images of retractions are called retracts, and if there also happened to be SC smooth, then that guarantees that they sort of support an SC calculus. And I'm going to talk about this more in my, in my, in the beginning of my lecture tomorrow. 
But the whole point of this, right, the whole point of this, what is essentially, right, so, okay, so I can summarize very briefly. We said, right, we said that what we really wanted to do was have a parameterization or a chart nearby our nodal maps. And what we did is we said, well, the pre-gluing map is infinity to one, so we, we made use of this minus-gluing map in order to cut out a bunch of garbage and make that map one to one. But now what I said is if you're clever with this rewriting, you can see that, that that set can be written as the image of this SC smooth retraction, which I'll talk about more next time, and as a consequence, supports the SC calculus, meaning we have a notion of a differentiable map from one set of this form to another. Which, and these then provide the, the models, the local models for M polyfolds. But we'll talk more about this next time. Okay. I can answer a couple of questions, yeah. Okay. We have a few minutes for questions, but again, remember that, you know, we have many voices upstairs. Including guys. Any questions? And the fact of a usual manifold that the image of a smooth map is squared itself as a manifold. That is, that is, so on so last zero. It's a many, something important. But in the SC calculus is something more general. Yeah, but I'm not manifolds also true. The calculus, they can have locally very much. They still have tangent space. You take TRGU, that's the time and space. And it doesn't depend on the choice of the retraction. The image is the same. So it's a definition. So you can even have one dimensional stuff, or five dimensional stuff in infinite dimension space, which is the right dimension. The other questions, yeah. Sorry, just a question. TR1 is one. Oh, projection onto the first factor. So if you have, but you sort of zero out the second term, I should have said that P R1 of x, y equals x zero. What's the image of R? So R, right. So R. Yeah, I'm just retracting to this map. What, well, it retracts to precisely O. Oh, yeah. It's a subset of, it's a subset of C cross E. I'll make this more clear next time. I went really fast in the last few minutes. I'm sorry about that. But the claim is it, oh. The claim is that the image of R is O. Tinker with it. Oh, it's sitting inside this funny space of maps on the main Z area. No, no, no. O is in C cross E. Yeah, O should be a subset of C cross E, the way it's going. Oh, it's a subset of C cross E. Have I written something which should be? So there's a subset of C cross E, which is a smooth sweet track, and if you do the pre-gluing, the trajectory, it's a bijection. But there's a union of this arbitrary long cylinder that is mapped somewhere. Right, right. But it is in fact sitting inside C cross E as a handle of some map. C cross E is not the kernel. It's the domain of a map. It's the domain of R. So it's a derivative of the retraction. Yeah, so actually if you fiber it over C, then fiber-wise it's a linear retraction. Yeah, for fixed A, yeah. But if A is zero, it's the identity, which is nothing. And if A is non-zero, it actually goes on the proper subset. Yes. Yeah, yeah. Yeah, I mean that's what I was confused about. I was confused about A equals zero. Otherwise, it's just you're changing coordinates and doing projections. Yeah. Okay, of course it's going to be a- If A is zero, nothing happens. Yeah, it's just U plus U minus. So minus gluing when A is zero is defined to be zero? Minus gluing when A is equal to zero. Or it can be defined as zero. 
Yeah, I guess it has some sort of definition in that case, yeah. C is BMP7. Pardon? C is BMP7 in that case. Right, but there's one map from the MMP7. And if it takes the value of the vector space, it's zero. Okay, fantastic. Okay, with that, I do actually have to go. So thank you for your question.
In this talk, we discuss the second of two fundamental analysis concepts polyfold theory is built on: sc-retracts. In particular, we discuss how they arise naturally as a means of using pre-gluing maps to parametrize a neighborhood of nodal and non-nodal (or broken and unbroken) maps near a nodal (or broken) map. Despite locally varying dimensions, such retracts support a version of the sc-calculus on which the chain rule holds, and we define M-polyfolds (manifold-like polyfolds) to be those topological spaces locally modeled on such retracts. [Related literature: Sections 2.3 and 5.1 of Polyfolds: A First and Second Look.]
10.5446/16302 (DOI)
Thanks for the introduction. I'd like to thank the organizers for allowing me to talk. Okay, so a lot of this will be joined with Michael and it was helpful he came with his obstruction, bumble gluing and intersection theory from ECH. So he was kind enough to lend them to the contact tomology. Alright, so we're going to start off with a contact manifold. For most of the talk it will be three-dimensional, but in some parts I might get to tell you about what we can do in higher dimensions. This is going to be non-degenerate contact and it's going to be co-oriented meaning that there's a global one form that defines the contact structure and for grading reasons let's pretend or assume that the first turn class of the contact distribution vanishes and then R is going to be my rave vector field. Okay, and then I'm going to have J is an almost complex structure on r cross m and I'll take d e to the tau lambda. So here tau is going to be my coordinate on r. It's going to be the translation invariant so I have d tau r is going to be equal to r and I want J restricted to, I'll just say J is d e to the tau lambda compatible. Okay, then we want to count cylindrical curves so crystallized asymptotically cylindrical curves are curves the domain is just going to be a cylinder so it's not going to be super exciting but they're still fun to look at. So I'll denote the space of, so let's gamma plus gamma minus be periodic rave orbits. Mj gamma plus gamma minus is going to be the set of J holomorphic curves from a cylinder r cross s1 j0 into the simpletization with this d lambda r invariant almost complex structure and then we want du of J to be J of du so we want them to be pseudo holomorphic and then we want the limit so if we take r to have the t coordinate s1 to have the s coordinate we want the limit as s goes to the plus or minus infinity of the projection of u and the contact manifold component to be a reparameterization of the rave orbit at gamma plus if we go to positive infinity and gamma minus if we go to minus infinity and then we want the limit as s goes to plus or minus infinity of the projection to the r component of u to be plus or minus infinity so you want to think like if my contact manifold was something stupid like a circle then your counting curves like this gamma plus gamma minus and then there's also going to be an r action that technically you have to mod out by coming from translations on the target and so when we go to look at the differential there will be this translation action but a lot of times I'll ignore it when it comes to talking about index calculations and stuff. Okay so. Is anything other than the gradient dependent on your assumption that c1 equals 0? No. So I can just drop that and you have your z2 gradient? Yeah I mean eventually I have to yeah you can drop that and use the z2 gradient of course when I define a dynamic convex contact form you'll need some sort of way of making sense of the gradient on contractable orbits. So the gradient is given by the Conly's Ender index on our chain complex we have that the virtual dimension of this moduli space is given by gamma plus minus the gradient of gamma minus and here the gradient is an SFT grading so this is going to be Cz of gamma plus n minus 3 so if the dimension of our contact manifold is 3 then you get this is just Cz of gamma minus 1. So then I can tell you what the chain complex is. 
So this is going to be generated by only good Ray-Borbets so we take the set of all Ray-Borbets and then we throw out bad Ray-Borbets because they're bad and so to define a bad Ray-Borbets a little bit annoying but so the Cz index of a Ray-Borbets can either have the same parity under iteration of your orbit or it can flip flop between being even and odd sorry odd and even if it flip flops then you throw out the even multiple covers so bad Ray-Borbets even covers of flip the Cz guys if you want the real definition or you want an example so in dimension 3 bad Ray-Borbets are even multiple covers of negative hyperbolic guys meaning that they have the eigenvalue of the linearized return map if we restrict it to the contact distribution is negative and real. Is there a reason for your V there for which the Cz VQ like in your top line? It's not a V this is the vector space generated by all the Ray-Borbets and you throw out all the bad Ray-Borbets. Yeah sorry he just thinks the stars never mind. Is there another way for why are bad Ray-Borbets bad? So if you keep them in the chain complex even if you had all the transversality in the world you'd never be able to prove invariance but secretly they you can't orient curves that limit on bad Ray-Borbets unless you have some sort of a parametrization you're using of your bad Ray-Borbets and so there's a when you try to assign orientations you can get both positive and negative orientations assigned to the same cylinder and this causes a headache and in orbithold Morse theory you can kind of think of the bad Ray-Borbets as being like the bad points whose like gradient flow lines get reversed underneath whatever action you're modding out by. Any other questions? It's mobilized spaces. Yeah so that is kind of incorporated by the fact that this is the projection onto the contact manifold as a reparametrization of gamma plus minus so that already includes that. And then the differential. So there's actually two equivalent ways over Q of defining the contact differential. So either we can define it to so I'll just write this down and then I'll say. So either you can define it by looking at the multiplicity of the top Ray-Borbit or you can define it by encoding the multiplicity of the bottom Ray-Borbit. So and this epsilon U is just supposed to be some choice of coherent orientations. M of U is the degree of the covering map of the curve if your curve U is not somewhere injective and the multiplicity of your Ray-Borbit just means you've had to iterate some simple orbit some number of times and the multiplicity will be the number of times you've iterated a simple Ray-Borbit to get to X. And so it might look like these coefficients are defined are not integral but they are in fact integral. And so you can see that because if you have a k to one fold cover where you have X is gamma let's say plus to the kp and Y is gamma minus to the kp this would be a k to one fold cover of a cylinder gamma plus to the p gamma minus to the q and so the multiplicity of X is kp the multiplicity of Y is kp the multiplicity of this curve U is k and k obviously divides both kp and kq. I'll just write against that M of X equals one if X is a simple Ray-Borbit so maybe simple hasn't been defined but that just means that the map X which is r mod tz into your contact manifold is injective and M of U equals one if U is somewhere injective. So that looks like beautiful theory but it's not really so beautiful. 
So some of the issues that are going to arise is that we're working with multiple covers. Is there a question? So can you explain the difference between the two? One encodes the multiplicity of the top Ray-Borbit and one encodes the multiplicity of the bottom Ray-Borbit of your cylinder. And it turns out that their equivalent sort of if you had all the transfer salinity in the world that would mean that your differentials were defined and you actually got a homology and then on the chain level you could just divide and multiply out by multiple seed of so Ray-Borbit's the pass from one chain complex to the other chain complex. So homologies are isometric under this? If you live in magical transfer salinity. Yes but they're equivalent. So why is this theory possibly not so correct? So the first issue is that we're working with multiple covered curves and Chris spent many lectures telling us as well as Dusa and others telling us about the horrors of multiple covered curves. And then also I think Michael mentioned this a few times but you can look at multiple covered curves and they can end up having negative index even though they live in even though they have to be there and you can't just use a generic perturbation of J to get rid of them and so this means that like compactness issues are going to suck. Well maybe they don't suck, maybe they're beautiful but they make the theory harder. Alright so everyone's favorite congerum from 2000 which is in a paper by Ali Ashbur Gimental and Hofer says that assume there are no contractable rape orbits of index minus one zero or one so this is this grading that's the conglisander index plus and minus three. And, okay, it also started by saying what Y, C, V, A non-degenerate contact manifold, then the differential is well defined, of course no one really said which differential they were using which adds another layer of confusion but that's okay. J is generic. Okay, that you actually get a homology so DQ squared is zero and this homology which depended on your contact form, your choice of J was actually an invariant of the contact distribution so EG independent of the choice of lambda and almost complex structure J. I think should be fair, they didn't quite say that because they said that some abstract perturbations of the modulate spaces would be needed but they wouldn't go into the details of that. Because this one is too much of a fault. Well, this is, well it doesn't even appear in this form in their paper, it appears as a series of like lemmas or propositions. I mean they said at the beginning this is all suiting some perturbations of the name. Well at the beginning it said that this is all like conjectural blah blah blah so this is why it's a congerum so it's a conjecture of A plus. So the conjecture is that the statement can be corrected to the other side. Well the other problem is that some people then said you don't need abstract perturbations and this works anyways. Is anyone happy? No? No one's happy? Okay. So what is this beautiful compactness that I alluded to? So this is Bourgeois, Lyoszberg, Hofer, Vizatsky and Zander. This is a result from 2003. Again I'll paraphrase this result. So there's no bubbling like in Hamiltonian Flora theory but we do have breaking of curves into buildings. So Michael drew some beautiful buildings of asymptotically cylindrical pseudo-holomorphic curves. 
And so the heuristic reason for why we have this breaking is because so there is a maximum principle in the R component of the symplectization but you can grow minimums and so if you try to compactify the space you're going to be allowed to develop new punctures at the bottom end. So you had some index two cylinder. You could conceivably start to have the picture people draw so you have some sort of a minimum developing in the R direction and so then if you want to compactify the space you could pop off a plane and this would be allowable elements of your compactification if you cannot sort of exclude it for other reasons. And then this makes us sad. And cylinders in your compactified modulized space of index two curves. I have a dumb question. Why is breaking and not bubbling? How much? Because you have cylindrical end which is as a real bubbling parameter instead of a complex bubbling parameter. It breaks like a trajectory of more stereo or core theory and not like a bubbling chroma or a bubbling, okay, that's a bubbling parameter. So I'm not going to draw a complicated sequence of buildings but you can imagine that sort of, well, Michael was talking about how you can have branched covers of trivial cylinders. Those guys can have index zero so you could be in some situation where x is gamma squared, z is gamma and then the cylinder could break to be gamma squared, gamma, gamma, gamma and this could be a holomorphic plane of index two. This guy has index zero and this guy has index zero. So this would be an allowable compactification. And in dimension three there is really nice iteration formulas for the Conly's Ender Index plus automatic transversality so I'll explain in a minute how you can salvage this theory in dimension three but in higher dimensions you can have cylinders of index one and you take their multiple cover and they suddenly have negative index. And then that means that you, in other elements of your compactified modular space, are going to have to include things like a chain of cylinders where you just have to end up being able to add to two at the end of the day. So that makes things complicated. And the, you know, philosophy with a lot of this stuff is well if we only had some more injective curves you just pick j generically so all of my moduli spaces have to have positive virtual dimension because we're working in a simpletization and have this r action and the only way that you can add to two with positive numbers is one plus one. And this original assumption about no contractable orbits of index minus one zero one come because this sort of building is not excluded by the assumptions but you could have some building where you have one, one and one. So this is like why the assumptions in the congerum exist. And then the zero and minus one sort of come up when you try to prove invariance and homotopy depending on what level of transversality you want to assume. So let me see. So it turns out that in dimension three you can get automatic transversality and index calculations to work favorably. And so in my thesis I found not a very large class of contact forms but the class of some contact forms where you could actually define contact tomology. So sort of the definition I came up with is we say a contact form is dynamically separated provided the conles-eander index of gamma is between three and five for gamma simple and contractable. And every time you iterate gamma you're just increasing the conles-eander index by four. 
So this holds and then plus you can do this if you, so I guess I should say if gamma is non-contractable have analogous definition but you have to keep track of free homotopy classes of ray orbits and it just becomes a mess to write down so I'm not going to. Is there a question in the front? Okay. And then the theorem that I proved is that if, so this condition, this is usually true up to large action. And action is the integral along gamma of some ray of orbit. So by up to large action I mean for all ray of orbits of action less than or equal to something you have this dynamically separated property and these guys all for the most part arises there's a natural class of perturbations you can put on pre-quantization spaces so this is where these guys come from. So if my three lambda is non-degenerate dynamically separated then and J generic then the cylindrical contact tomology is defined and sometimes up to like some large action sort of. So if you don't have this dynamically separated for all of your ray of orbits but only up to some large action then you can only define cylindrical contact tomology up to that large action and index and then there's a way you can take direct limits to actually sort of compute it. Variance under the choice of J and dynamically separated contact form. So this is not so awesome because it excludes a lot of examples like dynamically like certain dynamically convex things like ellipsoids so you team up with someone that knows more than you and then you prove a theorem. Six things is annoying. So a dynamically convex contact manifold is something that Hopper, Vazatsky and Zander studied in the 90s and what Katrin knew as HWZ papers when she was a graduate student. So a dynamically so Y3 lambda is dynamically convex provided the first churn class of the contact distribution restricted to pi2 of Y is 0 and the Conley-Zander index of gamma is greater than or equal to 3 for all contractable ray of orbits. And so an example is any strictly convex hyper surface in R4 with the standard symplectic structure which is transverse to the radial vector field. So this is a nice class of contact manifolds because I erased the theorem but it's basically without sort of a theory of abstract perturbations. It's going to be the class of contact manifolds so that all the contractable ray of orbits give rise to planes that all have to have virtual dimension at least two. So this would be sort of like a nice class of manifolds you'd like to prove. So the termology is defined for an invariance in dimension 3. So you also dropped the assumption in C1 if there were no contract. So we'll make condition A C1 of pi2 is 0 or B. So what Michael and I proved is that if Y lambda is dynamically convex, J is generic, then the only buildings of index 2 in R cross M are either just a cylinder, a once broken cylinder or if you, I should also write this, so we're going to assume that Connelly's Andrer index of gamma is strictly greater than 3 for gamma non-simple and contractable and I'll explain why in just a second or you have a pair of pants, a plane and a trivial cylinder so gamma squared gamma. So this is 0, 0, 2. So we have this extra assumption that, there should be gamma d, d minus 1. So we have this assumption that the Connelly's Andrer index is strictly greater than 3 for gamma non-simple and contractable. 
We expect that it's removable but we have to do a little bit more estimating things to make sure that something bad doesn't happen in the next blackboard but sort of what we proved is that the only buildings of index 2 are either an unbroken cylinder, a once broken cylinder or this pair of pants configuration. If you assume, if you don't have this assumption, then your pair of pants configuration can have like this gamma could be gamma to the d1, this could be gamma to the d2 and then this is just a d1 plus d2 fold cover of a trivial cylinder. And the second part of our lemma is that no index 2 cylinder can limit to the third configuration. So we didn't prove that there are no buildings of this type, there certainly are, but we just showed that there's no index 2 cylinder that could limit to it, which means that as a corollary the cylindrical contact tomology differential is well defined and it squares to 0. And this is innocent flexion, so the reason why we don't have invariance yet is because this sort of analogous lemma in cubortisms for sure does not hold. Any questions? So a priori, what are the possible configurations for that third one? I mean, I'm looking at this and you're saying if it can't be the case that a cylinder of index 2 limits on third configuration, I'm looking at that thinking, okay, well, if in some world I could perturb and actually glue that thing together in some sense, then it would be an index 2 cylinder. So therefore, if you, like, what kind of perturbing do you want to do? So if you were doing obstruction bundling gluing and you decided to try to see if you could do this, the number of ways you could glue it together would be 0. Actually, just to, that's not how you prove it, right? No, no, no. It's a proof by contradiction, sorry. So we use, yeah, you use intersection theory. You say, like, suppose you have this curve living into this, then you look at some intersection theory properties and prove that you get a contradiction. Therefore your index 2 cylinder can't limit to a building of that type. True proofs are actually the same if you look deeply enough. Yeah, and since obstruction bundle gluing goes back to winding numbers and, um, actually, I have, well, can't the obstruction bundle gluing, can it only tell you that the sign counter phrase to glue is 0, or is this really telling you there's no way to glue it? I think in this case you'll actually see if the obstruction section never manages to look for that. In where do you use the index 2 at the point of the dynamic convex? Sorry? I know that the proof has this intersection theory stuff. Oh, the intersection theory stuff is to prove that no index 2 cylinder can limit to the third configuration. And where does that be used, the fact that it's a dynamic convex? Oh, so we use the fact that it's dynamically convex to prove that the only buildings of index 2 are of those three types. And then you also, I mean, yeah. And then you're also using automatic transversality and some index stuff, improving that those are the only three types of buildings, and that's also what's getting you the transversality you need to say that the differential is well-defined. So we also prove that all index 1 and index 2 cylinders and subluxizations are cut out transversely and, um. So do you need the dynamic convex for that? So you don't need dynamically convex for that, but you're going to have a problem with looking at your compactified, modulized space of index 2 cylinders. 
Like then I'm going to have way more things, and I'm not going to be able to prove that d squared is zero. Let's say you can know that you only have these three kinds. You can only get that if you're dynamically convex. You all have bad vision, the only people in the front. But what are the indices of the curves in the third building? Zero, zero, and two? Yeah. I didn't want to put the zeros here and here, because then the people in the front row would ask why those curves had genus and I was counting cylinders. Yeah, no, no, it's good. I just need glasses. The top level there is a branch cover of a trivial cylinder. He's a trivial, trivial cylinder. Everyone happy? If you're not happy, I'll get out of my hook. I actually really like holding the hook, but I think that makes for a slightly scarier talk. I was going to move on to the non-equivariant part of the talk, but do people have any other questions about what goes into the Waman corollary? I guess I should maybe upgrade it to a theorem. So two, we're stuck on invariants, so we need to do something. So we'll try to define, so we'll define stuff non-equivariant. So we'll define a non-equivariant version. If I stuck, you mean data approachable? Oh, yeah. Okay. So to clarify, this normal approach looks bad because you can't prove this sort of lemma in co-bordisms. You also don't have the sort of automatic, I mean, you have some automatic transversality in co-bordisms, but it's not good enough, and you're not going to be able to prove the chain homotopy equation. So invariance is not really going to go so well. So what we can do is we can define a non-equivariant version. And when we set up a non-equivariant version, it's kind of analogous in Hamiltonian-Fleur theory to when you pick a time-dependent Hamiltonian. So all of your curves end up being somewhere injective. So we're going to do something similar in the contact world. There's still going to be some things we have to work out, but they're doable. And then it turns out that there's a way to take an S1-equivariant version of this non-equivariant picture, prove that it's isomorphic to classical cylindrical contact tomology, and in a very roundabout way, get invariance for the classical cylindrical contact tomology. OK. So this non-equivariant theory is kind of look like the positive part of symplectic homology, whereas this cylindrical theory, if it was defined, it would look like the S1-equivariant version of the positive part of symplectic homology. But we don't want to use symplectic homology because we'd like to define something directly in terms of the RABE data and in terms of asymptotically cylindrical pseudoholomorphic curves as opposed to looking at the Fleur equation. OK. So we're going to take J here. Instead of using 1J for all time, we're going to take a domain-dependent J. So JT for T and S1 is going to be an almost complex structure. And we still want for each time T that it goes to the RABE vector field, and we want for each time T that it's D e to the tau lambda compatible. And then we're going to be counting, well, we're not going to be quite counting pseudoholomorphic curves where I just throw this T dependence in because that's not going to automatically get us a non-equivariant version. We're going to have to be a little bit more careful than that. But the moduli space we're going to start to want to look at and put point constraints on is going to be this guy where we're using a domain-dependent family of almost complex structures. OK. 
So you just put a t in your J-holomorphic curve equation. And this t depends — oh, shit, wait, sorry: my R coordinate should have been s and my S^1 coordinate should have been t. Now it makes a lot more sense. Maybe this is a stupid question, but what happens if you make lambda time-dependent? Well, you can't make lambda time-dependent — it's just a contact form — and in some sense this is the closest you can get to forcing your periodic orbits to be time-dependent. How would you make it time-dependent? I could take a loop of contact forms and... Well, we're not looking at period-one orbits, we're looking at orbits of arbitrary period. Oh, yeah, that's the other thing, sorry. The point is that the curves that were multiply covered are no longer multiply covered for a generic family of almost complex structures depending on the domain. So the good news: for a generic family, M^J(gamma_+, gamma_-) is a manifold of dimension equal to the Conley-Zehnder index of gamma_+ minus the Conley-Zehnder index of gamma_-; and I broke some symmetry, so it's got an extra dimension on it. There's a paper by Floer, Hofer, and Salamon that gives you transversality in the Hamiltonian Floer case, and this is the analogue: you can't pick a time-dependent Hamiltonian, but you can pick a time-dependent almost complex structure in your J-holomorphic curve equation. So all your curves are somewhere injective, and ta-da. Okay. But you're not going to be able to define the differential in this naive way, so let me tell you how to define it. To define the differential, we need to throw in some point constraints, and this is how we force a parametrization of our Reeb orbits in terms of time. So here's my curve; I look at u restricted to R cross {0}, which picks out a line on my cylinder. You can use this to define evaluation maps from our moduli space into the image of gamma_plus or gamma_minus, defined as the limit as s goes to plus or minus infinity of the projection of the curve at time t = 0; this is some point on my Reeb orbit gamma_+. And you can do the usual thing, where this evaluation map descends to an evaluation map after you quotient by R — because even once we define a differential, we still have to mod out by the R coming from translation in the symplectization direction. Okay, great. Now the chain complex: I no longer have to throw out all the bad Reeb orbits, but I am going to have twice as many generators per Reeb orbit as before. You can think of this as a Morse-Bott version, where the Morse-Bott manifolds are the Reeb orbits and you put a height function on S^1, giving two critical points, one of index 0 and one of index 1. That's not really what we're doing, but it's the vestigial motivation for defining our non-equivariant chain complex with two generators associated to each Reeb orbit: gamma check and gamma hat, where gamma is any Reeb orbit.
And then I'm just going to arbitrarily shift the grading up on my gamma hats by one. So — okay, I briefly got confused with hats and checks — the grading of gamma check is equal to the Conley-Zehnder index of gamma, and the grading of gamma hat is the Conley-Zehnder index shifted up by one. Writing the Conley-Zehnder index of gamma hat or gamma check doesn't really make sense, because they are formal variables associated to the same underlying Reeb orbit gamma; that is the correct way to say it. And yes, gamma is any Reeb orbit — you also allow the bad Reeb orbits, just as in symplectic homology you would also see the bad orbits. This non-equivariant picture is not entirely original; it's an adaptation of work of Bourgeois, Eliashberg, and Ekholm, and also of Bourgeois and Oancea. All right, and now my differential is going to have a block decomposition, with pieces going from check generators to check generators, and so on — there are four different blocks; let me just write this out and then we can talk about it. So hat to hat, check to check, and the two off-diagonal pieces. And I promise that the evaluation map I just wrote down will show up. So do we think of it as a triumph that you no longer have to throw out the bad Reeb orbits — is that desired, or is it just a quirk of this? It is desirable: once we figure out what the S^1-equivariant version of this non-equivariant theory is, it's actually defined over Z, so it has more information coming from the bad Reeb orbits, and it turns out that if you tensor that theory with Q you get the usual cylindrical contact homology. So it's a quirk, but a good one, because you get more information. Okay, so what does each of these terms count? For d check from alpha check to beta check, you want the degree difference between alpha and beta to be 1. This counts cylinders where you fix a point p on the underlying simple Reeb orbit associated to alpha — alpha bar is the simple orbit underlying alpha — so the count is of curves u in this moduli space from alpha to beta, mod R, with the positive evaluation map of u equal to p_{alpha bar}. We also have to count Morse-Bott buildings with a cyclic ordering condition. The evaluation map constraint is this: p_{alpha bar} has to match up with the projection to Y of u(s, 0) in the limit.
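In symbols — a sketch only, with a name for the complex that is mine rather than the speaker's, and with the grading conventions exactly as stated in the talk:
\[
NCC_*(Y,\lambda) \;=\; \bigoplus_{\gamma \in \mathcal{P}(\lambda)} \mathbb{Z}\langle \check\gamma\rangle \oplus \mathbb{Z}\langle \hat\gamma\rangle, \qquad |\check\gamma| = \mu_{CZ}(\gamma), \quad |\hat\gamma| = \mu_{CZ}(\gamma) + 1,
\]
where \(\mathcal{P}(\lambda)\) is the set of all Reeb orbits, good and bad.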
And then we'll count these constrained cylinders, and we'll also have to count some buildings with a cyclic ordering condition, meaning that you want the evaluation point of the top curve, the evaluation point of the bottom curve, and whatever point you're picking on this intermediate orbit — call it gamma — to be ordered this way as you go around gamma. So you need to orient, pick a direction for going around, your Reeb orbit. Then d hat from alpha hat to beta hat is something analogous, except instead of the positive evaluation constraint you have the negative evaluation constraint. The piece d plus, between checks and hats, has no constraint; it looks like the analogous map in the Bourgeois-Eliashberg-Ekholm formulation. If we look at d plus of — sorry — alpha hat paired with alpha check, you get 2 if alpha is bad and 0 if alpha is good; and d minus, between alpha check and alpha hat, has constraints at both the top and the bottom. This non-equivariant theory is not so easy to compute, because not only are you counting cylinders with one point constraint, two point constraints, or zero point constraints, you also need to count Morse-Bott buildings with the cyclic ordering condition. These Morse-Bott buildings are not of infinite length, because you only have a finite number of Reeb orbits in each Conley-Zehnder index, but they are sequences of buildings that you have to figure out how to count. I should also say: this d minus, in addition to counting curves u in M(alpha, beta) with evaluation constraints at both ends and the Morse-Bott buildings, also counts ways to glue, using obstruction bundle gluing, an index 3 configuration that looks like a branched cover of a cylinder — the levels here have indices 0, 2, and 1. What is the J on the cap there? It's the J_t — you fix a family J_t on the domains. So what is it on the cap — it's bubbling off at a point? Yeah. So it's the J_t at the point where it bubbles off? Yeah. Is it clear why you can only have one plane in all of this? Yeah, index calculations. Is the reason you set up this chain complex that somewhere injectivity means you don't have to deal with isotropy? Yeah — your curves are always somewhere injective, so you no longer have the isotropy to deal with. So, for Y^3 with lambda dynamically convex — or Y^{2n-1} with lambda, well, no, I don't want to say that — this non-equivariant contact homology, for J generic, is a chain complex, and its homology is independent of your choice of family of J's, for dynamically convex lambda. Let me tell you how to relate this back to cylindrical contact homology in a way that will make everyone much happier than they are now. If there is sufficient automatic transversality, as needed to define the classical cylindrical contact homology differential, then for some generic single J we can use that J for the non-equivariant contact homology. And what's nice is that in this case d check looks like the standard cylindrical contact homology differential, and d hat looks like the other cylindrical contact homology differential; they end up with opposite orientations, but that's not so important.
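Schematically, the four pieces just described can be arranged in a two-by-two block matrix. How exactly they sit in the matrix is my reading of the board and should be treated as a sketch, but the constraints attached to each piece are as stated in the talk:
\[
\partial \;=\; \begin{pmatrix} \check d & d^- \\ d^+ & \hat d \end{pmatrix}
\quad\text{acting on}\quad
\begin{pmatrix} \mathbb{Z}\langle\check\gamma\rangle \\ \mathbb{Z}\langle\hat\gamma\rangle \end{pmatrix},
\]
with \(\check d\) carrying one positive evaluation constraint, \(\hat d\) one negative constraint, \(d^+\) no constraint, and \(d^-\) constraints at both ends.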
But it turns out that you can show the following. If (Y, lambda) is non-degenerate and dynamically convex and J is generic, then — and I did not use good notation for this — take d_0 to be built from the differentials I told you about, d check, minus d hat, together with d plus and d minus, and take d_1 to be the matrix with entries 0, 0, multiplicity of gamma, 0; I'll explain what that is in just a second. You can actually get a chain complex by taking the non-equivariant chain complex and tensoring it with formal power series in a variable u of degree 2 — so u is a formal variable of degree 2 — with differential d_Z equal to d_0 plus d_1, and this is a chain complex. So what I said before was kind of a lie: you will still be able to see your bad orbits, and you're not going to have the problems you had before with orienting them, because you have point constraints running around that allow you to deal with orientations. I'm doing a bad job explaining this, because there are a lot of details I'm omitting. So the pieces of d_0 are coming from the complex with the point constraints? Oh — this isn't technically that differential; I should have written something like "this is counting" — my notation is really awful. We'll just use d hat and d check, but they have a more familiar expression in terms of multiplicities of Reeb orbits and multiplicities of your pseudoholomorphic curves. What I should have written is: if we can replace the family by a single J, then d check counts curves u in M^J(alpha, beta) — you still have your checks and hats, with a point constraint at the top — and d check of alpha check is the sum, over such curves u, of the multiplicity of alpha divided by the multiplicity of the curve u connecting them, times the orientation sign, times beta check (and now my font has become microscopic). So the chain complex is still the same, and the differentials just have more familiar expressions in terms of the previous ones — though that was a very unclear way of explaining it. Can I say one thing — d hat and d check are the same as the differentials before when you're going between good orbits? Yes, and you can show that when they do encounter bad Reeb orbits nothing horrible happens. I should finish the statement of the theorem: the homology over Z of this chain complex is independent of lambda and J, and a proposition is that this CH over Z, tensored with Q, is isomorphic to CH over Q — and that is how you end up getting invariance of cylindrical contact homology. Okay, and I will end. If you don't have something that's dynamically convex, then you're going to have to work with full SFT or symplectic homology. For example, if you take the connected sum of two tight spheres you're no longer dynamically convex, so this sort of result doesn't apply to connected sums — but cylindrical contact homology was never built to be an invariant of things that aren't dynamically convex in dimension three. There was a result that Viterbo mentioned in his talk: if you have this assumption about Conley-Zehnder indices, then the positive part of symplectic homology is an invariant of the contact structure. That assumption is exactly this dynamical convexity condition, and it has an analogue in higher dimensions. What's your favorite manifold to compute this for? The sphere.
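For the record, here is the single-J formula just described, as I read it — a hedged transcription, with m(alpha) the covering multiplicity of the orbit, m(u) that of the cylinder, and epsilon(u) the orientation sign:
\[
\check d(\check\alpha) \;=\; \sum_{\substack{u \in \mathcal{M}^J(\alpha,\beta)/\mathbb{R} \\ \mathrm{ind}(u) = 1}} \frac{m(\alpha)}{m(u)}\,\epsilon(u)\,\check\beta .
\]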
So you can also compute it for things like lens spaces and prequantization spaces. What's the answer? I can just say it in words. For the three-sphere you have Q in every even degree greater than or equal to two. For the lens space L(n+1, n) you have n copies of Q in degree zero and n+1 copies of Q in every even degree greater than or equal to two. Any more computation requests? Any other questions? Right, so that's what you're doing, but you've been able to cheat a little bit: normally your differential would have an infinite number of terms in it, but because we could cheat and use a time-independent almost complex structure and still define the complex, we were able to drop the extra terms — but that's pretty much exactly what's going on. Suppose you took two dynamically convex manifolds — going back to the connected sum idea — and did a really, really small connected sum, where you have very precise control over the hyperbolic orbit and the finite energy planes that get introduced. Is there any chance these ideas could extend to that case? Maybe, but my understanding is that even when you do a connected sum you always end up introducing a new... You do... Reeb orbit. But you know exactly where it is, and pretty quickly you have... generated longer ones. Yeah, you could try action filtration arguments anyway. No, because... my answer is no. It's conceivable that we could define some sort of linearized contact homology: there's another type of building we would have to consider, and we'd have to understand how to glue it, or exclude curves limiting on it, and you'd have to take some sort of augmentation, so... I'm going to disagree with you on this one. You think that... We might be able to do it. Well, that's — yeah, I was saying you might be able to... It's not linearized, which is super cool. It might be possible, but it's a harder project. Anything is possible. Okay, I think Helmut's ready to clap for me and get coffee. Yes, so... Yeah, why not? Have a good one.
Cylindrical contact homology is arguably one of the more notorious Floer-theoretic constructions. The past decade has been less than kind to this theory, as the growing knowledge of gaps in its foundations has tarnished its claim to being a well-defined contact invariant. However, jointly with Hutchings we have managed to redeem this theory in dimension 3 for dynamically convex contact manifolds. This talk will highlight our implementation of non-equivariant constructions, domain dependent almost complex structures, automatic transversality, and obstruction bundle gluing, yielding a homological contact invariant which is expected to be isomorphic to SH^+ under suitable assumptions, though it does not require a filling of the contact manifold. By making use of family Floer theory we obtain an S^1-equivariant theory defined over Z coefficients, which when tensored with Q yields cylindrical contact homology, now with the guarantee of well-definedness and invariance.
10.5446/16297 (DOI)
The last lecture. Okay, so in lectures one to three we described the algebraic side: we introduced a category of stable maps, a bundle category over it, and the Cauchy-Riemann section functor. We described the algebraic relationships and we put smooth structures on all of this, and the packaging is essentially what you need when you want to resolve the transversality questions for SFT. So you have these diagrams of SC-smooth covering functors, and some other relationships formalized by functors which behave in a compatible way; and this is the Cauchy-Riemann operator. The goal now is: we have a complete setup to perturb the subcategory of J-holomorphic objects into some other weighted subcategory, which will actually be SC-smooth. That will be enough to extract data from it, and we perturb it into a functor with certain properties. That is the algebraic part, which we discussed at length. Of course, geometrically you want this to be a pair in general position as far as the Fredholm theory is concerned — there is a Fredholm functor in the background. If you think of this locally as a finite number of sections, you get a finite number of equations and you want each of them to be in general position, which can be achieved here. It cannot in general be achieved when you homotope from one perturbation to another, because of all the structures you have; but when you just want to perturb, without doing homotopies, it is still possible. So now we have to perturb, and of course we make choices to do so. If you have two different perturbations, we want to make sure that on each connected component of the orbit space of our categories the solution spaces are still compact, and we want to guarantee that we can connect these two perturbations as generically as possible and have compactness along the way. In order to make sure that this is the case, you ideally distinguish something like a convex set of perturbations you can move around in, where it is also clear that you have compactness. So let me explain. We will actually be able to make an infinite construction, and there is an inductive procedure to produce it. It starts off with the low-energy things which cannot split further, which only contain buildings of height one. Then you go to the next level, where you have buildings of maximal height two. But the boundary faces, just by the algebraic structure you have, must be explainable in terms of the data you already dealt with, so you pull up these perturbations on the boundary and then you extend. Then you have to distinguish between connected components on their own and ones with added cylinders, and so on — there is a certain number of things to do at each level. Of course, when you watch this procedure go on and on, it's not surprising that my perturbation gets larger and larger. So if I can only control things in an a priori given neighborhood of the original solution set, you can imagine that at some point you fall out of it.
That actually happens in every other approach. So what do you do? Well, you use homological algebra: you perturb up to some level, encode the data, then take a smaller perturbation to begin with and run higher up, and so on. But that requires you to come up with some packaging ideas just to end up with a complete set of data. It turns out that in this problem — and this is basically an abstract fact; you have to exploit a little bit more about the Cauchy-Riemann operator than people usually do, there is more compactness in the background — this infinite procedure can actually be done. Geometrically, you can run all the way up; you never have to restart with something smaller. That is a non-trivial effect: you can make the infinite construction and get that infinite smooth object. Of course this finer compactness could be used in the other approaches as well, so there the issue would also disappear. Okay. So, to deal with this aspect of the story, you have to measure the size of perturbations, and you have to come up with some idea of how that controls compactness. We have heard what an auxiliary norm is in the case of an M-polyfold setup. Here it's basically a functor. It is defined on the part of the fiber with a little better regularity, and if a vector does not lie in that better part we just set the value to plus infinity. So it is a functor which, restricted to this better part E_1, is a norm on each fiber; when you pass to the orbit space it should be continuous; and it should have the following property: if you take a sequence of objects e_k such that N(e_k) goes to zero — these are just numbers — and the isomorphism class of the base point below converges, then the isomorphism class of the vector e_k converges to the isomorphism class of the zero vector over that limit point. That is a kind of local uniformity requirement. Then there is something which I'm not going to explain, called a reflexive auxiliary norm. That is an important refinement for utilizing this more general compactness; one would have to spend some time explaining it, but basically you can always construct it if the fiber in your bundle category is reflexive, which in our case is actually a Hilbert space, so you can always get this. One could presumably talk for an hour about the compactness issues. Then it should be compatible with the structures we have. First of all, if you have an auxiliary norm you can define one on a fiber product: you just look at the individual factors and take the maximum. It's important that you take the maximum and not the sum; that is what you can control here.
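In symbols — a hedged sketch of the auxiliary norm conditions just described, in notation of my own choosing (p the bundle projection, vertical bars denoting the isomorphism class in the orbit space):
\[
N|_{\text{each fiber of } E_1} \ \text{is a norm}, \qquad N \ \text{descends to a continuous function on the orbit space},
\]
\[
\big( N(e_k) \to 0 \ \text{ and } \ |p(e_k)| \to |\alpha| \big) \ \Longrightarrow\ |e_k| \to |0_\alpha| ,
\]
with N extended by plus infinity off the better part E_1.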
It's the maximum that is controlled: you control compactness on each factor, and then you control compactness of the product on the boundary. The Fredholm property then allows you to spread that control to a neighborhood, because locally things look like something finite-dimensional times a contraction; so if you're not too far away you still have the contractive part, things still look essentially finite-dimensional, and you can control this. Then you want compatibility with the structures we have, something like this: the norm factors through pi, where pi is the projection coming from the Whitney-type decomposition in which the zero-one form is split between the non-trivial components and the trivial cylinder components. So the norm measures the size of a zero-one form over the non-trivial components — or, here, over the cylindrical part — and you want it to behave like this. The reason is that later we perturb in such a way that there is nothing over the cylinders anyway, because over the cylinders the perturbation will be zero. What is this E subscript one? Oh — remember that a strong bundle K over O has this bi-filtration E_{m,k} with k less than or equal to m plus one. If you have the strong bundle structure, then this E over C also gets a bi-filtration: on the stable maps you can talk about the regularity of the object, and for something in the bundle category over it you can in addition talk about the regularity of the zero-one form over the underlying object. Why do you allow the norm to take the value infinity at all? Because it is precisely finite on E_1, which is a sub-fiber. Can you say something about why that is? Remember — I think Katrin pointed this out — that for a strong bundle there is not just a bundle but this additional bi-filtration, and if you lie on the one-level these are sort of the compact perturbations; the sc-plus sections are sections into the one-level. That is where you know things are compact: in the fiber you have the zero-level, the one-level, and so on, and if you are on the one-level you go down by compactness to the zero-level, and you perturb by sections which lie on the one-level. The multi-section functors are a priori given by sections of the bundle, but the images of these sections actually lie in that better part of the space — that is how compact perturbations are built into the whole thing. And then the norm should have this property: if you forget the stuff over the trivial cylinders and compute the norm — or even throw that component away, since you more or less don't need the trivial cylinder component — you see the same measurement. And of course, if you have a disconnected non-trivial configuration and move the pieces against each other, you again see the maximum of the measurements of those pieces. Doesn't that contradict the formula up there — there's a plus right there? A plus? No — which one do you mean, this or that? Oh, no, no: these are non-trivial components.
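A hedged sketch of the two compatibility requirements just described, in my own notation (the subscripts only label the two factors of a fiber product, and pi is the projection killing the part of the zero-one form over the trivial-cylinder components):
\[
N_{(1)\oplus(2)}(e_1, e_2) \;=\; \max\big(N_{(1)}(e_1),\, N_{(2)}(e_2)\big), \qquad N \;=\; N \circ \pi .
\]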
Coming back to the non-trivial components: you take this configuration; there would be some cylinders here which you discard, and some cylinders there which you could discard, in order to compute the norm — and if you look at what happens over a cylinder, you take that. So this part only refers to the components which are non-trivial. Yes, these are the non-trivial components. Once you know that such a compatible reflexive auxiliary norm exists, the construction is rather easy: you take a local model — locally you have a retract in a Banach space — you take the norm, multiply by a partition of unity, restrict to the retract, and add everything up; the result has precisely this property. If you had to construct such a thing for a Banach manifold, this is precisely what you would do. Might we as well just use the thing you just said all the time, instead of the general auxiliary norm? Yeah — whenever I would construct an auxiliary norm I would construct it like this, and it would have the properties required of a general auxiliary norm; and if I do the same procedure with a reflexive fiber, it produces precisely the norms with the properties I want. But it turns out that reflexive auxiliary norms have some additional features which I haven't described, because with a reflexive fiber you can start talking about weak convergence in the fiber, and that is actually useful: these Fredholm operators have sort of a closed graph, so even if you don't have full convergence you can see that in some weak sense certain things still satisfy the equation, and that means you get a better organization of compactness. But as I said, one needs some time to explain it. Once you have such an auxiliary norm, we can measure the size of a multi-section functor as follows. Since these are functors, I can pass to the orbit spaces, and then you get a function on the orbit space of stable maps by taking the maximum of N over all vectors on which the multisection is non-zero — basically I measure the maximum norm of the different branches which I have locally. Can you remind me what sc-plus means? Sc-plus means it's a section functor whose local picture is given by sc-plus sections: they go into the better, one-level fiber. Hold on — there was a dictionary; you should just think of compact perturbations. Yes — thank you, Katrin — but let me nevertheless say it here. This is the polyfold bundle, the strong bundle, and you have this filtration with k less than or equal to m plus one. The Cauchy-Riemann operator goes from the m-level to the m-level, but there is one better fiber level, m plus one. You can look at sections which go into that level, and these are sort of the compact perturbations, because they have the better regularity, and you can go down to the m-level and get compactness. The sc-plus multisections are locally given by sections of that kind: I have local sections, and this norm takes real, finite, values precisely on them.
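In symbols, the resulting size of a multisection functor — a hedged sketch, notation mine (Lambda the sc-plus multisection functor, E_alpha the fiber over an object alpha representing the class [alpha]):
\[
|\Lambda|_N\big([\alpha]\big) \;:=\; \max\{\, N(e) \;:\; e \in E_\alpha,\ \Lambda(e) > 0 \,\},
\]
which is well defined on the orbit space because isomorphic objects give the same value.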
So when I have this picture, with the local section structure, I just look at the maximum norm I see. Since isomorphic objects have the same norm, this gives a function telling me the maximum norm I see over each isomorphism class. Michael, happy? Well, you looked unhappy. So, we fix such a norm, and now we also have to exhibit a compatible open neighborhood U of the coarse moduli space of J-holomorphic curves, so that we can guarantee certain compactness properties for the particular problem. What I want to say, for example, is: there is a neighborhood of this moduli space such that if I have a perturbation supported in this neighborhood whose norm measurement is small, then the perturbed solution space is automatically compact. Of course, if the neighborhood is too stupid, it is not compatible with all the operations I have — if I construct something in the neighborhood, bring it to the boundary, and that boundary part doesn't belong to the neighborhood at the next level, then I'm stuck. So if I have a neighborhood of the solution space, say at the level which never degenerates, it gives a product on the boundary of the next level, and that should in some sense be the trace of the neighborhood I have there. Are you with me? Just a point of definition: is M_J the space of objects that are actually J-holomorphic, or the zero set of the d-bar section? These are just the original pseudoholomorphic objects — the ones on which the d-bar operator vanishes. Right. And this is a closed subspace which on each connected component is compact. So you require Gromov compactness as input? Yes. Okay. Now, compatibility. The compatibility is best explained by the hierarchy I use for the inductive constructions. Take the set of connected components of Z; it splits into a disjoint union of those which never degenerate further, so they only contain buildings of height one; then those containing buildings of maximal height two — components where the maximum is two, so there is a building of height two with top floor number one — and so on. That is also how the induction goes, until you come to the objects you actually want to count, and then you start the scheme. There are some intermediate steps which are interesting to look at. This is sort of the mother of all schemes: for certain specific problems and subcategories you might be able to extract more structure, but then you have to adapt the perturbation procedure to that; so this is not written in stone, but it always works, and if you want finer structure you should modify it on certain subcategories. Then we have the forgetful functor which forgets trivial cylinder buildings, and this actually maps connected components to connected components, preserving this decomposition: I take a component, objects with some trivial cylinders, forget the trivial cylinders, and in the image I get another component.
Let me write, if U is a subset of the orbit space of S, the piece over a connected component A as U_A — a lot of this stuff has the connected components appearing as indices. An open subset U is compatible provided the following hold. First, if after forgetting trivial cylinder components a component A' maps to A, then the open set over A' should just be the preimage of U_A; that means the neighborhood is not restricted at all in the trivial cylinder directions. I don't have to restrict there, because I'm not going to perturb over the trivial cylinders — the perturbation there is zero — so I will always find my holomorphic cylinder; I can be generous on that side. Second, if the objects in A have more than one non-trivial component, then U_A is invariant under the local family structure, when I move the pieces against each other. Third, for any A and any face of A: if I take the category with objects in U, form the fiber product — which is again such a category — pass to its orbit space, and pull back by the covering functor, then I get the restriction of U_A. Which basically means: if I have an open set here and an open set there, take the fiber product and view it as a boundary face, then the neighborhood I had on the full component, restricted to this boundary face, is precisely the one I already had, lifted up from below. Is that clear? So in the inductive procedure you start at level zero; you have two open neighborhoods of your moduli spaces; you go to the next level, where there is a face; the face pulls back the product of the two neighborhoods; so on the boundary you have a product of two neighborhoods, and this should be the restriction of the neighborhood I have on that component. Okay. So, given our reflexive — this is important, but you can ignore it for the most part; I just want to say that the statement later would be wrong without that word, and I want to make citable statements. It's a subtle thing you wouldn't notice — if I hadn't told you, you wouldn't have figured it out — but I want to say it. On the level of the lies I have heard in this course, I could have just not mentioned it. Are you alluding to us? No, I was referring to other speakers — the usual lies one tells to get a point across; it's an approximation, that's what I meant, and I didn't want to use an approximation here. So: if you have an open subset U of the orbit space and the auxiliary norm is N, define U_N to consist of all isomorphism classes of objects belonging to U for which the auxiliary norm applied to the Cauchy-Riemann operator is less than or equal to one, and for which the trivial cylinders occurring on any floor are J-holomorphic. So these black things here are J-holomorphic cylinders, and this part here carries some data such that if I apply the Cauchy-Riemann operator and evaluate its norm, the result is less than or equal to one. Is that clear? You have an open subset of isomorphism classes of objects and an auxiliary norm.
So, associated to this open set you take all objects in it for which the trivial cylinders occurring on every floor are J-holomorphic — which means that when you apply the d-bar operator there is no contribution from the trivial cylinders — and then you apply N and require the result to be at most one. What is the point of the condition on the trivial cylinders — is that an open condition? Well, no — you're right, it's not: U is open, I give you an open set, but I don't claim U_N is open; U_N is definitely not open. Because of the "less than or equal to one"? Because of that condition, yes — and even with strict inequality it wouldn't be, because of the trivial cylinder condition; if I dropped that, I would have to allow basically every cylinder, or a neighborhood of those guys. Why can I not allow that? The point is that the perturbations I use will precisely produce this, since I will never perturb over the cylinders. You never perturb over the cylinders? Over the cylinders, yes, never. So why do you ever put trivial cylinders into the polyfold at all, if they're not J-holomorphic? Because if I have a building — suppose three floors — with a trivial cylinder in the middle, and I do the gluing construction, I need the variation of those guys; otherwise there is no gluing. Think of this picture: something non-trivial here, something non-trivial there, but on the middle floor a trivial cylinder, and then something happens below. When I glue, at the moment of gluing this piece will be close to the trivial cylinder, but it starts deviating, so I need function space there — I need to be able to vary the function on the middle piece of the broken configuration in order to actually get a Fredholm problem; otherwise you don't have enough input. The middle level, if you look at just the middle level, is a trivial cylinder building. And you're saying you want the boundary stratum of your polyfold to be a product of the polyfolds you used before? Yes, of course. Where is that set up? It's implicit in the setup. I always distinguish between trivial cylinder components and J-holomorphic cylinders: a trivial cylinder component just has the same orbit at top and bottom and is homotopic to a J-holomorphic cylinder; and I think there is a condition that the map on such a building should be — ah, surjective — yes, surjective, in the appropriate sense. Okay. So now you have the following strong compactness property: given the reflexive compatible auxiliary norm, there exists an open neighborhood U of the coarse moduli space — the unperturbed J-holomorphic curve moduli space — such that the closure of the associated U_N, which I just described, has the following property: for every connected component A, the intersection with A is compact.
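As a hedged summary of the definition and the compactness statement, in notation of my own:
\[
U_N \;:=\; \big\{\, [\alpha] \in U \;:\; \text{all trivial-cylinder components of } \alpha \text{ are } J\text{-holomorphic and } N\big(\bar\partial_J(\alpha)\big) \le 1 \,\big\},
\]
and the theorem asserts that U can be chosen compatibly with the hierarchy so that, for every connected component A of the orbit space, the intersection of the closure of U_N with A is compact.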
So you can choose neighborhoods with precisely the properties I described, which behave well under the inductive process, and you are guaranteed compactness of the closure of the associated U_N. What does that tell you? It tells you that in the inductive argument the neighborhoods you work in, where they appear as products, are the restrictions of neighborhoods you already had; and when you then extend the perturbation, its pullback by the covering functor — because the norm of a pair is, by definition, the maximum of the two — has the same norm on the boundary, so it stays less than one. Say you take a sequence of epsilon_i's less than one, starting with epsilon_0: you perturb by something smaller than epsilon_0, you get something on the boundary smaller than epsilon_0, extend by something smaller than epsilon_1, on the next level get something smaller than epsilon_1, extend smaller than epsilon_2, always controlled by this neighborhood — so compactness never goes away. But I'll explain that in more detail. If I have such a neighborhood U with this property, I say that N and U, as in the theorem, control compactness — I think that's a good phrase. Can you say something about why the theorem is true? Yes: because of reflexive auxiliary norms. It is almost true in general, but there is a certain problem in the transition when you lift up via the covering and then extend to the boundary, so you need a somewhat subtle argument. I wouldn't bet my life on it — nor your life, nor anybody's life in this room — whether such a property holds for arbitrary auxiliary norms; it could be possible. Everybody happy now? Okay. Now here is one issue which one should point out, because one easily forgets about it. Suppose I have a perturbation below supported in U, and another supported in some U', and I take the product and go to the boundary: this is not supported in U cross U'. If you take the product of two compactly supported perturbations, the product is generally not compactly supported — you could be inside the compact support in one factor and outside in the other, and the product is still non-trivial. Can you draw a picture? Sure: if you have a line with the support here, crossed with a line with the support there, then in the plane the support looks like this cross shape — but the solution set of the product of course still lies in here. I mean: take a function on one factor and a function on the other; if the supports are here and here, then for the product the support is this cross. Isn't the product zero off that region? No, no — it's a function from R to R crossed with a function from R to R, so you get a map from R^2 to R^2; it's a Cartesian product. Yeah, a Cartesian product — that's what I have.
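The cross picture, written out as a small self-contained computation (f and g are any compactly supported functions, nothing specific from the talk):
\[
f, g : \mathbb{R} \to \mathbb{R}, \qquad (f \times g)(x,y) := \big(f(x), g(y)\big),
\]
\[
\operatorname{supp}(f \times g) \;=\; \overline{\{(x,y) : (f(x),g(y)) \neq (0,0)\}} \;=\; \big(\operatorname{supp} f \times \mathbb{R}\big) \cup \big(\mathbb{R} \times \operatorname{supp} g\big),
\]
which is the unbounded cross-shaped region, even though both supports are compact. The product solution set, on the other hand, is the product of the two solution sets and stays compact whenever each factor's solution set is compact.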
But it's like you're adding the functions, right? No, no — you have a Fredholm problem here with a compactly supported perturbation, say in infinite dimensions, and the product Fredholm problem will not be compactly supported. Everybody overlooks this, so you find it in a lot of papers: people sort of think that compact support times compact support is compact support — it's a generally accepted "fact". Where do you imagine the solution set is contained? The solution set is of course the product of the two solution sets, so if you know that for each Fredholm problem the solution set is compact, then the product is compact. Again, what does that picture mean? You want to know what it means? Suppose you have a map from the real line to the real line with support here, and another such map with support there. Take the Cartesian product of the two: you get the map sending (x, y) to (f(x), g(y)), a map from R^2 to R^2, and its support is the cross — and that is precisely what we have here. Sorry, can you explain the notation there — what is n sub lambda, what is t dot lambda? I did define it: t dot lambda is the rescaling of the multisection by t, multiplying the branches by t. And U_N — it was there — consists of all isomorphism classes in U such that this holds and the trivial cylinders are J-holomorphic. And little n sub lambda? That is just the maximal norm you see for your section: you have a multi-section functor, you go down to the orbit space, and over an object you look at the maximal norm of one of the branches; that is invariant under isomorphisms, so you get a function on the orbit space. So now, if you are in this situation and you know — let me give it in a simple case — suppose your Fredholm problem is this, you take a perturbation, and you know that for this perturbation the solution set, where you allow any t between 0 and 1 in front of it, is compact. So you can switch the perturbation off; that is a slightly stronger assumption than just assuming the perturbed solution set is compact, but in this kind of setup it is always true. Then take the product, which would not have compact support: take two such perturbations, suppose s has bounded support and s' has bounded support; the same thing happens as above, but now with two parameters t and s. But because you can homotope the perturbation away, you can put a cutoff function in front of it which is identically one on the relevant set and cuts everything off around it, and that does not influence the solution set. Why are the two parameters needed there? That part is actually not even important.
What is important is this: when you take the product, just put a cutoff function in front of the perturbation — suppose the second problem was this times s', so here is x times y. Because of that switch-off property and the controlled compactness, if you take the set of solutions of this problem for the parameter between minus one and one, you can actually find a cutoff function which is identically one on the solution set, and you cut off outside. So there is a different perturbation which actually has bounded support, with the same solution set and the same linearizations along the solutions. So there is only this marginal difference. However, if I required bounded support, then the nice multiplicative property would go away: my multisections are defined so that on a fiber product they are the product, and so on, and that is precisely the one with the cross-shaped support; but there is another perturbation inside it, with bounded support, which does the same job. So that is a small point one has to observe: without loss of generality you may assume, as far as compactness is concerned, that the support is in U, and then we have compactness. For this you need that switch-off property, and that we have, because the statement about U_N says exactly this: if you perturb by t times a section of norm less than one, for t between minus one and one, you have that property — it's built in. Of course I could have gotten away without saying this, because nobody really cares about that fact, but it is good to know. So for the following argument I will always say that the support is in U, because I could modify the perturbation to make it so. For the following, then, we assume we have a compatible auxiliary norm, a compatible neighborhood of the moduli space, and that (N, U) controls compactness. Since the local models are built on sc-Hilbert spaces, we have smooth partitions of unity with locally finite supports; these can be used to patch together and construct the sc-plus multi-section functors — I explained last time how to construct them — and you can construct them controlling derivatives and whatever else you want. Whatever you can dream up in finite dimensions regarding transversality, you can do on this level, because you have the same amount of freedom as far as perturbations are concerned. On the previous slide, when you said n sub lambda, is that the maximum norm? Yes: n sub lambda less than one means that at every point, at every isomorphism class, the value is less than one. Over all of this? If this is less than one — on what? — everywhere, on the whole of S; and in addition it has this property. An example would be the product perturbation from before: it is non-trivial on a rather large set, but it has this property. And the inequality involving t dot lambda — is that just lambda rescaled by t?
No, no — that is precisely this condition for each branch: it is just the rescaling of the branches, with t in front. It is not t times the norm; t dot lambda is the rescaling of the local sections by multiplying them by t. And how is the support of lambda supposed to be contained in U — is that not automatic? No — this is still addressing the point I started with: if you have this property, then even if you arrive in the bad cross-support situation, you can find a new perturbation with precisely the same solution set and the same linearizations whose support is in U. So from now on I forget about this subtlety, and in the inductive procedure I will always say the support is in U, which is literally not quite true, but can be arranged compatibly with the other structures: since you can cut off, once you are a little bit inside, you just immediately decrease the perturbation to something whose support lies in U. Okay. And what I just said works in infinite dimensions, for these sections. Now here is the perturbation algorithm, the following result: given a compatible (N, U) — the things we just constructed — there exists, for every epsilon between 0 and 1, a compatible sc-plus multi-section functor (compatible meaning all the things we ever discussed), controlled by N and U, satisfying that the norm is less than epsilon, such that lambda and d-bar are in general position. That is the basic result; here comes the proof. Let me define pi_0 up to level j as the union of all connected components in which the maximal occurring floor number is j — that is how we do the induction. We say A has at least two non-trivial components if there exists an object alpha in A with degeneracy index d_alpha = 0 having two non-trivial components — we know this notion already. We say two classes are related if their images in pi_0, after forgetting trivial cylinder buildings, are the same: the objects just differ by shedding a certain number of trivial cylinder buildings. Then E up to level j is just E restricted to this part, and S up to level j stands for the full subcategory associated to all objects whose isomorphism class lies in some component at these levels. Lambda_j is an sc-plus multi-section functor defined on this, and the inductive procedure constructs a lambda_0, then goes to the next step, extending what was there before, then to the next level, and so on. Inductive statement: given a strictly increasing sequence epsilon_j in (0, epsilon), there exists for every j an sc-plus multi-section functor with the following properties. One level up, restricted to the previous level, it is the one already constructed. The norm of the functor up to level j is at most epsilon_j. Its restriction to the boundary at level j is the pullback of the data constructed up to level j minus one, because the boundary faces come from the stuff below.
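For orientation — the enumeration of inductive properties continues just below — here is a hedged symbolic summary of the basic result being proved, in notation of my own:
\[
\forall\,\varepsilon \in (0,1)\ \ \exists\ \Lambda\ \text{sc}^+\text{-multisection functor}: \qquad |\Lambda|_N < \varepsilon,\quad \operatorname{supp}\Lambda \subset U,\quad (\bar\partial_J,\Lambda)\ \text{in general position},
\]
with Lambda compatible with the covering functors on boundary faces, trivial over trivial-cylinder components, and built floor by floor so that the piece up to floor number j has norm at most epsilon_j, for a chosen strictly increasing sequence of epsilon_j in (0, epsilon).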
Then, restricted to E_j, this has to hold — which says precisely that I don't perturb over the cylinder components — and it is compatible with the local family structure (for some reason that should be blue but is black, I don't know); the compactness of lambda_j is controlled by (N, U) over the closed subset of the orbit space associated to the connected components up to level j; and these things are in general position over it. So if you can go to the next step of the induction, you have a perturbation which gives compactness and everything. A lot of statements, but basically it says: okay, let's start. The first step of the induction is easy. Take first a connected component which does not represent objects with trivial cylinder components — these are the things which cannot degenerate further. I have an open neighborhood around the solution set which lies in A, and of course I can have only one non-trivial component, because if I had two, I could shift them against each other and get a building with two floors, hence boundary strata; so if I exclude trivial cylinder components, I only have one connected component. Then you find a small perturbation lambda_A with norm less than epsilon_0, so that the pair over S_A is in general position and the support of the sections occurring there is in U, the neighborhood I constructed. This is the basic statement: if you have a Fredholm operator with compact solution set, you take a neighborhood of it, a perturbation supported in this neighborhood, you wiggle a little, and the thing becomes transversal — the classical Fredholm result, just like that. Now take a component related to A, meaning it differs from A by having trivial cylinder components. You define the perturbation there by taking the perturbation you had on A, extended so that over a trivial cylinder there is nothing. This is obviously transversal as well, because over the cylinders you have automatic transversality and otherwise it is the previous situation; and the support lies in U intersected with A', because by the compatibility condition there was no constraint whatsoever with respect to the cylinders. The collection of these defines lambda_0, and it satisfies the inductive statement at level zero. That is straightforward. Before you move on, can you briefly say what the theorem is saying? Okay, sure — oh my God. It says that in a consistent way I can perturb on each connected component: there is a global sc-plus multi-section functor which puts the Fredholm problem in general position and has the additional compatibility conditions — when a component occurs as a fiber product of others, the perturbation there is the pullback of what was constructed on those; if I have a building with two non-trivial components and I move them against each other, a solution stays a solution; and if I take a solution and add J-holomorphic trivial cylinders to it, it is also a solution.
So if you have this thing and you compose this with the Cauchy-Riemann operator, you get a smooth weighted subcategory which locally can be represented by this finite manifold. That is precisely what it says. And these are the things you can integrate over if you have orientations and so on. So that is precisely where you can sort of do what is in most papers when people normally are like we could do with transversality and then they end up at this point, then they say what they would do if things would have been generic. So there's a lot of documentation about what you have to do. So if you have this, you can basically actually take this paper about SF-t and just start implementing the things rigorously. Okay, is that clear? Okay, so now... So assume this is constructed with this required property so the norm is for example, less or equals epsilon n. So now we pick a component A so one level higher for which the objects are connected. So which means there is... If you look at a building of height one, it looks like this, of course on the boundary it might have to compose this but it's still connected in the boundary. So now consider the pullback via f theta. These are the covering functors of the previous perturbation for all theta in the phase of A. So now for each... So the theta, that's... So phase comes from the pullback, comes from a pullback of two problems which are in general position. And then, so for example, if you have two level one buildings, you have two of height one and they are in general position on the boundary then, then you have still the normal direction to the boundary and you perturb it to make it in general position in the neighborhood with respect to the full space. Actually don't even... So near the boundary you wouldn't even need this because it's already onto there. And then you extend it. So if you go... So there's one thing one should point out also that if you are... So there are certain things one should think about it but they will not cause any difficulty. So I started in the induction with the components which don't compose further and obviously I can achieve general position there. Then I take a product of two of such guys. I'm on an honest boundary. So if I have a product of two things whose linearization is surjective then if I just and pull it back then on the boundary viewed as an operator with respect to the boundaries already surjective. And then you can extend it somewhat nearby it would be surjective and then inside you have to wiggle a little bit to make it surjective there. Then so now what happens if I'm at a corner. So when I have a corner I cannot argue like this then this part is restricted to this surjective and restricted to this is surjective but if you linearize with respect to this direction this direction you know precisely what the linearization here is. You cannot say now I have an additional parameter to help you to make it surjective. But so you have to look what the linearization is and it's automatically surjective. So then the restriction to previous things are surjective so it's in general position so everything's fine near and you extend it. Oh, oh, I'm sorry. Okay so I'm astonished how short the proof is. So then we take the phases and pull back perturbations. So this first is our trivial cylinder components but okay so I was here. Extend to related as before related means I add trivial cylinders. 
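Purely as a reading aid, here is a toy bookkeeping sketch in Python of the inductive scheme just described: induct over the maximal floor number, give base components a classical small-norm Fredholm perturbation, let components that differ only by trivial cylinders inherit that perturbation extended by zero, and let higher components pull back boundary data from below before being wiggled in the interior. Every name and data structure here is invented for illustration; none of this is the actual polyfold construction, it only mirrors its ordering.

from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    floors: int                          # maximal floor number of its buildings
    boundary_of: list = field(default_factory=list)  # lower components its faces come from
    related_to: str = ""                 # same component up to trivial cylinders, if any

def perturb_in_general_position(comp, bound):
    # stand-in for the local Fredholm perturbation with norm < bound and support in U
    return {"component": comp.name, "norm_bound": bound, "support": "U"}

def pullback_from_boundary(lams, comp):
    # stand-in: boundary faces inherit the perturbations already constructed below
    return {"pulled_back_from": [c.name for c in comp.boundary_of]}

def extend_by_trivial_cylinders(lam, comp):
    # no perturbation over trivial cylinders: extend the old perturbation by zero there
    out = dict(lam)
    out["component"] = comp.name
    out["trivial_cylinders"] = "unperturbed"
    return out

def build_perturbation(components, eps):
    lams = {}
    for j in sorted({c.floors for c in components}):          # induction over floor number
        for comp in [c for c in components if c.floors == j]:
            if comp.related_to:                                # differs only by trivial cylinders
                lams[comp.name] = extend_by_trivial_cylinders(lams[comp.related_to], comp)
            elif comp.boundary_of:                             # faces determined from below
                base = pullback_from_boundary(lams, comp)
                lams[comp.name] = {**base, **perturb_in_general_position(comp, eps[j])}
            else:                                              # base case: classical Fredholm wiggle
                lams[comp.name] = perturb_in_general_position(comp, eps[j])
    return lams

if __name__ == "__main__":
    A  = Component("A", floors=0)
    A2 = Component("A'", floors=0, related_to="A")
    B  = Component("B", floors=1, boundary_of=[A])
    print(build_perturbation([A, A2, B], eps={0: 0.25, 1: 0.5}))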
So now take a component, also on the same level, without trivial cylinder buildings but with disconnected components. Then you study the faces, take the pullback of the perturbations from the corresponding E's, and so on, and it turns out that this thing here — just because you can decompose it into its disconnected components, which I do — is just a product; that relation was already true on the boundary. So just extend it. Then you have to verify that this is smooth — but that one was smooth, so you just write down what that means. So then this is defined, and these buildings here I extend by adding trivial cylinders. Now, when I take products and so on, there is this point with the difficulty which I discussed before; but if you believe me that I can modify things and bring them into something which has support in U, then this procedure will always produce multi-sections which have support in U and are small, and the a priori compactness result says that my solution spaces are compact. So this is one step, and this is an extension of the previous one, and so on — so now you have transversality. So now, remarks — and I started 20 minutes late because of you. Six. Okay. I actually had two endings for my talk, and one ending was to show you the homotopy argument between two perturbations, which is actually more interesting than this one — this one is rather straightforward. You cannot have general position when you homotope from one perturbation to the other, and the reason is that things are t-dependent. On the zero level you can homotope nicely, but you also want to homotope, for example, in such a way that if you look at the t-projection you only see Morse-type singularities. So I look at the moduli space, which is where you want the Morse-type singularities, for example. Then you take the product and go to the next level, but then of course the perturbation is a fiber product with respect to t. That causes some constraints in general, and that prevents you from putting things into general position, because of all the other structures which you have. And the problem is precisely situations like this: if you have two different components below, then when the moduli space has the t-direction you can arrange that you only see Morse-type singularities, and on components which are really different you can make sure by genericity that the critical levels are different. But if you have a component and you add trivial cylinders to it, the Morse function is the same. So this thing already produces the same critical point — a critical point like this. Suppose you have this thing here, say beta, and this is a periodic orbit gamma. Then you can glue that thing onto it, so you have beta here, gamma, and you put a cylinder on this. Now you see that this thing plus the cylinder here is related to that object: it differs from it only by adding a trivial cylinder and gluing it on. So the perturbation you see here is the same perturbation you see there, and you build a fiber product — that causes immediate difficulties. But the deficiency in the cokernel is one, and you have a normal direction: if you glue these two things you can kill it, but you have to find out what a good perturbation is, because when you iterate this process, precisely this problem kicks in.
So the result of this is that you cannot achieve general position, but you can achieve identifiable garbage plus good position, and you only have to integrate over the good stuff. The identifiable garbage is something like this: you might have a solution here, but where would the solution space be? It would be something like this — it lies outside; the stuff you see has too small a dimension. The other stuff looks like this. — So you just throw the garbage out? — You can throw the garbage out, yes, that's what you do. If you're just interested in zero-dimensional components, what happens is precisely that you might have this picture, where technically it looks like there would be something lying outside of the space. That doesn't play a role; and then you have something like this, which is good — which of course in general position would have been something like that, so you come out of a corner. Okay, so there one has to actually explain more about the transversality during the homotopy; that was one ending, but after my disastrous first lecture only half of it came through, so I had to cancel that plan, and presumably I have to cancel that plan here as well, because our chairman is really tough. So, the remarks — and then I stop. Yeah, okay. So let me just take one minute — we have a discussion session, but I have to prepare everybody for the discussion. At that point you want to introduce invariants. So what you have to do is work out the orientation issues, and you see that there are some subtleties, and the best thing is to go to a covering — which is precisely what is in the SFT paper — where you number the punctures, you put asymptotic markers, you put marked points on the periodic orbits and impose some requirements. Then you see that you can basically orient everything, except for the occurrence of bad orbits. And then you can write down integrals, and you get the relations from how the moduli spaces relate the data, and then you can represent the data as we said in that paper. That's the remark. Quick questions? Yes. — So, you're saying you want an ambient polyfold in which the constant cylinders sit. In my world, what I'm hearing when I hear that is that you want local slices to the R-action on maps from R times S1 to a manifold M, near the ones that — I mean, you definitely have to — okay. — So when I look at stable maps, then I would have maps from — The question is: what is the ambient polyfold just for the trivial cylinder? There isn't one; it never occurs by itself, it only occurs in combination with something else. — But there is one, right? And it's sort of — No: the construction you have is not a product construction. You cannot have a trivial cylinder by itself, because it is an unstable object. A trivial cylinder always occurs together with a non-trivial component. And when I construct a global slice, it is not a product of a construction on the cylinder and one on the other part; it takes data from both and mixes them up. It's not a product. So trivial cylinders never appear alone, because they are unstable; they can only appear together with a non-trivial object. And then when I make the slice with respect to the R-action, you fix, say, the non-trivial part, but you can slide the cylinder against it. — Exactly. — Yes, yes.
And that has to be built in, but it's not a product situation. Because you also have to take a transversal constraint; you have to divide out the automorphism group for the cylinder as well. So there are several things which I have to do at the same time, and they all go into the construction of the uniformizers. — So let me get back to this: you are saying there is not an ambient polyfold in which just the trivial cylinder sits. — No. — All right. Okay, I suggest that we break out of our precious rubric. Thank you.
Topics: 1 Perturbation algorithm in the homogeneous case. 2 Perturbation algorithm for relating two perturbations. 3 Remarks on orientations. 4 Representation theory and SFT.
10.5446/16296 (DOI)
Thank you very much, Nate. So, recap — where we are so far. We have the category of stable maps. Then we showed that the subcategory of pseudoholomorphic objects is given by a theta which associates the weight one to such an object and otherwise zero. And the idea is to deform this theta_J into a theta so that a certain number of properties hold; we discussed that in the first lecture. Then, in the second lecture, we showed that there is a smooth structure on this category and explained what that means. And what we would like to find are smooth things which do this. Now, how do we get them? That is going to start happening in this lecture. We will define another category lying over it, for which the Cauchy-Riemann operator gives a section functor. The fiber over each object will actually be a Hilbert space, and the Cauchy-Riemann operator can be viewed as a section of this. And then the theta will be obtained from another kind of functor, this time defined here, which you can view as multi-sections. And I will explain all this. Here we already know there are some kind of smooth functors; there will also be some kind of smooth functors here. And if you choose this thing in general position, satisfying some properties like these — but we have to formalize them for these objects — then this one will actually have these properties, and it will be of a good type, namely of the smooth weighted category type. So locally it is sort of represented by manifolds, divided out, so it is good enough that you can actually integrate forms over it and so on, and actually define SFT. But there is something we will have to discuss, namely orientations. And orientations are better handled by actually going to a covering, where one introduces numberings of the punctures and so on. Okay, so that is what we did so far. Okay, so this bundle category. We have our category of stable maps, and we take a functor into Hilbert spaces. When I formulate things, I usually give some general formulation, but think of the category of stable maps — you can think of other categories to put in; the scheme works for a lot of theories. So we associate to each object a Hilbert space — two different objects might get two different Hilbert spaces — and the morphisms are lifted to linear isomorphisms between the fibers. Then you can define a new category: the objects are pairs of an object and a vector which lies over that object in the Hilbert space. And what are the morphisms? A morphism is a pair (phi, e): phi is a morphism in your category of stable maps, and e belongs to the Hilbert space lying over the source of phi — say the source of phi is alpha. And what is the target of this morphism? It is just the vector obtained by applying the lift to the original vector e. So you lift each morphism to a linear map between the fibers, the objects are the vectors in the fibers, and the target is the image under this linear map. So we have — I wonder who did that. Okay, I don't want to destroy this piece of art. Okay. So here we have the objects alpha and alpha prime, here is one zero, the other zero, there is a morphism phi between these guys, and we have a vector e. And the lift mu(phi) is a linear isomorphism which maps this here to mu(phi) of e.
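A minimal toy model of this bundle category, with finite-dimensional vector spaces standing in for the Hilbert-space fibers: an object carries a fiber, a morphism carries a linear lift, and the bundle category's morphism (phi, e) goes from (source, e) to (target, mu(phi)e). All names and the use of numpy matrices as lifts are purely illustrative assumptions; the real fibers are spaces of (0,1)-forms with the regularity and decay described below.

import numpy as np
from dataclasses import dataclass

@dataclass(frozen=True)
class Obj:            # an object alpha of the base category S
    name: str
    fiber_dim: int    # dimension of the toy fiber standing in for H(alpha)

@dataclass
class Mor:            # a morphism phi: source -> target of S ...
    source: Obj
    target: Obj
    lift: np.ndarray  # ... lifted to a linear isomorphism mu(phi): H(source) -> H(target)

@dataclass
class BundleObj:      # an object (alpha, e) of the bundle category E
    base: Obj
    vector: np.ndarray

def bundle_morphism(phi: Mor, e: np.ndarray):
    """The morphism (phi, e): (source, e) -> (target, mu(phi) e)."""
    src = BundleObj(phi.source, e)
    tgt = BundleObj(phi.target, phi.lift @ e)
    return src, tgt

if __name__ == "__main__":
    a, b = Obj("alpha", 2), Obj("alpha'", 2)
    phi = Mor(a, b, lift=np.array([[0.0, 1.0], [1.0, 0.0]]))   # some linear isomorphism
    src, tgt = bundle_morphism(phi, np.array([1.0, 2.0]))
    print(tgt.vector)                                          # mu(phi) applied to e: [2. 1.]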
And this thing here, so we can identify then a morphism for this bundle category with the underlying object and the vector e. So morphism is phi. And the source of this is e. And the target is the image which you get here. So. So in our, what do we do in our case? Well, if you have such, so that's a building of height one, let's say, then what do we do there? Then this, then here, you have an equivalence class of maps up to r action. So we take a representative. Now, you see, when you take the tangent space here, then because you have the r action here, you can just identify this with r cross the tangent space of the underlying v component of this map, yeah? So the vertical triddle has the r component and the v component. And our Hilbert space for this object consists of all maps which are complex until linear from the underlying agreement surface, that's a point z, into this thing which is identified. Since I take a representative of here mod r action, but the first factor is independently defined on which thing I take, yeah? And this map should have a certain regularity property, namely, it should be away from nodes of the class H2, yeah? And that is because the Kossier-Riemann section will act on H3 stuff, and it goes down to H2. And at the punctures, you have to also take, you take exponential decay of this thing. So if you take a puncture, and you take cylindrical holomorphic coordinates, you want exponential decay of the derivatives, partial derivatives up to order two. Which is precisely when you take this stable maps, which are asymptotic to cylinders and go, have quality three delta zero, so three times, three partial derivatives with exponential decay. And then if you apply the Kossier-Riemann operator, they would go to two derivatives with the same decay. So that is what you take. So that's the Hilbert space. And more generally, if you have a building, then over each of those, so this alpha, so you have alpha zero up to alpha, and over each of them, you take such a thing. It's clear then how this acts. Namely, if you have a zero of one form, you just take T, so then the, this morphisms come from bi-holomorphic maps between the buildings, and you map E to T phi inverse or something like this. Yeah, E composed with T phi inverse, the tangent map of the bi-holomorphic map inverse. Last question for this one. So you said that you wanted a bundle of other spaces, that your definition of what you see in bundle. It's not being given yet. But come, so let me, so, so, in this, so in my talk, so I first did the underlying space just as, as a category without smooth structure. Then we looked at all the relationships. Then I put a smooth structure on it, could say more about it. Now I put just an algebraic structure on this, discuss that, and then I put a smooth structure on this two things. And at that point you are on the level, we can just unleash some abstract perturbation results, which brings things in general position. So that's also structure of the talks. So here we inherit a lot of the structures, which comes from the stable map. So we had this input, this evaluation maps, functions E, E plus minus. Well, we just composite with this, well, with this projection down here. And then we get such an evaluation map for E. The grading, the grading we take from the underlying object. Then we can decompose this. And that is actually as before, except that this thing now has a little bit more structure, over each object. 
So it has a structure over each object, over each object that lies in Hilbert space. And the next thing is to lift the data from S to E. And the data from S to E means in particular the covering business that you had. So how does it look like? Actually rather trivial. So in the base we have this chopping functor. And then you just over each of these parts you have the zero one form and just put it forward. That's it. And it's linear, it's fiber wise, linear isomorphism. So I have these objects, which is a building and over each of these floors I have zero one form and I chop it here and just take that forward. And that is an isomorphism on the fibers. So it then satisfies precisely the relationships which we had. So that's basically completely on the nose. So you don't even have to think about it. And then there's a functor. Maybe if alpha is given, which consists of different buildings, you just for each map on each building, coming from each building, you just apply the co-chairman. So that's a functor. And so let's now think of, you might have noticed, looks really rather like this. The color is better. That is Joe's color. Yeah, well, I mean, it's sort of better than that one. Okay, no, let's not elaborate on this any further. So now let's first algebraically discuss what we can do here. So the idea is to perturb theta j and theta j, I have limits of my options here. So if I put a pseudo-holographic object into this, then this vanishes, and this thing gives to the 0 vector weight 1 and otherwise 0. So then that's precisely this one. So then we want to perturb this. And what we see here, this gives just the weight 1 for the 0 section. So the idea is now, say, if this is a 0 section locally, I want to sort of have a partition of unity of this and just move. So I've used a 0 section as several 0 sections, but with rational weight, and then I move this individual parts away to achieve transversality. That is what you ultimately want to do. So that's the idea. Of course, then you run. So this one will be perturbed, let's say. But when you move this away, you want to keep the symmetries. It should stay a functor. Now, for example, locally, you have the action of the automorphism group. This whole thing which you get, so it might be. So this will be turned into something like this. If this is a 0 section, so here's a base, S, and here's sort of E, it would sort of look like this, and you want to turn it into this. But you want that this whole stuff, these things each have a fractional weight. You want to want that this is invariant. Then of course, you do this at different places, so it becomes a little bit more messy. So if you go globally, so then the things which are constructed might be then bifurcated further off, and so on. So that is what you want to do, so you need to develop sort of a machinery to be able to pick such things with sort of, so you see if lambda 0 is replaced by some lambda which consists out of this section, then this is only positive if the Koucher-Riemann operator perturbed by such a section, set is 0. What does it mean? If this puts the weight on a graph of different sections, then this here will only, if I perturb this, only become positive if you solve Koucher-Riemann of alpha equals one of the things in the graph. And this you want to achieve transversely. And if this is transversal, then that actually will be a smooth object, a smooth function. So that's sort of the idea, and for this then you have to develop a little bit of machinery. 
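Here is a finite-dimensional caricature of "splitting the zero section into pieces with rational weights and moving them apart", just to fix the idea. The section s(x) = x*x plays the role of the Cauchy-Riemann operator (its zero at x = 0 is not transverse), and the multisection is supported on two branch sections, each with weight 1/2; solutions of the perturbed equations are counted with those weights. Everything here is an invented toy, not the actual construction on the stable-map category.

def s(x):            # the unperturbed section; its zero at x = 0 is degenerate
    return x * x

branches = [lambda x: 0.25, lambda x: 1.0]   # two branch sections of the multisection
weights  = [0.5, 0.5]                        # rational weights adding up to 1

def weighted_zero_set(xs, tol=1e-9):
    """Solutions of s(x) = p_i(x) on a sample grid, each counted with its branch weight."""
    sols = []
    for i, p in enumerate(branches):
        for x in xs:
            if abs(s(x) - p(x)) < tol:
                sols.append((x, weights[i]))
    return sols

if __name__ == "__main__":
    grid = [k / 100.0 for k in range(-200, 201)]
    sols = weighted_zero_set(grid)
    print(sols)                      # (+/-0.5, 0.5) from one branch, (+/-1.0, 0.5) from the other
    print(sum(w for _, w in sols))   # total weighted count: 2.0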
So is that clear sort of what the aim is? You have a chair, you are not asking questions. Okay, you're good. It's like the most basic possible question I should ask. What would happen if your perturbations were not on course? Well then you lose some symmetry, and I think there's still a theory, but it's not the theory we want to do. I think you can actually do some brutal, some really brutal stuff and ignore some of the structures that you can do. And that's another way to produce data. And then out of this, presumably you can produce some invariance. If they're interesting, I don't know. It's like if you do S1 invariant more theory, then you just forget the fact that it's S1 invariant and then you have usual more theory, something like that. So it's on that level. But if you work locally, then what the factor is doing is really saying you choose something locally and then it's consistent with something else. You choose to show it somewhere else. Yeah, so you have to keep that, but you might not. So in some sense when you patch it together, you want it to fit, but you might maybe, yeah. So it could fit in some slightly complicated way. It could be. Yeah, so you have to think about it because just saying now this should fit with that is easier to say with a whole lot of structure, which we have then actually saying I relax it somewhat locally. Like I could say it should not be invariant under the action of the isotropic group and then say it can match it up globally. Because the constructions are local, so you write as you'll see a perturbation as a sum of a lot of perturbations and you construct them locally and then you have to transport them all over the place by the morphisms, the local construction. But I think it's possible at least in a general framework to, if you have a criterion to forget some of structure. For example, you could definitely forget the structure that you want to, when you perturb that if you add trivial cylinders to it that they should just appear as pseudo-holographic cylinders. You could use them in your perturbation. So that would stay consistent as long as it's invariant under morphisms. You could also disregard the fact that things are disjoint unions. I mean if you have disjoint unions that if this is a solution this, you could also go away from this. However, I don't think for the latter one you would get a new theory because since you can do it there's at least a co-vortism from not doing it to the one doing it. And then I think when you arrange the data you might actually get just a lot of cancellations. But I haven't carried this out. So there are a lot of things you can think about. Okay so what are the requirements on lambda gauntings and desired properties for theta? And here they are. Actually they are even better to formalize than for theta. So first of all we have to define something on the fiber product. So if this on the fiber product things, so I ignore or don't write down that E zero lies over alpha zero. So then you just take the product of the things. You don't want any zero? Zero? Oh yeah so I want everything. So this I don't know was my keyboard I guess. I tapped the wrong button. So this is a zero sorry. So this is a restriction. Then we have this covering function. So here's the algebraic version of this which corresponds to the version of the first lecture. But this things can be also lined up according to the underlying phases you have. 
Yeah which was sort of lecture part two with a lot of confusion which hopefully decreased in the discussion session. So which would be the same formula. But so this is the algebraic version. So is the right hand side not equal to just taking this maximum breaking of any one object? Right but yes if you have this property yeah. So I mean if you, I mean this is okay so what this says is basic so it's maybe I should never write this formula. So what it says is I mean I guess mainly I'm asking is there some interesting sign that I need to be aware of. No it's just a lot of cancellations so ultimately there's one term if you have this. Let's say lambda is always plus one oh minus one the result here is always going to be plus one oh. So now here that is now important. So if I have an object alpha then it has associated remanent surface and then the remanent surface can be decomposed as the component. So let's first think a building of height one. The parts of the surface which carry the things which are not trivial cylinders and the things which carry trivial cylinders. And if you have a building a trivial cylinder building is just a line of those guys and so when you look at this thing you can see the trivial cylinder buildings and the rest of the components. So that's a natural decomposition and you have a forgetful functor namely it forgets the trivial cylinder buildings. We have that already so now there is first of all a Whitney type decomposition of E. So if I look at my Hilbert space and have a zero one form of it I can put this thing zero on the trivial cylinder building or I can put it zero on the complement. So this here is the part which is defined on the original building but it is zero over the trivial cylinder component and this one is perhaps non-zero over the trivial cylinder component but it's zero on its complement. So you have this decomposition. Here it's written. And I have a question. So if you have a two level building and if you have a two level building and it has no trivial buildings in it so it's non-trivial on every level and suppose on the bottom level you have a trivial cylinder and something non-trivial then what's the corresponding splitting? Is ETC just restricted to trivial cylinder buildings or restricted to all? Trivial, trivial. It would be your trivial cylinder buildings. So if I have something and then what you said say I have some non-trivial cylinder here and I have a trivial cylinder there and then I have this which is trivial cylinder. So it would put so I have a zero one form over this. So the ENTC would just put the value here zero. Okay, not on the bottom level. Not here but it turns out when you do inductive steps for actually constructing lumber then this bit already appeared earlier and then that is the thing was already having required properties here to begin with. And the trivial cylinder is something which is inner, it's sort of zero, it's a hard topic. Into a J-honomorphic cylinder. So here is a picture. So here is a lift of the little c. It just forgets the underlying trivial cylinder building and just restricts and gets this new object here in E. So before you just forgot about part of the stable map now you throw away part of your object in E. So now you have two functors. So one is pi is just the projection here of this Wittner decomposition on E on this. So in particular it preserves the underlying, it covers the identity on objects. 
But this one, C, does not cover the identity, it covers the forgetful function below where you actually throw away trivial cylinder components. Okay? Why are we having such a careful discussion of trivial cylinder components? Because of, if you want to SFT or want to, okay, so if you think of this here as a preparation of producing ultimately data by integrating forms of a components and so on, then when you, then the next step is what can I do with the data? Can I represent it as a chain complex or something like this? Then if you want this thing to have certain properties and in this case an algebra property, you have to, you have to, you have to discussion of the cylinders. They look of course completely trivial but since you make concatenation then you have, you add something into it and so on. So they play actually a non-trivial role. So that is why, because, so if you would disregard them in some way, let's assume you're also some theory but it would be different or possibly different. So now this is of course a projection here and here if the underlying object doesn't have trivial cylinder components then actually you have this identity obviously, yeah, because there's nothing to put zero. Okay, all this functor, so this is a, this is also a retraction, it's linear on the fibers, it covers the other retraction, so that's sort of the structure which we have here and this thing's commute in this way. And then what is important is that if you restrict C to the non-trivial cylinder part you get actually fiber-wise and isomorphism. That's actually important when I, that allows me to pull back perturbations by this because I have the linearity in the fiber, asomorphism in the fiber. So if you go through this list of things here you find that it's actually rather trivial in the concrete example but this kind of thing's how I write it, it's actually in all the problems like Fleur-Siri and so on, so just always this kind of structure. Okay, requirements. So there's this pullback operation and here, so what does that say? So lambda E should satisfy this. So if lambda E is positive that has to be this, so if this is positive this has to be one and what does that mean this is one? This means that over the trivial cylinder part the component of E is zero. What that means is I actually don't perturb over trivial cylinders. So if you don't perturb over trivial cylinders that means then D bar over trivial cylinder is zero which means it's actually the pseudo-holomorphic cylinder. I don't have to perturb there. Is that clear? So if lambda E is positive, here if lambda E lambda E here then this one has to be one. So this here is a part of E over a trivial cylinder building and lambda zero is our original thing which puts weight one on the zero section and otherwise zero. So if this is one this means this is a zero vector and that means there's no, the E over a trivial cylinder component is zero. So then the D bar part on this if D bar is equals that E over a trivial cylinder it's a pseudo-holomorphic cylinder. So that you see how it already produces one of the properties of our theta. So I'm going back to that picture. For the cylinder that's in the middle of the bottom level, can that be perturbed? No, because of the inductive nature of things. So on some levels like a level one building of course that is something which would satisfy this property. So whatever you construct in the perturbation because of this algorithm it will actually not perturb over trivial cylinders. 
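As a tiny illustration of the compatibility condition just stated — a branch with positive weight must vanish over every trivial cylinder component, so that the perturbed equation leaves trivial cylinders pseudoholomorphic — here is a toy check. The component-indexed data layout is an assumption made purely for the sketch.

def lambda_zero(v):
    """The unperturbed multisection: weight 1 on the zero vector, 0 otherwise."""
    return 1.0 if all(x == 0.0 for x in v) else 0.0

def respects_trivial_cylinders(branch, trivial_cylinder_components):
    """branch: dict component_name -> tuple of fiber coordinates over that component."""
    e_tc = [x for c in trivial_cylinder_components for x in branch[c]]
    return lambda_zero(e_tc) == 1.0        # i.e. no perturbation over trivial cylinders

if __name__ == "__main__":
    good = {"plane": (0.3, -0.1), "trivial_cyl": (0.0, 0.0)}
    bad  = {"plane": (0.3, -0.1), "trivial_cyl": (0.2, 0.0)}
    print(respects_trivial_cylinders(good, ["trivial_cyl"]))   # True
    print(respects_trivial_cylinders(bad,  ["trivial_cyl"]))   # False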
So you would always get pseudo-holomorphic cylinders out after the perturbation. So then of course there's sort of what we had before if I have two stable buildings and then I put them together I can move to engage it other than you want that property here. If I take one of the representatives. So now, so what does that now mean? If this is, let's just discuss what does that mean if this is positive on an object? So first of all it means there exists a rational number sigma positive, a vector in the fiber over that object so that lambda of E is E and alpha satisfies this equation here with a weight sigma. Sigma is the number associated to this object alpha. So now if alpha is actually a building of height k plus one so top floor is k then the sigma can be written as a product of positive rational numbers and each of the alpha i's set and this E then of course is a sequence from E0 to Ek and each of those satisfies this equation with a weight sigma i. What do you mean by weight sigma i? Yeah so if I, so if you look at, so what is the interpretation of lambda composed with, so it means since in the fiber of different vectors that it satisfies one of the equations coming from the vectors with nonzero weight and they have a weight coming from the underlying alpha and so I get a sequence of equations and each of them carries a weight and if I add the weights all up it's one. That's precisely the splitting of the zero section. But the equation itself doesn't depend on the weight. No, no, no, no, no, no, yes. So ultimately in some sense we count solutions but we don't count them zero one, we just count them with a rational weight. And of course there might be a sign also but this contributes so if I have two solutions with each topologically counting one and the equation has weight one half then the total thing I see is one, one half for that equation plus one half from the other. So it's a system of equations where if the equation is true and you have a solution you just, it contributes according to its weight. It's like the S and P index, how big is the capitalization of a company or something like this, yeah. For those who are interested in buying stocks, I mean, it's sort of this kind of. But you're also going to say if you cap lambda of E equals sigma, that's what that means. Cap lambda of E, yes. So this is of course what happens here that which I said somewhere here lambda of E is sigma. Okay, there it is. Here it is. So that means if I solve the equation d bar equals E, this equation counts and it's taken into the general bookkeeping with a weight sigma. So then this one here, if there's a length, it's decomposed in a certain number of EIs and we have this product structure and so each equation is this here has this weight then I can put this together to this one and the sigma comes from this individual weight of this part. So then because of this property here, each EI vanishes on the trivial cylinder components, actually on every I because this one was already perturbed and this EI, yeah, so this is a building of height one and this EI already satisfies the property that over a trivial cylinder component is zero. So all the trivial cylinder bits, for example this one here, if you look at the blackboard, all over this if there are solutions would actually be real J-holomorphic cylinders. 
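A short sketch of the weighted bookkeeping for buildings described above: each floor solves its perturbed equation with a rational weight sigma_i, the building contributes the product of its floor weights, and solutions are counted with these weights rather than with 0 or 1. The "buildings" here are just labels; Fraction is used so the rational weights stay exact. This is only an illustration of the counting convention, not of the analysis behind it.

from fractions import Fraction

def building_weight(floor_weights):
    """sigma = sigma_0 * ... * sigma_k for a building with floors 0,...,k."""
    w = Fraction(1)
    for s in floor_weights:
        w *= s
    return w

def weighted_count(solutions):
    """solutions: list of (label, [floor weights]); returns the total weighted count."""
    return sum(building_weight(ws) for _, ws in solutions)

if __name__ == "__main__":
    sols = [
        ("building A", [Fraction(1, 2), Fraction(1, 3)]),   # contributes 1/6
        ("building B", [Fraction(1, 2), Fraction(1, 3)]),   # contributes 1/6
        ("building C", [Fraction(2, 3)]),                    # contributes 2/3
    ]
    print(weighted_count(sols))    # 1/6 + 1/6 + 2/3 = 1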
So then if the alpha I has different components, so I'm looking on a floor, then this component you can decompose it according to the different components and some of them have trivial cylinders in it and each of them actually has a weight and the sigma I would be a product of the weights for the individual components. So this is what the perturbation all does, yeah. So individual components are perturbed separately, pseudo-holomorphic, trivial cylinders turn out to be pseudo-holomorphic and so on and so on. So now we come to the smooth, so algebraically I think now it's all clear and now we have to put a smooth structure on this thing and see that we can do, define what is a smooth lambda and so on and then we are ready for the perturbation. Can I ask a question? Yeah. So like, what's going on? So at the end, at the very end, you end up with something which is a Q linear combination of matter poles or something which is only locally a Q linear combination of matter poles whatever that means. No, let me, yeah, okay, so, so model the following is actually one of the things you said, so which I'm explaining now. So if I have too many faults, weight of each is one-half, I could use them as four many faults by taking a copy of each of them with weight one-quarter, that would be considered equivalent. So then you can, so then in the, then if you have overlaps, so you have here some many faults and here some many faults with weight, what does it mean they fit together? Basically it means that if you take a certain number of copies here and a certain number of copies here, you can match them up so that the weights are the same. So let's smoothly fit together. So that's. I want to ask you about one of the things which I said. Well, I don't. The second one. It's not a, it's in the middle. You can't break it up into matter faults and then give each of them a weight because they maybe. There's no natural identification, just can identify after. Locally you can do that. But not locally. Not locally. No. I mean, you can put some artificial structure on it to say what you have to do at any given moment in the overlaps. I mean, you can say, you put the structure on top and says you have to take so many copies and that you have to identify with this one. Which is actually, you have to do when you do extension proper, when you do extension properties like for sections. I mean the same thing for sections. Because when you look at how, if I have a section defined over the boundary, how do I extend it to the interior? Well, the only method is you make a local extension, take a partition of unity. And if I don't know what to identify with, what do I add actually up then? So you need that structure of this identification, extend and then according to this identification you glue the things together to get an extension of the boundaries. Thank you. Good question. Any more? Okay. So, so now the theory is there exists a natural, your star means up to some fixing some discrete set of data. A strong bundle structure for this thing here. So it is basically like the polyfoil structure for this except that we here have a strong bundle line over O. We have the associated translation group part. So the objects on top are vectors in a strong bundle over E. Then we have an action by the isotope group on this. And here it covers this thing. This comes, this is one of the size from the polyfoil structure on S. And we have this in a coherent way. 
And then if you have two of those guys, we get the transition set, and that transition set is actually a bundle over the transition set down here, and it has a strong bundle structure as defined in one of the lectures last week. So this is the generalization of the structure which we had here, and at this point you can start talking about smooth multi-section functors. Is that sort of clear? Okay. So now take the Cauchy-Riemann functor: it goes from here to here, and — well, let's look at this composition here. It lies in the image of this one, you get a local representative, and it turns out it is sc-Fredholm, as was defined by Katrin and Joe. And the moduli category — that is where this is zero — has the property that its orbit space, intersected with each connected component of the orbit space of the underlying space, is compact. That's Gromov compactness. So now we are in a smooth setting: the Cauchy-Riemann section is an sc-smooth section — that is what that property means — and it is Fredholm. And where this thing vanishes we have the pseudoholomorphic objects, which form the associated moduli category; if I take its isomorphism classes and intersect with a connected component of the orbit space, it's compact. That's Gromov compactness. That is by definition what it means to have a Fredholm functor. — I forgot what O and S are. — S is the stable maps. — Good. — That is what we have been talking about for a while now. Then O: this functor defines the polyfold structure. These are the things which are injective on objects; if you pass to the orbit space, there is a point here, an object here, which is mapped to the originally given object alpha, and so on. — But O is a retract? — Yeah, well, it's just an M-polyfold. It's an M-polyfold. So this is then a strong bundle over an M-polyfold, which was also introduced. So that is a model for — So is the statement that — E over S is some bundle in the sense we already said — whenever we pull it back via such a chart, I get a strong M-polyfold bundle? — That means that given any object here, there is a selection of a set of those guys, and if you put that in, then that is the local structure near the object alpha. — That's interesting. So is d-bar then defined by pullback? — Yes. Yes, yes. Oh, yes, right. Yeah. So in this case, for this construction, if you take any of those guys, it is a Fredholm operator in the sense we have discussed. So the picture is as I described before: you have the smooth functor theta, and when I put my hand in, I see sort of manifolds. Now I have a bundle over this lying here, and when I look at the Cauchy-Riemann functor — the trace of what it does here and where it maps to — that is actually an honest Fredholm operator. And I can do this anywhere, and this structure gives: if I know something here, I can always transport it to a neighborhood of any isomorphic object. So that is what it means. So now here is the polyfold packaging of the SFT data. We have a strong bundle structure over a polyfold; the Cauchy-Riemann section functor is sc-smooth and Fredholm; we have sc-smooth covering functors with compatibilities and some additional structure, where we have, for each face — I haven't defined this, but it's clear, we defined it on the level of S — so if I have a face here, then this is just the part of E lying over it.
And this was a covering functors. Then there were a certain number of compatibility conditions, which we discussed in the last lecture and also yesterday in the discussion. So we have this diagrams of these things and you have that. So I suppressed here the moving of components against each other. Then out of this data, then one can write down what we did before a requirement for the perturbation you want to do. But that is basically sort of the smooth packaging of the data which you need to do, to produce the data which you need for S.F.T. The last diagram also is commutative if you replace P by D bar. All these functors commute. No, I mean if you, like in the first two you, what? Which you talk about this or this or this. So there are three diagrams, right? Okay. And then there's two equations below that, which I read as if you change the arrows down and label the P to arrows up and label them D bar. Yeah, yeah, yeah, so it's compatible. So if you put the sections, if you put the Cauchy-Riemann section here, the local representative. So here this is, so this is a restriction of the Cauchy-Riemann and here the other one, yeah. I don't know how to ask you why there are three equalities at the bottom of the three diagrams. I think that's what's important. Yeah. Okay. So, yeah, so that's good because I forgot to write them. So if you apply, if you apply, yeah, so what do I want to say? Yeah, well, this more controls this one. So if I, you see here, here's the identity. So there's not too much happening with respect to D bar. So it's just controlled by the C, but the C has certain properties with respect to the P. I mean, that's what they want to say here if you have. But it is true that if P composed with D bar is simply zero, probably. Identity, no, no, no. If you have a solution, on the solution set, what you're interested in is identity minus pi composed with D bar would be zero, which means on the trivial cylinders, you would be pseudo-holographic. Yeah. Identity minus pi composed with D bar equals zero means on the trivial cylinders, you have pseudo-holographic. Yeah, so that's exactly the equation here, right? So pi composed with D bar is, in fact, equal to D bar. It would be. Well, you might have a non-trivial cylinder which is not homomorphic in ass. No, no, no. You are like a trivial non-holographic cylinder in ass. And then the other part of that would not be zero. No, then it would not be zero. No, I just said only on the solution set ultimately. So I think you can write certain things under the sum. So identity minus pi D bar equals zero provided the, well, provided actually the, what does that actually mean, equals zero means, well, that is precisely what you can say. Identity minus pi composed with D bar equals zero precisely means the trivial cylinders which you see are pseudo-holographic. But that is exactly your right hand diagram, couldn't you say? Ah, OK, good. Now, OK, so something that direction. So let's put a weight on this 0.1. And so there's some truthiness to it. So now, constructions of SC plus multi-section function which are a particular class of those guys here. So this is an important class. It's sort of, these are multi-section functions which you can sort of view as multi-sections of, these are sort of compact perturbations of this. So multi-section function is of that particular kind provided as the following properties. 
So if you take, if you take sort of this uniformizers and the underlying thing, q zero would be the object where you're looking at, then this composition here is a count of the number of indices for having, it's a number of indices where this age satisfies this. So you go into the base, pH. So age lies in, so age lies in k, pH lies in o. These things are defined on o. And if si of the underlying base point is equal to the vector you put in. You count the number of indices, this is that, and divide by the number of indices you had. And these things here should be locally SC plus sections. Let me remind you what that was, this strong, this strong bundle come with a double filtration. Namely it made sense to talk about mk and here m where k is less or equal to zero, less or equal to m plus one. So in particular you have a k, you have a k zero one lying over on o zero and the SC plus sections are actually going from here to here and lie in the fiber over m in m comma m plus one. So the si's go from, so they are defined on o zero but they go into k, m, m plus one. And then of course what is important that is why I said it's compact. If you go, this is a fiber regularity and if you view them with respect to the different norm that is a compact inclusion. That is why I call this compact perturbation. So these are some kind of sections but they are constrained by having this property and there were, I think you talked about that or it was mentioned maybe last week. I don't know what the definition is saying, saying that there exists si's. Yes, so there exists finally many si's, so indexed by the set i and you look at the coincidences. So basically the picture here is, if this is o here and this is a fiber then locally, so you have a certain number as i's, i and i and each of them carries a weight one over i, one over the number of elements in this thing. And you just look at this vector here, how often, how many graphs are there in which it lies. So you have this vector, so psi, this is an e here. We have definition function. The only difference now is that we require the things that we locally represented by the s c plus. Right, so locally in a chart or a uniformizer. So first of all the multi-section functions were in each fiber there were a finite number of vectors which having weights adding up to one. So now if I put the chart in, then I have this of course on the image, but this difference thing should lie on graphs of an s c plus section. Is that clear? So if you put your hand in and you see in the fiber the different points they line up as lying on a graph of s c plus sections. So now we want to, and this section should be sort of compatible with the group action and that's sort of the compatibility. So there is an action of our automorphism group on the set i and you have the orbits under the conjugation by this thing lying in there. So let me first say certain properties what you can do with this guys. So you can build this sum here which is sort of a convolution and this is smooth. So if each of those guys is an s c smooth or s c plus, so I forgot the plus here. So if this s c plus, then this is s c plus. Because what is the representation, what is the local section structure of this thing? You just have the section structure s i for one and t i for the t j for the other and you just take all possible additive things and just take as a weight, as a weight to take one over the number of the indices here times the indices of the other. 
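A toy, chart-level model of the operations just described: a "local section structure" is a finite list of sections over a chart, each implicitly carrying weight 1/len(list); the sum (convolution) uses the index set I x I', scaling multiplies each section by t (recovering lambda_0 at t = 0), and a cutoff multiplies by an ambient function. This is only meant to make the index-set bookkeeping concrete; it is not the sc+ construction itself.

def oplus(sections_1, sections_2):
    """Local structure of lambda_1 (+) lambda_2: all sums s_i + s'_j, index set I x I'."""
    return [(lambda x, s=s, t=t: s(x) + t(x)) for s in sections_1 for t in sections_2]

def scale(t, sections):
    """Local structure of t . lambda: each local section multiplied by t."""
    return [(lambda x, s=s: t * s(x)) for s in sections]

def cutoff(beta, sections):
    """Local structure of beta . lambda for a function beta on the chart (a bump, say)."""
    return [(lambda x, s=s: beta(x) * s(x)) for s in sections]

if __name__ == "__main__":
    lam1 = [lambda x: 1.0, lambda x: -1.0]          # two branches, weights 1/2 each
    lam2 = [lambda x: x]                             # one branch, weight 1
    summed = oplus(lam1, lam2)                       # two branches, weights 1/2 each
    print([s(3.0) for s in summed])                  # [4.0, 2.0]
    print([s(3.0) for s in scale(0.5, lam1)])        # [0.5, -0.5]
    print([s(3.0) for s in cutoff(lambda x: 0.0 if abs(x) > 2 else 1.0, lam1)])  # both branches cut off to 0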
If you have lambda one and lambda two sitting on different bundles. No, no, they are there for all bundles. Why do you call this a sum in one product? Well, it's a convolution. That's better because on the section structure you take basically all possible things how you can add up things. So if locally, so if locally the first one is given by s i and the other by s i prime prime and this in index set i index i prime, then the sum is given locally by taking all this combinations here. But with the, but where the, where the index set is actually i cross i prime. So plus because of that. So then this one here, well, just replaces the sections locally by t times the section. So of course, so it's one over t on the other side. Otherwise, so this is a smooth family. So this is okay. If t is zero, you just get lambda zero, so I put the lambda zero up here. Otherwise this. What is lambda zero? So it should be, zero should be up here. It's a, it's a, it's a section which just is a rate one on the zero section. I mean, this is a smooth procedure. So if, so, so this one here, what does that mean? The indicator function here, it just means that the local structure is t times s i. And then if t goes to zero, then you get zero section. So lambda zero of e is zero unless e is zero in which case it's one. Yeah. And here. And that, well, that works exactly because the total suburb weights over any, on any fiber. Yeah, is one, is one. So that is actually a smooth family if you change t. So then if you have a, if you have an s c smooth function into r. Yeah. So it's clear what that means. That means if you compose it with a uniformizer as s c smooth, then you can put that in front of it. Yeah. So you can use partitions of unity to cut off such multi sections. Smoothly. And then this makes sense as long as locally near, near a point in the uniformizer, you just have that the family is locally finite. So if you take a point and then you, you have only finally many non zero vectors there and you just add them all up in this way. So then this is also again a good section. So, so, so this are important facts for actually constructing partitions. This allows you to construct things locally and then just add things up. And then a good fact is if, if I give you any, so it should be smooth, a smooth object and a smooth vector, then they exist actually such a lambda of s c plus multi section function of a lambda is positive. So I, I going to show you this, how to prove this. So, so now, for example, locally, remember when Katrin was describing the transversality result and perturbation result. So how, so if I, if you have a threat home or even a finite dimensions, if you have a section of a vector bundle and you want to make it transverse by small perturbation, what you take is you add to it some T i times the perturbation to fill up the core kernel. Then you solve this with respect to the additional parameter, you get a manifold and then you project onto the parameters you added and take a regular value. And for every regular value, that is a good perturbation. So now, what do we have locally? Locally, we, so what we want to achieve locally is we want to break the symmetry. That is generally what we have to do to, to achieve transversality. Of course, sometimes we can avoid this. Sometimes it takes the orbit of this perturbation which is also transversal and then we have maybe some more perturbations. But for each of these problems, it's precisely that argument. 
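The classical finite-dimensional argument being invoked here, in one concrete invented example: s(x, y) = (x^2, y) has a zero at the origin whose linearization is not surjective; adding one parameter, F(t, x, y) = (x^2 - t, y), fills the cokernel and makes the extended problem transverse, and any regular value t of the projection (here any t different from 0) gives a transverse perturbed problem. The example only mirrors the "fill up the cokernel, then take a regular value" step.

import numpy as np

def F(t, x, y):
    return np.array([x * x - t, y])

def dF(t, x, y):
    # Jacobian with respect to (t, x, y)
    return np.array([[-1.0, 2 * x, 0.0],
                     [ 0.0, 0.0,   1.0]])

def d_xy(t, x, y):
    # Jacobian of the perturbed section (x, y) -> F(t, x, y) with t frozen
    return np.array([[2 * x, 0.0],
                     [0.0,   1.0]])

if __name__ == "__main__":
    # Unperturbed problem (t = 0): zero at the origin, linearization not onto.
    print(np.linalg.matrix_rank(d_xy(0.0, 0.0, 0.0)))    # 1  (not surjective)

    # Extended problem: at every zero the full Jacobian is onto, because the
    # t-direction fills the cokernel; so F^{-1}(0) is a manifold.
    print(np.linalg.matrix_rank(dF(0.0, 0.0, 0.0)))       # 2

    # Pick a regular value of the projection to t, e.g. t = 0.01: the perturbed
    # zeros are (x, y) = (+/-0.1, 0) and both linearizations are onto.
    for x in (0.1, -0.1):
        print(np.allclose(F(0.01, x, 0.0), 0.0),
              np.linalg.matrix_rank(d_xy(0.01, x, 0.0)))  # True 2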
So when you look at this, you just have to make sure that each one of the local problems is transversal: you get a set of full measure of perturbation parameters for each, you take the intersection, and then you pick a value in there. That is the only additional complication; otherwise you use precisely this argument. What that means is that, rather than taking local sections, you construct local multi-sections and take their sum. Each local multi-section depends on a few real parameters t; you take the right sum so that it fills up the cokernel, and then you get sort of a branched manifold. Then you have the projection onto t, and for each piece of manifold you require that this projection is regular. Now, these are countably many conditions, so you find regular values. That is the only difference; it's a straightforward thing from there. You can even go further: for example, when you have a boundary point and the kernel sits a little bit stupidly with respect to the boundary — like, it's tangential to the boundary — then if you introduce multi-sections which have a particular linearization, you can actually tilt the kernel into the manifold to make it transversal. For that you not only have to construct a section which takes enough values to fill up the cokernel; you also have to think about it having a specific behavior — it might have, say, value zero there, but it should have a particular derivative which, together with the linearized Cauchy-Riemann operator, has a certain property. But that is the same problem as in finite dimensions; there is nothing new. It is of course not so surprising, because Fredholm theory is locally a finite-dimensional problem times something you don't have to care about here, and for that finite-dimensional part these perturbations are as rich as in the finite-dimensional theory. Okay. So let me just explain to you how I construct such a section. I want to construct a section which has a certain property at an object alpha. — Didn't you just explain to us how you construct a section? — So now I do it again — no, well, on some level; now I give it to you on a precise level. And I still have ten minutes. — Five. — Okay, good. You know, you just have to put something on the table and then you get a good answer. So I want to construct something at the smooth object alpha with a given smooth vector over it. What do I do? I take such a uniformizer. Here is a picture of the underlying thing: this is the orbit space; the image of psi, if I pass to the orbit space, would be this red stuff. This point somewhere here corresponds to the object. I take a neighborhood U there. Now, what do I need? In Hilbert spaces you always have smooth bump functions, and on certain Banach manifolds as well, but unfortunately there are Banach manifolds, or Banach spaces, where you don't — I think C^alpha does not have smooth bump functions. There was a study of this thirty years ago, people were interested in it, so there is a lot of literature about which Banach spaces have smooth bump functions. But sc-smooth bump functions are a little easier to come by, because it is a weaker requirement. In any case, in the Hilbert space setup we have here, you don't have to worry.
So here is the set O, and here is sort of a neighborhood. Say this here is the point which goes to the object alpha. Now you construct this object with a bump function: you take a bump function which, at the point corresponding to alpha, takes the value e_0 — the given vector in your fiber — you take the support in the small set, and then you rotate it around by the action. So you have a local thing, now defined on the image of psi-bar of K, and we define it by this formula, which is precisely the definition. And now we extend it to the whole category: if you have any vector, and there is no morphism which brings the underlying base point into the image of O, you just say it is the multi-section which has weight one on the zero section. And if you actually can reach this patch, then you define it by what you reach. And that is a smooth functor: if I go from one to the other I have this smooth transition, so if I have a local section structure here I can just move it over there. Yeah? So that is the local construction. Now you can take a finite number of those, parameterized by p, to fill up the cokernel at alpha; then the Fredholm property guarantees that nearby it is also the case. You do this at different spots covering the compact solution space, and then you have enough to do precisely what Katrin described. And then you can add them up, as before. Okay? So now I generously got five minutes and I only used three of them, so I'll stop here. Otherwise — ha! — Are there any questions for our speaker? — Can you go back? Just — yeah, time for the last one. No? My computer is very entertaining. Okay. — Can you go through this again and tell me which spaces are which? — Okay. So — and this didn't come out so clearly — it is actually important that the underlying space is at least paracompact. So I take psi of O; it gives the associated isomorphism classes, which is this red stuff, and that is here. Yeah? In this set of isomorphism classes the class of the original object alpha is given, and I take a neighborhood around it. The point is that this is a metrizable space, so it is normal, so I can actually find a small neighborhood around it whose closure in the whole space is still contained in it. That is important, because otherwise the thing would not even become continuous. Then you take the pre-image of U in O, which is this blue thing — so this red stuff is the image of O and this is the pre-image of U. And now on this one you take a bump function which has support in there. And here somewhere is a point which corresponds to the object alpha — this is a category, so the object alpha lies somewhere here; it comes from a point in the blue region. So over the blue region there is this point representing alpha, say q_0, and over this alpha there is the fiber, and there is a vector e which corresponds to some vector lying over q_0 in the bundle K. So now you take a bump function which is one in a neighborhood of this point, times this vector, and extend it. I haven't talked about extension results, but on the polyfold level they are quite easy.
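A toy of this local construction, on a one-dimensional chart with fiber R carrying an action of G = Z/2 by x -> -x: take a bump-function section beta(x)·e_0 supported near the chosen point and symmetrize it by conjugation, so the branches are the |G| conjugates, each with weight 1/|G|, and the result is G-invariant. Chart, group, bump function and the trivial fiber action are all invented for illustration; they only mimic the role of the uniformizer, the isotropy group and the prescribed fiber vector.

import math

def bump(x, center=1.0, radius=0.5):
    """A smooth bump supported in (center - radius, center + radius)."""
    u = (x - center) / radius
    return math.exp(-1.0 / (1.0 - u * u)) if abs(u) < 1.0 else 0.0

E0 = 2.0                                    # the prescribed fiber value at the chosen point
GROUP = [lambda x: x, lambda x: -x]         # Z/2 acting on the chart (fiber action trivial here)

def local_section(x):
    return bump(x) * E0

def symmetrized_branches(x):
    """Branches of the multisection at x: the conjugates s(g^{-1} x), each with weight 1/|G|."""
    w = 1.0 / len(GROUP)
    return [(local_section(g(x)), w) for g in GROUP]   # g = g^{-1} for this involution

if __name__ == "__main__":
    for x in (1.0, -1.0, 3.0):
        print(x, symmetrized_branches(x))
    # Near 1.0 and -1.0 one branch is nonzero (the same value at both, by invariance);
    # far from the support, all branches are the zero section, recovering lambda_0.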
So you can ask me maybe on Friday and I can show you how to construct them there. So anyway, there is a section with support in the blue thing. And now you want to construct a functor. So what I do is I transport this section around by conjugation. And then I define, then I get as many of them; of course, if you have a symmetric section, then some of them are the same, but that doesn't matter. This is your index set and you give each of them the weight one over the order of the group. So this is actually coming from the isotropy group of this element. So if it has a large isotropy, then in general I would, for example, construct something like this f which achieves transversality nearby. And since del-bar is a functor, if I conjugate it, it doesn't change. But then the perturbation by this one is a conjugation of the perturbed thing by g. So it's also transversal, at least in the region where the bump function is one. So moving this around doesn't destroy transversality. And then of course in general, you might see some other sections coming from the overlaps or so. Yeah, that's the point. But for the construction, that is the minimalistic thing you have to do. And then you give each of them the weight one over the number of elements in the group. So that's precisely the requirement. So that means now on that slice, when you look at what happens, the section is now defined on that same slice here. So now I have to extend it. So then I get an object here, here with some vector. If there's no morphism from this one which reaches a point which lies in here, so the question is whether I can reach this slice by a morphism or not: if I cannot reach this slice, I put the weight one on the zero section. And if I can reach this slice by a morphism, then I define it like this: I look at what point is there and I give it the same value. And that is an sc-plus section, now defined globally. So it's not very difficult. It's just really always the local constructions. And the language is sort of so high level that you basically always see everything on the nose. You don't have to go into complicated coordinates and say what it actually means, what you're doing. So that makes it, of course, it could have been an equivalent theory, but the language level is much easier if you stay on that high level. In particular, since on that high level all the information is there and there are abstract results which produce whatever you want. Okay, so, good. So I will ask a question where maybe I kind of know the answer, but I'm not sure. Okay. So what is the reason that we need to go to M-polyfolds instead of working with retracts as a local model for this kind of category? Well, you could put retracts there if you want. But you can also put in, you know, I mean, this is a little bit larger. So this M-polyfold is locally modeled on retracts. So rather than taking just something which has one chart given by a retract, you could replace this one chart by the actual retract. I do this. It also has some advantage when I discuss the coverings. So at some point, of course, in the whole thing, which has been suppressed, I give a definition of what is actually a covering functor, and then I have to give a local model for this. Then on the top you generally have more points. So it's actually more a union of retracts going down to the other thing. So the things are easier.
But one could, but I think it would be unnecessarily restrictive to say that. I mean, it's a fair question. I mean, the equivalent for manifolds would be: I define a manifold as something which locally has charts isomorphic to an open set in R^n. And here I would just define this to be a manifold because it's locally homeomorphic to some smooth manifold, with smooth transitions. Yeah, so. The manifold is like locally a manifold? I mean, no, no, no. It's locally a manifold. Right. That's what this is. Also, one side is a category and the other is actually some well defined, smooth kind of object. Okay, I'd suggest we break. Let's take them with us. Thank you.
Topics: 1 Strong bundle structure and the CR-section as Fredholm functor. 2 Polyfold packaging of the SFT problem. 3 Smooth Multisection functors and smooth weighted subcategories. 4 Construction of sc^+ multisection functors. 5 Auxiliary norms and compactness control.
10.5446/16292 (DOI)
Well, hello everybody. I'm going to give what I hope is a very elementary talk, so you'll probably be very bored. But for those of you who don't know this or have forgotten a little bit, I'm just going to go over the problem of regularization and explain the traditional methods for dealing with it a little bit. So the question is this: we have a symplectic manifold and it's got an almost complex structure J, which is omega-tame. And we're looking at J-holomorphic curves. Now, to be simple, I'm just going to think of the domain being a sphere, so I'm going to write, say, M tilde top 0k of, well, decorated by all these things, J, A, say, which is going to be the set of maps. Now, I'm just going to take the sphere as my domain into M, which are J-holomorphic, which represent the class A, so that's a class in the second homology of M. And, oh yes, I'm going to have some points, z1 through zk, which is just underline z, and these points are different. And that's just a space of J-holomorphic maps. So these elements are maps and the tilde means that the elements are maps. Now, does everybody know what all these words mean? Because I'm just assuming you do. And if you don't, please ask. Okay? It's up to you. And then we're interested in this space, but also we're interested in: if you've got two of these elements, u, z, we can say one element is equivalent to another element, u prime, z prime, if there's a diagram of this kind. So we've got phi from S2 to S2, where phi is in the Möbius group. So that's a biholomorphic map. We've got u going there to M. We've got u prime going there, and the diagram is meant to commute, and say we've got z prime there, and that goes to phi z. Right? So this would be something of the form u prime would be phi composed with u, and this would be phi inverse of z. Okay? So we have this equivalence relation. And we're not interested really in the space of actual maps. We're more interested in the space of equivalence classes of maps. So that's going to be written as M top, 0k. So that's going to be M tilde top, 0k, divided out by this equivalence relation. So these are equivalence classes. So we're not really interested so much in the way they're parameterized, just that they have a parameterization. We're interested in the images, the geometric images in M. And I put top here because this is the top stratum of some completed space. Okay? Now, for example, well, so in the best case, this M tilde top, 0k of M, J, A is a smooth manifold, and it's transversely cut out. Oh, I forgot, I said it was J-holomorphic; well, I forgot to say what that means is that the Cauchy-Riemann operator is 0. So del bar J of u is a half of du plus J composed with du composed with little j. So that's the Cauchy-Riemann operator. And so this is the zero set of an operator, and if you're saying it's transversely cut out, you mean that the derivative of the operator is surjective. I'm going to say more about that later on. But anyway, you want it to be transversely cut out, and the dimension of this thing then would be equal to the index of the operator, which is 2n plus 2c1 of A plus 2k, where k is the number of marked points; the 2k appears because they're varying in a two-dimensional space. So that's the dimension. And why you want it to be transversely cut out is that there's a Fredholm theory, and that means that if you vary J, the space is not going to change very much. So that's the sort of basic space of J-holomorphic maps. And so in the best case, the first situation is that this is a nice manifold.
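For reference, the equation and the dimension formula being described verbally here can be written out as follows; this is a reconstruction in standard notation, not a transcription of the board.

% Cauchy-Riemann equation for u : S^2 \to M and the expected dimension of the space of maps,
% where \dim M = 2n, j is the standard complex structure on S^2, and the 2k accounts for
% the k marked points varying on the domain.
\[
  \bar\partial_J(u) \;=\; \tfrac12\big( du + J(u)\circ du\circ j \big) \;=\; 0,
  \qquad
  \dim \widetilde{\mathcal M}^{\,\mathrm{top}}_{0,k}(M,J,A) \;=\; 2n + 2\,c_1(A) + 2k .
\]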
And then, as I say, what we're really interested in is the quotient space. So this is M top. Now, this is almost never compact, but it's compact enough. And why is this space almost never compact? You can see that it's almost never compact. Because one thing I'm insisting on is that I've got these points z1 through zk, and they're meant to be all different from each other. So two of them could come together, and of course, that would not be in this space. And then another problem with the compactness is that this group of equivalences, this group here, PSL(2,C), is not a compact group. So that means you're quotienting out by something which is not compact, and that can cause problems. But what do I mean by compact enough? Well, for our purposes it's compact. So compact for what purposes? Well, we're interested in the evaluation map from M tilde, well really from M top, no, and let me, well, just put an A in there. I'm mapping to M to the k. So we've got a pair u, z, which is an equivalence class, so I can put brackets like that, going to u evaluated at z1 up to u evaluated at zk. And that, of course, is defined on the space of maps, but it descends to the quotient. So there's a well defined map here. And what you want for this is, you want the image, the image, to represent a homology class. And you see, I mean, if this were a compact manifold, then of course it would have a fundamental class, so you could just look at the push forward of that fundamental class. But in fact, for it to represent a homology class, it doesn't have to have a compact image. What it needs is that, i.e., if you look at the closure of the image, so this is the closure of the image of the evaluation map minus the image itself, so this is the boundary of the image, this should have codimension at least two. So that means, what we say is, it represents a pseudocycle. And to give you an example, well let's take the very simplest example, I think it's a really simple example: take the projective plane with the standard J and the class to be the class of a line. And then c1, the first Chern class of the line, is 3. And so the dimension of M, if we take two marked points, the dimension here is going to be, it's 2n plus 2c1 of A plus 2k, which is 4, and then minus 6, which is the dimension of the reparametrization group that I've quotiented out by. Okay, so this is the dimension of PSL(2,C), the reparametrization group. And this is 6, and this is 4, and so we get 8. And that of course is the same as the dimension if we take CP2 times CP2. So if we look at the evaluation map from M top 0, 2 in the class of a line going to CP2 times CP2, we have u, z goes to u(z1), u(z2). And what this is, remember these things represent a line. And so the elements here represent: you've got an S2, you've got two points on it, which in fact we can normalize; we can take the first one to be at 0, and the second one to be at infinity, say, because the group is triply transitive, so we can fix these two points, and then we've still got an action of C star acting on these things. So this is M tilde top with the marked points fixed at 0 and infinity, quotiented out by C star. So it's still something, a space of maps quotiented out by a non-compact group. And anyway, you've got this space of maps, you're looking at where they go, so you've got two points in CP2, and you sort of take the line through them. Well, there's a unique line through any two points. Therefore, this map is injective.
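The dimension count in the CP^2 example above, written out in one line; again this is just a reconstruction of the verbal computation.

% Lines in CP^2 with two marked points: n = 2, c_1([line]) = 3, k = 2.
\[
  \dim \mathcal M^{\,\mathrm{top}}_{0,2}(\mathbb{CP}^2,[\mathrm{line}])
  \;=\; 2n + 2c_1(A) + 2k - \dim \mathrm{PSL}(2,\mathbb{C})
  \;=\; 4 + 6 + 4 - 6 \;=\; 8 \;=\; \dim\big(\mathbb{CP}^2\times\mathbb{CP}^2\big).
\]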
If we take the standard complex structure, we know that that's injective, but of course it's not surjective. And you can see it's not surjective because the two points it's going through are different. They have to be different because I've assumed that these two points are different and allowed to come together. So the image is equal to Cp2 times Cp2 minus the diagonal. Right? Very clearly. So it's not a compact image. You can see the space is not compact. On the other hand, the set of maps is somehow compact. That's by Gromov's theorem. And you can compactify this. I mean, this diagonal has codimension two. It's real codimension four, I suppose, because it's where these two points are the diagonal here. And so it does represent a pseudo-cycle. It does represent a fundamental class because the image of the boundary is so small. I just understand that the words codimension of these two means inside the closure of the image or inside the ambient? Inside the m to the k. So we're not talking about the structure of the domain here, which is to, I mean, in good cases, you can assume it's a manifold, but it's not going to be compact, you see. But you're saying, so this, it's a, well, it's a non-compact manifold. But if you look at its image in here, it has a fundamental class because the boundary is so small. Is there really a key codimension condition? What? The way that case is. No, no, it's a huge thing. Yeah. What? Codimension in the closure. Well, it's a codimension. It's a real codimension of at least two in m to the k. So whatever dimension that has. So in this case, I mean, this is a, you try to, in this case, what's wrong with it? What if m to the k was an enormous thing and this was like one curve? We really care about the closure. Oh, I see. I see what you're saying. Yeah, okay. So what do I mean by, yeah, so I'm saying this incorrectly. What I mean is that the boundary of the image minus the image should be a union of maps from some manifolds v. So v is some, this is some finite union, say. And the dimension of v should be less than or equal to the dimension of our original space m top minus two. That's what I mean. I'm sorry. You're right. Right. That makes sense. So it's codimension. It means it should have at least two dimensions less than the dimension of the image. And this is assuming m top has the expected dimension? That's assuming that m top has the expected dimension, which it's going to have if it's a, that was my first assumption that it's a transversely cut out manifold of the right dimension. So it means that it has the correct dimension. And then, right, that you're assuming that this thing is going to, then this is meant to represent a homology class of the right dimension, the dimension of the index. That's okay now. Okay. So now, let's say, so there are several things here. I mean, the regularity problem is the problem of arranging the first thing that it's transversely cut out to find if you could find a J to make it a nice manifold. That's the regularity problem. But then there's also the compactness problem that you want to make sure that this image is compact enough. Well, how can you make sure about that? Now, the compactness is actually much easier to deal with than the regularity. So let me just say a few words about compactness. So there's Grumhoff's compactness theorem, which tells you that if you have a, you know, you look at these are sequences of spheres, right? And you look at these are spheres in the same way. So we have these. 
So we've got some curves uk, say, from, say, S2 into M. We've got a sequence of curves. And they're in a fixed homology class. And so that means that the integral of the pullback of omega over S2 is fixed; it's omega of A. And this is actually, analytically, the energy. It's equal to, I hope I've got the formula right, the integral of |du| squared over S2. That's the energy of u. So it's basically the L2 norm of du, measured with respect to the correct, I mean, you have to choose a metric on there and stuff, measure it with the correct things. And you get, this is the energy. So this is a sequence of curves. So these are C infinity maps, the holomorphic curves, and they have bounded L2 norm. Well, it's actually the W12 norm; this is the L2 norm of the first derivative. So this is what's called the, it's a sort of boundary case. If we had, because these are holomorphic maps on a two dimensional space, if it were a bounded W1p norm for p bigger than 2, then the standard sort of compactness arguments would tell you there would be a convergent subsequence. That's just standard elliptic theory. And why the W1p norm, why p is bigger than 2, is because this norm, if you have something bounded in the W1p norm, that controls the C0 norm. So that means it's an equicontinuous family in the C0 norm. So you use sort of standard results about equicontinuity to extract a convergent subsequence. But what we have is this borderline case where it's 2, not bigger than 2. So in fact, it's possible for this thing not to have a convergent subsequence. But if it doesn't have a convergent subsequence, what's happening is that the energy is somehow concentrating near a point and you can rescale it and get a convergent subsequence. And that comes down to the fact that, for example, in this case here, we're quotienting out even here by a non-compact group. That's a non-compact group there. And so that sort of affects the convergence. But anyway, this is really well understood. And what Gromov proved was that there exists a subsequence, also called uk say, that converges to what Gromov called a cusp curve or what we now call a stable map. And what a stable map is, you see, bubbles can develop. I'll explain a little bit in this case later on. But what it's going to be is: the domain now, sigma, is a nodal sphere. So it's going to be a union of spheres joined at points. So it could look like this. It could be a bubble, you know, a set of spheres like that. That would be the domain. There'd be some marked points on here. Because your marked points would converge somewhere; we've got three marked points, they converge somewhere like that. And the map, you would then map each component into M. So you'd have a set of maps. And that's what, so for example, just suppose we were looking at a map from S2 into CP2 which had degree two. So that would be a quadric. But a quadric can degenerate into two lines. So that's in the class of twice a line. That's one degeneration that could happen. You can have, so that's a geometric degeneration where you have a sequence of things representing a quadric which degenerates to two lines. But you could also have a degeneration that happens because, for example, suppose you're in this case and the marked points come together. Well, the marked points are not allowed to come together here. If they do come together, then you could have a situation like this where you actually have a bubble. And that bubble contains both marked points.
And here, this is what's called a ghost component in the sense that the map you restricted to this component is constant. So what you have here is a situation where you have this goes, this is a line. So you've got a map of a line and your two marked points are sitting on the domain in this sphere which is called a ghost component. So I should give you a proper definition of a stable map so you can see you can put these all together into the space of stable maps. Now, is there any way of retrieving that board? Oh, wait a minute. What about this? No. It's got a very tall person. There's a rod. Ah, this thing. Doesn't have a... What's... Oh, what? This is it. This is it. This one's got a hook. Okay. Let's see. All right. Now... Oh, God. Well... That's good enough. Oh, look, it's going up by itself. Very obedient. Okay. Any of the most points you integrate, like the thing you integrate with the energy, did you develop poles there or is it just the same? Well, no, the energy just comes from the map. So it doesn't see where the mark points are. So I want to give you a definition of a stable map. And of course, these can occur in any genus. They don't have to be genus zero. I was just restricting to genus zero for simplicity. But... So let's just do genus zero because the point about genus zero is that then you can think of your domain as a union of these two spheres. And each two sphere, you can parameterize in a nice way and give it the standard complex structure. So you don't have to worry about variations of the complex structure on the domain. Typically, you'd have a nodal surface which you'd have to allow the complex structure on the surface to vary in type of a space. You can put that in if you want, but that's just make it simple. So it's a nodal Riemann surface. So you have something and it's all... It's meant to... You know, you can have something like that. But it's meant to be connected in a tree diagram. So you're not allowed to have something like this. That would have genus. That would be a genus one thing. So it's a tree here, a table tree. And you have a certain set of mark points. You sprinkle your points around. Once you've got five mark points, you could sprinkle them around like that. The mark points are all distinct. And so they're on particular spheres. So you have to say which sphere they're on. Then you have some maps, U, which is a one map for each component. And so we have... So that's... So we're looking at two poles, which is a form of sigma, complex structure is understood here, Z, U. And then you put an equivalence relation. Well, the obvious equivalence relation that just like I had before, you just have... You have some holomorphic. This is a holomorphic map here. So you're interested in these modulo reparameterization. And then there's an all-important stability condition. Now, you don't need to... When you're talking about a stable map, you doesn't have to be holomorphic. I mean, eventually we're interested in the holomorphic ones. But in general, you can just look at arbitrary maps with nodal domains like this. But it's very important to have a stability condition. And what that is telling you, this is equivalent to saying that for each element, if you have sigma, J, U, and you look at the group of automorphisms, so a map of that to itself, this is finite. So you never... You don't allow yourself to have an infinite number of automorphisms. And what that means is... 
Now, if, say, on a component like this component, the map is non-constant, then the map is either injective or it's a multiple covering, and there could only be a finite number of automorphisms of that component. So if you have a component where the map is non-constant, there's no condition. But if you have a component, if the map U were constant here, then you see, what would an automorphism as this would mean? Well, it would have to fix Z0, Z1. It would have to fix this nodal point. But you see, if we've only got two... What are called special points? We've only got two special points on this component. There's a mark point and there's a nodal point. And so they're, in principle, as a one complex parameter family of automorphisms of this thing. So that would not be allowed. So the claim is that if U alpha restricted to the alpha's component is constant, S alpha to alpha has at least three special points. And special points are nodal points or mark points. That's the important condition. So that means, for example, in this particular case, I could allow that to be... That could be a ghost component. So that one could be a ghost component. This one could be a ghost component. This one's not allowed to because there's only got two mark points. That one couldn't be. So this would be a possible degeneration if you were looking at maps, say, from U from S2 to CP2 of degree two. So that it's in the class twice a line. Then this one could represent a line. That one could represent a line. And we've got five mark points. That would be a perfectly good stable map of this kind. And the claim is that if you look... So we define M bar 0k MjA to be the equivalence classes of stable maps in genus 0. So this is a genus thing. And K mark points. And representing the class A. And if you've got a J, they're J-halomorphic. So that's the space of J-halomorphic stable maps. And it could be... It's a stratified space. I mean, why I wrote top there is that top was its top component, if you like, where its domain just had a smooth sphere. But then it has all these other components where the domains are nodal curves. And some of them, if you like, are sort of artificial. Like, this is an artificial one because the two mark points have come together. And so, you know, nearby things to this would be something where you had a... And then you had this would still be constant, say, this one could have a Z2 on it. But now we'd have three mark points on that component. So... And then when these two mark points come together, we'd have this bubble sort of artificially created in the domain. This is... This was Konsevich's idea of the way of describing what's happening, way of describing a compactification. So the claim is that we have this space here. This is compact. It's a compact space. And if everything is regular... So in other words, if all the underlying maps, you see, a regular thing would be... For this, would be that this map was a j-halomorphic map transversely cut out, which I'm going to say more about later. So is that. And then all these intersections here, you've got a space of spheres here, a space of spheres here, these are kind of a concept map. All these intersections are meant to be transverse so that it all sort of comes out and it's all in the right dimension. And the strata, the formal dimension, if you like, of the strata is equal to the dimension of the original, of the top space minus twice the number of nodes. So each node gives you, contributes sort of a co-dimension two to the thing. 
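The dimension bookkeeping for the strata, as just described, can be summarized as follows; this is a reconstruction in standard notation rather than what is literally on the board.

% Formal (expected) dimension of a stratum of the stable map space whose domains have N nodes:
\[
  \dim_{\mathrm{formal}} \;=\; \dim \mathcal M^{\,\mathrm{top}}_{0,k}(M,J,A) - 2N
  \;=\; 2n + 2\,c_1(A) + 2k - 6 - 2N ,
\]
% so each node is expected to cut the dimension down by two.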
And that's a formal dimension. That's the index. So that means that if you can arrange everything to be trans... You know, all the things that transverse, that will be the dimension that it has. But of course, it doesn't have to be like that. But if it were, then you see, if you're in a situation like this where you can arrange that all the lower strata have basically have the right dimension, then the top thing of this will represent a pseudo-cycle because this is compact. So you can compact by the image of the map by just looking at the image of these things. And then the boundary of it is going to be covered by the images of the strata, you know, the higher co-dimension strata here. And they're all going to have at least co-dimension two. So that's a sort of standard regularization picture that you compactify the image by putting in the images of the evaluation maps from the strata and this... In this space of stable maps. If you can arrange, if you can control our dimensions reasonably, then they all have co-dimension at least two and then you get a pseudo-cycle. Now, of course, the kicker is that you can't always control a dimension. That's the whole problem. But if you could, then you'd be... You'd solve this problem. You'd have got... I mean, because our aim, of course, is to try and use these spaces of maps to get some invariance. We want to get these homology classes out that we can then intersect and get the grammar for an invariance or something. And this is just for closed curves, but if you're doing open curves, you have boundaries and you put boundary conditions on. The formal structure is exactly the same. The equation looks a little bit more complicated because you have to deal with what happens at the near the non-compact parts of the domain, but formally it's exactly this idea that you can always compactify the space you're looking at by making some space of stable maps or stable buildings or something. And then you just need to control the dimensions. In principle, all the boundary components you've added should have co-dimension at least two. You need to control that and make sure it does. Okay? So that's telling you all I was going to say about compactness. But you see, I must have erased... Here, I erased the first condition which was that everything was transversely cut out. That's what I need to talk about because that's the hard thing. The compactness is sort of understood. So, so... Why don't you have to worry about the unstable maps? Like, why aren't you there somewhere in the computer? Well, the unstable maps would be... Well, the thing is that they don't appear in the compactification because if you have an unstable component, suppose that were an unstable component, so that map was constant. Then you could just forget about that bubble and put that point there. You see, the unstable components are things which don't actually contribute to the image because the map on them is zero. It's just a constant map. And so you can just collapse any unstable components. And the thing is that it's... I mean, that's a good question. I do not understand. Can you say again? Yeah, I'm saying it. Suppose we had this situation, but this sphere was a constant map there. And that would be an unstable component. But the thing is that you... So, I'm saying that that's not going to be in the compactification because we don't allow that in the compactification. 
What we do is, in that particular case, we just get rid of that completely and just put the marked point there, and there wouldn't be a bubble there at all. Because the image would be the same. If you have like a sequence, I mean, like in your top stratum, like you can just define it so that it converges to a stable map with the stability condition? When you look at the compactness argument, what's happening when a bubble forms, typically, I mean, in this particular case, if this is what we had, all that would have happened here is that some of the marked points would have come together. And the underlying map here, you just have a sequence of maps which was basically converging to a map. And all that would have happened to create this set of bubbles is that the marked points would have come together in a particular way, sort of these two points coming together first and then these two points coming together second, and then they would form this bubble. And you can actually see that completely clearly just by sort of rescaling and blowing up and looking at, you know, nearby this. You'd have, here would be the point where you'd be rescaling. This point is going to go into these bubbles. You'd have z4 and z5 very, very close and then sort of further away, you'd have z2 and z3. So there'd be two scales of bubbling here. These ones would be the furthest ones and these ones would be the later ones. But the map itself would be constant. So the map to M would be constant, but you're talking about how the marked points are coming together. And it turns out, I mean, we've proved this exhaustively in our book on J-holomorphic curves, with Dietmar: if you just look at the way you prove the compactness, you can get, I mean, you could describe, I mean, of course, you could put in something like that and allow that as a limit. But then you wouldn't have a sort of unique limit. And it doesn't give you a good structure. I mean, this was Kontsevich's realization, that the good structure is this one, because there would be an infinite automorphism group of those, which you wouldn't want. You'd have to divide out, you see, by a C star here. You want the set of automorphisms of all these things to be finite. That's very important. Aren't we already doing that now? The other component still has a C star automorphism... No, because you see, here, the map is non-constant. So you've got some map u, and you're saying u is equal to u composed with phi, if you had an automorphism. So phi would have to be something that fixed these two points. And so you'd have to have phi of z1 is z1, phi of the nodal point, let's say infinity, is infinity. And it would have to satisfy this. Well, if u is non-constant, you wouldn't have, I mean, you could, it's a condition, right? I mean, it could happen. Well, you would only have a finite number. You see, what you could have: you could have a map, say u from C union infinity to M, and it's going to take z to, say, z cubed. And then you could have an automorphism which took z to e to the 2 pi i over 3 times z. That would be an automorphism, so u composed with phi would equal u. But it would be a finite group. You're not saying there aren't any automorphisms. You're saying it's a finite group of automorphisms. So if it was a multiply covered curve, it could have an automorphism. But if it's an injective curve, a somewhere injective curve... So that's another thing. You have this idea of a somewhere injective curve, u from S2 to M.
What that means is that there is some point z naught such that if you look at its image and then its pre-image, that's just z naught. So here is the domain S2. Here's your z naught. On this little neighborhood, it's injective. And then you see, if it's injective there and you have something that behaves like this, u composed with phi, then phi would have to be the identity in that little neighborhood where it was injective. And then that doesn't happen: a Möbius transformation that fixes three points is the identity. So really, the stability condition, the sort of geometric way of understanding it, is telling you that the group of automorphisms of this stable map is finite. That's what the stability condition is saying. And that's a very important condition that allows you to... I mean, the fact that you could have automorphisms means that the space here, the strata, are typically orbifolds rather than manifolds, because you could have automorphisms. So you're modeled on an orbifold. But it's very important that it's an orbifold with finite isotropy groups. Yeah. It just occurred to me that I think you didn't say that stable maps have to have finite energy, which is a guarantee that the stability condition actually implies finite automorphisms. Ah, okay. Well, I was saying that we're in some class A. Right. So then omega of A is a number. Right. So they do have finite energy. But you're right. I mean, if you're doing more general things, the finiteness of the energy is actually what makes this whole compactness theorem work. You've got to have a bound on the energy, a bound on the W12 norm, to get started with the analysis. Excuse me. Yeah. How do you put the topology on M bar 0, k? Oh, how awkward of you. Something called the Gromov topology, which I don't particularly want to go into. I mean, for each stratum it's clear enough how to topologize it. And then nearby, well, there's a structure, you see. When you have a stable map, I mean, I have to mention this at some point if I'm doing sort of the elementary beginning anyway: if you want to know what's a neighborhood of something like this, you have to do gluing. So you have to say: gluing is a way of, if you've got a nodal domain, then sort of nearby there are things which nearly have the bubbles, nearly have the nodes, but don't quite. So nearby, so here we've got two nodes, so we've got one gluing parameter here and another gluing parameter there. And for each, if you have... Let me get a slightly bigger board. So suppose we've got a single node. So here's a, we've got some map. So this, let's have no marked points just for simplicity. This is going to some line and that's going to some line. So a non-trivial map. So this is going to a pair of lines. So the image would be a pair of lines. Well, nearby a pair of lines in CP2 you actually have, of course, a quadric. Impossible to draw. But how analytically do you see that? Well, you have this sphere. You take this sphere minus a disc; you take that sphere minus a disc. So you've got S2 minus a disc of size epsilon. You've got another one, S2 minus a disc of size epsilon. And then you sort of glue the boundaries together. You identify the boundaries so that you have something like this, right? I mean, we'd smooth it out a little bit so it's actually a sphere. Now, when you glue the boundaries, you have a twist here. You have a theta twist.
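The gluing parameter about to be described can be packaged as a single complex number; the following is the standard way of writing it and is meant only as a guide to the verbal description.

% Gluing parameter at a node: a radius \varepsilon \ge 0 and a twist angle \theta.
\[
  a \;=\; \varepsilon\, e^{2\pi i \theta} \;\in\; \mathbb{C},
\]
% a = 0 corresponds to the nodal curve itself; for a \neq 0 the two components are glued
% along annuli, and varying a gives the local smoothing of the node.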
So the parameters you have here, you have an epsilon which measures how big the disc is here. You have a theta. So that you can think of as A epsilon times e to the 2 pi i theta. That is in C. That's a little complex gluing parameter. So when epsilon is zero, it's when you have the bubble. And when epsilon's a little bit bigger than zero, you've joined the domains together. Now, of course, in practice, what you do is you use cylindrical coordinates here. So it sort of looks like an infinite end. So you think of something looking like this and something looking like this. So you take away a point here and you have this infinite cylindrical end and then you chop off here at some point and join them together. And then because a map, you see, how do you get a holomorphic map here? Well, the map you've got u1 here and u2 there, say, well, it's near this point. It's essentially constant. So you can just on this glued together thing here, you just make your new map you've got u tilde, say, which is u1, u2 glued together with some gluing A. It's basically equal to u1 here. It's equal to u2 here. And then here you just patch together with some kind of bump function. You just interpolate. So this is almost homomorphic, not quite homomorphic. And then you use a Newton process to make it homomorphic. So that's an analytic thing called gluing. And again, Dietmar and I do the very, very simplest case of that in our J-Holm-Hoffit-Kerf books. Now gluing, you know, you can, the analysis of this is pretty well understood. It's horrible. I mean, it's real analysis, right? So if you're a topologist like me and you don't like analysis, I mean, if you're an analyst, you love it. But otherwise, it's a sort of pain that has to be survived, right? But it does exist. It is well understood. But it can be done in different contexts. And of course, in the Fredholm, in the polyphol context, they have a very beautiful way of doing the gluing. But, and then that sort of gives you the topology. I mean, there is something called the Gromov topology that you can measure, which I really don't want to go into. But basically, you can take, if you have a map that's, you know, sort of, the domain sort of looks like that, you can, you can, you're basically, well, I'm just not, you have to rescale, you have to restrict the bits of the domain and rescale it and sort of measure it. I don't particularly want to go into that. All I want to say is, is a topology. Which, so you have these strata and for each node, so if you have a strata with one node, like consisting of elements like that, then there's a, over that there's a line bundle consisting of the gluing parameters, and then there's a map which takes you that line bundle into, into sort of resolves it. So you get, you can get a topology. Of course, Gromov, I mean, the Gromov topology, you don't need the gluing. You can describe it without the gluing. Anyway, I did want to say a little bit about the regularity question. What is the dimension of the strata? Right. So you have this two times the number of nodes. So this two is just for each of you, you have this complete parameter. Exactly. Yeah. So that, that, that's exactly, that's, that's exactly right where those parameters come from. Okay, so let me, let's, but I think I, you know, I should say something better than to justify the type of my topic. Okay, so, okay we might just kind of forgot OK, so I had the notion about some more injective curve. It's actually right up here. Now, what does transversely cut out? 
Well, so we have the Cauchy-Riemann operator. Del bar J of u is a half of du plus J composed with du composed with little j. And this thing, well, du is the differential, so we're thinking of that as a one form. This is a one form on the domain with coefficients in the pullback of the tangent bundle. So it's an anti-holomorphic one form: du itself is a one form, and the fact that we add that term makes it anti-J-holomorphic, so it anti-commutes with j. And so we can think, in general: we have X, which is some space of maps, perhaps I should write W1p maps, from S2 into M. We've got J, a space of tame almost complex structures. And we have a pair of elements in it. We have u, J in there. So we have u, this is just a map, and that's a J. And then we have this operator. Now this operator lands in a one form. But you see, its image depends on u. So that means that we really have a bundle over here, E, and we're thinking of del bar J as a section of this bundle here. And the fiber of this bundle, E at u, J: for each pair u, J here, we get a map in there. That's precisely the space of, and actually, if I'm completing this to W1p maps, I mean, you can do this in the C infinity case, but if you want to do Fredholm theory or something, you'd better put some Banach spaces here. So then these had better be, instead of C infinity things, let's take the Lp sections, because we've differentiated. So you get Lp. So we have this bundle here. You can prove this is a nice bundle over this space. It's a Banach bundle. You've got this operator. What we'd like transversely cut out to mean: we're interested in the zero set, and what we want is for del bar J to be transverse to the zero section. So we want that for the solutions. So to say transverse, that means we have this operator, we can look at the linearization L at u, J, which is the linearization of this operator. I mean, this is a map from a Banach manifold to a Banach manifold, so you can differentiate it; it should be C1. So you can differentiate it. And then you can project onto the fiber, using a connection in general, but along the zero set it's well defined. And then the linearization is sort of the linearization of this. So if we evaluate that on a psi and a Y, where psi is in the tangent space to the mapping space at u, and Y is in the tangent space at J to the space of almost complex structures, this linearized operator takes the following form: it's the ordinary linearization of the Cauchy-Riemann operator, where you fix J and just linearize it, and then you add to it the sort of linear term coming from varying J. Well, J actually appears here only linearly. So this extra term is a half of Y composed with du composed with j. So you have the linearization because of these two terms. Now, if you fix J, then of course this term goes away. And transversely cut out means that D at u, J is surjective for all u in the solution space. And then when it is, you see, this operator here, if you just look at D u, you fix J and look at that operator, this is a Fredholm operator. So that means it has finite dimensional kernel, it has closed range and a finite dimensional cokernel. So to say it's surjective means it just has a kernel, and the kernel has a fixed dimension equal to the index of the operator. And then it's an open condition to have that surjective linearization. So there's an implicit function theorem in this Banach space context for Fredholm operators.
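For reference, the linearization being described can be written out as follows; this is a reconstruction following the standard conventions (as in the McDuff-Salamon book), not a transcription of the board.

% Linearization of \bar\partial at a zero (u,J), acting on \xi \in T_u X (a vector field
% along u) and Y \in T_J \mathcal{J} (an infinitesimal almost complex structure):
\[
  L_{(u,J)}(\xi, Y) \;=\; D_u\,\xi \;+\; \tfrac12\, Y(u)\circ du\circ j ,
\]
% where D_u is the usual linearized Cauchy-Riemann operator with J fixed.  "Transversely
% cut out" for fixed J means D_u is surjective at every solution u; since D_u is Fredholm,
% the implicit function theorem then makes the solution space a manifold of dimension ind(D_u).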
So if it's surjective for all u here and you vary J a little bit, it'll remain surjective. So that's some kind of nice stable situation. So that's what you're looking for. And so if it's surjective, that means the solution space is a manifold of the right dimension, just using the implicit function theorem. But now, what's the basic theorem in this setting? The basic theorem in this setting is that you can't, unfortunately, prove... I mean, we'd like to say that if we allow variations in J, it's always a manifold. But that's not quite true. So the theorem is that if you look at M star of A, J^l, so this is all somewhere injective J-holomorphic curves u in class A... I'm out of time, so I'm just going to... You have five minutes. I have five minutes. Okay. Well, that would be helpful. Thank you. So you look at all somewhere injective curves. And you allow yourself to vary J. And this is a Banach manifold. And I suppose I should actually be a little bit more careful: J^l would consist of C^l almost complex structures, because we'd better put ourselves in the situation of a Banach manifold. So instead of allowing C infinity J, we'd better complete them; they could be W1p, or Wkp, but you can make them just C^l almost complex structures. So that's a Banach space. So that's a Banach manifold. And then if you look at the map just taking a pair u, J to, sorry, to J, this is Fredholm for l large enough. Strictly, well, perhaps it's always Fredholm, but really what I want to say is: if l is large enough, there's a set sitting inside there of regular values. So that would be where the thing is transversely cut out. It's called regular if the thing is transversely cut out. And the point here is that this is residual in the sense of Baire. Or I suppose you can say comeager. Or, if you're in the first edition of the book with Dietmar, we said of second category, but that's actually a wrong thing; that doesn't mean what we meant it to mean. But anyway, this is a large set. In other words, it's a countable intersection of dense open sets. So it's a large set, that comeager thing. It's not open and dense itself, but it's a countable intersection of open dense things. So it means it's always non-empty, lots of stuff in it. And in fact, there's a trick of Taubes that allows you, I mean, in principle, you do this first when you have a C^l thing, but you can take l equal to infinity here by this trick of Taubes. So what that's telling you, and so that means then, is that there's a large set in here of regular J where all the solutions are regular. And so that... But the kicker here is that the curves have to be somewhere injective. You can't do this, you see, if it's a multiply covered curve; you may not be able to make it regular. The difficulty is that if you have, suppose you have u from S2 to M, and this could be regular, that one could be fine. So D u could be onto, and it could be somewhere injective. But then we could take a map which is a composite: we could take phi from S2 to itself of degree k, and then look at u composed with phi. So perhaps I should call this psi, because it's not a biholomorphism. So this is now, this is a multiple covering, because you're doing your sort of... this, for example, could be u of z to the k or something. Now, if you have a regular curve, it exists, and if you perturb J it still exists; regular curves persist. So you may not be able to... this may be a perfectly good regular curve, you perturb J, you always have u.
Then you always have these multiple covers. But what's wrong with these multiple covers? The fact is that the index of this thing, well, it's 2n plus 2c1 now of k times A, plus whatever. Well, perhaps we don't have any marked points, so that's what it is. Now, the difficulty here is that c1 of A could be negative, which it certainly could be; I mean, there's no reason it can't. In order to have the index... if you take the index of u, it's 2n plus 2c1 of A. And because, if we're just looking at a map from a sphere into M, if it exists there's always a six-parameter family, because we have all the reparameterizations. So the index of this has to be bigger than or equal to 6 in order for this curve to be there. But suppose you're in a dimension where 2n is 10. You could certainly have this being 10, and this being minus 4. And so you have a curve in that class with a negative Chern class. But then you see, if you look at something like this, this thing could be so negative that this actual index could be negative. And yet, this curve would always be there. You can't perturb it away, because the underlying curve is always there, and therefore its multiple covers are there. So if you have a stratum in the stable map space which actually contains something like this, I mean, you could have a curve in class, suppose you have a curve in class B, and then it could decompose into something of class B minus kE plus kE, where c1 of E is negative. It could decompose like that. Well, then you see, you're not necessarily going to be able to get rid of the curves in class kE. They may always exist. There could be multiple covers of curves in class E which are allowed to exist. But then the index of this thing would be big, because you see the first Chern class here of B minus kE: well, the first Chern class of E is negative, and so the first Chern class of B minus kE is bigger than the first Chern class of B. So that means you'd have a stratum, and the curves in class B minus kE would, in the best situation, even if you could make them all generic, live in a space which has got too high a dimension. So that means you lose control of the boundary. And therefore you can't necessarily, you can't have simple methods of getting rid of this. So that's the problem, why you can't use the standard regularity argument, and why you need this to be somewhere injective. Well, I don't have time; if people don't know the argument I can explain it to you, you can ask questions about it; I don't have time to explain why it has to be somewhere injective here. But basically, the reason is that you've got, here's your manifold, and here's where you're varying J. So when you're varying, you see, you're allowed to vary J here to get regularity. But you're varying J on M. But the image of your curve... here's S2, where the domain of your map is, and your variation is going to give you some one form on S2. So you have to take your variation in J, which lives on M, and pull it back to S2. Now if it's somewhere injective, then there's going to be a little neighborhood here; you can control it, you can control J here. If it's somewhere injective, you can pull it back and control it there. But if it's not somewhere injective, when you pull back something from here, it would have several pre-images here. And so it doesn't give you so much control over what the variations in J do for you. So really, as I explained, if the curve is not somewhere injective, you can really have these bad components of the compactification, and they are too big; that can't be dealt with.
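To see concretely how the multiple cover problem described above goes wrong, here is the arithmetic for the ten-dimensional example mentioned in passing; the specific numbers are only meant as an illustration of the phenomenon.

% dim M = 2n = 10 and a class A with c_1(A) = -2:
\[
  \operatorname{ind}(u) \;=\; 2n + 2c_1(A) \;=\; 10 - 4 \;=\; 6 \;\ge\; 6,
\]
% so a somewhere injective sphere in class A can exist generically (6 is exactly the
% dimension of the reparametrization group), while its k-fold covers have
\[
  \operatorname{ind}(u\circ\psi) \;=\; 2n + 2c_1(kA) \;=\; 10 - 4k \;<\; 0 \quad\text{for } k\ge 3,
\]
% and these covers cannot be perturbed away, since the underlying curve persists.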
Now I was going to say more about the analytic difficulties in dealing with this in the traditional way. You see what you have to do; I'd be explaining sort of geometric regularization. You have to have more complicated variations. You have to allow yourself more complicated perturbations, which don't just depend... I mean, the perturbations I'm allowing myself here are just to vary J, which is, say, a sort of big variation. That's just on the manifold: you vary J on the manifold. Well, you can have much finer variations of J to correct things. But then there are all kinds of problems which I luckily don't have time to get into. So there we are. Okay. So since we have office hours this evening, I propose that long questions get asked then. But do people have any quick, shorter questions for Dusa? Does the theorem say that the regular locus is connected? Yes, it does, in one-parameter families, because everything sort of happens in codimension 2. Well, you don't actually have to know it's connected, the regular locus. What you need to know is that there are regular families, regular homotopies. And regular homotopies, you see: if you have a one-parameter family in here, that sort of gives you an extra dimension, because you've got a variation of J in one-dimensional families. So you can have a one-parameter family of things here which doesn't actually consist of everywhere regular things, but consists of things where the codimension of the image is just one. Right. So yeah, part of this theorem tells you that if you have two regular values, you can find a regular homotopy in between where the inverse image would be a manifold of the right dimension. But you can't necessarily pass between them through regular J's. No, you can't necessarily pass between two regular J's. Because if you could, it would mean that the spaces of J-holomorphic curves for J0 and for J1 were isomorphic, basically. And basically, all you can say is that they're cobordant. For spheres in certain situations, you can. But in general, you can't. Are there more questions for Dusa? All right. Let's thank Dusa for her nice introduction. Thank you.
Moduli spaces of pseudoholomorphic curves arise as the zero set of a Fredholm section of a suitable bundle, and one expects and hopes that they can be regularized in order to define invariants that are stable under perturbations. This lecture provides an overview of some of the analytic difficulties that must be solved in order to construct such a regularization, and briefly explains some traditional approaches to their solution, namely via geometric regularizations and finite dimensional reductions.
10.5446/16288 (DOI)
Thus far in this mini course, I think it's fair to say all the results I've been talking about are pretty well considered standard. And the proofs I've described are also using more or less standard methods. The title of the mini course has the word classical, classical transversality results. What I mean by that is any result that you can express in the form for generic domain independent, almost complex structures J, we have transversality for some particular class of J-halomorphic curves. So that's allowed to include some methods which cannot be called standard. And that's what I'm going to talk about today, in particular because everything I talked about so far with one minor exception when I talked about automatic transversality, everything else makes the restriction that we're only talking about somewhere injective curves, which of course is a big problem if you want to define big invariance such as Gromov written theory or SFT. And to a large extent, you cannot really hope for classical transversality results to be true in the generality that you would need to define those theories. And I do already illustrated this briefly and I'll reiterate that a bit in a moment. But there are situations when those results can hold, when you don't need to go to much more general frameworks such as the polyfold theory in order to define everything. If you do get transversality for your honest holomorphic curves, it can make your life easier because the Cauchy-Riemann equation is much easier to handle and carry some sort of natural geometric information that you don't necessarily have in whatever perturbed equation you're going to solve in a more general framework. So sometimes it just requires much more originality in your way of thinking to get what you need. Let me give you an illustration. So for today, I'm going to depart a little bit from the title of the course and not really talk about SFT. So I'm not going to talk about punctured holomorphic curves, but just closed holomorphic curves. Let's say sigma and sigma prime will be closed surfaces. Almost all of what I'm going to say can very likely be generalized to punctured holomorphic curves, but that's work in progress. So I'm not really going to touch upon it besides a few general ideas. I will just tell you about things that I know. So a multiple cover looks like this. I have a k to one holomorphic branched cover from one Riemann surface to another. I assume k is greater than one. Sigma prime is the domain of some j holomorphic curve into an almost complex manifold Wj, capital J. And then the composition of these two holomorphic maps gives me the multiple cover. So let's assume V is somewhere injective. U is now a k-fold covered holomorphic curve. And it'll be helpful to note that there's a relationship between the Euler characteristics of these two domains as they both figure into the index formula. So there's the Riemann-Herwitz formula that says minus Euler characteristic of sigma plus the degree of the cover times Euler characteristic of sigma prime equals this quantity that I like to call z of d phi because that's literally an algebraic count of the zeros of d phi. Or in other words, it's the number of branch points counted with the orders of branching, the number of critical points of phi. And since phi is holomorphic, all of those count positively. So this is an integer that's always greater than or equal to zero. So we have this as a constraint relating the domains. 
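The Riemann-Hurwitz relation described above, written out in standard notation as a reference for what follows.

% \varphi : \Sigma \to \Sigma' a k-to-1 holomorphic branched cover:
\[
  -\chi(\Sigma) + k\,\chi(\Sigma') \;=\; Z(d\varphi) \;\ge\; 0,
\]
% where Z(d\varphi) is the algebraic count of zeros of d\varphi, i.e. the number of branch
% points counted with their branching orders; it equals c_1 of the complex line bundle
% \mathrm{Hom}_{\mathbb{C}}(T\Sigma, \varphi^*T\Sigma'), of which d\varphi is a section.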
If you've never seen the formula before, you can just think of it this way. d phi is a section of a certain complex line bundle which you can easily write down. And then you can compute c1 of that line bundle. The answer is the left-hand side of this formula. That's so hard for you to understand. You can also draw some picture with, you can triangulate things and draw a picture of what branch points looks like. Maybe you prefer that. But it's very quick thinking of it as c1. If I ever forget the formula, that's how I remember. So now let's write down the index formulas that we get from Riemann-Roch. So the index of v, so w, let's assume, is 2n dimensional, real 2n dimensional. The index of v, by which I mean the virtual dimension of the modularized space of unparameterized holomorphic curves that v lives in. So that's not literally the Frenthold index of the linearized Cauchy-Riemann operator, but it is by basically what I explained before. It is that plus the dimension of the relevant Teichmiller space. So it's the actual dimension of the modularized space if transversality holds. And the formula is n minus 3 times Euler characteristic of the domain plus 2 first-turn class evaluated on the homology class of v. How does that relate to the index of u? So same formula with a different domain. Now Riemann-Hurwitz tells me I can rewrite Euler characteristic of sigma as k times Euler characteristic of sigma prime minus this count of branch points. And of course, c1 of u is just k times c1 of v. So I have 2 times that. And now I see sitting inside this formula k times the index of v because there's n minus 3 times Euler characteristic of sigma prime plus c1 of v over here. I just have an extra term. So I get k times the index of v minus n minus 3 times this count of branch points. So in particular, to be a little bit more concrete on what can go wrong with transversality for multiple covers, suppose that the underlying somewhere injective curve had its index zero. So that's generically a rigid isolated object in its modularized space. And for generic J, it's also going to be stable under small perturbations of J, which means there's no way you can get rid of the multiply covered curve since there will always be a perturbed multiple cover of your perturbed somewhere injective curve. But the index of u is then going to be minus n minus 3 times this non-negative count of branch points, which can easily be negative. At least if we're in dimension 8 or upward, unless the cover is unbranched, that's going to be a negative number, right? Dimension 8 and upwards, that's precisely where we stop being able to assume that our symplectic manifold is semi-positive. So there we run into trouble. This is likely to be a negative number in general, but we see that we cannot perturb that curve away. The situation is actually even worse. If you think about what kind of curves we already know must exist in a neighborhood of u, there's not just u itself. There's the other nearby branched covers of the same v. And those come in a non-trivial modularized space in general. So the actual dimension of the modularized space of holomorphic curves near u, and when I say actual dimension, I'm not making the assumption that it's a smooth manifold or anything. It might not be. It might not even be a smooth orbifold, but it does contain a smooth orbifold which I can identify very clearly, namely the space of branched covers of sigma over sigma prime of the same degree. 
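To keep track of the computation in the preceding passage, here is the index bookkeeping written out; this is a reconstruction in the speaker's notation rather than a verbatim copy of the board.

% v somewhere injective, u = v \circ \varphi its k-fold cover, \dim W = 2n:
\[
  \operatorname{ind}(v) \;=\; (n-3)\,\chi(\Sigma') + 2\,c_1([v]), \qquad
  \operatorname{ind}(u) \;=\; k\,\operatorname{ind}(v) \;-\; (n-3)\,Z(d\varphi),
\]
% so if ind(v) = 0, then ind(u) = -(n-3) Z(d\varphi), which is negative whenever n >= 4
% and the cover is branched, even though u cannot be perturbed away.

Meanwhile, as noted just above, the moduli space near u contains at the very least the whole family of k-fold branched covers of v, whose dimension is computed next.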
So we get at least the dimension of the space of k-to-1 branched covers modulo reparameterization, and that's a very well understood moduli space. That's just another moduli space of holomorphic curves living in dimension 2, if you like, and you can compute the dimension from the Riemann-Roch formula as usual. The answer is 2 times the number of branch points. And there's sort of a geometric interpretation of this, because given a branched cover, you can find other branched covers nearby by moving around the positions of the branch points in the image. And those are not equivalent up to reparameterization. They give you different branched covers, and this is precisely the number of parameters you see by doing that. So this is basically classical. So we see that number is never going to match the actual index of u unless possibly the cover is unbranched, or if we're in dimension 2, but so what? We don't really care about dimension 2, right? So here it's fair to say transversality is generally not plausible for the multiple covers. I do want to make another observation about this though. So again, the index of v is 0. So generically, by which I mean after possibly perturbing J, I'm allowed to assume that somewhere injective index zero curves are immersed. Okay? Why is that? So I discussed this with a couple of people since my last talk or discussion session or whatever it was. There was an exercise I posed about using automatic transversality to prove that in a four-dimensional symplectic cobordism, generically, an index one holomorphic cylinder will always be regular. And part of what you have to do to prove that is observe that whatever kind of holomorphic cylinder that one covers, if it's a multiple cover, is also going to be of sufficiently low index so that you can assume it's immersed, and then you can apply the automatic transversality criterion after you know that. So there's this general fact which I don't have time to talk about in earnest, but one can show that if you take your usual moduli space, add a marked point to it — so that increases the dimension of the moduli space by 2 — but now constrain that marked point by asking for the derivative, the first derivative of your map, to vanish at that marked point, then in general that decreases the dimension of the moduli space by 2n. So the upshot of that is, for generic J, the space of somewhere injective holomorphic curves that are not immersed may be considered to be of codimension 2n minus 2 compared to the larger moduli space. So that means if my index is zero, I can assume that that space of non-immersed curves is empty, therefore this one's immersed. So I'm not going to say more about that, let's just accept it for now. It's a J-holomorphic fact analogous to the standard differential-topological fact that you can perturb smooth maps to be immersions, right, given the right dimensional conditions at least. So we have this. Now that means, let's look at the normal bundle. I've got- Can you at least say why there's more to say than what you just said about why the statement is true? Yeah, because you have to work out the details. Go ahead and try it. Let me know how it goes. Come on. So the normal bundle — not the generalized one, but the usual definition of normal bundle here, since v is immersed. Okay. That's what its c1 is. The generalized normal bundle of u is very easy to describe, right? u is not immersed because there can be branch points, so there are critical points of u.
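A sketch of the two dimension counts just described, plus the Chern number of the normal bundle that the next step uses (my reconstruction of the board; N_v denotes the usual normal bundle of the immersed curve v):

\[
\dim\big(\{k\text{-fold branched covers of }\Sigma'\}/\!\sim\big) \;=\; 2\,Z(d\varphi)
\;\neq\; \operatorname{ind}(u) = -(n-3)\,Z(d\varphi)\quad\text{unless } Z(d\varphi)=0 \text{ or } n=2,
\]
\[
c_1(N_v) \;=\; c_1([v]) - \chi(\Sigma'),
\]

and, as I read the marked-point count above, the non-immersed somewhere injective curves generically form a subset of codimension 2n - 2 (adding a marked point contributes +2, requiring du to vanish there contributes -2n).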
But of course, the generalized normal bundle of u is just going to be the pullback of this normal bundle of v. So that's 2k times that number over there, c1 of v minus the Euler characteristic of sigma prime. So remember I talked last time about restricting the linearized Cauchy-Riemann operator to the generalized normal bundle. That also gives you a Cauchy-Riemann type operator on a bundle of rank 1 lower. Its Fredholm index is given directly by the Riemann-Roch formula. Let's see what it is. So D u N is the restriction to the normal bundle, and Riemann-Roch tells me: well, the rank of this bundle, n minus 1 as a complex bundle, times the Euler characteristic of the domain, plus 2 c1 of the generalized normal bundle. So Riemann-Hurwitz tells me what that first term is. We have this extra term counting the number of critical points, and then 2 c1 of u I've just written up here. There we go. Am I missing something? People are saying there's a k missing, but where? No, there shouldn't be. It shouldn't be k. That's v and this is u. That's why there's the k. No, I think this is okay. I'm doing this a slightly more roundabout way than I planned, accidentally. I've got k times, in brackets, n minus 1 times the Euler characteristic of sigma prime, minus another 2 times the Euler characteristic, plus twice c1 of v, and then minus n minus 1 times the count of branch points. This is fine because this number in the brackets here is 0, because I assumed that v has index 0. That's the index formula. So I'm just left with minus n minus 1 times Z of d phi. So what I notice about this is, unlike the index of u that I wrote down, just looking at the normal operator, there's some predictable pattern. This number is always non-positive, which means it's conceivable that this normal operator might actually always be injective. And that's something geometrically meaningful if it's true. So I'm going to actually state this as a conjecture. About a year ago at this time I was calling it a theorem, but then an error was discovered in that proof. So this is a conjecture that says: for generic J, all multiple covers u of somewhere injective index 0 curves v have injective normal Cauchy-Riemann operator, which has a nice consequence if it's true. What that actually implies is all of the other curves in a neighborhood of u are precisely the ones you already know about. They're just the other multiple covers of v. So I'm saying this result would give you a precise description of the moduli space of curves near u as having exactly this dimension of the space of branched covers. That would be the intention, right? So the intention of a result like this would be to prove something like the Gopakumar-Vafa conjecture, right? So the reason why this kind of thing is supposed to be interesting is that it means in certain settings, if you want to compute Gromov-Witten invariants, you really only have to understand the somewhere injective curves. And the rest of it — of course, the multiple covers are not regular in the usual sense, so you have to do some kind of perturbation if you want to actually count them. But there's a standard way of doing this with inhomogeneous perturbations of the Cauchy-Riemann equation, and you can predict the count that you'll get because you see the entire moduli space in terms of the space of branched covers. It has an obstruction bundle, and you can compute the Euler class of that obstruction bundle. That will give you the answer, right? So this has nice corollaries in Gromov-Witten theory. What was the answer to that question? I think the answer to your question was yes. Yes?
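Before the discussion continues, here is the normal-operator index computation written out as I understood it from the board (a reconstruction, not verbatim; D_u^N is the restriction of the linearized operator to the generalized normal bundle N_u = phi* N_v):

\[
\operatorname{ind}(\mathbf{D}_u^N)
= (n-1)\,\chi(\Sigma) + 2\,c_1(N_u)
= k\big[\underbrace{(n-3)\chi(\Sigma') + 2\,c_1([v])}_{=\ \operatorname{ind}(v)\ =\ 0}\big] \;-\; (n-1)\,Z(d\varphi)
= -(n-1)\,Z(d\varphi)\;\le\;0 .
\]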
I'm not really sure how to interpret the index of this normal operator — what does it mean? Right. The main thing I want to say about it right now is that since it is generally negative, the operator can be injective. And what I really want to explain is how you interpret the fact that that operator is injective, okay? So if the kernel of this operator is trivial, think about it like this. The following is a scenario you don't want. You don't want to have a sequence of curves that have different images from u converging to u, okay? My claim is that all the sequences of curves that can converge to u are of the form v composed with some other branched cover, so they all have the same image. Now, if you have a sequence of curves with different images converging towards u, you can do this trick where you look at them as curves living in the normal bundle of u and rescale that normal bundle so that as the curves approach, you rescale so that you see them not actually approaching but staying in some bounded subset of the normal bundle, and apply Gromov compactness to that. That sequence is going to converge to some generally nodal holomorphic curve which will have some component that you can interpret as something either in the kernel of this operator or in the kernel of some related operator corresponding to a branched cover of lesser degree. So if you know that these kernels are all trivial, that precludes this scenario, right? So it says really that the space of u's that are branched covers of v is an open subset of the whole moduli space. Okay. This is all speculation. I mean, well — the argument that I just described can be made fully rigorous, but of course the conjecture is only a conjecture, right? We don't know if this is true. So I do want to talk about a special case of it which we do know. Yes. Can you give an example when the hypotheses of the conjecture aren't satisfied where the desired result isn't true? You know, a multiply covered curve approximated by curves with different images? I'd have to think about that a little bit. I mean, I certainly could come up with examples given enough time where — I mean, in particular, there are hardly any conditions here. The main condition is just that the simple curve has index zero. So in situations where that's not true, right, the possibility of a sequence of simple curves converging to something multiply covered is always something you have to worry about. You usually have to make some effort to avoid it. Doesn't Taubes have some examples where he takes tori and he has multiply covered curves? Tori — those covers are unbranched, double covers and so on. I'm going to talk about that a little bit, but I don't think it's an example of this phenomenon. It's much nicer than that. Richard Hale has some nice examples one can use in the general case. Okay. I'm not aware of that. Okay. So here's an actual theorem which is one of the cases of this conjecture. And this is in a joint paper that I wrote with Chris Gerig last year. I'm going to make the statement a little bit unnecessarily more complicated than I need, just to illustrate how different it is from the results we talked about so far. So let's fix an open subset U in our closed manifold M, a closed symplectic manifold of dimension 2n. And fix also a tame almost complex structure, J-fix.
Then I will say there exists a comeager subset J-reg, living as you'd expect in the space of all J's that are omega-tame and match J-fix outside of this subset, such that for all J's of this class, all unbranched covers u of somewhere injective index zero curves — call them v — contained in the perturbation domain are regular. So remember, if I'm talking about unbranched covers, then the disaster scenario I described with this index relation doesn't happen. That's the case where index v equals zero implies index u is also zero. So in that case, regularity in the usual sense is plausible numerically, and I'm saying for generic J it actually happens, at least if we're looking at curves contained entirely inside this perturbation domain. That's one major difference with the theorems I explained earlier. I'm not saying we get regularity for all the curves that intersect or have any point mapping into the perturbation domain, but they have to be contained in it entirely. So we'll see why that seems to be necessary. I mean, I don't know if that condition can be dropped. I certainly don't know a way of dropping it. You can certainly take U to be the whole manifold, right? But if you wanted to just do perturbations in some subset, then you have to restrict yourself a bit. Do you ask that U has to have, like, compact closure or something like that? Well, U is closed. Yeah. Or sorry, M is closed. Therefore, U has compact closure. Yes. I mean, I could also allow M to be not compact, and then I would indeed have to require that U has compact closure. That's a good question, in fact. Okay. So I'm not going to explain the proof in quite this level of generality, but I'm going to explain a case of it which is somewhat older than our result but somehow very badly known or badly understood. So in the case n equals 2 — so it's just dimension 4 — and where both domains are the torus and the underlying simple curve is actually embedded. So this is what Dusa alluded to with her question a moment ago. This was done by Taubes in 1996, in a paper in the Journal of the AMS. The proof there is kind of hidden. In fact, when I went back to it last night to figure out where it was, it took me a while, and it's very sketchy, but somehow the ideas in it are extremely potent. So, of course, this means we're getting regularity for multiply covered holomorphic tori that are covering embedded tori. Taubes needed this because his definition of the Gromov invariant actually counted those things, and it did it without doing abstract perturbations. It did it for generic J. So in this situation on the torus, there are some convenient things. The fact that v has index 0 means that its normal bundle has to be trivial. So I can write: the normal bundle of v is going to be identified with the trivial complex line bundle over T2, and for u similarly. And a Cauchy-Riemann operator, if I write it in those terms, just looks like the usual D bar operator plus some 0th order term. So here by D bar operator, I mean literally just partial by s plus i times partial by t, acting on complex valued functions, and A is, say, a C-infinity map from the torus to the space of real-linear endomorphisms of C. That's what a 0th order term looks like. And then the pulled back normal operator is literally the pullback of that. So the normal operator for u is going to be the standard D bar plus this 0th order term A composed with the cover phi.
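In symbols, the torus setup just described reads roughly as follows (my shorthand; s, t are the coordinates on T^2, and A is the 0th order term):

\[
N_v \cong T^2\times\mathbb{C},\qquad
\mathbf{D}_v^{N} = \bar\partial + A,\quad \bar\partial := \partial_s + i\,\partial_t,\quad
A\in C^\infty\big(T^2,\operatorname{End}_{\mathbb R}(\mathbb{C})\big),
\qquad
\mathbf{D}_u^{N} = \bar\partial + A\circ\varphi .
\]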
So just a few general points before I really get into the argument. It suffices to show the following: for all J, of course of the tame class I'm thinking about, if v is a J-holomorphic embedding of the torus into this perturbation domain and u equals v composed with some cover phi — phi is necessarily unbranched, since it's a torus covering a torus — then I can perturb J to J prime such that v is still J prime holomorphic, but the normal operator for u becomes an isomorphism, defined with respect to the perturbed almost complex structure J prime. Can you explain one more time how to think about the normal operator, this D u N? Its definition is very simple. You take the ordinary Cauchy-Riemann operator, which is defined on the pulled back tangent bundle. You restrict it to sections of the normal bundle. And now you get a section of a larger bundle than you want, but that also has a projection to a corresponding normal part. So the operator takes you to some section of the bundle Hom-bar of T sigma to u star TW — or TM, whatever it's called. There's a normal projection in that as well; it takes you to Hom-bar of T sigma to the normal bundle. So you just compose the Cauchy-Riemann operator with that normal projection. So that's just in terms of the definition via maps to the tangent bundle — is there another way to think about it? Ah, yeah. Okay. Yeah, there is another nice way to think about this. So there's an alternative way of describing a neighborhood of a curve in its moduli space that I didn't talk about — if the curve is immersed, specifically. So if the curve is immersed, one way of describing all the other curves nearby — let's actually write this a little bit. So u is immersed. All the other curves nearby can be assumed to be of the form exp u of h for some section h of the normal bundle. Okay. Now, in fact, that's going to hit a unique parameterization of every nearby curve in the moduli space, but you don't get to choose what complex structure you have on the domain at all. So what you have to do then is not actually look for nearby maps of this form that solve the Cauchy-Riemann equation with respect to some specific j — these are all going to be immersed also — but just look for nearby maps like this whose tangent spaces are J invariant, so that automatically you can pull that back to some complex structure on the domain. You don't get to prescribe it. So that's another way of seeing all the holomorphic curves in the vicinity of this one immersed curve. And, well, you can write down some nonlinear operator that does that for you. Its linearization is essentially this normal Cauchy-Riemann operator. That also tells you why it happens that the Fredholm index of the normal operator is the same as the dimension of the moduli space in the immersed case. That's basically because if you perturb in the tangent direction then you just reparameterize the same curve. Right. Perturbations in the tangent direction are sort of not meaningful for studying this moduli space because they just give you reparameterizations of the same curve. Okay. So I don't know if everyone had a chance in the meantime to think about why what I just said here is true. It suffices to show: given a curve and a multiple cover, you can perturb J to one that makes that specific multiple cover regular in the sense of the normal operator being an isomorphism. So this is an exercise using the Taubes trick that I explained last time. Right.
You can exhaust the space you're interested in with a countable union of compact subsets — in this case finite subsets, even. So as long as you're able to achieve transversality for each of those subsets, then you can find some set of J's that's a countable intersection of open dense sets that does everything you want. Okay. So that's all I'm going to say; this is a version of the Taubes trick to get you from there to the result we really want. The other thing I'm going to say is that we can reduce this problem to something that really only involves linear Cauchy-Riemann type operators on an abstract vector bundle, because I can say: for all 0th order terms A prime — so a 0th order term is just some C-infinity function valued in the space of real linear maps on C — we can find a J prime such that J prime equals J in the tangent directions along the curve v. So T sub v is what I was calling the generalized tangent bundle before; literally that just means the image of the differential of v. And we can also say J prime equals J outside some neighborhood of the image of v. But the normal operator for v, expressed in this trivialization and expressed with respect to the perturbed almost complex structure, is D bar plus A prime. So I'm only saying here: give me any perturbed Cauchy-Riemann operator you want in the space of all real linear Cauchy-Riemann operators; I can find a perturbed J that realizes that operator for you. And this is not terribly deep. Actually in the embedded case, when v is embedded, this is fairly easy to prove. It's just a matter of choosing the normal first derivative of your perturbed J in the right way to produce the right 0th order term. One can also do it in the immersed case, and that's a bit more painful. I'm not going to talk about that. Okay. So let's also take this as given and just look at a problem involving Cauchy-Riemann operators on line bundles. I don't understand, in the middle of the first line — J prime is equal to J on T sub v? What is that? The image of dv, in other words the tangent spaces to the curve. Oh, the v there is the index, the subscript. Yeah. Any other questions? All right. So here's a claim, and I'm even going to label this one an improbable claim, because when I first saw this in Taubes' '96 paper I had no idea why I should believe this is true, and I'm still not sure I can explain to you why you should believe this is true, but I can prove it. So here we go. Let's suppose D, of the form D bar plus A, is a Cauchy-Riemann operator on the trivial line bundle over T2, and B is a bundle map on that trivial bundle which I'm going to assume is a complex anti-linear bundle isomorphism. So the two key properties are: it's complex anti-linear and it's a bundle isomorphism. So given that the bundle is trivial, it's obvious that you can do this. This is more or less equivalent to saying B of z acts on a vector eta in a certain fiber by some complex valued function beta of z times the complex conjugate of eta, where this complex valued function beta is assumed to be nowhere zero, so it's mapping into C star. Okay. So that's the assumption. I want to just mention quickly: if I were not working on the torus but with an unbranched cover with more general domains sigma and sigma prime, the assumption that the index of the simple curve is zero again allows me to do this — then I would be able to find a complex anti-linear bundle isomorphism between the relevant bundles even though they're not trivial.
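For concreteness, the linear setup on the torus in symbols (my reconstruction of the board):

\[
\mathbf{D} = \bar\partial + A \ \text{ on } T^2\times\mathbb{C},\qquad
B(z)\,\eta = \beta(z)\,\bar\eta \ \text{ with } \beta\colon T^2\to\mathbb{C}^*,
\]

so B is a complex anti-linear bundle isomorphism of the trivial line bundle.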
That's actually equivalent to the fact that the index of the simple curve is zero. So that's something also that is easy to check. Now the statement will be that I can define a perturbed operator D tau as D plus a real parameter tau times this extra bundle map B, treated as a 0th order term, and that is an isomorphism for all real numbers tau outside some discrete subset. Okay. So this is one way of perturbing a Cauchy-Riemann operator that might not be an isomorphism and making it into an isomorphism. In particular, what you need to notice about this perturbation is that it doesn't care at all about symmetry. I require this extra term B to be a complex anti-linear isomorphism; I don't require it to be anything else. If my bundle is the pullback of some bundle defined along the simple curve, but I'm actually looking at the normal operator of the multiply covered curve, I can do a perturbation like this just by changing J along the simple curve. Now I pull that back to the multiple cover; my perturbation of my operator is going to be invariant under deck transformations of the cover. This claim does not care. Okay. Where symmetry messes up the usual argument — and this is the reason why I need somewhere injective in the usual arguments — is that symmetry ruins the Sard-Smale argument. This is impervious to that. So I need to convince you this is true. It's a compact perturbation of the operator, so yeah, this doesn't change the index. So basically all the Fredholm operators in this talk have index zero, so they're either isomorphisms or they have both kernel and cokernel. Yeah? Can you remind me, or tell me — once we have this, how do we actually use it? Right. So if we have this, that means I can find — sorry, if I have my given J and my given cover which is maybe not regular, I can find a perturbed J that perturbs the normal operator along the cover in this way and therefore makes that curve regular. So first step: this one-parameter family of operators D tau is injective — which of course, since its index is zero, implies it's an isomorphism — for all tau sufficiently large. So this step is interesting because I'm quite convinced that this argument couldn't have come from somebody who was mainly a symplectic topologist. It had to come from a gauge theorist. In particular, if you're familiar with Taubes' work relating Gromov invariants and Seiberg-Witten invariants, you'll notice a parallel here. There's something that Taubes does in Seiberg-Witten theory where you write down the Seiberg-Witten equations with a perturbation term depending on a real parameter, and you can prove for topological reasons that if you make that parameter very large, you don't have any solutions — or you do have solutions but they're converging in the sense of currents to holomorphic curves, something like that. So this is going to be a much easier version of that. And it's an argument that I've never seen anywhere else in symplectic topology. So let's see — well, I haven't really specified at all what Banach spaces I'm working with. Let's just, to make my life easy, choose Hilbert spaces and say the operators are going from H1 to L2. So let's suppose I have some non-zero element eta in H1 on the torus. And the idea is to operate on that with D tau and look at the L2 norm squared. So D tau eta is a sum of two terms, which means I can expand this L2 pairing and I get three terms.
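Written out, the expansion about to be walked through term by term (my reconstruction; the pairing is the real L^2 inner product):

\[
\|\mathbf{D}_\tau\eta\|_{L^2}^2
= \|\mathbf{D}\eta\|_{L^2}^2
+ \tau^2\,\|\beta\bar\eta\|_{L^2}^2
+ 2\tau\,\operatorname{Re}\,\big\langle \beta\bar\eta,\ \bar\partial\eta + A\eta\big\rangle_{L^2},
\qquad \mathbf{D}_\tau := \mathbf{D} + \tau B .
\]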
I have the L2 norm of D eta squared, plus tau squared times the L2 norm of the perturbation B eta — which I'm writing here as beta times eta bar, so let's write it: beta eta bar, L2 norm squared. And then there's a cross term. I'm assuming that my inner product is real valued, because the operator is real linear, not complex linear. So I can use a Hermitian inner product on the trivial complex line bundle, but then I have to take the real part. So this is going to look like 2 tau times the real part of the L2 inner product of beta eta bar with D of eta. Let's write that out as D bar eta plus A eta. So a few observations about this. Of course, this first term is non-negative, obviously. The second term: since I assumed that B is a bundle isomorphism, which means this map beta is nowhere zero, I can bound this term from below by the L2 norm of eta. So this is greater than or equal to some constant c1 times the L2 norm of eta squared. And that's all I need to care about for that term. Over here, I really have two terms in the cross term. So let's look at the more harmless one first. The pairing of beta eta bar with A eta — its absolute value is clearly bounded above by another constant times the L2 norm of eta squared. And I have to worry a little bit about the other part of the cross term. So I need to work a little bit to estimate that properly. I'd be much happier with that term if it didn't involve a derivative of eta. So you have an L2 pairing of something with something else that's a derivative — what do you do? You can integrate by parts. Let's see. The real part of the inner product of beta eta bar with D bar eta is literally the real part of the integral of the conjugate, beta bar times eta, times D bar eta, integrated over T2. Now use the Leibniz rule and say that's the integral of D bar of the whole thing, beta bar eta, times eta, minus the integral of D bar of beta bar eta, times eta. Integrating D bar of something over a closed manifold gives me zero, due to Stokes' theorem — or you can probably even reduce that to the fundamental theorem of calculus. So that's zero. Over here I can expand a little bit further and say minus the real part of the integral of D bar beta bar times eta times eta, minus — no, still minus; this is one of those arguments where if you get one sign wrong you're really dead — beta bar D bar eta. Okay. Hopefully I got the signs right. And I am missing an eta. Yeah. Here. Thank you. That's important because I want this term to be the same as that term that I had on the left-hand side. So I can put this last term on the left-hand side, so I have twice that equals this. So the thing I'm trying to estimate actually equals minus one half times the real part of the integral of D bar beta bar times eta times eta. And I don't really have to care about the details of this anymore either. I just want to say the absolute value of all this is now less than or equal to some other constant times the L2 norm of eta squared. So I put that all together. Now this whole thing — Sorry Chris, are you somehow saying that the D bar operator is self-adjoint? No. I am saying that if you take a complex valued function on the torus and integrate D bar of it over the whole torus, you will always get zero, and you can prove that using Stokes' theorem. But that means you're just throwing D bar from one term onto the other. Yeah. Is that okay? That's an L2 inner product. No, no, no — I mean, it was an L2 product to start with, but I wrote it as a product of complex numbers over here. I took conjugates of the terms on the left. Yeah.
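To pin down the expression under discussion, the integration by parts just carried out, summarized (my reconstruction; the integral is over T^2 with its standard measure):

\[
\operatorname{Re}\,\big\langle \beta\bar\eta,\ \bar\partial\eta\big\rangle_{L^2}
= \operatorname{Re}\int_{T^2}\bar\beta\,\eta\,\bar\partial\eta
= -\tfrac12\,\operatorname{Re}\int_{T^2}\big(\bar\partial\bar\beta\big)\,\eta^2 ,
\]

which in absolute value is bounded by a constant times the squared L^2 norm of eta.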
So this is an integral of a complex valued function that's expressed as a product of complex valued functions. Okay. Right. It equals that L2 inner product — it's just the real part. Okay. Right. So I'm defining my L2 product to be the real part of this integral. So — Is it really eta times eta there? What do you mean? It's a complex number; it's just the square of that. Yes. It's a function. Yes. And you've got some measure on T2 here and there. The usual one. Yes. So these are good questions, and they kind of allude to the fact that one can do all this in a much more general setting, but it causes a bit more of a headache. Right. So this is not a uniquely low dimensional phenomenon I'm describing. Even though I'm describing a low dimensional proof, you can do it in higher dimensions. You can also do it on more general domains. It just requires several extra steps that I don't have time for. So to summarize: the L2 norm of D tau eta squared is now greater than or equal to some constant — I'm going to change the name of the constant — c1 prime times tau squared, minus another constant c2 prime times tau, all times the L2 norm of eta squared. And that's the proof. If eta is non-trivial then this cannot be zero, as long as tau is sufficiently large. Okay. Why did you need to evaluate this last thing? Why couldn't you immediately say the first term wins — the first one goes like tau squared and the second like tau? The first one, yes, goes like tau squared. I mean, the main difficulty in this argument was that I had to get control over this term that has the derivative of eta in it and relate that to just the plain L2 norm of eta. That's what the integration by parts is for. And that's also where I used — I mean, you may not have noticed it so explicitly, but that's where I used the assumption that my perturbation term is a complex anti-linear perturbation: specifically because it appears in there as beta times the conjugate of eta. If I didn't have eta bar in there, this argument wouldn't have worked. That's as close as I can come to giving you an intuitive reason to believe this. I'm sorry. So, yes. The constants there, like c3 and c2 — c3 is like the norm of del bar beta bar? It's whatever is convenient. There is one. Do you believe that there is some constant that makes that true? Stare at it a little bit longer and you'll be fine. All right. So, I'm almost done. I have one more bit of magic to pull off to finish this proof, and that's a bit of analytic perturbation theory. So I won't have too much time to — how much time do I actually have? Three minutes. Yeah. That's always the answer, isn't it? Analytic perturbation theory. So, what I'm about to say can be done in the real analytic category, but I don't want to, because I'm a little bit allergic to the real analytic category. So instead, I'm going to work in the complex analytic category and complexify my operator first. My operator is only real linear, even though it's acting on a complex vector bundle. So what I can do, at the expense of having two complex structures in the picture instead of one, is complexify the domain and the target of the operator and consider the canonical complex linear extension of it to the complexification. So, more precisely: we've got D tau as a real linear operator from some space of complex-valued H1 functions to complex-valued L2 functions.
Let's take — actually, to be more precise, we can say this is H1 functions from S1 to R2, and calling it R2 instead of C will help avoid some confusion coming up as I'm about to complexify it. T2, not S1. Oh, sorry. Yeah, T2, not S1. I can't even tell whether it says S1 in my notes. That's not so good. So if we complexify, this becomes an operator I'll call D tau C, from — call it H1-C, which you can think of as the tensor product of this Hilbert space with C, which is equivalent to the space of H1 functions from T2 to C2 — and it's going to map that to the corresponding space L2-C, and it's complex linear. And what I can do now is allow my parameter tau to be complex instead of just real. So now the map taking tau to the operator D tau C is a holomorphic map from C to the space of Fredholm index zero operators, complex linear, from H1 complexified to L2 complexified, which of course is an open subset of the space of all bounded complex-linear operators between those spaces. So this is just a complex Banach space; I have a map from C into that Banach space. It is easily seen to be differentiable with respect to the complex variable tau, right? In fact, it's an affine map. So that's a holomorphic map into this open subset of Fredholm index zero operators. So the last step is to observe the following. The space of noninvertible operators — operators that are not isomorphisms — sitting inside this space of Fredholm index zero operators is what we call an analytic subvariety, a complex analytic subvariety, which means locally you can express it as the zero set of some holomorphic function on that open subset of an infinite dimensional complex Banach space. But that function is valued in something finite dimensional — it's valued in C, in fact. So what we end up with is that the set of all parameters tau in C with the property that the complexified operator D tau C is not an isomorphism looks locally like the zero set of a holomorphic function from C to C. That's why that set is discrete. We already showed that it's not everything, because for very large tau the operator is an isomorphism. So once you know that, the zero set must be discrete, and you're done. And then, of course, you have to think a little bit about relating this statement about the complexification to the original real linear operator. That's fine. Basically you can convince yourself that if tau is real, then the complexified operator being an isomorphism implies that the real linear operator is an isomorphism. So I was going to prove this lemma — in fact I was going to say other stuff after that — but there's no time. So if you want to know the proof of the lemma, I can tell you whenever, if my voice holds out. But not now. But not now. So let's have lunch. Are there any other questions? When you say it's a discrete set, do you really mean it's a finite set? I mean, it can't accumulate at zero, or anywhere? So I don't know what happens for other large complex parameters that are not necessarily on the positive real line. That's probably — I mean, your main interest was in the zeroes when tau was real, right? Yeah. And near zero, because we're making a small perturbation. This is where I haven't thought about the answer to your question very much, so it's probably true. But if you're saying it's the zero set of a holomorphic function, the zeros are isolated, aren't they? They're isolated. They're isolated.
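To summarize the complexification step in symbols (my paraphrase of the argument, with Fred^0_C denoting the complex-linear Fredholm operators of index zero):

\[
\mathbb{C}\ni\tau \;\longmapsto\; \mathbf{D}_\tau^{\mathbb{C}} \in \operatorname{Fred}^0_{\mathbb{C}}\big(H^1(T^2,\mathbb{C}^2),\,L^2(T^2,\mathbb{C}^2)\big)
\]

is affine, hence holomorphic; the non-invertible operators form a complex-analytic subvariety, so the set of tau for which D tau C fails to be invertible is locally the zero set of a holomorphic function from C to C, and since it is not all of C (large real tau), it is discrete.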
So in other words, if it's zero at zero, then there's a whole neighborhood where it's not zero. Correct. There's a whole neighborhood where the set is discrete. Well, discrete including zero. So, okay. My goal — I had a specific normal Cauchy-Riemann operator for this multiply covered curve. So this is what, in this abstract discussion at the end, I'm calling just plain D. And then D tau becomes just some perturbation of this normal operator, which corresponds to some specific choice of perturbation of the almost complex structure in a neighborhood of the curve. All right. So we're going to reset the clock and restart at two. Have a nice afternoon. Let's thank Chris again. Thank you.
There are easy examples showing that classical transversality methods cannot always succeed for multiply covered holomorphic curves, but the situation is not hopeless. In this talk I will describe two approaches that sometimes lead to interesting results: (1) analytic perturbation theory, and (2) splitting the normal Cauchy-Riemann operator of a curve along irreducible representations of its automorphism group. Both were pioneered by Taubes in his work on the Gromov invariant and Seiberg-Witten theory in the 1990's, and I will illustrate them by sketching two proofs that the multiply covered holomorphic tori counted by the Gromov invariant are regular for generic J. If time permits, I will discuss some ideas as to how both methods can be applied more generally.
10.5446/16286 (DOI)
Thanks, Joe. I'm going to start on this board with just a preface of generalities. Let's call this the archetypal moduli problem, by which I mean from the perspective of global analysis or differential topology or symplectic topology, as distinct from algebraic geometry, where they also like to talk about moduli problems. So for me, a moduli problem means this kind of thing. You have some space which I'm calling, suggestively, J. But right now, I'm just going to call it the space of data, which has some kind of topology on it. And for each choice, which I will suggestively call also J, in this space, we get some moduli space, M of J, which also has a natural topology. And of course, we would like to say that it has some much nicer structure than a topology, like being a smooth manifold. Or if we can't achieve that, maybe at least a smooth orbifold. And over the course of these two weeks, I'm sure you'll also hear things about weighted branched orbifolds with boundary and corners, though not from me. From my perspective, well, the first thing to say about this moduli space is that locally, you can identify it with some kind of zero set, which I'll write sigma J inverse of 0 — except not precisely that; often you have to divide it by some kind of symmetries. So here, sigma J is some section of a Banach space bundle. And we already heard Dusa sketch one example of this, because you can express all sorts of geometric PDEs in this way as, preferably, smooth sections of Banach space bundles in an infinite dimensional context. But I'm saying this is true only locally in general. So Dusa described the situation with holomorphic spheres where you can actually do this globally. But in my experience working with J-holomorphic curves, that's rather the exception to the rule. And so I will talk about some more general situations where you can do this but only locally. And these symmetries can cause quite a lot of headaches. But the first problem, of course, is you want to be able to say that this zero set here is a smooth object. And the implicit function theorem ought to tell you that, if you do things correctly. But there are two general ways that you can approach that. First, the so-called abstract approach, where you perturb the right-hand side of the equation. You can say something like: define the moduli space M tilde, which depends on the data J and some extra auxiliary choice called nu. That's the space of all u in this Banach manifold such that sigma J of u matches nu of u. So this is for some other section nu of that same Banach space bundle. And the idea is, if we choose that generically, then this set of solutions will be a smooth object, simply by the Sard-Smale theorem. That's kind of equivalent to the statement, just expressed in local trivializations, that a generic point in the target of a smooth map is a regular value of that map. So a generic section nu also intersects this section transversely. So for generic choices of this auxiliary nu, our special section sigma J is transverse to that, which means this moduli space of perturbed objects I defined is a manifold. And I even want to say more, because if we're talking about elliptic problems, this section is not just a smooth section but also Fredholm, in the sense that its linearization at any point is a Fredholm operator. So when we have these kinds of transversality conditions, we get not just a manifold, but a finite dimensional manifold, because of the finite dimensional kernels of the linearization.
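In symbols, a sketch of the two objects just described (my shorthand, following the spoken description):

\[
\mathcal{M}(J)\;\cong\;\sigma_J^{-1}(0)\,/\,(\text{symmetries})\quad\text{locally},\qquad
\widetilde{\mathcal{M}}(J,\nu)\;:=\;\{\,u\;:\;\sigma_J(u)=\nu(u)\,\},
\]

where sigma_J is a Fredholm section of a Banach space bundle and nu is an auxiliary section; for generic nu, the Sard-Smale theorem makes the perturbed space a finite dimensional manifold.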
So that's, of course, very nice when that happens. The problem is I perturbed my equation to a different one, which might not be the one I'm actually interested in. That's problem number one. There are more problems, which I think you'll hear more about over the course of the two weeks, because this abstract approach is sort of the raison d'etre for the whole polyfold project. But it's difficult. And part of the reason it's difficult is that, first of all, this correspondence between some zero section and my actual moduli space of interest is in general only local, and it also has these symmetries. You have to deal with both of those in a sensible way when choosing these perturbations. And it means just choosing some generic section of a Banach space bundle is usually not enough. You have to choose it satisfying some conditions, and maybe those conditions are too stringent to actually apply the Sard-Smale theorem in this way. So do you want it to be, like, invariant under the symmetry or something? Something like that — you want various things of that sort, yes. I reserve the right to not talk about that anymore, because other people will. And I'm going to talk about the alternative, which is in the title of my mini course: so-called classical, or what many people call geometric, perturbation methods. That's the notion that if you take a generic choice of the data in the space of data, then your actual section will be transverse to the zero section, which means, of course, the zero set of your section is a smooth manifold, and finite dimensional as well, because we're in the Fredholm context. And, well, the space we're actually interested in is that divided by some kind of symmetry. That, in general, will be an orbifold. So an orbifold is something that looks locally like Euclidean space divided by a finite group action. And that's what you get if you're taking a finite dimensional manifold and dividing it by the kind of symmetry groups that we're interested in. So that is minor headache number one to be aware of: we won't get manifolds in general, but we could get orbifolds if we're lucky. Much larger headache number two is that often we're not lucky and this condition simply isn't true. And Dusa already talked about that a little bit. I'm going to talk about that some more. So that's all I will say about generalities. I'm going to focus on this second, classical approach over three talks, and in a particular context: I want to discuss symplectic field theory. So in the original propaganda paper in 2000 by Eliashberg, Givental and Hofer, you have symplectic field theory presented as a very general way of defining invariants of contact manifolds and symplectic cobordisms between contact manifolds. And, well, one can do it even a little bit more generally than that, which is sort of nice for certain purposes. So I'm going to talk about stable Hamiltonian structures. So first of all, if M is an odd dimensional manifold, a stable Hamiltonian structure H on M is going to be a pair, capital Omega and lambda, which consists of the following. We have first: Omega is a closed two-form of maximal rank. So you should imagine that as something that arises on any hypersurface in a symplectic manifold: you just take the symplectic form and restrict it to a hypersurface, and it's going to be a closed two-form of maximal rank. If I omit the word stable, that's what we call just a Hamiltonian structure. Now, stable comes from this additional one-form lambda, which satisfies the following properties.
First, lambda wedge the top power of Omega must be a positive volume form. So this implies in particular that you can take Omega and restrict it to the kernel of lambda. Lambda is nowhere zero, so its kernel is a hyperplane distribution, which I'm going to call xi, as I always do with contact structures, because that's one important example. It could be a contact structure, but in general it's just some hyperplane distribution. And Omega makes that hyperplane distribution into a symplectic vector bundle — it gives us a symplectic bundle structure. And then the other condition: since Omega has maximal rank on an odd dimensional manifold, it has a kernel that's always one dimensional. That's the so-called characteristic line field, the one-dimensional space of directions in which Omega is degenerate. So what I'm going to require is that d lambda also vanishes on that same kernel. Another way of saying it is that the kernel of Omega is contained in the kernel of d lambda. OK, so this is a definition which has been around for several years. And as stated, it's not too hard to understand, but I think a lot of people don't really understand what it means. So let me talk a little bit about that. First observation is that if I now take this odd dimensional manifold and look at a small cylinder constructed out of it, so some small interval times M, this thing now inherits a natural symplectic form. Let's call the real coordinate little r and define little omega to be d of r lambda, plus big Omega. And that's an easy exercise: that's going to be a symplectic form as long as this cylinder is sufficiently small. It might even be symplectic for epsilon larger than that, but in general I can only guarantee this is true for epsilon small. Moreover, what does the characteristic line field look like? You've got a one parameter family of hypersurfaces: let's say M sub r is the hypersurface r times M sitting inside that cylinder. The characteristic line field of omega restricted to M sub r is independent of r. So it's the same characteristic line field for all r, all parameters in my parameterized family of hypersurfaces. That's really the original definition, which I think appeared first in the book by Hofer and Zehnder, of a stable hypersurface. That's where this idea comes from. It's the kind of situation in which you would expect to be able to prove something like the Weinstein conjecture, which asserts existence of closed characteristics of this characteristic line field — because if you have them on some hypersurfaces in the vicinity, you have them on all of them. Excuse me, what do you mean with the first symbol of the last line? The first symbol? That's an 'and', an ampersand. So in order to get the stability condition, you want this last condition, the kernel of Omega contained in the kernel of d lambda — that's what implies this? That's what implies this fact about the characteristic line fields, exactly. And conversely, if you are given a one parameter family of hypersurfaces that has this property, this stability of the characteristic line field, then you can always find a stable Hamiltonian structure that produces that. So that's kind of an exercise using the Moser deformation trick. How much will I lose if I just ignore this definition and pretend you said contact structure? Nothing really. That's a perfectly good point. If you don't like this definition for some reason, because it's new, just think of lambda as a contact form and big Omega equals d lambda.
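For reference, the definition and the cylinder construction just described, in symbols (my reconstruction, writing dim M = 2n-1 and xi = ker lambda):

\[
\mathcal{H} = (\Omega,\lambda):\quad
\lambda\wedge\Omega^{\,n-1} > 0,\qquad
\ker\Omega \subset \ker d\lambda,\qquad
(\xi,\Omega|_\xi)\ \text{a symplectic bundle};
\]
\[
\omega := d(r\lambda) + \Omega \ \text{ is symplectic on } (-\epsilon,\epsilon)\times M \ \text{for small }\epsilon,
\]

and the characteristic line field of omega restricted to M_r = {r} x M is independent of r. The contact case is lambda a contact form with Omega = d lambda.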
I was going to say that soon anyway. So, from this data, we also have a canonical generator of the characteristic line field, which by analogy with contact forms we'll call the Reeb vector field. So my notation will be R sub H. And it's determined by the conditions: Omega annihilates the direction of the Reeb vector field, and then we normalize it by lambda. Is there any reason d lambda couldn't be zero? No. There's no reason at all why d lambda couldn't be zero. I will also say that again in a moment. So we have a stable Hamiltonian structure. It defines this hyperplane distribution xi, which might be a contact structure in certain examples, but it doesn't have to be. It could also be a foliation. We have this Reeb vector field. Another thing we can do, since we have this small symplectic cylinder we built out of it, is just arbitrarily expand that to R times M and rewrite my symplectic structure there in the form omega phi. That's going to be d of phi of r, times lambda, plus Omega. So what's phi? Phi is going to be in the space of smooth functions taking R to that same small epsilon interval with strictly positive derivative. So this is a symplectic form also, OK? It's symplectomorphic to the one I wrote down before. I've just stretched things. But the point is that R cross M is the kind of setting where I would like to talk about J-holomorphic curves. And I can do that now by saying, well, the space J depending on the stable Hamiltonian structure is going to be the space of almost complex structures J on R cross M, which I will assume always to be R-invariant. In other words, there's this R translation action on R cross M; I want that to preserve all my J's. I also want to say that J maps the unit vector in the R direction to the Reeb vector field. And I want to say that J preserves my hyperplane distribution as well as the symplectic structure on it — it's compatible with the symplectic structure on that. So let's say J restricted to xi is Omega-compatible. OK? So here's a lemma which is very easy to prove and helps justify all these definitions. That is that for all choices J in this space of translation invariant almost complex structures, and all functions phi as I indicated above, mapping to that small interval with positive derivative, it's not just that omega phi is symplectic, but in fact it's compatible in the usual sense of Gromov with this choice of J. So I can fix J and then allow any choice of phi I want. And that's an important detail for defining things like the energy of a J-holomorphic curve, because we then take that to be the supremum of symplectic area defined with respect to all these different choices of symplectic forms. So let me give two examples. They've already sort of been mentioned. First of all, if alpha is a contact form, we get a stable Hamiltonian structure: d alpha, comma, alpha. And then this symplectic manifold R cross M with this kind of symplectic structure I've been talking about — I'm not going to try to relate these two precisely; you have to make some kind of condition on how you define phi in order to say this, but I'm just going to say one can do it — so this is what's called the symplectization of the contact manifold. I really should relax the epsilons in order to say that. I should actually at least make plus epsilon be infinity. Let's not worry about this right now. Yeah. Can you quickly write down what the symplectization is? No. I'm going to say it's this. Yes?
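Since the question of writing it down came up, here is a sketch of these objects in symbols (my reconstruction; omega_phi plays the role of the symplectization form, up to the reparametrization issue just mentioned):

\[
\Omega(R_{\mathcal H},\cdot)=0,\quad \lambda(R_{\mathcal H})=1;\qquad
\omega_\varphi := d\big(\varphi(r)\,\lambda\big) + \Omega,\quad \varphi\colon\mathbb{R}\to(-\epsilon,\epsilon),\ \varphi'>0;
\]
\[
\mathcal J(\mathcal H) := \big\{\,J \text{ on } \mathbb{R}\times M \;:\; \mathbb{R}\text{-invariant},\ J\,\partial_r = R_{\mathcal H},\ J\xi=\xi,\ J|_\xi\ \Omega\text{-compatible}\,\big\},
\qquad
E(u) := \sup_{\varphi}\int u^*\omega_\varphi .
\]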
So what's the condition under which you can integrate this hyperplane distribution? Where you can integrate the hyperplane distribution? Yeah — if you integrate it, then you find a submanifold which has that as its tangent planes. Well, for the hyperplane distribution to be integrable, I think that's equivalent to saying that d lambda vanishes on xi. So certainly it's sufficient if d lambda vanishes altogether, which is my next example. Already in the first row, we think that the word 'compatible' sort of looks like 'compacted'. Well, I'm sorry. I'll take this opportunity to make a blanket apology for my handwriting, as I customarily do. There isn't that much I can do about it. OK, compatible. I just said it actually looks like 'compacted'. Compacted. All right, second example. If M equals the product of S1 with some other manifold W, and I take W to be a symplectic manifold — so let's say I have a symplectic form little omega on W — then set capital Omega to be that plus dt wedge dH. So here, t is the S1 coordinate, and I'm choosing some function H, a Hamiltonian function, which depends on time in a periodic way. So this is capital Omega, and I take lambda to be just dt. This is a case where it's closed — d lambda vanishes entirely. So now one can check: this also is a stable Hamiltonian structure. And it has a meaning which a lot of people will probably like. You can compute the Reeb vector field for this. It's simply the vector field in the t direction, plus a vector field in the W direction, which is precisely the Hamiltonian vector field. So this is a way to turn questions about Hamiltonian vector fields on symplectic manifolds, even time dependent ones, into questions about a time independent, Reeb-like vector field on an odd dimensional manifold. And it gets better, because for the class of almost complex structures that we've talked about, a J-holomorphic curve in R times this is basically equivalent to a solution of the Floer equation in W. So this is a way that you can do Floer homology, Hamiltonian Floer homology, from the perspective of symplectic field theory. OK. So now the setting in which I really want to think about holomorphic curves is also a bit more general than this. So let's take a symplectic manifold. Here I'm talking about cobordisms with stable boundary. So one can define stable boundary in various ways. The main thing I want to say is there exists, near the boundary, a one parameter family of hypersurfaces that all have the same dynamics on them, the same characteristic line fields. And that's equivalent to saying that there exists this collar neighborhood near M plus that looks like a small interval times M plus, where the symplectic structure is just the kind that I wrote down already, d of t lambda-plus, plus Omega-plus, for some stable Hamiltonian structure (Omega-plus, lambda-plus) on M plus. Of course, I'm going to do the same thing on M minus, but the collar neighborhood will look like an opposite interval. OK? So this is the definition of a symplectic cobordism with stable boundary. And having done that, I can add cylindrical ends to this compact object and get a non-compact object. That's what we call the completion. So I'll denote that by W hat, omega hat.
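A sketch of the Hamiltonian example and the stable-boundary collar condition in symbols (my reconstruction of the board; M plus and M minus are the positive and negative boundary components, and the collar intervals sit on the appropriate side of zero):

\[
\text{Example 2:}\quad M = S^1\times W,\quad \Omega = \omega + dt\wedge dH_t,\quad \lambda = dt,\quad R_{\mathcal H} = \partial_t + X_{H_t};
\]
\[
\text{Stable boundary:}\quad \mathcal{N}(M_\pm)\;\cong\; I_\pm\times M_\pm \ \text{ with } \ \omega = d(t\,\lambda_\pm) + \Omega_\pm
\ \text{ for stable Hamiltonian structures } (\Omega_\pm,\lambda_\pm) \text{ on } M_\pm .
\]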
It's going to depend on some choice of function phi of the sort that I used to define symplectic structures on symplectizations. And that choice of phi is not particularly meaningful; it's just nice to know that I can do it. So the definition is: we take a negative cylindrical end, glue that along M minus to W, and glue a positive cylindrical end along M plus. And the symplectic structure here: we take omega phi, omega phi on each of the ends, and the original omega on W. You can choose the function phi such that — so remember, I still have this definition on the board somewhere. Middle board, top right. Middle board, top right. Very good. d of phi of r times lambda, plus Omega. If I choose phi so that it equals just the identity map close to 0, then I can obviously glue these cylindrical ends smoothly together with these collar neighborhoods. That's what I'm doing. OK. So with that choice of symplectic structure — and I don't really have to specify phi, I just know that I can choose one — Are you using the same phi on both ends? I don't care. Actually, I specifically want to not specify phi and allow myself the freedom to change it. So let's say J of omega and the two stable Hamiltonian structures: this is going to be the space of all almost complex structures J on the completion W hat, which I assume to be everywhere compatible with — well, let's just say compatible with omega on the original compact object, and belonging to these spaces, with respect to the stable Hamiltonian structures, that I already defined. So those are the spaces of translation invariant almost complex structures on the ends. So that now is going to be an almost complex structure that's compatible with omega hat phi, no matter what phi I choose. OK. So now to start talking about the actual J-holomorphic curves. Any questions so far? OK. So — you require that J is what? Translation. Compatible. Translation. Translation invariant on the ends. So what about in the cobordism part? I mean, don't you have a factor of R there anyway? In the cobordism part, I don't have a factor of R, except close to the boundary. Yeah. OK. Yeah. And I don't make any requirement there except compatible with the given symplectic form. OK. So the kinds of holomorphic curves we're going to talk about are as follows. We take Sigma, j to be a closed Riemann surface. Let's actually say there are going to be two finite disjoint ordered subsets. And in particular, Sigma-dot always denotes Sigma with the first finite ordered subset removed. That's a punctured Riemann surface. We take that — oh, and also this subset: I don't have to worry about it too often, but it gets partitioned into the so-called positive and negative punctures. Each of those could be empty, but each puncture belongs to one or the other of them. And now I think about maps from this Riemann surface with the various points removed going into my completed symplectic cobordism. And I want them to be what I call asymptotically cylindrical, which means — well, let's not write down the precise definition. Let's just say asymptotic, near each positive or negative puncture z, to R times a Reeb orbit in M plus or minus. Chris, you're missing a hole somewhere. I'm missing a hole. That's true. Could be worse. The opposite would sound like a medical emergency.
So what I mean is essentially, for this kind of almost complex structure I'm considering, with the translation invariant thing satisfying these conditions on the ends, I have j maps the vector in the r direction to the Raib vector field, which means r times a Raib orbit is always a holomorphic cylinder in the end. So those are special holomorphic curves that always exist, at least in that region, maybe not globally. And I want to consider punctured holomorphic curves whose behavior near each of those punctures approximates one of those cylinders better and better as you go toward the puncture. So to make it precise, I would say you can choose some holomorphic cylindrical coordinates near each puncture. And in those coordinates, as you go further up toward infinity on the cylinder, your curve looks more and more like one of these trivial orbit cylinders. OK, maybe it's time to recover this. Yeah? In your picture, theta is empty or what's happening? Theta, I haven't really said. Theta doesn't have to be empty. It's just some finite subset right now. But in the picture, it doesn't appear in the picture because it doesn't affect the picture in any way. It's just points. So theta are meant to be marked points. All right, so the actual moduli space looks like the following. It's going to be a space of tuples sigma j gamma theta u, where, well, sigma j is a closed Riemann surface, as I wrote above. Gamma and theta are these disjoint ordered finite subsets. So let me not rewrite that. Mainly, I need to say u maps the punctured surface to the completion w hat j holomorphically and asymptotically, solidically. And then I just need to say what my equivalence relation is. Deuza basically already said it. You could deduce completely the right answer to this from what Deuza said. I will say a tuple sigma j gamma theta u is equivalent to a tuple sigma prime phi star j phi inverse of gamma preserving the ordering phi inverse of theta u composed with phi for any diffeomorphism phi from sigma prime to sigma. But doesn't the ray orbits allowed to be arbitrarily terrible? Yeah, this definition is still OK if I allow the ray orbits to be arbitrarily terrible, but I won't have any theorems. Yeah. I do have, so finally to make some use out of the mark points, I have an evaluation map sending this moduli space to w hat m where theta, the mark points, are labeled zeta 1 through zeta m. So this just evaluates u at those points. And it's well defined because of the way that I define my equivalence relation. These diffeomorphisms map the set of marked points to the set of marked points. All right, so let's finally state a couple of theorems. So theorem one says, if I have, I'm just going to abbreviate elements of this moduli space by u instead of writing out the entire tuple and saying equivalence class and everything. So if u is what I call Fredholm regular, that's a condition I'll have to explain a bit, and it also includes the condition that the asymptotic ray orbits are not arbitrarily terrible. Then a neighborhood of u in the moduli space, is a smooth finite dimensional orbital with isotropy at u, isomorphic to the automorphism group of u. So I didn't write down this definition yet. The automorphism group of u is simply the space of all bi-holomorphic maps of its domain, which fix the punctures and the marked points and satisfy u equals u composed with phi. So unless u is constant, that's always going to be a finite group. And for somewhere injective curves, it's going to be a trivial group. 
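In symbols, the moduli space, the evaluation map, and the automorphism group just described are the following; this is my paraphrase of the blackboard, with notation chosen by me.

```latex
% The moduli space, evaluation map, and automorphism group (a paraphrase).
\[
\mathcal{M}(J) = \bigl\{ (\Sigma, j, \Gamma, \Theta, u) : u\colon (\dot\Sigma, j) \to (\widehat{W}, J)
\ \text{$J$-holomorphic and asymptotically cylindrical} \bigr\} \big/ \sim,
\]
\[
\operatorname{ev}\colon \mathcal{M}(J) \to \widehat{W}^{\,m}, \qquad
[(\Sigma, j, \Gamma, \Theta, u)] \mapsto \bigl(u(\zeta_1),\dots,u(\zeta_m)\bigr),
\]
\[
\operatorname{Aut}(u) = \bigl\{ \varphi \in \operatorname{Diff}(\Sigma) :
\varphi^* j = j,\ \varphi|_{\Gamma\cup\Theta} = \mathrm{id},\ u\circ\varphi = u \bigr\}.
\]
```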
But for non-somewhere-injective curves, that is for multiply covered curves, even if we get this Fredholm regularity condition, whatever it is, we're going to have to deal with an orbifold rather than a manifold because of this isotropy. Is this going to be essentially a tautology on the word regular? In some sense this is a tautology on the word regular, though one does have to work a little bit to see why you can make this local identification of the moduli space with something that looks like an orbifold even if you have regularity, and I will at least have time to get to that in this talk, I think. Theorem two: I'm going to probably postpone the proof of this until tomorrow, but I'll state it now. Let's fix an open subset U with compact closure. I also needed to fix some more data in advance, so fix an almost complex structure J0 of the type I've described. Then there exists a Baire subset, call it J^reg_U: a subset of the space of all J's of the type I've described, with the extra condition that J matches J0 everywhere outside of this special subset U. This special subset U we can call the perturbation domain; it's where we allow ourselves to change the almost complex structure, while we leave it fixed everywhere else. The statement, to finish it over here, is that for every J in this J^reg_U, every somewhere injective curve u in the moduli space with respect to J that passes through my special subset U is Fredholm regular. But here Baire subset again means comeager? Baire subset literally means comeager, yes. Or what do people call it, residual? Yeah, I never understand residual; it sounds to me like it means the opposite, so I don't use the word. Some people call it second category, which I think is also not technically right if you look up the actual definition of second category. Thank you for that input, Catherine. Well, someday some kind of sensible general habit will prevail, hopefully. I say Baire subset, and most people don't seem to mind. So the literal meaning is a countable intersection of open and dense sets. Or contains: actually, if you're being strict, it contains a countable intersection of open and dense sets. Very good. Maybe more useful input, though: I think you're assuming here that the stable Hamiltonian structure is somehow regular. Oh, in theorem two, I am; that's true, thank you. OK, the board's out of reach, so I'm not going to write it; let's just say this: one additional condition you need in theorem two is that all the closed Reeb orbits for both stable Hamiltonian structures are nondegenerate. Under those conditions theorem two holds; otherwise you could have trouble. And at least if your stable Hamiltonian structures are contact forms, that's a condition which is generic; that's a standard result. Can you also do this if your Reeb orbits are Morse-Bott? Morse-Bott is also fine, yeah; Morse-Bott or nondegenerate are perfectly fine. OK. So let's talk a little about the functional analytic setup for a neighborhood of a specific element of the moduli space, with complex structure j0 and map called u0. Just to better understand the statement of theorem two: you're not asking that it be somewhere injective inside U? It has to be somewhere injective somewhere and pass through U; those two things are equivalent, which is a not-so-hard theorem.
So what I really should ask for is that there is an injective point mapped into U. And it's a good point because if I were working in other settings where I have boundary with Lagrangian boundary conditions or something, then it's not necessarily true that just because the curve is somewhere injective that its injective points are dense. So then I would have to be careful. But here, injective points are dense if the curve is somewhere injective. I think you've hidden the end of the statement. Can you rise the board and choose? Ah. The statement is on the sideboard. Oh, it's this one. It's this one. Then you can block that board so it's up with the theorem. This is called democratic blackboard technique. All right. So I need to actually write down some kind of Bonnock manifold of maps that will contain the solutions I'm interested in. This is something that's kind of standard in the case of closed maps from closed Riemann surfaces. For punctured curves, I guess it's considered sufficiently standard that nobody really feels they need to go through the details when they write papers. But it means there is no paper where anyone does the details really as far as I think. In Matthias Schwartz's thesis, I think there may be details about this, and that's possibly the only place. There's also this paper by the problem of traveling. We know how to paper this. No. No. Well, I mean, what he does in there is basically just lifted from Schwartz's thesis. OK. So I'm going to talk about a space of sub-elevated class maps from the punctured surface to W hat, class WKP. So I want to have k derivatives that are of class LP locally. Locally, that would be the definition, and it's not hard to express. But globally, what does this mean? Easiest way to say is U is of the form x along f of h. h, where f is going to be some smooth map, which is not just asymptotically but literally cylindrical near the ends. Well, I'll say that in a moment. h is going to be some section of class WKP locally along the pulled back tangent bundle along f. So in compact subsets, this is a sufficient definition, but I need to say more specifically what happens at the ends. So I just have confidence. So the j zero that you started with is regular, right? Or that? No. I don't care. It doesn't need to be regular. So are we requiring anything for j zero to begin with? I don't have to require really anything about j zero because I'm going to perturb it in a certain region, and I'm only looking at holomorphic curves that pass through that region. So near each puncture, in holomorphic cylindrical coordinates, I can choose coordinates st, which are in a half cylinder. I'm going to just call it a positive half cylinder if it's a positive puncture, negative half cylinder if it's a negative puncture. And then the requirements are that the map f just looks like T s plus a constant gamma of T t plus a constant in the cylindrical end r plus minus times m plus minus. So here, c1 and c2 are just some real constants. And what do you mean by gamma plus a constant? I haven't. I'm about to say that. Oh, sorry. Well, gamma will be valued in s1. You can add constants in s1. I thought gamma was an m. Very good point. Is that better? Thank you for that. So gamma is going to be a t periodic orbit of the ray vector field. 
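Summarizing the space of maps described so far in symbols; this is my paraphrase, and the handling of signs at negative punctures is only indicated by the choice of half-cylinder.

```latex
% The Sobolev-type space of maps and the cylindrical normal form of the
% reference map f near a puncture (a paraphrase; kp > 2 gives W^{k,p} in C^0).
\[
u = \exp_f(h), \qquad h \in W^{k,p}_{\mathrm{loc}}\bigl(f^* T\widehat{W}\bigr), \qquad kp > 2,
\]
and in holomorphic cylindrical coordinates $(s,t)$ on a half-cylinder $Z_\pm$ around each puncture,
\[
f(s,t) = \bigl(Ts + c_1,\ \gamma(Tt + c_2)\bigr) \in \mathbb{R}_\pm \times M_\pm,
\]
with $c_1, c_2$ constants and $\gamma\colon S^1 \to M_\pm$ parametrizing a $T$-periodic closed Reeb orbit.
```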
And then what I say about h is simply that in these cylindrical coordinates, where also I'm assuming I've chosen some kind of sensible asymptotically translation invariant trivialization of this tangent bundle, I want to have h on that end be of class w, k, p on the cylinder, the half cylinder. And so I haven't said anything about what k and p are. I'm not really going to make any requirement about that, except k times p should be greater than 2. Because then the Sobolev embedding theorem tells me w, k, p injects continuously into c0. So these maps are at least always going to be continuous. If they weren't continuous, I'd have real troubles making this a good definition. So you don't require any kind of exponential decay for h or anything just? I'm getting to that. I'm getting to that. So I'm confused because if we're looking at a neighborhood of u0, shouldn't we be writing things like neighborhood for x sub u0, like something? Yeah, but I'm going to, well, u0 is going to be a map of this form. And I'm going to consider other maps of this form that are close to u0. I mean, I haven't localized yet. This is still global. Yeah, the bracket's still open. Oh, very good. Wouldn't want to have some sort of general way tech error. Can you explain what's being ruled out by this list of conditions? But I'm ruling out maps that don't have decay at infinity to some very sort of, well, f is a very controlled map. It has this precise cylindrical form. I'm only considering maps that have asymptotic decay to a map of that form. So I don't want them to move around non-trivial amounts arbitrarily far away. OK. All right. Agucin kind of preempted the next little variation I have to put on this. And I won't be able to explain why just yet, but let's just say it. For a parameter delta, I can let, I'm going to call this b. This is the actual bonnack manifold that I'll work with. So this is going to be, again, u equals x belong f of h as above. But the condition on h will be slightly different. I'm going to require that the norm of h, not the norm of h itself, but the norm on the half cylinder of e to the delta s times h is finite. So that enforces some exponential decay in addition to the decay that the wkp norm already requires. So I'm not going to tell you yet why this is necessary. I will tell you that eventually. I will tell you why it's possible, which is called this fact, though I guess it's sort of a hard theorem. First version of it proves by Hofer-Visatzian-Seindor. And there's been other versions in various contexts since then that if the asymptotic orbits are non-degenerate, then for all delta sufficiently small, all asymptotically cylindrical, or let's just say, all curves in the moduli space I've defined belong to this Bonac manifold. That is, the map involved in defining that curve belongs to that manifold. It has the asymptotic exponential decay, as long as delta is sufficiently small. Hey, Chris, I'm in the back row. I can't quite see. Are there absolute value signs around that s in the exponential? No, but there should be. Thank you. Let's make it a plus or minus. I want a minus sign if I'm at a negative puncture. The point is that it enforces asymptotic decay, or exponential decay, rather than exponential growth or something. What can I sensibly say in the zero amount of time I have left? You have like three minutes. My question stands. I need to. Sorry. The more force you have. That would be remarkable. So in that last actually, is it during the delta? 
Doesn't it depend on the components of MLK? Yeah, it does. Yeah, to say this properly, I would have to say that given a specific set of asymptotic orbits, there is a delta sufficiently small so that all curves that are asymptotic to those orbits satisfy the decay condition. OK. So let me try and at least get to the definition of Fredholm regularity in some form. What I'm going to do now, I've defined a class of maps I want to work with. I also have to worry about the complex structure and the domain, because that's not fixed. It's not some sort of God given data to work with, but I allow it to vary in different points in this modular space of different complex structures. Now, I'm looking at just a neighborhood of some u0 with complex structure j0, which is it still visible somewhere on the board? No. There it is. So let's just worry about the complex structures that are close to j0. And one way to do that, I don't have to consider everything in this infinite dimensional space of complex structures. But if I want to cover all of them that are not equivalent by diffeomorphisms, I can think about Teichmühler space. That's at least a nice smooth manifold. So what I'm going to do is say let t be a smooth finite dimensional family of complex structures on sigma, which I can, if I want to, I can even assume that they are fixed near gamma and theta. Another thing, I can just take a model surface sigma and fix punctures in one particular place, or several particular places, fix also the marked points in particular places. I don't need to allow them to move around. Because if I want to get all the different conformal structures up to equivalents with these punctures and marked points, all of them can be represented by some complex structure that have the punctures and the marked points in that particular place. That's just a question of choosing a diffeomorphism. So I'm fixing gamma and theta in place on sigma, but I'm letting the complex structures vary. And this is going to be a smooth finite dimensional family that parameterizes a neighborhood of the equivalence class of J0 in Teichmühler space. What's that? I don't understand what you just said. Suppose that it is like w, it ends at empty, and sigma is the sphere. Yes. Then you're not varying the complex structure at all, because you can, but you say no to forget any configuration points that you want. Yes, I can. But I have to vary the complex structure. Because a complex structure is just a section of the tangent bundle, all the morphisms of the tangent bundle are all diffeomorphic. But you can certainly vary them. And also, they're not all diffeomorphic if there are more than three marked points. No, right. So even on the sphere, I have to do this sometimes. So I'm parametrizing a neighborhood in Teichmühler space, which is the space t sigma dot of theta of all complex structures on sigma divided by the identity component of the diffeomorphism group preserving or fixing punctures and marked points. So it's a classical fact that this is a finite dimensional manifold. So I can choose some sort of finite dimensional space of complex structures parametrizing it. And there are various sensible ways to do this. I'll talk about tomorrow if I have time. There's a quick lemma. Let's also assume, just for simplicity at the moment, that the Euler characteristic of the punctured surface minus the marked points is negative. That's, Dusser also talked about this a little bit. That's the stable situation. 
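For orientation, here are the space being parametrized and the stability assumption in symbols; the dimension formula was not written on the board, so treat it as a standard fact I am adding.

```latex
% Teichmueller space of the punctured domain with marked points; the dimension
% formula is a standard fact added here for orientation only.
\[
\mathcal{T}(\dot\Sigma, \Theta) \;=\; \mathcal{J}(\Sigma)\,\big/\,\operatorname{Diff}_0(\Sigma, \Gamma\cup\Theta),
\qquad
\chi(\dot\Sigma\setminus\Theta) = 2 - 2g - \#\Gamma - \#\Theta < 0,
\]
\[
\dim_{\mathbb{R}} \mathcal{T}(\dot\Sigma, \Theta) \;=\; 6g - 6 + 2(\#\Gamma + \#\Theta).
\]
```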
So that's the situation in which the automorphism group of the domain with its punctures and marked points is finite. It's not an absolutely necessary assumption for what I'm talking about, but it does make a few things easier. So I can, in particular, say there exists a choice of this object, T. I'm going to, in the future, call this a Teichmiller slice since it parametrizes a neighborhood in Teichmiller space. So that is invariant under the action of the automorphism group of the domain. So having made this choice, let me finally write down the point of theorem one. We now can define a smooth section of some Bonnack space bundle. The domain is going to be the product of a Teichmiller slice with my Bonnack manifold of maps. Section is called B bar j. So the fiber over j comma u is going to be the space of wk minus 1p sections of the bundle of complex anti-linear maps from T sigma j to pull back tangent bundle. I hope that's the right number of parentheses. If not, somebody will surely let me know. And we're going to take j comma u to the obvious thing, tu plus big j composed with tu, composed with little j. If you tell me I need to put a 1 half in there somewhere, I will tell you to get a life. And so finally, so. How about changing the T for a D? My own PhD students need to be especially careful in moments like that. So. Analyzing notation using T. That's fine. I go back and forth. So the lemma is that the automorphism group of u acts on the zero set of this section by the obvious phi star j u composed with phi such that a neighborhood of u in the modular space is in bijective correspondence with a neighborhood of the equivalence class. Sorry, I should have said neighborhood of u naught in the modular space with the equivalence class in this automorphism group of u naught, j naught u naught in the zero set modulo that action by the automorphism group. So now what does this really say? By the implicit function theorem, mj is going to be an orbital near u naught with isotropic group equal to the automorphism group. If that zero section is a manifold, which is going to be true if the linearization of the section is subjective. So that's some linear operator from the direct sum of a finite dimensional space tangent to a space of complex structures with, well, let's just write it as tangent space to the Bonnack manifold, which is some space of sections of the pullback tangent bundle of class wkp. Yes? Yes. It's probably delta. Sorry? It should be a delta on your e in the definition of e. Oh, you're right. Thank you very much. If you're told he wants to have a life, it's lunch. This is the moment where I say I thought you people cared about mathematics. I've already said so. OK. Basically, so I wanted to at least end with having written down this operator. So Fredholm regularity means, specifically, I need to finish the sentence and say is surjective. So Fredholm regularity means just that. This operator is surjective. I'll talk more tomorrow about what one needs to do to make sure that that operator is surjective. The result is then we identify local neighborhoods in the modular space with this thing, which is a finite dimensional manifold divided by a finite group. I'm not going to say anything about the proof of the lemma except that if you stare at it long enough, it becomes obvious. It's not that hard. So I'll stop there for now. So there is office hours. So I again suggest longer questions, get postponed until then. So anyone have a short question that they're willing to ask? Everyone? 
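To collect the objects from this last part in one place, here they are in symbols; this is my paraphrase of what was written on the board.

```latex
% The section whose zero set gives the moduli space locally, and what Fredholm
% regularity means (a paraphrase of the board).
\[
\bar\partial_J\colon \mathcal{T}\times\mathcal{B} \to \mathcal{E}, \qquad
\bar\partial_J(j,u) = Tu + J\circ Tu\circ j
\;\in\; \mathcal{E}_{(j,u)} = W^{k-1,p}\bigl(\overline{\operatorname{Hom}}_{\mathbb{C}}(T\dot\Sigma,\, u^*T\widehat{W})\bigr),
\]
\[
u_0 \ \text{Fredholm regular}
\;\iff\;
D\bar\partial_J(j_0,u_0)\colon T_{j_0}\mathcal{T}\oplus T_{u_0}\mathcal{B} \to \mathcal{E}_{(j_0,u_0)}
\ \text{is surjective},
\]
in which case a neighborhood of $u_0$ in $\mathcal{M}(J)$ is identified with a neighborhood in
$\bar\partial_J^{-1}(0)\,/\operatorname{Aut}(u_0)$, a smooth finite-dimensional orbifold.
```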
So good to practice. I'm going to show that something is regular, right? What do you need to take into the slides and bury your complex structure? Right. OK. It's a good question. If you want to show that something is regular, well, the answer is I've never been in a situation where I had to actually choose a type of slice concretely. But it has to be there in the theoretical setup, because if it's not there, right? I mean, it would obviously be sufficient if the linearization of this operator, just mapping this space of sections to this space of sections, is surjective. That would suffice. That would imply that this one is surjective. The problem is that that's not true as often as you want it to be. You have this additional finite dimensional stuff in the domain, which sometimes actually makes the difference between an operator with a finite dimensional co-coronal and one that's surjective. So in fact, so if I have time to talk about automatic transversality tomorrow, hopefully, it's a really crucial detail there. The criterion that used to prove automatic transversality does not work if you leave out this whole discussion of type familiar space, because you need to have this freedom to vary the complex structure in order to get as much transversality as you need. Do you have an actual example of the curve which is regular in this sense and not regular if I take that particular space? I could probably come up with that. Ask me in office hours. And you have a discussion tomorrow. And I have a discussion tomorrow. In addition to your lecture. You can certainly find examples if you take chloride on stuff with a pi of t. This will definitely need to vary. Ties will especially vary. What is the fact that you take a finite dimensional balance just all by a new? If I took all of them, this operator would not be Fred Holm. That would be bad. So here you're thinking of fixing big j, which is the complex structure on the target. So far, I'm fixing big j. And your space of j here. I was using a curly j to mean variations of j on the target space. But you're fixing that and making your j as a finite dimensional. I have two curly j's. Well, you have two curly j's. Only one on this board. You see, your curly j there is a type of a slice. But you could, in principle, vary the other j on the target space if you want it to. Yes. You mean when I'm talking about finite dimensional families or you're just generic? I mean, in year 0 of 1, you were saying that you were taking generic elements where you were allowed to vary in the target in the open subset, u of your target, right? Yeah, that's right. That curly j is not actually in this limit. Here you're sometimes assuming you fix that. Right. No, I mean, the question of why you get transversality for simple curves for generic j, I will address tomorrow. So far, all I've talked about is what criterion actually gives you smoothness of the moduli space. And how does one go about proving that? One thing to say is why do you need to vary j and type on the spaces? For example, if you look at the space of tori in CQ2, and look at, you say, you know, the class of DQC, embedded tori. You look at the induced complex structure on the tori. That changes. So if you fix the complex structure on the tori, it's not necessarily going to be regular. Because when you move your points around, you leave the color on j on the domain, naturally change. So you have to have that variation in. Without further ado, I propose we break for lunch. 
And at 2 PM, I'll make some short remarks about organizational things like finances and stuff. So I'll see you at 2. Thank you.
I will give a quick review of the Sard-Smale theorem and the universal moduli space approach to transversality, discuss the relative merits of classical vs. inhomogeneous perturbations, and sketch proofs of the standard theorems stating that the moduli space of regular J-holomorphic curves in symplectic cobordisms is a smooth orbifold, and that all somewhere injective curves are regular for generic J.
10.5446/16280 (DOI)
Okay, maybe I start. Yeah, so this will be series of four lectures, following lectures of like Pierre Schiper and Masaki Koshivara, but it will be a slightly related subject. Yeah, so the rough plan to be four lectures today will today we'll speak about finite dimensional, exponential integrals, then second lecture will be about one forms and wall crossing structures and then certain force will be infinite dimensional story and applications to quantization, non-perturbative quantization. Yeah, so this it will be quite different talks and I think it will be quite entertaining to see the relations in the story. So I will start with kind of further integral which everybody knows, which is Gaussian integral which is square root of pi. Yeah, and somehow I want to generalize this integral and calculate related things or more general what we can integrate. Let's say we integrate still in one variable we have f of x, g of x dx we integrate such thing where what the f is maybe some constant times x to power of d plus some lower terms is a polynomial which is given. So it's for example this example it's minus x square and now g is also polynomial which is arbitrary, we write g and it can have any degree. Here maybe I assume that g is at least two and I should integrate over some pass. Yeah, so some pass and what will be a good pass will be let's say in C will be oriented semi-algebraic pass, semi-algebraic I mean real semi-algebraic like real line or you can buy some real equation, polynomial equation and such that a real part of f is taking to pass then we go to infinity as we go to infinity in C real part will go to minus infinity. So the integral should be somehow convergent so we have many such integrals but claim that if one has to calculate if you vary both g and pass we have many integrals but really interesting integrals are you have only d minus 1 squared kind of integrals to compute and the rest will be given by some linear combinations. Yeah and this thing depends somehow on homologous class of pass and homologous class of this form. So the path is where? In C. Yeah. First of all about pass. The integral depends only on homologous class. It is not enough to convert to minus infinity you can make a pass where the equivalence is sufficiently low. But pass is semi-algebraic. I said that pass is semi-algebraic and it's not semi-anal. Then automatically the integral will be convergent. Because in some local coordinates function will grow like a fractional power of small parameter. And the integral will be absolutely convergent. Yeah so the depends on homologous class of pass maybe it's called gamma which is homologous class of the pass and where it lies. It's an element of first homologous group of the pair C and I can see the subset f minus 1 of a maybe I'll just draw. What I draw I draw the set of points in C such a real part of these minus t and t is I assume that t is a very large real number. This homologous group is free group of rank d minus 1 with this degree of my polynomial. Yeah just for example if my function is minus x square as it's kind of basic example then what we should draw we have a C. Then we can see the domain when minus x square has very negative real part we get just two pieces and we can connect them. This gives a homologous class. Yeah this homologous group stabilizing t is very large so I mean just any one of them. And if for example if degree of if d if rank was 5 you can you should get the picture like this. 
You have in C, depending on the coefficients of your polynomial, maybe five domains where the real part of f goes to minus infinity, and as a basis of homology you can choose paths connecting them, with some orientation. That's an example of a basis of the homology of the pair. So you get essentially d minus 1 interesting paths. And what about g? For this function g we consider the twisted de Rham complex, which in this case is the following: in degree zero it is just polynomials C[x], which maps by d_f to C[x] dx, where d_f is the differential plus wedge product with df. If you look at how it acts on some polynomial g: g goes to (g' + f'g) dx, the derivative of g plus f' times g, times dx. That's how this twisted de Rham differential acts, and the reason I wrote it this way is that if you multiply by the exponential of f it becomes an isomorphism with the usual picture: you consider expressions which are polynomials times the exponential here, and one-forms times the exponential there, and then this differential becomes just the usual differential. It is just a way to say it algebraically, without referring to the exponential function. And the integral again depends only on the cohomology class of g dx; maybe I shouldn't call it g, I should use another letter, because it's not the same g as before. So g dx is considered modulo the image of this differential, in cohomology. The complex sits in only two degrees, and in degree zero there is no kernel, so the only cohomology of this twisted de Rham complex is the cokernel in degree one: by definition, one-forms modulo the image of this map. And this space is again finite-dimensional, isomorphic to C to the power d minus 1, with a basis given by the representatives 1 times dx, x times dx, up to x to the power d minus 2 times dx.
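As a concrete sanity check of this reduction, here is a small sympy computation; it is my own illustration, not something from the lecture, and the helper name twisted_d is mine. It verifies, for f = -x^2, that forms in the image of the twisted differential integrate to zero against e^f over the real line, which is exactly why every integral of e^(-x^2) x^k reduces to a multiple of the basic Gaussian integral.

```python
# Sanity check with sympy (my own illustration, not from the lecture):
# for f(x) = -x^2, forms in the image of the twisted differential d_f integrate
# to zero against exp(f) over the real line, so in cohomology
#   x^(k+1) dx  ~  (k/2) x^(k-1) dx,
# and every integral of exp(-x^2) * x^k is a multiple of sqrt(pi).
import sympy as sp

x = sp.symbols('x', real=True)
f = -x**2

def twisted_d(g):
    """Coefficient function of the one-form d_f(g) = (g' + f'*g) dx."""
    return sp.diff(g, x) + sp.diff(f, x) * g

for k in range(1, 6):
    exact = sp.integrate(sp.exp(f) * twisted_d(x**k), (x, -sp.oo, sp.oo))
    lhs = sp.integrate(sp.exp(f) * x**(k + 1), (x, -sp.oo, sp.oo))
    rhs = sp.Rational(k, 2) * sp.integrate(sp.exp(f) * x**(k - 1), (x, -sp.oo, sp.oo))
    print(k, sp.simplify(exact), sp.simplify(lhs - rhs))  # both differences are 0
```

In cohomological terms this is just integration by parts; the same relation holds over any admissible path with the same coefficients, which is why only d - 1 = 1 independent period remains in this example.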
Yeah it's very easy to calculate this yeah for example for d equal to you're just right houses houses twisted Durham complex looks like looks like so one goes to maybe and function maybe minus x square minus 2x x goes to yeah so you get such formula it's clear that you can kill top-degree terms and reduce to constant here only constants you cannot appear get as image of the differential yeah so that's the kind of basic example generalizing this first things first story and now I'll go to general setup so we have several ingredients so first I'll have algebraic smooth algebraic variety complex numbers let's have certain dimension n then I will have some extra ingredient which wasn't present before I will have a some closed touch breaks upset of smaller dimension maybe singular then I get a function I can write it's in various words that like map from x to a fine line or complex numbers it's kind of polynomial map or it we will include this section of the structure shift on my variety then I should have some volume form which is a top-degree algebraic form it's kind of top-degree algebraic form and and I need something which will be something like denote by c chain of integration I'll explain it in a second but before going to before going to all I have to explain why I should introduce this said D so it's kind of why I have this extra said D because there are other integrals which work can similar to what we have yeah for example one can have this guy and D will be something which contains the boundary of integration or I can have another integral which is the things I think it's called something like e1 of t it's not elementary function and it's this called exponential integral in fact this guy's called this exponential integral it's not elementary function and for example in this case what will be variety in this case variety will be a fine line this is D will be in device at just one point t function is minus x and form is DX over X yeah so it's a kind of integrate exponent of function times volume on X on X yeah it's not compact sorry X1 minus 0 sorry you're right yeah X is X is and t maybe is positive number here yeah it's some some subvariety yeah yeah it's it's sense is it infinity or with some point t which is in this case algebraic subvariety of c star yeah and now it's it's it's a chain it's not a pastor yeah it's chain yeah this case is this chain of integrations so the n dimensional yeah yeah someone should be a bit careful about chain what will be this chain now the chain will be the following it's will be some sum over n ici some find it some vci is oriented and dimensional c-melgebraic sub manifold in no no no it's open part it's it's open kind of open it's open I consider interior c I consider interior of simplex like interior simplex okay so real real c-melgebraic sub manifold and and X with the property and such thing has it's kind of rectifiable set it has natural boundary and I assume that the boundary of support of the differential is in D yeah so it's yeah and and the second thing which I assume is the following if consider real part of f as function is various in real numbers and restrict to the maybe a mecolog of support and restrict to the support of C which will be just union of this different subvarities then this it's a map from to R is has two properties is bounded from above so it means that real part of f is less than some on the support is less than some constant and and second this map is proper yeah and if you get such a situation yes one can integrate of a 
chain, and the integral of this expression over it will be absolutely convergent. That's the generalization of the one-dimensional story. In a minute we'll pass to homology groups, so these actual chains will be replaced by relative homology classes. So now we want to say what this means: the homology class of the path belongs to what? I want to describe the relevant homology theory. So first of all I define a certain subset S of C attached to f and the pair; it will be a finite subset, and informally it consists of the points where the topology of the fiber changes. More precisely, I define the complement of this set: z is not a bifurcation value if, by definition, there exists an open neighborhood U of the point z in C such that over this neighborhood my fibration, the map from X to C, is locally trivial: there is a homeomorphism between the preimage f^{-1}(U) and U times f^{-1}(z), compatible with the projections to U, so the diagram commutes and trivializes my bundle topologically; and this trivialization also respects the subset D, inducing a homeomorphism of D intersected with f^{-1}(U) onto U times (D intersected with f^{-1}(z)). So it means that the pair is locally trivial over U. I claim that this happens for all points except finitely many. Why is that? The explanation: one should compactify X, so the map extends to a map from the compactification to P^1, inside some algebraic variety containing X; then X' minus X will be some vertical part, which is f^{-1}(infinity), union some horizontal part, which is some other subvariety, and you also have the closure of D. So you get an algebraic variety with several divisors inside, three divisors, and when you project to P^1 you know that the topological type of the fibers, together with all these subvarieties, is locally constant outside finitely many points. So you get this finite set of bad values. And now, what is the homology group? One can consider the following homology groups, which are all isomorphic to each other. First, consider the homology of X relative to D union the preimage of a half-plane, the same story as before, with integer coefficients: the homology of the pair (X, D union f^{-1}({Re z at most minus T})), where minus T is less than the real part of every z_i, the z_i being the elements of my finite set of bifurcation values. If I truncate like this, the group doesn't depend on T: there are natural maps from one T to another, and they are all isomorphisms, because they are homotopy equivalences. It is also isomorphic to the same thing where I take the preimage not of a half-plane but of a single point z with real part less than the real part of every z_i: just the preimage of one point, because one point in the half-plane is equivalent to the whole half-plane. So I get many groups which are all isomorphic to each other, and I call all of them by one name, H^B of the triple (X, D, f), in the appropriate degree; B is for Betti, which means the usual topology, not the Zariski topology. And the class of my chain C belongs to H^B_n, the middle dimension. So we have generalized the homology class of the path; the groups we just defined are summarized in the formula below.
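In symbols, with notation as above:

```latex
% Betti homology of the triple (X, D, f), as just defined.
\[
H^B_k(X, D, f) \;:=\; H_k\bigl(X,\; D\cup f^{-1}(\{\operatorname{Re} z \le -T\});\ \mathbb{Z}\bigr),
\qquad -T < \operatorname{Re} z_i \ \text{for all } z_i \in S,
\]
\[
\cong\; H_k\bigl(X,\; D\cup f^{-1}(z);\ \mathbb{Z}\bigr)
\qquad \text{for any } z \ \text{with } \operatorname{Re} z < \operatorname{Re} z_i \ \text{for all } i,
\]
and the class of an admissible chain $C$ lives in $H^B_n(X, D, f)$ with $n = \dim_{\mathbb{C}} X$.
```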
now we should do some sequence of form and it's completely similar so you use this to the drum complex again so I have D F D plus which product by D F and it gives me a differential on it gives them complex of shifts but this complex of shifts now I should be very careful I should do it in the risk topology so I consider on the algebraic forms and covering by algebraic open sets and so on so the one can define twisted the Ramco homology of pair is a following so now I have pair just second I think I need my one assumption assume that D is divisor with normal crossing yeah I said it should be device with normal crossing it's in general could be singular space actually it's not not a restriction at all because you can blow up yeah I can make blow up and replace by divisor is normal crossing it doesn't change any commod which we consider at all yeah no but commode of twisted drum will yeah no twisted the Ram I defined only for normal crossing and it doesn't depend on resolution or maybe can relative to the yeah I can see the form vanishing on on D if D singular I will be a bit unconfigured not not logarithmic forms vanishing on D restriction to D is zero yes yes restriction to each component of D is zero here yeah so I can see the yeah so it contains a sub complex differential D F the omega SD are forms whose restriction and D say union of some devices smooth components restriction to each D I is equal to zero yeah and and this things I think I defined as hyper commod in the risk etopology of of this complex of shifts and and I claim that the integral is actually can be the first for if you get this my form which I have volume form if I have and and in fact it's also belongs to because it's top-degree form it's vanish on the on the on divisor D automatically and also we have D of volume form is zero because there's no n plus one form it's top degree form and D F which volume form is zero so it's killed by differential global section so it's definitely gives a class in this hypercromology and what I want to say that's exponential integral which we're interested is actually pairing is a pairing between homology classes so I get graphunology and and the pairing maps with beta homology Yes, they are dual over C. There is a comparison, isomorphism, that's dual space, kind of, B-tic homology, after confiscation isomorphic to Duram homology. Yeah. In this comparison, isomorphism can be, so without reference to those chains, so that's why... It's, I'm not sure, yeah, it's... So, like the rontual and shift theoretic, yeah. Yeah, one can prove it shift theoretically, yeah, it's following what we listened last week, yeah. It also gives a proof, yeah. Essentially, it was... I think it's actually was pruned, maybe first by M. Grange, this comparison, isomorphism. I'm not sure. Or... Yes, no, if consider homology in not-on-the-risk topology, you get just homology of a pair xf, because you multiply by exponent of function, you identify with usual forms. Yeah, so it's completely wrong for, if you put analytic topology. Yeah. You could also compactify and take response normal. I will speak about how we calculate it using compactification, all stuff. It will take some time, but... Now, till the break, I will now speak about kind of topological part on the beta side. So, for a moment, we forget about this hypercomulgent, this thing. So, maybe I'll even kind of hide it. See too much. So, first thing we should want to say, it's some kind of pwn corridority. Yeah, actually, I go to this picture. 
Yeah, so, I assume I have the following situation. I have X bar prime, some smooth compact algebraic variety over C, which contains three divisors: a horizontal one, a vertical one, and the closure of my D. Together they form a divisor with normal crossings, a union of various smooth components, and these three things have no common components. So I have such a thing, a variety with three divisors, and I assume that D bar intersected with D horizontal is empty. And I have a map f bar from X bar prime to CP^1, just a map, and D vertical is the preimage of infinity, set-theoretically; it may come with multiplicities, which I don't care about at the moment. And how is this related to the original story? X is X bar prime with the divisors D horizontal and D vertical removed, D is D bar minus its intersection with D vertical, and f is f bar restricted to X. Now let me draw the picture; this is a bit hard to follow, so let's use various colors. I have CP^1 with the point infinity, and here some X bar prime. I have the divisor D vertical, which maps to infinity; then something, D bar; and another thing, which is D horizontal. So what happens: I remove D horizontal and take the pair relative to D. The condition that D bar does not intersect D horizontal: this seems to be a constraint compared to the situation you considered before? No, no, it's not a constraint; it can always be achieved by making additional blow-ups. My chain doesn't really touch the divisor at infinity, and if the two intersect I can blow up the intersection, and the chain will not touch it. But then you have to change X. Yeah, I change X a little bit, that's true. But to have a clean statement of the duality I really want such a compactification. But the previous comparison theorem, I mean the setup in which you have this comparison theorem, does not include this hypothesis; when you have a situation in which the comparison holds, it's not necessarily the case that you can compactify it like this. It's true, yeah; you may be right, this seems... but at least for the numbers one can reduce to this case. I just want to say that at least in this situation I have the following duality. So one can also define Betti cohomology, using the homology of pairs with rational coefficients, dual to the homology. And the claim is that the homology of the original thing is naturally isomorphic to the homology of a new variety, with f replaced by minus f, and with a shift by two times the dimension. And what is the new variety? You just exchange the roles of these two divisors: you remove one divisor and take the pair relative to the other, or vice versa. So X prime will be the complement where, instead of D horizontal, I now remove D bar together with D vertical, and D prime will be what remains of D horizontal. So we get a different homology group, and let me briefly say what the origin of this duality is. First you consider the common open part: you remove from X bar prime all three divisors and consider the complement. The duality statement itself is recorded in the formula below.
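As I read the statement; the degree convention 2n - k is my interpretation of the shift by twice the dimension.

```latex
% The duality just described, in symbols (my reconstruction of the statement).
\[
X = \overline{X}{}' \setminus (D_h \cup D_v), \quad D = \overline{D}\setminus D_v,
\qquad\qquad
X' = \overline{X}{}' \setminus (\overline{D} \cup D_v), \quad D' = D_h \setminus D_v,
\]
\[
H^B_k(X, D, f) \;\cong\; H^B_{2n-k}(X', D', -f), \qquad n = \dim_{\mathbb{C}} X.
\]
```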
You get some open manifold, and then because it's… you have our divisors normal crossing, one can add some boundary with corners, can add a real boundary, and get manifold with corners using polar coordinates and things which… Yeah, so get essentially closed manifold, and if I analyze all these pairs, one can do something on the boundary, and eventually maybe I'll just draw the picture. What happens? You have some kind of compactification, kind of real manifold… real compact manifold with boundary in those two corners, and then the boundary of this guy, which is again manifold with corners, one can decompose, and can put some negative part, plus maybe some… some middle part plus positive part, and roughly the picture collects this. You get some two disjoint open domains in the boundary, and the middle you get something like color. So it will be… cylinder, and then you get a usual point of creditability. So commulger of pair x0 and d minus x0 is dual to commulger of x0 and d plus 0. Yeah, so one can do analysis like this, but there's some interesting point. Why I replace function by minus function? Because when I do these domains in a boundary, what I do… roughly I can imagine a kind of complexified series sphere at infinity, circle at infinity with the values of my function, and instead of minus infinity plus infinity, I take two angular domains and take them back, and you see that it really looks like two domains which you connect by color, and the same goes to high dimension. Why… you said the boundary is the union of d minus d plus and other things? And other things which is… which is something which… which you added to… which will be contracts both to be something like hemomorphic to the product of whatever, and minus two dimensions, some manifold small dimension times the interval. And this comes from the divisors, from different components? Yes, yes, one can… yeah, yeah, of course one can do it in different languages using shifts and so on, but it's the same story, yeah. Yeah, no, so the main thing is… this object allows duality, that's one thing which I want to say, but now… now, before the break, I want to spend some time with some very basic topology question about topology of the map from a pair, you get x in pair d and maps to c, because it's a little bit topology, I think, c will just think it's a real plane. And we forget about simplification story, just do something very basic. So, suppose you get a map from a pair to r2, and I define before occasion set as before, is a complement to the points when it's locally trivial, topologically local trivial, and assume it's fine. But now x in d will be, for me, very general, for x in d will be, let's say, just topological spaces. I ignore all details of this story, and then I fix integer number, and I claim this case, I can construct a constructable shift on c. And what is the fiber of the shift? Fiber of x, x is at point z with z is any point in c, is defined as a stock at point c, is defined as casecromology of the pair x, take d, and take union of the preimage not of the point z, but small disk. Take with some simple z-tragic coefficients, where we have epsilon z is a set of z prime in c, z open disk. And this, by this finiteness condition, we see that it stabilizes as epsilon goes to zero. So we identify all of them. Now, so the claim, it's a constructable shift. So obviously, if we outside of ramification points, then it's local system, because we have locally trivial bundle. I would say to get structures local system. 
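To fix notation, here is the stalk of the constructible sheaf just defined, in my paraphrase.

```latex
% Stalks of the constructible sheaf F on C attached to the integer k and the
% map f : (X, D) -> C (a paraphrase of the definition just given).
\[
\mathcal{F}_z \;:=\; H^k\bigl(X,\; D \cup f^{-1}(\mathbb{D}_\epsilon(z));\ \mathbb{Z}\bigr),
\qquad
\mathbb{D}_\epsilon(z) = \{\, z'\in\mathbb{C} : |z'-z| < \epsilon \,\},
\]
which stabilizes for $\epsilon$ small by the finiteness of $S$; in particular $\mathcal{F}$ restricts to a local system on $\mathbb{C}\setminus S$.
```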
And what is the structure of a shift? Sorry? I don't assume anything. It could be, in our case, it's finite, but in general, yeah, it's very, very general story. It's the main point that this is finiteness. That's what plays for me essential role. Nothing else. It's a constructable shift. Ah, what's the structure of a shift? Just define just a collection of a billion groups, and one should kind of make a transition. Suppose I have a critical kind of critical, this bad point when I have ramification. Then what should I do? So let's draw kind of a take point zi, take preimage of this disk union with some set and take a model of pair. But now if I have point very close to z prime, I can have kind of smallest thing. This is a d epsilon zi, what is u? zi epsilon minus point zi can contain some u, some another prime, maybe z, some epsilon prime, z is not the prime, and epsilon is smaller. Then I can have a restriction map from commolder of pair to commolder of pair. And that gives you shift structures. I have for a section, for stock of my shift at this point, I have stocks in nearby points. This is exactly the definition of what is a constructable shift. So get constructable shift which is smooth outside of s, and kind of the theorem which is not obvious at all, that this shift has zero commolder. There's no global sections, no first commolder, in fact there's no higher group or what can say this argama. And here it's a very, very bizarre story goes on. You will see it in a moment. And it's not really part of homological algebra, so I'll speak about homology. So what is explanation of this fact? It goes through some different formalism. Let's consider following pairs. We have pair D and B. Consider pair D and B, where D is, let's say in C is closed subset, and which is homeomorphic to closed disk. This curly D will be some, maybe not homomorphical, maybe piecewise, whatever you want, linear, analytics, infinity. This could be some corners. And D is the subset of C. B is the point on the boundary of D. And we have only one constraint. Constraint, there's no bad point from the set of my set on the boundary. So these bad points which I have, there will be some inside, some outside, and here I get point B on my disk. What kind of regularity you have? You avoid some spikes or some... Yeah, it doesn't really matter. One can use semi-algebra, semi-analytic, semi-algebraic stuff. Yeah, it's just too horrible singularity. And for any such thing, I can associate a billion group which denotes something like by A of this pair of D and B, which is a case-commol of the pair. The following, I take the image of disk and take, again, the intersecting with the image of disk union F minus 1 of B. Take a mol of this integer coefficients. I have this commulge of pair. Yeah, so this... And they form the following thing. This A, if I associate this association of the disk and point on a boundary, is a local system on space of disk's boundary such that boundary... This do not touch... So I allow to move my thing, but boundary should should cross this red point, this S. What is the higher D that we avoid those two? Yeah, okay. Yeah, it's some infinite dimensional space, but this should form a local system. Then we have such thing, then we get restriction maps. If one disk contains another and a B is equal to B prime, is the same point. Then we get a map from A of DB to A of D prime B. And it's again locally constant map. If we reform both things. 
Yeah, so what happens here, get this D, I get D prime and some bad points stay in one, some in both of them, some outside. Because I have just a map of pairs, I just get restriction to smaller space. It's clear. And then it satisfies some kind of additivity. XM, namely imagine the situation like this. I have D and maybe I just make a little bit smaller. So, D contain D prime and union D double prime. And there's no point, no black points, no bad points in the middle. So, there are some points here, here, outside. So, D is a big, disk contains two small disks. Yeah, all points will be, mark points will be the same. Then the additivity property says the following. The condition about the same in the green, that means that... No, no, no, I said that D and D minus D prime intersecting with S is equal to D. I put this condition and D is kind of something, some disk containing... D is not the, contains the union. Yeah, it's not, yeah, and there's maybe just joined here. This point B, the touch at one point. Then what I have in this station, by restriction maps, I get a map from A D B maps to A D prime B plus A D double prime. You detect D minus the union or D minus the intersection? Sorry, D minus the intersection, the, oh, the union, sorry. Sorry, yeah, yeah, you're right. Yeah, by restriction maps, I get a map from, to one group to another group, I get a map to direct sum, and this thing is isomorphism. Yeah, so I get this different types of structure, which I... If in this situation, so what is the relation of this to my shift, to constructible shift F? The relation is the following. If Z is in C minus S, the stock of my shift is, can be written as one of the spaces, namely D B, where we have the following picture. Here B will be Z, D, and D contains S. There are many such disks, you choose any one of them, and you define this like this. It's not really definition, because then you should say how you identify one thing to another. It's a pretty complicated story, but a similar story if Z belongs to S, one of these special points, then you should draw the following picture. You should make, you should just exclude point Zi, that's the point B, and all the rest will be inside. Yeah, that's the definition of what F of Zi is, A of such a thing. So what... Yeah, if you look on this axiomatics that I get, groups depending on disks and some restriction maps and additivity axioms, it works. So axiomatics works. If we replace the commodule of pairs with some integer coefficients, with some given, for given in a k, by any contra variant functor, called h, from what? From topological spaces of pairs, because pairs can be replaced by topological spaces with base point, by contracting close 6 to 0, topological spaces with base point. Two groups are even to any i-billion category. It doesn't have to be groups, it's contra variant functor, it's up to homotopia, but it shouldn't be some homologous theory, it has only one properties. If you have two spaces, I don't know some space, y1 and y2 with base point, y2, then have the union where you identify, you can, bouquet, you identify the base point. Of course, it contains both y1 and y2, and then we have a map, and this should be isomorphism. So it's definitely any homologous theory, like k-serial, bordiasms, what a homologous-ethnic-efficient works, but maybe there's some kind of bizarre, I don't know, functors on homotopy types, which do not come from homologous theory, because I just sit in only one degree, so I don't do things like suspension. 
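Collecting the pieces of this formalism in one place; the notation is mine, with a blackboard-bold disk symbol used for the topological disk to avoid a clash with the divisor D.

```latex
% The disk formalism, collected (notation mine).
\[
A(\mathbb{D}, b) \;:=\; H^k\bigl(f^{-1}(\mathbb{D}),\; (D\cap f^{-1}(\mathbb{D})) \cup f^{-1}(b);\ \mathbb{Z}\bigr),
\qquad b\in\partial\mathbb{D}, \quad \partial\mathbb{D}\cap S = \varnothing,
\]
subject to: (i) $(\mathbb{D}, b)\mapsto A(\mathbb{D}, b)$ is a local system on the space of such pairs;
(ii) $\mathbb{D}'\subset\mathbb{D}$ with $b'=b$ gives a restriction map $A(\mathbb{D}, b)\to A(\mathbb{D}', b)$;
(iii) if $\mathbb{D}\supset\mathbb{D}'\cup\mathbb{D}''$ with $\mathbb{D}'$, $\mathbb{D}''$ meeting only at $b$ and
$\mathbb{D}\cap S = (\mathbb{D}'\cup\mathbb{D}'')\cap S$, then
$A(\mathbb{D}, b)\xrightarrow{\ \sim\ } A(\mathbb{D}', b)\oplus A(\mathbb{D}'', b)$.
The stalks are recovered as $\mathcal{F}_z \cong A(\mathbb{D}, z)$ with $z\in\partial\mathbb{D}$ and the interior of
$\mathbb{D}$ containing $S$ (if $z\notin S$) or $S\setminus\{z\}$ (if $z\in S$).
```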
Then one can repeat the same procedure and get a constructable shift is argum equal to 0. Yeah, yeah. What is right there? It should be isomorphism. Isomorphism, yeah. So, it'll make maybe in 10 minutes. Ah, it's not clear at all. It kind of reduces from a homological algebra. Just give you some kind of theorem, which we proved with Katsarkov and Panty a few years ago, as this kind of fixed finite set, C to K2, or C, that's really meta. Then the following things are equivalent. This is a category of constructable shifts of, let's say, of a billion groups, which are local system on the complement to S, and set it, set this no h0 and h1, as this one condition. Then it's equivalent to the following data, to the following things. It's equivalent to collection of the Sibling groups ADB, boundary B, this is empty, which form a local system on the space of parameters, DnB plus restriction maps, as I explained before here, set aside the dTv, and so I think that's something very explicit. So this third description, choose some topological data priori, choose some kind of DI BI, set it DI interior of DI contains only DI and no other. So for each point in my set, for each point I choose a small disk containing input, choose a base point on it, and also choose for any ordered pair of indices, which are different, choose a simple path, and I'll draw the picture. And the simple path, save should go from start from some point, so I have map from interval to R2, so 0 goes to point of my first disk, not point BI, 1 goes to point Bj, and the interval goes to R2 minus union of all disks, decay of all points, it doesn't intersect all, so I draw this path outside of my points, so I just make this chase, but not everything, it's, for example, it should be something like sorry, disk I disjoint, yeah I see that also I'm missing this, I promise empty. On this collection of paths, it could be arbitrary, I don't, for example, I can choose of something called Gabrielle of Stipe, what does it mean? I kind of imagine to have some points, the infinity which is very far from my points, and kind of first connect by disjoint path to all the I, there's no disk yet, but now what I do, I just have this BI DI, I have, I just draw a small little disk and have first intersection points, to the points BI, but now how I connect them, it's, yeah there are two possibilities, I go from upward to down and one should choose pass something like this, point in my set. So you want something which is topologically equivalent to such a thing or what? Yeah topologically equivalent to such a thing here, you get just collection of things, you get collection of invertible maps, so it's in this inverse from each guy to itself and collection of maps, TIJ from UI to EJ for I non equal to J, so it is thank no further constraints. What you do there is not, I didn't understand the principle, so because you have different choices, it's A and could be J, maybe first and then J, second, I, you start from one I and you want to reach another disk, but it could be above or below in this picture. And if you have more? No, no, it's many of them, it's maybe some intermediate here which I didn't draw. Yeah, yeah I have many many disks but they are from order set and if I go from one to another, from which I start could be below to the result above and according to this picture you use different. Ah, but do you have a pass from I to J and a pass from J to I? Yes, yes, yes, it's ordered pair, ordered pair. In the past, who intersect each other? 
Yeah, yeah, yeah, yeah, yeah, they can intersect, yeah. Yeah, so this is just, yeah, it's really a long story how it goes. I just want to explain you, and I'll explain it's in different terms in my later further lectures. What is TII? What is space UI? UI is exactly this ADIBI, this is definition of spaces UI. Then what is TII? You consider if you have a disk. One second. Okay. You can see the one parameter familiar of pairs disk is marked points, you just travel your point around the boundary of the disk and you make by because constraints it's kind of local system, get automorphism of this space UI. And what is TIJ? Just a second. So get for I and get something for J. And what, I just finished in one minute. You have a pass. And now, now again, consider one parameter familiar of disk with marked points, which will start with the following thing. It will be, the mark points will be always BI. I start with this, then intermediate step will be something like this. And the final step will be again get one parameter familiar of disk with this marked points. And what happens here? So I get some kind of again disk depending on a, oops, sorry, disk depending on parameter theta, some angle along this variable. And so what I get, I get AD0, maybe BI by monodromia along the pass, or holonomy along the pass. It's identified as D maybe 2 pi, the same BI. Then both of them map by restriction map, because both contains original disks, A, BI, BI, and also both of them isomorphic by axiom to the direct sum. It's also the morphic to direct sum. And one can check that the matrix, when I have, so I get isomorphism of the space to itself, in the matrix will be identity, identity zero here because this one is quotient object, and sum operator which will be definition TIJ. Yeah, that's a definition of operator TIJ. Yeah, so it's a really long story, but if I choose, and if I get this explicit data, one can explicitly construct a shift, which is argama is equal to zero, and the whole thing is does depends on description. And so eventually we see that's argama is equal to zero, it's a kind of calculational result, it's not follows from abstract homological algebra reasons. Yeah, so one can choose different passes, in fact, if no three points right on the same line, one can choose kind of straight passes between this, maybe rotate a little bit, these things. Yeah, so in general, what kind of collection of passes you can choose to have this equivalence, I don't know. It's kind of interesting question of topology about break groups. So now we have a small break for maybe 10 minutes. So before I talk to you a lot about this topological, how topology looks like, so eventually it's very simple data, but depends on this drawing, and now about Hodge theory. So all this thing which you consider x, d and f is a generalization of just smooth variety and divisor say with normal crossing. But variety is not compact. And in what sense of generalization, we just consider usual rate geometry, it's a part when f is equal to zero. And of course, then we consider h, the Ramx, df, it's kind of generalization of the Ramcoumology of pair when there's no function, we don't have this correction to differential. And so if you try to think it's about Hodge theory, this is Camelge of pair, it's typical example of mixed Hodge structure. Actually it's not enough to consider just open varieties, you really consider pairs to get interesting Hodge mixed Hodge structures. 
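Before the Hodge-theoretic part of the story continues, here is a compact summary of the topological classification described above; the data (U_i, T_ii, T_ij) is named as in the lecture, while the way it is written out below is my reconstruction of the verbal description.

```latex
% Theorem (with Katzarkov and Pantev), as stated above: for a fixed finite set
% S = \{z_1,\dots,z_n\} \subset \mathbb{C}, the following data are equivalent.
%
% (1) A constructible sheaf F of abelian groups on \mathbb{C}, a local system on
%     \mathbb{C}\setminus S, with R\Gamma(\mathbb{C},F)=0 (and h^0 = h^1 = 0 as above).
% (2) Groups A(D,B) forming a local system over the space of pairs (D,B), together
%     with restriction maps satisfying the additivity axiom recorded earlier.
% (3) After choosing disjoint small disks D_i with z_i the only marked point of D_i,
%     base points B_i \in \partial D_i, and a system of pairwise simple paths
%     \gamma_{ij} outside the disks (e.g. of Gabrielov type):
\[ U_i := A(D_i,B_i), \qquad T_{ii} \in \operatorname{Aut}(U_i), \qquad
   T_{ij} : U_i \longrightarrow U_j \ \ (i \neq j), \]
%     with no further constraints.

% T_{ii} is the holonomy of moving the base point B_i once around \partial D_i.
% T_{ij} comes from the one-parameter family of disks D_\theta swept out along
% \gamma_{ij}: the holonomy A(D_0,B_i) \simeq A(D_{2\pi},B_i), rewritten through the
% additivity isomorphisms A(D_\bullet,B_i) \simeq U_i \oplus U_j, is block-unitriangular,
\[ \begin{pmatrix} \mathrm{id}_{U_i} & 0 \\ T_{ij} & \mathrm{id}_{U_j} \end{pmatrix}, \]
% and the off-diagonal block is, by definition, T_{ij} (up to the convention of which
% summand is the subobject and which the quotient).
```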
And pure case in, in, correspond to usually that when d is empty and also x is compact. And what is an analogy of pure case in the case of function? A kind of analogy, analogous to pure situation is when d is empty, but f is proper. In fact, it's very common situation that is proper. Often when d is empty, but f is not proper, not proper, there exists a compactification, or partial compactification, x tilde contains x and f tilde c is proper. Such that compliments say again a, this kind of d, horizontal is divisible, is a normal crossing. And f, f tilde restricted to any, and let's say it's union of some d alpha of any intersection, is vibration. So topology at infinity doesn't change. Yeah, that's kind of really nice situation. In this situation, first of all, see that critical points of f is the same as critical points f of tilde. It's, it's automatically belongs to x. And then whatever, commulg of pair for x and f is the same as commulg of x tilde f tilde, and the same is the RAM, and comparison isomorphism. Yeah, so one can kind of get non, a non proper situation can be replaced by proper without changing anything. And yeah, it's very concrete example, I can just say the following. Suppose x is c star square, this coordinates, let's say x1, x2, and the map is those two x, some low run polynomial on c star square. Then the generic fiber is elliptic curve minus three points. And even singular fiber, singular elliptic curve minus three points, you just add three points to the fiber and then you can put the fiber. So x tilde will be x and you put three copies of c, you can put it somehow, and this, this things became now proper map and without changing topology. Yeah, and, and such things are really very convenient, just instead of this proper map, you can consider this things which are equivalent to proper, you have the same co-amoology. And yeah, for example, we can make kind of tame polynomials, even for example, take x is c and f is x square, x square, it's, you can, yeah, no, here's the term, yeah, in case of one dimension, there's really no problem. But what is advantage? Such things are, call x e kind of is a singular to infinity. If there exists x tilde satisfying all this property. And, and if you get this things, one is a singular to infinity and you have another thing is a singular to infinity, you can make a tensor product. And here will be some kind of tomi-sivastiani sum, so it will be kind of pre-image of one plus pre-image of f2. And just take this things is again is a singular. Yeah, so in sense it's proper kind of, kind of pure hodge, whatever, a lack of pure hodge structures, we can really multiply and just take product, yeah, it's, some, say, remark that it's very common. Yeah, there is some thing called tame polynomial, for example, people sometimes consider x just cn and you get f is some polynomial. And for good polynomials, you can compactify fibers uniformly without changing topology. And, so reduce to this pure case. Yeah, but now I assume that it's, assume this empty and f proper. So, I can do the following. I consider some h bar will be parameter, plan constant. For any plan constant I associate h dram, depending on the constant on x and f. There's no divisor, I skip it from notation. And just by definition it will be hypercomulger of x is the risk topology. We take forms with differential h bar d plus df. Yeah, so the claim, the claim is the following. 
So, this form, this collection of h dram h f is forms an algebraic vector bundle on plane series h bar coordinates. And for each degree, in doubt with, it has a meromorphic connection, natural meromorphic connection with pole at h bar equal to zero and irregular singularity in general here and regular singularity at infinity. No other singular points. And the connection has second order pole. It's irregular to have second order pole at h bar equal to zero. Yes, yes, kind of remark. Yes, yes, it's kind of first fact, it's vector bundle. Essentially it says it's rank of h, h bar equal to zero. It doesn't jump. It's our old result with Sergei Baranikov which was proven several times and I will explain the proof in next lecture of more general result. Yeah, yeah, that's it's analog of degeneration of h dram spectral sequence. And this allows to speak about the fact that the connection has second order pole because otherwise you don't have canonical trivialization. Yeah. So, actually what's the origin of this connection? This for h bar non-equal to zero, one get co-variantly constant into dual lattice in this caramology kind of gamma h sitting to h. And in fact it's to describe connections better describe this lattice. The reason is the following. You see that this thing is isomorphic. We've done commodule of d plus df divided by h bar. Namely isomorphisms you multiply h times degree of form. You kind of rescale complex and you replace by this complex. And then here by comparison isomorphism. So, it's h dram of x and f divided by h bar. And then it's isomorphic to h beta of x f divided by h bar times dram c. And the image of this thing will be a lattice gamma h. It gives a lattice and lattice gives you a connection. There will be a connection preserving this lattice. Yeah. So, the non-trivial thing to check its lattice. This thing has a really second order pole. First thing is if you consider connection over go to Laurent series in variable h bar. Then this and you allow poles. Then it will be isomorphic to the following thing to direct some of all critical values of my function. You take exponent of the i divided by h bar as a kind of generator of d module in h bar variable tending some regular holonomic demodule over this formal Laurent series. And this part, if consider regular holonomic demodule over Laurent series, it's the same as vector space with plus automorphism. And in fact, what is this vector space in this automorphism? We can see the homology of kind of beta homology of neighborhood of maybe f minus 1 of the i. This coefficients this shift of vanishing functions of vanishing cycles. Of function in shift by z, so it will be value zero. And this is a billion group with automorphism and it gives the regular holonomic demodules. And this is I think it's maybe slightly a lot subive. Actually my question a few years ago and it was the answer that it's this holonomic demodule is canonicalism of to this guy. Okay, then there are things which I probably will not explain right now. The Stokes filtration at h bar equal to zero in any direction is compatible with lattice structure. Yeah, so there are several statements here. If you also go to form power series but without inverting variable h bar, then it's canonical azomorphic to direct sum over all critical points. And you take homology of formal or maybe analytic neighborhood or analytic algebraic neighborhood of what? You consider these things and intersect with this critical set of f called this maybe kind of capital zi of zi. 
It could be very singular subset algebraic subset of my x but take arbitrary neighborhood and in neighborhood I consider forms now at formal parameter and take again hd plus df. The same differential. Yeah, so I can calculate it, calculate these things locally and what else I want to say maybe a couple of points here. Is that fiber at zero is naturalizomorphic sum over i, homology again of neighborhood of zi. It's complex of forms with differential adjustment application by df and it's azomorphic homology of x with the risk typology is a homology of x with analytic typology with the same thing. So formal you don't think that you write formal completion? Yes, yes. And the analytic you don't do you mean complex analytic? Complex analytic. But then you have to take formal or comparison power series? No, no still formal. Yeah in each part of the formal power series. Yeah in fact. So when you take analytic is a fixed one or the limit of all the variables? For any given neighborhood. Yeah, yeah, it was pretty complicated story here. I think it's all of them as a morshek. Whatever you do. And we do the last point here get and also get kind of non-degenerate pairing. It's essentially the spawn carrier pairing which I explained to you before between h and the ram h bar xf. It's azomorph to h maybe take dual and the ram minus h bar xf. Because it's for h bar not equal to zero we get that up to shift we get this pairing which I explained before. It extends to non-degenerate pairing at h bar equal to zero. Yeah so get this picture. I will not really, yeah it's yeah no this is this kind of analog of Hodge theory and let's explain in a second why it's analog. That's going to be sort of analog of Hodge theory. Okay. Yeah so what here really goes on. For h bar not equal to zero we already get this notion and no trouble at all. What happens if we remove zero? We just consider remove zero obtain get irregular the polynomial demodule on c star with coordinate h bar which has irregular singularity at h bar equal to zero and regular h bar goes to infinity. Yeah but now one can make the following I think let's introduce inverse variable call it t. Just so we get demodule on c star t to get vector bundle this connection algebraic connection but now it has irregular singularity as t goes to infinity and irregular as t goes to zero and demodule of this thing it means that we have an action of t inverse and do dt since it's like usual commutation relations but this demodule we can interpret as demodule over c. Let's forget that t is invertible. The same is demodule on c this variable t such that t gives invert multiplication by t gives is invertible operator. It does the same story. Plus the regularity condition. Yeah again the same regularity condition yeah which I have here plus the same regularity condition. And such thing yeah so it's and then this thing gives by Fourier transform. So Fourier transform means that t goes to do dt with z some dual variables and do dt goes to minus z. Okay and we get demodule on c with z coordinates such that do dt is invertible. Plus some regularity. Plus some yeah but this demodule will be regular will have regular singularity everywhere and then what? 
Everywhere including infinity yeah and this demodule has regular singularity with infinity then it gives by Riemann Hilbert correspondence gives the perverse shift on c such that our gammas of the shift is equal to zero because how we calculate the drum commodule you get a module kind of D demodule over this invariable z and when we calculate drum commodule you just consider m tethering o by o m tethering o by forms and you get the rambd differential you just act by d o d z and consider kernel and cokernel of these things here so the drum commodule is kernel cokernel of d o d z acting on this total space of a module and because it's invertible it means exactly means that it's our gamma of corresponding constructible shift is zero and this is exactly this shift which I explained you before or or it's functions functions and polynomial functions in the invariable yeah so you see that it's um yeah so it's eventually it's equivalent to this topological data which I have before to have these things and so if you just throw away zero you get the same topological data just rephrase in different way and extension to h bar equal to zero it's kind of analog of hodge filtration that's where hot structure appears one can treat the very simple case when f equal to zero yeah and if you follow the line then h deram h equal to zero of x f equal to zero you will be direct sum of hp of x omega two it's a bit direct sum of commodule forms yeah and for h non-cluzio get the ram commodule and how we glue one thing to another it's using hodge filtration yeah so because it's well known things for what we call filtered spaces are the same as c star querian bundles and you get essentially this this picture and uh what I want to say is that yet it it looks it's all this things can be extend to general case if x bar is uh contains this d vertical you know to plus d bar I have the three things and I really want this property uh is before and I get a bar from my extension to p1 uh then what one should do one should consider on x bar minus d vertical we consider things like this we consider forms which vanish whose restriction to d bar is zero and our logarithmic forms with respect to d horizontal and we have this shift and and we end up in doubt with differential hd plus the f take our r gamma and uh is this again form a at least a hope form a vector bundle as h bar goes to zero and the same story works uh yeah so just this is minus d vertical yes sorry sorry you're right you can see the forms vanish yeah because it's the same you have topological description topol and cumulative pair then jump yeah but the j bar could to zero it I think it's it's the spectral sequence again degenerates but um but in principle should be part of more general things about so this is a generalization of the previous yes yes yeah yeah yeah I think it should work even for general mixed hodge models so let's me formulate the general story there's some kind of notion of six called mixed hodge models it's by my setor yeah yeah so it's if you have a but this mix of modules could and very interesting mix hodge model on a fine line c is algebraic all right you're not an analytic one what is it it's constructible shift it's perverse shift of z modules which correspond to some things with regular singularities corresponding to demodule with regular singularities and to treat care something like hodge filtration this will be vector bundle this connection to some delta functions yeah so there's some kind of notion here everywhere at critical infinity 
yeah and so we get some some category of mixed hodge modules and contains part which do not by something like this those such that r gamma of constructible shift is zero and that shift say yeah yeah or maybe yeah yeah maybe to q shift yeah forget about torsion although it's not terribly good yeah and this is a rigid tensor category where you use convolution additive convolution to make a tensor product yeah actually this perverse shift will be automatically constructible shifts shift shift sitting in degree one it's follows from this condition yeah there's some simple result that in case of one variable it's automatically just sits in one degree shift and there is some kind of weight filtration here yeah so this one can make analog of weight filtration story which we developed with the unsuble man for some auxiliary reasons and um yeah so this can complete parallel to usual story and maybe I stop here and continue next week
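To close this part of the lecture, here is a compact summary of the main formulas from the Hodge-theoretic half, transcribed from the verbal description above. Signs, shifts and Tate twists are suppressed, and the precise conventions (in particular for the Betti lattice, the Künneth statement, and the Rees construction) are my best reconstruction rather than verbatim statements.

```latex
% Twisted de Rham cohomology of a pair (X,f), divisor omitted when it is empty, and
% its \hbar-family:
\[ H^\bullet_{\mathrm{dR}}(X,f) := \mathbb{H}^\bullet\!\big(X,(\Omega^\bullet_X,\ d+df\wedge)\big),
   \qquad
   H^\bullet_{\mathrm{dR},\hbar}(X,f) := \mathbb{H}^\bullet\!\big(X,(\Omega^\bullet_X,\ \hbar\,d+df\wedge)\big). \]

% Claim: for f proper (or "nonsingular at infinity" in the sense above) these groups
% form an algebraic vector bundle over the \hbar-line, with a natural meromorphic
% connection having a second-order (irregular) pole at \hbar=0 and a regular
% singularity at \hbar=\infty.  For \hbar\neq 0, rescaling k-forms by \hbar^{-k}
% identifies the complex with (\Omega^\bullet, d+d(f/\hbar)\wedge), and the comparison
% isomorphism with Betti cohomology gives the covariantly constant lattice
\[ \Gamma_\hbar := \mathrm{image}\big(H^\bullet_{\mathrm{Betti}}(X,f/\hbar;\mathbb{Z})\big)
   \ \subset\ H^\bullet_{\mathrm{dR},\hbar}(X,f). \]

% Formal structure at \hbar=0, and the fiber there:
\[ H^\bullet_{\mathrm{dR},\hbar}(X,f)\otimes\mathbb{C}((\hbar))
   \ \simeq\ \bigoplus_{z_i\in\mathrm{CritVal}(f)} e^{z_i/\hbar}\otimes R_i, \]
% with each R_i regular holonomic over \mathbb{C}((\hbar)), i.e. a vector space with an
% automorphism (vanishing cycles near f^{-1}(z_i) with their monodromy), and
\[ H^\bullet_{\mathrm{dR},\hbar=0}(X,f)
   \ \simeq\ \bigoplus_i H^\bullet\big(\text{nbhd of }Z_i,(\Omega^\bullet,df\wedge)\big),
   \qquad Z_i := \mathrm{Crit}(f)\cap f^{-1}(z_i). \]
% The Poincare-type pairing extends nondegenerately to \hbar=0:
\[ H^\bullet_{\mathrm{dR},\hbar}(X,f)\otimes H^\bullet_{\mathrm{dR},-\hbar}(X,f)
   \longrightarrow \mathbb{C}. \]

% Fourier-transform dictionary: with t := \hbar^{-1} the family is a D-module on
% \mathbb{C}_t on which t acts invertibly; the Fourier transform
\[ t \mapsto \partial_z, \qquad \partial_t \mapsto -z \]
% yields a D-module M on \mathbb{C}_z with \partial_z invertible and regular
% singularities everywhere.  Its de Rham cohomology is computed by the two-term
% complex M \xrightarrow{\partial_z} M, so invertibility of \partial_z is exactly
% R\Gamma(\mathbb{C},F)=0 for the perverse sheaf F attached to M by Riemann-Hilbert:
% this recovers the topological data of the first half.

% The case f=0 (ordinary Hodge theory): the fiber at \hbar=0 is the associated graded
% of the Hodge filtration,
\[ H^\bullet_{\mathrm{dR},\hbar=0}(X,0) \ \simeq\ \bigoplus_{p+q=\bullet} H^q(X,\Omega^p_X), \]
% and the whole family is (up to conventions) the Rees bundle of the Hodge filtration,
\[ \mathcal{H} \ =\ \sum_p \hbar^{-p}\,F^p H^\bullet_{\mathrm{dR}}(X)
   \ \subset\ H^\bullet_{\mathrm{dR}}(X)\otimes\mathbb{C}[\hbar,\hbar^{-1}], \]
% using filtered vector spaces = \mathbb{C}^*-equivariant bundles on the \hbar-line.

% Tensor structures: geometrically the Thom-Sebastiani sum (presumably with the usual
% Kunneth isomorphism for pairs nonsingular at infinity),
\[ (f_1\boxplus f_2)(x_1,x_2) := f_1(x_1)+f_2(x_2), \qquad
   H_{\mathrm{dR}}(X_1\times X_2,\,f_1\boxplus f_2)
   \ \simeq\ H_{\mathrm{dR}}(X_1,f_1)\otimes H_{\mathrm{dR}}(X_2,f_2); \]
% sheaf-theoretically, additive convolution on the affine line along s(z_1,z_2)=z_1+z_2,
\[ F_1 \ast F_2 := Rs_{*}\big(F_1\boxtimes F_2\big). \]
```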
The goal of the first part of the course is to describe and compare various cohomology theories for algebraic varieties endowed with a global function. In the second part, infinite-dimensional applications will be discussed, including non-perturbative quantization of algebraic symplectic varieties.
10.5446/16277 (DOI)
Welcome to the OpenML workshop. It's my pleasure to introduce our first speaker, Ross King from Manchester University. He's done amazing work. He's published multiple times in Science magazine. You probably know him from his work on the Robot Scientist: it's a robot that does experiments, tries to interpret the observations, adjusts its hypotheses and then runs new experiments, an amazing system. He's also permanently working on proving P equals NP. But today he's going to talk about meta-learning and QSAR data and how it will help research. Thank you very much. Is the mic on? Okay, okay. Can you hear at the back? Okay, good. So thank you for the invitation to Eindhoven. It's the first time I've ever been here. I'm going to talk about meta-learning and QSAR data, and this work goes back over 20 years to this project called the StatLog project, which was one of the first comprehensive comparisons of machine learning methods. I was thinking about it now that I'm here, actually. So one of the conclusions from that study, the empirical study 20 years ago, was that Bayesian networks don't work. They were by far the worst methods that we tested out 20 years ago. But of course, that didn't really change anything. Bayesian networks have been highly successful methods. So I'm not completely sure what that means. Whether they... Yeah, I'm not sure what it means, but it's interesting, I think, that when we tested them 20 years ago, they didn't work very well at all. Now they work well, and lots of people have spent a lot of time working on them, not because of their empirical success at the time, but because of their elegant approach to statistics and machine learning. Okay, meta-learning and QSAR data. Okay, so first the motivation. It surprises me and sort of disappoints me that many of the best people in machine learning spend their life trying to get people to click on certain online adverts, you know, slightly more efficiently than the rival companies. I don't think that's a good use of their talents and life. And for the younger people here, I don't think you should spend your life doing such a thing. Life is short and you should try and do something useful, I think. Of course, it's better than making weapons for the military, but advertising is still not particularly something to be proud of, I think. So, parasitic diseases. It's still shocking that the world has still got these diseases; they're major diseases. Malaria kills at least a million people a year, perhaps two million, we don't really know, because the health records are unclear, especially in India actually; if someone dies in a remote village, it's not always clear what they died of. It could very well be malaria. Hundreds of millions of people catch malaria every year. Hundreds of millions of people catch schistosomiasis as well. It kills tens of thousands of people. It's a horrible parasitic disease caused by a worm. Malaria is caused by a single-celled parasite, as is leishmaniasis, which kills tens of thousands of people as well and causes horrible disfigurement. And Chagas disease, from South America, kills tens of thousands of people as well, mostly through complications of heart disease. So these are major diseases out there. We still need better treatments for them, better drugs. So millions of people die from these diseases, and hundreds of millions of people suffer infection.
There are so-called neglected tropical diseases, because the pharmaceutical industry and its wisdom has not spent money on them. They are, I'm sorry, in our society, they're driven by profits, so they think there's not enough money in these diseases. I think actually their modelling is a bit wrong. So let me try to explain why I think that is. How I think they work is that they look at different disease classes and look to see how many rich people, or at least people in the Western world have them. And then they think, if I could treat type 2 diabetes, that would be worth so many billion dollars a year to me. And that's how they go about, I think. And I think the fundamental flaw there is that they don't take into account the a priori probability of succeeding in finding a drug to treat, say diabetes type 2, because we don't really understand how that disease works. It's something complicated to do with the systemic control of insulin, but we really don't understand it. So it's very hard to treat a disease you don't understand if a single drug. That contrast with these parasitic diseases I talked about earlier, we actually know very well how to treat them. We just, they kill the parasite, and it's not particularly the thing to do, because they're very different from human cells. The last common ancestor was, for most of these, is hundreds of millions of years ago, perhaps billions of years. So we actually know how to treat them. And the pharmaceutical industry could have treated these diseases very easily if they just spent the money, but they haven't. So we need, in the university sector, to be more efficient in the pharmaceutical industry, because they spend, on average, something like a billion dollars to find one drug for one disease. We need to be more efficient. Okay, the problem of finding a drug to treat a disease called drug design, what we want to do is to find a small molecular drug which will modulate the biological activity of a larger chemical called a protein, which will then affect the whole living system. And that's how we treat diseases. Normally we find small molecules that are specifically bind with proteins. That's the name of the game. So small molecules, these are example small molecules. Ibuprofen is the classical pink color. You take a couple hundred milligrams of that if you have a headache, it works really well. Here is the, so this is one abstract, where often computer scientists think of chemicals as some sort of subtype of graph, but they're actually sort of three-dimensional structures. Here's a sort of space filling model of ibuprofen. You can see the red is oxygen, this is the aromatic ring in the middle. If you remember your chemistry, oops. And actually of course they're not static, they actually move around, vibrate. These are proteins, so the protein is the order of several, maybe a thousand times bigger than the small molecules. These molecules are going to bind to it specifically at places. Okay, so this is the diamond synchrotron. Synchrotron is a big x-ray machine that makes high-powered x-rays which are used to get the structures of the protein. So you crystallize a protein, then fire these x-rays at it, and you can work out the structure of the protein. What's interesting I think is the size of this. So if you see these little dots down there, this is the size of a large football stadium. So computer science, we're not imaginative, and should think of the physicists, and the biologists managed to build tools this big. 
We should think how much could we do for a billion dollars? And the justification for diamond was to treat diseases. So it's crucer type justification. Though the physicists want it for their own reasons as well. Here's our big protein. This is the protein, this is a small molecule interacting with it. That's the sort of typical of the scale of the whole thing. This is a close-up view. And this one is just to show you the sort of level, the complexity of the interactions. This is like a cartoon of that protein I showed you earlier, interacting with a small molecule. So there's lots of spatial interactions, very specific. In fact, drugs, if you think you're going to put a drug into a person's body, it's got to target the right place very specifically. You know, you're only going to add a little bit of drug, and it's got to go to that particular target, and not interfere with anything else. And that can only curve that it's very specifically binding to it. So the probability of it binding there is millions, billions of times more likely than anywhere else. And that's what you try to achieve. I think of an assay. An assay is a small biological test which gives you a prediction of how well the actual compound's going to do when you give it to a real human being or whatever. So it's a cheaper test than actually giving it to a living human being also more ethical because you can't test millions of drugs on human beings. Okay, so it's a simple test. You normally have two approaches. One is to use a pure protein and measure binding on it. So the protein is then called the target. The problem of doing that is that you're never sure that when you put this compound into a human being, that it's actually going to reach the target because so many other things could happen in a complicated living system. The other approach is to use whole human cells. And the problem there is that you're never sure what you're hitting when you put something in. Is it just a target or what? So these are. And both assays are very expensive. So typically a pharmaceutical company will spend 100,000,000 euros designing an assay for one of these trials of compounds. Okay, so QSAR, this is to machine learning. So QSAR is a quantitative structure activity relationship. Essentially it's a function where you input the chemical structure that outputs a real number of how good that compound is on the assay. Okay? So it's a function, the input of the function is the structure of a small molecule and the output is a real number, which is the predicted activity on the assay. Okay, and you typically learn these assays to help you design new compounds. So you want, the name of the game is to design a new compound, not just to make a good QSAR. That's not the point of it. It's actually making new compounds. Can you see? Yes. Okay, and the particular QSAR problem depends on what is known. So you know the small molecule structure, and that's the default case. In some cases you know the structure of the target. In some cases you know exactly. So sometimes you know what the target is, you don't know how the small molecule is binding with that target. Sometimes you just, you know how it's binding, like we saw in the previous cartoon. Okay, and these are slightly different problems. In general I'm going to talk about just when we know the small molecule structure. We don't know the actual structure of the protein or the extra information. Okay, and then there's a problem of how do you represent chemical structure? 
So you have to have some way of, I'll search the picture, abuprofen is three-dimensional shape. You have to somehow encode that into something which machine learning statistical program can use. So descriptors for a table. You can represent the bulk properties of the molecule. So log P is essentially how hydrophobic it is, how oily it is. So that's important because it turns out that you don't want to be too oily not oily enough if you want to be, we are successful drug. And perfectly we know that's the case. Actually it's a strange story actually. So the pharmaceutical industry, their whole business is based around putting drugs into people. So you think that they would know how drugs get into cells. So a cell's got a memory around the outside. And what they always used to believe was that the reason that you want the molecule not to be too oily but oily enough is that because it's going to diffuse through the membrane of the cell into the cells. That was always what they told you. That turns out to be wrong. And you think that they would have learned that a long time ago because it's so core to their business. It turns out there's actually these proteins which import molecules into the cell and export them. And any drug has to fit into one of these proteins. And one way the pharmaceutical industry should have realized that what they said was wrong was because if you look at the small molecules in the cell, these compounds which are actually there also have the same amount of oiliness as drugs. So they would have diffused out. It was obvious that would have happened. But I don't know, it's strange I think. They don't really seem to step back and think about what they're doing. They have these rules that says you should make a molecule of this particular oiliness. They don't really think why. Fingerprints. The standard way to do this in the industry now is using these fingerprints. I think it's a pretty ugly thing to do but this is the standard. So what you do is you have maybe 100 to 1,000 billion attributes which say something about the molecule. Each attribute says, for instance, is there an oxygen in the molecule? Is there an alcohol group? Is there an alkane group? Is there a benzene group? So they have all these complicated questions. Each one you just get yes or no. You get this long fingerprint, typically at least 100 long, possibly 1,000 which is standard. And that's what they use. Some work been done in 3D shape, etc. Okay, so that's the background of QSARs. We have this project to work on what we call MetaQSAR. So there is... The literature on QSARs is vast. Thousands of papers have been published. Every possible machine learning method has been applied to the problem. And the result of that is not surprising. It's for some problems, some methods work well and for other problems, other methods work well. Probably down to some deep bias in the actual learning problems. So what we're trying to do in this project is to do some MetaQSAR learning. We're going to apply lots of QSAR methods. Sorry, we're going to apply lots of statistical methods and machine learning methods to QSAR data and look how they do well on different problems and try to figure out why they do what they do. And hopefully there will be some lessons to the pharmaceutical industry and people designing drugs so that we can treat malaria and things better. So that's the basic idea of the project. Okay, so we have different databases from QSAR. 
We're building this sort of intermediate databases which we're going to use in the learning. And we're at the stage where we're sort of building the infrastructure for all this. So I'm going to show you some initial results but these are very initial. We're just showing that we actually can get everything to work. Okay, I'll say I'm pretty sure every form of statistical machine learning has been applied to QSARs. How they differ is to the a priori presumptions to make a bit of a learning task and they assume that the data is going to be represented in a standard way which is a two-pull of attributes. Okay, so one thing that made this possible is when I started working on QSARs also about 20 years ago that I had to input the data myself by hand. I'd read a scientific paper and I would have to translate that data into the computer by hand manually. Now there is this database called Kembal. It was essentially what they've done was that this private company manually curated, there was this journal called the Journal of Medicinal Chemistry which is the top one in the field of medicinal chemistry which is the area of chemistry where you design drugs, medicinal chemistry. Actually I'm quite proud of that for a paper in it, it's a real chemist like journal. So what they did was this private company manually essentially typed all the data from these papers into a big database. So it's based on around about 60,000 publications and they manually took all this data out and put it into a big database of databases. So each one of the papers, typically in a journal of medicinal chemistry paper you have a description of maybe 100 compounds, maybe less, what the assay was and how well the compound did in the assay. That's what a typical paper looks like and they may well have applied some sort of regression method to that data. So this company sort of collected all this data and they were going to sell it but they went bankrupt. Somehow the business plan didn't work, which is good for us because the welcome trust which is this giant medical charity in Britain sort of stepped in and bought the database and then made it online so that EBI and I have this database. So it's publicly available to anyone who wants to look at it. Has anyone heard of the welcome trust? It's their, I don't know, this giant medical charity, they're worth tens of billions of pounds. They never give me a penny in my life I applied. I don't know, at least six or seven times. So every time now I apply I double how much I ask for. Because it's like the St. Petersburg products but they're infinitely rich as far as I can tell. We shall see whether I die for or they give me the money. So this nice database is manually extracted. It's very clean. It's got 60,000 publications, 10,000 targets. So target is one particular type of protein they're trying to design drugs against. And 12 million activities, one and a half million distinct compounds. So it's a very nice large database and this allows us for the first time to really do metacusar work and there's lots of data that we can actually work on. Okay, so this would be a typical representation of a molecule. Like the white, log-P-hydro-pipicity. And here we've got the long fingerprints of the billion descriptors. And as part of the project we want to find out which ones are really important, which ones not. There's many different varieties of fingerprints you could choose and we want to test out which ones work. That's one of the parts of the project. 
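To make the representation just described concrete (one row per compound: bulk descriptors such as logP, plus long binary fingerprints, plus the measured activity), here is a toy Python sketch. Real pipelines use a cheminformatics toolkit such as RDKit for proper substructure matching and hundreds to thousands of bits; the handful of string-based tests and bit names below are invented purely for illustration.

```python
# Toy illustration only: real fingerprints (e.g. Morgan/ECFP or MACCS keys) use proper
# substructure matching, not string tests on SMILES. Bit definitions here are invented.

TOY_BITS = [
    ("has_oxygen",          lambda s: "O" in s),
    ("has_nitrogen",        lambda s: "N" in s),
    ("has_aromatic_carbon", lambda s: "c" in s),        # lowercase c = aromatic carbon in SMILES
    ("has_carboxylic",      lambda s: "C(=O)O" in s),   # crude carboxylic-acid check
    ("has_chlorine",        lambda s: "Cl" in s),
]

def toy_fingerprint(smiles: str) -> list[int]:
    """Return a short binary fingerprint for a molecule given as a SMILES string."""
    return [int(test(smiles)) for _, test in TOY_BITS]

def qsar_row(smiles: str, logp: float, activity: float) -> dict:
    """One row of a QSAR table: bulk descriptor(s) + fingerprint bits + measured activity."""
    row = {"smiles": smiles, "logP": logp, "activity": activity}
    row.update({name: bit for (name, _), bit in zip(TOY_BITS, toy_fingerprint(smiles))})
    return row

# Ibuprofen (SMILES as found in public databases); prints [1, 0, 1, 1, 0]
ibuprofen = "CC(C)Cc1ccc(cc1)C(C)C(=O)O"
print(toy_fingerprint(ibuprofen))
```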
And we've been putting together all this complicated IT infrastructure. So we have the basic databases here. We have the selection of algorithms, the machine learning algorithms. Here what we're calling bioactivity database is the database which contains the level one machine learning problems. So these are the cusar problems. And this one over here is like the metacusar database. So that's going to describe where each of the problems is one of the examples. Okay. And we're openML. So we're going to export the basic cusar databases to openML. This is a permanent place to keep them. Okay. So I said back to the stores the cusar data set information and the data sets. And the metacusar database is the metadata set. And at the moment these are in MySQL. So we have ambitions to put it into a semantic web RDF format. Okay. This is what we've been working on. So we have this our metacusar, our package that implements and runs the cusar models. So it tries to put all this together. It takes the data from the medicinal chemistry databases, Kemble, et cetera. It shoots the fingerprints. These are the descriptors of the molecules, calculates some molecular properties. These are also descriptors of the molecules. They only have to be done once each time. And you created these data sets, these roughly 60,000 data sets. And we want to learn cusar models for all the different data sets using different algorithms, using different sets of fingerprints, et cetera, and learn what's important. We also want to describe the targets, the targets of the proteins. We want to see whether there's different classes of target. For instance, there's a class called the GPCRs. These are probably the most important classes of targets. These are a particular type of protein which sit in membranes and receive signals. So your eye is based on GPCRs, your nose is based on them. Lots of internal signaling in the body is based on them. And they're one of the most important targets. A couple of years ago, the person got Nobel Prize for... Actually, two people who got Nobel Prize for finding the structure of GPCRs. One was for retinol, the one in your eye, and some like 15 years later for the one. In the brain. So most of your brain signaling is done by through GPCRs. So there might be something special about GPCRs which influence the learnings. We want to have a look at that as well. So I think a long time to put all this together into a system that works. Okay, so I'm going to briefly describe to show that this actually works. We wanted to show that the whole system could work together. So we decided to do our initial hundreds dataset problems. We're going to just apply our method. You know one brief question during the talk? Well, mostly yes. How important is pre-processing? Assuming that this is right, what I just said. How important is pre-processing for these datasets? Is this something you really need to get right? Or do you know how to do this and just push in the data through the regression technique? What do you mean by pre-processing in this case? I don't know. I mean I don't know your data. So the data is taken from these papers. And these are quite clean data because each point is an expensive biological experiment. So we're not processing after that. I suppose we could sort of put it all on the same scale or something like that, which may be something to think about, you know. But apart from that, there is a sort of, they're roughly about the same scale anyway. They're not. 
The data may be of different reliability depending on how much is known about the assay, but that's quite hard to get out. I don't think they really put too much information in when they extracted all this information out. So we're assuming that the data is reasonably good and we're not doing anything with it. And I also ask a question. Sure. When you showed the table, which is the presentation of what the data basically looks like, it was mostly just the description of the graph itself, right? Yes, yes, yes. Binary fingerprints and some properties of the chemical. Is also the target always known? In these cases, yes. And that's, I believe so. And, okay, so actually coming back to the pre-processing. What we have done is that we have collaborators who are proper medicinal chemists in the University of Dundee. And we've taken their version of the chemo dataset in that. The ones which they think they have confidence in. So they've gone through it and said, yeah, we really believe this lot. So it's sort of been cleaned up in that sense and that we haven't just applied everything. We've taken data which our collaborators think is the best data. Okay. Yeah, so we wanted to just to see whether we can get everything to work. We've took 100 datasets, 100 small datasets. That's important when you look at. Just wanted to take too long. We used the standard fingerprints and the standard descriptors. We used sequential forward search feature selection tool datasets. We used five fold cross validation and root mean square art model performance. Just to show that everything could work. We took 18 regression methods from the MLR package. Yes, so basic standard things. This is a pie chart of which method did best on each of the problems. I'm not sure if you can put much weight onto this. This is standard, both standard linear regression worked really well. And that's probably something to do with the size of the datasets, I think. If you've got a really small dataset, it's hard to apply something more sophisticated. So these are just, yeah, just showing that we could actually get everything working together. This is the average root mean square art for the different methods. Which one? This is linear regression again. What's RVM? What's RVM? It's nothing to do worse here. Sorry? Probably, you know better. It's doing very badly here, for whatever reason. I haven't used this very often. Yeah, I don't know. I don't put any weight on these results. Just showing that we can actually get things to work and that the handle doesn't fall off when you try to turn it. This is for one particular dataset, the different methods applied. No, this is the average for the different datasets of all the methods together. So some are harder to break than others. Okay, and for the MetaQSAR problem, we need to have some way of describing the data. At the moment, we've got to just use some really basic ones about the data, which are completely generic. Like the dimensionality instance, things like that. And this is the decision tree you get out of it. So it just shows, if you can see the first choices mean standard deviation of numerical attributes, explicit diversity index. But as I say, this is initial results. Just showing that everything works and it can be done. And hopefully in one year's time, it will have been done. Okay, I want you to see something about relational learning. So this is where it all started. So I have a long history of working on relational learning. 
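As a side note on the benchmarking setup described above (the relational-learning thread picks up again just below): the actual experiments used eighteen regression learners from R's mlr package with five-fold cross-validation and RMSE. Purely to illustrate the shape of that experiment, here is a minimal scikit-learn sketch; the model list, data sizes and values are placeholders standing in for the real methods and ChEMBL-derived datasets.

```python
# Minimal sketch of the per-dataset comparison: several regressors, 5-fold CV, RMSE.
# X would be fingerprint bits + descriptors, y the assay activity; random data here.
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(80, 256)).astype(float)     # 80 compounds, 256 fingerprint bits
y = X[:, :5].sum(axis=1) + rng.normal(0, 0.5, size=80)   # fake activities

models = {
    "linear":        LinearRegression(),
    "ridge":         Ridge(alpha=1.0),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "svr_rbf":       SVR(kernel="rbf", C=1.0),
}

cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="neg_root_mean_squared_error")
    print(f"{name:14s} RMSE = {-scores.mean():.3f} +/- {scores.std():.3f}")
```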
So trying to represent molecules not by this fingerprint approach, which I think is remarkably ugly, but using first order methods using predicate logic. And we've been working on this for a long time. And one of the reasons, I didn't put actually any grant application, but one of the real reasons for doing this work is I really want to test whether relational methods, how well they work against all the best regression methods and large proper data sets. Because no one's ever really compared things. We have some, we have our own evidence to ourselves when we're playing around with these things, but they work pretty well. But we've never had enough data to show that. So the nice thing about relational methods for drug design is that you have a nice representation that's really close to what the chemists use. Okay, so drug design and relational learning, we've been working on this for 20 years. We have this really nice representation where you can represent the relational structure of the molecule and sort of map it into the logic. And the basic level, you can just put in the atoms and bonds and the relationships between them and use that as the representation. But you can also add background knowledge about different structural groups. And there's no need to actually do all this fingerprint stuff. And this is some initial work we did showing that you could actually find certain sub-patterns in bigger molecules. So this pattern serves to discriminate between metagenic and non-metagenic compounds. So what I want to do as part of this meta-QSAR is to also compare relational methods to see how well they do, whether this radical different representation works. It's very nice as well because you can add the three-dimensional strife, you can add chemical group information. I told you molecules move. They're all constantly vibrating. This is important because when you, if you do the physical chemistry and try to model it, you won't probably get one minimum structure, you'll have several minimum structures. And you're not sure which one is the one that's actually physically interacting with the protein, necessarily. So it's, one of these representations is important, but you're not sure which one. So that's an interesting machine learning problem as well. What's that called technically again? Where you have different representations of the same instance. Is it a multiple instance problem? Yes. Multiple instance, I think. We have multiple views, so multiple future representations or multiple observations which belong to the... No, it's multiple representations of the same thing. So this is where the problem started in this drug design model. This is a multi-view, I would say, then. So you can kind of expect these kinds of features and look at the data in this way, in this way, in this way. No, no, it's the same features, but you're not sure which one of these is the correct one. It sounds like multi-instance. It is multi-instance, yes. My memories come back. The features are the same, just the values, are they? Yes. It's like a chain of keys, which one fits? Yes, so this is, this was the problem which, the first one that came out of machine learning was this, multi-confirmations of drugs. Okay, I wanted to say a bit about the robot scientists, because we're working on... So robot scientists, we're trying to do is automate scientific research. You represent the problem. Okay, we want to make a computer robotic system which can, in some sense, do its own research. 
Background knowledge about the problem, normally represented in logic. We have some way of forming hypotheses, some novel hypotheses about that background area, using abduction or induction. In Q-SARG, we're actually going to use induction. We have some way of forming efficient experiments. We have laboratory automation to do the experiments, and we cycle around until there's a final theory or we run it through some resource. And our robot scientist, Eve, is designed to do Q-SARG learning and early stage drug design. Okay, so the whole thing sort of fits together. We want to do the meta-Q-SARG for Eve. These are the diseases we're looking at. These are the actual parasites we want to find drugs which kill. Plasmodium falciparum, Plasmodium vivax. This is an interesting... this is one we've been working on a lot actually. So this species here is the one that kills most people, especially children in Africa, falciparum. Most people in the world get vivax. It's more common in Southeast Asia, South America. It used to be very common in Britain, and it's called the Agu. I'm sure it used to be very common here, you know, all this water you've got. It used to go all the way up to the Arctic Circle because although there's no mosquitoes in the winter, it, unlike falciparum, it can hide in your body over the winter. So it's... fresh infections were causing the summer by someone having the... overwintering the parasites. Yes. These are our targets, diadrofolate reductase. This is my one favorite target of all the... in the world. For some reason, this is probably the best target in the world. The first anti-cancer drug was against this enzyme. If you have a bladder infection, you get an antibiotic which targets this enzyme. If you get malaria, you're very likely to get a drug which targets this enzyme. That's the most important choke point in living systems. Okay, to formalize it for the world of scientists, we use graphs and standard chemoformatics methods for the background knowledge. We use... Eva's using Gaussian process modeling to do the QSAR. We use active learning to decide on efficient experiments. So however the pharmaceutical industry does drug design is that they... they have an assay, not to explain to you what an assay is, it's some cheap test. And then they have a large compound library. Normally this consists of hundreds of thousands of compounds, maybe millions of compounds. And what they do is they test every single compound, one after the other, against the assay. And once they've done that, which typically tastes even if a high-throughput robotics will still take them weeks to do that, they then look at the active compounds, double-check them with a more expensive assay to make sure it's not a false positive because most drugs are going to... the prior ones, most of the compounds are going to be inactive. And then they do the QSAR learning and make some new compounds to fit the drugs. What Eve does is try to automate these three steps. So Eve starts with a compound library, starts screening randomly, after it's seen enough hits, it stops random screening, goes back, does a more expensive assay, and then learns a QSAR. And then chooses compounds from its library to test that QSAR using active learning. And the hope for that would be more efficient and cost-effective than this sort of stupid way of brute force testing everything. And the idea is that if you can find most of the hits without going through the whole library, you'll save money in time because time's very important. 
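A hedged sketch, in Python, of the workflow just described: random screening until there are enough confirmed hits, then repeated QSAR learning plus active selection from the compound library. This is not Eve's actual code; the callables, thresholds and budgets are placeholders.

```python
# Sketch of the Eve-style screening loop described above (placeholders throughout).
import random

def activity_is_hit(activity, threshold=0.5):
    # Placeholder hit criterion; real assays have their own cut-offs and controls.
    return activity >= threshold

def eve_style_screen(library, assay, confirm, fit_qsar, select_batch,
                     n_hits_needed=50, batch_size=64, budget=2000):
    tested = {}      # compound -> measured activity
    hits = []
    pool = list(library)
    random.shuffle(pool)

    # Phase 1: random screening until there are enough confirmed hits to build a model.
    while len(hits) < n_hits_needed and pool and len(tested) < budget:
        compound = pool.pop()
        activity = assay(compound)                            # cheap primary assay
        tested[compound] = activity
        if activity_is_hit(activity) and confirm(compound):   # more expensive confirmation
            hits.append(compound)

    # Phase 2: learn a QSAR from everything measured so far, then use it (via active
    # learning) to choose which library compounds to test next, batch by batch.
    while pool and len(tested) < budget:
        model = fit_qsar(tested)                        # e.g. a Gaussian process
        batch = select_batch(model, pool, batch_size)   # acquisition over untested compounds
        for compound in batch:
            pool.remove(compound)
            tested[compound] = assay(compound)

    return tested, hits
```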
If you actually do find a blockbuster drug, blockbuster is one where you earn at least a billion dollars a year. So saving a couple of weeks time on the patent, what is that? That's quite a lot of money. It's the two weeks. So that's one twenty-fifth of a billion. What's that? That's quite a lot of money. So you really want to do it quickly if you're the pharmaceutical minister because once you've made your patent, time is time rolling. Yeah. So it's possible that it's more efficient to do it this way and that's what we're all testing. Okay, we use Gaussian process models. The nice thing about them is they're generative, which helps with the active learning. You want to compare this intelligence strategy of choosing compounds from your library, which you think are going to be hits, and to test the Q-sars, against just doing everything, which is beginning and going until you come to the end and stop. Can you add one more sentence on how you use the Gaussian processes and this active learning is specifically deciding where to do another experiment, which is unlabeled? Yes. So you want to take a compound from your library, which you don't know yet, which is going to... Okay, how to do that active learning is still a research question. So you use some kind of optimized entropy for that through the Gaussian process. So you take the next one where you're most unsure about this? No, because unlike in classical active learning, you don't care how well you predict inactive drugs or ones at the low end. So you don't want to minimize your uncertainty down there. It's at the top end you're interested in. So you have to... Is something like expected improvement, don't you think? Yeah, we've tried lots of different things, yes. It's some sort of compromise in exploration and optimizing at the top end. That's not completely clear of what's best. That's very interesting. Thanks. Okay, this is what's saying here. So you need to balance this exploration and this optimization here. The approach we used is where we combined estimate activity and high variance. So we tried to balance the two things together. So this was work with the University of Leuven. Another complication is that you want to do it in batch, which makes it computation much, much harder because it's easy to optimize one. But then if you want to choose the best 64 something, it's really hard. Okay, I was trying to explain these diagrams. So this over here is the compounds. And this is the active learning so that here we're finding compounds faster than randomly by using the active learning and do it to completion. And this is the cost here. So stopping about here is the most cost effective thing to do there. After here you're starting to lose money relative. And this is some sort of exploration of most of the space. So we had this model of how much everything costs. And by playing around with the different costs you can make different things. So how much does it cost you to miss one of the active compounds? How valuable would that be? How much does each compound cost? So we explored the parameter space. And most of the space is rational to do more intelligent and just try everything. Especially if you can do the asses quickly and you have a very large library. Okay, so we have using ease database for MetaQSAR as well. The advantage here is that we've used the same target from different species, which is an unusual thing to do for the pharmaceutical industry. So it allows us to compare different things. Okay, this is Eve's hardware. 
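Before the hardware tour below, here is one way to write down the "estimated activity plus high variance" selection rule just discussed, as a simple upper-confidence-bound style score over a Gaussian-process QSAR. scikit-learn is used purely for illustration; this is not the criterion Eve actually uses, and greedy top-k selection sidesteps the harder batch-selection problem mentioned above.

```python
# Illustration of a mean-plus-variance acquisition over a GP QSAR model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_gp(X_train, y_train):
    kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
    return GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

def ucb_batch(gp, X_pool, batch_size=64, kappa=1.0):
    """Return indices of pool compounds with the best mean + kappa * std under the GP."""
    mean, std = gp.predict(X_pool, return_std=True)
    score = mean + kappa * std          # kappa trades off exploitation vs exploration
    return np.argsort(score)[::-1][:batch_size]

# Toy usage with random stand-in data (real inputs would be fingerprints/descriptors):
rng = np.random.default_rng(1)
X_train, y_train = rng.normal(size=(40, 16)), rng.normal(size=40)
X_pool = rng.normal(size=(500, 16))
gp = fit_gp(X_train, y_train)
print(ucb_batch(gp, X_pool, batch_size=5))
```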
The most interesting thing I think is this acoustic liquid handler. So it turns out now that if you want to move small amounts of liquid around, the best way to do that is not to use pipette tips anymore, but to use some sort of sonic system, which sort of makes the liquid vibrate and little droplets exactly two and a half nanoliter fly up and land on the plate where you want it to stick to. And if you want 10 nanoliter, you say four drops, please, and four drops are pinged up. And this is much more accurate and much cheaper than using pipette tips. Okay, so I'll try to show a movie here. Okay, this is what... So Eve is about from here to that pillar and about this wide. And it's got these two robot arms. It's Mitsubishi ones, which... there's a smaller version of the ones that build cars. They're very, very precise. This was accurate in their movements. Now, this is the liquid handler, which does the pinging of the droplets. And this is what's called a 384 plate. So there's 384 little, small little vessels. Each one will be one of the experiments, one of the tests, and put different drug into different ones. Okay, this comes from the compound library. Each one of these different wells has a different drug in it, which some chemist has made at some point. We only have about 15,000 compounds, which for the pharmacy industry they have, fancy, millions. So it's the so-called crème duos. Okay, so I haven't discussed the assay, because one of the most successful design parts is the actual assay. We used this clever idea from biology to make assays, which allow you to target particular enzymes, but also do it in a living system, which is more robust in human cells. So we use yeast as an assay system. Okay. I'll stop it there. We deliberately have the robots going slowly, because especially when you move them really fast, it's scary, and they may hit something, and also it's more likely to drop something. Okay, this is the, we have found lots of new compounds. We've also been working on repositioning drugs. So the idea of repositioning drugs is that you take a compound which has been shown to be not too dangerous, because they're using it for some other, for some disease X, and you show that it works against disease Y. This is work we did on trichopanosome Brucea, which is, this is the organism which causes sleeping sickness in Africa. Okay, this is the most exciting thing is that we found this compound which is active against malaria, Dihydrofolate reductase inhibitor, and it's, I'm really sure it's safe, because it's a well-known brand of toothpaste. You get it, and I've seen it in my washes, and toothpaste is not that dangerous to eat, but children do it all the time. Yes, so it's quite interesting, I'm quite excited about this. We're just trying to do, so it works best against this malaria called the VIVAX when I mentioned. The problem with VIVAX is that we still don't know how to cultivate it in the lab. I said this once at this meeting, and I said, I did it as my PhD, you know, but what they meant was if you had a fresh supply of blood locally, you could keep them going for maybe a few days, we still can't cultivate them really well. So if you want VIVAX, you need to go somewhere where there's malaria, so we have this collaboration in Manias in Amazon where people have, and that's what's quite shocking, it's often come in, you can see from the genetics they've been infected multiple times, but there's multiple strains of VIVAX in them. 
Okay, I should say something about constructive learning. So the point is that we want to really actually make a new compound, not just test compounds from the library, so it's not active learning, we want to, what compound will optimize this particular assay? And that's still an open research question, you know. So a number of compounds that have been synthesized ever is a few million, the sort of space of compounds you could synthesize is literally astronomical, you know, it's different estimates, but there's a number of games with chess, it's ridiculously large number. Yes, so okay, 10 to the 6 days is a reasonable estimate of how many compounds you could synthesize. Yeah, and we've only ever made a couple of million compounds in the whole of human chemistry. And what's really nice now is that you get these chemical synthesis robots so you can actually get a machine, they can do a lot of chemistry, you can't do everything yet, but most of chemistry they can probably do. There's a complicated question in machine learning and optimization, how do you decide which compounds to make? Because you have to take into account the synthesis aspects of it, also how do you optimize this particular cursor? Finish off. I'm talking about, yeah, robot scientists and automation of science. So in chess, there's this analogy in chess and science, in chess that there is this continuum from beginners to grandmasters, and I think the same is true for science, between the type of science that Eve can do now to what I can do to your Einstein's and your Newton's and things. And if you believe that there is this continuum, it's not just no step function there then, robots I think will get better and better at science. And something like the hardware is getting better, computer machine learning is getting better, AI is getting better, the robotics is getting better. There's very little now that robots can't do in the lab. And I think that the collaboration of human-robot scientists together is better than either one on its own, just like even now in chess, even if my laptop can beat the world champion, human and computers together play better chess than computers do alone. And humans and computers playing science can do better than either alone. So this, Nobel laureate Frank Vltjevich, is on record saying that 100 years time the best physicist will be a machine, which I like, because obviously he means the best scientist of course. Computer scientists and biologists don't really count if you're a physicist. I don't know, I shall see. It's a pretty cool thing. Oh, okay, no conclusions. Okay, I'd like to thank my collaborators in Manchester and Brunel and Dundee who are on this Meta-Cusar project. The collaborators in Cambridge who've worked on making the essays for the drug design work, collaborators in Leuven, helped in the machine learning, and the laboratories for robotics. And I'd like to thank you for inviting me here and listening to my talk. Thank you.
Can we learn how to design drugs? Topics include: Automating drug discovery with the Robot Scientist. Using chemoinformatic databases and in-house datasets to systematically run extensive comparative QSAR experiments. Learning how to better apply existing QSAR methods. Decreasing the time and cost to develop new drugs. Prof. Dr. Ross D. King is Professor of Machine Intelligence in the School of Computer Science at the University of Manchester. King's research interests are in the automation of science, drug design, AI, machine learning and synthetic biology. He is probably best known for the Robot Scientist.
10.5446/16275 (DOI)
The first part will be dependence on parameters. Yes. I recall that I'll consider the case of one function and function from variety to a fine line, which is algebraic variety over C. And in general, I consider a divisor. But right now, just to simplify the notation, we'll know divisor and assume that f is proper just to simplify the notation. It will be easier to listen to this case. So we get a kind of family of compact varieties as such a total space is smooth. And I recall that if you have such things, and you get space of critical values, which may be some finite set, critical values. And then for each critical value, you get a local system on circle, which is kind of a set of theta in r mod 2 pi z. And the fiber of the local system, I denote at hb from beta from my point and theta in s1. The fiber will be the commod of the pair. I take pre-image of a small disk with center zi and radius r. The r is very small. And relative to the pre-image of zi plus exponent of 2 pi my c2 times r, a point on the boundary of a circle. And take commod with integer coefficient. So I get this local system of a billion group of finite rank over circle. And then I had some gluing data, namely if I have a line in the complex line of values of my function f and get several z1, z2, zk, and have oriented line, which is parallel to some direction h bar. Then I consider stocks of my local systems or corresponding to point close to the ion length on this line bundle. And what I have, I get a bunch of linear operators between stocks. Kind of actually integers between stocks of local systems. And this allows me to reconstruct total topology of all. This system of all my picture, so what I have, one can say I have collection of operators from i is non-equal to j, which are mapped from hb to zi, certain c to ij to hb to zj, theta ij. If theta ij is argument of zi minus zj. I have this data, but now I assume that everything now depends on parameters. So what I have, I have some projection pi from x to some space u. It could be topological space or complex analytic space, algebraic space. And it will be pi will be locally trivial bundle. And fibers of pi will be complex algebraic varieties. And f will be a function from x to c, such that for any u, I get the function f u from x to u, which will be pi will be 2.0. It will be just restriction of f to 1 of z. And this should be like Billino-Mell. No, no, no, the same. x could be, x could change as a variety. Local trivialstheological bundle. It's a right-slap. Yes, it moves like that in some sense. But also, kind of really assumption here is that the multiple values come from in-fit. So how to say it? The same as the s u, the f u, the c, you can see for each u, the third of the value values. And what I said, the map from u goes to the maximum of absolute value of z, z, c, and s u. So it's just some negative number. This map continues. So you cannot have a slightly larger particular value in z in fit. It's all collibunded. It's in place, yeah. Because of the properness of the map, yeah. It's not clear. I will talk about it. In principle, they can disappear. So it is not clear if it's collibunded, it's container. Yes, yes, yeah. And do you have some properness in the family that is? A properly proper, yeah. The x can? Yeah, I can see the compact domain in u. I can see the pullback of compact domains and to get a proper map. Yeah. P restricted to y minus 1 of compact is proper. Yeah, so you get this nice family. But then the number of critical values can change. 
Yeah, so let's assume that we have kind of big open domain when number of critical points stays the same. Then what will happen? So we have moving points. So zi, one can say that zi are functions on u, kind of moving points on plane. Critical values, yeah. Critical values, yeah. So set of open domain number of critical values. At least in algebraic situations, it's clear. We have this risk-open part when critical values stays the same. Then you get complex numbers depending on parameter u. And then over this u, you get maybe some kind of universal set of critical points. Fiber over u is equal to SU. We have to assume more than the number is. Yeah, it's a little bit more kind of move. Move continuously. We have the two of them coincide with the other one. No, no, no. I don't assume that they coincide. No, I assume that the set, the number of elements stays the same and points move continuously. Yeah. And it's constant and points move continuously. OK. Then we get this universal set of critical points. And this local systems, which we have before, and we get kind of HbZi theta, these conditional things, it's formed a local system on S1, this parameter C, to times this universal set of critical points. But the operators which we have here, they will jump for some walls. Gij will jump along certain walls. Let's me kind of roughly describe what this is going on. If points can move really freely, generically, all these points, none three of them will lie on the same real line. I assume that this gives us more than three points in line of Sq in a line in C. And if we move continuously, this will actually not jump. But suppose at some point three elements will stay on a line, on a real line, three critical values on a line. That's called Z1, Z2, Z3. For example, maybe I just continue this picture here because it's, yeah. Then if we move a little bit, it's for some Q. And it happens in real co-dimension, one wall in this U0. Because it's one real condition that three points lay on the line. And if you move a little bit through this, so it gets some certain kind of hyper surface in U, and you get point, maybe U, maybe U0, lying on the wall, and then you get two points in a parameter space, which are on the left and right side of the wall. And if you move a little bit, you get a triangle looking in one direction. If we move a little bit, U0, we get all three on the line. And if we move a little bit, you get triangle looking in the opposite direction. And one really can even call the sides of the wall plus and minus according to two types of triangle. Yeah, I assume, for example, I arrange this line also in one way. Then locally, because we have a local system, we can say that this vector, a billion groups do not change. And here we get some operators. So, we get T1 plus, T2 plus, T3 plus, T1 plus. Yeah, so the claim what happens here, is that the operator is coming from the corresponding two operators, which says on the shorter sides of triangles, do not jump, but what sits on the larger side, it's jump. And here it depends on conventions, how the operators are going from left to right, to right to left, essentially to G1, 2, whatever, minus, T3. Yeah. And what happens in the middle? In the middle, I think it will be left or right semi-continuous here, if I got it, just consider it as one of them. Yeah, there's some kind of ambiguity on conventions, but I think it's... Yeah, so that's kind of... You get this nice behavior, behavior and how to understand it. 
Yeah, so there's some kind of very general language which I'll use on my next... lecture, it's about wall crossing structures. So it's... Yes, in short. So what is a wall crossing structure? It will be wall crossing structure on what? It's a wall crossing structure, I think it will be on U0 times C star, maybe circle depending on... Planck constant or argument for Planck constant. Wall crossing structure is a form. You get certain topological space like this. And you get local system of Lie Algebra. For example, here what will be Lie Algebra? G, depending on U, H bar will be defined by its endomorphism of... You can see the endomorphisms of direct sum. You can see the sum of all critical points. So in terms of parameters, you consider H beta over Zi, ZiU, and maybe argument of minus H bar. You can see the things and... Lie Algebra is rational coefficient, which also graded by some... by a local system gain of lattices. Gamma, U, H bar. In fact, in our case, this local system will depend only on U. It will not depend on H bar. And gamma U will be a kernel of Z to SUZ. I take sum of all hyperplane given in coordinate space given by the equation of sum is equal to zero, root lattice of AM minus one. Isomorphic root lattice of M is the coordinate of SU. So this root lattice contains special elements, contains elements EIJ. Maybe some I to power J. I minus one contains such vectors for I non-equal, for the I non-equal to ZJ belong to SU. And graded components are the following. Why this Lie Algebra is graded by this lattice? If gamma is equal to EIJ, then the graded component is HOM. This graded components is HOM between two different spaces. And if gamma is equal to zero, you get all diagonal terms. So what does it mean in plain terms? In terms, we get finitely many lattices, so multiply back your rational vector spaces. And if consider endomorphism of this direct sum, we get block matrices. And you say that it's graded by root lattice of AM. And if we take diagonal terms with zero and the rest in the way to AM. So you get, sorry? If gamma equal to zero, I take all diagonal terms. Yeah, we get this all diagonal terms. Yeah, so the general story is that I have local system of lattices and local system of Lie Algebra is graded by this lattices. And what else I should have is continue the description of what is wall crossing structure. Then I get for each point I get a map for any grid bar in my space. I get a linear map for my grading lattice. And the map is the following. There's a grid element, a ij goes to zj of u minus the i of u of h bar. So we get this thing just before to define what is wall crossing structure. By the way, this is kind of, one can kind of reformulate a little bit what is going on. In the same cone we have gamma u grading plus the u h bar gives a derivation of my Lie Algebra, but this complex coefficient. And derivation will be semi-simple. So it will be finite dimensional vector space and operators will be diagonalizable. So it's actually, there's a good question, can one generalize to non-semi-simple derivation instead of grading case? Sorry, but it's kind of familiar Lie Algebra with semi-simple derivation. And now what is a wall crossing structure? What is wall crossing structure? You can see that in this total space some real walls, real co-dimension one walls in u cross c star, which consists of the following points. 
You can see the points u and h bar such that there exist non-trivial element in my lattice, which is gamma, such that z of u gamma is not, it belongs to, it's strictly positive number and corresponding components, we denote u h bar, gamma is not zero. The sum z i z j such that z j minus z i divided in our case, it means that z j of u minus z i of u divided by h bar is strictly positive number. So it means that we have two critical values on the real, on the line parallel to the real line. And we should do it coincide. So there are real walls and wall crossing structures is the following thing. For any pass, short pass, intersecting wall, we should associate element of the algebra, also there's some kind of wall and parameter space, and we have some short pass. And when we intersect the wall, we should associate an element of what? Of the algebra, I consider some of g u h bar gamma, exactly this point, of such gamma such that z u h bar of gamma is strictly positive number. Yeah, generic is only one component. For this thing, I have an element, maybe called i a u h bar, and then the condition is the following. And constraint is the following. If you make again generic loop for small loop, for small contractible loop, which intersect walls in some several points, if you take ordered product of exponent of these elements, is identity in corresponding group. Why it makes sense? Because if you make small loop, all graded components will lie in some convex corner, and you get nilpotently algebra. So, you can make, speak about nilpotently algebra. In general, next lecture I'll speak about some infinite dimension example. One should put some more serious constraints on the story, but that's basically the structure. And this property, it's essentially saying, saying as that we have this wall crossing structure. Let me explain to you. It's kind of, oops. Simple thing. So, we have the exponent of elements of algebra. So, you take the order of those, and the element is of the... No, no, my group is endomorphism of finite dimensional vector space. It's kind of group jail. So, all gamma such as the gamma is strictly positive real number. I get a map of grading. What's this element a? It's a wall crossing structure. It's given by some collection of elements. For each wall, it is showing this constraint. Yeah, but, okay, let me explain to you just some real simple cases. Kind of simplest case. Case when suppose all my singularities or singular points are isolated and actually holomorphic mores. Then this vector space is h, one dimensional. It's kind of, it's dimension of h. That was this b. If you... H is one. Monogram is plus minus one. Yeah, so it's essentially kind of have basis. Typically, in co-dimension two, in real co-dimension two, in this u0 cross c, c star, you get two possible pictures. Like this and like this. And what is the first picture? It means kind of typical point of the world. It means that you have two critical values divided by each bar. Same real line. So, you get exactly at this point what happens. You get some ze1, maybe they can same line as zj1 and somewhere else you get ze2, same real line as j2. And on one of them you have real parts of the first two coincides and another two coincides. So, these things completely do not talk to each other. And my operators, which I have here, operators, tij are just numbers. You can call it one, you need something in co-dimension one. Yeah, this is in co-dimension one. So, this is a wall. 
But in the wall, the wall in your picture is just a sling the line or the line or the line. No, no, no, no, no. Wall in my picture, it means that I have also each bar. It means that I have this equality. Each bar is also coordinate in my space. Ah, okay. Yeah. And these are just numbers up to sign, defined up to sign. Because we need some conventions. And here, these things do not talk to each other. So, the rule is the following. If you put some number a, some number b here, number b, numbers stay the same. And here is a, b, c. I claim that here numbers depend on parameters in the following case. So, c, a, b plus ac. And what is, yeah, so what I propose in this picture, exactly this formula if you replace things by numbers. And what is the meaning of this formula is the following. You can see that in this parameter space, you go one point to another. Or you go in this way. So, across three, the total combination will be trivial. So, it means that if you go one way or another way, it should get the same result. And you multiply three matrices. And matrices which you want to multiply is the following. Here I can make mistake. Put plus minus. Yeah, so you have this basic identity for three by three matrices, which tells you that if you go through these three walls and through these three walls, they sort of get the same result. Yeah, so this explanation of this formula from wall crossing perspective. Yeah, so it's kind of pretty amusing structure, which you get. All this, it describes what happens when all critical values are still all distinct, but they also can match to each other. So, now we can ask what happens on U minus this open part when eigenvalues match. And again, assume isolated more on U0 isolated more singularities. And if, let's say, depends on parameters, this kind of holomorphic and kind of U analytic, even projection is analytic. Then in complex co-dimension two, in complex co-dimension one, or the same as in real co-dimension two, what happens two critical values can merge. Yeah, there's also another possibility that these two critical values kind of coincide, but they do not talk to each other. But interesting part when they really merge, so what is a typical example? Consider, like, dimension of my fiber is one, and dimension of the base is also one. So let's call coordinate U and coordinate X here, and the function F will be something like X cubed over three minus Ux. Yeah, you can imagine some other variables, but maybe it's not necessary one here, so we have coordinates X1, Xn. I write only formal near singular points. You should imagine that it's compactified some way. Yeah, so get this stuff. So what are critical points? Kind of the effort S of U. You can see the derivative is equal to zero, so you say that X1 of U squared is equal to U, so it's plus minus. Square root of U. And critical values. You substitute here, square root of U, you get minus two-third U to powers three over two. Now, when you consider this full cross-interaction, for example, I just put h bar equal to one. I just put a small district to small sub-manifold. When I get volts, volts, it means that I have just two critical values, and the difference should be, so Z1 and Z2 of U, two critical values, so the difference should be real and positive. So it means that imaginary part of U to powers three-half is equal to zero, and so it means that U to power cube is positive real number. And what do we get? So you see that locally, these three volts are three rays. 
Here we get hundred and twenty degrees everywhere, and this is U plane. And what I claim is that my numbers, which I have here, my numbers are all equal to one, because in this simple case of space of one dimension, this matrices are just one by one matrices, just numbers. And claim equal to one, and this is something really nice here. It's really the folic identity, kind of similar to this identity. What is going on here? Why write these things? I want to go along this path, and I claim that the total monogram which I constructed is again trivial. So it's like wall crossing, associative condition for wall crossing condition, but in a point when I don't have any wall crossing structure anymore. So generalization of associativity constraint. Why multiply such things? Yeah, 1, 1, 0, 1, it's a matrix which I associate to crossing the wall. It's upper triangle matrix with element one. But why multiply by this guy? Because it's a matrix that can in two dimensional space which are two, I have two critical points, but if I move from here to here, these two critical points interchange the role. So I should interchange them, and also should care about orientation. Because I said that it's a one dimensional, but base element you know up to sign. One should take care of orientation, and it turns out that one of them should take minus one, and then I get this remarkable thing. So it's cube equal to zero. So there is something which happens on the Eucrose 2.0, one can say that it's kind of generalize associativity constraint. So if you take product over some small loop in U0 surrounding this divisor, again composition order product is trivial. And in what all this will give us, because composition for any loop is trivial, so the result maybe in again in Eucrose C star. And the conclusion that we get local system on U cross C star. We get kind of wrong local system on U0, which don't accept places where critical values collide. And we modify it along the whole source, it will be extend to local system everywhere. So local system is, what is it? It's global H beta H bar of its fiber at U H bar is equal to global beta H bar homology of XU, which come homology of pair when I put homology of pair of XU and the main one, FU divided by H bar tends to minus infinity. Now, so that's rough description of what happens topologically. And in fact, there's something else which I want to describe when critical values collide. It's some kind of stability effect for merging critical values. It's something very general, it kind of, it's, that depends on the half algebraic variety and so on. So first speak about isolated singularities. Yeah, suppose we have a germ of function, of analytic function. Again, this is my function F from a ball in Cn to C, a germ at zero. So restriction of function of ball containing zero. And this isolated critical point. Then for such thing you have a Milner number, some invariant, Milner number, Mil of function and maybe point B0. Just called mu, it's just, it can be described in two ways. Consider homology of the ball and consider preimage and assume that F of B0 is some of the zero. This is zero plus epsilon, where epsilon, very small. So this homology will stabilize and you get, I think, and this Milner number is the same as rank of, I can see the function of the ball and mod out by ideal generated by derivatives. We'll find it could dimension ideal. In fact, it's better to consider it's actually a rank of top degree homology of complex of omega of the ball with differential cap with df. 
The commodule will be sits on it only top degree. And if you realize, kind of identify with volume element to get top degree commodule here. So it gets Milner number, which is just some non-trivial singularity, non-negative number. It's mu equal one, it's equivalent to get more singularity. And for example, for x cubed mu equal two and so on. So it's some characterization of singularity. And this number behaves kind of stable way. Suppose you move a little bit, deform a little bit function. So it will be some function depending on parameter. And then in this ball, this critical point will be replaced by several critical points. Maybe p0, we get something like collection of pi of u. We get collection of critical points. And Milner number, it's all Milner number of my function and point f0 and p0 is equal to sum over i and fu, p, u. So that could be a several critical points above it even. Yes, yes, no, no, no, I can see the critical points. It's for each critical point. So it means that the x cubed can decompose to two more singularities. Milner number is number of more singularities on which the critical points can decompose. So you get this kind of positive number. And some of the other positive numbers get this conservation law. And this conservation law has a generalization. Maybe there's some generalization. Suppose you get some halomorphic function on some global variety. And let's consider a set of critical points of f. A set of all points in x, so that the f is sticking to the empty space x is 0. And the components is a union of connected components, some union. And suppose it's one of them is compact. Now I will speak about only this connected component. Then one can associate with this the following contribution for this component depending on angle. B for beta c is for the direction f. And it's defined in a similar way to the definition of Milner number. Namely, you do the following. You consider, first of all, you pick some remaining metric. And don't assume this thing is scalar, so the remaining metric on x. And what I take? I take delta neighborhood of this component and take f minus 1. Maybe just call it 0. Small z0 is f of this large component because on every component of critical set, the function is constant. So it will be just some complex number. And I take preimage of this complex number plus epsilon exponent i theta, is integer coefficient. And I take limit, I take double limit. First epsilon will go to 0 and then delta will go to 0. There are some natural maps and this thing stabilizes, so limit is well defined. So it's a generalization of this commodor of pair. And I define Milner polynomial. So my component is sum from i to minus n to plus n and is the dimension of x, as usual, a rank of h a plus n, maybe theta, it doesn't depend on theta of this 0f times t to power e. And it's a polynomial in Laurent polynomial variable t, with non-negative coefficients, maybe called mu, f. And this sort of critical points, consider connected component of critical points. I get this polynomial and claim, it also satisfies the same property. If you deform a little bit, so you get very kind of complicated critical set, a deformant can be decomposed by several pieces, maybe of smaller dimension. I get the same equality. This polynomial will be equal to sum of polynomial of different thing. And in fact, one can construct isomorphism similar to what I explained to you before, just equality of polynomials and get similar property. So how to write it more scientifically? 
The thing, the thing one can write as a, this is hb to sigma, is the same as hypercommulgary of this theta and consider shift of vanishing functions from maybe constant shift. You get more scientific notation, if you like. There is something very bizarre. For isolytesic and Goulet's millon number, it's always positive. But for non-isolytesic and Goulet's, this polynomial could be 0. So the phantoms, which cannot really guarantee that they appear or disappear, there exists a phantoms, said that this thing is actually 0. Sir? You've been on attack thousands of coefficients. But on global variety, yeah, it's not clear. Maybe there's a, it will be not surprising that this situation, if you put any local system on this critical set, it get 0 hypercommulgary. No, but at least with constant coefficients, at least I can explain it. So not only the rank, but the group themselves. Groups themselves are 0, yeah. Okay. Yeah. Comulgically, it's kind of doesn't exist, yeah. And example is very simple. Your variety, you take product of elliptic curve. You don't know if the component can disappear somehow. Sorry? It could disappear in principle, yeah. Yeah, maybe there's some other, yeah, in fact, I don't believe that this thing can really disappear. But, yeah, it looks hard to believe, but at least comologically, I cannot see. They cannot disappear. So let's, let me show this simple example of phantom. So you get point x and t, say, and you get function equal to t square. It just depends on second variable, but you're more doubt by involution. So x goes to x plus x0, t goes to minus t, and x0, it's two torsion point on elliptic curve. Then on a quotient to get critical points that will be elliptic curve divided by this shift, but local system will be a rank one local system. There's no trivial monodromia, and it will have, it will kill all comology. Yeah, so it's, it's, this doesn't disappear, yeah, because it can put comology with non-trivial local system. You can twist by local system, then it will not disappear. But I don't know, maybe one can cook out, it's, it's, because there are some people construct examples in, I think one connected examples when comology are zero of these phantoms. Yeah, so, yeah, so this things, it disappears and just maybe before you break, I'll just say you can't go on couple of words about Hortz theory. First of all, this Milner polynomial is symmetric with reflection t to go to the universe. It's falls from point of credibility, but in kind of good case. So in fact, it's maybe put a conjecture, assume that, that the zero, there exists an embedding zero to a Keller manifold. Not my manifold, but some another Keller manifold of different dimension. I would say that's kind of singular spaces killer. This year it's one of components of my critical. I have just complex manifold, have a function and I assume we have. So we just did the given value of the parameter. Yeah, yeah, yeah, I don't do, speak about parameters now, I just speak about. Taken with the, the loose structure or with the structure given. Even with reduced structure, I think, yeah. I think it's zero reduced, could be embedded to killer manifold. Then, then this could be some kind of good Hortz theory. Then, if consider hypercomology of again neighborhood of the zero. This form with a kind of analytic forms. This differential HD plus the F. This, this homology, this rank is doesn't jump. As each bar goes to zero. And, and it's clear for each bar not equal to zero that it's coincide with this each beta. 
So, automatically without any assumptions equal to, for each bar not equal to zero get, beta, the RAM comparison. Oh, sorry. Sorry, I'm just writing nonsense. I think this is. So it means it's kind of vector bundle or formal line. This guy. Can be compared with. Wynch cycles. Yeah, so it gives you some kind of Hortz theory and moreover. You get kind of left. Decomposition. So it means that if consider even in odd part of this middle polynomial. And consider what is the graph of coefficients. You get kind of bell like curve so it will be there. Increasing then decrease this coefficients. So some primitive come all to have that's a little bit more than what one can extract from literature. And, but I think it's that's really sufficient conditions that reduce part of critical point set is itself a scalar and the rest is. Okay, so now I'll make make a break for maybe five, seven minutes. Yeah, so to be kind of. Very good ideas and we'll go to concrete example. Yeah, suppose we get infinite dimensional kind of acts and kind of infinite dimensional complex analytics. Many fault. Yeah, I don't know what is it, but kind of. At least should have tangent spaces, which are complex vector spaces. So and in the get some function on the things you get a homomorphic function. And assume that set of critical points. Is say finite union. Union of first it's compact. And finite union of complex space. In usual sense kind of finite type. And dimension. Maybe very singular one. Why it's kind of natural assumption because you're more or less should fall from the conditions that consider second derivative. Of. Of this function. Is the operator from tangent spaces content. This should be some kind of friend column operator. So it should have only finite dimensional kernel because these two spaces are more of the same dimension dimension. So it's a difficult situation and. In this case. We can replace. F near each component. Each alpha. By finite dimensional model. That's in general it's not. What what what one should do you get some kind of compact. Final dimensional set the alpha in some of this infinite dimensional space. And then one should choose. The sub bundle. In tangent space to x infinity restricted to the alpha. Or finite co dimension. And kind of. Kind of will be kind of T transfer so. Inside the T transfer so intersecting quiz. Maybe skin theoretic tangent space at each point which is finite dimensional is zero. So it means that. You can extend it. To. Kind of vibration. The full issue of maybe finite co-finite co-dimension. Near near alpha. And one can think about it's really kind of vibration or fallation. So you get some neighborhood of the alpha. Open neighborhood. It maps to some finite dimensional space. And. The condition essentially means that function restricted to fibers. Has. Get some kind of projection. Maybe you find it. And function restricted to fibers. Of me point you. In any point in you find it. Only one. Isolated. More singular point. For this projection you have neighborhood. What essentially want to say that in all variables except finitely many one can. Kind of think it's my functions more function some squares. Function is kind of a five five dimensional function. Five dimensional meaningful. No no here I get function in infinite dimension manifold. But the infinite dimensional part is quadratic. It's kind of quadratic there. And then I can make a new function. Is it a logical or abstractions for this like. No no no that's not a logical obstruction. 
It's analytic I take some sort of some finite complex could I mention. Chain classes will be not obstruction. And the fact that the and the following the quotient. Yeah but yeah because it's. In general the question thing is not not how's Dorf yeah. Yeah but I think here's a sort of no trouble yeah. And you get new function on. On you find it kind of a finite. Which is defined the following it's valid point. You will be critical value unique critical value. Of F restricting to the fiber. So here there are no coincidences when you go you know. Because of the assumption. And we just drew the story so we get some finite dimensional replacement. And then you see that you get a constructible sheaf of finishing cycle. In derived category. Phi of F finite. And you see D be constructible of this the alpha. The alpha will be the same. As critical points of F F finite. Okay now so you get a shift of finishing cycles. But there is a trouble here. If you choose different finite dimensional direction to get a different shift. It's it's some ambiguity. I'm big beauty but 10 by. Rank one local system. With monodromy plus minus one. And also. Should make this choice. What happens. Kind of immediate some finite dimensional thing with some finite. F finite. And make a multiply. Make kind of Tom Sebastian sum kind of add function additional variable. Then consider vector bundle complex vector bundle even find a dimensional. Vector bundle E. This quadratic form Q. Yeah and then quadratic form. It gives you change your shift of my second by one dimensional space and it depends on rotation. But can you like any two five dimensional reductions. By going through a certain one. Yes, but it's still get ambiguity can you have kind of many. In infinite dimensional case I think it's you don't have canonical shift of my second cycles to get. If you find six cycles defined up to multiplication by this plus minus one. Yeah, yes, yes, exactly. Yeah, that one can do. Yeah, so we need some do this choice of this local system. And that's it. Yeah, it's it's and this something. I'll call orientation choice. Which something which have to be done. And now. And then then what then will be. Very nice thing what happens if you make this choice. Then we get shifts of vanishing cycles. So you get some set of critical values. Kind of the I the alpha which will be functional alpha. You can see this critical values. Then you get a local systems over. Yes one by taking commodity with shift of vanishing cycles. Of the alpha and this kind of shift of regularize shift of vanishing cycles. Of my function divided minus critical value and divided by each bar. Of constant shift in some funny dimensional model. Yeah, also shift dimension. There's something interesting also going on when I go to six one shift. To when I go different reductions you get to shift in degree. Come on you should remove dimension of your space. Like in this Milner polynomial. I shift the grading by dimension complex dimension. Yeah, and then it will be. Well defined so get local systems. And then one kind of kind of. Then I should have this operators T whatever I J from the I to the J. From commodity from the alpha to the alpha J. And if you try to think should have some notion of gradient floor and so on in infinite dimension. Do you consider only finite function. I mean this shift of vanishing cycles. For finite function. For finite function. The shift of the dimension. Then the sheet by dimensional finite dimensional guy. You find it. And you stabilize it. 
It is choice or you take off. Yeah, this is a choice which you should make. Yeah, how you organize the things. Then I should count. T I J it will be something something like generalization of number of gradient lines. And this should be kind of elliptic problem. Again space of solution should be finite dimensional. Maybe real. Whatever semi analytic space. Gradient lines for some gradient floor. Because you remember in finite dimensional if more singularities. This operators in case when we get more things which are integer numbers. Counting some gradient lines. And then an infinite dimension we should be able to write some notion of gradient floor and to rephrase what is going on here. What is the lift in program? What do you have to write to write a gradient line from one critical point to another. It should be reduced to some elliptic solution of nonlinear differential question with elliptic symbol. You'll see an example right now. The infinite dimensional spaces are like seems like binary analytic spaces. No, no, no, no, it's not that it should should not matter at all. In concrete examples you should make sense of individual stuff. Of what the gradient lines. Without going to foundations. And then defined come all the. Through kind of back door without looking for the infinite dimension come all the and so on. So what is concrete example which I look is a falling. Suppose I get. I want to get infinite dimension manifold with the function. I start with holomorphic symplectic manifold. So it's holomorphic it's kind of complex and it's a variety with a simple electric form. In dimension is now to N. For some N and I picked two. Let's say closed. Again complex analytic homomorphic Lagrangian sub varieties. Submanifolds and waters will be my space. It will be space of C infinity maps. One can probably put some banach game apps. Maps from what from interval zero one. To M such at F zero belongs to zero. One belongs to one. Yeah so get the space this space carries a canonical. I think it's kind of complex manifold it's kind of like product of all points of intervals from a manifold time. You're going to make. Yes, yes, quantum mechanics. Yeah, but this is infinite dimension complex manifold. Tungent space is complex. If you look to this and it's has canonical one form closed one form. What is one form called alpha. It's you integrate two form over the interval. So what precise definition. Kind of. And take pullback of omega by evaluation map. From X infinity and zero one to M. It's one form. But it's not exact. It's to be exact. So you want to write this differential some function. One need to make some assumption here. So we assume that omega is a simplecate manifold is exact in the sense that omega is written as differential one form eta. It is kind of one form. And it's it is chosen. And also both Lagrangian manifold say exact in the sense that eta restricted to li is written as differential of f i where f i is. Holographic map from li to C. So it makes choice of. Of eta and. If you were one. Okay, then the function f then f infinity. Such that alpha is differential infinity. It's very easy to write its functional path given by integral of the path of pullback of my form eta. And begin plus minus sense it's very easy to get confused. Plus f one at one end of the interval minus f zero to another end of the interval. It's easy to check this differential gives us two form. Okay. What are critical points? No, no, no, nothing. That's it. Yeah, just two Lagrangians. Yeah, exactly. 
What are critical points? It's very easy to write what are critical. Just see when derivative is zero. In terms of its kind of constant maps. From zero to some point. M city can intersection. So the set of critical points is either morphic to the intersection. And let's assume that intersection is compact. Okay. There is a kind of question which I explained here in general station. There is kind of ambiguity how to put the shift of finishing cycles. You could kind of it's well defined up to plus minus one. And here's critical point is a manifold. Sorry. These are not the points. No, no, set of critical points in the intersection of two two submanifolds could be very, very bad. Not transversal. So these are not the points. They don't intersect the point. In some maybe single very singular spaces. Yeah, the intersection is not transversal in general. And so locally. One can do the following one locally can identify M, M near some points in zero. One can identify M with cotangent bundle to zero. I just put some kind of transversal lagrangian relation. And also one can put transversal lagrangian relation such it will be also transfer to L one. And then L one will be a graph of function. Maybe some function f L zero graph of differential function f L zero is some function in a zero. Sorry. Sorry. Sorry. The mixing languages here. Sorry. In only locally. Yeah. And then you see that locally this looks like a critical points of function finite dimensional variable. So get this finite dimensional reduction and then I get shift of vanishing cycles. And I assume that f L one intersection on component of intersection is zero or near my points. I get shift of vanishing cycles, but the problem is so locally it's well defined. But again, I've defined optimal multiplication by plus minus kind of one dimensional space. If you analyze carefully if you change the splitting in global, it's not well defined. So one should do something. And to order in order to get orientation. Choice. It's something which several people are analyzed recently. I don't understand the ambiguity because locally because of the symplectic form. Yeah. You can there is not much. Yeah, but you can still choose this identification with cotangent bundle in different way. And then if I follow. The differential is the same. All those ways are connected. It's connected, but it's not will be not simply connected. It will space of choices will be not simply connected to the end of the. Because I think because of the fact that the symplectic form goes over to the standard form. Yeah. And this I think is a. You get some eventually you get some not simply connected parameter space in locally. Yeah, yeah, yeah. So the story which was analyzed maybe by Joyce and Braf and some other people. So you need differently certain choices and the choices. The following. I can define. You should fix some kind of class of H 12 manifold and Z more to. And in fact, one needs some kind of lack of representative this class. Plus representative and convenient choice will be the following. Use some line bundle L M line bundle on M just apological said that beta will be first-gen class of the things more to. And then on each L I. Zero and one you should identify restriction of beta. Identify it's again some choice with first-gen class of canonical class of L I. It's kind of top degree form. What to. Yeah, so for example, if you make these things, what do you what do you do you choose. Square root. Of the line bundle, which is L M restricted. To L I. 
Tensoring canonical class of L I. And it's in real life, it's really very essential thing even as a case of Cartesian bundle. This beta. Kind of if BT if M is Cartesian bundle to X. Then beta is. This LM will be tangent pullback of canonical class to X. It's definitely not trivial. Things to choose. So you need this nasty plus minus one choice. So we then we get well defined shift of finishing cycles. Just it's pure topological. Yeah, it's completely topological data. Yeah. And what are gradient lines? Gradient lines. Passing X infinity. Now, for example, if M is. Killer. If you choose to some killer metric, then on the space of X infinity also one can choose killer metric. So you choose killer for one one. What will be killer for one X infinity. If you get some, let's say pass and considered tangent vectors, so again, two sections of a tangent bundle. Kind of get some pass phi and you get five dot one and five dot two are tangent vectors in the space of pass. Then the things the killer form will be kind of integral from zero to one. Pairing. In terms of space, my manifold multiply by dt. So we use volume form. Dt on on interval. I can choose different volume from get different killer metric. So it's an example of gradient line. And if you write what is it, what is the pass? It turns out it will be maps from what from zero one times are kind of time parameter along the pass. So it's a get the map from a strip to M, which is pseudo holomorphic. Well, I mean to point where did the everything was kind of mechanical with one dimensional. Yes, yes. Passing through. Suddenly you get to the mention. Yes. Now, because I should consider in general gradient lines in my space. So I can see the pass in space of pass. So automatically get right equation. What is the gradient line to get? Save the golomorphic pseudo holomorphic curves for some pseudo for some almost complex structure. Which is not the original structure, but something completely different on them. Yeah, so get the second and the claim that this is a series of the kind of pseudo holomorphic curves with boundary and this gives some this integer numbers. So in general operators between Komolji, which I talked about. And then one can play the same game consider manifold depending on parameters with functions. So start to move your maybe Lagrangian manifold start to move and then you get flat connections. And what will be kind of concrete example and then I just want to show that it gives some explicit formulas. So the. The original manifold. Yeah. Which is complex complex structure and which almost complex structure is it. It's it's it's determined by killer metric and holomorphic form by kind of point wise you use some kind of hyper killer. If it's hyper killer is you can use one of the integrable. Yeah, it's yeah, it's something which in principle you don't want to do. But claim is a phone. Suppose MS cotangent bundle to some manifold X and I will have a zero in M will be exactly Lagrangian. Submanifold. Yeah, here is this to form is kind of DPDQ is differential of. Form it is canonical one form cotangent bundle and then we can speak about exactly ground so it means that zero is DF zero for certain. No, no, no, I just want the main point I take. No, infold with Lagrangian and and also assumes that zero intersecting with projection of L zero to X. This projection is proper map so it has only. Fibers don't go to infinity. Yes, there's a plenty of such things. Such story. Acclaim that you have a soft canonical. Bundle with flat connection. 
On X. Maybe depending on parameter H bar. You can introduce parameter H bar. It's all in my game. How do we do this? How define this bundle this connection? So get fibers depending on point on X and H bar. The definition is the following. Again, kind of for generic X and H bar. You do the following. You consider my manifold related to pair space of pass. From L zero to L one depending on point X, which is cotangents point and point X. So I have. This will be my variety is yours to be my variety one to point X. I have two varieties and as explained. This is a long X right X capital X. No capital X it's here. Yeah, big point on my X. L one is based constant section so for not a one is not constant. It's kind of fiber. It just moment. So just fiber. Yeah, yeah fiber direction. Yeah. Then I get a family of my infinite dimension manifolds with function depending on parameter. This pass space I get family of my infinite dimension manifolds depending on parameter. And then I explained to you that should be some wall or walls some isomorphisms and so on. And then we should get a local system on parameter space multiplied by C star. Maybe this thing like this. Now that's something which one can do very concretely. Yeah, yeah, yeah, it's kind of transcendental construction. But I think this construction will solve this. I have this bill of canal many years ago. Conjecture that automorphism of very algebra is the same as simple polynomial simplectomorphism of vector space. And then towards the generalization if we get kind of simply connected Lagrangian submanifold should give some canonical demodule. But simply connected automatically exact because this form can have any trivial periods. Yeah, so it's a concrete story here. So you've got another some construction which is analytic instead of using cryptoristic. Yes, yes, instead of using cryptoristic. I don't know how it's related here. Yeah, yeah. So as far as I understand, so the calculation should be that you are in language of quantum mechanics, you're calculating. You have in the first in L1, you have wave functions and functions of momentum. Yeah, maybe you don't have really little time maybe just finish discussion. Yeah, yeah, for example, there are some kind of tricky kind of very concrete example. Let's consider one can construct some embedding like of C to C square. Algebraic embedding. In the image it will be cotangent bundle to x which is axis C and this will be Lagrangian which will be M between my Lagrangian L0. One can construct very tricky exact Lagrangian. So for example, get rational curve city can see to, yeah, just want to give you some example like point T city can see goes to point with coordinates T plus whatever T to power 4 T square. Yeah, it's kind of not obvious but one can check that it's embedding different points because, yeah, in fact, if you get two points go to the same points, then you see that if you get two equations then it implies that T1 to power 4 is equal to T2 to power 4 by squaring this equation. Then you get this T1 equal to T2. So it's really embedding, yeah, so it's in C2. Yeah? This square, yeah. Yeah, it's easy to check that this guy is embedding. It gives point to some tricky demodule and how I construct this demodule. I just follow this all length and construct wall crossing structure. I write it my form Q and P, yeah, like standard coordinates. The problem is what is corresponding to this bundle is path connection. 
Sorry, yeah, it's the bundle that connection but we'll have also some irregular structure. In this case it's not interesting to speak about, it's a bundle on C, this is the connection but also with some stocks filtration to infinity. So it's algebraic? Algebraic, yeah, I get algebraic. The construction, is it algebraic or does it use it? No, no, no, no, no, construction is not algebraic, it will be completely transcendental. So it doesn't have an algebraic structure? Really, no, no, no. It's just a morphic part. Yes, but also one have kind of stocks filtration to infinity so one can construct by classification of irregular singularity, algebraic bundle. So you have to, in addition to those stocks, Yeah, so you write form PDQ, we stick into the things, you write this differential function, F0. You should have this function F0. You get some function F0 which is, I don't know, TQ over 3 plus 236. You just substitute to get some polynomial one variable. And all together it means that you get a multivalued function in one variable, algebraic function, which is called maybe called kind of G. G of Q is equal to F0, it's equal to F0 of T. VT is a solution equation, which is a solution equation T plus T04 is equal to Q. Yeah, it's kind of... What is the motivation for this power for F0? Just because from this, I restrict my one form to Lagrangian manifold. Yeah, so eventually what I get after all these things, I get some algebraic function in one variable. G of Q, or Q of maybe X. It's a point, you get algebraic function on one variable. And I think this is a really funny game with algebraic function which you can do. Then one can try to... We do it on computer or whatever. We get a function which has kind of four-valued function. Because my equation has four solutions, so the solution of the equation is substitute to the things. Now I should write wall crossing structure. I put H bar equal to one, and I have this walls, and there are walls where... Maybe call it GIT from one to four. Maybe X, maybe one to four. And I wrote walls in C as coordinates, namely walls are imaginary part of G of X is equal to imaginary part of G of X for I non-equal to J. So I get this thing, and if I analyze what happens, you get something like three ramification points. So one get some picture like this. I don't know what's... You get three critical points, and then you should kind of continue. You get some picture. You get some finite graph. Yeah, it changed Q is equal to X, yeah. I get some finite graph, but the graph will be oriented. Because along edges, real part... On edges you get two branches have the same imaginary part. Orientation, when we read G of X minus G of X, it's positive and increases. I just first take order and automatically can order which one is first, which one is second by real part, and then have a direction in which direction this difference between real part increases. So I get some kind of positive increasing function on my graph. And now I should put some... Now I told you that for wall crossing structure, I should put some integer numbers, because I have critical points. The numbers I know I should put one here, near each thing. And then I have one rule, and I have another rule, which uniquely by induction kind of by growing to this flow, reconstruct my numbers. So I don't have to solve this equation for holomorphic curves at all. Yeah, so it's automatically I can do it. So it's canonically defined. Wall crossing structure is canonically defined by initial data. 
And what you get, you get local system on C, which is not very interesting, of rank 4. Rank 4, because this projection is here, for a little bit. You get rank 4 local system on C. Just vector space of dimension 4. But what goes on? So I get this picture, but then at infinity, I can go, I can do some other lines. So it will be blue lines at infinity. And as x goes to infinity, given very different equation. When real part of gix is equal to real part of gjx. Are they walls? No, those are not walls. But these are things, yeah, but these are things responsible for stocks filtration. When considering differential equations, we should have filtration solutions labeled by real parts of these things. And my vector bundle will have basis for elements, the solution of everywhere outside of the walls. And I glue them together. And I know how they order outside of the walls. And what I get, I get kind of stock data for these things. And I get kind of transcendental description of differential. You get the stock data exactly? Because my local system has canonical basis. Corresponding to solution of my equations. And then outside of these blue lines, I have the order of them given by real parts. So I get filtration of this story. And then the service case works marvelously, so you get filtration of different stock sectors. And this is all the semantics of anti... Yeah, essentially what goes on, on C infinity decomposed by several sectors. And in your vector space, you get a complete flag in each sector, which changes if you go through the stock's direction. And you can read from all this picture in completely explicit ways. So one can really run computer program and get the story. Is it some additional structure to this wall cross? No, no, it just follows from... Yeah, this stock 6 and infinity is something which you should... I didn't tell you, but it's kind of naturally follows from this all the game. Yeah, but that's how eventually it leads to some absolutely elementary story. And this is very mysterious in fact, because I can check the canonical demodule. And this demodule should be exponential motivic. So eventually it should expect that it should have some cyclic on X cross a fine line. You get motivic demodule, then multiply by exponential function, take projection. And here it's not clear to why it's game motivic, so I get only better realization. What did you say now? I didn't catch the last phrase. Yeah, this... I said it should be some demodular source to this exact Lagrangian manifold. But it should be not arbitrary, it should be what's called exponential motivic. And which means that it should have motivic demodular product of manifold with a fine line. Again, some subquotient of some Gauss-Mannin connection. Where is that fine line here? It's something very general, it's really irregular singularities. In general, on complex algebraic varieties one can see the exponential motivic demodules in the defined conformal way. You can see the product of your variety with kind of universal a fine line. Take motivic demodule there, multiply by exponential function in the last variable, and take push forward to your manifold, kind of family Fourier transforms. Yeah, I think I now have to stop here.
The goal of the first part of the course is to describe and compare various cohomology theories for algebraic varieties endowed with global function. In the second part infinite-dimensional applications will be discussed, including non-perturbative quantization of algebraic symplectic varieties.
10.5446/16273 (DOI)
beim Okay, I will start. So today I will explain the Riemann-Hilbert correspondence and in fact some results which is stronger than Riemann-Hilbert as which I'll see. The plan of the proof is very the same as in the irregular case. But in the regular case it's much easier because we work with the shift of OT of temperate holomorphic function and in the, that's for regular and for irregular you will have this horrible thing that Masaki will explain. E is for enhanced. Okay, but this is later next week. So I first I have to, maybe it's useful to recall what is irregular autonomic demodule to give a definition. So X is a complex manifold. Then you have OX, DX, mode, current demodule, autonomic demodule. And inside you have the category of regular autonomic demodule. So I give a definition. So called lambda, the characteristic variety of M in T star X. So it closed. So we assume that M is autonomic. So lambda is a closed complex analytic conic for the C star action, a C star conic. So a priori if we are here it's coisotropic. So a lot of people say it's coisotropic by Gabber's theorem. But this theorem is due to Satou Kawa Kashiwa. Later Gabber gave a purely algebraic proof. But the original proof is to Satou Kawa Kashiwa. So if we are here, then it's Lagrangian. That's the definition. So a module is autonomic if the characteristic variety is Lagrangian. So the question is what means regular autonomic. So denoted by I lambda, the ideal of the grad dx of functions vanishing f restricted to lambda equal 0, defining ideal of the Lagrangian manifold lambda. Then so the definition is now M is regular autonomic. If locally on x there exists a good filtration such that I lambda applied to the graded of M is 0. If you prefer, if locally there exists a good filtration which is reduced. So that's the first definition. So it's a theorem that I don't prove it here which is due to Kashiwa. I guess maybe there are other people who work on this that a mode regular autonomic of dx is a sick Abelian subcategory unstable by a lot of things by duality proper direct image duality for the module. I don't give all definition now all the result now. So it turns out I believe that there is some sort of canonical global filtration. So that's another problem so that for proper maps to prove the fireness there is no problem because of this filtration because usually for proper maps to prove fireness you need some kind of good filtration at least if you're not a bright set that either you are a complex set that when you need some technical condition which is probably so then when you say this theorem about proper maps in the complex case you have to. Of course when we take proper map we need good filtration but we will come back later when we shall need it. And if I have time I will construct not a functorial filtration but it's another. So I don't insist on this because it will be another course and it's considered as classical more or less. So maybe I should mention also here kawai because there is a paper in kashivar kawai which is called regular autonomic tree anyway. So we shall use these things. So I recall a result that I mentioned last week about the properties. Ah no first I give you excuse me I give you definition again. So you take D a normal crossing divisor. So we choose coordinates x1 xn local coordinate and x such that D is given by x1 xr equals 0. And we say that m has regular normal form along D. If it is isomorphic or if it is a local coordinate system m is Dz power lambda. 
So let's call with lambda equal lambda1 lambdar in C minus no positive integer. So what does it so is that clear? So it is x the coordinates. So it means xi Dzi minus lambda i for i equal 1 to r on Dzj for g equal r plus 1n. You take this ideal on m is D over i lambda. So regular normal form is very explicit and very easy to calculate. So the game is to reduce regular autonomic D module to regular normal form. So there is a I don't know a lm or a crm as you want. Here a property pxm so it's a statement on x for all m regular autonomic. So we assume that px is local maybe I wrote it last week. It means that maybe I did pxm is true. Definitely for any x for any covering. And assume that it's invariant by shift. It's invariant by shift. It's stable by distinguish triangle. If you have a distinguish triangle, if you have pxm and pxm prime, then you have pxm double prime. So it's in the derived category. Excuse me. You are right. Thank you. The derived category. No, this is you understand what it means of course because this one is six so it means the full subcategory. So you assume my multiplicity that it depends only on the isomorphism plus the f. So there is a condition which is useless here. Maybe I should not write it but it will be useful in the irregular case. So if you have pxm plus m prime, then it implies pxm. In the regular case, I think we don't use that but it's so natural. And it's so now comes the two main properties. In practice, these properties are absolutely obvious to check. Maybe I write here. I want to keep this blackboard. Riemann-Hilbert for example. I will apply this to prove Riemann-Hilbert. So I have not finished. If f is a projective map, then if the property pxm is true, then pxpy of the direct image of m is true. And here we assume that m also is good. But is it automatic because of? It is automatic but when you don't use it, it is automatic. It's known that regular autonomic modules are good but we don't use it. So we assume that the good means you have a good filtration but not everywhere on each compact set, each upon relatively compact set. It's enough. Not globally. Semi-global. On the last one, the best if m has regular normal form, then pxm holds. So if you skip the first property which are more or less obvious, the important property is stable by proper direct image and you have to check it in the normal form case. So I give a glance, a sketch of proof. So the conclusion is that it's all data. Yeah, I forgot the conclusion. Yeah, I forgot. So what is the idea of the proof? So first, assume that d is a normal crossing, divisor, and m is a regular autonomic on x and also which satisfies two things. m is equal to its localization along d and there is no singularity apart from d. Or if you prefer, m is a flat connection on x minus d. Then p of xm. So in this case, how can I prove the result? I assume all conditions are satisfied. So locally, there exists a filtration, not a good filtration, a filtration by d module. m equal m0 contains m1 contains mj contains mj plus 1 equals 0. So we have to judge that mj divided by mj plus 1 has regular normal form. Honestly, Keshore told me it's very classical, but I believe him. I've never seen the proof, but for specialists of this question, it's very classical. So you reduce by this result, the result is proved in this case. So how to pass from this case to the general one using desingularization? So I don't give the proof. I just give an idea of the proof. 
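For reference, the definitions dictated above can be written in symbols (same notation as in the lecture; the precise set of integer values excluded for the $\lambda_i$ is the one stated there):
\[
\mathcal{M}\ \text{holonomic}\iff \operatorname{char}(\mathcal{M})\subset T^*X\ \text{is Lagrangian},\qquad
\mathcal{M}\ \text{regular holonomic}\iff \text{locally there is a good filtration } F \text{ with } I_\Lambda\cdot\operatorname{gr}^F(\mathcal{M})=0,
\]
where $I_\Lambda\subset\operatorname{gr}(\mathcal{D}_X)$ is the ideal of functions vanishing on $\Lambda=\operatorname{char}(\mathcal{M})$. The regular normal form along $D=\{z_1\cdots z_r=0\}$ is, up to isomorphism (and possibly direct sums of such),
\[
\mathcal{N}_\lambda \;=\; \mathcal{D}_X\big/\mathcal{D}_X\bigl(z_1\partial_1-\lambda_1,\;\dots,\;z_r\partial_r-\lambda_r,\;\partial_{r+1},\;\dots,\;\partial_n\bigr),
\qquad \lambda=(\lambda_1,\dots,\lambda_r),
\]
the module written "$\mathcal{D}\,z^\lambda$" above.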
So more or less, you make an induction on the dimension of the support of m and you will construct amorphous support of m containing z. So you will make a desingularization such that w is smooth, d is a normal crossing divisa. So then what you do? So assume, so it's where I cheat because otherwise it's a little more complicated. Assume for simplicity that z equal x. And call this f. Then you will use this some triangle like that. You take the direct image, the inverse image of m localized. It's not the same d. It's horrible on n. So it's just an idea of the proof. It's not a proof. But z you choose, no? Z is the support of m. So I assume. The support. Exactly. It's contained. Or contained, it doesn't matter. But you can choose x. Yeah, but I do it by induction. Okay, so support. Okay, so for short, I assume z equal x. So you do something like that. And here, this one is case a. And when you take the direct image, you apply one of the properties, property e. So you take the, so this, this module will satisfy the property for this one. And for this one, the support of n is contained in s. So by induction, it will satisfy also the property. And the property is table by triangle. Your notation for the operations in d modules. It's not mine. It's this notation. No, no, because what I remember is that there is, well, I, so that the usual, the each operation is two versions. I mean, there is a, like, like for shifts, a plus, a plus. Yeah, but not for d modules. It's just a shift. It doesn't change. For d modules, there is only one inverse image up to a shift. There are not two inverse images. But for direct image you have. And for direct, we take direct image when it's proper, so you have two notations, but in practice, they are the same. We never take direct image when it's not proper. Okay, so it was like, I can't do it with both. No, for d modules, there is one direct image, essentially, and one inverse image. Okay, so it's the idea of the proof. It's not the proof. Okay? So now I want to apply this big lemma to Riemann-Hilbert. Oh, no, before, excuse me. Before there is something rather boring to do, which is to blow up, real blow up. So what I'm doing now will be used by Masaki intensively. And I think it's, here again, it's very classical. It's like polar coordinates, but with more variables. Okay? So maybe some notation. So we take this action, positive number cross C star cross R to C star cross R. So A z t gives A z A minus 1 t. A is real positive. Okay? So we denote by C, see what, tilde, tot. This is C times cross R divided by this action. And we also need the most important one, this one. C times R positive divided by this action. And also C tilde positive equal C times R positive divided by this action. Okay? So the important one is this one. And, yeah. So now if X is C R cross C n minus R, and you have D, the divisor z1 dot R equals 0. Then you define X tilde, tot, as C tilde, tot, power R cross C n minus R, X tilde, and X tilde positive. See me now. Okay? Yeah, I'm obliged to erase this blackboard. And we have a map, pi, which goes from C tilde, tot, to C, to Z t gives T z. So this map, you find a map, we keep the same notation, from X tilde to X. On here, you have X tilde positive, which is isomorphic to X minus D. Here is the situation. So if you take n equal 1, you will see that it's exactly polar coordinates. This is X tilde. Okay? So X tilde, tot, is not intrinsically defined. But if you have a normal crossing divisor, X tilde is intrinsically defined. 
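Concretely, the real blow-up just introduced is the following; for $n=r=1$ it is exactly polar coordinates:
\[
\widetilde{\mathbb{C}}_{\mathrm{tot}}=(\mathbb{C}^\times\times\mathbb{R})/\mathbb{R}_{>0},\qquad
\widetilde{\mathbb{C}}=(\mathbb{C}^\times\times\mathbb{R}_{\ge 0})/\mathbb{R}_{>0},\qquad
\widetilde{\mathbb{C}}_{+}=(\mathbb{C}^\times\times\mathbb{R}_{>0})/\mathbb{R}_{>0},
\]
with the action $a\cdot(z,t)=(az,\,a^{-1}t)$ for $a>0$ and the map $\pi(z,t)=tz$; then
\[
\widetilde X=\widetilde{\mathbb{C}}^{\;r}\times\mathbb{C}^{\,n-r}\ \longrightarrow\ X=\mathbb{C}^r\times\mathbb{C}^{\,n-r},
\qquad \widetilde X_{+}\ \simeq\ X\setminus D .
\]
In the case $n=1$, a class $[(e^{i\theta},\rho)]$ goes to $\rho e^{i\theta}$, the fibre over $0$ being the circle of directions.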
And this one is only defined as a germ in the neighborhood of this manifold with boundary. Okay? And now we put, we put, we put shift here. Maybe I skip some details. Essentially, I will define O, T, X tilde as pi, upper shriek, O, T, X tilde, as pi, upper shriek, where is that? X, T. So you take temperate or norm of the function with poles on D. And this is inverse image. And it will be, it can be shown. So of course, this map, pi tilde, is proper. Okay? We use it always, of course. And we can prove it's not, okay. And A X tilde is alpha O, T, X tilde. So this is a usual shift. It's a usual shift of a norm of the function with temperate growth, the boundary of the manifold with boundary. On D, what is the notation? A X tilde is A X tilde, tensor with pi minus 1 O X, pi minus 1 DX. So we will do D module on this manifold with boundary, X tilde. Okay? So why is it so useful? Maybe I give the CRM without proof. So it's an exercise. It's more or less of use. Proposition or lemma. If L has regular normal form along D, so L is a DX module. Then, when I, maybe I should have defined it. If L is a DX module, I set LA by definition. It's, where is that? It's DX A tilde, tensor over pi minus 1 DX, pi minus 1 L. Okay? So when you have a D module here, you associate a DX tilde module on this manifold with boundary. Okay? On the result, if L has regular normal form on D, then locally on X tilde, LA is isomorphic to OA. That is A. So once you have made this desingularization, not desingularization, this blow up, D module with regular normal forms becomes flat connection. Okay? Why is that true? It may, it's more or less of use. It is, the reason is that Z power lambda is locally invertible in the X tilde A, or in X or in A. You see Z power lambda, if you work locally, it's an invertible function. Okay? So maybe I skip some details to state the main result. So now we'll come back to the details later. So it's simply that Z power lambda, which is not invertible outside of Z, is not invertible outside of zero, is invertible, and the blow up, that's all. So maybe I state the main CRM. So CRM, all these things needless to say are not due to me but to Kashiwara. It's old result from, maybe it's a new formulation with OT, but essentially it was due to Kashiwara, at least the two first ones. So you take L, which is regular, autonomic. Then the first result is the temperate, the RAM, I don't remember my notation, is isomorphic to the usual, the RAM. And the second one is the same for solution. So these two statements are equivalent, of course, by duality, because it's a statement for all regular, autonomic, demodule, and you pass from the RAM to solution by duality. And the stronger than Riemann-Hilbert, as we shall see, if you take OT, D-module with L, it's isomorphic to R-Holm, Sol of L, with value in OT. You take the direct answer problem? No, this is the transfer product as a demodule. Of course, everything is D-arm. It's not the transfer product over D, it's the transfer product over O. But with the structure of a demodule. Of course it's D-arm. So before to go further, I want to show you that this is a deep result. So why is it Riemann-Hilbert first? Sorry, but with this plot procedure, do you have something like the function mu-arm for D-modules or not? Mu-arm for D-modules. You mean it's kind of specialization? No, no, there is no macro-localization here. Taking the blow up, you cannot localize? No, I don't know what you mean. 
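Written out, the three isomorphisms of the theorem read as follows (a transcription of the board, with shifts suppressed and $\mathcal{L}$ regular holonomic):
\[
\mathrm{DR}^{\mathrm t}_X(\mathcal{L})\ \simeq\ \mathrm{DR}_X(\mathcal{L}),\qquad
\mathrm{Sol}^{\mathrm t}_X(\mathcal{L})\ \simeq\ \mathrm{Sol}_X(\mathcal{L}),\qquad
\mathcal{O}^{\mathrm t}_X\overset{\mathrm D}{\otimes}\mathcal{L}\ \simeq\ R\mathcal{H}om\bigl(\mathrm{Sol}_X(\mathcal{L}),\,\mathcal{O}^{\mathrm t}_X\bigr),
\]
the tensor product in the last formula being taken over $\mathcal{O}$ but carrying the $\mathcal{D}$-module structure, as explained in the answer above.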
Of course, Kachoua, in his study of the autonomic demodule with Kawai, made a macro-local study using macro-diffansial operators. But here it does not appear. So it's different from having real magnetic domains, and seeing functions in the morphic. Okay, so let me give some application of this result, in particular to non-alamorphic function. Let me give some application of this result to show you that they are deep, very strong results. So the first application, you apply the function alpha x to the isomorphism 3. When you apply alpha, it commutes to transfer product, and alpha of ot is O. So alpha of Oxt, transfer L, is L. So you find that L is Rm Sol L Oxt. So this is Riemann-Hilbert by Kachoua in 84 or 80, even in 80. Okay, Kachoua, this thing was denoted T-HOM by Kachoua at this time. So it means that you recover L from the knowledge of the chief of solution. So we did not have Oxt at that time, exactly. No, you need Oxt. Let me take an example. Assume that Sol L, assume that M L is HDZOx. Then Sol L will be the constant shift on Z up to some shift. So Rm Sol L Ox is HDZOx. And if you put OT, then you find HD Algebraic Comergy. That's why you need OT. So we de-formulated originally. Originally Kachoua defined a function that he called T-HOM, which was defined on the category of Constructible Chiefs. And you prove it with this function T-HOM. But T-HOM is a particular case of R-HOM with value in OT. Okay, but today the R-HOM is way too... Because T-HOM f Ox. So this is... But of course all the IDs are in T-HOM. So let us give another application. So in this particular case of this formula, 3, we recover Riemann-Hilbert. This is the Riemann-Hilbert correspondence. So let's give another application of 2. I take a R-Constructible Chiefs. And I apply R-HOM f to this isomorphism 2. So I find that R-HOM over DL R-HOM f OT is isomorphic to R-HOM over DM R-HOM f Ox. So why is it a... excuse me, M on L? So the hypothesis that L is regular autonomic. So let's take a particular case. Take f, take M, real analytic manifold, X, a complexification. Take f of M and take for f the dual over X of the constant chief on M. Then the left-hand side is R-HOM over D of L in the chief of distribution. And it's isomorphic to R-HOM L in the chief of SATOS hyperfunction. So you see, this is a very strong result because... So this result here contains all these regularity results that you can guess. It proves that if M is a autonomic, the distribution solution is the same as the complex of hyperfunction solution. With the same techniques, we can prove the same for sinfinity and real analytic also. On another example, take f, the constant chief on Z, then you find R-HOM L algebraic homology supported by Z. So there is also a comparison with solutions in the formal series. Yeah. The completion, does it follow from this kind of thing? Yeah, sure. Not directly, but you can deduce. You can deduce by duality, for example. That's the shortest. All these comparison results are contained. You can also do the completion along the sub-variety, which is the point, but along the... It is like the old result of the homology. Now you can prove something like that. R-HOM over DL beta of OX is isomorphic to R-HOM over DL O Whitney. So when you apply this to... then you find things like that. R-HOM over DL OX restricted to Z is isomorphic to R-HOM DM L OX formal completion. That was your question. OK, so maybe I will enter the proof of the result, at least the first two isomorphisms. 
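Two of the applications just listed, in symbols (again with shifts omitted; $M$ is a real analytic manifold with complexification $X$):
\[
\mathcal{L}\ \simeq\ R\mathcal{H}om\bigl(\mathrm{Sol}_X(\mathcal{L}),\,\mathcal{O}^{\mathrm t}_X\bigr)
\]
is the Riemann-Hilbert correspondence: $\mathcal{L}$ is recovered from its sheaf of solutions; and
\[
R\mathcal{H}om_{\mathcal{D}}(\mathcal{L},\,\mathcal{D}b_M)\ \simeq\ R\mathcal{H}om_{\mathcal{D}}(\mathcal{L},\,\mathcal{B}_M)
\]
says that, for a regular holonomic module, the complex of distribution solutions agrees with the complex of hyperfunction solutions.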
OK, so you see, with this language of and sheaves, O-T or O-W, it contains a lot of results. This isomorphism is very strong. It contains a lot of things. So let me give a sketch of proof. Maybe I want to write down the Riemann Hilbert, because I will keep it. All the results are here. OK. So, so, so, so. So we have seen that the two statements, statements one and two are equivalent, OK, by duality. So we just prove one. So we have to check all properties that I erased of the lemma. OK, so the first one are views. OK, there are only two properties which are not views, direct image and regular normal form. OK, so what about direct image? I have to prove that if pxm is true, then px, the direct image is true. Assuming fx to y is a projective and m is a good. OK, but if you remember, the Durham of the direct image is a direct image. And there is this hypothesis of the Durham of L. We have seen that. It was a consequence of temperate, grower, C1. And the same without temperate is true. This is not so easy. Again, it's used grower, but not temperate. It's not easy, but it's old. OK, so. The usual grower for coherent sheaves, not anything. Yeah, but we have a good filtration. It's not written now, but it was written in the statement of the lemma. We assume it's written. We assume m is good. OK, so we see that the property is stable by direct image. So finally, we have to check the property for normal form. OK, in the case of regular normal form. Oh, I cannot. You won't really. It says that if you have f on x to y, if f is coherent on x, f is proper on support of f, then the direct image of f tensor uxt is. No, we have to do again the proof. It does not, unfortunately. OK, so. OK, so what? What I have to do now. OK, I resist. I have to prove. I want to prove. So now we assume that L has regular normal form along d. I want to prove that L, that omega tx tensor dx L. So I recall we have x tilde to x pi x tilde positive x minus d. And this is an isomorphism. So there is some technical parts that are. So this is the RAM comparator of L. So I skip a technical point, which is not difficult, but it's too many things. I skip that r pi star pi upper streak of the RAM temperate of L is isomorphic to the RAM temperate of L. Need some technical lemma. So what do I want to prove? I want to prove that this thing, so the question is this thing is isomorphic to alpha of the same thing. Is a shift. I want to prove that maybe here. This is alpha of the RAM temperate of L. So I want to prove that this object is isomorphic to alpha. I never write yotta, of course, although it has no meaning to say it's alpha. It's because I don't write yotta. But alpha commutes with a direct improper with with this function. So it's enough to prove that the inverse image of the RAM temperate of L is a usual shift. It's not a n shift. It's equal to alpha. But this we can calculate. So the inverse image of the RAM temperate of L is isomorphic to maybe, of course, it's difficult to follow like that because there are a lot of technical things hidden. So roughly speaking, you will find omega t x tilde tensor over dA LA. And now we use the fact that L has regular normal form. So this is isomorphic to OA. So finally, this is isomorphic as the inverse image of omega x t tensor over dx ox. Locally on this is. So it's a shift. So it's what I've done. I said that after the blow up, my demodule is nothing but the flat connection. Oh, OK. And then it's put. OK, so maybe we may have stopped here. 
And I will prove the third part after and I will give an application on examples. Mainly what I will do after is to get to treat an example of irregular demodule to see what happens. OK, so we have a short break. So now it's not in the it will not be in the notes what I will say now, but it's one of my ideas since very long. Not like it's how to undo a demodule, autonomic demodule with good filtration. So I take this last formula, you have L. I want to undo L with a good filtration or with a filtration, let's say, but functorial. So an idea is to undo OXT with a filtration. So what does it mean to have a filtration on this strange object of temperate, a lomar function with a temperate goes. So OXT is the double complex of temperate distribution. So the lomar distribution have a very classical filtration by several F spaces. So I don't know. So now I take a real manifold. So is it possible? So the problem is that on the subalytic site, you have the pre-shift of subalephth spaces. S is negative. Don't ask me the definition of subalephth space, but we know it exists. So we have this pre-shift, but this pre-shift is not a shift. So it's difficult to undo the shift of distribution on the subalytic site with a filtration because they are not shifts. So the subalephth space is done on our end. But when you have things with subalytic boundary, you don't use it exactly. So let me explain what exists. So the subalytic site is not enough with Guillermo. We construct a linear subalephthic site. So what is a linear subalephthic site? The two subalephthic sites are the same, but the covering are different. U1, U2 is a linear, we call it linear covering, of U1 union U2. There exists a constant C, such that the distance of X to M minus U1 union U2 is less or equal to C, distance XM minus U1 plus distance XM minus U2. That's why linear, because with Lejasevic inequality, there was some constant N here, but we assume N equal to 1. So there are very few coverings. But with this covering, you can define the subalephth shifts. The soublest question is, let us say U is relatively compact in R and subanalytic. So do you take the one for R and U in some sense or the closure of certain? Okay, as I said at the beginning, don't ask me what is the exact definition of subalephth, I just tell you what exists. So there exists a shift which is constructed by Lobo, which is a shift on the linear subanalytic site. On which, let me one second to finish my sentence and it will become clear. And you have direct image. Let's call it row again. So I work here unbounded or bounded from below. So this is, it's partly my work with Guillermo and partly the work of Lobo. At this stage, it's my work with Guillermo. So by the Brown COR, there exists a left-eyed joint to the direct image. So you can define the subalephth shifts on the subanalytic site as the row upper shriek of the shift constructed by Lobo. This would be the direct category. Yeah, so the object of the direct category. But the good U, the CRM, if U is subanalytic with lip sheets, boundary, then our gamma U of the subalephth shifts is in degree zero and is equal to the usual subalephth space. I changed my notation. What do you say? What did you say? No, that I have not defined. It's negative. Real negative. Anyway, there is something in mathematics which is called the usual subalephth spaces, which is horrible, except when you, it's not horrible, but what we prove is if U is lip sheets, then it's a good definition. If U is not lip sheets, it's not a good definition. 
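The condition defining a linear covering, as dictated above:
\[
U=U_1\cup U_2,\qquad \exists\,C>0:\quad
d\bigl(x,\,M\setminus(U_1\cup U_2)\bigr)\ \le\ C\Bigl(d\bigl(x,\,M\setminus U_1\bigr)+d\bigl(x,\,M\setminus U_2\bigr)\Bigr)\quad\text{for all }x,
\]
that is, the Lojasiewicz-type inequality with exponent taken equal to 1. The point is that there are very few such coverings, which is what allows presheaves such as the Sobolev presheaves $H^s$, $s<0$, to be made into sheaves on this site.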
So if we lip sheet boundary and locally it's a... A bi-lip sheet invertible function, it's a half space. A bi-lip sheet change of coordinates is a half space. So it shows that the subalephth spaces are a natural object of the direct category. They are not spaces. Once you have subalephth... Yeah, yeah, yeah. Absolutely. We have written the theory for subalephth. Maybe you can generalize. No, but the question is whether if there is a bi-lip sheet on your mark, whether the little subalephth one or you assume the little subalephth one... I don't remember this point. Maybe you ask the graph of phi to be subalephth. I don't remember. Anyway, when U is lip sheets, this object of the direct category is concentrated in degrees 0 and it's the classical one. So finally, you see that you have a subalephth filtration on dB. So if you look at this, maybe I need I somewhere. It's an undefined object of D-module, of filter. So of course it needs much of development. But we can define the direct category of filtered D-module. It's not a billion, of course, but it works. On undefined object of D-module, on distribution are well defined. dBt. I forgot the t. So by taking Dolbo, we see that Oxt is well defined in the graph category of D and dx-module. And once you have a filtration on Ot, by this formula, you have a filtration in some sense on D. So of course you can ask me what can you say about this filtration. Is it in degrees 0? I guess very rarely. Is it good in some sense? But here, of course, we cannot say anything. So it was a, I told you it is apparent, is it? Sorry, at this point, should we take quasi-abelian categories? No, I've not defined this, but it's easy to define the direct category. We schniders who have written some paper on this subject. How does this filtration with the maps? What's not in FIDX? Filter. Filter-odd and object of D-module. No, I'm not precise here. It's just to show you that maybe it's interesting, maybe it's not. So this doesn't give, for regular seeing, it doesn't give the canonical good filtration? No. I don't know if there is canonical good filtration, but this one is functorial, maybe, but it's in the D-RF category. No, but I remember begging that there was, for regular or non-regular, there was a way to... Yeah, but it constructed at hands by, you know, like... But also this is a Kishavara conjunction, so I mean, it's considered whole to distributions. You get a canonical, it's a complex quantity of meaningful. And then distributions, of course, is the whole solution. Yeah, but it's another thing, no? It's another thing. And maybe, I don't know, maybe this filtration works also in the irregular case. You replace T by E, but it's... Anyway, it's too... When we want to prove something, we need a result of analysis that we don't know, so we have not been very far. The most important thing I remember of the L2 lattice... What? L2 lattice? No, but there are a lot of filtration, and the modules. Each mathematician has constructed this filtration. So I give mine. Okay, it's a parenthesis. Okay, so maybe before to come back to the subject, I want to make another parenthesis, more interesting, maybe. I want to discuss an explicit example of irregular demodule. So until now, we have spoken of regular demodule. What happens with irregular? So it's an example that I think which is illuminating for what Keshore will do next week. So we'll look at the simplest example. So dimension is 1. So it's something I did with Keshore in 0,3. 
But I think it contains a lot of information that will be generalized after by D'Agnolo and Kashiwara. So X is C. I know in algebraic geometry you say A1, but I call it C for short. So the coordinate is z. U is C minus 0, j is the embedding. And my D-module is this one. It is associated to this equation, the simplest one. So M is D over D P. Or if you prefer, it is D exponential of 1 over z, with this sign. So I want to try to understand this sheaf. First I will try to understand H0 of this, that is, the kernel. So Sol^t is a complex, and first I will try to calculate the kernel. So O^t is concentrated in degree 0, and it is a sub-ind-sheaf of O_X. O^t is in degree 0 because the dimension of X is 1; a priori it is a complex, an object of the derived category. OK. So let's call it, maybe I will call it Sol^t_0. So H0 of Sol^t of M is a subsheaf, or rather a sub-ind-sheaf, of H0 of Sol of M. So when is this not 0 on an open set V? Yeah, maybe I should say first that H0 of Sol, the classical sheaf of solutions, is the constant sheaf C_U. That's an easy calculation. And this one is not 0 if and only if V is contained in U (V is subanalytic by definition) and also exponential of 1 over z, this function restricted to V, is tempered. This is more or less tautological: we look at the tempered solutions of this equation. The solutions of this equation are exponential of 1 over z, up to a constant. So it has to be tempered. And this function restricted to V is tempered if and only if the real part of 1 over z is bounded from above on V. Otherwise, it will not be tempered. And this is bounded if and only if there exists epsilon positive such that V is contained in some U epsilon, where U epsilon is the set of z in C such that, I prefer to make a picture, U epsilon is C minus B epsilon, and B epsilon is the closed ball centered at epsilon with radius epsilon. So I make a picture. And here, this is U epsilon. So with these remarks, we find, maybe I skip some details, we find Sol^t of D exponential of 1 over z. So it is the ball that is bounded? No. Not the ball: you want this to be tempered, so the real part should be bounded. No, but you say bounded means the absolute value is bounded. No, that is not true; it is the real part, bounded from above. Okay. So finally, I skip some details, we find that this is the inductive limit, in the subanalytic topology, of the constant sheaves on the U epsilon. So we have an explicit description of the complex of solutions of this irregular D-module. So of course, if we apply alpha, this is the classical thing. But it's clear that with tempered holomorphic functions, we have much more information. So what is good and what is not good in this result? With this, this is much more precise than the classical result. But it's not so good. The bad news is that if you take D exponential of 1 over z and D exponential of 2 over z, they have the same tempered solutions. So tempered holomorphic functions are not precise enough to distinguish this kind of D-module. But, good news, if you take D exponential of 1 over z and D exponential of 1 over z squared, for example, then the solutions are different. So it's a tool to distinguish a lot of D-modules, but unfortunately, not all D-modules. OK? OK, so I have made two parentheses, maybe too much, and now I can come back to my proof. OK, so it's the end of the parentheses.
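The computation of this parenthesis can be recorded compactly (with $\overline{B(\varepsilon;\varepsilon)}$ the closed disc of centre $\varepsilon$ and radius $\varepsilon$):
\[
|e^{1/z}|=e^{\operatorname{Re}(1/z)}\ \text{tempered on }V
\iff \operatorname{Re}(1/z)\ \text{bounded above on }V
\iff V\subset U_\varepsilon:=\mathbb{C}\setminus\overline{B(\varepsilon;\varepsilon)}\ \text{for some }\varepsilon>0,
\]
\[
H^0\,\mathrm{Sol}^{\mathrm t}\bigl(\mathcal{D}e^{1/z}\bigr)\ \simeq\ \varinjlim_{\varepsilon>0}\ \mathbb{C}_{U_\varepsilon}\,,
\]
which distinguishes $\mathcal{D}e^{1/z}$ from $\mathcal{D}e^{1/z^{2}}$ but not from $\mathcal{D}e^{2/z}$.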
I think this example with Kachoua, you will see the more general example, exponential d exponential phi, where phi is meromorphic. And he will calculate the solution, chief of this. It's more difficult, but it's very similar to this calculation. OK, so maybe I come back. I know. There is something else I want to do. I think I will not give the proof of the set. I think you are tied, me too. So I will give an application, another application, of this result. The proof is the same line. The proof is the same line. You show both terms are stable by direct image, a new test in the regular case, in the normal case, a regular normal case. No, I want to, here it is. I almost forgot. There is something important that I want to do. I want to apply this result to integral transform. So in the regular case. So what's integral transform means is correspondence. In the complex analytic case. So you have a manifold and morphism of manifolds. So they are all complex manifolds. And I want to compare, I want to see what happens to the Durham complex of a autonomic demodule on x when I go to y, by inverse and direct image. I introduce. So m will be a demodule on x, l a demodule on s. Then I define the composition as a demodule. So you take the inverse image, the direct image. And for sheaves, if I have a sheave, so if l is a sheave or n sheave, is a n sheave on x, maybe I take g on y. So psi l of g as, no, f, excuse me, r, as a direct image of r i on l, direct image of f. Or maybe I define also l composed with g, if you want, r, f, for g. So there is a CRM, which now it's just a corollary of all what I have said. So CRM is the following. CRM is good, not good, quasi good is enough, demodule. You take a regular autonomic kernel, you take a sheave Roman L as the solution of L. Inductive limit, filtering inductive limit of good. Okay. It is the homology sheaves. Okay, the homology sheaves. It's a six subcategory. Okay. On the hypothesis, we assume that g, the projection, is proper on f minus 1 support of m, intersect support of n. Some properness. Then the conclusion, if you take the Durham complex, temperate Durham complex of m, and you make its image by psi, then it's isomorphic to the temperate Durham complex of the image of m by L. So m is not a problem under the problem? No. L is regular autonomic, the kernel. So unfortunately, it does not apply to the Laplace transform, which is not a regular autonomic kernel. But it applies to many situations. So first, I say maybe I skip the detail, that all ingredients of the proof have been given already. Because we have to prove that essentially Durham temperate, so also a remark, this result is false without temperate. It can be make true with other hypotheses. But with this hypothesis, it's false if you remove temperate. What is, we have seen, what does it says essentially, the proof? It says that Durham temperate commutes with first inverse image, then commutes with a tensor product with L, as a demodule tensor product, over D. So this was not written down. It is in the notes, it will be, but this last result is almost a consequence of this one. I don't give the details, but it follows easily from this formula. So this formula 3 is a generalized Riemann Hilbert. It's much stronger than Riemann Hilbert. And it commutes with direct image. And this we have seen many times. Temperate, grower, things like that. So once you know that Durham commutes with all these things, then you have this formula. So maybe I want to translate it. Maybe I'll put it here. Maybe here. 
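Schematically, with the correspondence $X\xleftarrow{\ f\ }S\xrightarrow{\ g\ }Y$, kernel $\mathcal{L}$ on $S$ and $l=\mathrm{Sol}(\mathcal{L})$, the statement being written on the board has the following shape; the names of the functors, the choice between proper and ordinary direct images, and the exact shift are reconstructions rather than the notation of the lecture:
\[
\mathcal{M}\circ\mathcal{L}\ :=\ \mathrm{D}g_{*}\bigl(\mathrm{D}f^{*}\mathcal{M}\ \overset{\mathrm D}{\otimes}\ \mathcal{L}\bigr),\qquad
\Phi_{l}(F)\ :=\ Rg_{!}\bigl(f^{-1}F\otimes l\bigr),
\]
\[
\Phi_{l}\bigl(\mathrm{DR}^{\mathrm t}_X(\mathcal{M})\bigr)\ \simeq\ \mathrm{DR}^{\mathrm t}_Y(\mathcal{M}\circ\mathcal{L})\,[d_X-d_S].
\]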
Excuse me. Maybe I write again everything here. C L of Durham M is isomorphic to Durham. Maybe I forgot a shift. I forgot to put a shift, Durham of M composed with L. And a shift here, dx minus ds, as I forgot. So before to erase, I correct dx minus ds. So now it's true I can erase. OK, so we assume that G is proper on F minus 1 support of M intersect support of L. OK, so to conclude, let's translate this. So I have erased it, but if you remember, psi on phi are adjunct. When you have quasi-good, so it's a filtered limit of good, the support is it, how is it defined? Is the direct image of the support of it? The support of a shift is the support of the shift. No, there is one way where it takes the point of the... No, you take the closure of the... I don't know. OK, so if I translate this. So now I take G, another unchief. Then as a particular case of this formula, I find our... I see x L composed with G with value in omega x t tensor dx M. I write everything dx minus ds is isomorphic to our home i cy g omega y t tensor dy of M composed with L. OK, so why are such formulas interesting? Because in the literature, you meet very often particular case of research like that. OK, here is the formula with temperate. There exists a formula without temperate with other hypotheses. But let's just give an example. For example, if you take projective duality, other incident relation, then you find formula which says that our home L composed with G with O pn of c, some line bundle, maybe, with some shift is isomorphic to our home g O pn c with a... These are line bundles, OK? There is a shift that I don't remember. So you have this adjunction formula where G is any shift. And you pass from one shift on the projective space. G is a shift on the dual space. So you can particularly arrest a co-ponset and so on. It contains lots of information. What is the K and K star here? OK, it's an integer. It's a line bundle. And I don't remember the formula. K star is deduced from K by some minus K plus N or something like that. So these formulas are classical. Maybe they are in Brininsky already. So I just want to emphasize the interest of these general formulas, adjunction formula. And I think this kind of formula with our home here, there are many formulas in the literature like that. I wrote some with Danielo Longhego on Kashiwara, Tani Zaki. But I think it's the first time that we write them like that without applying to a construction sheet. So I think it's a nice formula and it's still better in the irregular case. Because then you can apply it to Laplace transform. But it's much more difficult and it's what Kashiwara will do next week. So maybe it's in either 10 minutes before my time, but I think I finished now. So what is the L here? The L is given by... By the incidence relation, you have a Z containing P, N, cross... The incidence relation on L, you have some choice, you can take H1, Z. There are many choices. Or you can take the constant shift or something like that. That's the radon transform in this case. Of course you can take other flag correspondents. For example, the Penrose construction of the wave equation is something like that, where you take P3, M4. You start with a line bundle and here you find the wave equation. M4 is a Minkowski force space compactified. It's also... No, it's just to say, I don't know why it's another subject. I don't want to enter it. M4 is a flag manifold, F12 of C4. Okay, so this will not be in the notes? No. And the incidence relation will be in the notes? 
No, this is incidence relation, excuse me. So this is F134 and this is F12. This is classical, but it's another subject. Thank you. Thank you.
The aim of the course is to describe the Riemann-Hilbert correspondence for holonomic D-modules in the irregular case and its applications to integral transforms with irregular kernels, especially the Laplace transform. The course will start with a detailed exposition of the theory of indsheaves (including sheaves on the subanalytic site) and its applications to the indsheaf of holomorphic temperate functions.
10.5446/16259 (DOI)
So today's topic is the second class on representation, automatic indexing. Could say automatic indexing number one. We talked, in the first class we mentioned that basically what we do in IOR is we count words. And we're almost there. Next time we talk about the counting and what do we do with the number. Today we mainly talk about what are we counting? What is the unit? What is the word? When is it one word or zero word basically, right? And which occurrences do we count and how do we process words before we count them? Things like that. Next time finally we go into the weighting. What happens with the number that we counted? How does that affect the retrieval ranking? Okay, so basically we could say the task that we have automatic indexing is quite similar to the task that we had in manual indexing. We select appropriate representation terms. Term always means word, can assume. So get some words and use them as a representation for the text. So if somebody enters these words, we can find the text, the book on the library or the document on the web. And so far it looks quite similar, but of course there are big differences. For example, in manual indexing, we use controlled vocabulary as we will have already or will do in your homework assignment. Whereas this is very atypical for automatic indexing. Automatic indexing, we have open vocabulary. We can use basically any word. If we do that in, if we use controlled vocabulary in automatic indexing, we usually talk about some classifications, text categorization. What could be other differences between manual and automatic indexing? I think one of the big differences we've mentioned last time already, of course there's huge differences. One uses human intelligence and the other just uses a machine, but that's obvious. But what are the differences in the effect of our terms of math? Think of our example, what was the example again? Our nice example was developing Android applications or something like that. And we had some indexing terms, which indexing term did we discuss in class, which could be appropriate for us as we're non-library, non-experts, yeah please? I think there will be much higher quantity of key words. Maybe some text. In which case? In the automatic indexing. Exactly, yeah. And the quality of those key words may not be as high as... Yeah, of course, that's interesting. Good question. So there might be a quantity trade-off. We have many, many more words, terms. And for automatic, as we said, for one book, maybe you have 10 terms. Mainly we have, for the main axis, we have only one term. That's the place where the book stands, right? Unless we buy two books and put them in different places. So somebody who looks under Android and somebody who looks under programming can find both the same book, but that we would have to need to buy two books. So the main axis is basically one keyword for the library, maybe 10, to search within the automatic system for one book, for one web document, for example. We have a large number of terms. Basically here it says selection. That's almost not really correct, because basically any term that is in the document becomes a indexing term, or representation term, let's say. I can find, typically, we will see exceptions and so forth, the text, the document, with any word that is in this document. Of course, I will find a lot of other documents as well. So the big difference quantity, maybe quality, we don't really have time to go into quality too much, right? 
People even have been discussions about this, some people say, well, automatically superior. Others say, no way, manual indexing is much better, but we don't have time to go into the details of these comparisons. Simply, we have to assume that basically most large scale indexing is done automatically anyway, even if it would be worse, it doesn't matter, despite that there's lots of niches that we said where manual indexing is still being done. So quantity, and what else? No control vocabulary, and one other aspect. Remember, we had developing Android applications, and what was one of the terms that we discussed? Was it developing? We said developing might not be such a good term. It's very general ambiguous. We could use programming maybe, right? Then it is, we could say in our vocabulary, developing Android applications, when we talk about developing, that means programming, right? But what strikes us? There's different, and there's just a difference that usually never happens in automatic indexing. Do I have programming in the title or in the document? We don't know, right? I can assign a term, programming that might not be in the book at all. Maybe it's never mentioned, maybe it's always called developing code, developing programs, and ah, okay, I know, from context developing means programming. I never have to write programming in the book. Now the indexer can say, well, wait a minute, developing, we call it programming, and it's not in the text at all. This is something that doesn't happen in automatic indexing. Always select terms that are in the document. Machine usually cannot think of or come up with other terms that are not in the text, okay? So these are the three main differences if we talk about the outcomes, about the result, right? And we don't talk about quality. It's quantity, it's no control vocabulary, and it's indexed terms are from the text or from other places, or it can be anything. Questions? So far, everything fine? Why is this size here, this slide? Really, we have all different terms. We see I find something with the terms. I can have mixtures of these things. I can have keywords. It's called index terms that we simply should library. I might have a full text search combined with a control vocabulary. So there's also different ways to combine these things. Also something that we can talk about now, it's not really the topic, is the search model. What kind of search model do we have in usually when we talk about manual indexing? That is because we have so few terms. It's because we have very few terms. What indexing model does the library have? Remember from the introduction class, there are two different main different families of models. Exact match and partial match. Two different main models, families of models. What is exact match? That might not be a question in the exam, but if you don't know it, you cannot answer some other questions probably, so this is basic knowledge. We'll talk about it in detail, but you should know that from the class, like something like Boolean match, exact match between query and representation, and partial match, something like vector space model. There can be appropriate approximate match. There can be not a perfect match, but something between zero and one, something between a perfect and a non-match. Remember, this should be still there from the introduction class. And what search model do we have typically when we talk about manual indexing? What is the search model of our library? 
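A minimal sketch of the two model families just mentioned, purely for illustration (the scoring here is a plain count of matching query terms, not a real weighting function; weighting is the topic of the next session): an exact-match (Boolean AND) system returns an unordered result set, while a partial-match system scores every document and returns a ranking.

def boolean_and(query_terms, docs):
    # exact match: a document qualifies only if it contains every query term
    return [d for d in docs if all(t in d.lower().split() for t in query_terms)]

def ranked(query_terms, docs):
    # partial match: score = number of query terms present, best document first
    scored = [(sum(t in d.lower().split() for t in query_terms), d) for d in docs]
    return sorted(scored, key=lambda x: -x[0])

docs = [
    "wölfe wurden in niedersachsen gesehen",
    "niedersachsen investiert in windkraft",
    "ein wolf wurde gesehen",
]
q = ["wölfe", "niedersachsen"]
print(boolean_and(q, docs))   # only the first document
print(ranked(q, docs))        # all documents, ordered by score 2, 1, 0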
If you use the library, what is a search model of web search engines, most web search engines? And you use every day? Do they use partial match or exact match? How can I decide that? If you're unsure, how can you make a decision? Can you find out? Partial match, yes, of course, web search engines partial match. And what is the difference to the exact match? Yeah, so on the exact match, you just get documents or whatever that match exactly the query you... Yes, applied to query, yes. And what does a partial match model usually have? How are the results presented in the web search engine? In exact match model, you get the results set. All right, because you have perfect match, all the documents that you get are full hits, full relevant hits. And in a partial match, you have a ranking. You have a better one is better than the other, right? Based on word counts that we'll still come to. So in the Google Bing and so forth, I have a ranking in the library. What I have in the library? If you type a query in our library search system, you get a ranking or not? No, you say? Okay, uh-huh. What is then presented as the first document? Uh-huh. There is a number one and a number two document. And you could think that it is ranking, but of course it's not real ranking. It's based on... quite different looks identical, but it's quite different. It's based simply on the date of the document. So it's a formal criteria. The set that you get back is sorted by the date. The most recent books are on the top length lists, on the top positions. And for a web search engine, it could be quite different. Okay, so... Remember that. So another difference here. But today we go more into the terms. The first term that we hear when we talk about web search or partial automatic indexing is a so-called bag of words approach. What does that mean? We take a bag and all the words are thrown in. Means their context gets mixed up. They are not in the original context. In the sentence that they appeared, they are just all thrown together. So word counts as a occurrence, and it's not considered how the sentence or the context originally looked like. And what can be a problem here, for example? Maybe you get some kind of statement, and you don't get a lot of which... Very good....beversing the meaning. Negation, a big problem. If I have a document, in the last two months, no wolves have been seen in Niedersachsen. This will be a perfect hit for my query from the first class. It's negated. It's not what I want. It's maybe exactly the opposite, but it doesn't matter because negation is not considered. This is counted as wolves. Niedersachsen is relevant for the user. He gets it. So negations cannot be a good example for a simple context that gets lost. So syntax, semantics, like negations, no wolves have been seen, are lost. Why is that the case? Isn't that a bit stupid? Does it make sense? Shouldn't we... You get negations. Wouldn't it be better to put it on? What do you think? Well, we just simply cannot fully process natural language. That's the main reason. It's a technological border. For mass data, for huge data, we cannot find all forms of negation and exclude them. It's just too difficult. So our linguistic processing is very, very limited. We don't look for negations or anything like that. It's limited to the lexical and morphological layer of language. And current data, as I said, it's just too difficult right now, is not ready for this mass processing. You study with Professor Hayd about the... 
teaches the current state of the art in natural language processing, and he will talk about parsing and semantics and syntax. But this is not yet perfect enough for mass data like we have. For example, the parsing approach was adopted by Hartrumpf and Divelling, around 2005 or something like that. And their deep analysis only covered 50% of the sentences in a collection. That means they didn't find a parse for half of the collection. And for the other half that they could process, their interpretation was probably also wrong in many cases. Of course, they couldn't check this completely. So parsing just isn't ready yet. The next issue is, of course, the linguistic processing, the preparation. We might have a query with Baum and we could have a lot of documents. Some might contain Baum, others might contain the plural or a genitive form or compound words that include the word Baum. Now, which ones should we get when we query Baum? Of course, we expect these documents, right? What about this document? Yes? Well, if it just happens to mention two trees, right? It might not really be so important for me. I might be interested in these. Several wolves have been seen in Niedersachsen. If I'm interested in what happens with the wolves in Niedersachsen, for example from the first class, I want the plural as well. I might even want something like niedersächsische Wölfe, something like that. That's more difficult. More? The area of the wolf is growing. That would also be interesting, right? Yes? What about these two? What was it? Tree work, working with trees, and cutting trees. In English, it would be easier because we don't have a compound, we have 'cutting trees'. So if the query was 'tree', we would find this. What about the German case, Baumfällen? I don't think these documents would be relevant. So I agree that with these compound words, it is by the second word that we would have the working term. This is not interesting. We have a so-called technical term for the second part. The main part is the head of the phrase, the so-called head, and this is a modifier. We can have different kinds of work; it's modified by the tree: tree work, homework, whatever. So your opinion is we shouldn't get this stuff back, but only if our word is the second part. Let's think of an example. Door, Tür, Eingangstür. You would say for a query with Tür, we should get Eingangstür, but maybe not something like Türblatt, a part of the door. So we should only get a compound if our keyword is the head. What about Tür? Would you think so too? Everybody agrees? What are examples pro or contra? Should we only give back the head? Let's assume we don't have any documents with these words. We have our query Baum, and we only find these two. Do you want them back or not? Well, in a way, of course, we never know. It's a very abstract question. We don't know if this is relevant for us or not. However, we need to have an algorithm that covers everything. So we have to make a decision. And basically the decision about compounds is quite complicated. Let's take another example. One nice example is of course Niedersachsen. It's a compound, but it doesn't make sense to separate it. Only in the form Niedersachsen is it a named entity. And if we separate it, we really mess up. What about the idea with the head? Cheyenne? Okay, German. We have the word Stuhl. Now let's think of a few compounds where it is the head. Yeah? Good? Fahrstuhl? Yeah, very nice, Fahrstuhl. Another idea would be Dachstuhl.
Now what about the idea of the head? Right? Questionable, right? In this case, it really modifies the meaning of the head very much. Only the whole thing, or it's more related to Dachstuhl, is more related to Dachstuhl, and here, quite complicated because neither part is really related to the whole, because the whole creates something new. Dachstuhl and elevator. Well, so also this idea is very questionable. It depends so much on the English. You have so many different compound words. Basically, what we do in retriever is we try to cut them. We keep both parts and the compound. That in German leads typically to an improvement of 5 to 10%. So in what you're going to ask, we haven't talked about metrics yet, so do you know? You cannot know yet what is 5 to 10%, but of course this will talk about this. Overall, this improves general, so we cut it, and we have to deal with these cases that Dachstuhl, somebody who looks for a farran and someone who looks for a shul, will find this document. You'll find it in this case, and in this case, it doesn't really make sense. But a query with only farran doesn't make sense anyway, so the person will add some other terms. Overall, in the whole process, on average for most users, it turns out to be better to cut these things. If you have a balm, you will find also a balm arba and balm fern. What is the word really? Let's start with the compound. Of course, we have problems like somebody put a nice example here. We ask, where are the words here? If we look, typically words are limited by what? Blanks, right? So space between two words, and then if you are everything, there is between two spaces. So here is a space. So this is one word, blank, blank, blank. What about this? So if we do word separation here, place some blanks, we will find a few problems. Our first word will be three, and the dark should be indexed. Maybe not, why not? Well, the thing is about the dot, we usually use lamb stoplists because there are too many of them, and you don't meet them. They are not looking for dots usually. So we take out all these things, we don't care about sentences anyway. And the number itself, you could use it with a bigram or something like that, but free as it stands for itself, it doesn't carry any other information. Not really so useful. Maybe there could be queries, let's check later. So we have a number, response, the bomb, and the click, then another number, 1958, is that a word? Yeah, it's representing you, so it's something. It's something, it's an entity basically for a special time period. Yes, somebody might look for that. So it's a word. And then we have this problem of course, we take all our diacritical marks, so you're left with one character here, and something else. And here we also have a different problem, we have a Roman numeral, fifth something, so here or something, we would take this out and probably we would left with one numeral, same thing here, just we have to recognize that it's a normal numeral. Also getting complicated. So looking for words, what are words, how many words do we have? It's already not as easy as it seems. We can see that as many problems. We have hyphens between words sometimes, and so. In Asian languages we don't have blanks, we have to find the word borders by ourselves, so even more complicated. In Western languages at least, we have blanks. It's fairly easy but still, it's not absolutely. It's abbreviations, okay, so what else has to be done with our words? So of course, the idea is here. 
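A sketch of the two indexing steps just discussed, with the assumptions flagged in the comments: a regex tokenizer, which already shows the punctuation and number problems above, and a toy dictionary-based compound splitter implementing the strategy of cutting the compound but keeping both parts and the whole word, which reportedly gains roughly 5 to 10% for German.

import re

def tokenize(text):
    # split on non-word characters; keeps numbers like 1958, drops dots and hyphens
    return [t for t in re.split(r"\W+", text) if t]

# toy lexicon of known simplex words; an assumption purely for illustration
LEXICON = {"baum", "arbeit", "dach", "stuhl", "fahr", "tür", "eingang"}

def split_compound(word):
    # greedy two-part split, allowing a linking 's' as in Eingangs-tür
    w = word.lower()
    for i in range(3, len(w) - 2):
        head, tail = w[:i], w[i:]
        if (head in LEXICON or head.rstrip("s") in LEXICON) and tail in LEXICON:
            return [head if head in LEXICON else head.rstrip("s"), tail]
    return []

def index_terms(text):
    terms = []
    for tok in tokenize(text):
        t = tok.lower()
        terms.append(t)                    # always keep the full word / compound
        terms.extend(split_compound(t))    # add the parts when a split is found
    return terms

print(index_terms("Der Dachstuhl und die Eingangstür wurden 1958 erneuert."))
# 'Niedersachsen' would not be split here, but only because its parts are not
# in the toy lexicon; deciding such cases well is exactly the hard part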
How is this step called now, from Baum to Bäume, Baumes? Stemming, exactly. That means I have a stem, and I have to find all the words that are related to this stem and that basically mean the same thing. They are only there for grammatical reasons; they don't really change the meaning of the word itself. Baum, Baumes — in the sentence they make a difference, of course, but not for the content. So they are all brought back to the same stem. Before that, of course, we have to decide what is really a word, and compound words or phrases like Rotes Kreuz, red cross — is that one word or two words? Might be difficult; we'll talk about that later. And then comes another process: are all the words added to the index? What about this one — who reads French here, or any other Romance language? The word de: is it necessary? And what about the de in Charles de Gaulle? That is more complicated, because Charles de Gaulle is itself a named entity. Let's leave that aside: de appears very often in French, and we call such a word a stop word. Your definition of a stop word was: it's not necessary for queries. That's a good definition from a user's point of view — you would say nobody would ever search with de. Maybe somebody would search for Charles de Gaulle, or just de Gaulle, because he knows that stop words are removed anyway. But nobody would search with de alone, or with the German der. Articles. Have you ever searched with something like this? Anybody? Okay. So that could be a reason from the user's point of view. But how do I define stop words from a system point of view, from a language point of view? I have my 30,000 words, maybe — which of them are stop words and which are not? I cannot ask every user, would you search for that, would you search for that. That's not the way to determine stop words. So how can I find which words are so-called stop words, meaning they are eliminated from the index, they are cut out and I cannot search for them — while taking them out makes the system more efficient and maybe also more effective? What belongs to the stop words and what not? By the way, simple numerals might also be stop words, and single characters are usually considered stop words because they don't really carry anything. What other kinds of words do we have here? Articles, exactly. Pronouns — he and she. Prepositions. Why prepositions? What is characteristic for all three of these word classes? We wouldn't search for them, but there is also something else, from a system point of view, from a language point of view: they happen to be in every text. Yes, and that is the crucial point, exactly; that is the main reason they are eliminated. We wouldn't search for them, and we also know that they don't carry real information, because they appear in any text. If I have 10 million German texts, they will all contain these words, and if I search with an article I will find all of them — it doesn't make sense. Maybe they are not in literally every single text, so how could we rephrase this a little less strictly? They are very frequent overall. Very, very frequent words. And this is the typical definition that we have for stop words.
It is a bit fuzzy, but that is the definition: very frequent words. And of course there are more such classes — conjunctions, for example. Again: very frequent classes that appear in any document and that nobody searches for. What else? Pronouns and prepositions we already had, and conjunctions — these are closed word classes. But we could also go further. Say we have a website that deals with programming, programming tips, where people exchange ideas. If I look at all the posts in this community, I may find very frequent words that are not yet in the standard list — by your definition, words that appear in almost all of the texts. I have 10,000 posts on programming: frequent words could be program, or code, or Java, something like that. They are not function words, and they could even be English — that doesn't matter. They become stop words for this special domain, because they are everywhere and therefore they don't help. Now let's look at an actual stop word list, this one for German. We see the things we have talked about: particles, temporal expressions, single characters, words like beide or besonders — just because they are very frequent, they don't help me find anything specific. But also things like für, dafür, dagegen. Are those really stop words? And not all adjectives, of course: something like besonders, yes, because it is very frequent, but not something like farbig, colourful, or other content-bearing adjectives; they are not so frequent, so they are not eliminated. It depends on the frequency — even schön could be a stop word if it is very frequent in a specific collection. That is a decision we have to make when we design the system. Typically we just accept one of the standard stop word lists, like this one published by Jacques Savoy, or the one that is shipped with the software we use, like Lucene. Usually we don't worry about this very much, but it is certainly a problem. Look at the case of für and dagegen, for and against. Can I query with these words, and does it make a difference? For example, I could look for documents that are in favour of or against the use of nuclear power: I would type für Kernkraft, or gegen Kernkraft. If these are stop words and they are eliminated, what are the results of these two queries? Identical, of course — I get the same results. So if they are stop-worded, I cannot search with them, and it makes quite a difference whether I am looking for arguments for or against nuclear power. So what happens in search engines if I type in a stop word? You said nobody does it anyway — but can I do it, what happens, and why might I want to? I might do it in this case, für Kernkraft, because I want arguments for one of the two directions; or I could just type in a stop word by itself. What happens then?
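Before we look at what the big engines actually do, here is a small sketch of the two ideas just described: filtering against a fixed stop word list, and deriving additional domain-specific stop words from document frequency. The word list and the 90% threshold are invented for illustration; in practice you would take a published list like Savoy's or the one shipped with Lucene.

from collections import Counter

STOPWORDS = {"der", "die", "das", "und", "für", "gegen", "in", "an"}   # tiny excerpt

def remove_stopwords(tokens, stopwords=STOPWORDS):
    return [t for t in tokens if t not in stopwords]

def domain_stopwords(documents, doc_fraction=0.9):
    """Terms that occur in at least doc_fraction of all documents."""
    df = Counter()
    for tokens in documents:
        df.update(set(tokens))            # document frequency, not term frequency
    n = len(documents)
    return {term for term, count in df.items() if count / n >= doc_fraction}

print(remove_stopwords(["für", "kernkraft"]))     # -> ['kernkraft']

Note how the für/gegen problem shows up immediately: once für is on the list, the query für Kernkraft loses exactly the word that carried the intent.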
One suggestion: we could just use it as a normal word, like any other word. But if it has been stop-worded, it is not in the index, so I should get an empty result. What do I actually get if I type a stop word into Google or Bing — and why might I do it? Maybe I get something else entirely: a dictionary entry, results in another language, generic entries. The engine could say: I have eliminated stop words, this is a stop word, so I send the person to a dictionary — that could be one reaction. But what else could the query be? An abbreviation, perhaps. Yes: there is, for example, the Deutsches Institut für Erwachsenenbildung, DIE. Now it is a problem if die has been stop-worded and eliminated — I cannot find this institution, and that would not be nice for the users. Of course I could do more complicated processing first — is it a content word, a named entity, or an article? — and only then decide whether to throw it out. And if we look at reality, it is quite interesting. Here I queried the stop word für, in Bing, maybe a year ago or so, and we find — surprisingly, and in this case it is not an abbreviation for anything — things like "für Facebook registrieren", sign up for Facebook. So this is exactly what we think of when we think of a stop word, and it is still indexed. Google and Bing have not eliminated it; they do not do what is written in every retrieval textbook, that stop words are eliminated. They just don't do it. For für Kernkraft I also find documents containing both terms; in theory I should only find something about Kernkraft, because für shouldn't be in the index at all. So why is that? Let's recapitulate before we judge. Stop words don't carry content by themselves, and there is a strong efficiency argument: if I take out the stop words, I remove between one third and one half of the size of the index, so I need much less computing power and much less storage — and the list is only a few hundred words for German, maybe two or three hundred, language-specific but cheap. So there is every reason to remove them — and still Google does not do it. Why not? One reason we have already discussed: named entities and abbreviations of organisations. UND, for example, looks like a stop word, but there is in fact an organisation called UND — a broadcasting corporation or something; I don't remember exactly, and it doesn't matter: the point is that something called UND exists. That is one reason not to take out stop words. What could be other reasons? One: Google is based on a partial-match principle, so including more words gives a better ranking — and they might think: hardly anybody queries a stop word, but if somebody really wants to search for one, we have it; it doesn't hurt the other users, and it is additional information, für Kernkraft versus gegen Kernkraft. Any other reasons? One more important one — think of your own degree programme: Google works in many countries. What could the reason be there?
The German article die is quite different from the English verb die: they are written the same way but carry completely different information. Exactly. Stop words are language-specific — in German die is a stop word, in English it is a verb, and there are many other such examples. If I work with a collection that contains many languages, I have to be very careful about excluding anything: a stop word in one language can be a perfectly good content word in another. So that is quite risky. Abbreviations, multilinguality, and the attitude that if somebody wants to search for a stop word, we give it to him — these might be the reasons why Google and the other search engines have decided to do it differently from the textbooks and to keep the stop words in the index. Here we see the case of DIE again, indexed not only for the organisation but also in the normal text of the title. And in Bing, with the same query, Die is also recognised as a place in France, with hotels offered for that town, and there is a German Wikipedia article about the town Die — however it is pronounced — in France. So not only English, but also a French place name; quite complicated, and if I exclude it, I might not find any of this. And die can of course also be part of named entities in German: Die drei Fragezeichen, for example — the phrase as a whole is a named entity that contains die, and if I eliminate die it becomes hard to recognise the named entity later. So despite the typical textbook knowledge, even something as simple as stop words raises a lot of issues. Sometimes there are even domain-specific stop word lists. Legal information systems for lawyers, judges and other legal professionals, for instance, say: we keep für and gegen in our index, we do not eliminate them, because our clients might really need them — a court ruling for or against something, a decision overruling another case, can hinge on what is grammatically just a preposition, but for my query it is crucial. And again the case of named entities: MAN, NUR — German stop words, and at the same time names of German organisations that somebody might want to search for. Also interesting is the case of single characters and single numerals: vitamin A and vitamin C. Here I queried "a vitamin" — this was quite some time ago, it would be interesting to check again — and Google gave me vitamin C as the first hit; if I turned the query around, "vitamin a", it gave a different result. Remember the first session, where we said the order of the query terms doesn't matter? That is what the textbook says, but sometimes the engines do something different — and in this case it even makes sense, because vitamin A is different from vitamin C, even though the difference is only a single character. If single characters were thrown out as the usual stop words, you could not distinguish the two queries at all. Quite interesting. Okay, now for the rest of the class let's talk about morphological issues — about stemming, as you called it, the reduction of word forms to a stem. Time is running fast. In our languages, words are changed according to their function in the sentence; their morphology, their appearance changes, and we have to be aware of that when we count words: we want to count all the Baum forms together.
This is not such a big problem in some East Asian languages, where there is sometimes no morphological change at all, but in German it is quite a challenge — more than in English: if I don't do stemming, I lose less in English than in German. And there are languages where it is an even bigger problem: Finnish or Hungarian have much more morphology than German, so retrieval for these languages needs many more rules. Here is a query example with hotel. Interestingly, I also directly find the plural, hotels. Typically nobody needs two or three hotels for the same day, but if I query with the singular I am also given the plural, and I will usually be satisfied. Now what happens if I type in the plural, hotels, or any other plural? It should be the same — we can check later whether that is really true. Basically stemming should work that way: all word forms are brought back to the same stem. German has four cases; Hungarian has about eighteen; English also has cases, but the words hardly change, so there is not much to do. What we typically do is reduce nouns to the nominative singular — Mann — and verbs to the infinitive, or to the infinitive without the last one or two characters, so essen is reduced to ess, something like that. Inflectional endings: activities could be reduced to activ; applies, applied, applying to apply — we would say this is all the same, and if somebody searches with apply he should get all of these forms. Of course we also get problems: computer, to compute, computation, computerization — should all of that become comput? Maybe not, because with computation and computerization the meaning has changed a bit, and maybe the user doesn't really want those. There are two — actually three — main ways to do this automatically, three different approaches to stemming, or Grundformreduktion in German. The main and most often used one is rule-based. The rules you have to learn when you learn a language — when the morphology changes, which form the genitive takes — are basically just inverted: if a form like Mannes with -es at the end occurs, you reduce it to Mann, you just take away the -es. Another approach starts from the observation that there are so many exceptions that simple rules cannot cover. What is a really difficult word class that brings in a lot of exceptions? Verbs — and which kind of verbs? Irregular verbs; they cause a lot of trouble for learners, and for systems as well. Which ones, for example? Singen is a good example. What is the problem with singen? In German, as in English, we have a vowel change inside the word: sing, sang, sung; singen, sang, gesungen — highly irregular. This seems to be a vowel change that already existed before German and English separated; these are very old verbs. New verbs that come into the language are typically regular.
So-called weak verbs, schwache Verben, are regular; the strong verbs in German are the irregular ones. And then we have a problem: we cannot just look at the ending, we have to do something inside the word, and that is quite challenging. There are a few hundred of these verbs, and they are very frequent, because they are old and useful words; they are used a lot, so we would make a lot of errors. An alternative is to say: we know all these verb forms somewhere, but we cannot really capture them with rules — studying the rules for the different verb classes takes a few semesters of historical linguistics. So let's just write a list of all the word forms in our language, and the computer looks each form up. For example, for gesungen we would have a lexicon entry that says: gesungen belongs to singen. You cannot find that with a rule, because of the u and the i — that's very difficult. So we just use a dictionary, a list of word forms. This is called table lookup: a full list of all the inflected words, and next to each one the stem. Now, this also has disadvantages. Which ones? It is quite a long list — maybe 50,000 to 100,000 word forms in German to be reduced to perhaps 10,000 stems. That is a lot to search through, whereas rule-based stemming typically has at most a few dozen rules; applying 20 rules is more time-efficient for the computer than searching a list of tens of thousands of entries for every word, and we have millions of word occurrences in our collections. So it is time-consuming. Another difference? Is language really so static? Not really — if we open the newspaper, we find new words all the time that are being invented and come into the language. Think of something like Grexit, which nobody used two years ago; now everybody knows what it means. It would have to be in the lexicon, and this happens all the time, so we would have to extend the tables constantly — again very inefficient; nobody can really do that. That is why most systems use rule-based stemming. The n-gram approach, finally, is similarity-based — n-grams were introduced in the introduction lecture, and we will come back to them. So, to summarise the comparison: the lexicon is computationally expensive and never complete, because language is dynamic; the rules are cheap but have exceptions. Now some interesting examples. In German we have finden, and we have gefunden, findest — several changes inside the word. And we have Fund, Finder, Findling. For Findling we would probably all agree that this is a different word already; it should not be reduced to finden. But when does that happen? Whenever we have -ling? Not always — it gets complicated. And again, very tricky for German indexing, we have compound words, and those mostly should not be reduced, because their meaning is often different from the meaning of their parts. Of course we also have a lot of regularity: the German plural, for example, is sometimes quite regular — we just add an -n at the end, Name, Namen — and it is easy for our system to have a rule that inverts this. Then there are different ways to create nouns from verbs, and that is where it gets a bit tricky.
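Coming back to the table-lookup idea for a moment, here is what it boils down to as a minimal sketch. The few entries are made-up examples, not a complete lexicon — which is exactly the weakness discussed above: anything not in the table, like Grexit, simply falls through.

FORM_TABLE = {
    "gesungen": "singen", "sang": "singen", "singt": "singen",
    "gefunden": "finden", "fand": "finden",
    "bäume": "baum", "baumes": "baum",
}

def lookup_stem(word):
    w = word.lower()
    return FORM_TABLE.get(w, w)   # unknown forms (e.g. "Grexit") come back unchanged

print(lookup_stem("gesungen"), lookup_stem("Grexit"))   # -> singen grexit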
Changes within the verb are also quite complicated, and the vowel change is even trickier. So the lexicon can never be complete and requires too much storage, and the rule-based approach has the disadvantage of all those exceptions that simple rules cannot cover. Let's look at a rule. This could be a very simple rule in any German stemmer — what does it do? It is pseudo code. If the length of the word is larger than 4, then I check a second condition: is the character at the last position an N, and the character at the position before it an E? In other words, does the word end in -en? If so, return the word, but first set that position to the end-of-word marker — meaning I cut off the -en. So if I have gehen, does this rule apply or not? Yes: it is longer than 4 characters and ends in -en, so I take away the last two characters and my stem is geh. If I find Herren, the rule applies, I take the last two characters away, and my stem is Herr. What about Rahmen? The stem would be Rahm — and now there is a potential confusion, because Rahm is also a word by itself. Interesting; you can check in a search engine whether they confuse these. Is there an example where the word is not longer than 4 characters? Amen — churchgoers know about Amen. If you have that, you do not apply the rule, because it is only 4 characters, not larger than 4. Okay, so this is just one example. How many rules are there? In recent times we have actually seen a reduction in the number of rules that are applied. The Porter stemmer, for example, the very famous one for English, has about 60 rules. Before that, the earliest stemmers worked with many, many rules, very detailed, trying to capture all the differences. Jacques Savoy, who was actually here last fall and gave a presentation, reduced his rule sets further and works with about 10 to 27 rules, depending on the language. And then there is the minimal stemmer: if I have no stemmer ready and don't want to program one, I can simply say, remove any s at the end. What does that give me? In English the plural is resolved — hotel, hotels — and verbs in the third person singular — walk, walks — are also resolved; that already covers most of the cases. In Romance languages the plural is also resolved — castillos becomes castillo. So a minimal stemmer with one rule already helps a little, but the quality will of course increase if I use something like Jacques Savoy's rule sets or the stemmers that come ready-made with Lucene, as we will do in the lab class. And I should always be aware that there can be problems, and be able to identify them. The potential errors are over-stemming and under-stemming. Over-stemming means reducing words to the same stem that do not really belong together — things like Messe and messen: maybe in ancient times they had something to do with each other, but nowadays we would say these are quite different things, and if I query Messe I don't want documents about messen. Or experiment and experience: -ence and -ment are typical endings that a stemmer might strip away.
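Before we look at these error types more closely, here is the -en rule from the slide translated into runnable form, together with the one-rule minimal stemmer — a sketch only, not the actual Porter or Savoy rule sets.

def strip_en(word):
    # the rule from the slide: only words longer than 4 characters ending in -en
    if len(word) > 4 and word.endswith("en"):
        return word[:-2]
    return word

def minimal_stem(word):
    # the "minimal stemmer": remove a final s
    return word[:-1] if word.endswith("s") else word

for w in ["gehen", "Herren", "Rahmen", "Amen"]:
    print(w, "->", strip_en(w))
# gehen -> geh, Herren -> Herr, Rahmen -> Rahm (collides with the word Rahm),
# Amen -> Amen (only 4 characters, so the rule does not fire)

print(minimal_stem("hotels"), minimal_stem("walks"))    # -> hotel walk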
So with experiment and experience we might get over-stemming, because the two words mean completely different things. Under-stemming is the opposite: words that do belong together are not grouped together — typically the irregular forms. In some cases one can argue about whether two forms really belong together or not, but I think the idea is clear: these are the dangers, and which of them occur depends on the rule set. We can try out a stemmer if we want to. For example, in the system that we use in the lab class I can enter my own text and check what comes back. Here we see that irregular verbs are a problem: essen is not dealt with correctly, the irregular forms are not reduced to essen, and the same with the irregular forms of think and thought — they are not recognised as belonging together. So typical off-the-shelf stemmers are not always the very best. Anyway. Now the third approach, and that will be relevant for your homework — there will be a small calculation with stemming — is n-gram based stemming. N-grams have been introduced before: bigrams, trigrams, sequences of characters in words. The easiest way to explain this is an example. Take a word like Analyse. A trigram analysis starts with a blank plus the first two characters, and then takes three characters at a time, overlapping: first the blank with A and N, then ANA, then NAL, ALY, and so forth. These are the trigrams that make up the word. Sometimes we do it without the leading blank, so the first trigram would be ANA — that depends, you can do it as you like. A four-gram analysis works the same way with four characters. Understood so far? Now, how can I use that for stemming? The idea is: if two words have a lot of n-grams in common, they are probably related, they probably belong to the same stem; if they have few n-grams in common, they don't. How can I calculate that — also at home, for the homework? First I determine the sum of the n-grams in both words: how many n-grams does the first word have, how many the second, and I add them up. Then I find the number of common n-grams by comparing them: this one is the same, this one is different. And then I multiply the number of identical n-grams by two and divide by the sum of n-grams in both words. That gives me a similarity score for the two words, and I can say: if the score is higher than some threshold x, the words belong to the same stem. For example, two forms like warten and wartet share many trigrams — high similarity, probably the same stem — while warten and liegen have practically no common n-gram: low similarity, not related. Of course there can be problem cases, but not so many. A small sketch of this calculation follows below. N-grams are also sometimes used as the representation itself, instead of words — also interesting, but we don't have time for that. The message is: yes, it is worth using n-grams if nothing else is available; but if a stemmer is available, use one, and the best choice is a rule-based stemmer. Now let's go to the homework. The other problems we unfortunately have to skip — things like multi-word expressions, Rotes Kreuz and the like, which can easily get lost.
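Here is the n-gram similarity as a small sketch, following the formula from the slide: two times the number of common n-grams, divided by the sum of the n-grams in both words. I pad with a leading blank as on the slide and count unique trigrams; the threshold you put on top of the score is your own choice.

def ngrams(word, n=3):
    padded = " " + word.lower()              # leading blank as on the slide
    return {padded[i:i+n] for i in range(len(padded) - n + 1)}

def similarity(w1, w2, n=3):
    a, b = ngrams(w1, n), ngrams(w2, n)
    return 2 * len(a & b) / (len(a) + len(b))   # 2 * common / sum of n-grams

print(similarity("warten", "wartet"))   # 0.8 -> probably the same stem
print(similarity("warten", "liegen"))   # 0.0 -> not related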
And here we have again Lucene.
This lecture gives an overview of Information Retrieval. It explains why documents are ranked the way they are. The lecture covers the most relevant approaches to content representation: automatic indexing and manual indexing. For automatic indexing, the frequency of words is of special relevance, and its influence on the weighting of terms is discussed. The most relevant models are introduced. The session on evaluation discusses newer metrics like the Normalized Discounted Cumulative Gain. The session on information behavior provides a brief overview and explains its relation to IR. The session on optimization mainly introduces term expansion and fusion methods. The session on Web retrieval is concerned with quality aspects and gives a basic insight into the PageRank algorithm.
10.5446/16258 (DOI)
Let's start with the class on representation, part one. We have talked a bit about representation already. Last time we delved into counting words, and we talked a lot about the differences between IR and databases. We saw that representation — and especially imperfect representation — is the key characteristic of information retrieval. And even before we counted any words, when we talked about the wolf in Niedersachsen, we agreed that wolf and Niedersachsen are terms that would represent that document. Okay. So words — a few words — represent a full document, a full book or article. We could also say that some entries in a database represent a business process or a business project, where we could even debate whether that is still representation or already a database. An image, for example, can be represented by different things: by a tag, which is basically also a word — hopefully a word with some meaning assigned to it — or by a colour distribution or something similar. Typical difficulties we encounter are simply inherent to natural language, like homonyms and synonyms. Bank is the typical example we see once in a while in this class: it can mean a piece of furniture to sit on, or an institution in the financial area. Synonyms, of course, also pose a problem: if I query for money, I might not get documents about capital, although I could be interested in them, because money and capital are related — in some contexts they are synonyms. With homonyms we have the opposite problem: we query with bank, and typically we are not interested in both the financial institution and the furniture; we usually want only one, and the documents about the other meaning are garbage for us. There are different ways to represent content. Manual — intellectual — indexing and automatic indexing are the two approaches we will talk about; we can also talk about abstracting and clustering, where the word representation means something slightly different. Today we will briefly cover clustering and then talk mostly about manual indexing. Next time we talk about automatic indexing, which is of course the state of the art today for large text collections. But there are still many niches, many application areas, where manual indexing is done; intellectual work is still carried out in information retrieval, and people actually sit down, look at documents, and assign terms to them. So we spend almost a full class on it. Abstracting is a form of representation where a shorter text replaces, or stands for, the full text. We use it typically in the second step of the information process: when I see a result and I am not sure whether I should read the whole document, I might just look at the abstract, a brief form of the document content, and then make a decision about relevance or non-relevance. The abstract can be written by the author himself — in our discipline, typically, all publications have an abstract that the authors are required to write — but in other disciplines, like the humanities, it is uncommon to write one, and often information professionals write the abstract without having written the article, and are paid for that.
That is also something that still happens. What could be the abstract of a book, and where do we look for it? On the back, yes — on the back cover we often find a hopefully brief representation of the content, often a little biased by the author or the publishing company, but we are typically aware of that. Automatic indexing is something for next time; let's talk about clustering very briefly. Clustering means putting objects into clusters. The objects can be anything, basically — books, text documents, articles, web pages. For us it is always text documents, unless we say otherwise. Now we want to put them into groups. Why? How is that helpful, and what do we put into which group? These are the three main questions: which objects should we put together in groups and which not; why is it helpful; and what does it have to do with representation? Let's start with the first one. Which objects should we put into groups? Objects that are similar — exactly. That is the definition of clustering: similar objects are in one cluster, dissimilar objects are in different clusters. That's it. So we want semantically similar documents in a group, and if documents are not in the same group, we assume they are not similar. How is that helpful for the user, and what does it have to do with representation? The two questions are closely related. What we want as users is to not have to read all the documents — that is impossible. I want to get to the relevant documents, typically more than one, because I have some semantic interest, I am looking for an answer to some question. And if the documents that are similar are together in a group, that can be very helpful: once I find that group, I have not just one relevant document but potentially a larger number of them. The same reasoning answers the representation question: within a group, a single document is like a placeholder, a representation for all the other, similar documents. I only have to look at one, then I can make a decision — relevant or not relevant — for the whole group, and I don't have to look at all the others. The assignment of a document to a cluster is a semantic decision, it has to do with meaning; the cluster represents an idea or a concept, and whether I am interested in that concept is the relevant question. There are many different methods to do clustering automatically, and they all start from some similarity between documents.
And let's assume we have a similarity measure, a similarity metric — something you know briefly from the introduction to information science. You can also think of distance: similarity and distance are just opposite ways of expressing the same thing, and you can imagine a two-dimensional coordinate system in which we place the objects. Whether the measure should be symmetric is also something we could discuss, and we will talk much more about similarity in later sessions. For now, let's assume the similarities between five documents have already been calculated. We have, for example, a similarity between document D and document A of 0.01, meaning a low similarity, or a high distance; E and B also have 0.01, while E and C have 0.2, meaning C is more similar to E than B is. Now, which of these five documents should end up together in a cluster? Remember your answer: similar objects. So which documents should end up in a group? Take a moment — it's not very difficult. D and E will be the first group. Why? Because their similarity of 0.7 is the highest value here — whether that is high or low in general depends on the context, but it is the highest in this matrix. So D and E should be in a group; that much we can say for sure. Next? The next decision is still easy: A and B, at 0.6, the second-highest similarity between two objects that have not been grouped yet. Now we have two groups, D–E and A–B. Then it becomes harder: what about C? It is sitting there by itself. One suggestion: put it with A and B, so that A, B and C form a group, because the similarity from C to A and to B is 0.4, and since A and B are already a group, we can just add C to it — C is more similar to those two than to D and E. Then we have two groups, one with A, B, C and one with D, E. Are there opposing views? Someone might say: that's not so good. A and B are at 0.6; if you throw in C at 0.4, that is a significant difference. We have such a nice group A–B — adding C is almost a compromise. Does C really belong there? What do the others think? It is a difficult decision, of course, and there is no fixed rule for it. A nice way to visualise this is a so-called dendrogram, from the Greek word for tree. Each object sits on one axis, and we go from more similar to less similar and join objects: whenever we reach a certain level — 0.7 — we join the first two.
Then we join A and B, but at a lower similarity level, and at the next level, 0.4, we join C to the A–B group, as was suggested. At an even lower level we could join all five together, at 0.2, because at that level C is also similar to D and E, with a similarity of 0.2. Now the question is: should we have two groups or three groups — A, B, C and D, E, or A, B and C and D, E separately? Depending on that decision we get "cleaner" clusters, meaning a higher similarity within each cluster: with three groups, the similarity inside every cluster is at least 0.6; if we merge C in, we move to 0.4, and we could argue that this is not sufficient. So if we cut at 0.4 or above, we have two groups, and in the dendrogram we can nicely see how the decision about the number of clusters affects the similarity within the clusters. It could be even more extreme — it is a toy example, of course. What could be another decision? We could also say: we want higher similarity, we accept only 0.8, or 0.7 is really the minimum. Then we end up with four groups — D and E together, and three single documents; at 0.8, five groups, each document its own cluster, which is not very helpful. Or we set a very low threshold and get one cluster with all documents — also no information in that, it doesn't help at all. So I have to choose one of the possibilities in between, and which one depends on the context, on the user and many other things. In reality, of course, we have much larger collections. Here is a nice example: a dendrogram for countries, where more similar countries are joined earlier, at higher similarity values — the dendrogram is just rotated. Now I could ask you: from which period does this similarity data probably come? Before 1990? Obviously — that is exactly what we see, this must be a Cold War scenario; some countries cluster together, evidently the Eastern bloc here. There are other clustering algorithms, and we will come back to clustering algorithms in later classes. For now, let's take a quick look at where clustering can be used in the information retrieval process — you have seen this diagram before, and we can project applications of clustering onto it. For example, we could cluster the whole collection: 10 million documents, with top clusters and sub-clusters, and that can be used for browsing — as a user I can explore the full collection along the clusters, like a dendrogram of topics. We can also use clusters for a more efficient similarity calculation: I do not compute the similarity of the query to every document, but only to each cluster, and then look at all the documents only in the clusters with high similarity. And — this is what is done most often — clustering of the result set: I run the normal retrieval process, the result set is still very large, then I cluster it and show the user: here is your result, and there are five main groups of documents.
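Going back to the toy example for a moment, here is a tiny sketch of how the threshold decision plays out: documents end up in the same cluster if they are connected by similarities at or above the chosen level. The matrix is the one from the five-document example; the merging strategy shown (single linkage via connected groups) is only one of several possible choices.

SIM = {
    ("D", "E"): 0.7,  ("A", "B"): 0.6,
    ("A", "C"): 0.4,  ("B", "C"): 0.4,
    ("C", "D"): 0.2,  ("C", "E"): 0.2,
    ("A", "D"): 0.01, ("A", "E"): 0.01, ("B", "D"): 0.01, ("B", "E"): 0.01,
}
DOCS = ["A", "B", "C", "D", "E"]

def clusters(threshold):
    groups = [{d} for d in DOCS]
    for (x, y), s in SIM.items():
        if s >= threshold:
            gx = next(g for g in groups if x in g)
            gy = next(g for g in groups if y in g)
            if gx is not gy:          # join the two groups
                gx |= gy
                groups.remove(gy)
    return groups

for t in (0.7, 0.4, 0.2):
    print(t, clusters(t))
# 0.7 -> {D, E} plus three single documents (at 0.8 every document stays alone)
# 0.4 -> {A, B, C} and {D, E}
# 0.2 -> one cluster containing all five documents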
Let me ask you: has anybody ever used such a clustering search engine? Yes — which one? You don't remember? Okay. So it is not part of the everyday routine, even if you have seen it. Anybody else? Interesting: we see that this is not used very much. It is a good idea, but it is not part of everyday information practice, not even for information specialists — or future information specialists like you. Still, you should at least have heard of these systems. There was Vivisimo, for example; I'm not sure it is still online, because nobody uses these things, they are not very successful commercially and sometimes disappear. Here you see: I made a query about retrieval systems and I get clusters for my results — here a cluster on evaluating retrieval systems with two documents. Now I can see what I actually meant with my query: am I more interested in automatic indexing, or maybe in the literature? Then I can explore a much smaller result set, whereas in a typical engine I would have to look at all the documents. Similar here for Clusty — I think that one is still online. I queried Fussball and it gave me clusters for clubs, league tables, and so on — English and German mixed, not very nice, of course. Or here, for a query on software, I can decide which aspect of software I am interested in. Again helpful: I have some additional work — I have to query and then evaluate the clusters — but once I have invested that, I have far fewer documents to look at. And there is the Carrot2 system, which does clustering plus a visualisation of the relationships between the clusters. Okay, so much, briefly, on clustering. Any questions? Not the case — very good. Then we move on to manual indexing, also called intellectual indexing. Here manual work is being invested: people sit down and index documents; information experts are trained to do that, it is their daily work. We are interested in how it really works, what these people do, how they are supported, and what kinds of concepts there are. First of all, what are typical situations where this is done — what kinds of jobs do you know where people do manual indexing? The library, yes — the typical example, excellent. The university library, and also the public libraries in Germany or anywhere else: people sit there and index documents. Not only that, of course, there are other tasks too, but librarian training is to a large extent — at least one third of the education — about knowing how to do manual indexing. Where can you study that? Sorry, misinterpretation of the gesture there. Are you trained to be librarians after you finish? Can you, with your BA in IIM, apply for a job in a library? You can always apply, and there are graduates of our programme who work in libraries, but it is of course not the typical job, not what our training focuses on. There are other degrees in information science, though: information science is historically often rooted in library science, so library studies courses are part of information science degrees, and manual indexing is the main thing you learn there — the Hochschule Hannover, for example, offers a degree in information science that is much more rooted in library studies. Where else, apart from the library? Any ideas? Information professionals at — I don't know the English term. Just say it in German, no problem: Fachinformationszentrum. A typical German term. What could the translation be? A centre for...
These are special-purpose, special-domain information centres. There is no real general English term for it, but these Fachinformationszentren exist in Germany — for which scientific fields, for example? Technical information, yes. It is quite a unique infrastructure that Germany has here. The Fachinformationszentren were founded in the 1970s, and they were meant to overcome one of the drawbacks of libraries. Libraries typically index what kind of objects? Books, of course. But a lot of information — especially scientific information — is conveyed through patents, which we had in the other class, and through what else? If I don't write a book but still publish something, what could I publish, and where? A paper — and where does the paper appear? Maybe online, but if I just put it online, nobody might read it. In scientific journals, yes. And the other organisational structure where we publish is the scientific conference — a social construct where people come together, talk about things, exchange ideas, which are also written down in conference proceedings. So next to books we have papers or articles, published in journals or in conference proceedings, and journals and conferences have the characteristic that they appear periodically: a journal maybe four times a year, a conference every year. Librarians call the book a standalone, independent publication; the article inside a journal is not a standalone publication. Where do I find the journals in our library? If you want to look at a scientific journal — say, Information Processing and Management — where do you go? You would need a library tour for that. It is on the ground floor of the library: you cross the desk, and there is a large hall, sometimes used for exhibitions; you go straight through it, and there are wooden display shelves that you can open. You see one issue of the journal on the outside, you open the flap, and the most recent issues of the journal are inside that box. Since the box has limited space, once a full year of new issues has accumulated, they go to a shop and are bound together; when they come back as a book, that book is put on a shelf in the journal section of the library — I think it is marked with Z, for Zeitschrift, in our library, just behind those wooden shelves. Okay, so we have independent publications — and how do I find an article inside a journal?
Now that is exactly the problem: the library only indexes the journal itself, the annual volume or the conference proceedings, so you didn't know what was inside a journal until you actually looked at it. This was a big problem in the 70s — in online times it is much easier — and so the so-called Fachinformationszentren were set up in Germany for different disciplines, and their job was to index journal articles and conference papers directly, so that you could find them by the content of the paper and not only by the journal. This infrastructure was created for several disciplines, not for all: one for technology in Karlsruhe, one for educational science in Frankfurt, and so forth. Some of them still exist today, and their job is to provide the best information for researchers in their area. So there, too, people do this kind of work — part of the staff is mainly concerned with manual indexing; they look at journal papers and decide on index terms. All of this is in the public sector, paid with tax money — some of the information centres are also quite successful in the market, but obviously not all of them. Now, do we have these things also in the private sector, in companies? No ideas yet? Let's keep thinking about it for the rest of the class; maybe we will have some ideas later. Now let's look at the process. What basically happens? Take the librarian: he gets a new book, he looks at it — and then what? How much time does he spend on the book? Does he read the whole book, or does he index it without reading it? Trying this out yourselves will be part of the homework today. Does he maybe just look at the table of contents and make his decisions there? What would be a strategy for not reading the full book — or does he need to read it to be able to process it at all? Let's look at the result of his work first; then we can judge better what he actually has to read: only the title, only the table of contents, or the full text. What is the result of the work? Manual indexing means the assignment of representative terms — words — to a document, to an object. And the reason the librarian does this is not to get to know the book himself; the library is there for its users, for you, so that you can find this book more easily, or find it at all, among the large number of documents that are there. So the result of the librarian's work is what? Yes: tags, meaning words, assigned to the book. Tag is basically the word we have been using for ten years or so for social media objects, but it is essentially the same thing: we assign words to the book. How many words? A practical question: we have a book of, say, 200 pages — how many words are assigned? Think of yourselves: you want to find this book, so what should these people do for you? Two words, one word, a hundred words? And how does he get these words — where do they come from? That is also a crucial question.
I look at the book — how do I find a word to put into the catalogue, into my index? When you later enter or select that word somewhere, you are presented with this book, maybe among others. So where do these representation terms come from? Does the librarian make them up on the spot? Let's say he gets a new book on programming apps for Android: "Tips for Creating Apps for Android". What could the terms be? I could assign apps, development and Android. Other suggestions? Programming — why not? Now, what is better, programming or development? And do we use the verb or the noun, developing or development? We would say that in this context they are synonyms, but I have to make a decision — or do I put in both? Typically we would take the noun. So: development or programming — which is better? Or rather, let's not decide which is better, but ask: how can I find out which is better? Any help for my poor librarian? This is the everyday question of librarians. We could count which word is more frequent — that is what automatic indexing does, and counting by hand would be far too much work for the librarian, but I can use frequency information from a general collection: if programming is more frequent in the language, I use programming; if development is more frequent, I use development. Agreed? Not so sure yet. What would be the problem here? Development is very general — it can be applied to almost any topic. Development for Android — then we know what we are talking about; but development by itself? We have the problem from the first slide: it can mean a lot of things, like bank — development bank, developing countries, development in architecture. It is a term with several meanings, and only in combination with a programming language or a system like Android are we sure what it means. The user who searches with development alone will have a big problem. So we shouldn't use such a general term: for every candidate term the librarian has to check not only whether it is frequent, whether people use it at all, but also how ambiguous it is. Development would be frequent but highly ambiguous — bad for the library, bad for the user. So programming would probably be better in this case. And now let's look back for a moment: wait a minute, what was the title of the book? Who still remembers? "Tips for Creating Apps for Android" — okay, creating. So we have decided on the noun programming as the indexing term. But wait: programming does not appear in the title at all. Can we use it then? I think yes — why?
Because the menu indexing has less words than the automatic one. The person can decide which one is the best. Yes, he can make a decision. I mean, it's creating, we said creating. Maybe other books are called developing apps, right? Developing is not so good because of the reasons we talked about. Creation is also not so good because of reason we talked about. And what could be other things to help him in this decision making process? So I have to agree, it doesn't have to be in the title. It doesn't have to be at all in the book. Maybe the book doesn't say programming. None of its pages, right? We can still use it as indexing term if we say yes. This is the best description for the content, right? Something that will never happen in automatic indexing as we will see next week. Not any support for this book. A lot of things to think about. How frequent is it? How ambiguous is it? Probably there are lists for different topics that we can choose the words from. So there are not too many different words. Yes, absolutely. There is a list of terms that they can look at, right? And what are these lists called? If you go to the library, where do you go? To A. Copy machine maybe, but first you get a book. Where do you find the book? How is the library organized? Let's see, our library. Did you not show your hand? How is the library organized? In different sections, there are special subjects. Scientific disciplines, subjects like which disciplines do we have? We don't have a section for medicine, right? We have a section for... It's really interesting for which subjects do we have sections? Again. What is this? CSC? What else? Spanish. What is Spanish called? No, it's in German. It's another issue. Now which language does this library use? Computer science, CSC is in English. English, for example, I know for sure, is called English in German. Then we have BVL, I think. We have Spanish, SPA maybe. So ambiguous is it Spanish or Spanish? We don't know. Anyway, let's not get too many open questions here. We have these disciplines. This is an entry, basically these are the allowed terms. The top terms. I have computer science and I have subterms like multimedia or information systems. Within information systems, I might have information retrievers. Actually, we don't have a shelf for information retrievers. These are the subterms. So I have a hierarchical category system with disciplines, sub-disciplines. Then I end up with a shelf for all books on computer science information systems. There I can look around. This is like clustering, all the similar books are together. That's the general idea. Where is information science, by the way? If you look for books for this class, where do you go? Well, for your degree, information science or international information management, where do you look at? How come you only know about linguistics in Spanish and all these strange topics? Have nothing to do with information science? Where do you have to look for books? Or a ground floor? Ground floor, CSC. CSC, why not information science? There's no category for it, so we have to look for it. Why? It's strange. Why not? Why is there computer science? If you study, well, there's no degree computer science here. You cannot study computer science here, informatic. As a degree, it's part of some degrees, but you can study information science. But there is no shelf for information science. Strange. Why is it? Why is that the case? 
Why does a librarian, library boss say, okay, we have computer science, but we don't have information science? These are, of course, historical reasons, right? When the university was founded, they decided on some categories. And this is typically not changed later, even if disciplines are added. So information science books that we buy for you are dispersed all over the library. Most are in computer science. Others are in mathematics, whatever, in economics, in linguistics, in media studies on the first floor. So you're required to walk quite a lot if you look, search for books. Because there has not been a discipline library science for when the university was founded. And also the BUB category, book, and bibliotheque also doesn't refer to study. Now it's close to the reading room where you can find books on knowledge organization library studies and some information science things. So at least six, seven categories that you will have to look at. But since you're an information specialist, that doesn't worry you. Okay, let's go back to our example. Tips for creating apps for Android. We have the term programming. We have kind of agreed now computer science, right? There's a subdiscipline programming languages or programming guides somewhere. What else? We have suggested Android as a key term. Now what about Android? Does library have a shelf for Android books? Books on Android. Same thing. It's quite a new term. It's even more recent than the addition of information science to the library. So there is no shelf for that. Also, there is no... It's historically not in this list of terms as you suggest as you called it. So first of all, it cannot be used. Of course, it would be helpful and a librarian can add some keywords in addition and not in this. So he may do it or not. Basically, for this problem, it's not so serious because even Android might be a search term and he finds it in the title. So how do we call this list of terms that the library is given and that the user also can use when you look for the disciplines and the sub-disciplines? You work with this list of terms to reach finally your book. How do we call this? The abstract term is... It's called control to vocabulary. A librarian uses typically controlled vocabulary. He has to be trained on the... The librarians in Germany have kind of agreed on a basic vocabulary that they can use. So... called basis classification. And within this classification, they choose the subject that they have at their local library and use these terms. That also leads to a lot more efficient work. So don't have to be worried that the librarians will waste too much time with the books that we buy. Because once the book is indexed somewhere in Germany, this information is exchanged and the other libraries use it. So it's more efficient and they use kind of the same vocabulary. So this is a controlled vocabulary. Of course, we can also have open vocabularies, but for manual assignment, typically a controlled vocabulary is required. And that will also help me what other functions of a controlled vocabulary be. Think of the problem with creation, developing, programming. We have synonyms in our language. And the control vocabulary says, that's it. We use this term. And everybody else will stick to this definition and use the preferred term, X, whatever, programming maybe. And then there might be a side note, or if you enter developing, the system might tell you, you're using developing, basically, we call it programming. 
And refer you to the correct or agreed definition. So again, help for the indexer, but also help for the user in the control vocabulary. Synonym control, we have categories, narrow, broader terms, narrow terms, things like that. Okay, so we come back. We have explained a lot of things about the indexing process. Basically, it's a selection from indexing terms. We still haven't agreed on the number of terms that we need. We need at least one term, so we have some place to put it in the library. But the library can add other terms. And maybe say, if I wouldn't have put it in this category, the second best would be the other one. So we have different categories here. The number of indexing terms, how many, if a book has 1,000 pages, should we use more or no? Is it depending on a page? Well, between one, three and ten objects are typically assigned. So imagine this reduction of information, we have one book of maybe 200 pages, 300 pages, and the librarian selects five terms. So immense reduction. You can only find it with these five terms, or with the title search, and otherwise you cannot find it through the rest of the content. And how much time is given to the librarian? There was a last question, I think, that we still haven't resolved. You look at the book, a typical number is for one information unit, which can be an article, as in the information center, or a book is 30 minutes. It's obvious that you don't have time to read the book. Maybe you can check the content, but typically not very much time there, including the time that you enter this thing into the system. This information, the decision, so over a day you have to maybe process 16 books. That would be a typical workload here. And then for some you might be faster because you know this topic. For some you might need some more time because you're not so familiar with it. Okay? So this is the process. Where else is this done? I've asked if there's something in the private sector, maybe there's some other ideas, where manual indexing is done. We have, still on the web, we have applications of manual indexing, for example, so-called internet catalogs like Yahoo, WebDE, or Demos. Demos is an open project where volunteers categorize web pages and say, ah, we have to turn web pages that belong into this certain category. And then you can maybe more easily find them if you look at Demos for a certain category. Libraries, information centers, information infrastructure centers. Oops, misspelled here, but you know what I mean. Then there is also private sector companies, enterprises, where manual indexing is done. Like news agencies, writers, or DPA, they sell news to journals. And they say, there is a new message, of course, the journalist has to write the news. And then he himself or somebody else says, this article, this news belongs to sports. This article belongs to economics. Typical section of the newspaper, manual work. Guy has time for that, he has to do it, he's paid for that. So again, an application of manual indexing. Then the news, a typical process in the journalist, I have to fill the page for economics. I only look for the news releases, press releases for the news agency ticker for the economy information. So again, helps the user in having to look at fewer documents. Patent offices, of course, something we talk in the other class about in the other class. Company archives, archival, are also typical application of manual indexing. 
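To make the relation types just mentioned concrete, here is a minimal sketch of how a thesaurus entry could be represented in code; the vocabulary shown is a toy example, not an excerpt from any real thesaurus:

```python
# Toy thesaurus entry with the relation types mentioned above:
# BT = broader term, NT = narrower term, RT = related term,
# UF = "used for" (non-preferred synonyms), USE = pointer to the preferred term.
thesaurus = {
    "programming": {
        "BT": ["computer science"],
        "NT": ["app development"],
        "RT": ["software engineering"],
        "UF": ["developing", "creating"],
    },
    "developing": {"USE": "programming"},
    "creating": {"USE": "programming"},
}

def preferred(term: str) -> str:
    """Follow a USE reference to the preferred indexing term, if any."""
    return thesaurus.get(term, {}).get("USE", term)

print(preferred("developing"))  # -> programming
```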
People put something in a binder, write something on a folder, put it on a certain desk. This is office work, but it's the beginning of manual indexing: I try to assign terms so I can find the information again later. And big company archives, like a newspaper archive with photographs of people, articles, whatever people collect, are also a typical application. And we have talked about one application already; another is of course social media. People say, okay, my image is about whatever, right, soccer, or the name of somebody. If you name your picture based on what you can see on the picture, this is a form of manual indexing. If you do it in your spare time, it's for you, to re-access your pictures more easily. Okay, so, there are a lot of applications. Well, we still have some time. Started at 2, right? So, what does it mean? As we said, it's a selection of representation terms: as specific as necessary, as exhaustive as possible. So, cover everything that the thing is about, but be very specific; don't just call it medicine or computer science, be specific, but not too specific. And it's important for information experts who work professionally in the FIS to know the tools and resources. If you want to become a volunteer on Demos, you of course should know the Demos structure. If you want to work at a library, you have to be trained with the library catalog and the basic classification as well as your own library's categories. Quantity, quality, and of course the usage context are very important here. What kind of users do I expect? Now we come to the controlled vocabularies that we talked about. There are different kinds of controlled vocabularies. The simplest would just be a list, as was suggested. The next, more complex one is a thesaurus. A thesaurus is a systematic collection of words, and systematic means sorted in some way. In certain ways it's sorted, for example in alphabetical order. It's for a special domain; I will put some examples of domain-specific vocabularies, thesauri, on the Learn Web, so you can look at them. And it's a support tool for information retrieval, for representation and for searching. Here we have a definition, a norm definition: a thesaurus in I&D, as we call it, information and documentation, is a collection of terms, terms which are in natural language, in a certain area, a specific domain, for indexing, saving, and accessing. So: keyword lists, thesauri, a systematic collection. A keyword list will just be a collection of the allowed terms. The thesaurus adds some more specific information. It will give information on broader terms. For example, there will be information retrieval, BT, broader term, computer science. Or table, broader term, furniture. There can be narrower terms, just vice versa. Related terms: computer science has something to do with, related term, information studies. Synonyms: in this case, like programming and developing, for a specific meaning of developing, we would have a definition of the preferred term. Developing will have an entry saying, don't use it, use programming. Preferred terms. This is what the thesaurus brings in. The next more complex controlled vocabulary would be a classification. Basically, the library is a classification, because now we have a hierarchical order of our terms. The thesaurus is just in alphabetical order. Now we can say, okay, we have a first level: CSC, psychology, Spanish, like in our library. And then we have several terms below; we have a classification. Hierarchical structures: the library, Yahoo, a web catalog, or Demos again.
The next more complex controlled vocabulary would be an ontology. Here we have descriptions of features and their allowed values and their relationships, more or less as you know it from data modeling in the introduction class. I did some data modeling in the information management class with ERM, right? Entity relationship modeling: what is a book, a library, a user, how can they be in relationship, one-to-n and one-to-one relationships. And this is what an ontology is. For example, an ontology of an airplane would say: typically an airplane consists of several things, for example two wings. A one-to-two relationship: each airplane typically has two wings. Then there will be an entry for wing, and the wing would have some flaps to steer. And then I can say, this object has some flaps, and by logical reasoning, from what it can attach to, I can find out what this object I have might be. Very simple. Okay, so a description of what an object is. For example, a book would be an object in the library, in the library ontology. A book is an object consisting of metadata and content. Metadata would be the author, the publication year, the publisher, and so forth. There would be a data model for the library about what a book is. It would not include something like color, because the color of the book is not so relevant for the library. Of course, a book has a color, but the data modeling in the library would ignore this information. Okay, questions? Not the case so far, so we move on. Now we have the solution. In our example, the controlled vocabulary would either avoid ambiguous terms, or it would make very clear which meaning it uses. So an ontology on furniture could use the word bank if they want to, and state: okay, here we talk about something to sit on. Whereas an ontology in economics would say: okay, whenever we say bank, we talk about a financial institution. And also the preferred terms we talked about. Okay, the thesaurus and every controlled vocabulary, also the ontology, can be shown to the user to say: hey, that's what we have, you can find things about these terms here. It's a support for the retrieval or for the exploration process. We can see many of these controlled vocabularies. Here it's for pedagogics; we have metadata. It's just an interface that allows you to select controlled terms from a controlled vocabulary. Key terms, Amazon area; of course we could also have different expressions for this area, right? For this geographic unit we could say, well, North Brazil or something like that. But here we have a decision. Okay, there are many problems, of course, still associated with controlled vocabularies. We can have different perspectives. One might be from the perspective of psychology, another from the perspective of medicine, and those are maybe not congruent and might, in the overlapping area, lead to problems. Well, we're not done yet. The most important things are to come, in another course. And as we said, maybe also in our example, we can see that the role of a term is quite unclear. For example, if I have a document with these two terms, production and industrial robots, I don't know if it's about the production of industrial robots, or is it about production using industrial robots? Which can be quite different, and makes a difference for my relevance judgment, right, for my interest. And I don't know which meaning is applied. So when it comes to the combination of terms, we have two more complex kinds of vocabularies.
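Returning for a moment to the book example from the ontology discussion above, a minimal sketch of such a data model could look like the following; all field names and sample values are illustrative only, not a real library schema:

```python
# Sketch of a library data model for a book: only the attributes the library
# cares about (no colour), plus the assigned indexing terms from a
# controlled vocabulary. Field names and sample values are illustrative.
from dataclasses import dataclass, field

@dataclass
class Book:
    title: str
    authors: list[str]
    publisher: str
    year: int
    index_terms: list[str] = field(default_factory=list)  # controlled terms

b = Book(
    title="Tips for Creating Apps for Android",
    authors=["A. Author"],        # hypothetical author
    publisher="Example Press",    # hypothetical publisher
    year=2013,                    # hypothetical year
    index_terms=["programming", "Android"],
)
print(b.index_terms)
```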
We have two different ways of thinking, basically. We do brief and schools, two different approaches. And these will be actually two final terms that we have to learn today. Precoordination and postcoordination. Ah, it's doing the German terms. Ah, yeah, but also English here. Precoordination is a monolithic classification with all the content inside. Whereas a postcoordination allows several facets, different parallel control vocabularies that you have to combine at the end. Looks quite strange, but let's look at the examples and it's much easier to understand. For example, a typical precoordination approach is the Dewey Decimal classification. Anybody ever heard about Dewey? Dewey classification? No? It's an approach, similar to the library, to say, has been in the 1900s somewhere. Dewey was, I think, an American. We said, oh, it's strange that we don't have a real good classification system. I want to create a classification system for all the world knowledge. So every book that appears, every article, I have a place for it in my Dewey Decimal classification. And he said, I will assign a typical Decimal system. So there are always 10, well, placeholders or 10 aspects. And we have a hierarchical system and we can get narrow or almost specific and more specific. For example, it starts with category 0 to 9, then there is another, another, another, another, that can be more specific. 3, 6, 0, for example, is Social Services Association. 3, 6, 1, then is General Social Problems. 3, 6, 2, Social Welfare Problems and Services and so forth. Now he says everything can be inserted somewhere. Problems, for example. Here is the main classes. Of course, Dewey was a kind of information science librarian person. He was the most important main class was information science. This is a German translation of the Dewey Decimal classification. And then he had nine categories. And as I said, everything should fit somewhere in here. Let's look at Dewey, but also at one of the problems that we can see in the German translation. He said I started with general knowledge, general things, library studies. 0, 0, 3 is General Encyclopedia. 3, 1 is Encyclopedia in American English. 3, 2 is American Encyclopedia in English. Between Encyclopedia in German and Dutch and so forth. What do we think about it? Any problems that we can identify here? Since we have to put all the knowledge into these nine categories, it's quite strange. 0, 3, 9 is Encyclopedia in other languages. In other categories, the other will be quite broad. All the other languages is quite a lot. In this case, I have another aspect. I have basically Encyclopedia. Now he wants to say, okay, we have Encyclopedia in different languages. I also want to put this in the same hierarchy with nine categories or ten. But I have more than ten languages always, or more than ten countries or whatever I do. So I could either again make a hierarchy classification, languages in the continents or whatever. But I would have to do this also, not only here for Encyclopedia, I will have to do it again for maybe dictionaries. And something else that appears in different languages. So this will appear again and again, copy to other places in this two decimal classification. It might be more interesting to say, oh well, I simply have Encyclopedia, dictionaries and something. And then we have parallel information, additional information about the language. This can be another facet. 
And the facet idea, for example, we can see in the read-up system, which is a system to access economic information. And here we have different parallel facets, each of them representing some kind of controlled vocabulary. For example, again: countries; economic branches in the electrical industry, television, mobile or digital, something, whatever, I don't know the translations here; telecommunication issues and problems. So the whole electrical industry has to, when you produce something, when you have an enterprise, you have to tell the statistics office, the government, what kind of products you produce. When you export them as well, you use a different controlled vocabulary again. So this is a controlled vocabulary for the electrical industry. Now I can select my industrial branch, I can select a country, and then I can select an economic indicator like export or growth rate or something like that. And if we tried to put all that together, some things would have to be copied all over again, right? Try to do this for all of it, and I can say, okay, where do I start? I start here, then under this I have all countries, and then for each country I have all of those; I have to copy this into here. And here it's more elegant: I select in parallel, and solve this problem. And then I have these examples. This is called pre-coordination, because before I start to do retrieval, before my information process, everything is already coordinated into one classification system. And this is called post-coordination, because after indexing, when I'm looking for it, I have to coordinate myself, right? So this is post-indexing coordination, we could say. After the indexing work, I, as the user, as the searcher, have to do the coordination. Now to the homework. The homework will be to assign a category to a book in our library. Here we can see what the librarian did, for example, for this book. There are so-called Schlagwörter, keywords. These are terms taken from a controlled vocabulary, but not from the one for putting the books onto shelves. For putting books onto shelves, we use the so-called Sachgebiete: CSC 611 and others, but the first one will decide about the shelf where you can find the book in the physical library. So your job will be to find this and maybe some secondary categories for this book. Besides the Sachgebiete, we also have here the basic classification. This is the Basisklassifikation; it's identical across libraries in Germany, and this is the local one, the local, library-specific category that has to be set. And this is the category and the signature of the book, which you will find on the back of the book. It's just the place on the shelf where it is, plus an abbreviation depending on the author. So the name of the author will be starting with V, let's see, if I remember correctly, Vickery, yes. Okay, so the task is to find this Sachgebiet. You can also look in the basic classification. Just enter that, you can access it and browse through the categories. You can see what kind of knowledge is used here. You can also search and access the local categories. That's something you might need, because then you can see what is there and where you can best put this book. Okay? So this might be helpful. Here you see the main categories within computer science, and then there are still subcategories. So three levels: computer science, main categories, subcategories. So, more about the Sachgebiete. Here's the homework. Here is the book.
If you need more information about the book, try to find it online. But here is the title and the author, in blue. Okay? Now, how do we turn this in? We can do it online, I guess, right? I will open a homework task on the Learn Web, where you can just enter some text. You don't have to explain a lot. Basically what I want is the Sachgebiet, the local shelf category where we would put this book. Clear? Questions? On the homework? On the class? No? See you next week.
This lecture gives an overview of Information Retrieval. It explains why documents are ranked the way they are. The lecture explains the most relevant ways of content representation: automatic indexing and manual indexing. For automatic indexing, the frequency of words is of special relevance, and their influence on the weighting of terms is discussed. The most relevant models are introduced. The session on evaluation discusses new metrics such as the Normalized Discounted Cumulative Gain. The session on information behavior provides a brief overview and explains the relation to IR. The session on optimization mainly introduces term expansion and fusion methods. The session on Web retrieval is concerned with quality aspects and gives a basic insight into the PageRank algorithm.
10.5446/16238 (DOI)
Classes in Polymer Dynamic. Based on George Philly's book, Phenomenology of Polymer Solution Dynamics, Cambridge University Press, 2011. And today, this lecture is lecture 15, more on probe diffusion. I'm Professor Philly's, and this is lecture two of our discussion of probe diffusion, experimental studies of polymer solution dynamics, based on measuring the diffusion of rigid objects, small or large, through polymer solutions. In our last lecture, we discussed the dynamics of small and large probes, in which we have something that might be a fraction of a nanometer, up to hundreds of nanometers across. It diffuses through polymer solutions, as it diffuses, we can measure how rapidly, and we can determine how the diffusion rate depends on the concentration of the matrix polymer, the polymer and the surrounding solution, how it depends on the molecular weight of the matrix polymer, and other features. Today, we're going to push on to consider a few other things. The first thing we're going to discuss, which is section 9.5, is re-entrance. I'm not quite sure where to put re-entrance into the discussion. It could be described as, there is this extra phenomenon you see on rare occasions. It's not clear why you're seeing it. It's not clear what it is. But it's been seen enough times that we can group all of the observations together. The starting point, we measured diffusion versus concentration, and we are doing this for polystyrene spheres, and polyethylene oxide water. This is my own work with the Oman's. We are in polyethylene oxide water, and the simple behavior, which is what is usually seen for the diffusion coefficient, gives us a curve that looks like this on a log-log plot. That's a stretched exponential. However, and we didn't actually quite discover it in the natural order of things on how we worked things out, if you take particular size of spheres and particular molasses, and particular molecular weights of polyethylene oxide, and since it's not the small ones or the large ones, I think it must be some specific chemical property of the particular polymers. At least that was our best guess. The right size sphere is what you see is a phenomenon that looks like that. That is the diffusion coefficient first increases with increasing polymer concentration. And then goes down again, and gets back to about where you would have expected D to be at that concentration. Gee, what is this? Well, that's a good question. You can see the same phenomenon or something somewhat similar. There is nice work by Wann at all, and what Wann at all looked at was a cross-linked polystyrene sphere and polyvinyl methyl ether toluene. And what they found was something that looked, if you look at their diffusion coefficients, a bit different. Namely, here is the smooth curve you find if you fit most of their measurements to a stretched exponential. But in the middle, what you suddenly see is this region where the diffusion coefficient heads off for whatever reason, and then comes back again. We're talking about polystyrene spheres that are seemingly of a reasonable size. The concentration in what are called natural units, where the intrinsic viscosity has dimensions one over concentration, C eta gets up to about 36. And there is this narrow regime where, gee, things don't work the way you would have expected. The question is, what are you looking at? Well, there are other cases that look like the first of these two found for polyelectrolytes. I've just shown two examples for neutral polymers. 
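For reference, the smooth curve described above as the simple behavior is the stretched exponential in concentration used throughout the book; written out schematically, with D_0 the zero-concentration value and alpha, nu treated as fitting parameters:

```latex
% Stretched-exponential concentration dependence of the probe diffusion
% coefficient; D_0 is the dilute-limit value, \alpha and \nu are fit parameters.
D(c) = D_{0}\,\exp\!\left(-\alpha\, c^{\nu}\right)
```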
The question is, how are you seeing this? What are you seeing? One general explanation has to do with the question, well, what sort of a diffusion coefficient are you looking at if you are measuring things with quasi-elastic light scattering? And if you simply have a single set of diffusing species, everything is completely uniform, your light scattering spectrum or the dynamic structure factor, that's all discussed in chapter 4, on a semi-log plot is a nice straight line. You can measure its slope anywhere, you get the same answer. Life is very simple. And that would correspond, as a general statement, to Brownian particles in water, some system with no memory. And polymer solutions, though, life becomes more complicated. And if you look at the relaxation spectrum that you get with light scattering, you can get several relaxation modes. What information does light scattering give about those modes? Well, let us redraw this, and I will now redraw the light scattering spectrum on a log-log plot. And if the modes are separated, you get a first relaxation like that. That's sort of what an exponential looks like on a log-log plot. And then you may get another exponential or stretched exponential or something. And you have different things happening on different time scales. The reasonable presumption, which is there, then there's a way to test it, we'll get to in a second, is that both of these processes are going on at all times. And therefore, if you measure the initial slope of the relaxation, it's the sum of the rate determined by this faster mode plus the rate determined by this slower mode. And light, if you do what we call cumulant analysis, that is, if you ask, what does the initial slope of this curve look like, and it doesn't have to be approximated as a straight line, it could be approximated as something that is fancier than a straight line, what does that initial slope give us? Well, it gives us the average of all of the relaxation rates weighted for how much amplitude they each have. On the other hand, if you sit around to long times, if you look out here and measure the relaxation rate, you only see the slowest mode. The fast mode has relaxed away. Now, that doesn't mean that if you wait a bit, the particles are no longer moving in the rapid mode. What it means is the particles have moved far enough that their positions with respect to the rapid mode are not correlated with their initial positions, and therefore, even though the rapid process is going on, the rapid process quits contributing to the relaxation spectrum because this has gone off to zero. The new process relates to particles starting at positions that are uncorrelated with their initial positions. It is sometimes said, if you see these fast and slow modes, there are people who look purely at the time dependence of the diffusion process and say, oh, you're seeing caging. And the notion in caging, which is perfectly legitimate possibility, is that we have a pro-particle. It is somehow confined into a region of space where it can move rather rapidly, but it can't get very far. And if it wants to move a large distance, it must, for example, hop to a new region where it's happy to sit, and the hopping process is slow. 
Now, if you do have caging, if you have regions that are in some sense low potential energy or dynamically restricted but free to move within, then over short times, you'll see motion within a cage and that long times, you will see the motion, the slower process in which things hop from cage to cage to cage. Now, once the object gets here, it still does have the fast motion within its new cage, but these positions and these positions are uncorrelated, and therefore this motion does not contribute to the spectrum. Having described caging, I would like to emphasize that the time dependence measurement gives you absolutely no evidence as to whether the caging interpretation is correct. Question? What happens to the processes when it's moving from the confined region and moving slowly when it's hopping? What happens between here and there? How does that show up in the time scale like that? The slower process, how does that show up in the time scale? The slower process when it's doing this is the slow mode. That is, there is a part of the position of the object that stays correlated with its initial position as long as the object is trapped here. And only when the object has done a bunch of hops does the last piece of its initial position get forgotten. Thank you. Okay? So that is the question. The reason you don't see anything is that light scattering spectroscopy, a single spectrum, only gives you information about motion on one distance scale. Now it's a Fourier scale, not a linear distance scale, so you're looking at the relaxation of a spatial Fourier component of the concentration. That's why you can see both modes in the light scattering spectrum. However, if you just look at a single queue and measure the spectrum as a function of time, which is how the experiment is usually done, you are not directly generating any evidence of hopping. Indeed, if you go back a few chapters, there is this paper by, I believe it was Nemoto, the collaborators, where they look at light scattering and sedimentation. Sedimentation is intrinsically a long distance process, light scattering, light wavelength distance. And what they demonstrate is that the long distance motions are faster than the slow distance motions, which is sort of the negative of caging in some sense. Okay, so we have said, at short time, you are seeing all of the relaxations put together in a light scattering spectrum, and at long time, the light scattering spectrum only shows you the slowest relaxation of modes. You will sometimes encounter a literature error, and the literature error claims that if you look at the light scattering spectrum at short times and measure the initial slope or whatever, you are purely and exclusively seeing the fast modes. That is complete nonsense. It is simply not true. Okay, so if there are several modes, can we just measure them directly? And that is answered in section 9.6, and 9.6 discusses the spectrum with multiple modes. Now, we will come back to discuss multiple mode spectra again when we discuss optical probe studies of hydroxy-probe, cellulose solutions. However, there are so many studies of HPC solutions that they are grouped all in their own subchapter, their own section, and we will get to those separately. And we start out with studies by Bremmel. And what was done was to say, well, we will take solutions of sodium polyacrylamide, and it is a polyelectrolyte, and we will put in probes, and the probes that we will put in are polystyrene latex or hematite particles, and we look at the spectrum. 
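Before turning to those results, the two-mode picture and the initial-slope statement made earlier can be written out schematically, with A and Gamma the amplitude and relaxation rate of each mode (a sketch of the standard cumulant argument, not a formula quoted from the book):

```latex
% Bimodal relaxation: the spectrum as a sum of a fast and a slow mode, and
% the initial (cumulant) decay rate as their amplitude-weighted average.
g^{(1)}(t) \approx A_{f}\, e^{-\Gamma_{f} t} + A_{s}\, e^{-\Gamma_{s} t},
\qquad
\bar{\Gamma} = -\left.\frac{d\,\ln g^{(1)}(t)}{dt}\right|_{t \to 0}
             = \frac{A_{f}\Gamma_{f} + A_{s}\Gamma_{s}}{A_{f} + A_{s}} .
```

At long times only the slow term survives, which is why the tail of the spectrum shows the slowest mode alone.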
And what was found was that there were two spectral modes. The two spectral modes were both fairly close to exponential. They could be characterized separately in terms of a diffusion coefficient. But what was observed under various conditions is that if you have several modes, life can become much more complicated. And you can have modes whose relaxation rate does this and modes whose rate does that. That starts to look familiar. That's the polyethylene oxide thing. And modes for which you only see this, and well, you could always think it might do that. And so you have two modes visible in the spectrum, and the behavior of the two modes is different in different cases. Now, having said that, there is now a possible explanation, or partial explanation, for some of the polyethylene oxide data. Namely, if you have a spec system that's giving you two spectral modes, it's possible that under some conditions one mode is dominant, under some conditions the other mode is dominant. With more limited anyhow technology that was available in the early 1980s, which is when the polyethylene oxide data was done, well, sometimes you basically see this as the dominant mode in the spectrum, and sometimes you see this. And if you examined the same system with modern technology, you'd find that two modes were present, and one or the other was more prominent. Okay, let us consider to advance to some additional work by Bremel and Dunstan, and we are looking at polystyrene sulfonate spheres. What is the point we say polystyrene, and you will notice there, there are carboxylate modified polystyrene spheres, or this modified or that modified, and in the case in question, they're in a copolymer. Okay, why do I mention the modification? Well, it's like this, you have a sphere, it's made of polystyrene. Polystyrene is not at all water soluble. If it were water soluble, you'd pour coffee into your cup in the morning, and the cup would dissolve. And it would be full of polystyrene, which is not necessarily something you want to drink huge quantities of, not if it's water soluble suddenly. And the answer is in order to get this stuff into water and persuade it to stay there, what you have to do is to charge up the surface, and the charge up the surface is usually done by chemical surface modification. So for example, you have, there's a carboxylate group, it's an organic acid group, and it's been fully ionized, so you add a little base, or that was done by the manufacturer, really. And now you have these little charged spheres, and since the spheres are charged, they repel each other, and they're happy to stay in water, and they're a real solution. Okay, so what Bremel and Dunstan did was to look, and they see their modes again, and they see things that look like this, except, well, perhaps it doesn't go on quite as far. And then they could say something, they do two things. The first thing they do is to say, gee, the diffusion coefficient can be used to calculate a microdiscosity, a data micro, this is the Stokes Einstein equation, but it's being written where the variables I measure are the particle size and the diffusion coefficient, and the quantity I calculate from them, and K and T and 6 and pi, the quantity I calculate from them is the microdiscosity, the viscosity that would explain the diffusion coefficient. 
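Written out, the relation just described is the Stokes-Einstein equation solved for the viscosity; as a sketch, with R the probe radius and D the measured diffusion coefficient of the mode in question:

```latex
% Microviscosity inferred from a measured mode via the Stokes-Einstein
% relation; R is the probe radius, D the diffusion coefficient of that mode.
\eta_{\mathrm{micro}} = \frac{k_{B} T}{6 \pi R\, D}
```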
And what was found for these experiments was that the microdiscosity for the slow mode appeared to be larger than the micro, the orthodox viscosity of the solution, and the microdiscosity that corresponded to the fast mode was clearly less than the viscosity of the solution. That is, if you have two modes, you have two diffusion coefficients that you can infer from them, and because you have two diffusion coefficients that you can infer from them, you can calculate a microdiscosity for each one, and since the two curves are not parallel at all, you get two different behaviors for the microdiscosity. The next thing that they did was to say, well, we will look at several scattering angles, and because we are looking at several scattering angles, the wave vector Q, the scattering vector in the light scattering experiment, changes. Now, the reason that's significant is the thing that you're measuring in light scattering, the dynamic structure factor depends on the scattering vector. What does the dynamic structure factor tell you? Well, we wait a while, and eventually there is a fluctuation in the concentration, a cosine-useoidal fluctuation, that has a wave vector Q, and the wave vector Q corresponds to the scattering vector of the scattered light. Now, actually, these fluctuations go on all the time. They occur simultaneously at all wave vectors, not just one, but if you wait for a moment when the scattered light is really bright, you can correctly infer that the fluctuation of the correct scattering vector to get light out of the laser beam and to your detector, that fluctuation must be fairly large. And now you wait and ask what happens at later times. And on the average, you have to do this many times. That's what the correlation function measures. The size of the fluctuation on the average relaxes back to zero. It doesn't do it all the time, relaxing back to zero, or it could never get big. It fluctuates, but on the average, if you wait for moments when it's large, the fluctuation relaxes back to zero. And the relaxation is the scattering vector. If you change the scattering vector, you're changing the distance scale over which you're watching emotions. Now, since you're looking at a concentration fluctuation that looks like this, particle emotions over distances that are significantly less than this wavelength contribute to relaxing the fluctuation. And the particles can move various different distances, but if they start at a maximum and head off towards the minimum, they reduce the size of the fluctuation. So you're not looking at exactly emotions on a single distance, but there's a distance scale in there. So what happens? What happens when they did this experiment? There are two modes, a fast mode and a slow mode. And what they found was that the diffusion coefficient of the slow mode is independent of scattering angle. And that is the behavior you get if you have a process that is purely diffusive. A single diffusion process will give you a relaxation that goes as e to the minus the diffusion coefficient q squared t. That's a single exponential relaxation. The relaxation rate goes as q squared. But if you pull the q square out and ask what is the diffusion coefficient, the diffusion coefficient is the same on all distance scales. And so what they said is that their slow mode process goes as q to the zero, and that implies that their slow mode process is in fact diffusive. The fast mode process, however, was quite different. 
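For reference, the q-dependence argument just made rests on the single-mode diffusive form mentioned above; schematically:

```latex
% Purely diffusive mode: single-exponential relaxation with rate D q^2, so
% the apparent diffusion coefficient \Gamma/q^2 is independent of q.
S(q,t) \propto e^{-D q^{2} t},
\qquad
\Gamma(q) = D q^{2},
\qquad
D_{\mathrm{app}} = \frac{\Gamma(q)}{q^{2}} = D \quad (\text{independent of } q)
```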
They measured the apparent diffusion coefficient at different scattering vectors. A larger scattering vector corresponds to a smaller typical displacement; large q is short distances. And what they found is that as they made the distance scale shorter and shorter, the particle motion was faster and faster. The important issue is that since they're saying that at shorter distances the particles move more rapidly, we are not discussing a diffusive process here. We're discussing something rather different. The other issue is, going back, the slow process is q-independent. And so, gee, that actually speaks to caging models. The reason it speaks to caging models is that if you have a particle that does Brownian motion by parking in a cage, hopping to another cage, parking in a cage, hopping to another cage, this random hop, hop, hop is the random walk that gives you the slow diffusion. If you look at very short distances, something should happen to that slow process, because at short distances the slow process, hop, hop, hop, starts to become visibly discrete. Well, that's not what you see. What they see is that the slow process, whatever it is, is q-independent. Okay. Let us chug ahead. That's Bremmel and Dunstan. And we'll look at another set of experiments. The new set of experiments is due to Delfino and collaborators. And the notion in the new set of experiments is: we will look at a polymer. And the polymer they looked at was carboxymethyl cellulose. The material they looked at was fairly high molecular weight, and as a result, the radius of gyration of the polymer happened to be about 50 nanometers. Question? Is carboxymethyl cellulose used in chromatographic columns for adsorption? Um, you might... I actually don't know. The short-form answer is no; this stuff is water-soluble, so it's not used in columns. But you should remember, this is now a synthetic issue, a lot of polymers, yeah, they're water-soluble, but then you do cross-linking, and instead of having a linear chain you form a blob that is big, big, big, and it ceases to be soluble in a practical way, and now you have a column. But this is not cross-linked. And what they did was to take spheres of three sizes: 17, 47, and 102 nanometers. And guess what? They have here spheres that are considerably smaller than Rg, they have at the other end spheres that are considerably larger than, well, twice Rg, and they have spheres that are about the same size as Rg. So they have three sizes of spheres, and they compare the diffusion of the three sizes of sphere in the same system. And you are now comparing the diffusion of objects that are smaller than the polymer molecule, the same size as it, or larger than it. Now that's a perfectly reasonable comparison to make, viewed abstractly. You should, however, notice that there are theoretical models that say: you have a polymer solution and it's fairly concentrated; you have these long chains and they are entangled. And once they're entangled, the model says, well, it doesn't matter how long they are, because the ends don't do very much except move forward or backwards. And the interesting distance scale becomes this distance scale, so it's now xi, and xi is sort of the size of a hole in the polymer mesh. And if you say that's true, then the interesting comparison is between objects of different sizes and the size of the hole, and the hole is much smaller than a polymer chain. The sort of limit, why do we know the hole is much smaller than a polymer chain?
Well, suppose we have a dilute solution that is just on the edge of entangling. Okay? In that case, the distance between entanglement points can't be much larger than a polymer chain because if it was, this polymer chain would only be entangled at one point. There wouldn't be a mesh work. So the entanglement notion is the lowest concentration at which you can possibly have entanglement is approximately when the radius of the chain is the same as the distance between entanglement points. And at all higher concentrations, xi becomes smaller and smaller, the radius of gyration shrinks, but much less. And in that case, the interesting length scale and solution is much smaller than the polymer chain. Well, having said that, they did the experiment. And the reason the experiments appear in this chapter is the spectra they found were bimodal. Their spectra were adequately fit by a single exponential, or two exponentials. That is, each mode was close to being a pure exponential. That says that each mode, separately, could perhaps be viewed as a diffusive process though there must be some complications we are not quite seeing. Nonetheless, they do see two modes. Now, the reason you have to worry about saying it's pure exponential is as follows. Suppose semi-log plot. Here's the dynamic structure factor. Here's time. This is a semi-log plot. Suppose the actual spectrum looked like that on a semi-log plot, meaning the relaxation is not a pure exponential. It occurs more rapidly at fast times and on the semi-log plot less rapidly at long times. Suppose you had a spectrum like that. Suppose you charge in, and not paying much attention to the data, fit this to a single exponential. Well, you could say it's an approximation, but there's a problem with the approximation. If I just fit to the measurements up here, I get one slope. If I just fit to the measurements down here, I get a quite different slope. And the pure exponential fit, unless you're careful, and I'm not saying the paper we're talking about was not careful, I'm just showing where there's a hazard here. The hazard is that if you fit on different time scales, you get different slopes for the exponential. And that's because the exponential is somewhat cooperative. You can always draw a straight line through a set of points. It may be a foolish straight line because the points don't resemble a straight line, but you can always draw that straight line, and you will get a result. And so what happens when they do this? Well, first of all, what they found was that there are two modes, and for the small spheres, the fast mode was dominant. For the slow spheres, I didn't say that right, sorry. At low concentrations, they found the fast mode was dominant. At high concentrations, the slow mode was dominant. And so far, we've only talked about the intermediate-sized spheres. I said that backwards the first time. We can draw a picture showing this. The other thing they found, so we'll start with the small spheres, and there are two modes, and the two modes both slow down as you increase. This is the diffusion coefficient corresponding to the mode. This is the concentration. And the first thing they find is that for the small spheres, the slow down is over the observed range of polymer concentrations. It's, oh, a factor of three or six. I'm being approximate. For the large spheres, the figure is in the book, and you can look at this, and for the large spheres, the slow down is, oh, maybe a factor of 10. 
And for the slow mode, it's so number like 300 or 400. It's more than two orders of magnitude. And so what you're saying is, for the small spheres, the two modes are affected only modestly. Well, actually, it's factor of three. It's plenty to see by the polymer. For the large spheres, the effect of the polymer is vastly more dramatic. And there is a crossover region when the spheres are about the same size as the polymer molecule. And it really does appear, though, of course, they only have three sphere sizes. It does appear there's a transition between the sphere is much smaller than the polymer and the sphere is much larger than the polymer. Now, if you really want to say that statement and say there's a transition, you would really want more than three sphere sizes for the simple reason that you ask, well, is it a continuous rollover or is there a real change from behavior class A to behavior class B? Does the rollover actually occur when r is equal to rg or is it someplace to one side or another? And if you want to settle that question, you'd have to use plenty of sphere sizes, and then, if you're lucky, life becomes transparent. Well, that's what Karel Stroletsky did. He was my PhD student for his doctoral thesis. He did this in HPC, not carboxy-methylcellulose, but he used a whole bunch of sphere sizes and he, in fact, found exactly what is being implied by the very nice experiments of Delfino. Namely, there is a behavioral class for spheres that are smaller than rg. There is a different behavioral class for spheres that are sort of larger than rg. There are several ways of characterizing the size of a polymer molecule, so let's say it's a bit imprecise as to exactly what you call the size of the transition. But, in fact, the transition is fairly large. It occurs approximately at the size of the polymer coil, and therefore, the solution, in my opinion, unsurprisingly, has a longest characteristic length scale, which is the size of a polymer chain. Well, that's fine unless you believe entanglement models because entanglement models say that the characteristic longest length scale is this much shorter length scale psi. Yeah, that's not my problem. Okay. We now advance to section 9.7. And 9.7 talks about probes in polyelectrolyte solutions. Now probes in polyelectrolyte solutions are something that is not covered very heavily by my book. At some point I decided I have to stop someplace, or the book will get bigger and bigger, and the publisher only gave me 500 pages. And the time required to complete, instead of being a fixed date within my lifetime, will be divergent, and I'll never finish the book. So I had to stop at some point. And I remind you, things that we stopped before getting to include rods, polyelectrolytes, liquid crystal polymers, block copolymers, salts, oh, thread like micelles, there are solutions, small molecules, but they form dynamic structures in solutions that are long and thready, that look a lot like polymer chains, except they're not covalently cross-linked, and they can go straight through each other. Now, having said that, there are a bunch of things we did not cover. Nonetheless, I do briefly mention some of the results that do exist on polyelectrolyte systems, and I bring up some of the polyelectrolyte results so that you know they're there. So what did we say about polyelectrolytes? Well, the first issue is there's some extra variables in a polyelectrolyte system. 
That is, if you have a polyelectrolyte, the first material we worked on, polyacrylic acid, could be a polyelectrolyte, but we worked with the non-neutralized material. These are results of tyholin and I, in which here are the carboxylic acid groups, and they're mostly non-neutralized. Well, some of them spontaneously ionize a bit. But what you could do is add base to the system, and now you have ions on the chain, and you have sodium ions, and you added sodium hydroxide, and guess what? You now have neutralized polyelectrolytes. Well, how much base did you add? There is a percent neutralization that determines what fraction of those groups are charged. Also, you could have a salt concentration determined by its ionic strength, I, which for monovalent ions is simply the molarity of the solution. We can go on. There's more complication, but we aren't doing polyelectrolytes, and there are extra variables. And having said, there are extra variables, there are all sorts of experiments you can measure. Okay. For example, we will take, I believe it's figure 9-25, and we can measure the diffusion coefficient. We're looking at polystyrene spheres in a partly neutralized polyacrylic acid. This one happens to be 60% neutralized. It's a nice round number. It is, if I recall correctly, a 600 or a slightly smaller polymer. And we measure the diffusion coefficient of spheres as we add salt to the solution. And so we start with no added salt. That's, now you may say, gee, that's simple, isn't it? No, that's actually quite hazardous. The reason it's quite hazardous is, A, water ionizes a bit, and so you really can never get less than in water, about 10 to the minus 7 molar ions, because the water is ionized. Furthermore, water absorbs carbon dioxide from the air, unless you're really rigorous about dealing with issues. And therefore, the pH of water, if you actually look at it under these conditions, is more like 5, meaning you have something like 10 to the minus 5 molar hydrogen ions and other good stuff floating about. And so, unless you're extremely careful in your work, it's very hard to avoid introducing at least traces of small ions. And therefore, if you want to know what's going on, the experimental process is to be sure to add at least tiny amounts of salt, sodium chloride, potassium chloride, whatever, 10 to the minus 3 molar, 10 to the minus 4 molar, and then you are probably closer to knowing what's going on in the solution. Nonetheless, that's I equals 0. We didn't add anything. And as we increase the amount of salt, well, at 0 polymer concentration, this is D versus polymer concentration, changing the salt concentration does almost nothing. But at higher salt concentrations, with increasing ionic strength, those curves flatten out. And as you increase the salt concentration, the polymer is less and less able to retard the motion of the spheres. Why? Well, that's a good question. And you can come up with all sorts of explanations. Gee, you added salt, the polymer is less rigid. You added salt to the electrostatic interactions between the polymer and the sphere are weakened. You can come up with all sorts of explanations. You notice it started to get very complicated. You can also look at Stokes-Einstein behavior, and you can compare the product diffusion coefficient times measured viscosity of the solution with what you would have in pure water. If the Stokes-Einstein equation worked, this number would stay 1 as you run up the polymer concentrations, increase the salt concentration. 
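Written out, the test just described compares the measured product to its value in pure solvent; schematically:

```latex
% Stokes-Einstein test: if the equation held, the product D \eta would be
% concentration-independent and this ratio would stay equal to one.
\frac{D(c)\,\eta(c)}{D_{0}\,\eta_{0}} = 1 \quad \text{(if Stokes-Einstein held)}
```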
That's not what happens. In fact, what happens is you have non-Stokes-Einsteinian behavior: the viscosity goes up, the diffusion coefficient goes down, but the matrix polymer is more effective at increasing the viscosity than it is at retarding the diffusion, and this number is larger than 1. There are systems where it's less than 1. The statement that it's larger than 1 has the attractive feature that you simply cannot claim, oh, it's just that the spheres are aggregating, there's stuff sticking to the spheres. That would give an effect of the wrong sign. So as we proceed, we see non-Stokes-Einsteinian behavior, and we ask, well, are there ways to make the non-Stokes-Einsteinian behavior less? And the answer is that this number tends towards 1; it doesn't get there. If you make the spheres larger, if you make the ionic strength larger, or for that matter, if you weaken the polyelectrolyte effect by reducing the percent neutralization, if you do any of these things, or all of them, the degree of non-Stokes-Einsteinian behavior goes down. Okay, there is another theoretical piece in here, and the theoretical piece is that there are people who will claim, this is a claim, you can find it in the literature for yourself, that you get non-Stokes-Einsteinian behavior if the probe size is less than, or maybe much less than, all of the characteristic lengths in solution. However, if the probes are big, the claim is that you must be getting Stokes-Einsteinian behavior. Well, that's very nice, but if you believe that, gee, if you see spheres and they're showing non-Stokes-Einsteinian behavior, the spheres must be smaller than the length scale. How big were the spheres we used in that experiment? Well, there was a series of sphere sizes from 21 up to 655 nanometers, yes, in diameter, and based on our measurements, since we were seeing non-Stokes-Einsteinian behavior, except perhaps for the largest spheres, the longest length scale in these solutions must be greater than 300 nanometers. Certainly, it must be greater than 50 nanometers, and it appeared it had to be greater than 300 nanometers. Well, that's a very long length scale. That's not what you would necessarily have expected, but that's in fact what's there. Okay, there's now a figure I drop in, and the figure is, again, my own work, work with LeCroy and Yamburt, and it's figure, if I recall correctly, 9-27. And if you look at the figure, yeah, it's D versus c again, so if you look at the figure, yes, it's a semi-log plot, a linear-log plot, and if you look at it, you realize D doesn't change very much, and we have a whole lot of points, and the points are concentrated on finding what is very clearly the initial slope. And in some cases, a simple exponential fit is adequate, in some cases, a linear fit was adequate, and we worked very hard. We used spheres of sizes 734 and, if I recall correctly, 95 nanometers. The measurements you see are for two sizes of polystyrene sulfonate, but we had a fair number of polystyrene sulfonate samples, and we looked at several molecular weights of polystyrene sulfonate, and the question is, why did we work so hard to determine the initial slope, which is what we did? We determined the leading linear effect of adding polymer, and how it slows down probe diffusion. The reason we did this is that there was a theoretical model, based on the Kirkwood-Riseman picture of single chains, and what it says is, here's a probe sphere.
We are in dilute solution, we want to find the leading slope, which is the dilute slope, so here is a polymer chain. The sphere moves, and as the sphere moves, the polymer chain gets to respond, but it's somewhat constrained. It can move, it can translate, so it has some translation. It can notice this side of the chain is closer to the sphere than this side is, so it can rotate, and it has some rotation vector, omega, which is perpendicular to the blackboard, and we can describe the polymer chain, lowest approximation, as a sort of bag of frictional beads that translates and rotates. But we can now calculate the hydrodynamic interaction back and forth between the sphere and the beads, and we can therefore calculate quantitatively how effective the chain is at slowing down the spheres. And the point of this was to see, well, can we do the calculation and does the calculation work? And the direct calculation ends up with a single free parameter, there's a way of beating that limit, which we did eventually, but the point is we could actually calculate the slopes and we get the right answer. So that particular paper is a theoretical model test. It's an extremely important theoretical model test because it tests the hydrodynamic interaction theory, and it works. Okay, I'm doing fine on time since I started a bit late. Okay, let us step ahead again, and the step ahead again is to say there is something called solvent quality. And you can sort of look blindly and wonder, what is the solvent quality? Suppose we have a polymer chain. Here's a polymer coil. We have states of the system in which here comes a polymer coil, here comes the polymer, very crudely drawn, and here comes another piece, and the two are in more or less direct contact. Yes. And you might ask, does this polymer prefer to have another piece of polymer coil as neighbor, or does it prefer that the two polymers stay well apart from each other, and it has solvent molecules as neighbors? It isn't an absolute, we must have one or the other. It's a thermodynamic preference in contact. And the net result is, if the polymer chains prefer to be next to each other, the polymer coil stays fairly compact. If the polymer prefers to be in contact with solvent, well, you have different pieces of the polymer in this picture have to push apart from each other so they don't bump into each other as much, and suddenly the polymer coil gets larger. And if we choose our solvent, we can have what is called a good solvent, in which the polymer prefers to see neighboring molecules be solvent. We can have what is called a theta solvent, in which there's sort of indifference, and we can also have a poor solvent. An extreme example of poor solvent behavior are protein molecules. Protein molecules are nice and neatly folded and have particular groups on the outside and are lumps because proteins view water as a very poor solvent. And so they fold up tightly. And they fold up tightly so that all of the aliphatic groups, the hydrocarbons, try to hide on the inside of the polymer chain. Of course, they can't all manage to do this. And the groups on the polymer that are charged, the carboxylic acid groups, the amine groups, the charge, sit on the outside and be charged. And so the polymer has charges on the outside or tries to, and things that look more like oil on the inside, and it doesn't expand at all. And this is why polymer molecules are stable, because water is a very poor solvent. Now, you might think, aren't proteins soluble in water? 
Yeah, they're still soluble because they do this, but it's still a poor-solvent effect. Okay, so we ask what happens if we measure the diffusion coefficient versus polymer concentration? And we take the same polymer and we put it in two different solvents. Well, the answer, there are two sets of experiments that are both in the book. And one approach to this is we will change the chemical identity of the solvent. And if we do that, we see a theta solvent behavior, and we see a good solvent behavior. And the diffusion coefficient in the theta solvent falls as a stretched exponential that is essentially a pure exponential, e to the minus alpha c to a power close to one. And then the good solvent is e to the minus alpha c to the nu. And, gee, at high concentrations, we're out here in concentration, the theta solvent slows down the polymer more effectively than the good solvent does. Rather, the polymer in the theta solvent slows things down more effectively. Now, there's another way you can change solvent conditions without having to change the solvent molecule. And the alternative way to do things, instead of changing the solvent, is to change the temperature. Because there are systems that approach being good solvents at one temperature. But if you change the temperature enough, you approach a theta point, and at the theta temperature, you get a different behavior. And those were experiments that were done by Delphine, Coleman, Neal and me. And once again, in the theta system, you see something that's sort of like a straight line. And in the good system, you see something that's sort of like this. And oh my, what's occurring? Well, the answer is you have different good and theta behavior. And for a bit it appeared, we were at first a bit enthusiastic, gee, we found a prediction that is actually made by my model of polymer dynamics, and you see exactly what you expected to see. And then we realized that if you look at the alternative models, all the models we found actually make the same prediction. So this very pretty experiment doesn't really tell you anything, except that in some sense the models agree with each other. And the last thing I will discuss. There was a period in the late 80s, early 90s, when I would go around giving speeches on probe diffusion and what you saw and what it appeared to imply about polymer dynamics. And so I would go here or there or someplace else and I would give my remarks. And someone would stand up and complain that I had not reduced my measurements relative to the glass temperature. Now hiding behind this not at all innocent question is a very complicated theoretical issue. But a piece of the notion is as follows. Here is a polymer solution, a polymer in solution. And I am representing the polymers as looking like a pearl necklace. The rate at which the polymer can move is determined by a drag coefficient, a resistance to motion of the individual beads. That's the same as the Stokes law drag coefficient, but it refers not to the whole chain, but to the little piece. Now if you work in polymer melts, the issue is that, gee, as you change the temperature up or down, the drag coefficient of the individual polymer beads increases or decreases. The drag coefficient goes down or up. And as you change the temperature, therefore, the whole dynamics of the system changes its time scale, not because something particular is happening to the whole-chain motion, but because the drag coefficient of the individual beads is changed.
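Going back a step, the two concentration dependences just sketched for the solvent-quality comparison can be written out; alpha and nu here are simply fitted parameters, as in the discussion above, and the notation is mine:

D(c) \approx D_0\, e^{-\alpha c^{\nu}}, \qquad \nu \approx 1 \ \text{(theta solvent)}, \qquad \text{with different fitted } \alpha, \nu \ \text{in the good solvent.}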
In fact there is a whole experimental approach called time-temperature reduction. And the notion in time-temperature reduction is that you can change the time scale on which things are happening by changing the temperature, and therefore you can do what appears to be a measurement over a very wide range of frequencies, response frequencies, by doing a measurement over a much narrower range and doing it over a series of temperatures; at each temperature the same behavior occurs at different frequencies, and therefore by changing the temperature you can compress all of these behaviors so you actually look at them experimentally in one relatively narrow frequency regime. Well, that's time-temperature reduction. Now how does time-temperature reduction come into our probe diffusion work? And the answer is that this drag coefficient was said to depend, and it was a very vague assertion, it wasn't a detailed analysis, exponentially on the concentration, and therefore as I changed the concentration I was changing the drag coefficient. Now what does this have to do with the glass? Well, I'm going to draw another picture. Two pictures. And so we will plot a property that depends on how easily the beads can move, namely the viscosity, the resistance to pouring. And I will plot this as a function of temperature. And if I plot this as a function of temperature, what happens is I cool the system off, the viscosity increases, and then it increases very rapidly indeed, and eventually I get to something called a glass. Now the picture I've just drawn is for a polymer melt. And the notion is that as T approaches this temperature Tg, the glass temperature, the viscosity sort of diverges. Now it doesn't really diverge, but if the viscosity is 10 to the 14th times the viscosity that you had when you had a simple fluid melt, it doesn't pour. It's practically but not quite a solid. That's the glass temperature. If you look at large numbers of behaviors, you can say the interesting variable is T minus Tg, the distance from the glass temperature. And if you want to compare how different polymeric melts are behaving, the sensible comparison is not to compare them at the same temperature, but at the same temperature distance away from the glass temperature. Now you could also be a little fancier and say maybe you ought to normalize like this, because there are systems that have very low glass temperatures and systems that have very high glass temperatures, but that is the glass temperature idea. The notion for polymer solutions is that if we look at 1 over D, which is sort of like a viscosity, and we look versus polymer concentration, 1 over D is doing this, there is some glass in some sense, that is, eventually things don't move or something happens, and therefore I ought to reduce relative to this glass temperature. So Carol Quinlan and I did this experiment, and what we said is, well, how do we reduce relative to the glass temperature? And the answer is we will do measurements of D at a series of temperatures, and the reason we will do this at a series of temperatures is as follows. This picture here actually leads to something called the Vogel-Fulcher-Tammann equation. It says D should go as some D0, and then there is a factor of temperature T, because the basic diffusion coefficient scales as temperature, and then there is an e to the minus some constant, call it A, over T minus Tg.
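Written out, the equation just described in words is the Vogel-Fulcher-Tammann form; D0 is a prefactor, and A is the constant referred to in the next paragraph:

D(T) = D_0 \, T \, \exp\!\left( - \frac{A}{T - T_g} \right).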
And you notice this thing has a feature as we approach the glass temperature. Oh, I should put a plus sign there; the sign is of course not meaningful, because A is a signed number. But the issue is that as the temperature approaches the glass temperature, this factor A over T minus Tg becomes extremely large, and there is a minus sign in front of it in the exponential, and therefore the exponential becomes very small, and therefore the diffusion coefficient goes to zero as you approach the glass temperature. Well, if I am doing this in polymer solutions, the claim is, at each concentration, because Tg is a function of polymer concentration, as I change the polymer concentration I am changing the glass temperature, and therefore instead of quoting everything at the same temperature, I should quote all of my measurements at the same T minus Tg. Okay? So someplace I should establish Tg for these solutions, and if I quote measurements at different concentrations, I should also at the same time be comparing measurements at different temperatures rather than at the same temperature, which is what I have been doing. Okay, well, this equation has a feature. It makes a prediction as to how D should depend on temperature, doesn't it? There is, however, another prediction as to how D should depend on temperature, namely that if we compare D with T over eta of the solvent, we should get a straight line. Yes? Okay, so we did all the experiments. And the first thing that happened is that if you measure D at a series of temperatures and fit to this form, you discover that Tg is extremely low, like minus 100 centigrade. That's not a physical temperature, of course, for water, because it freezes. However, what we found, you do the temperature dependence at a series of different polymer concentrations, and you ask how does the apparent Tg, whether you believe this makes any sense or not, depend on polymer concentration, and the answer is the apparent Tg is independent of polymer concentration. And therefore, if you believe that you were supposed to reduce relative to Tg, well, we did the experiment, and the answer is Tg is the same at all concentrations, contrary to what was being said, and therefore the reduction relative to Tg wouldn't do anything. The second thing we said is, well, we can look at D versus T over eta sub s, and, gee, D is linear in T over eta. This is eta of the solution, or eta of the solvent. It doesn't really matter which, because they just track each other. And if you look for curvature here, well, the curvature is slightly larger than the scatter of the points, and so you could say there's a little curvature, which, by the way, is independent of polymer molecular weight. And the best we can say is, to quite good approximation, D just scales as temperature over solution viscosity linearly, and there's no sign of any deviation from linear behavior. Maybe there's a slight sign; it's sort of a one or two percent effect, and your measurements are accurate to a bit better than one percent. So you're sort of at the point where you are leaning hard on your data to claim that you're seeing a deviation from what Stokes-Einstein would tell you to expect for the temperature dependence. So that's temperature dependence. However, we were challenged to do a reduction relative to the glass temperature, and what we showed was that the relevant glass temperature for these experimental measurements is independent of polymer concentration. And since it's independent of polymer concentration, the reduction relative to the glass temperature doesn't do anything.
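A minimal numerical sketch of the two checks just described, using invented numbers rather than the actual measurements; the data arrays, starting guesses, and fitted values below are all hypothetical, and the point is only the procedure: fit D(T) to the Vogel-Fulcher-Tammann form to extract an apparent Tg, and separately test whether D is linear in T over the solvent viscosity.

import numpy as np
from scipy.optimize import curve_fit

# hypothetical data: T in kelvin, D in arbitrary units, eta = water-like solvent viscosity in cP
T   = np.array([283.0, 293.0, 303.0, 313.0, 323.0])
D   = np.array([2.46, 3.40, 4.49, 5.73, 7.10])
eta = np.array([1.31, 1.00, 0.80, 0.65, 0.55])

# Vogel-Fulcher-Tammann form; Tg here is the *apparent* glass temperature
def vft(T, D0, A, Tg):
    return D0 * T * np.exp(-A / (T - Tg))

popt, _ = curve_fit(vft, T, D, p0=(0.3, 400.0, 160.0), maxfev=20000)
print("apparent Tg from the VFT fit:", popt[2], "K")

# Stokes-Einstein-style check: is D linear in T over the solvent viscosity?
x = T / eta
slope, intercept = np.polyfit(x, D, 1)
dev = np.max(np.abs(D - (slope * x + intercept)) / D)
print("largest fractional deviation from the straight line:", dev)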
And we could also say, ask, well, what is the dependence on solvent viscosity, which is temperature dependent, and the answer is D is linear and T over A is a solvent, and therefore any hydrodynamic picture or any picture that says the solve viscosity controls things, works just fine. So that's it for temperature dependence. And we are now at the end of today's lecture.
Lecture 15 - more on probe diffusion. Lectures are based on my book "Phenomenology of Polymer Solution Dynamics", Cambridge University Press 2011.
10.5446/16237 (DOI)
Classes in Polymer Dynamic. Based on George Philly's book, Phenomenology of Polymer Solution Dynamics, Cambridge University Press, 2011. And today, this lecture is lecture three, Models of Polymer Dynamics, Electrophoresis. I'm Professor Philly's, and this is course 597D, Phenomenology of Polymer Solution Dynamics. I've just handed you the first homework assignment. This is class two on doing the recordings. The camera didn't start properly, but we don't want to do that to what two terms in a row. The homework objective, I have assigned you a specific journal. I have assigned you two years of the journal, and you are to find interesting papers that deal with things that are similar to what you find in chapter three. The homework assignment, which goes on for a page, lists a whole bunch of different ways you can do searches. I emphasize Web of Science because it's a citation searching tool. Web of Science means that if you have found a good paper, you can look at everyone who used that paper as a footnote in papers they published at a later date. And it lets you trace forward in time exactly the same way as looking up footnotes lets you trace backward in time. There are a bunch of alternatives. For example, if you go to the journal, you discover the issues and pieces within the issue have been nicely sorted by the editors. And since they've been nicely sorted, instead of just trying to read two years of the journal, you can look at journal titles and abstracts, and you find a few issues that have most of the red news that you were looking for. The results you wanted. On brute force search works very well. My book in substantial part is based on brute force search. I actually went through most of the standard polymer journals going back over 30 years, pulling out all of the articles. However, chapter three is a little different. Chapter three uses of electrophoresis to study how polymers move in solution. You cannot find in any other review on polymer dynamics because no one else realized that the approach was possible. I found it slightly by accident because one of the papers we discussed in detail in chapter three, the work by Rod Barden-Krombach, happened to footnote one of my papers. And the entire contact between the polymer dynamics literature that most of you know about when you're done with course, and this chapter was very tiny. It was this one paper and one footnote crosslink. But I looked up everyone who had used my papers as a footnote, and there was this one peculiar footnote in a journal I wasn't familiar with at the time. Very good journal. And so we were able to do the link. What you're being asked to do is to take all of these research tools, and I list a bunch of them on the homework, apply them to one journal for two years, but it's a thick journal, so you've got a fair amount of paper there, much more than you would want to read on your own, even if you just have to sit there and pull it onto your computer screen one page at a time, which you can do. It's on the internet after all. Go to the library and you can pull it down. You should know how to use that tool for your own graduate work. This is a learning experience on how to do library research. Okay. The target date is about two weeks out, and this week we will discuss how much progress you've made, and you should all have made a reasonable amount of progress, and we can adjust the target date if there are issues or challenges, or God knows the internet goes down for a week or something silly. 
I'm willing to be somewhat flexible. The objective is that at the end you will write a research paper of some reasonable number of pages, and you will tell me what happened. I realize a number of you do not have English as your native language. As I told my other class, or at least a few students in it, I could tell English was not your native language because you did not make all of the sloppy grammatical errors that the American students made. So your written English will be fine. Don't worry about it; I'll kibitz on your English, and I'm not going to penalize you for making grammatical errors. We're here to talk about polymers. Okay, what I am going to do, assuming the camera doesn't die again, I assume it's still recording, what I am going to do is to go back and talk a bit more about what we said last time on Chapter 2, on ultracentrifugation. I am going to repeat in somewhat different words what I said about polymers in solution. I am going to, however, call your attention to a couple of sources you might find useful to look at. The first is two review articles. They're both in Advances in Chemical Physics, and one is by Jeffrey Skolnick and one is by Tim Lodge. They're footnoted in the book someplace. They're both very good review papers. If you read them, first of all, you'll get some background in theory, more than you need, but you should know it's there. Second, if you read them, you'll discover people are looking at sort of the same literature, not completely, and they're coming to opposite conclusions, or at least they're coming to different conclusions. And finally, particularly the Lodge paper, which is concerned with experiment, you will get an impression of what the review literature, prior to my book, tended to look like. So there are some alternative places you can do some reading. So let us go back and start at the beginning. And here's a polymer molecule in solution. It's very drawn out. It's stringy. If you actually had a polymer coil in solution, it tends to be a little more compact than I've drawn it. However, over short distances, I'm drawing the simplest possible, more or less, polymer, over short distances, you have a local structure fixed by covalent bond lengths and bond angles. Over long distances, well, the molecule is not quite totally free to rotate around that bond, but it's pretty close. And as a result, over long distances, a polymer molecule, a typical molecule, tends to resemble a piece of yarn, a long piece of yarn that's been dropped on the floor. It's floppy. It's not floppy over short distances. Over short distances, the molecule tends to be fairly rigid, but polymer molecules are very long and stringy and are fairly floppy, soft. Like overboiled noodles. Second, if I have here a piece of polymer coil, and if I have here another piece of polymer coil, we're in solution and there are also solvent molecules. And you can then ask the question, and this is a thermodynamics question, do polymer coils prefer to sit next to each other, or do they prefer to sit next to solvent molecules? If polymer coils prefer somewhat to sit next to each other, then states of this floppy shape that are fairly compact tend to be preferred. If the polymer coil would prefer to sit next to solvent molecules, the polymer spreads out. An extreme case of this behavior is given to us by proteins. Now proteins are water soluble because their outside edges are coated with charges and with hydrophilic groups, things that like to attach to water.
But most of most protein molecules are organic things like methylene groups that aren't very water soluble. And what happens is that a protein molecule forms balls up because water is not at all a good solvent for proteins. And because the protein molecule balls up and forms always the same structure, we have enzymes. This is the actual basis for life. So that is what a polymer molecule looks like. There's another molecule, model of a polymer molecule, and the other model is of more use for physics purposes if you want to calculate dynamics. The dynamic papers that correspond to the model, well, there's a paper by Kirkwood who was one of the great theoretical chemists of this country in the first half of the century, Kirkwood's Reisman. And there is a paper by a fellow by the name of Rouse. And there is a fellow paper by someone by the name of Zim. And you can look all of these up if you want. And if you look all of these up, the math is a little challenging, but they discuss how polymers move in the solution. Their model for a polymer is that it's a string of beads, like a pearl necklace, and the space between the beads is little springs. And the springs are attached to the beads in such a way that this distance, well, you don't stretch springs too much, they pull back. But there is no force controlling the angle between adjacent beads. If you try to build this, you discover your model tends to try to keep the springs at fixed angles with respect to each other. Well, the polymer equivalent model does not have that feature. I discussed last time some of how the model works. Now, there is a great deal of work on models of polymers in dilute solution. These models. And there are many more complicated models. And there is a book by Yamakawa, which takes you outside the domain of this course, which treats polymer molecules in dilute solution. So here is a dilute solution, and there are some polymer molecules. And in dilute solution, the molecules are far enough apart relative to a typical radius of the polymer molecule. And the one I just drew in is RG, and it's called the Radius of Gyration. We'll talk about that somewhat more in a later lecture. But the key issue is that R, the distance between molecules over RG, a typical molecular size, is very large. Now, RG is somewhat imprecise. It's not like the radius of a ball bearing. These molecules are floppy. Each one is a somewhat different size. Furthermore, they don't have sharp edges because they're a very loose ball of yarn. They're not nice and wound up. They're what happens to yarn after a kitten has found it and played with it for a while. So here are the polymer coils, and they're far apart. And because they're far apart, you can ask how the polymers affect, for example, the viscosity, the resistance to pouring of the solution by calculating what one molecule does, and then you just multiply it by the number of molecules you have. That's dilute solution behavior. Now, suppose you stir in more and more molecules, like adding sugar to your morning coffee. As you add in more and more polymer molecules, eventually you get to the point where the distance between polymer molecules and the size of the molecules is some number like one or two. And at that point, the molecules start to run into each other. Now, the first question you might ask is, well, when they start to run into each other, what do they do? 
And there are two answers, and for a very long time, there was a major debate in the literature which got somewhat key hit, or so I am told. It's before I was active in the field. And one answer is that as we run up the concentration, instead of having molecules like this, you have molecules that fold in on themselves and neighboring molecules that are also folded in, but they don't go through each other. Could they? Sure. This is a very open structure. If I had a polymer molecule here, and you could see it, and I fired a laser pointer through the molecule, it would hit a polymer strand about twice or three times, some number like that on the way through. And most of the space in here is open solvent. And therefore, if I have several polymer molecules, in principle, they could go through each other. We now know what, in fact, happens if you run up the concentration so that you are no longer in dilute solution. The polymer coils pass through each other and become intermixed. Well, it's one of the alternatives, and that's the one that nature prefers. If we want to talk about how do polymers move in the solution, well, there are several answers, and they all go back to a question, how do we model things? What are the dominant forces? What is the question? What are the dominant forces? And what are the other? Is the nature of the motion. The whole purpose of this course is to use phenomenology, that is, actual experimental measurements, to tell us something about what is going on in solution. However, much of the literature, perhaps less the literature you're about to read, but much of the literature is very heavily influenced by a small number of models or classes of models, and you will read discussions of experiment in which phrasings are used, and the reason the phrasings are used is that the authors are advancing from a mental image of what is going on in polymer solutions, and from that mental image they get to answers. So let us look briefly at the mental images. We mentioned Kirkwood and Reisman, and Kirkwood and Reisman did more or less the first theoretical model of how single chains move in solution, and what they said is, here is a single polymer chain, it's sitting in the solution, we put it in a flow field, and the flow field, let's say moving rapidly here and more slowly there. So this is a velocity v sub x, and if you look hard, dv sub x, that direction is z, dz is not equal to zero, that's called a shear. Now we're talking about liquids, if you put a shear on a liquid it flows. You've seen this, if you step in water on ice and you start to fall over, the water touching the ice isn't moving, the water touching your shoes isn't moving, but between the two there's a slip, and that's what's occurring. Kirkwood asked, well what happens if you have this flow field applied to a liquid? And the first answer is, there's some sort of average velocity of the water within the polymer solution, polymer molecule, and that causes the polymer to float along and drift with the liquid. If you throw a piece of wood or watch a twig floating down a creek, a small river, and the water is flowing, you see the floating objects just bob along in the water. However, and you may or may not have noticed this, if you have something that is close to shore, at the shore the current is weak, out in the middle of the stream the current is moving more rapidly, and two things happen. 
First of all, it doesn't move as fast, the floating objects don't move as fast as they're right up against the shore, and second, if you watch carefully, you'll notice they're rotating. Why are they rotating? Well, here is a polymer molecule, the water here is moving slowly, the water there is moving more rapidly, the molecule here on average is moving with the flow, but this edge is moving more slowly than the middle is, it's moving backwards with respect to the middle. This side is moving forward, and therefore the whole body molecule rotates in the shear, and that rotation leads to extra friction, and is why dilute polymers increase the viscosity. Well, this is the Kirkwood model, I have omitted all of the theoretical details on how you treat this hydrodynamics. The important thing that I wish to bring up though is having said, here is Kirkwood and his model, you can then say, let's extend this to non-dilute solutions, and we can take the Kirkwood picture of a hydrodynamic polymer chain, and we can say here are several polymer chains, and if one of them moves, it influences how the neighboring chains move, and you have hydrodynamic interactions between whole polymer chains, and you can carry these calculations out to fairly high concentration, and if you do that, you can make predictions. These are difficult calculations. On one hand, if you are not careful, instead of getting good answers, you get divergences, you get integrals that are infinite. It's very bad. The other problem is, if you do it carefully, you can avoid those, and you get integrals that cover acres of paper, and fortunately there is now computer algebra, instead of you doing the algebra, you have a computer expand 8 or 12,000 terms in a polynomial, and combine them. You would not like to do 12,000 terms in a polynomial by hand. The computer loves to do it and does it more accurately than we can do it, and so you can actually do this approach. Now, there are two other approaches, and the two other approaches come from two other pictures of what polymer solutions are like. One approach is to start with a real gel, and in a real gel, you have long, linear polymer molecules, a real gel, gelo, gelatin gels. You have a real gel, and where the molecules stick to each other, or touch each other, they become fastened. And in fact, there are organic synthetic processes, for example, if you make a polyacrylamide gel, and you actually, those bonds are chemically cross-linked. In terms of available thermal energy, they're totally permanent. So here is a gel, and I now put into the gel a polymer coil, or a polymer chain. And I ask, that's a perspective drawing, the polymer is winding in and out, how does the polymer move? Well, it's basically limited. It can't move sideways, because if it moves sideways even a modest distance, it runs into another polymer chain. All of those polymer chains are fastened in place to each other by covalent bonds. They're all fastened in place, except this one polymer that's free to move. So the polymer can't move sideways. All it can do is move back and forth parallel to its own length. Now, of course, parallel to its own length is a little tricky, because this is an irregular curve, and when the polymer moves backwards and forwards, it can find, has to find gaps through which it can move. But basically, most of the polymer is mostly constrained to moving parallel along its own length, sort of like a railroad train, backwards and forwards, but not sideways. 
That's imprecise, because this hole I've sketched is actually fairly, can be fairly large relative to the polymer coil. So it's got some room to move sideways. It's not that it can't move sideways at all, but it doesn't have a lot. This motion is called reptation. It's from the Latin, reptare, to crawl. The thing moves back and forth. And now we have the interesting theoretical guess, and he was fairly specific, that it was a clever guess of de Gennes, who was a French physicist, no longer with us, unfortunately. He was a Nobel Laureate for his work on polymer statics and for his work on liquid crystals and such and so on. And he made the guess that if we go to a polymer solution, we can use this picture. Now, a polymer solution is different from a gel in that these are polymer strands, and they criss and cross and maybe wrap around each other; however, they aren't attached. And the assertion is we have points here. Here's a typical point. That is called an entanglement. And the notion of entanglement, because the polymers are wrapped around each other, is that on a short time scale, the entanglement looks just like a covalent bond. And therefore, if I have here the polymer chain of interest, the neighboring chains are not bonded to each other, so they can move on a long time scale. But on a short time scale, this chain of interest is not free to move. And therefore, the chain of interest, approximately speaking, is constrained to move backwards and forwards along its own length. Well, once you've said that, if you're clever, you can do all sorts of analysis and make all sorts of predictions. A certain set of the predictions that are made come out of, in essence, the fact that the people who were doing this had previously worked on critical phenomena theory, which was fine. And in critical phenomena theory, you find things called scaling laws. You know that, for example, you have a specific heat, and you measure the specific heat close to a critical point. And you discover that the specific heat depends on the difference in temperature from the critical point to some power; this is a critical exponent, and people generally don't try to calculate the pre-factors. But in this case, the specific heat, as you approach the critical point, the specific heat diverges. Also, if you look at the liquid, a liquid near its critical point, it becomes opaque. In addition, and this is actually a very practical materials issue, there are lots of liquids near the critical point that become extremely good solvents for all sorts of things they otherwise would not have dissolved. And in fact, there are people who even sell carpet cleaning services where the solvent they use is carbon dioxide near its critical point. And it pulls out all sorts of stuff that you could not get out elsewise. It's a very clever use of basic physics. However, once you think of this as an appropriate relationship between variables, you look at something like the sedimentation coefficient that we talked about last time. You say the sedimentation coefficient is going to depend on the concentration of polymer to some power, and the molecular weight of the polymer to some power. And if we have a polymer solution that is concentrated enough that it allegedly looks like this, we can calculate the powers. And that's what was done. Because this is a scaling law, there is a feature. If I plot logarithm of sedimentation coefficient versus log of concentration, if this equation is correct, my measurements should lie on a straight line.
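In symbols, the scaling-law ansatz just described has the generic form below; the exponent names x and y are mine, and the de Gennes-type arguments are what predict their values:

s \;\propto\; c^{\,x}\, M^{\,y} \quad\Longrightarrow\quad \log s = \text{const} + x \log c \ \ \text{at fixed } M,

so measurements at a single molecular weight should fall on a straight line on a log-log plot.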
Because that's a general feature of power laws. If you have a power law and you put it onto a log-log plot, instead of having a funny curve, you get a straight line out. And that is exactly what the de Gennes model did for all sorts of transport coefficients. It predicted the power laws that are associated with the transport coefficients. Okay, that's model number two. Model one was the hydrodynamic models based on taking Kirkwood-Riseman and going to concentrated solution. Model number two is the de Gennes-type models. There is a third set of models. The third set of models are applied more to colloids, solutions of spherical particles. They are applied more, well, they started out with sand beds. Why would anyone care about flow through a bed of sand? Any ideas? Because if we go back a century, flow through sand is actually an extremely efficient way of purifying water. And you drop stuff through sand and perhaps you add things that cause clay or whatever to flocculate, fall out of solution. And sand flow is actually extremely efficient at filtration. Just before water chlorination, which is what we now use in the United States, was put in, sand bed filtration had reached the point where it was almost as good in terms of removing harmful bacteria and other materials. It wasn't quite as good, and it was much trickier and more expensive. So we now use water chlorination to keep things out of our water system. But sand flow has the property that if I have some object here and I apply force to it and it's flowing through holes between the sand grains, this object moves, it creates a wake, it dumps momentum into the liquid. But the moving water tries to push on the sand particles in the bed, which refuse to move. And as a result, the momentum is hauled out of the flow and transferred mechanically into the walls of the container. And the flow is very locally confined. This is called screening. It does occur in sand beds. And the result of the screening is that this wake, this motion of the water induced by one particle, instead of falling off inversely as the distance, that's very long range, with screening the fall-off is as e to the minus kappa r over r. Kappa here is an inverse length. The fall-off is exponential with distance, and this means that the flow is confined to very narrow areas. And there is an entire set of models of how colloids or polymer molecules move in which it's assumed that these molecules, which are free-floating, behave the same way a sand bed would, and you can then do a series of calculations on what you get for polymer dynamics. Now I am not going to do the theories of all of these. They would chew up a whole semester. I am going to point out, though, that there's a certain amount of language that comes out of these models, and the language is used, and it should be realized that when people are using the language they are thinking back to particular hydrodynamic models whose applicability has to be tested. Oh yes, the sand bed model. There's another piece of language that comes out of it. I've already mentioned it last lecture: obstruction. And the idea of obstruction models is that if I have a concentrated polymer solution, this is a very tiny piece of it, and I have an object trying to move through the solution, the object has to find holes, gaps between big polymer chains that are big enough to allow it to move.
Well if the polymer chains are completely immobile, if you have a chemical gel where you put in cross-link points that are unbreakable for practical purposes, then this is a very sensible model for how things move. But there are people who carry it over to polymer solutions where these cross-links, these chemical bonds aren't there, and then you have to ask how applicable the model is. In any event, I have spent a certain amount of time discussing models, and the point of the models is that the discussion I just gave you is applicable quite systematically to everything we're going to talk about in the book. And therefore, since I'm going to keep reusing the models and the language, I thought I'd show it more systematically. Having done that, let us go back to our text and let us go back to considering phenomenology of sedimentation. What I showed last time, there are a series of figures in chapter two, is that if we just have a polymer solution, polymers in a liquid, we put it in the centrifuge and we ask how fast the polymers sediment. What we find, assuming the polymer coils are denser than the solvent, is they tend to be, if we spin the centrifuge, the polymers are precipitated to the bottom. There is a sedimentation coefficient that describes the rate of fall, and the sedimentation coefficient is a physical transport property. It's independent of how fast, how hard you spin, or how strong the local gravitational field is. And S, if plotted against concentration, does this. It goes down. And if you look at a much heavier polymer coil, in dilute solution, the larger chains sediment more rapidly. But if you increase the concentration, the sedimentation coefficient falls more rapidly for a large chain than for a small chain. That is, in some sense, big chains are more effective at getting in each other's way than small chains are at getting in each other's way. So far so good. And so these curves intersect, and if you look at the measurements in some of the figures, they tend to cross, though in general, measurements aren't extended out here far enough to say very firmly. It's obvious that they cross. Indeed, there are theoretical predictions that at large concentrations, instead of saying the curves cross, you might set, predict that the curves merge. The experiments were conducted out to the point where you can certainly see the curves reach each other, but beyond here there's a question of what happens. Now, there's several other things you could do, and we are now going to advance to figure 2.13. And one of the other things you could do is say, well, the polymer solution, polymers are effective at inhibiting sedimentation. Polymers are also effective at increasing the viscosity, the resistance to pouring. If you go back to what I discussed last time, I said that the sedimentation rate was a ratio between an applied force, which could be the centrifugal field, or if it's really big object, it's just gravity, and the drag coefficient that for spheres, the drag coefficient f can be written as 6 pi, viscosity of the solution, radius of the sphere, and this gives us a sedimentation rate. That's not the sedimentation coefficient, because I have to divide this out partly. But the larger the viscosity, the slower the object will sink. The larger I make eta, the smaller will be the sedimentation, and therefore you might say, well, yes, the sedimentation is slowing down. It's slowing down because you're making the solution more viscous. Do you see how the argument works? Sort of? 
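For reference, the bookkeeping behind that argument can be written out; the symbols m for the particle mass, v-bar for its partial specific volume, and rho for the solvent density are mine, not from the lecture. The sedimentation coefficient is the drift velocity per unit centrifugal acceleration, and for a sphere the drag coefficient is the Stokes value:

s \equiv \frac{v}{\omega^{2} r} = \frac{m\,(1 - \bar{v}\rho)}{f}, \qquad f = 6\pi\,\eta\,R,

so anything that raises the viscosity eta lowers s, which is the argument about to be tested against figure 2.13.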
Well, we can look at 2-13, and 2-13 is one of the few experiments I could find where the people measured not only the sedimentation coefficient, but also measured the viscosity of the solution. And what they find, solid lines, is that the viscosity goes down with increasing polymer concentration. The sedimentation coefficient also goes down with increasing concentration, but not the same way. And, well, at intermediate concentrations, there is a region where the objects are sedimenting more rapidly than you would have expected from the viscosity of the solution. And then at large concentrations, it appears the two curves emerge. That is, the sedimentation rate is governed by the macroscopic viscosity of the solution. This pull apart and go back again, and it has a technical name, which I introduce in the book. This is called re-entrance. And the issue in re-entrance is that you have some systems in which there is a domain in which the macroscopic viscosity does not govern how fast things move on a microscopic scale, but it is only a domain. It's not an issue of low concentration. It's not an issue of high concentration. But in between, there is this region where particle motion is not governed by viscosity. And re-entrance is the relatively rare phenomenon that we happen to have encountered at first, so I introduce it. Okay, what else can happen? Well, one of the things that can happen, if you push ahead to page 25, you can say, G, if we make the polymer molecule bigger, or make all of them bigger, and we measure sedimentation, the bigger the chains are, the more effective they are in getting in each other's way. Now, is the effect due to the fact that you're making the molecule you're watching bigger and bigger, or is the effect due to the fact you're making the surrounding chains bigger and bigger? Let me draw a picture. We are going to look at probe sedimentation. The image in probe sedimentation is that here is one chain whose sedimentation rate we are going to measure. It's surrounded by other chains. If these chains are all the same size, and I make the chains bigger and bigger, well, two things happen. I plot S versus C. In dilute solution, the larger chains sediment faster than the smaller chains, but if I plot sedimentation versus concentration, if I make all the chains bigger, the curve S versus C falls more rapidly. And now I ask, is it falling more rapidly because the chain I'm watching gets bigger, or is this curve dropping more rapidly because the surrounding chains are getting bigger? There are two possibilities. Of course, both things could be going on at once. How do we test this? And the answer is we have a chain, a probe we're going to watch, molecular weight P. We have these matrix chains, molecular weight M, and we make the probe chain and the matrix chains different in their molecular weight. And then we can change the molecular weight of the probe or the molecular weight of the matrix independently from each other. And as we do this, we can also change the concentration of the matrix. The probe chain is diluted. It's supposed to be all by itself. You don't really have to have only one probe chain solution, but if you only have a tracer concentration, a low concentration, you can ask how the probe moves through the matrix. Well, that's seen in figure 2-15. 
And figure 2-15 shows you, above and below, figures parts A and B, it shows you two different molecular weights of matrix chain, and then you look at a series of different probes moving through the same matrix as you change the concentration. And what do you find? We are looking at 2-15, and we have various different molecular weights of probe, and we look at two matrices. And what we find, this is the sedimentation versus coefficient, and so here is a small probe through a fixed matrix, and S versus C does that, and this is P small. And if we make the molecular weight of the probe large, well, if the matrix is diluted, the large probe sediments more rapidly, but as we increase the concentration of the matrix, the sedimentation coefficient of the large probe falls very rapidly. It falls more rapidly than the sedimentation of the coefficient of the small probe, and the two curves cross. You can see it in the lower figure. And so at large concentrations of matrix, the large probes unambiguously sediment more slowly than the small probes do. The other curiosity, if you look at the curves, it's the figure in the book, you can see that these curves appear to cross maybe not exactly at a point, but they certainly cross at something resembling a point. So for a given matrix polymer, there is a concentration at which the matrix polymer retards all of the probes to the same, to different extents, and the probes all sediment at the same rate and do not separate much. Yes? In graph A, y then 2000, the one hammer to 40, k dot and k dot, is just not the same like that. Because it's too big. The answer is, how am I going to explain that clearly? The answer is that if you have a very, so in figure A, there are two different matrix polymers used, but the dashed lines refer to a very small matrix polymer, and the very small matrix polymer slows all the chains down about the same extent. That's why the dashed curves are sort of parallel to each other. But if you make the matrix polymer bigger, you get the solid curves, and a somewhat larger matrix polymer gives this behavior. Figure B shows a really big matrix polymer, and you get this behavior very dramatically. But the answer is, if you have a very small matrix polymer, you don't see this effect. People who believe in the Degen type models would discuss this in terms of entanglement. And they would say, if we have a very small matrix polymer and different size probes, we see, you can see it in the actual data, curves like this. The reason is that if the matrix polymers are very short, instead of having an entangled structure like this with very long chains, if you make the matrix polymers very short, most of the chains are so short that they don't have pairs of entanglement points, and therefore are not tied together. The solution at small matrix molecular weight M is unentangled, but at large matrix, that is, there's no entanglement, but at large matrix M, the solution is entangled. And I might ask, well, gee, what are the effects within this model that determine if a polymer's solution are entangled or not? And there are two of them. First of all, the polymers have to be long enough that as you march along their length, you find that other polymers are wrapped around them. Second, the polymers have to be close together enough that you get multiple entanglement points along the same chain. 
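The "close together enough" condition is usually phrased in terms of an overlap concentration; one common rough estimate, a standard textbook formula rather than something stated in the lecture, and with conventions differing by factors of order one, is

c^{*} \approx \frac{3M}{4\pi N_A R_g^{3}},

the concentration at which the coils, pictured as spheres of radius Rg, would just fill the solution.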
Now, if you dilute the solution, these entanglement points move farther and farther apart, and eventually at small concentration, C less than some critical concentration, C star, the polymer solution is unentangled, and at C greater than C star, the polymer solution is entangled. And so if the polymer is too small or it's too dilute or some combination of those two, the polymers are not entangled, but if you make the concentration and the molecular weight large enough, you get entanglement. OK, I should stick in two qualifications. I drew entanglement points as though they were loops. That's a pretty thing to draw on the blackboard. There is absolutely no evidence that those loops are what cause entanglement. But it's a pretty way to draw it so you can see what's happening, and the image is consistent with an interpretation of the phenomenology. Point number one. Point two, when I said this, C less than C star and C greater than C star, you do not want to get carried away with the impression that the transition from unentangled to entangled behavior has to be sharp. Most theoretical models do not predict whether the transition from here to there is sharp or is broad. The one theoretical model I know of that makes a prediction on this says this is what is called a percolation transition, and percolation transitions are sharp. However, the data doesn't tend to resemble that. It's not clear that saying it's a percolation transition is right, and so if you want to say the transition is broad, well, that's certainly consistent with measurement. But most theories just don't treat that transition region at all. They talk about we are well into the unentangled regime where the chains don't entangle, or we are well into the entangled regime and the chains are really entangled. One thing that you should realize, there is a highest possible concentration you can get, the melt. You cannot get a higher concentration of polymers than all polymers, no solvent. It's got a density of about one, give or take, one gram per cc. And in the melt, you're at the highest concentration you can get. In the melt, there is a lowest molecular weight at which you see entanglement behavior, whatever entanglement behavior actually is. And there is, in the melt, a distance between entanglements, M sub e. It's a number of daltons of polymer measured along the chain. And if your polymers are shorter than that, they don't entangle. If you dilute the chains, M sub e becomes much larger, the distance between entanglement points grows, because you're diluting the chains. And the molecular weight you need for what is called entanglement behavior in this de Gennes picture goes up. In the de Gennes picture, people would say the dashed lines in 2-15a correspond to chains that are too short to be entangled at the concentrations at which the measurement is done. Okay? Now let's shove ahead, and we will push ahead to figure 2-16. What is the point of figure 2-16? Well, there were a whole bunch of measurements of sedimentation versus concentration. And what is found numerically, you can see all the pictures earlier in the chapter, is that the sedimentation coefficient goes as some constant times e to the minus alpha c to the nu. That's a stretched exponential. And experimentally, if you look at plots of log s versus log c, you see smooth curves, measurements lie on smooth curves. You do not see power laws, regions of straight-line behavior.
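As a minimal illustration of what fitting that stretched-exponential form looks like in practice, here is a sketch with invented numbers; the data, the starting guesses, and the fitted values below are all hypothetical. The last two lines just confirm that a stretched exponential has a continuously drifting local slope on a log-log plot rather than a single power-law slope.

import numpy as np
from scipy.optimize import curve_fit

# hypothetical sedimentation data (not the measurements behind figure 2-16):
# c in g/L, s in Svedbergs
c = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
s = np.array([8.9, 8.2, 7.1, 5.5, 3.5, 1.6, 0.41])

# stretched-exponential form s(c) = s0 * exp(-alpha * c**nu)
def stretched(c, s0, alpha, nu):
    return s0 * np.exp(-alpha * c**nu)

popt, _ = curve_fit(stretched, c, s, p0=(9.0, 0.25, 0.9), maxfev=20000)
print("fitted s0, alpha, nu:", popt)

# on a log-log plot the fitted curve is smooth, not a straight line
logc = np.log10(c)
logs = np.log10(stretched(c, *popt))
local_slope = np.diff(logs) / np.diff(logc)
print("local log-log slope drifts from", local_slope[0], "to", local_slope[-1])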
Well, you can always say here is the straight line I predict, and the straight line is a tangent to the smooth curve. And so there's some region in which you sort of see power law behavior. But you would see that no matter which straight line you predicted. Your theory doesn't predict what concentrations you should see the power law behavior. So phenomenology, what you see earlier in the chapter quite consistently, are stretched exponentials that actually go through the data points pretty well. However, you might say, well, maybe that's true, but if there were anything fundamental there, these two constants would be pinned in some clean way on other solution properties. For example, they would be pinned on the molecular weight of the probe, or the molecular weight of the matrix. And what I show you in figure 2-16 is plots of alpha and nu against molecular weight of the probe in systems in which the molecular weight of the matrix has been held fixed. And if you look at those figures, you notice the data lie on nice straight lines. Well, in some cases there aren't many points, and that doesn't prove very much. But in other cases, there are a decent number of points. And you see straight lines through the points. And therefore, alpha and nu are varying in some smooth way as you change the, for example, the size of the probes. Now that smooth way depends on what your matrix and probe are, and if you actually plot all of the measurements on the same graph, it looks rather scattered because there are other factors coming in, like the probe molecular weight, matrix molecular weight. However, we can do, and that's the significance of figure 2-16, the next step in the phenomenological analysis, which is to ask, well, it's very nice you pulled parameters out of the data, but do the parameters show any systematic behavior, or do they look like you generated them by rolling dice? The answer is there is systematic behavior. And if you compare the curves in 2-16, you discover that as you increase the molecular weight of the matrix, as you increase the molecular weight of the matrix, the alpha curves do that. And so there is a smooth dependence on molecular weight of, or at least a dependence on the molecular weight of the matrix. And that is about as far as you can carry that sort of analysis. Okay, I am going to give you a specific additional reading assignment, and the reading assignment is footnote 22 and footnote 25, and this one is in physical review letters, which is a core polymer journal, and this one is in macromolecules, well, PRL publishes all sorts of physics, but it certainly does publish some polymer material, I should say that a bit differently, and macromolecules, which is as core polymer journals as you can get, and you should look at each of those papers, and the reason you are reading each of them is to get some impression of what actual experimental data looks like, as opposed to my very compressed summary, is it compressed, yeah, well, the book is 500 pages, so the book, if I print it out, is about yay-thick. The papers on which the book is based more than fill a four-door filing cabinet, and what I did to write the book was to compress this huge amount of information down to something much smaller. And you have seen that here. Okay, we are done with sedimentation, and I have a theoretical digression, but let us push ahead, and let us push ahead to chapter 3. And in chapter 3 we talk about electrophoresis. The notion behind electrophoresis is actually quite old. 
The notion is if we have a liquid and we apply an electrical field to the liquid, and there are charged objects in the liquid, the charged objects move. However, the actual first experiment was quite different. It goes back two centuries, to a Russian experimenter, and there was a tube basically filled with fine particles, which couldn't move. And there was water in the tube, and the water had been reasonably purified. Of course, it was 200 years ago. It was distilled; it was actually pretty good back then. And the water contained no ions. Nonetheless, if you applied an electrical field to the liquid, it moved. And this is the hint that electrophoresis is actually an extremely complicated phenomenon, and understanding what is happening in an electrophoretic experiment is a whole lot more complicated than understanding sedimentation. What do we learn? How does it work? Okay. The original electrophoresis experiments, this goes back to a fellow by the name of Tiselius, the modern quantitative electrophoretic experiments on polyelectrolyte molecules, were done in bulk solution. And if you do electrophoresis in bulk solution, you have this little problem: you have this large object, you're flowing an electrical current through it, there's resistance, it gets hot, just like a wire will get hot if you pass a current through it. If you're heating the liquid, you see the same effect that you see if you're heating water to make tea, namely, you get roiling and rising because the hot liquid is less dense than the cold liquid and floats upwards. Now, in water, you can cheat by doing the experiment at 4 Celsius, because water has a density maximum at 4 Celsius, and therefore if you change the temperature just a bit, the density doesn't change very much. The modern solution, the more modern solution, well, there were really two of them. And one is to say, we will run the experiment inside a gel, a cross-linked gel that just sits there, and the cross-linked gel suppresses convection because there's no opportunity for easy bulk flow. The other solution is to say, instead of running the experiment in a bulk solution where the tubes are this far apart, we will do this inside a capillary, and this distance will be, say, 100 microns, a very tiny distance, and we have a very small vessel, so there's good hydrodynamic interaction with the walls, and this suppresses convection quite effectively. Having said we can make convection kind of go away, we then ask, what is happening inside that tube? Here are the two walls of the tube, and it's filled with water, and there has to be some salt in it, and here's a protein molecule that is positively charged. And I apply an electrical field E, which means down at the two ends, I have to be doing electrochemistry or something similar, because I have to have the current flowing out of the wire into the solution, and out of the solution into the wire that leads to my battery. I actually have to do some electrochemistry to get the current flowing, because there has to be a current everywhere, so there's an electrical field. And this, we're going to hide. It's off camera. We don't see it. But there's an electric field here, and the electric field applies a force to the charged object. The charge on the object is Q. The force is QE. The description I have just given you is so totally oversimplified that it's wrong. What else is going on at the same time? Gravity. Well, yeah, there is gravity, but actually that's negligible.
If I drop you in water, you float, because your density is about one. And the gravitational force is so small, thank God, that it can be ignored. The next thing that comes along: this has a bunch of positive charges on it. The solution has to contain salt. And out here, there are about as many minus and plus charges. And therefore, the electrical force on the solution out here is about zero. Because the negative ions are pulled one way, the positive ions are pulled the other way, and over any reasonable macroscopic volume, the net force is zero. There is one place where that is not true. And that is the region close to this positive ion. The positive ion is positively charged. It tends to attract negative ions from the surrounding solution. It tends to repel positive ions. My figure is exaggerated so you can see it. And each of these negative ions has a net electrical force on it pulling that way. Because there is an electric field. Now what does this electric force do? Well, it does two things. First, here is the positively charged object. And there is something called Debye screening. Which says that preferentially there are negative ions here. But I am dragging the negative ions this way. So there are more negative ions on this side than there are on that side. Because I have dragged them sideways. And because there are more negative ions here than here, they create an electric field which tends to drag this positively charged object that way. Opposite to the direction in which it was trying to move. Furthermore, each of these negative ions is experiencing an electrical force that is pulling it in that direction. So far so good. And because the ion is moving this way, it creates a wake in the surrounding solution. It creates a hydrodynamic flow that pulls on this ion. And in addition, if there is some poor negative ion here which is directly in front, it is dragged straight into the positive object. And we have a car-truck collision in which the negative ion plays the role of the car. And the protein plays the role of a truck. And all of these effects I have sketched do the same thing. They slow down the motion of the positive ion. Now some of these effects, this hydrodynamic interaction, would exist if we were just looking at the ion in equilibrium in solution, because the big ion, the macroion, if it were moving in solution, would have hydrodynamic interactions with the negative charges. Others, notably this, arise because there is an applied electric field on the solution. But the electric field seen by the ion is not the same thing. There is dielectric polarization. Complicated enough? Well, no, there is one more effect which goes back to the clever Russian experiment of 200 years ago, almost. Here are the walls of the tube. And most objects in solution tend to pick up charges. Or they have charged groups on their surface that ionize. And typically those charged groups, not always, but often, are negative charges that are bound to the surface of the tube. They could be positive charges; I'm drawing an example. And they're negative because they have released positive charges into solution. Now most of the solution is approximately neutral, as many plus as minus charges, like the salt solution out here. But very close to the surface, if the walls of the tube are negatively charged, they tend to attract positive charges to the wall.
They don't bind them, or the wall would be neutral, but there's a surplus of positive over negative charges in solution close to the wall. So far so good? Okay, now I turn on the electric field. The electric field tries to pull these charges, the wall charges, that way, but the wall is mechanically rigid. Those charges just sit there because they're nailed down, quite literally. They're bolted to the experimental apparatus. These charges aren't nailed down, and the negative charges move that way, the few of them there are, and the positive charges all move that way, and we get an effect called electro-osmosis. You say, where is the osmosis? In osmosis you have a membrane, you have water flowing through it in one direction for a reason. Where is the flow here? All these positive charges, the positive charges down there, are moving this way, and if the solvent sitting here looks at the wall, it sees this flow of positive charges that way, and it doesn't see anything else, and therefore the liquid here is dragged along by this flow at the walls. Fortunately, the tube is nice and uniform, the flow field of these positive charges is the same at the wall everywhere, and so the solvent does what is called plug flow. It chugs along this way, and it chugs along with the same speed all the way across the tube. That's entirely different from the flow you introduce by applying pressure. If you try to introduce a flow, not this way, but by applying pressure, the solvent at the walls is stationary, the solvent in the middle is flowing that way, and if I plot velocity of solvent versus position across the pipe, there is a velocity profile, fast in the middle, zero at the walls, that looks like a parabola. Plug flow is central to electrophoresis. Why? Because if I'm clever, I arrange things so the positively charged object is moving this way. I have switched charges around, so the plug flow is that way, and the charged objects, instead of moving through stationary solvent, are moving through solvent that is moving in the other direction, and so a short length of capillary, because I have to move upstream against the flow, takes a much longer time, and the short capillary behaves just like it was actually a much longer capillary. So is this the same as the viscosity? No, this is actually a velocity field. Now, if I say the charged object is trying to move that way against the flow, and the flow is moving the opposite way, well, if the flow is slower than the speed of the charge, the charge moves this way, but it moves that way very slowly, and so it spends a long time in the capillary tube, and we'll get next time to why this is advantageous. If I arrange things right, the plug flow is faster than the motion of the charge, and the object moves the other way. Yes, the electric field on this positive object is trying to drag it this way, but the hydrodynamic flow is too fast, and so it's still dragged that way, downstream. You may think of this as a motorboat in a fast river: if the motorboat has a powerful engine it moves upstream, and if it has a weak engine, it is dragged downstream. Okay? Are we good on this? Okay, we are out of time, but I have gotten us through how electrophoresis works. You should very definitely for next time have read the electrophoresis chapter. There are two papers that you will... I'll show you the author's names, because they're in the footnotes, so you will find them. And one is Annelise Barron, and she and her research group did some very interesting things.
And then there is a paper by Rodbard and Chrambach. I've met her. I have not met them. And you should actually look up those two papers and see what they did. The two papers are too long to have read between now and the next class, but you should glance at... at least skim through them and see the types of measurements that were made, even if you don't follow all the detail. And in the next lecture, we will discuss electrophoresis. This has been lecture three of Physics 597-D, Phenomenology of Polymer Solution Dynamics.
Lecture 3 - sketch of polymer dynamics models, results from sedimentation, how electrophoresis works. George Phillies lectures an advanced graduate course based on his book "Phenomenology of Polymer Solution Dynamics" (Cambridge, 2011).
10.5446/16235 (DOI)
Classes in Polymer Dynamics. Based on George Phillies' book, Phenomenology of Polymer Solution Dynamics, Cambridge University Press, 2011. And today, this lecture is lecture 9, More Dielectric Relaxation. I'm Professor Phillies, and this is part 2 of my classes on the dielectric increment in polymer solutions, based on my book, Phenomenology of Polymer Solution Dynamics. Okay, so what we are going to do today is to continue with our reading of chapter 7. I'll have homework again next time. The issue is as follows: dielectric spectroscopy is an enormously effective tool that gives us a wide variety of different measurements about a polymer solution. In particular, if we have a polymer chain in the solution, it has a few features. It has an end-to-end vector, that is, a vector that starts at one end of the chain and points straight to the other end. And that's a measurement of how big the polymer is in the solution. It's not the only measure, but it is a measure. Furthermore, we can use dielectric relaxation spectroscopy to characterize the relaxation of this vector. That is, as time goes on, the little segments along the chain all perform Brownian motion. They diffuse. Now, they have to stay attached to each other as they diffuse, so they can't just move completely randomly. But if the chain starts in this configuration, after a while, two sorts of, or three sorts of things will have happened. Number one, the center of mass of the chain, which is originally here, will have moved, say, over there. Number two, the two chain ends will have moved relative to each other, I'm marking the two chain ends, and therefore the end-to-end vector will change length and direction. And number three, the parts of the chain between the ends will change which way they're pointed. Now, the details of how a chain moves, how it translates and rotates and changes shape, are quite complicated and not shown in that sketch. Furthermore, there are a number of entirely contradictory theoretical models for how polymer chains move in solution. The key feature here is, from the standpoint of dielectric spectroscopy, if we have a chain whose pieces are all asymmetric, so each piece has a little dipole lined up along the polymer chain, the result of all of these little dipoles lined up is that the polymer chain has a dipole moment which points from one end of the polymer chain to the other end of the polymer chain. And what dielectric relaxation spectroscopy and the measurement of the dielectric increment can do is to determine, number one, the average length of these things; after all, average, yes, it's different for each chain in solution. The relaxation with time of the length and the direction, that is, as time goes on, the direction fluctuates, the length fluctuates; it has one value at one time, another value at another time. And this relaxation in the length and the direction gives us, after a piece of theoretical work, the frequency dependence of the dielectric response. Oh, that reminds me, I said the dielectric constant is 81 and you asked me the dimensions and I went off mentally in the wrong direction. The answer you were looking for is: what units are you in? And the answer is that if we write the potential energy between two charges in that form, we are in CGS ESU units. Epsilon is one in a vacuum and, oh, maybe about four in an oil, and at low frequency, this is low frequency, it's around 80 or so in water, and epsilon is dimensionless.
However, most of you are probably more used to SI units, in which case you have a four pi Epsilon not down here in vacuum and you pile up an extra constant or you change Epsilon, not Epsilon, and that's SI units of electrostatics in which the charges in Coulombs and in that case Epsilon is no longer 81, it's a number which we could go into but doesn't matter. The important issue is that the dielectric constant in water is much higher than it is in the vacuum because in water if you apply an electric field you can line up all of the little water molecules at least a bit. If you think for a moment you may ask, well, gee, if you apply a big enough electric field, don't you line them up completely and then don't they, don't you stop getting an increase in the electric increment and that's correct. Now if you actually try to apply an electric field like that externally, you will have an entertaining time with your experimental efforts. On the other hand, if you take water and you drop into, say, a sodium ion, presumably chloride someplace, this is a lot of charge and it's very tiny and there is a big electrical field here and the water molecules very near a sodium ion are very much lined up. So surrounding ions in water there is a shell where the water molecules are pretty well lined up and at that point, since they're basically already completely aligned, they behave basically like an oil in their dielectric increment. That is you get some increase in the dielectric constant because you can move the electron shells around a little bit. But basically you can't move the water molecules, they're already lined up by the local ion. Okay, let us go back to here and what I was saying is you can measure different sorts of things about a polymer chain. The next thing you can do is to say, well, suppose we have more than one chain at the same time. So there is a chain and there is another chain. What happens? 50 years ago this was a major literature dispute and there were no very easy ways to do the measurements. You knew roughly how big a polymer chain was but the question is, if you increase the polymer concentration what happens? And one answer is, if you really run up the concentration, the chains shrink in on each other but don't overlap. It's now quite clear that if you run up the polymer concentration, the chains do shrink a bit. But they also spend a lot of time interpenetrating each other. I suppose that leads to an obvious question, why do they interpenetrate? How can they interpenetrate? Well, there is a trick. We have a polymer chain and it has some typical size r. It matters a bit which method you use to characterize the typical size because it's a random structure and it's not a solid body. But r goes as molecular weight to a power around a half or a good solvent, not quite 0.6. Suppose you have an object whose radius grows as the half or a bit more power of the molecular weight. That means the molecular weight of the object is proportional to r squared or maybe a bit less. And so you have an object whose molecular weight grows as the square of the radius. Have you seen an object like that before? Yes, it's a spherical shell. If you have a spherical shell, the mass is proportional to the surface area. Yes? And there it is, mass proportional to surface area. Of course, polymers are not spherical shells but they have the same properties as spherical shells. If you increase the mass, the object has to get bigger fairly large, bigger fairly quickly I should say. 
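To write the scaling argument just made compactly (the exponent values are the ones quoted in the lecture, and the internal-concentration line is the same arithmetic that is done again later in the chapter):

```latex
R \propto M^{\nu}, \qquad \tfrac{1}{2} \le \nu \lesssim 0.6
\quad\Longrightarrow\quad
M \propto R^{1/\nu} \sim R^{1.7\text{--}2},
\qquad
c_{\mathrm{internal}} \sim \frac{M}{R^{3}} \propto M^{\,1-3\nu} \sim M^{-0.5}\ \text{to}\ M^{-0.8}.
```

So the mass grows roughly like the surface area of a shell, and the interior of a single coil becomes emptier and emptier as the molecular weight goes up, which is why coils can interpenetrate.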
And furthermore, a spherical shell is mostly empty space inside. Now a polymer chain, the polymer strand, runs around all inside this blob where the polymer is. However, as I increase the molecular weight of the polymer, this polymer gets bigger and bigger and it still has an important property of the spherical shell, namely, if I imagine running a line through it, on the average I hit the polymer coil twice. So if I have a really big polymer with a really big radius and I send a line through it, and I only hit the polymer, which is tiny in cross section, twice, most of this is empty space, and polymers can then interpenetrate, and that's what they do. Now if you will take out your books in advance to figure 7.4, I'm fairly sure, I grabbed the wrong thing this morning, but I'm fairly sure 7.4 is heptane solutions of 140 and 743 kilodalton polyisoprene. Yes? Okay, very good. And what is the point of this? Well, the first issue is, the vertical axis is delta epsilon over C. Delta epsilon is the change in the dielectric constant, there are some constants involved, of the solution. And as you add more polymer, well, if you add more polymer, each molecule makes its contribution to the dielectric constant. But if you divide out, hang on a second, this is getting noisy. If you divide out the concentration, what is left is the dielectric increment per molecule. And the dielectric increment per molecule is approximately proportional, there are bunches of constants involved, to the mean square size of the molecule. So let us start with figure a, and you have a comparison there, and as you make the molecule larger, R square gets larger because the chain is longer, and therefore the dielectric increment gets larger, and therefore, if you stay at concentration equals 0, the left hand axis, the larger molecule gives you a larger dielectric increment. Now the next thing you can do is to increase the concentration of the molecules. And if you do that, there is one thing you might need to worry about, that is, suppose I put a lot of molecules into solution. So here is a polymer molecule, and here is another polymer molecule, and they are somewhat wrapped around each other. And one thing you might ask is, well, are those chains independent of each other? And the reason you would worry about this is that if you pack the chains, and the chains line up, for example, parallel to each other, the dielectric increment of a lot of chains is not linear in how many there are, it is some larger number. But the answer is, here is our first chain, and if I do an average over the solution, the second chain could run through this way, or it could run through in the opposite direction and have its end-to-end vector pointed oppositely. And these two cases are essentially equally likely to occur. The two states have about the same free energy. And because the two states have the same free energy, the direction of this vector and the direction of that vector are completely independent of each other. And so when we look at the dielectric increment of a concentrated solution, we are just seeing, to a very good approximation, the single chain behavior. Now there is an exception to that. There is an exception to that, and the exception is if you have molecules where you run up the concentration and they do some sort of a phase transition to a liquid crystal, in which case the molecules are straightened out, at least approximately.
They tend to, they perhaps pull all point parallel rather than parallel and anti-parallel. There is a bunch of interesting phenomena here that are outside this course. In this case, if you run up the concentration enough, suddenly there is a very dramatic change in the dielectric increment, which we aren't going to talk about. So what happens if we increase the concentration of the polymers? The point of this discussion I just gave you is that this relationship between the dielectric increment and the mean square size of the polymer chain does not care about the polymer concentration. And so if I plot delta epsilon over C versus polymer concentration, I am essentially seeing the single chain behavior. I am seeing that to good approximation. And what do I see happen? Well, if you look at 7.4a, as I run up the polymer concentration, the dielectric increment of the single chain falls. And if I do that to a bigger chain, the dielectric increment falls, and it falls faster. So at low concentrations, with increasing molecular weight, the dielectric increment increases, but at high concentration, these curves can cross, and with increasing polymer molecular weight, the dielectric increment of the system is less. What does this mean? It means that a very large chain at high concentration has contracted down, and now its size has gone decreased considerably, and its ability to increase the dielectric constant of the solution has been substantially reduced. So that is figure 7.4a. Figures 7.4b and c, b is the big one at the bottom, show a slightly different effect, which is also important, but it shows an effect that is of separate physical, class of experiment, I should say, that is of separate physical importance. And what is shown in those figures is that we take a polymer chain, and the general experiment is as follows. We have a polymer chain which we are interested in. We put it in a solution of other chains, this is the probe, out here is the matrix, and we choose the matrix polymer so that our experimental technique doesn't see it. So from the standpoint of our experimental technique, we are looking at a probe polymer, there is a solution out here, the solution, well actually it does contribute at very high frequencies to the dielectric relaxation, but the solution is basically inert, dielectricly or optically or whatever, and we only see the probe chains. Now you might ask how can you do this with a polymer and dielectric relaxation? Go back to what we were saying about last lecture about how a polymer contributes at low frequencies to the dielectric constant. A polymer contributes at low frequencies to the dielectric constant, and if it has as a result of its molecular structure, a bunch of little dipoles inside and the dipoles are arranged so they are fixed to lie along the backbone of the polymer. For example, this is a polyester, and the important feature here is this is the repeat unit, the two ends of the repeat unit are different from each other, and furthermore the structure in the middle does not give me any way to get mirror symmetry, so this has a dipole moment which lies along the bond axis one way or the other. On the other hand, suppose I took, those are hydrogens, the repeat unit here is actually that, that's polyethylene oxide. Well, polyethylene oxide, yeah, it looks like the repeat unit repeats, but if I drew the repeat unit like that, it's the same repeat unit, and so there's a center of reflection and there's no net dipole along the bond axis. 
So this material does not contribute to the dielectric constant, and if we stir it into the solution, we get no effect on the dielectric behavior. Now you have to be a little careful of that statement, because this is replacing your solvent, and just as the solvent has a dielectric increment, which is the same at most frequencies, to a pretty good approximation, so does this. And so you have to pay a little attention to the fact that, even though this doesn't have a big dipole moment, you're doing a little bit of something, and you should think about that on occasion. Nonetheless, here's our probe, here's our invisible matrix, and we are now looking at a ternary, three components, ternary system. So in this system you have a probe polymer, you have a matrix, which in this case is also a polymer, and you have a solvent. And you are looking at, gee, the behavior of the probe as you put a polymer into the solution. Now why would you want to do a probe matrix experiment, as opposed to just stirring the polymer in? After all, you are going to see in this experimental technique, no matter whether you have a tracer probe or a high concentration of probe, the experimental method gives you something close to single molecule behavior. The reason you do this is that the matrix has a polymer molecular weight, M, the probe has a polymer molecular weight, P, and if these are different species, I can vary P and M separately from each other. That's very useful. If you look back at 7.4a, you notice that, gee, the small polymer contracts slowly with increasing concentration. The big polymer starts out large, but contracts very fast with increasing concentration. The solid lines, by the way, you notice there are data points in the figure and solid lines. The solid lines are stretched exponentials; I didn't call it alpha, I called it rho. The solid lines are stretched exponentials; they are smooth curves. If you plot them on semilog paper, you don't quite get straight lines out. The issue is, okay, so you mix chains together and they shrink. And big chains shrink more than little chains. But we come to the key question. Is the statement, big chains shrink more than little chains, occurring because this guy is big and shrinks more, or is it occurring because a big neighbor is more effective at causing a probe to shrink, no matter how big the probe is? Question? For instance, the large chain may shrink by a larger amount than the small chain, but what about the percentage? If you, 7.4a is delta epsilon over c, and delta epsilon is proportional to the mean square radius. If you consider that, you find that for the smaller chain, which is a 140 kilodalton polyisoprene, we drop the delta epsilon over c in these units from about 0.18 to about 0.14, which is 20%, sort of, or a little more. For the large chain, the 743 kilodalton, the shrinkage in r square is from about 0.22 down to about 0.14, at a much lower concentration, which is a much larger fractional change. I'm looking at 7.4a, and therefore the larger chain, fractionally, is actually contracting more. Now, you'd have to sit there and do some arithmetic if you want to calculate what the change in the length, as opposed to the fractional change in the length, is. But the answer, I think, is that, yeah, the big chain really does contract more; but is it that the big chain is just more sensitive and compresses more, or is it that the surrounding chains are bigger and are more effective at causing contraction?
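To make the fractional-change arithmetic concrete, here is a small sketch using the approximate values just quoted from figure 7.4a (the numbers are as read off the figure in the lecture, rounded; nothing here comes from the original data tables):

```python
# Delta-epsilon/c is proportional to the mean-square end-to-end distance <R^2>,
# so fractional changes in Delta-epsilon/c are fractional changes in <R^2>.
pairs = {
    "140 kDa polyisoprene": (0.18, 0.14),   # approximate values quoted in the lecture
    "743 kDa polyisoprene": (0.22, 0.14),
}
for label, (start, end) in pairs.items():
    drop_r2 = 1 - end / start               # fractional drop in <R^2>
    drop_r = 1 - (end / start) ** 0.5       # corresponding fractional drop in R
    print(f"{label}: <R^2> drops {100*drop_r2:.0f}%, R drops {100*drop_r:.0f}%")
```

The larger chain comes out contracting more both absolutely and fractionally, which is the point being made.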
And the first step to understanding that is a ternary probe matrix solvent experiment, such as the experiments you see in figures B and C. Figure B shows the 140 kilo-daltin polymer, and you notice, well, I will point it out, there are a series of curves, this is B. And as you sweep this way, what is happening is you are increasing the size of the matrix polymer. And therefore, a larger matrix polymer is more effective at, that's C. A larger matrix polymer is more effective at compressing neighboring chains than a small matrix polymer is. In contrast, well, it's not really contrast, it's the same effect. If you go to C, which is a large chain, and you ask what happens as you increase the matrix size, you see the same behavior. That is, as you increase the matrix size, I'll explain the significance of that in a second, there is more compression of the chain that's being observed. Now, I will point out an issue. Suppose I have polymers at a certain concentration, very low. And suppose I increase the molecular weight. As I increase the molecular weight, the M goes up, the radius goes up, the radius goes from M to the one-half, so what happens at low concentration to the density of polymer inside a polymer coil as I increase the molecular weight? The density of polymer inside a polymer coil, not in the solution as a whole, that's determined by the concentration, but inside a single coil, the density of that chain inside the coil is the ratio of the molecular weight to the volume of the chain, which is the molecular weight to a radius cube, which is the molecular weight. Oh yes, R is proportional to M to the half or M to the 0.6. So this is M to the 1.5 or 1.8, and therefore the density of a polymer inside its own self is proportional to, well, take the ratio, it's M to the minus 0.5 or M to the minus 0.8, or some number in between. This is the behavior you would get in a theta solvent. This, or very close, is what you get in a good solvent. 0.6, R proportional to M to the 0.6 in a good solvent is a slight overstatement, but you can also get solvents that are in between these two. And the important issue I want to make is that as you make the molecular weight larger and larger, the density inside the coil gets smaller and smaller, and therefore the same number of grams of polymer at low concentration occupies more and more space as the molecular weight is increased. And thus, if we have a solution at some concentration, the bigger the molecular weight, the closer the chains are to rubbing shoulders with each other. And rubbing shoulders is about right because we're talking about the radius of the chain, sort of like this. Okay, so that's figure 7.4. The other thing we can talk about, if you want to ask how effective are matrix chains at compressing a solvent, at compressing a polymer chain, let us look at the next figure. The next figure is 7.5, and it gives us a plot of rho, I'll say what this rho is in a second, against matrix molecular weight. And I remind you the notion is that R squared, which is the same, up to units is like delta epsilon over C, is like a constant e to the minus rho c to the nu. And rho is the constant that describes how steep this stretched exponential is. Well, I've plotted this as a function of matrix molecular weight, and there are two things you should notice. And the first is rho does this. I would not put too, too much emphasis on the last hollow point, which indicates that it might curve back at very large molecular weight. 
It's possible, but there's only one data point there, you really don't want to extrapolate based on one data point. That's a little dangerous. The other thing that happens though, there are actually two curves there, and the two curves show the 140 and the 743 kilodalton probes. And so the answer is, the large probe has a larger rho, and thus contracts faster in some sense than the smaller probe does. And furthermore, rho stays larger for the large probe than for the small probe at all molecular weights. And therefore, we can say, yes, there's a contraction, and part of it is the difference between these two lines, so part of it is that large chains are easier to compress. After all, they're in a certain sense more and more hollow inside. And it's also the case that if you increase the matrix molecular weight, there is a dramatic change in rho, meaning big chains are more effective at compressing each other than small chains are. So that is how that works. Okay, there's something else that can be done, and the analysis in figure 7.6 could be carried out in another step. And what 7.6 does is to say, well, we have polymer properties, different properties, and we could compare them. And these are results from Urukawa at all, and what they did is to measure two things. You will observe, there are footnotes, and I don't always read all of them out loud, but they are there. And the first thing they did is to measure the viscosity. And they measured the viscosity relative to the solvent viscosity. After all, if you change the solvent, the viscosity of the solution does change, but it's not changing because of polymer properties. And what they observe is a series of curves, viscosity versus concentration. And with increasing polymer molecular weight, there's only one polymer in solution. This is a binary system we're looking at. With increasing polymer molecular weight, what happens is the viscosity of the solution goes up. So if you have a solution, the bigger the chain is, the more viscous the solution is. And that is a nice, uniform rule. However, they then take this stuff, and they do the experiment on a binary solution and a ternary solution, and they determine how the relaxation time, and what the relaxation time tells you is how long it takes for the polymer coil to the end-to-end vector to forget which way it was pointing. So if we have a typical polymer, it has an end-to-end vector by arm. As time goes on, the vector gets longer and shorter, and it points in different directions. And eventually, the initial direction of the polymer end-to-end vector has nothing to do with the final direction, and the two directions are independent from each other. That is, if you wait long enough, yes, originally the end-to-end vector was pointing this way, the direction my arm is pointing, but if I wait long enough, the end-to-end vector is equally likely to be pointing in every direction. So first, are good? Well, if I increase polymer concentration, it takes longer for the end-to-end vector to reorient and to change length. I'm not telling you how it reorients. I'm just saying it does reorient. That's proven by this experiment. And therefore, as I increase the polymer concentration, that's the horizontal axis, the orientation time goes up. And you can now do a binary experiment or a ternary experiment. You can look at the polymer chain in solution with itself, or you can look at the polymer chain in a solution with a matrix. 
And what we observe is that as we increase the matrix molecular weight, you notice there are two families of curves in 7.6b, and as I increase the matrix molecular weight, even at very high concentrations, the reorientation slows down. So the molecular weight of the matrix has a direct effect on how fast the polymer coils can reorient. This result is possibly not equally consistent with all theories of polymer dynamics, all theories of how polymers move in solution. However, you can always say, well, the concentration isn't high enough, the molecular weight of the matrix isn't high enough, and therefore it isn't that the theory, some particular theory is wrong, is that it's really not applicable to these solutions. Okay, we now push ahead, and we are going to reach section 7.3. Chain dimensions. Chain dimensions, we're actually going to talk about all of the techniques that can be used to measure chain dimensions. This is one of these sections that was, well, it's a little too short for a chapter by itself, so you ask the sensible question, where do you put it? The answer is there were lots of sensible places, it could have been placed. Many people would have put it in an early chapter discussing dilute solution behavior. The basic question we are talking about is, we have a polymer molecule, it has a molecular weight, M, it has some characteristic size, and lots of ways to characterize the size are, and are, however you define it, is proportional to M to some power. Now, the reason why you're interested in this question is actually an analytical chemistry question. That is, you have people sitting there, and you have the chemical plant going, ka-chunk, ka-chunk, ka-chunk, or whatever, and you're looking out polymer, and you would like to characterize the polymer so you know what the molecular weight of the material you're making is. The reason for this is there are large numbers of industrial applications of polymers. Many of them are sensitive to some particular property of the polymer, such as its molecular weight, and you would like to know how big the polymer is and how it would be paying for a particular molecular weight and would be annoyed if it doesn't show up, or something like this. And so what you do, what you do historically is you take the polymer coming out of the plant, and the polymer coming out of the plant may be a solid or a melt, or a very concentrated solution, or God knows what, and you take this, and you almost certainly are not producing a dilute, you could be, a dilute solution that tends to be inconvenient on an industrial scale because you have to pay for the concentration step. And you take this material to the lab, and what you do is you take a bit of it, and you dilute it. The reason you dilute it is that once it's dilute, there's a polymer chain here, and there's a polymer chain there, and there's a polymer chain here, and on the average the chains are quite separated from each other. And because they're separated from each other, these polymer chains change the behavior of the solution, but they do so in a way that is not affected by their interactions, it's just determined, if you're lucky, by how big the chain is. And so you measure dilute solution properties, and the dilute solution properties can be used to tell you how big the polymer is. Now you might sensibly ask, well, what dilute solution properties are there? 
And one dilute solution property is the viscosity, which is the viscosity of the solvent times one plus a constant times the concentration, plus higher order terms. But if you just add a little bit of chain to the solution, the linear term is quite accurate. And the feature here is that K... Is that a power law? Well, this is a power series. However, you can measure K1 quite accurately, and K1 is approximately proportional to some power, like the cube, of the size of the polymer. Now I put a subscript on R. This is the viscometric radius. It's the radius inferred by asking, how effective are these chains at changing the viscosity of the solution? The reason I bring up, or often say, that this is the viscometric radius is that, well, there are several other techniques for measuring polymer size. For example, you could take the solution and shine a light beam, we now use a laser, but they didn't, when this technique was first developed, through here; they didn't have lasers back then. The solution scatters light; it becomes cloudy, opalescent. And the reason the solution becomes cloudy is the little polymer chains in solution do not have the same index of refraction that water does, so they scatter light. And if you measure the intensity of the scattered light as a function of angle, why as a function of angle? Well, if the polymer chain is very small, it scatters light effectively in all directions. I am skipping a light polarization issue, which is experimentally very important. But with vertical polarization in, vertical polarization out, scattering plane horizontal, so if we're looking down from above, we look at the light scattering like this, and the polarization vector of the light is perpendicular to the paper. I can make the statement: if the molecule is very small, it scatters light equally in all directions. However, if we make the molecule bigger, eventually the molecule starts to approach being comparable to the wavelength of light. This is a light wave, you see, it oscillates. And as a result, this part of the molecule is being illuminated by this part of the wave and has one phase. This part of the molecule is being illuminated by a different part of the light wave, and scatters light with a different phase, and the light scattered from these two points can interfere. I am omitting details. And as a result, the light scattering depends on the scattering angle. How does it depend on the scattering angle? As you make the polymer bigger and bigger, the light scattering is focused more and more in the forward direction. The extreme case of this is shown by window glass. If you make the polymer enormous so that you have this block, and it's only one molecule, the laser beam goes straight through. If you want to say there's scattering, it's only in exactly the forward direction, and if the block is homogeneous, there's almost no scattering to the side at all. The object is transparent. A similar mathematical effect, it's not a polymer, is seen in window glass. Window glass is transparent because, to the extent you get scattering, it's forward. So, you say, the scattering versus angle depends on the polymer molecular weight, and therefore there is another radius we can infer, and the other radius we can infer is determined by the scattering experiment. I can run down a series of these, and there are a series of different ways we can physically measure an apparent radius of the molecule.
For example, we could measure the diffusion coefficient, how fast an individual chain moves, and the diffusion coefficient, D, is proportional to an average of r to the minus one power. We could do an osmotic pressure measurement; I won't go into that. So, we have a bunch of different ways we can do the measurement, and for each of them we can say r is proportional to m to some power in dilute solution. The power depends on the solvent. It's a little less, a shade under 0.6, in a good solvent. It's 0.5 in a theta solvent. Those are the same numbers I put up a few moments ago. However, most experimental techniques give the same number out. That is, for most experimental techniques, if you ask how does the scattering power, how does the experiment, depend on molecular weight, you turn out to get the same exponent. The one exception is that if you do light scattering measurements under some conditions, you get a slightly different exponent. Now, as long as you know what you're doing and are using the same technique, you need to know what the exponent is. Of course, it's very nice to say, oh yes, it depends on the power law. How do we determine what that power is? We take some property like that constant K1 I mentioned, and we measure it for a series of polymers of different molecular weight. There are a bunch of different ways to determine polymer molecular weight. And having done this, we plot log K versus log M, and we get, if it's power law behavior, a straight line of some slope. Well, if K is proportional to M to the nu, log K is proportional to nu log M, and the slope here, the slope of this line, gives us the exponent. There's one modest challenge. The modest challenge is that if you want to determine this slope, this number and this number both have to change by a significant amount. And therefore, if you say, I would like to get the slope accurately, well, of course, it depends how accurately you can measure K. If you can measure K and M incredibly accurately, you can determine the slope with a fairly short piece of line. The problem is the data points are doing this. And if you just look in a little box, yeah, the slope could be this, or look at where those points are. Maybe the slope is more like that, or maybe it's more like that. And if you don't cover much range this way or that way, relative to the scatter in your points, you can't tell accurately what the slope is. So what you need to do is have something that covers, ideally, orders of magnitude here, and fewer orders of magnitude there, and then you can make things work. If you read through the chapter, you will observe that people looked at synthetic polymers with molecular weights up to about 50 million. 50 million is a heroic molecular weight to synthesize, except for biopolymers. DNAs do not view 50 million as extremely large. And there are extremely high molecular weight polyethylenes. Yes? It's more than 15 million? Yes, you can get extremely high molecular weights. However, if you go to a standard industrial catalog and look at the polymers they're selling, you will find they run in the tens of thousands and the hundreds of thousands, and maybe a shade over a million. If you want something that is, say, 48 million daltons and that is highly monodisperse, you have a significant synthetic chemical challenge ahead of you, which people have risen to solve.
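A small sketch, with made-up data, of the slope-determination problem just described, showing why covering several decades of molecular weight matters (the true exponent and scatter level here are arbitrary choices, not values from the book):

```python
import numpy as np

rng = np.random.default_rng(1)

def fitted_exponent(M, true_nu=0.6, scatter=0.05):
    # Synthetic K proportional to M**nu with lognormal scatter on K;
    # the fitted log-log slope is the estimate of nu.
    K = M**true_nu * np.exp(scatter * rng.standard_normal(M.size))
    slope, _ = np.polyfit(np.log10(M), np.log10(K), 1)
    return slope

narrow = np.logspace(5.0, 5.5, 10)   # half a decade of molecular weight
wide   = np.logspace(4.0, 7.7, 10)   # nearly four decades, up toward 50 million daltons

for label, M in [("half a decade", narrow), ("four decades", wide)]:
    slopes = [fitted_exponent(M) for _ in range(200)]
    print(f"{label}: nu = {np.mean(slopes):.3f} +/- {np.std(slopes):.3f}")
```

The same measurement scatter gives a far larger uncertainty in the exponent when the molecular weight range is narrow, which is why people went to the trouble of synthesizing the very large monodisperse polymers.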
The reason they rose to solve it is that there are theories for dilute polymers that predict this slope, and there are people who wanted to test those theories and were willing to go to a great deal of work to test them. Okay. Now we will skip ahead to figure 7.8, but I won't talk about it for a few moments. The issue here is, now what happens if you increase the polymer concentration? And what turns out to happen if you increase the polymer concentration is the polymer coils shrink. We've already talked about dielectric relaxation data. And in the dielectric relaxation data, um, G, dielectric relaxation, if we increase the concentration, the dielectric increment per unit concentration, the increment per molecule, falls the chains contract. However, there are two other ways of doing the same experiment. And the first is light scattering. And the second is neutron scattering. The original light scattering experiments, there's a historical note here. Go to a paper by, I am pulling the names from memory, but it's in the book Bouchot and Benoit. I have met Benoit. And they looked at this interesting problem. Suppose we do experiments on dilute solution. It's, the technique was very well established to get polymer molecular weight of polyethylene, for example, in dilute solution by looking at how much light it's got. However, if you looked at a random copolymer, you didn't get very consistent or reasonable results. A copolymer, well, there are block copolymers. This is a block copolymer. This is a die block. This is a block copolymer. There are copolymers that are absolutely patterned. And then there are random copolymers, where the other A's and their B's. But the synthetic process doesn't impose a great deal of ordering here. And so there are polymers like, you try to measure the molecular weight of these and you don't get very sensible results. And the question was why? And they had this very bright idea, which they tested. And the bright idea was that the reason you weren't getting quite sensible results is that each chain was not the same as all of its neighbors in the ordering of the A's and the B's. And therefore you got excess scattering because there was this difference between one chain and the next. I'm not explaining the details of how that worked. So they asked, how can we test if the excess scattering is arising because the chains are heterogenous? That is, they don't all have the same pattern. Well, it's a little hard to control the synthesis. So what we'll do is to make a polymer that is as a die block that is as heterogenous as possible. And finally we will look at a mixture of these and these. You can't get much and you can't get any more dramatically different in composition chain by chain than what you see here. And what they confirmed was that yes, the fact that these aren't the same contributes to the light scattering. And this proceeded along, the development proceeds along to the point where if you have these chains the index match, have the same index of refraction as the solvent people realize, you can't see these anymore and you just see the behavior of the B chains. And therefore if you have large B chains where you can use light scattering to measure the radius and you stick in, start substituting in A chains for solvent, the A chains will cause the B chains to contract. Exactly the behavior we talked about for dielectric relaxation, but now we are using light scattering to see the contraction. 
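Roughly speaking, the reason index matching works can be stated in standard dilute-solution light-scattering notation (this shorthand is mine, not the book's):

```latex
R_{\theta}(q) \;\propto\; \left(\frac{dn}{dc}\right)^{2} c \, M \, P(qR_g),
\qquad
P(qR_g) \simeq 1 - \tfrac{1}{3}\, q^{2} R_g^{2} + \cdots
```

If the matrix polymer is chosen so that its dn/dc against the solvent is essentially zero, its term drops out, and the angular dependence of the scattering measures the size of the visible chains alone, even with matrix chains present at substantial concentration.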
And there were a series of experiments and memory serves the core author in that list will be Kuhn. And the observation is that if we have a chain of some size and we increase the concentration of invisible matrix chains, R square of the radius is what you measure, you discover that the chains shrink. That experiment is hard to push out to high-polymer concentration. At least it wasn't. However, there is a third way to do the experiment. And the third way to do the experiment is neutron scattering. You can't get much more scientifically impressive than an experiment where you need a large nuclear reactor in your lab in order to carry things out. Now actually you do not have a large nuclear reactor in your lab. Instead you go to a national facility, there are a very small number of these in the world, which has a big reactor which produces a very large, very stable, very well characterized neutron flux. It has little holes in the side. Neutron beams are filtered and come out and you can now use them for scattering. And you are very careful to do bunches of things for safety. But the net result is you can scatter neutrons. You might ask, why do you want to scatter neutrons? And the first answer is neutrons will go through things rather cleanly, at least if they're low atomic weight. You might get absorption eventually. If you walk around with silver coins in your pocket and you were careless with where the neutrons are going, after a while you notice that the neutrons are becoming radioactive, the silver is becoming radioactive rather, because it's absorbing neutrons. And there are some interesting pre-World War II cyclotron descriptions where people should have been more careful and eventually were. However, having said that, the virtue of neutron scattering is they scatter very effectively from hydrogen and from deuterium. But they do not scatter the same way. Again, there is a large theoretical development which I am skipping over. But the net result is that if I take perduterated, this is perduterated polyethylene, and I put it into a sea of hydrogenated polyethylene, I mostly see only one of the, or vice versa. I really only see one of these. And if I am very clever, instead of I take a material that is some amount h and some amount x, and if I get the ratio just right, the neutron scattering almost does not see this at all. And I can now produce invisible polymer chains. Neutrons scattering invisible polymer chains. And I then put in them the chains that are going to scatter. And so I have, for example, trace quantities of the visible chain and large amounts of the invisible chain. And I can use neutron scattering on a polymer that is very well characterized. The material you are looking at is polystyrene and toluene. And also polystyrene and carbon disulfide. Why carbon disulfide? It doesn't contain any hydrogen. It is neutron invisible. Well, it is not quite neutron invisible, but it is good enough. And we can plot in figure 7.8 the chain radius versus the polymer concentration. And what you see if you don't look at the figure too hard is that as you change the chain rate, as you change the matrix concentration, the chains shrink. Now, that figure is a little more complicated than you might think if you didn't look at it hard. First of all, the two polymer chains are very nearly the same size. The two chains we are looking at, Daoud et al., the solid points, are looking at 110 kilodalton polystyrene. A king et al. are looking at 111 kilodalton polystyrene in different solvent. 
So what I did to generate this figure, and this is one of these things that is worth talking about, to generate a figure, is if you look hard, you will notice there is a size scale on the left and right axis, and they are not the same. The reason is that the polymer in these two solvents has a different, even though it is almost the same molecular weight, it is different by 1%. And you should realize that saying these polymers differ in molecular weight by 1% means you have an unreasonable confidence in your experimental accuracy, or you are working with DNA where you know the molecular weight exactly. So what I did was say, okay, we will take two vertical scales, one for each set of experiments, and they will have the same top end, so that at zero concentration we have the same starting point. And the two axes will then cover the same fractional change in radius. And that is what we do, and you notice the axes are quite different, but they do have the same starting point, and they do both cover approximately a factor of two change in radius. Exactly a factor of two. The other problem I had, which is a little more also significant, is that one of the research groups perfectly reasonably reported the polymer concentration in grams per liter, grams of polymer per liter of solution. That is a very reasonable concentration unit. The other group reported the mole fraction, and you then have the little difficulty that those two are of mole fraction of polymer, except at the two end points, those two scales aren't quite the same, fundamentally. Why aren't they the same? The reason they're not the same is something called volume of mixing. The traditional freshman chemistry example of this is to take a liter of water and a liter of non-denatured ethanol, that's tax stamped ethanol, and you mix them, and you now have, for all practical purposes, vodka. However, do you end up with two liters? No. No, you end up with, does anyone happen to remember the number? Oh, 1.7 over 8. Yes, it's about that. I was going to say 1.9, but you're right. You end up with, you have a liter and a liter, and you mix them together, and as you mix them together, the system contracts. And the details of that are quite complicated. The reason this is important is that if you want to convert how many moles of this and moles of that we have, or weight fraction, we have so much weight, I think that may actually be weight fraction, I think I misremembered, we have 50% by weight of polystyrene. That doesn't mean you have, given the density is 0.9 roughly, that doesn't mean you have 450 grams of polystyrene in a liter of solution. It means you have 450 grams of polystyrene in 900 grams of solution, and you cannot convert unless you do some auxiliary measurements, which were not done, or at least which I did not find. And so there are two sets of concentration units, bottom and top. They are arranged so infinite dilution, and melt, all polymer, line up, because in the melt, 100% by weight, and whatever the, it's around, oh, 943, if I recall, gram per liter, the melt numbers are the same, and so we have composed this graph to show two sets of data. Now why are, what is the issue here? Well, those two sets of data, they're both very hard to measure. They were done by two slightly different methods. Daoud used a tracer amount of polystyrene, labeled polystyrene. King used concentrated polystyrene and used a statistical mechanics trick to pull out the radius of the labeled polystyrene from the unlabeled polystyrene. 
These were very controversial for a very long time because they do not agree with each other. The core issue: why are we even interested in this graph? Well, there is a scaling theory of polymers. It's in the Daoud paper, and the references are there. And what is claimed, if we look at R squared versus concentration, is that at low concentration nothing happens. Well, that's what it says. And then there's some sort of crossover, which means there is a region where several things are happening simultaneously, and the theory doesn't make a clear prediction. And then out here, the theoretical prediction is that R squared is proportional to C to the minus one-quarter. Chains contract as you increase the concentration, and the prediction was that you get a power law. That is the prediction. And so there was an effort made to measure experimentally what the power was. And there are these two sets of data, and if you plot the two sets of data on log-log paper, you get two somewhat different exponents. And the black dots agree, sort of, with the exponent that was predicted, and the open dots get a different number. And since some people were very attached to the theory, there was a great deal of back and forth as to what was going on. Let me, however, point out a couple of features of this graph that aren't emphasized quite as much, except by people who aren't quite as involved in the dispute. And the first is that if you look at that graph, I said there was a crossover, and the crossover occurs roughly at a concentration, a C star, the overlap concentration, at which the polymer chains, approximately speaking, are shoulder to shoulder, but haven't started to interpenetrate too much. Well, that's what theory says. But if you look at that data, it does this. And in fact, something like, oh, a third of the contraction has occurred in this dilute solution region where there's not supposed to be any contraction. And then the contraction continues. You can see the contraction. And if you ask what happens in solution, well, the contraction continues and continues. And I'll draw a line here, which is something like 500 grams per liter, sort of half polymer. And beyond that, perhaps it continues a bit, that's the solid dots, or perhaps it runs out of steam. The open dots sort of show that the polymer doesn't contract much at all above about 50% by weight. And there are people who will be happy to explain why you should be seeing that. But the net result is the curve does not look a great deal like the theory, qualitatively. If you only look at the high concentration dots, the dots that are, oh, above, say, 100 grams per liter, you can put them on a power law. This is a log-log plot, radius squared versus concentration. You can put them on a power law, but the variation this way isn't exactly very big. The variation this way isn't exactly huge. So there's some uncertainty in the power law. And there was then a great deal of dispute as to what was happening. I put in something different from a power law. Those two solid curves are stretched exponentials again. And you see the solid curves do quite nicely at going through all of the points, including the low concentration points where the power law does not predict any contraction. And the only difficulty occurs if you go out to the melt concentration. Okay, that's it for chain size. We will now push ahead, and I will at least slightly talk about relaxations.
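As a sketch of the fitting comparison just described for figure 7.8 (the numbers below are synthetic, chosen only to make the point; they are not the Daoud or King data):

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched(c, r2_0, rho, nu):
    # R^2(c) = R^2(0) * exp(-rho * c**nu)
    return r2_0 * np.exp(-rho * c**nu)

# Hypothetical R^2 versus concentration data spanning dilute solution to near-melt.
rng = np.random.default_rng(2)
c = np.array([1, 3, 10, 30, 60, 100, 200, 400, 700, 900], dtype=float)  # g/L
r2 = stretched(c, 1.0, 0.05, 0.55) * (1 + 0.03 * rng.standard_normal(c.size))

# Fit 1: a stretched exponential over all concentrations, dilute points included.
p_stretch, _ = curve_fit(stretched, c, r2, p0=[1.0, 0.05, 0.5])

# Fit 2: an apparent power law, R^2 ~ c**x, but only over the semidilute window
# (here c >= 100 g/L), which is where the scaling prediction is meant to apply.
hi = c >= 100
x, logA = np.polyfit(np.log(c[hi]), np.log(r2[hi]), 1)

print("stretched-exponential fit (rho, nu):", np.round(p_stretch[1:], 3))
print("apparent power-law exponent over the high-c window:", round(x, 3))
```

Even when the underlying curve is a stretched exponential, restricting attention to a limited concentration window always yields some apparent power-law exponent, which is the caution being raised about the log-log comparison.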
The first point is experimentally, the easy way to do the experiment is to take the sample, put it between metal plates, and apply an oscillating electric field that oscillates at a given frequency. Okay, so we have a capacitor. Yes? So we can apply an oscillating electric field at some frequency, and we can then measure the capacitance of the capacitor. Now how you measure the capacitance, well, if you want to do it accurately, you have to work a bit. The important issue, and this is actually physics too, is the capacitance of the capacitor is determined by the dielectric constant of the material between the plates. Okay? Yeah. Dielectric constant. And if we are talking to freshmen, we tell them there's a material that has a dielectric constant which can be small or large. In the real world, epsilon depends on omega. The dielectric constant depends on frequency. You've actually seen this in practice. That is, if you ask, what is the dielectric constant of water, it's very large. If you go up to optical frequency, the index of refraction is directly related to the dielectric constant. Yes, water has a dielectric constant which is not one, and an index of refraction which is not one, but it's not very big. And in fact, if you plot the dielectric constant versus frequency, you discover, we'll use log omega because we want to cover lots of frequencies, and we will use log of the dielectric constant, it rolls off. It rolls off quite dramatically. We'll discuss the details of the roll off next time. Now, what I want to do first is to simply get to the question, well, why does it roll off? Why does the dielectric constant drop at very high frequency? Well, let's go back and think. Here is a molecule, and it has two charged ends. This is a simple case in water. Say, an amino acid. It's actually got charges on it that are fixed. We apply an electric field. We get a dielectric constant because the molecule can line up with the field. However, if I flip the field around, it's an oscillating field, the molecule has to turn around through 180 degrees to match. Okay? That takes a while. If we are at very low frequency, the molecule has no difficulty flipping and flopping each time we change the sign of the electric field in the capacitor. However, if I crank up the frequency enough, the molecule starts to flip, but it can't keep up. And before it's rotated significantly, the field is pointing back the way it was originally. And as a result, at very high frequency, the molecules simply aren't capable of keeping up with the field, and they simply sit there. Yes? Okay. Gee, that's the roll off. Now, there's something else that can happen that does not come up in dielectric constant studies but comes up in certain related studies. Namely, as you keep changing the field around, the flip and flop back and forth may get out of phase with the charge you're applying here. You could describe this either as I am applying a voltage across the plates, or you could say I am applying a charge on the two plates. You can do several different things, and there are some complications that we will not get into in dielectric response. So there has to be somehow time for the dipoles to relax. At very high frequency there isn't; the dipoles just sit there, and they do not contribute to the dielectric relaxation. However, the dipoles, or rather
the molecules still have electrons, and the electrons shuttle back and forth even though the molecules do not have time to rotate. And so at very high frequencies, way, way out in the optical almost, you will still have a dielectric response, but it will be much smaller because it will just correspond to the electrons moving. However, the molecular rotation or local rotation, part of the dipole behavior, doesn't contribute anything.
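The details of the roll-off are promised for next time; as a placeholder, here is a hedged sketch of the single-relaxation-time (Debye-type) picture that the rotating-dipole argument suggests. The limiting dielectric constants and the relaxation time below are illustrative values chosen to be roughly water-like, not measured data from the lecture.

import numpy as np

# Hedged sketch of a single-relaxation-time (Debye-type) roll-off, consistent
# with the rotating-dipole argument: below 1/tau the dipoles follow the field,
# above it only the electronic response is left.  Numbers are illustrative.
eps_static = 80.0      # low-frequency value (dipoles keep up), water-like
eps_inf = 1.8          # high-frequency value (electrons only), roughly n^2
tau = 8e-12            # rotational relaxation time in seconds (assumed)

omega = np.logspace(9, 14, 6)                 # rad/s
eps_real = eps_inf + (eps_static - eps_inf) / (1.0 + (omega * tau)**2)
for w, e in zip(omega, eps_real):
    print(f"omega = {w:9.2e} rad/s   eps' = {e:6.2f}")
# eps' sits near eps_static while omega*tau << 1 and rolls off to eps_inf
# once the dipoles can no longer turn around within a field cycle.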
Lecture 9 - dielectric relaxation, part 2. George Phillies lectures on polymer dynamics, based on his book "Phenomenology of Polymer Solution Dynamics".
10.5446/16232 (DOI)
Classes in polymer dynamics based on George Phillies' book Phenomenology of Polymer Solution Dynamics, Cambridge University Press 2011. And today this lecture is lecture six, small molecule motion. In any event, I'm Professor Phillies. This is Physics 597D Phenomenology of Polymer Dynamics. And today we're going to discuss the rest of chapter four of my book on light scattering. And we're going to discuss chapter five. And because the class composition is more materials than physics, I'm not going to discuss very much of the theory issues of how light scattering works or how you can calculate colloid dynamics. The calculation is important and will show up in chapter 10 because we will demonstrate we can actually calculate for spherical particles how the diffusion coefficient depends on concentration. And we get more or less the right answer. And the fact that we get more or less the right answer gives us good reason to believe that we are actually seeing a phenomenon that we understand. So let us go back to where we were last time. And as usual, the chalk has gone for a walk and the erasers have gone for a walk. And therefore I must briefly go for a walk. So let us sketch how scattering works. And what I'm going to sketch works equally well for static x-ray scattering, for electron scattering, for neutron scattering, and for light scattering. We have a wave coming in. The cross lines I'm drawing are in planes of constant phase. The light is going that way. I'll say light because that's what we agreed. And here are particles that scatter the light. And the light strikes them. And it actually heads off in all directions. However, some of it heads off and reaches the detector. In a real experiment, if it's light scattering, this distance is very small. The whole scattering volume is the laser beam, focused down to, say, 100 microns across. And the image we collect this way is the same size. And this distance, to the photomultiplier tube, might be a meter or two. So in fact, the light rays going to the detector have to be traveling very nearly parallel to each other to get to the detector. If you are worried about whether they actually get to the detector at the same time or same direction, you look up a paper by my PhD advisor, George Benedek. And this is a late 60s, early 70s paper. And he worries about the question. And the answer is, we're good as long as the optics are set up properly. However, this path length and this path length are not the same. And therefore, if we consider these two light waves getting to the detector at the same time, they had to start here at different times. This ray had to start off at the laser earlier than this one did. And since they started at two different times, they had two different phases. Now, I'm going to assume you had some freshman physics at some time, and you've seen an interference experiment with lasers. Yes? Yes? OK. So what happens is the two light rays get to the detector. And if they're in phase with each other, the light is bright. And if they're 180 degrees out of phase, the light waves cancel. There's no light at all. Now, in fact, there are more than two scattering particles. There are loads and loads of them in solution. And you have to add up what all of them do. Now, the first approximation is the light shows up with all different phases equally and goes to zero. But that's only an approximation. And there are fluctuations.
And so at some time, I have drawn things so that these two particles happen to give light that is scattered in phase. And other light waves, I'm drawing lines like this for a reason. Other particles that lie along these lines scatter light that also gets to the detector in phase. Particles that are here scatter light that is 180 degrees out of phase. So the light scattered from these particles adds coherently. The light scattered from these particles adds anti-coherently. But you notice there are more particles here and here than there are there. And therefore, in net, there is some scattered light. Have you ever seen this phenomenon? Yes. This is why the sky glows in the daytime. Namely, there's light scattering from the air because the air on an atomic scale is not perfectly uniform. In fact, what we are looking at is a density fluctuation, a concentration fluctuation, that has some amplitude that I'll call a, and that varies in space as a cosine wave. x is the distance this way. There is a great deal of math that just got suppressed. The important issue, though, is you are looking at microscopic fluctuations that occur because the particles move around at random and by random chance produce density fluctuations that look like a cosine wave. Question? Yeah. The axis is perpendicular to the atom lines? It is perpendicular to the atom lines, but the useful feature is that this k is a vector pointing that way. We could write this as k, the vector pointing that way, dot r, the position of the atom in the system. And how do we define k? Well, the light wavelength is lambda, so 2 pi over lambda is the wave vector of the light. The outgoing wave vector is 2 pi over lambda times a unit vector pointing that way, and we subtract 2 pi over lambda times the incoming unit vector. The light is the same color after scattering as before, but it was initially going in the direction i, and at the end it is going in the direction f. And the change in the wave vector of the light, which is just a change in direction, this change in k, is called the scattering vector. And since I do not have an infinite amount of time to tell you about this, I am going to refer you to several superb books on the topic. There is a book by Berne and Pecora. I believe it's now two books by Ben Chu, who was, I think he's just retired, at Stony Brook. Very nice fellow. Very good books. There are two collections edited by Wyn Brown. There are also two NATO Advanced Science Institute collections. Those go back to about 1970. And if you refer to these, you can get a much more detailed description. What is the important issue here? Well, first of all, I said you get scattering because the particles give you a density fluctuation that looks like a cosine wave. You should realize that every possible cosine wave is being driven by the fluctuations at the same time. And each different cosine wave gives you scattering in a different direction. So we just sample one of them. And so if we plotted concentration versus x and ignore the fact that there are all these other things going on at the same time, you would see a fluctuation of the concentration around its equilibrium value that looks like that. This distance is microscopic. It's sort of 1 over the square root of the number of molecules in the system. It's very tiny. That's all you need. And what does this thing do if you watch it and measure its amplitude as a function of time? On the average, if you wait for a moment when it's large, the fluctuation decays exponentially.
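A hedged sketch of the scattering-vector bookkeeping just described: k is 2 pi over lambda times the outgoing unit vector minus 2 pi over lambda times the incoming unit vector, and its magnitude follows from the scattering angle. The wavelength and angle below are arbitrary example values, and the refractive-index correction one would use for a real solution is not included here.

import numpy as np

# Hedged sketch: the scattering vector is the change in the light's wave vector,
# q = (2*pi/lam) * (f_hat - i_hat).  Example numbers only.
lam = 500e-9                      # wavelength in the medium, m (illustrative)
theta = np.deg2rad(90.0)          # scattering angle

i_hat = np.array([1.0, 0.0, 0.0])                       # incoming direction
f_hat = np.array([np.cos(theta), np.sin(theta), 0.0])   # outgoing direction

q_vec = (2 * np.pi / lam) * (f_hat - i_hat)
q = np.linalg.norm(q_vec)

print("q =", q, "1/m")
print("check against (4*pi/lam)*sin(theta/2):",
      (4 * np.pi / lam) * np.sin(theta / 2))
# 2*pi/q is then the wavelength of the particular cosine-wave density
# fluctuation whose scattering reaches the detector at this angle.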
The decay involves this k: 2 pi over k is the wavelength of the fluctuation, sort of this distance, and the fluctuation decays as e to the minus k squared times the diffusion coefficient times t. And in the simple case, these things simply relax exponentially because the particles are just doing diffusion. And therefore, if you look at the scattered light and the time evolution of these fluctuations and the scattering, you can measure a diffusion coefficient. That's actually very useful. Now, there's something else you can do that's a little more subtle. Many molecules, or polymer monomer segments, are not spheres. As a result, if you send in light with one polarization, when it strikes the molecule, what happens? Well, it's an electric field. It's striking molecular matter. It creates an electrical dipole in the matter. And the dipole is not parallel to the original electric field because this isn't a sphere. And when the light comes out to the side, this is coming out straight towards you. The light was originally polarized like this, vertical in the plane of the blackboard. But when the light comes out towards you, some of the light will be depolarized. It will come out not with the original vertical polarization but with horizontal polarization. However, this molecule, it moves. It rotates. As a result, as time goes on, the required time is nanoseconds or picoseconds, something very short, well under a microsecond, this thing rotates. And the amount of light going out towards you that has been depolarized changes. If we can characterize the time scale on which the depolarization occurs, we can measure how long it takes for a molecule to rotate or how long it takes for a segment of a long polymer chain to rotate. And you can actually do this experimentally. That's depolarized light scattering. OK? So that's depolarized scattering. Now, there are other ways of getting depolarized scattering. If you simply scatter light from a rough surface, some of it comes off depolarized. But we're talking about molecular scattering. Depolarization is often quite weak. And you have to work very hard to isolate the depolarized light. But you can. Questions? Is it always assumed that the light is polarized? Oh, yes. The light actually has to be polarized to make the experiment work. Now, in fact, if you are doing 90 degree scattering, there are some math details and things cancel. But in general, you have to start with a polarized light source. Any good laser will do this. And you have to separate out the depolarized scattering, which is very weak, from the polarized scattering, which is very strong. And there are optical devices that will do this for you. OK, that's depolarized scattering. It gives you molecular reorientation. OK, what else do we discuss in chapter 4? There is one last bit. We are talking about diffusion coefficients. Now, for, say, an isolated sphere in water, there is a result due to Stokes. Well, he didn't know about what was going to be done with this work. And Einstein, who did the important part, what Einstein said was the diffusion coefficient is the thermal energy, kT, over a drag coefficient, f, that resists the motion. What is f? Well, if I pull the sphere through water at some speed, there is a drag force on the sphere, which points backwards relative to the direction of the velocity. That's where the minus sign comes from. And the proportionality constant is f, the drag coefficient. For spheres in water, Stokes' law, this is where Stokes comes in, says f is 6 pi eta a, where eta is the solvent viscosity and a is the radius of the sphere. There's the a.
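A hedged numerical sketch combining the two pieces just written down: the Stokes-Einstein diffusion coefficient D equals kT over 6 pi eta a, and the exp(-D q squared t) decay of a concentration fluctuation. The sphere radius, temperature, viscosity, and scattering vector are example values only, not measurements.

import numpy as np

# Hedged sketch: Stokes-Einstein for a sphere, then the relaxation rate of a
# density fluctuation at scattering vector q.  All inputs are example values.
kB = 1.380649e-23          # J/K
T = 298.0                  # K
eta = 0.9e-3               # Pa*s, roughly water at room temperature
a = 50e-9                  # sphere radius, m (e.g. a smallish latex)

D = kB * T / (6 * np.pi * eta * a)        # Stokes-Einstein
q = 2.3e7                                 # 1/m, a typical light-scattering q
Gamma = D * q**2                          # decay rate of the fluctuation

print("D     =", D, "m^2/s")
print("Gamma =", Gamma, "1/s   (fluctuation ~ exp(-D q^2 t))")
print("1/e decay time =", 1.0 / Gamma, "s")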
OK, there's the Stokes-Einstein equation. Now, suppose, however, you have a solution that is not dilute. What happens? Well, what actually happens is fairly complicated. It's the section of chapter 4 we're not reading. However, there are people who will say, you can write the diffusion coefficient as kT divided by 6 pi eta times psi. And what they put in, that is a Greek letter psi, is what is called a dynamic scaling length. This equation comes from critical phenomena theory. If you take a liquid and you heat it up and you compress it, there is a temperature and a density, I guess liquid would be over here, a critical density. There is a temperature and a density at which the liquid phase and the gas phase become indistinguishable, the critical point. OK. And if you go near the critical point and measure diffusion, or diffusion of heat, or, in a binary liquid, diffusion of the two components through each other, you discover that diffusion becomes very slow. And if you look at the composition or the density here, you discover there are non-uniform regions of size psi. I'm being very indistinct here. And psi gets bigger and bigger as you get very close to the critical point. And the assertion is the reason that the diffusion slows down is that psi gets very large. And you're looking at the diffusion of a collective object whose size becomes very big near the critical point. So it doesn't move fast. Now, we are not going to talk about critical points more in the course. You should be aware of critical phenomena, because there are a bunch of materials processing methods where critical fluids turn out to be useful. In fact, one of my local carpet cleaners uses critical carbon dioxide instead of organic solvents to clean rugs. However, having said that, the point is that people have taken this result and tried to carry it over and use it to describe mutual diffusion that relaxes concentration fluctuations in polymer solutions. As we will see when we get to the right chapter, this approach is completely wrong. But you should be aware that there are people who do this. Question? Is the dynamic scaling length similar to the apparent hydrodynamic radius? Because in earlier work you have R sub h in this equation, and there it is the radius. Psi plays the same role. It's a length. And it plays the same role as the hydrodynamic radius. And so what happens, the proposal is that if you have lots of particles, or polymer coils, instead of their individual radius governing how they move, what governs how they move is this length that gives the distance over which polymer motions are correlated. And so psi replaces R sub h. OK? OK, let us shove ahead to chapter 5. We are now actually going to start discussing motions. And we are going to start by discussing what are mostly single particle motions. And we are going to start by discussing the smallest things we can find, which are solvent molecules. We will eventually push on from solvent molecules to chapter 6, and we will discuss little bits of polymer chains. We have to start someplace. So, having said we're going to discuss how solvent molecules move through a polymer solution, or other small molecules move, the first question is one you might ask. Well, suppose I take a liquid. I can measure with various techniques how fast small molecules diffuse through it. And suppose I add to the liquid polymer coils. What's going to happen to the liquid?
How is this going to affect the ability of the small molecules to diffuse? And the most simple-minded answer is that if you increase the concentration of the polymer molecules, you might expect the viscosity of the solution will go up. The resistance to motion will go up, and therefore diffusion will slow down. That's at least vaguely a reasonable sounding thing. So let's start with the simplest question. Suppose I am looking at a solvent molecule, or a sodium ion, or something diffusing through a simple liquid. That is, we're going to start, I'm going to start with something even simpler than this. I'm going to start with a simple liquid. And I am going to change the viscosity. How do I change the viscosity of a simple liquid? Ideas? Increase the concentration of the polymer. Well, we're starting with no polymer in the solution. Well, number one, we can change the temperature. Temperature? Decrease the temperature, right? Increase or decrease? Have you actually seen this effect? Suppose you have a glass full of ice water, and a glass full of water that you have just brought to a boiling point. You shake each of them. The hot water seems to slosh back and forth more. That's because there's a factor of three change in the viscosity between ice cold water and boiling water. And you notice a factor of three change in the viscosity is just barely perceptible to human senses. It's not something we're set up to see. The other thing you can do, though, which is the actual procedure we're going to look at, is you can add solutes. That is, you take, for example, water, and you stir in sugar. Or you take a hydrocarbon solvent and you add some other molecule to it. And you can change the viscosity of a liquid a great deal by looking at solutions rather than simple liquids. And we can go back to the results of Heber Green, who's an Australian who worked at the turn of the last century. The paper references are to about 1908. And what he did was to measure the electrophoretic mobility of small ions. And other people have since measured diffusion under similar conditions. And the question is, how does the solution viscosity, as we change the composition, affect the diffusion coefficient of small molecules? Yes? Can the solvent be the polymer? It would be entirely possible to take a polymer, melt it, and measure the diffusion of small molecules through a polymer melt. OK, so could there be shear thinning of the viscosity? Or thickening? Well, the substance might be shear thinning or shear thickening, but we're just going to be talking about small molecule liquids. We're going to start with the small molecule end rather than the melt end. OK? So we're going to look at small molecule liquids. And we measure the electrophoretic mobility, how fast the ions move through the solution. We measure the conductivity. Or we measure the diffusion coefficient. And we can plot this versus viscosity. And what we find is that for not very viscous solutions, 1 over the diffusion coefficient is proportional to eta to the first power. That is, if D is kT over 6 pi eta a, then 1 over D is proportional to the viscosity, says Stokes-Einstein. And if our liquids aren't too viscous, that's exactly what we find. Now, there's a limit down here because there aren't a lot of really low viscosity liquids; there aren't many liquids that are less viscous than water. You can't get down to zero. And we chug ahead. Pure water at room temperature is like 0.9 centipoise.
That's the unit of viscosity. And we chug ahead and we get out to about 5 centipoise. And at some point near 5 centipoise, there is a sudden change. And at higher viscosities, we find that D inverse is proportional to eta to about the two-thirds power. And so if you look at diffusion through a viscous liquid, it doesn't behave like this at all. There has been an extensive series of papers, people who have studied this, in different systems, using several different experimental variables. The crossover location depends a bit on the molecular system. I mean, if I start in an organic solvent and stir in things that will dissolve in it, there's no reason for 5 centipoise to be the magic number. But it's fairly small. But the crossover behavior is the same. There's a strong dependence on viscosity. And then at larger viscosities, there's a much weaker viscosity dependence. Now, one thing you might say, OK, so that's not quite what you expected. But now you know what happens if you just change the viscosity and it's a small molecule liquid. Suppose, however, this is the plot for, for example, a sodium ion or a solvent molecule, and you instead use, not a solvent molecule, but a polystyrene latex. And I shall say very briefly what a polystyrene latex is. Namely, there are procedures for synthesizing polystyrene. And they are run in water and a surfactant. Or they end up, I should say, in water and a surfactant. And you make little balls of polystyrene that have either been coated with a surfactant or are surface modified. That's a carboxylate group. And because the surface has been modified, the sphere is charged. And because the object is charged, because it's been surface modified or you stuck the surfactant in or something, these little polystyrene drops disperse in water. Of course, the polystyrene isn't water soluble. It stays as a little drop. And the trick is, if you are very clever with the synthesis, you make these things and they're all identical, to within 1% anyhow, in size. They're all very spherical. So they're a very nice object to use as a probe particle. Of course, Radko and Chrambach, when you read their paper, they're talking about these. And we will measure the diffusion coefficient using light scattering against T over eta. And there are two ways to change the viscosity. You can change the temperature, at least over a moderate range. And you can change the solvent; for example, instead of working in water, you can work in water plus glycerol. And so back in about 1979, I did this. And we showed that D is very nicely linear in T over eta. And that's true up to 1,000 centipoise. And that's exactly what you would expect from Stokes-Einstein. However, if you're alert, you'll notice for these things, which are quite large, D is just linear in T over eta out here. For the small objects above about 5 centipoise, 1 over D is linear in viscosity only at first, and then there's a crossover. So this peculiar crossover effect in small molecule behavior does not repeat itself for mesoscopic particles. OK. Now let us push out. And we will push ahead to figure 5.1. And because we have pushed ahead to figure 5.1, what I do there is to plot the diffusion coefficient versus the concentration of polymer. That is, we are now going to chug ahead. And we are going to look at how fast the solvent actually moves through a polymer solution. And we are in the book on page 116.
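Going back to the viscosity crossover for a moment, a hedged sketch of how one would pull the apparent exponent out of measurements like these: plot 1 over D against viscosity on log-log scales and fit the slope separately below and above the crossover. The numbers are synthetic, built to mimic eta to the first power at low viscosity and roughly eta to the two-thirds above a few centipoise; they are not anyone's actual data.

import numpy as np

# Hedged sketch with synthetic numbers: apparent exponent x in 1/D ~ eta^x,
# fitted separately below and above a crossover near a few centipoise.
eta_low = np.array([0.5, 1.0, 2.0, 4.0])                 # cP
eta_high = np.array([8.0, 20.0, 50.0, 200.0])            # cP
inv_D_low = 1.0 * eta_low**1.0                           # built to follow eta^1
inv_D_high = 5.0 * (eta_high / 5.0)**0.67                # ~eta^(2/3) above 5 cP

def slope(eta, inv_D):
    m, _ = np.polyfit(np.log(eta), np.log(inv_D), 1)
    return m

print("apparent exponent below the crossover:", slope(eta_low, inv_D_low))
print("apparent exponent above the crossover:", slope(eta_high, inv_D_high))
# With real data the two slopes come out near 1 and near 2/3; for the large
# latex spheres there is no crossover and the single slope stays near 1.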
And so we are looking at the motion of, for example, the solvent, or we're looking at the tracer diffusion of a small molecule in the solvent, as we add polymer and increase the viscosity. And we ask what happens. And the answer is that out to around 400 grams per liter, there is a decrease in the diffusion coefficient as I add polymer. And this decrease is approximately e to the minus some constant times concentration to the first power. And the data lies on the line. And there are bunches of people who've done the experiment. You see the same thing. And then at higher concentrations, there is a rollover, and the measurements still lie on a smooth curve. But this is a smooth curve e to the minus some other constant a times c to some power nu. And in this case, nu is greater than, oops, I don't mean zero, I mean one. That is, there is one behavior out to about 400 grams per liter, and there is another behavior at higher concentrations. At the time I wrote the book, I could describe this, but there was no good explanation for it. I wasn't sure what the explanation was, but I describe what is known when we get to the last chapter. There is a whole bunch of information; there are a number of molecular properties of polymer solutions that change vaguely at this concentration. Now for each system, the crossover is a sharp line, but it's not the same concentration in every polymer solvent mixture. And if I said you look at a bunch of these and you find the crossover someplace in the range 350 to 500 grams per liter, that's approximately right. And if you look at subsequent figures in the book, you will see more examples of the same curve. So what happens if we increase the viscosity of a polymer solution by adding polymer? We see this rather odd concentration dependence. That concentration dependence does not come very close to being the concentration dependence of the viscosity of the liquid as you add polymer; we'll get to that in a bit. So you look at this and you wonder what's happening. And there is a very recent paper by Cai et al. It's in Macromolecules 2011. It's late last year. And they discuss various conditions and concentrations. And they make an observation which can be translated as follows, once you look carefully at what's going on. We are looking at a polymer solution from the side. And if we were all the way to the melt, there are some polymer chains that are in the melt, A, B, C. And they're quite close together. These are cross sections. They're quite small. And here is a solvent molecule of some sort. I've just drawn a hexagon, so you can tell it's not a polymer. And if we're in a melt, the space between the polymer chains is small, and the solvent molecules can't get through the gaps. But if we dilute the polymer with solvent, at some concentration, there is a distance, which I will call psi. I'm not a very good artist. I should not try drawing Greek letters. Psi, which is a distance between polymer coils. And at some point, this distance, compared with the typical size of the solvent molecule, is such that the solvent can slip between polymer chains consistently. And so the solvent can flow between the coils as though it's a liquid, as opposed to isolated molecules that have to look for holes in some sense. And if you ask, how big is this concentration? Well, realistically speaking, the polymer coil cross sections are considerably larger than the size of an average solvent molecule.
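A hedged sketch of the functional form just described for solvent diffusion: a simple exponential in concentration out to roughly 400 grams per liter, rolling over to exp(-a c to the nu) with nu greater than one at higher concentration. The coefficients are placeholders chosen only so the two pieces meet at the crossover; they are not fitted values from the book.

import numpy as np

# Hedged sketch of the described two-regime form for D_solvent(c)/D_solvent(0).
# Coefficients are placeholders, chosen only so the two branches join at c*.
c_star = 400.0          # g/L, approximate crossover
alpha = 2.0e-3          # 1/(g/L), low-concentration exponential slope
nu = 1.6                # high-concentration stretching exponent (> 1)
a = alpha * c_star**(1.0 - nu)   # forces the two exponents to match at c_star

def D_ratio(c):
    c = np.asarray(c, dtype=float)
    low = np.exp(-alpha * c)
    high = np.exp(-a * c**nu)
    return np.where(c <= c_star, low, high)

for c in (50, 200, 400, 600, 900):
    print(c, "g/L  ->  D/D0 =", round(float(D_ratio(c)), 4))
# Below c* the semilog plot is a straight line; above c* the curve drops
# faster than a simple exponential, which is the rollover in the figure.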
But if you estimate at what sort of concentration this transition from no space to space occurs, it's something like the 350 or 500 grams per liter we're talking about. Now, it's not exactly that, but it's not obvious that the criterion should be that this is exactly the same size as that. There are molecular arguments involved. And the details, this is basically a back of the envelope estimate. But the answer is that that change in solvent behavior appears to take place when the gaps between polymer coils, seen from the side, are no longer large enough on the average for solvent molecules to fit through the gaps between the polymer chains. That's a very approximate statement, but it appears to be roughly correct in terms of giving the correct concentration for the crossover I have just described. So in any event, that is the motion of small molecules, the translational motion of small molecules through polymer solutions. And there are lots of measurements of the same thing. And in most systems, you see the same effect. Now, if you make the molecule only slightly larger, figure 5.7 shows fluorescein dye. It's a small molecule. Its size is something like half a nanometer. And it's diffusing through hydroxypropyl cellulose. So here is the diffusion coefficient of the fluorescein. Here is the concentration of hydroxypropyl cellulose. This is a semi-log plot on which you see a straight line. And the diffusion coefficient goes as e to the minus alpha c to the nu. This is the polymer concentration, except it's c to the first in this case. And you just see an exponential drop off of the diffusion coefficient. Now, there's one interesting feature of hydroxypropyl cellulose, hang on a second. Hydroxypropyl cellulose has a liquid crystal phase transition about halfway across that graph. The hydroxypropyl cellulose is a rigid polymer, not quite as rigid as these things, though the short pieces are. And there is a concentration at which, if you concentrate them enough, the polymers want to line up, or at least the segments line up, parallel to each other so they pack more closely against each other. The fluorescein diffusing through the HPC doesn't notice that this has occurred. It just diffuses. There's just a straight line straight through the phase transition. Question? The y-axis is Ds divided by Ds0; is that a relative diffusion coefficient? Correct. What has been done in figure 5.7, and I do this fairly often, is we take the diffusion coefficient and divide by the diffusion coefficient of the fluorescein in pure solvent. And therefore, up at the top, the diffusion coefficient is 1. Now, if you didn't do the division, you'd simply have different numbers on the side scale. But this is just division by a constant. And if you're on a semi-log plot, when you divide by a constant, you just change the labels on the side axis. You don't change anything else. So that is diffusion in a polymer solution. Except there is this one peculiar feature. It is possible to do measurements in all sorts of solvents. And one of the solvents that you could measure diffusion in is Aroclor 1248, which is a polychlorinated biphenyl. In terms of chemical safety issues, it is a material you treat with respect. But it's been used industrially on a large scale, although there have been issues when people haven't been careful. Not this particular one, but just polychlorinated biphenyls in general. That's not BPA; this would be a PCB. However, the interesting feature is, suppose we put something, a probe, a small molecule into it.
And we measure the diffusion coefficient of the small molecule as a function of concentration. And we are in this particular material, which has been used very systematically, experimentally, in studies of this sort. It's a very viscous liquid. And there are a number of polymers you can add where you get exactly the effect I've talked about, namely the diffusion coefficient of the small molecule falls off exponentially with increasing concentration. However, you can also find a polymer where you add the polymer, and absolutely nothing happens. And most peculiarly, if you add polybutadiene, you add the polymer, and the diffusion coefficient of the probe goes up instead of down. And what we can say on this is that the polymer appears to be modifying the behavior of the solvent. Well, that's not unreasonable. After all, we say we have a solution. The solvent modifies the behavior of the polymer. The polymer in dilute solution doesn't behave the way it did in the melt. And therefore, why should we be surprised that the polymer is changing the solvent at the same time? And in a certain sense, you shouldn't. But a lot of people didn't think of this at first, and it was nice work by, let's say, Lodge, Amelar, and various other people who sorted out the fact that you add polymer to a solution, and it perturbs how the solvent behaves. There is a somewhat more dramatic demonstration of this when we get to section 5.4. And in section 5.4, we look at solvent rotation. How do you look at rotation? Well, you take a solvent molecule that is not a sphere and that depolarizes scattered light. And then you can do measurements to determine the time scale on which the solvent molecule, excuse me, here's the solvent molecule, you can determine the time scale on which it changes the direction it faces. And if you do this in a simple liquid, you get some very small numbers, not picoseconds, but nanoseconds anyhow. But if you add polymer, you discover that there are two populations of solvent molecules, one which relaxes very quickly, and one of which, at least in part, rotates very slowly. Now, you have to be a little careful when I say fast and slow. You don't actually know that it's some molecules that rotate fast and some molecules that rotate slowly. It might be that you have a molecule, and for example, it can rotate rapidly around this axis, at least at the moment, but it's inhibited from rotating around this axis. As you increase the polymer concentration, the fast-moving molecules tend to disappear, and the slow-moving molecules dominate. And the inference is the polymer, at least the typical polymer, is inhibiting the rotation of molecules. And well, how could it do that? And one answer is, here the blackboard, or rather the tray that the erasers are resting on, represents the polymer chain. And here is a solvent molecule near the polymer chain. Or here's the blackboard; the blackboard surface represents a polymer chain. Well, this molecule is free to rotate like this, but it can't rotate into the plane of the board any faster than the blackboard moves. Of course, the polymer does move, but it moves quite slowly relative to solvent speed. And therefore, the polymer is doing something, somehow, to affect how fast the local solvent moves. The question, OK, so what is the effective range of a polymer coil? Is it simply affecting the whole solution uniformly? Is it only affecting molecules close to the polymer chain?
Since the mechanism isn't specified yet, I just hypothesized a mechanism so you could see what might be happening. The question you might ask is, well, what is going on? And the experimental data I just talked to you about on Aroclor leads to a solution. How do you do the experiment? Well, it's a very clever experiment due to Krahn and Lodge. And what Krahn and Lodge said was, we are fortunate to have a couple of polymers that influence solvent motion in opposite ways. One speeds solvent motion up, the other slows solvent motion down. So we can make block copolymers. And one choice is half of the molecule is A, half of the molecule is B. And another choice is, we'll just look at a mixture of A and B, that's as nonuniform as you can get. And the third choice is, we will have random copolymerization, and we will have A and B right next to each other. Now, if the range is extremely long, all of these will do about the same thing, because each solvent molecule will be close enough to some A's and some B's. If the range is very short, these will tend to cancel each other, and these will not cancel each other; there will be a fast component and a slow component. The idea here is that the ones near an A will be fast, and the ones near a B will be slow. And by looking at these, we can infer the range over which the polymer affects molecular motion. And the answer, they do a detailed analysis, is 1 to 2 solvent diameters. So the solvent molecules very near a chain have their rotational motion perturbed, and the solvent molecules that are way out from a polymer chain, yeah, way out, three or four molecular diameters, 20 angstroms, that's way out enough, are not perturbed. And that was the analysis of Lodge and Krahn on rotational diffusion. It's a very clever experiment, and you can read about it yourselves. I see, however, we are out of time. We will continue this discussion next Monday, when the first writing project is due. And you each have a description of what you should have done. A reasonable thing to do is to have some tables at the end or someplace in which you mention what each of your searches found. And then three to five pages written on papers you found that in some sense use electrophoresis to give measurements that, like the Barron and Radko-Chrambach papers, could be used to study what polymers do in solution. We're done.
Lecture 6 - small-molecule motion. George Phillies lectures on polymer dynamics based on his book "Phenomenology of Polymer Solution Dynamics".
10.5446/16226 (DOI)
Classes in polymer dynamics based on George Phillies' book Phenomenology of Polymer Solution Dynamics, Cambridge University Press, 2011. And today this lecture is lecture 26, non-linear viscoelastic phenomena. I'm Professor Phillies, and today we're going to continue our discussion of polymer dynamics. We're going to talk about non-linear effects in polymer solutions. If we ask what sort of effects are there, there are three sorts of things that are covered by the general notion of non-linear effects. There are issues involving the pressure tensor, which describes the forces within the liquid. There are issues involving memory, namely the polymer solution has some rather long relaxation times that you can see experimentally. And then there are what I will somewhat vaguely describe as modern methods, things that can now be done that were not done historically. So the starting point of the discussion is, what are the forces within a fluid? So we imagine we have here a small cube of fluid; it does not have to be a cube. I can make one of the thicknesses extremely small relative to the other two. And we have coordinate axes. So here is x and here is y and here is z. And we ask what are the forces on the liquid? The general statement is we have this object, and across each surface, for example this surface, there is a force, but the force across this surface has a component in the x direction and a component in the y direction and a component in the z direction. The component in the z direction is a compressional, or if you like extensional, force across the fluid. The sideways forces are shearing forces. How can we represent this force? Well, what we do is we introduce an object known as the pressure tensor. And the pressure tensor has components pxx, pxy, pxz, pyx, pyy, pyz, pzx, pzy, pzz. And the pressure tensor describes these forces; here is the way it describes the forces. First of all, this, we said, is a tensor. It's a three by three matrix; it's equal to that. And what we do to ask what those forces are in terms of the pressure tensor is we dot the pressure tensor with a unit vector perpendicular to a face. So for example, here I have i hat, the unit vector perpendicular to the yz plane, which is to say pointing along x. And so, for example, i hat dot P means I dot that unit vector into this matrix. There is slightly mixed mathematical symbolism here. This is a three by three tensor; we're dotting the vector into it. And what do we get out? We get out the force, and it's a vector. This is the vector force across the face perpendicular to the x axis. That's not the same as the x component of the force. And that force is pxx i hat plus pxy j hat plus pxz k hat. So there are forces across that surface, and the forces across that surface have an x, a y, and a z component. For those of you who want to see this in unit vector form, you would write the pressure tensor in dyadic form, pxx i hat i hat, that's the outer product of two unit vectors, plus pxy i hat j hat plus pxz i hat k hat, and so on, and the force vector across the x face is i hat dotted into that. And there you have the pressure tensor. Now you might say that you vaguely recall having heard about pressure someplace back in the sixth grade, and you don't remember the pressure being a tensor. And the answer is that if you are in a simple liquid that isn't doing anything terribly interesting, the pressure tensor is diagonal: p, 0, 0; 0, p, 0; 0, 0, p. Better close the brackets. That is, there is a pressure, it is a single number.
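Before going on, a hedged NumPy sketch of the bookkeeping so far: represent the pressure tensor as a 3 by 3 array, dot a face normal into it to get the force per unit area across that face, and note that the isotropic case gives a purely normal force. The numbers in the sheared-fluid example are arbitrary illustration values, not material data.

import numpy as np

# Hedged sketch: the force per unit area across a face with unit normal n_hat
# is n_hat dotted into the pressure tensor.  Example numbers only.
p0 = 1.0e5                                  # Pa, an ordinary isotropic pressure
P_simple = p0 * np.eye(3)                   # simple liquid: diag(p, p, p)

# A made-up sheared-fluid tensor: unequal diagonal entries and a symmetric
# off-diagonal shear stress p_xy = p_yx.
P_sheared = np.array([[1.02e5, 2.0e3, 0.0],
                      [2.0e3,  1.00e5, 0.0],
                      [0.0,    0.0,    0.99e5]])

i_hat = np.array([1.0, 0.0, 0.0])           # normal to the face perpendicular to x
print("simple liquid, force/area across x face:", i_hat @ P_simple)
print("sheared fluid, force/area across x face:", i_hat @ P_sheared)
print("tensor symmetric (p_xy == p_yx)?", np.allclose(P_sheared, P_sheared.T))
# For the simple liquid the traction is p0 along x with no sideways part; for
# the sheared fluid there is also a shearing component along y.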
It gives the force across the x, the y, and the z faces of the cube, and they're the same on all sides. Now there's one additional issue that gets skipped over, and the book might confuse people on this, having read it one more time. And the issue is as follows. Here's our cube. Okay. And we said the force across the x face is given by this object. And you might legitimately ask, is it this x face or is it the x face on the other side? And the answer is, if it's a very small cube, then to a first approximation the force on both surfaces is the same. Now that's clearly imprecise, because if we have a pressure, we have the force across the surface, there's no necessary obligation that the force cannot be changing as position goes on. That is, we could have, for example, a d f x d x. And the question is how big are those forces? And the answer is there are two ways that the differences between the two faces can be significant. The first way the forces can be significant is if the object, the volume, is being accelerated very rapidly. Then, if we have, say, an acceleration in the x direction, the net of all of the x direction forces across all six faces had better not be zero; it had better be m a. However, for the most part, we talk about inertia-less systems, that is, systems that are so heavily damped that we do not have to worry about inertia. And in that case, the sum of the face forces has to be zero. Furthermore, in saying the sum of the forces has to be zero, I can always take the volume to approach being a sheet, yes? And if the volume has approached being a sheet, the thickness becomes very small, and the area of the side faces, because now we have a volume like this, the area of the side faces becomes very small relative to the top and the bottom. And therefore, for this very thin volume, the top and bottom forces are almost all the force. And if that force has to sum to zero, the forces here and here have to be the same. That's an approximation saying there's no inertia. Similarly, in addition to there being no inertia, you also want to say that the body forces have to be small. That is, if this is a liquid someplace, there's a force of gravity on it, presumably, and because there is a force of gravity on it, it has a weight, and the force downwards here and the force upwards there had better be arranged to match the weight, because the thing doesn't fall. And therefore, there will be a pressure gradient as you go downward, but under normal conditions, that's also negligible. And you should realize that was just an approximation; it's not exact. Furthermore, it must be the case, since the volume is very small, that if you're saying inertia is negligible, then the moment of inertia should also be treated as negligible and the angular momentum stored inside this object should be neglected. That's again an approximation. However, if the angular momentum is to be negligible, let's look down on the box from above. We're now looking down on the box from above, and we will look at the angular momentum relative to this center. Well, in order for the angular momentum to be negligible, the torque on the object must be negligible, and therefore, the torque this way across this surface and the torque this way across this surface had better add to zero. That's again an approximation.
However, within that approximation, we're saying the torques add to zero. This is saying, well, let's see, this is the x-axis, this is the y-axis; this is the force in the x direction across the plane perpendicular to y, that's p y x, and this is the force in the y direction across the plane perpendicular to x, that's p x y. Those two torques had better cancel, and therefore p x y and p y x should match. Now that's fine as long as the angular momentum is negligible, and in hydrodynamic problems, that's typically a safe assumption. One can go far astray into other areas, and then you had better worry about whether it's true or not, but the assertion is the matrix must therefore be symmetric, and our simple-liquid pressure tensor, the diagonal one with p on the diagonal, is symmetric. And we now ask what happens to the pressure tensor if we make life a little more complicated. We now introduce a convention, and there actually is a real logical organization here, to keep people's attention on the important things; the convention is the following. We have a fluid velocity, and the fluid velocity is in the x direction. If we have a shear kappa, or if you prefer gamma dot, a rate of displacement, the direction in which the fluid velocity is changing is the y direction. So I will draw a picture of this. Here is a top plate, here is a bottom plate. The bottom plate is stationary. The top plate has some velocity v sub x. The assumption, and we're going to get to the validity of this assumption in a bit, is that if I plot v sub x versus height, you see something like this: v sub x at some distance y between the two plates, the two plates being at zero and L, is going to be y times kappa. Kappa is also written gamma dot, that is, a shear rate, and I'll explain why at some point. Kappa is the plate velocity divided by L, where L is the distance between the two plates. Now if you look, the thing is moving this way, the gradient is this way, and there is a neutral direction z. Okay, there's a neutral direction. So we now take the liquid and we put a shear on it. And several things happen. The one we're going to talk about first looks only at the diagonal components of the pressure tensor, p x x, p y y, p z z. And in an equilibrium fluid, those components are all the same. In a polymer liquid, when you are shearing, they become unequal. Unequal, oh yes, they're not equal to each other. And therefore, we can define something N one, which is p x x minus p y y, and N two, which is p y y minus p z z. And having introduced the two N's, you don't need this anymore. This is just the direction in which we're getting a velocity shear. They have names, namely N one and N two are the first and second normal stress differences. Now the description of these things as being normal stress differences means that the pressure the liquid is exerting is not the same with respect to the three principal axes x, y, and z. If you go through the literature and go back far enough, you can find people who claim N two is identically equal to zero, and the best that can be said for that assertion is that it is old and it is known to be wrong. The two normal stress differences are not zero. They are zero if there's no shear. But if we increase the shear, the normal stresses become unequal. If you think about this, there's a symmetry issue, and the symmetry issue, let's draw this again. Here's x, here's y, here is the velocity at some height. So we have some kappa, which is the velocity gradient.
Yes, and as a result of the velocity gradient, the pressure this way and the pressure out of the plane become unequal. Yes, now suppose I reverse the direction of the shear. I have reversed the direction of the shear, and p y y and p z z, z is this way, y is up, are unequal to each other. I've reversed the direction of the shear. However, if you think about the symmetry for a bit, should we reverse the direction of the pressure difference? Well, probably not, because if you walk around the room behind the wall and look in, you're going to be seeing the picture running backwards. That is, the system should have reflection invariance, and you can confirm that, because if we imagine this blackboard is transparent, yes, people sitting on the far side of the wall, looking in, see the mirror image picture, and they had better agree as to whether p y y and p z z are relatively larger or smaller than each other. And therefore, the argument works for both of these. Both of these should depend on kappa squared. Now, there's another set of forces, which I will simply drop in briefly, which are the terms that come in because there's a viscosity. And the point that there is a viscosity is, let's see, the liquid up here is moving, the liquid down there is stationary. So if we imagine this plane perpendicular to the y-axis, there's a force across it in the x direction, because the liquid is viscous. And therefore, well, this is a force in the x direction across the plane perpendicular to y, that's p y x; and therefore p y x, and since the two of them are equal, p x y, are not zero and are proportional to kappa to the first power. Those go to zero if there's no shear. What does the mirror argument say? It says that these should be kappa to the first, yes. If the fluid is going that way, the dragging force is also in that direction. If I reverse the direction of the fluid flow, say by standing on the other side of the blackboard and looking at it, the force also reverses direction. And therefore, these two terms are kappa to the first. Okay, so that is the stress tensor, and those are the two normal stress differences. I am oversimplifying appreciably. However, from the standpoint of what we're going to do with these, I have said enough. You may say, okay, we have this solution. And because it has polymers in it, it has non-zero normal stress differences. What are the implications of this? And the implications of this are that polymer solutions have some fairly bizarre flow properties. They also have flow properties that involve not just the normal stress differences, but also involve, for example, the fact that the system has memory. And I am now going to show what some of these peculiar flow properties are, non-Newtonian flow properties. The simplest one is called rod climbing. So, let us imagine a simple experiment. And the simple experiment is we have our mixing bowl full of cream. And I put in a stirring bar, and I start stirring, and we are going to turn the cream into whipped cream. Well-defined experiment. And if you do this, what you observe, and this would be just as true if you had water, in some ways it's even more true with water, is that the fluid moves away from the stirrer towards the edge of the container. This is certainly the normal expected behavior. In a polymer solution, you see exactly the opposite behavior. You see the liquid climbs up the rod. Now, why is the liquid climbing up the rod? I am not showing the math.
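Going back to the symmetry argument for a moment, a hedged sketch of it in code: build a toy stress tensor whose shear stress is proportional to kappa and whose normal stress differences are proportional to kappa squared, then check that reversing the shear flips the sign of p x y but leaves N1 and N2 alone. The coefficients are arbitrary illustration values, not material constants.

import numpy as np

# Hedged sketch of the mirror-symmetry argument: p_xy ~ kappa, N1 and N2 ~ kappa^2.
# eta_s, psi1, psi2 are arbitrary toy coefficients, not real material data.
def toy_stress(kappa, p0=1.0e5, eta_s=10.0, psi1=4.0, psi2=-0.8):
    pxy = eta_s * kappa                 # shear stress, odd in kappa
    n1 = psi1 * kappa**2                # first normal stress difference, even
    n2 = psi2 * kappa**2                # second normal stress difference, even
    pzz = p0
    pyy = pzz + n2                      # N2 = pyy - pzz
    pxx = pyy + n1                      # N1 = pxx - pyy
    return np.array([[pxx, pxy, 0.0],
                     [pxy, pyy, 0.0],
                     [0.0, 0.0, pzz]])

def normal_stress_differences(P):
    return P[0, 0] - P[1, 1], P[1, 1] - P[2, 2]

for kappa in (+50.0, -50.0):            # reverse the direction of the shear
    P = toy_stress(kappa)
    print("kappa =", kappa, " p_xy =", P[0, 1],
          " (N1, N2) =", normal_stress_differences(P))
# p_xy changes sign with the shear direction; N1 and N2 do not, exactly as the
# reflection argument requires.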
Okay, first of all, the answer is that we're shearing the liquid because we're spinning, and because we are shearing the liquid, kappa is not zero, and the normal stress differences are not zero. And, gee, we want to arrange things so that the top surface of the fluid is a surface of constant pressure, and to have a surface of constant pressure, you have to pile the liquid up, because the normal stress differences become non-zero. Well, that's rod climbing. I haven't explained the math details at all. The important issue is that rod climbing is actually a fairly demanding test on any model of polymer dynamics. There is a nice paper by Hassager. It's referenced in the book, and the point of the reference is this: you try to compare this prediction of rod climbing against models of polymer dynamics, and you discover that some of them don't predict rod climbing. That's a serious flaw. Let us consider a few other exotic phenomena. This one is a little more complicated. So here is a pipe, and we are pushing a polymer solution, or polymer melt, out the end of the pipe. And if you've ever done this, it's like a lawn hose, basically. If you do this with water, the water flows out, and it heads off in more or less straight lines. If you do this with a polymer melt, you get extrudate swell. And extrudate swell refers to the issue: the fluid comes out of the pipe, and it bulges and gets wider, and because it's getting wider, it also, of course, must be slowing down. Yes? Well, that's extrudate swell. That's certainly not something you see in a conventional liquid. Okay, there is a traditional trough experiment. So we have a tilted flat object with walls, and we send the liquid running down it. And if you do this with water, the liquid surface is flat across the width of the system. If you do this with a polymer solution, you discover there's a bulge, which I have not necessarily drawn very well, and the fluid in the middle of the trough is higher than the fluid at the edges. The nice book by Curtiss and Bird, the polymer book, collects some dozens of these different strange effects. Let's consider another one. Again, some of these are not easily understood as just saying, oh, normal stress differences, that's this normal stress, this is that normal stress. On the other hand, there's also drag reduction. And if we say there is drag reduction, what we mean is we are pushing liquid down a pipe. And if it's going fast, there tends to be turbulence at corners, and this increases the resistance to flow. So what we do is we drop into the liquid trace quantities, part per million quantities, of high molecular weight polymer. And suddenly, the drag reduces a great deal. There have been military experiments where the idea was you inject fluid out of the prow of the ship, and this will reduce drag, because you eliminate turbulence. I haven't heard that this was put into action, but it was an idea that someone thought was worth spending money on. Why is there drag reduction? Well, that may be related to another experiment. Here we have a very large bath full of liquid, and we open the drain hole in the bottom. This could, for example, be a large bathtub. And having put the liquid in, we then let it drain, and what you observe after a bit is that there's angular momentum in the tub.
The angular momentum cannot easily escape down the drain, and therefore, especially if you got it going a bit, you see a vortex forming, and you see entrained air bubbles, and you have this interesting thing right where it's draining. Yes? Drop some polymer in, mix, and this effect disappears completely. You see no vortex, and you see no entrainment of air bubbles. So that's, again, gee, we put in a polymer, and the basic qualitative flow properties just changed appreciably. Okay. Oh yes. There's a classical phenomenon with a famous photograph, and the famous photograph is due to Lodge, and this is Lodge Sr., the father of the editor of Macromolecules, and what he does is as follows. He has a beaker, and we are pouring a polymer solution out of the beaker. Yes? And, by the way, it's flowing very slowly. This is a very thick solution. And now, we go in with a pair of scissors, and we cut the liquid, which you can do. You could not do this with liquid water, but this is flowing quite slowly, and you can cut it. And what happens? Well, the piece down here goes to wherever it's going, and the piece up here retracts. The adjoining pieces of the liquid behave as though they're connected to each other by little springs. That is the storage modulus at work. This phenomenon is known as elastic recoil. There is, it's actually sort of the same phenomenon, here we have a beaker, and we would like to get the polymer solution out of it, to a container down here. So we will use a siphon, except we're going to cheat and use a tubeless siphon. Now, for ordinary water, you need the tube, but what we do is we take this liquid, which drags and stretches, it has an extensional viscosity, and, after carefully repositioning the container so it will catch the spill, we pull the liquid up over the lip, and it flows over the top, and down like this, into the lower container, and it keeps being siphoned out even though it is not in contact with a tube. There's nothing there, and that works because the liquid behaves as though it's connected with itself over extended regions. Curiously though, the physics is entirely different, but you can get exactly the same phenomenon using superfluid liquid helium. It will creep up the walls, creep down, and flow to the lower container without artificial intervention. Well, other than the artificial intervention you need to keep the helium superfluid on this planet. So there is elastic recoil, and this object is the tubeless siphon. Now, the issue here is there are a lot of odd flow phenomena which you would like to explain. The challenge is that some of these are a little difficult to quantitate. So for example, I can say I cut it and it retracts, but what did you want to measure in there? How far it retracted? How fast it retracted? What happened as the scissors went through? These are phenomena that are important, that are non-linear. Some of them are hard to quantitate, and therefore are hard to drop trivially into a theoretical model. But that doesn't mean they aren't real. It just means they're a little different from the sort of phenomena we've been talking about so far in the book. And that is it for peculiar flow phenomena. And we will now push ahead, and we will talk about memory effects. There are a bunch of these. The simplest: I'm going to sketch this as though we simply have two infinitely long plates.
And one is moving with respect to the other, and therefore in the space between there is a shear rate gamma dot. And we run the system, and we run it for a long time, and we measure the force on the lower plate, the force on the upper plate. We have to put forces on the two plates to get them to move with respect to each other. And the force is determined by the shear rate; it's also determined by the viscosity; the sensible thing to talk about is the force per unit area, of course. Now we go in and we make a sudden change in the shear rate. And so if I plot, here's the time axis, I plot gamma dot: it comes along and it now jags up. So there is our sudden change in the shear rate. I've increased the shear rate. And now, having increased the shear rate, I ask what happens to the stress? There's a force per unit area that we need to do all this. And we ask what does the stress do, PXY, PYX, if we do this? And the answer is nothing at all happens until we change the shear rate. And then the stress pops up and comes down again. And there is in this region, you notice it's gone up, this is overshoot. That is, if we had a polymer solution that was habituated to being sheared at a particular rate, and we then suddenly increase the shear rate, we eventually get back to a steady state. But before we get back to the steady state, we have this transient known as overshoot. The Curtiss and Bird book suggests that sometimes you can get oscillations. However, you can also propose several experimental artifacts related to the inertia of the apparatus, et cetera, that would give the illusion of those oscillations even though they were not occurring. That is, it would be a machine artifact, not a physical effect. What if you go in the other direction? Well, here's the time axis again. Here is our shear rate gamma dot. And we slow things down. And we ask what forces are involved to do this? And the answer is the force comes along, and it does that, and it recovers. And we have undershoot, that is. Once again, there is a memory effect. The system takes a while to accommodate to what is going on around it. The effects that I am describing, this is the stress, can also be seen if you measure the first normal stress difference. That is, you change the shear rate, the polymer solution accommodates, but it does not accommodate instantaneously. If you want, you can also do a more complicated experiment. But I'm simply going to note the more complicated experiment. Namely, you can take the shear rate, and you will say, we will have the two plates, and we will have a gamma dot, because we're oscillating the plates back and forth, which is some A cosine omega t. It's oscillating, yes. And that is a dvx dy. And then you can add to it a second shear, which is a constant. And the second constant part could involve another motion in the x direction, parallel, or it could involve a motion out of the plane of the board perpendicular to the first shear. And now you have a liquid that's being subject to a fairly complicated set of displacements. Now you might say, could you also do a shear rate this way? Now that's compression, and the liquid would have to escape out to the sides, and that would get a little messier. So you want to do this so that the volume of fluid doesn't feel some obligation to flee. OK, we have now described what goes on in these systems. So we will now push on to a different set of experiments. And these are experiments where we set and control the shear rate.
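Before turning to those rate-controlled experiments, here is a minimal numerical sketch of the memory idea just described, using a single-mode linear Maxwell element. This is my own choice of illustration, not a model endorsed in the lecture, and the parameter values are invented; a linear Maxwell element captures the delayed, exponential approach of the stress to its new steady state after a sudden change in shear rate, though it does not produce the overshoot itself, which requires a nonlinear constitutive model.

```python
import numpy as np

# Single-mode linear Maxwell element:  lambda_ * dsigma/dt + sigma = eta * gammadot(t).
# All numbers are illustrative, not taken from the lecture or the book.
eta, lambda_ = 10.0, 1.0                      # viscosity (Pa s), relaxation time (s)
gammadot = lambda t: 1.0 if t < 5.0 else 2.0  # sudden jump in shear rate at t = 5 s

t = np.linspace(0.0, 15.0, 3001)
dt = t[1] - t[0]
sigma = np.zeros_like(t)
for i in range(1, t.size):
    # explicit Euler step for the constitutive equation
    sigma[i] = sigma[i - 1] + dt * (eta * gammadot(t[i - 1]) - sigma[i - 1]) / lambda_

# Before the jump the stress has relaxed to ~eta*gammadot = 10 Pa; after the jump it
# approaches 20 Pa with time constant lambda_, i.e. the response lags the shear rate.
print(sigma[t < 5.0][-1], sigma[-1])
```

The only point of the sketch is that the stress lags the imposed shear rate by a time of order the relaxation time, which is the simplest possible meaning of "the system has memory."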
We move one plate with respect to the other, either at a constant speed or an oscillating speed. There are several actual, why don't I briefly mention the sort of experimental configurations that do this. It's very nice to say mathematically, we have two infinitely large plates. However, they wouldn't fit into the laboratory, and they'd strain the budget. So there are several traditional arrangements which you do instead. And one arrangement is to say, this is a cylinder coming out of the board toward you. This is another cylinder sitting here. And I will rotate one cylinder with respect to the other. Now the difficulty with that is that we're doing a constant angular rotation. As you go out on the radius, the shear rate is obliged to change. Or there's some other complication. Another answer is say, we will have a small disk here. We will have another disk here. The liquid goes out forever, at least far enough. The two disks don't. And we rotate one disk with respect to the other. And the distance between the two disks is a constant. Now that also has a problem. Because if I look on the surface here, v equals omega r, yes, it's rotating. And that means the top surface here is moving much more quickly than the top surface here. And if I calculate dv, dy, there's a velocity, say, out of the plane of the board. And if I calculate the velocity gradient, it's proportional to r. Because the distances are the same, and the velocities are larger. A way of avoiding this is called the cone and plate instrument. And the notion of the cone and plate instrument is that we have a conical off plate like this. We have a flat plate like that. We rotate one plate with respect to the other at omega. And vx, that's the velocity perpendicular to the board, is proportional to omega r, yes. However, l, the separation between the two, is also proportional to r. And therefore, the ratio vx over l, there's our gamma dot, is proportional to omega and is independent from r. In order to avoid complications where joining pieces of liquid are moving at different spaces, you keep the cone angle very shallow. So that's how you do the experiment. And then if you want, they're more complicated alternatives, pipe flow, for example. OK, but now we're going to talk about a somewhat different experiment. So here is our bottom plate. Here is our bottom plate. Here is our top plate. And at t equals 0, they're like this. And then we displace the top plate with respect to the bottom plate through a distance gamma. Yes? Gamma is the strain. And what we do is an experiment in which I plot gamma versus t. Well, the simplest one, we start off with no strain. And then we suddenly displace one plate with respect to the other. Well-defined experiment, not trivial to do, because you have to displace the top plate with respect to the bottom one very quickly, get it up to a high speed to move it where you want it to be, and then stop it again in such a way it doesn't oscillate or other bizarre things. But it can be done. And we ask, what is the response? What is the force per unit area appropriately normalized with respect to gamma? And the answer is the force on the two plates due to the liquid is 0 until we do the strain displacement. And then the force pops up and at later times, decays downwards. This shape is g of t and gamma. It's a function of time because if we wait long enough, the force disappears. It's also a function of gamma because this is not a linear system. And therefore, we can measure g of t and gamma. 
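Backing up a moment to the cone-and-plate geometry: the argument just given can be written out in two lines. This is just the standard small-angle bookkeeping, restated here as a sanity check rather than anything quoted from the book, writing theta_0 for the (small) cone angle.

```latex
% Cone-and-plate geometry: both the local plate speed and the local gap grow
% linearly with radius r, so their ratio -- the shear rate -- is the same everywhere.
\begin{aligned}
v_x(r) &= \omega\, r,\\
\ell(r) &= r\tan\theta_0 \approx r\,\theta_0 \quad (\theta_0\ \text{small}),\\
\dot\gamma &= \frac{v_x(r)}{\ell(r)} \approx \frac{\omega}{\theta_0},
  \qquad \text{independent of } r.
\end{aligned}
```

With the geometry settled, back to the step-strain response G(t, gamma) and what it looks like.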
What shape does it have? Well, approximately speaking, say it's a sum of two exponentials. You should realize, though, if you say it's the sum of two exponentials, two exponentials will fit a lot of things, even if that isn't quite perfectly right. So you shouldn't really be insistent that that's exactly the shape. If we go to large concentration and molecular weight, you shift from something that looks like this to something that has gotten bigger and has an initial very steep drop off. And therefore, you change the concentration in the molecular weight and something happens. And now we have the question of what behavior g of t and gamma has. You will notice I give several literature references, but I decided to stop the book without doing a detailed study of nonlinear response. There were several reasons for this. First of all, I was running up against a page template. I was also running against a time limit because I had to get the book to the publisher. I was also running up into something of an exhaustion limit because this had been going on for five years. And at some point, you want to bring projects to a stop. It was also very clear that the literature on this topic was very much in a state of flux, as opposed to most of the rest of what we have discussed, where if I had done the literature search and did it cut off a year or two later or a year or two earlier, yeah, there'd be a few lesser more figures. But there wouldn't be anything very new there. That is, there might be some additional measurements, but people would agree on what they were and what they said. The nonlinear stuff is currently in a state of instability, if you will forgive very bad pun. And therefore, I decided to stop. I shall, however, illustrate we're going to discuss a g of t and gamma. And we are simply going to note it's in the book. An interesting bit. Suppose you plot g of t and gamma. That is, the stress that's developed, if you do a sudden strain of size gamma, and we ask what the force per unit area is as time goes on. Well, there is an analysis which is, as far as I know, everyone agrees is correct. In fact, both of these, everyone agrees is correct, but they're different. And what in no way, collaborators do, there's a nice paper. They show if you look out at large times, and you take g of t and gamma over some function of gamma. That is, if you multiply the curves by some constant. At large time, well, small time curves may be different, but at large time, if you multiply the curves by gamma, the curves all lie on top of each other. And therefore, at long time, the assertion is a common relaxation process which some people claim they know. And I don't believe there's any disagreement that if you do the math and math, the curve moving around this way, you do see this. However, there is also the nice set of experiments by Topadia. And the analysis by Topadia shows what happens if you start with the measurements themselves and look at the measurements themselves and don't over-process them before you look at them. Because what they did is they plot not this is g of t gamma over this function of gamma, but what they did was to look at g of t gamma. And implicitly, that's the same as saying they're going to look and ask what f of gamma is. And what they demonstrate is f of gamma is not monotonic in gamma. That is, yes, you can make the curves agree with each other, but this division factor you have is a bit more complicated than, say, one over gamma. 
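To make the superposition statement concrete, here is a toy numerical sketch. The two-exponential G(t) and the damping factor h(gamma) below are entirely made up for illustration; the only point is what "divide each curve by a gamma-dependent factor and the long-time curves lie on top of each other" means operationally, and that the remaining physics is then in the shape of that factor itself, which is the quantity the analysis just described looks at directly.

```python
import numpy as np

# Toy illustration of long-time superposition of G(t, gamma).
# G_linear(t) and h(gamma) are invented for illustration only.
def G_linear(t):
    return 80.0 * np.exp(-t / 0.5) + 20.0 * np.exp(-t / 5.0)   # Pa

def h(gamma):
    return 1.0 / (1.0 + gamma ** 2 / 5.0)   # hypothetical damping factor f(gamma)

t = np.logspace(-1, 1.5, 200)
strains = [0.5, 1.0, 2.0, 4.0]
curves = {g: h(g) * G_linear(t) for g in strains}   # pretend measured G(t, gamma)

# Divide each curve by its f(gamma); the long-time portions then coincide exactly here,
# and approximately in real data.  The open question is the shape of f(gamma) itself.
reduced = {g: curves[g] / h(g) for g in strains}
print(max(r[-1] for r in reduced.values()) - min(r[-1] for r in reduced.values()))
```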
And the issue there is that there are a number of models that predict the second behavior, but if you ask what they would say about f of gamma, you would think f of gamma would be monotonic in gamma. And their result is that it is not. OK. There's a further conflict. This is supposed to be a critical test of some models of polymer dynamics. And we get to a nice review by Venerus. And it's beautifully done and extremely systematic. And what is done is to say that if we look at, before, oh, what was the year, it's about 2004, before about the year 2004, there are about two dozen studies. And about half of them show the predicted behavior. And about half of them don't. And so what is done is to do a very thoughtful and careful analysis and to look for artifacts. There is an extremely long list of artifacts that could cause various deviations from the expected theoretical behavior. And therefore, the proposal is that, in fact, sometimes you see the theoretical behavior, and sometimes your experimental apparatus is uncooperative and causes you problems. If you read that paper carefully, though, there is one bit that does not get done. I'm not faulting the author, and maybe it really was done and I just missed the point. But the question is, were those artifacts taking place in the studies which did not find the expected answer, or in the studies that did find the expected answer? And that isn't completely sorted out. Now, of course, this would be a little more delicate to do, because if you're saying X is wrong because he painted his apparatus pink, maybe you shouldn't have painted it pink or whatever — I'm making that one up, obviously. But if you say X is wrong, you annoy a certain number of your colleagues. And therefore, there is a certain matter of delicacy here. Nonetheless, the question is, which experiments were good and which were bad? Assuredly, the author does describe the oscillating response, which you can in fact find in some experiments. That really is an artifact, and people have actually seen it. OK. More experiments. This one is again nonlinear; if the system were linear, this would be completely dull. Double step strain. And so we have a strain gamma, and we will have another strain gamma prime. And if I plot strain, the motion of one plate with respect to the other, here is gamma, and here is gamma prime. And we ask, what does the system do after I displace it the second time? Or, if you want — perfectly legitimate — here is gamma prime, and the second displacement is in the direction opposite to the first. Those are very demanding experiments for any model to predict. OK, another experiment. Extensional viscosity. The core issue here is extremely important commercially. We take a polymer. This is usually done in terms of melts. And what we then do, because it is a polymeric liquid, is we stretch it. So we have it coming out of something here, and, for example, we attach it to a reel there, and we spin the reel very, very quickly. And therefore, if I look at hypothetical markers in the liquid, I see that the fluid gets stretched out. If I imagine an observer sitting here, what the observer would see: here is the observer's marker, and here is something going this way, and here is something going that way. Now, if you think about this for a moment, you may realize, what is happening to the volume of the system if you do this? And the answer is, this is a physics two problem.
It's a wire stretching problem. As you stretch the system, the wire gets narrower. And this is how you convert a very thick cable into a very thin thread. This is thread drawing. Very important commercially. The important feature of this, though, or an important feature, is zero shear. That is, the top and bottom surfaces of the thing are moving at the same speed. The liquid is moving apart, so there's inwards compression, but there's no significant shear gradient here. So there's not shear viscosity you're looking at. And there is extensional viscosity. There are, however, a series of interesting experiments due to Wang, Wang, very nice people. And what they demonstrate is that even if you're extremely careful, you get localized regions where nonlinear things are happening. And the notion that you can just say it's a uniform thread being stretched the same way everywhere along its length at the same time, does not hold up to detailed experimental analysis. Yes? OK. We shall now push on, and we will now push on to discuss what I very loosely describe as modern experiments. Are they really modern? Well, some of them are in the sense that it would have been much harder to do them a long time ago. Some of them are modern in the sense they have only been done recently. So let us take a few of these. And one of these is shear banding. The issue is as follows. We have a polymer solution. We put it between a pair of plates. And this could, for example, be the rotating cone and plate I described in which the center axis of the cone is way into the blackboard. There are other ways to do the same experiment. And we set the system to shear it. So we just shear the system, and we ask what happens. And the answer is that if we had a nonlinear, if we had a Newtonian fluid, let's start with Newton, if we had a Newtonian fluid, dvxdy is going to be constant across the height of the system. So the shear rate, gamma dot, I'm going to plot, here's height, here's gamma dot. This figure is a little unusual in that the independent axis is vertical. And what you have is the shear rate is constant. However, if you actually do this to real polymer solutions that are adequately concentrated, et cetera, et cetera, et cetera, and do the experiment, what you discover is that the shearing tends to be confined to some fairly narrow bands, as opposed to being the same everywhere. It's as though the fluid had formed yield planes and was yielding near those planes. These appear to be equilibrium shear bands, and that if you do things for a long time, everything just sits there. Yes? Now, the actual experiments where shear banding is observed are again due to diphthadia. It's a very clever set of experiments. It was done with a large angle oscillating shear. We'll get to that in a moment. So in fact, the top plate is going back and forth with respect to the bottom plate. And what you find in steady state is that you get bands. You get these zones, and inside the zones the fluid is yielding, and between them the shear rate is much lower. Well, that's not at all what you would have expected to find from the usual discussion, but that is in fact what you find experimentally. We will get to a more dramatic form of this in a moment, and the more dramatic form of this in a moment, instead of oscillating things, we will look at relaxation after strain. OK, and the more dramatic form of this is non-quiescent flow, or relaxation. The issue is as follows, and we're going to need a little bit of microscopic picture. 
Here are two plates. I displace one plate with respect to the other through a distance and stop. And so along what was originally this line, there are polymer molecules. After I've done this, the liquid presumptively has moved, and the form of the motion is — the phrase that is sometimes used, and I'm not sure it will not horrify mathematicians — an affine displacement. And the result is that the polymers that were originally like this are now like this. And there's a force on the two plates to hold them in place, because we displaced things, and the g of t, gamma is not 0. And we now ask, what happens? What do the polymers do after you've done this? And the answer is that the polymers move. They do local things. This one rotated — yes, you can see it's rotated. And this one got stretched, and as time goes on, it retracts. And all of these, in the context of this model, diffuse. And because they diffuse, they move relative to their immediate neighbors. The forces between them and their neighbors are reduced, and therefore, because the forces between them and their neighbors are reduced, g of t, gamma goes to 0 eventually. However, when we're done with all this, suppose I had carefully used bright green polymers along this line. Suppose I wait a long time, and g of t, gamma goes to 0. Well, polymers diffuse sideways, so the line gets a little blurry. But during the relaxation, the polymers basically will end up where they started at the moment the strain gamma had occurred. That is, you move things to the side, and they then sit there. They diffuse, but they don't move over any significant distances. And this is a core part of the so-called Doi-Edwards picture. Well, that's nice, except you can actually do the experiment I described. You can actually go in, and — well, I'm not sure you'd use green paint as the ideal solution — you can actually ask what the polymers do. And the answer is, at first, they're doing some sort of relaxation. But then you keep waiting until times that seem to be fairly long with respect to any polymer relaxation times. And what you discover is that there is macroscopic motion, and the shear has re-sorted itself, and the shear is now confined to more limited regions of space. Now, it's not that there is no strain here and then there's strain there, but the amount of the strain in here, displacement per unit distance, is maybe, oh, say, seven times as large as the strain here. Yes? So, that is what happens. OK, that's non-quiescent flow. There are large numbers of models of polymer dynamics, most of which only predict quiescent relaxation. This is a serious problem for polymer models that fail to allow for that. OK, two final — well, three final — techniques. OK, large angle oscillatory shear, LAOS. We look at, say, the cone and plate experiment; that's supposed to be the top cone. And we rotate the top with respect to the bottom. And in an oscillatory experiment, there are actually two variables. And one is omega, because the delta theta is some theta 0 cosine omega t. That is, there's an oscillation frequency. And the other variable is delta theta. That is, how far back and forth are you taking one plate with respect to the other? And in the standard experiment, delta theta is small, a few degrees. And therefore, if I look in like this, the top plate, which is, of course, a cone of which I've drawn a little piece of cross-section, is going back and forth. But this displacement is very small relative to this distance.
So the angular displacement of the top cone is a few degrees. That's small angle. However, there's nothing mechanically that requires you to do that. You could also do this. And we therefore have what is called medium angle and large angle oscillatory shear. And in these, the sideways displacement, well, here's the sideways displacement. It's about the same as L. And if you're under this limit, this is medium angle oscillatory displacement. And if you go out here, you have large angle oscillatory displacement. The point of medium and large angle is the polymers sitting here now have to rotate quite considerably if they're being pulled along. And therefore, the level of rotation gets to be quite substantial. So you have to do this with a large angle oscillatory shear. And you can actually do the experiments. You can also do experiments where you measure the dielectric relaxation at the same time you are doing this. And you should realize you now have the frequency at which you're doing the dielectric relaxation measurements. You have the frequency at which you're doing the oscillation. And the two frequencies are independent. And therefore, life is a little more complicated than it was before. OK, last experiment. The picture is as follows. We have, for example, two flat disks. And the top disk is oscillating back and forth at omega. We measure the force on the bottom disk. Yes? And we can use piezoelectric elements. And we can use electronics. And we can be very clever. And we can actually get a recording of force versus time on the bottom disk that is very precise. And what we say is we will do a Fourier transform of the time record. Now, the simplest part of the Fourier transform is we have this object oscillating back and forth at omega. And therefore, the force on the bottom will have some constants. And there will be an e to the i omega t minus delta. And the omega t is that the force down here oscillates at the same frequency the oscillation up here is occurring. But there is a phase difference. And the phase difference corresponds to the fact that there is a storage as well as a loss modulus. It's like any other driven harmonic oscillator. These are quite heavily damped. But the harmonic oscillator imaging works. OK. Well, that's true. However, if I have a long time record, I can look for all of the frequency components, not just the one omega. And if I look at the response as a function of frequency, well, yes, I find a big response at omega. But it turns out that the response also has peaks at 3 omega, 5 omega, and so forth. That is, you are seeing a nonlinear response because the system is not linear. And so you're seeing frequency. In this case, it's frequency tripling. The optical version would be, say, frequency doubling. But here you get tripling. Now, that experiment could be made considerably more elaborate. I have not found this in the literature. It does not mean it is not there. And the notion is that instead of driving at 1 omega, I have some gadgets doing the driving. It's computer controlled. And I could drive at two frequencies at the same time. And then I could look for all of the harmonics and combination frequencies. And this is what you do in nonlinear laser optics. Well, this happens to be a mechanical system. But you can do the same thing. You can look for the responses at other frequencies. And now, yes? There's two different frequencies. Oh, what do I mean two different frequencies? 
I mean that theta, the angular position of the top plate, is some theta 0 cosine omega 1t plus theta, I better call this theta 1, theta 2 cosine omega 2t plus alpha. That is, I can drive the plate at two different frequencies rather than just one. And if I do that, I can look for the harmonics and combination frequencies. And now we return to chapter 3. Because chapter 3 gives us some additional nonlinear phenomena. We are looking at capillary zone electrophoresis. So we have a tube that's very narrow and very long. And it's filled with polymer solution. And we put things in it, which are not really that as big as I've drawn them. And we measure the velocity of the things down the capillary. And the measured velocity, how do you measure velocity? Well, you start everything here at the same time. And down here you have a detector. And the detector sees the things coming by. And so you can measure, you have a length and a time. And therefore you can infer velocity. And if you divide the velocity by the applied field, you have an electrophoretic mobility mu. Well, mu shows not some nonlinear properties. If we increase the electrical field enough, you discover that mu becomes a function of the driving field. That is, instead of the mobility just being a number, the mobility depends on the applied field. It is unclear in my reading of the literature, I suspect it's very there someplace, whether the nonlinear transport effect cuts in above some field or just becomes too small to measure. However, you do have a very controllable, because you can change the field by a lot, system in which you get a nonlinear response to probe motion. And therefore you have a nonlinear viscoelastic response, which you're probing directly. Second, I can plot mu versus probe size. And what you see in the measurements is that mu falls as an exponential or a stretched exponential in probe size, until you get to a critical probe size. And above that critical probe size, which depends on the polymer concentration in the field, there is a very weak drop-off, which is close to a power log. The transition may or may not be perfectly sharp. At least some of the, let's do the probes at the same concentration, you can see what appears to be a few points around it. Now, from the standpoint of the analytic chemist, this is good, but this is very bad, because the mobility now depends very weakly on probe size, and you can't get separations out here, so this is not good at all. However, since this crossover and this crossover would appear perhaps to be the same. I didn't say they were the same, I'm not positive, but it looks like they're the same or seem to be the same. It might be the case, you're getting a good separation, you're getting a good separation, you now go to a lower field, and because you go to a lower field, you can get out to higher, larger probes before you run into the problem. And by varying the field strength as you go, you can get rapid separations, and you can also get separations that might work at higher feet, at larger probes. Perhaps, I have not done that experiment. However, this is very clearly a nonlinear change in the dynamics. If you ask yourself, what is the variable that is determining the nonlinear change? The answer, based on probe size, field behavior, etc., appears to be that this transition corresponds to how much force you're putting on the polymer solution, and the polymer solution behaves as though it has a yield point. 
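Before the summary, here is a small numerical illustration of the Fourier-transform idea described a moment ago for the oscillating-plate experiment. The stress record below is synthetic — a fundamental plus a weak third harmonic plus noise, with made-up amplitudes and phases — and the point is simply that taking the transform of the force-versus-time record reveals response at odd harmonics of the drive.

```python
import numpy as np

# Synthetic force record for a plate oscillated at one frequency; a nonlinear sample
# responds at odd harmonics of the drive as well.  Amplitudes here are invented.
f_drive = 1.0                       # drive frequency in Hz
fs, T = 200.0, 50.0                 # sampling rate (Hz), record length (s)
t = np.arange(0.0, T, 1.0 / fs)
rng = np.random.default_rng(0)

stress = (1.00 * np.sin(2 * np.pi * f_drive * t - 0.3)
          + 0.05 * np.sin(2 * np.pi * 3 * f_drive * t - 1.1)
          + 0.01 * rng.standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(stress)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

for n in (1, 2, 3):   # large peaks at 1x and 3x the drive, nothing special at 2x
    k = np.argmin(np.abs(freqs - n * f_drive))
    print(f"amplitude near {n} x drive: {spectrum[k]:.4f}")
```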
Okay, we are almost out of time, so I will remind you of what we have done. And the answer is we have discussed Chapter 14, and a teeny bit of Chapter 3 that in fact shows nonlinear effects, and we talked about three sorts of issues. We talked about things that were clearly due to normal stress differences. We didn't talk about the instrumentation problems that come up when you try to measure normal stress differences, but they're there. Then we talked about things that arise because the system has memory. There are molecular relaxations which take place on time scales so long you can observe them. And finally, we talked about some new methods and observations, such as shear banding, non-quiescent flow, and electrophoresis. The discussion in this chapter was really at the level of taxonomy. That is, we classified the phenomena, but we didn't try to describe them quantitatively. So that is it for all of the different types of phenomena that I found. I still have the suspicion that there is some sort of experiment I missed and did not include, but I didn't try to look harder. The next lecture will do a review of all of the experiments and the high points of what we found, and then we will have about one or two lectures in which we will actually try to interpret this in terms of polymer models. But that's it for today.
Lecture 26 - Nonlinear viscoelastic phenomena. George Phillies lectures from his book "Phenomenology of Polymer Solution Dynamics"
10.5446/16225 (DOI)
Classes in Polymer Dynamics. Based on George Phillies' book, Phenomenology of Polymer Solution Dynamics, Cambridge University Press, 2011. And today, this lecture is lecture 25, Viscoelasticity. I'm Professor Phillies, and today we're going to continue our discussion of polymer viscoelasticity. After we stopped taping last time, we had this very interesting class discussion on how we found, or managed to find, the temporal scaling ansatz, and where the whole model that has sort of been hiding implicitly under all of the phenomenology came from. And so I'll add in some comments on that toward the end of the lecture. What I'm going to do, though, is to discuss first where we are, where we are advancing to, and what we found so far. So what we did was to discuss viscoelastic properties as actually measured. And we looked at results on the loss modulus and the storage modulus, and we looked at results on shear thinning. The experimental outcome is that we have whole sets of all of these measurements, and we have a theoretical approach, temporal scaling, that reduces all of those functions to a small number of parameters. Now one thing you might legitimately ask is, well, it's very nice to say we have a small number of parameters here, but there is still the key question of, do those parameters say anything sensible, or are they sort of randomly scattered over the entire universe? One way to answer that question is to look at the parameters, and to ask how the parameters depend on solution properties, how they depend on polymer concentration, polymer molecular weight, and other things that one could measure. In doing that, note that the viscoelastic measurements, which in some cases are carried out up to the melt, are largely reported for fairly concentrated solutions, not tending down towards zero concentration. And therefore the measurements we looked at are a little limited in some respects in terms of which concentrations they point at. That's not always true, and if you look hard there are some that get down to fairly low concentration. So what we can do is, for example — well, let us recall what functions we have been using to fit the measurements. And so the general form: either of the moduli, normalized by frequency — that is, either g prime over omega squared or g double prime over omega — has been fit by something which is a g i zero e to the minus alpha omega to the delta, and that's up to some frequency omega c. And above that frequency, there is a g bar i omega to the minus x. There are two storage moduli — well, I'm sorry, I said that wrong: there is a storage modulus and there is a loss modulus — and because there are two moduli, there are actually two sets of these parameters that we can look at. If we start by looking at g 1 0 or g 2 0 as a function of concentration — and this is all in figure 13.33 — what you see is power law behavior. The power law behavior extends down from the melt a fair distance, but not absolutely all the way to zero, and then in at least some systems there is a trailing off below.
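As a concrete illustration of how a two-branch form like this gets turned into fitted parameters, here is a sketch in Python. The functional form follows the description above, with the high-frequency prefactor fixed by continuity at omega c; the synthetic data, the parameter values, and the use of scipy's curve_fit are all my own choices for illustration, not the book's fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of fitting the two-branch temporal-scaling form to G''(omega)/omega.
# The synthetic "data" and all parameter values are invented for illustration.
def ansatz(w, G0, alpha, delta, x, wc):
    Gbar = G0 * np.exp(-alpha * wc ** delta) * wc ** x   # fixed by continuity at omega_c
    return np.where(w < wc,
                    G0 * np.exp(-alpha * w ** delta),    # stretched-exponential branch
                    Gbar * w ** (-x))                    # power-law branch

w = np.logspace(-2, 2, 60)
true = (50.0, 1.2, 0.8, 1.1, 3.0)                        # G0, alpha, delta, x, omega_c
rng = np.random.default_rng(1)
data = ansatz(w, *true) * (1.0 + 0.03 * rng.standard_normal(w.size))

popt, _ = curve_fit(ansatz, w, data, p0=(40.0, 1.0, 1.0, 1.0, 2.0), maxfev=20000)
print(np.round(popt, 2))    # should come back close to the "true" parameters
```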
We should recognize that g, that's the omega equal, goes to zero limit of g double prime over omega, that's a viscosity, and therefore we are looking at systems that are in the melt-like regime, that is we are looking at things in which the viscosity has gone over from a stretched exponential concentration dependence, which is always found at low concentrations, to a power law concentration dependence, which is sometimes found, but not always found at higher concentrations. We can also look at alpha versus concentration, and what we see for alpha versus concentration is again power laws. The linear polymers have a weaker dependence of alpha on concentration, the star polymers have a stronger dependence of alpha on concentration. Finally, if we look at the exponent delta, this exponent, what we see is that for linear polymers, typically delta stays fairly close to one, and for star polymers, concentration has increased, delta falls off. I could go into considerably more detail by putting the slides up one at a time, but that's certainly the sort of results we're talking about. On the same line, we could then ask, well, gee, what happened, where is the cutoff frequency? There's a cutoff frequency that transitions us from the low frequency region to the large frequency region, and if we plot the cutoff frequency versus concentration, what we see is that as we increase the concentration, the cutoff frequency falls, and fairly consistently, these things are fairly close to power laws. Now, in some cases, we say, yeah, these are close to power laws, but you legitimately ask, but what range of the power law are you seeing? And the answer is, in some cases, you're not seeing a great deal, and you might be able to fit to something else, and in other cases, you're fairly clearly covering a decent range. It is also the case that you might wish for significantly more points on a single curve. You should realize how much work is involved here. That is, to get to this point, you start off by measuring the storage and loss modulus at some frequency, and you repeat this measurement many, many times at a whole series of frequencies, and now you have done one solution. And then you can fit the, you then can fit and extract the parameters from this functional form, and you now have on each of these graphs, on the, as a result of many experiments, you have one parameter. You notice that there is a lot of work here. Now, to some extent, you can hope to avoid this by the simple expedient of automation, which is much more practical than it was 20 or 40 years ago. I mean, there are even people, and we can go back 20 years on this, or 30 years, who are experimenting on automation of organic synthesis. But the net result of all this is that you might like more points, but there is a huge amount of work to get to what you see. Nonetheless, the cutoff frequency does indeed fall with increasing concentration. It falls more slowly with linear polymers. It falls more rapidly with star polymers. Okay. So let us perhaps chug ahead a bit, and let us ask what happens at higher frequencies. That is lower frequencies. At larger frequencies, we have power law behavior, and there are two sorts of things we could plot, and we could plot g bar n. n is one or two, and that is not exactly a physical number. g bar n is the value that the modulus as normalized by frequency would have at unit frequency if you had power law behavior into unit frequency. 
Of course, you do not have power law behavior into unit frequency, and therefore, this particular number is perhaps not quite what you want. You could imagine, it is not what I happen to have done in the book, plotting instead g bar i omega at the cutoff at the transition to the minus x, and then omega over omega cutoff to the minus x, and you could call this, I don't know, g double prime, g double bar, and therefore, you would actually give the value of the modulus at the frequency at which there is a transition. That would be perfectly legal to do. It would be a little harder to, it would be a little trickier to fit because you would probably, having done a great deal of nonlinearly squares fitting, you would probably find the fitting process was a little more insistent that you give it decent starting parameters, or you get off, you might get divergences. Okay, so what do you find this thing does? Yes, or is it increases with increasing concentration? And if you actually go back to the original figure, the original data of, say, Paul Beatle, beautiful measurements, you may remember you saw something that looked like this. Curves do not cross each other. The stretch exponential parts, if extrapolated, do. And so what is happening is we are moving up, we are moving the rollover of the low frequency to lower frequencies, but we are not doing it so fast that these curves interpenetrate. And what is the slope of, what is the slope here? The slope there is delta. It's an omega to the minus delta. So we can ask, well, that, okay. So what does delta look like as a function of concentration? And the answer is delta times starts off at lower concentration, tends to an asymptote. And for delta one, the delta corresponding to the storage modulus, you get up to about 1.9, roughly. For delta corresponding to the loss modulus, you get up to something like 1.25. Now, those are the slopes for g prime over omega square and g w prime over omega. If you were actually looking at the two curves, the actual g prime or g w prime, g double prime itself, what you would say is that this is not omega to the minus 1.9, it's omega to something very close to zero, but slightly negative. And this would be omega not to the 1.25, this would be omega to the 0.25, and it would be very slightly climbing. We'll come back to the pretty pictures those match in a bit. Okay, so that's the concentration dependence. We can also talk about the molecular weight dependence. And the molecular weight dependence takes us up a forward couple of figures, if I recall correctly, about 13.35. The molecular weight dependence results are a little constrained in that there's not quite as much information as you would like. There is very nice information at 641 gram per liter of an f equal to six star. And there is nice information going down a bit on a linear polymer, but there's not quite as many results that we can point at where there was where a lot of different molecular weights at fixed concentrations. Nonetheless, what do we see? Well, g i zero is going to climb with increasing molecular weight. We are in the melt-like regime where we see a power law behavior, and we saw that for concentration. And here we also see it for molecular weight. I should redraw this slightly. For the star polymer, the slope is relatively steep. For the linear polymer, the slope is less pronounced, though present. We can also look at alpha versus m. And again, we see something close to power law behavior. 
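A side note on mechanics: when one says a fitting parameter such as alpha "shows power law behavior" in concentration or molecular weight, the exponent is usually extracted by a straight-line fit on log-log axes, along the lines of the short sketch below. The numbers here are invented purely to show the procedure.

```python
import numpy as np

# Extracting a power-law exponent from parameter-versus-concentration data:
# a straight-line least-squares fit on log-log axes.  Numbers are invented.
c = np.array([100.0, 200.0, 300.0, 450.0, 641.0])   # g/L, illustrative
alpha = 2.0e-3 * c ** 1.6                            # pretend fitted alpha values

slope, intercept = np.polyfit(np.log10(c), np.log10(alpha), 1)
print(f"apparent power-law exponent: {slope:.2f}")   # ~1.6
```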
The best I can say for delta is that there are lots of values of delta. There are two moduli. There are a certain number of samples. There were a bunch of molecular, and we can look at versus molecular weight. And there are a lot of values of delta, but I didn't see any easy generalizations. Okay, what else do we see? Well, there are two more parameters here, aren't there? Yes. There's g bar i, and there's x. And if we look at g bar i for the molecular weight dependence, that's certainly well defined. If we look at g bar i and the molecular weight dependence, well, you see a result. And the best that can be said, if you want to say it's power law behavior, is that it's a rather weak power law because the variation in g bar i with m over the observed range of m is not very large. And therefore, if you want to say it's power law, your evidence isn't as strong as you might like. For x, you see something slightly peculiar, and it's a little more, I will emphasize, this is a linear scale for x and an exponential scale for m, or if you prefer a log scale. And there are nice lines on the plot, and therefore, we can say that x is proportional to log of the molecular weight. The slope is very weak. It's not flat. And that leads us to, recalling it's omega to the minus x here, we have something that is omega to the minus some constant. Oh, there's an additive constant, too. Log m. And if you have something of that slightly unusual structure, you can rearrange it, and you can arrange it to say that it is m to the log omega. There's some constants. But I'm not sure what, if anything, that tells you. It's a rearrangement. Nonetheless, if you actually go and look at the figures, what you see is, we can extract parameters out of them. The parameters we extract out are typically fairly well behaved in the sense they show smooth dependencies on the solution properties that you think ought to be determining them. And therefore, in some sense, you're getting a quantitative description. Now, having said there's a quantitative description, you ask, okay, you have a quantitative description. What good is a quantitative description? And there are three sorts of goods. The first is that if you actually have a quantitative description, and you can say how the fitting parameters depend on concentration and molecular weight, you now can use the fitting parameter behavior and measurements at some number of points, and you can use them as interpolants. You can use them to calculate the storage and loss moduli at polymer concentrations and molecular weights that are not the same as the concentrations and molecular weights that you actually studied experimentally. That is potentially quite useful. I emphasize there's a lot of work required to do these measurements, and if you have a good instead of interpolating functions, you can significantly reduce how many experiments you need to perform to get the measurements you want. But the second point is that if you discover, you know how these various parameters depend on concentration, molecular weight, solvent quality, whatever, those are clues. Those are clues as to how you take the theory and you push it ahead to the next step. They don't tell you what the theory is, but they are hints from mother nature to what you ought to be doing. Finally, if you do manage to cook up a model that end a detailed calculation that actually explains what you are finding, which may happen at some point, if you are able to do this, you have a result. 
Namely, you can take the experiments and actually use them to test your model and see if it works or not. And so, the phenomenological approach I've outlined here actually has three sorts of uses. Okay, now we're going to change topic entirely. We are going to talk about the Kramers-Kronig relations. The Kramers-Kronig relations are a statement about the relationship between the time response and the loss modulus and storage modulus. They're in a certain sense a mathematical relationship that must hold because the system is linear. So, let us consider what we mean by that, and we start with g of t, which is the shear stress relaxation function. And what the shear stress relaxation function does is we consider the following. Here's the time axis, and we have our idealized two infinite plates, and we displace one of the plates some amount very, very fast. And once we've done that, we ask what force is acting on the upper plate. And what we find is the force is zero until we displace the upper plate, and then there is a spike, and then there's some sort of a relaxation whose details I am not giving you at the moment, but there is a relaxation. And if you look very hard as you are moving the plate, you can also ask how you manage to get up here. Now, the next statement is the system is linear, and therefore if we are making a motion of the plates, we can decompose the motion of the plates into a large number of steps, and each step separately creates, at the time it happens, its own change in the shear stress. Okay? So, we go in and we could imagine making a second step and a third step, or you could make a whole series of steps in the form of a cosine wave or a sine wave. So we are saying we have a linear system, and therefore it must be the case that the response of the system to a sinusoidal oscillation and to a step motion must be calculable from each other. The last piece is causality, that is, g is supposed to be zero at times less than zero, because you should not have a response force before you've moved the plates. If you do, you can use this to construct, if not a time machine, at least a device that lets you communicate backwards in time. And now I will make the point — this is the result of Kramers and Kronig, two separate authors, two separate papers — that that statement lets me calculate g of t either from g prime over omega squared or from g double prime over omega. There are a large number of different ways of writing this, but roughly: g of t is two over pi times the integral from zero to infinity of g prime of omega, divided by omega, times sine of omega t, d omega; or, equivalently, two over pi times the integral from zero to infinity of g double prime of omega, divided by omega, times cosine of omega t, d omega. These are frequency-time transforms; they're invertible, and so if I tell you what g of t is, I have told you what g prime and g double prime both are. Correspondingly, if we look at g prime and g double prime, they're not independent. That is, if I start with either of them, I can calculate g of t, and then from g of t I can turn around and calculate the other of them, and if you want you can eliminate the middleman and presumably calculate one from the other directly, can't you? Okay, so we can do this calculation, and the Kramers-Kronig relations are a self-consistency result.
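As a quick check that the transform pair just written down does what is claimed, one can test it numerically on a case where everything is known in closed form, a single Maxwell mode. This is my own illustration, not a calculation from the book; the crude finite upper frequency limit in the integral stands in for the infinite upper bound discussed next.

```python
import numpy as np

# Single Maxwell mode, where everything is known exactly:
#   G(t) = G0 exp(-t/tau),   G''(w) = G0 * w*tau / (1 + (w*tau)**2)
# Reconstruct G(t) from the loss modulus via (2/pi) * int_0^inf [G''(w)/w] cos(w t) dw.
G0, tau = 1.0, 1.0
w = np.linspace(1e-4, 2000.0, 400_000)   # the finite upper limit stands in for infinity
dw = w[1] - w[0]

def G_from_loss(t):
    integrand = (G0 * tau / (1.0 + (w * tau) ** 2)) * np.cos(w * t)
    return (2.0 / np.pi) * np.sum(integrand) * dw   # simple Riemann sum

for t in (0.5, 1.0, 2.0):
    print(t, round(G_from_loss(t), 4), round(G0 * np.exp(-t / tau), 4))
# The two columns agree to a fraction of a percent, limited by the grid and the cutoff.
```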
There is however a constraint on the Kramers-Kronig relations if you actually want to use them like this, namely you need to know the storage modulus or the loss modulus out to infinite frequency, because the upper bounds on these two integrals are infinity. Well, if you're actually doing physical measurements you can cover four orders of magnitude in frequency. If you do time-temperature superposition, you can do seven orders of magnitude in frequency. But what you know is that there's a reasonable estimate as to what the full function should look like, namely an initial stretched exponential regime, a power law regime, and some place out here, if we're looking at the loss modulus, there should be a rollover, because in the end the solvent has a viscosity, and that viscosity does not depend on frequency, and you can get way, way out there. And in between there may be — let's see, we saw several deviations from a simple added power law — there may be some structure. Well, if you only measure up to here you can't see the structure, which is an issue, but if you measure far enough the functional fits in essence have another use, namely the functional forms work quite well and therefore you can use the functional forms as extrapolants to cover into regions you did not see. Also, if the functional forms are reasonable, they describe the measurements accurately, and the noise in individual measurements is smoothed out when you use a good fitting function. Which you can then feed into the Kramers-Kronig relations? That's quite correct. You now have something that extrapolates and smooths that you can put into the Kramers-Kronig relations. Well, that's actually done, and it's figure 13.39. And if you look at 13.39 there are measurements of G prime and G double prime, and they have been used to compute G of t, and this is all done for one polymer in a series of concentrations, and if you look you see a curve that rolls over, and there's another line, which is G of t calculated from the other modulus, and it rolls over, and except very near the two ends this works just fine; it's quite accurate. Right close to the two ends things start to get noisy, and if you look at the times at which you start to see issues, those times correspond to the upper and lower frequencies at which G of omega was measured. That is, the extrapolant does a nice job of smoothing the ends of the curves out for you, but you really didn't do measurements out to very high frequencies, at least in those results, and therefore when you get into the region where you didn't do measurements in frequency space, the time domain numbers get a little more noisy. Okay, now, the Kramers-Kronig relations — what can we say about them? Well, first of all, there is a physical result, so you should certainly be convinced that the original measurements found in the original papers — if you actually looked at the storage and loss modulus, the original numbers — must have obeyed the Kramers-Kronig relations; there's absolutely no reason to doubt that. What we did here was to say we will take the fitting functions, and we will use the fitting functions and the parameters we determined to calculate analytically what G of t is, and those calculations were done, and when you do those calculations you get the curves seen in the figure.
If the Kramers-Kronig relations had not been satisfied — if the two curves were not right on top of each other — there would be a very strong message that something was wrong with your fits, and furthermore it would be something at the level of: does the fit describe the measurement reasonably? I mean, the original data should have done this, so far as we know, and therefore when you have fits which seem to give you curves right through the data — and I'm going to emphasize that much more in a bit — gee, it should work, and it does; if it didn't, something would be really wrong. Okay, now we come to the next section of the book, and the next section of the book is on optical flow birefringence. Birefringence is the notion that if I have a material object and I send a ray of light through it, I can send through linearly polarized light, say vertically or horizontally polarized. If the sample is birefringent — there are a few crystals that are birefringent; I think Iceland spar comes to mind — birefringent means that the index of refraction is not the same for one light polarization as for the other. This phenomenon was known to ancient Norse navigators; they had a device, the sunstone: you looked at the sky through it on a completely cloudy day and you could tell where the sun was. And the reason you could tell where the sun was is that the light that is scattered through clouds, even if you can't see the sun, picks up a polarization because of the way the scattering works, and experimentally someone actually went out and tried this: you can actually estimate the sun's height with some reasonable precision, because you can tell where the polarizations are pointing, at the sun's location. I'm not going to swear it competes with the magnetic compass, except in the far north, where magnetic compasses are not very useful because the earth's magnetic field is somewhat vertical. In any event, that is birefringence. Now the next thing is that you have a system and you shear it, or you do something similar to it, and you line up the polymer chains. Now, there are a lot of ways to do the experiment, and the general notion is, for example, you apply an oscillatory shear and you get transient alignment of molecules, and once you have persuaded the molecules to line up, well, they don't stay lined up; they try to relax to random directions. But once you've done some sort of arrangement of the molecules, you now have a system that is optically anisotropic in a way that gives you birefringence. We then have the stress-optical idea, which is that you can describe this: we align the molecules and we see relaxation, and so we are applying a drive — it's technically not a force, a driving effort — that is cosine omega t, and the response has an in-phase component, which sounds just like the storage modulus, and an out-of-phase component, which sounds very much like the loss modulus, and these two should be the same. These two should be the same as the corresponding objects that you find if you do dynamic shear measurements.
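A one-line math aside on the bookkeeping that comes up next: quoting the measured response as an amplitude S and a phase delta carries the same information as quoting an in-phase and an out-of-phase component, since this is just the standard complex-number decomposition.

```latex
% Amplitude S and phase \delta versus in-phase / out-of-phase components:
S^{*} = S\,e^{i\delta}
      = \underbrace{S\cos\delta}_{\text{in phase, storage-like}}
      \;+\; i\,\underbrace{S\sin\delta}_{\text{out of phase, loss-like}} .
```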
Now that says something about the system, namely it says that what you are seeing optically corresponds to what is causing the polymer to show its viscoelastic behavior; they can't just be uncoupled variables, they have to be the same variables. And what is traditionally done, if you look in the literature, is that you record an amplitude S and you record a phase delta, because if you have two components, in phase and out of phase, that's the same as one component with a phase angle; that's simply the statement that you can write a complex number as an amplitude and a phase — that's a math statement — and therefore these two ought to be related. In fact, I give an example where I took data on optical flow birefringence measurements, replotted them from this form to something that corresponds to this, and if you plot them versus frequency you see something that looks very familiar. I confess, however, that this section of the book was an extreme disappointment. The primary issue — since there was a certain amount of writing effort here, and the writing effort became time consuming — was that I was really not successfully able to find an entry point in the literature that revealed solution data. There's a very considerable amount of melt data on optical flow birefringence, through groups at Stanford and Caltech, those being the places that come immediately to mind, and there are some very good people who work on it. However, it seemed all of the stuff I found were melt measurements, and since I had decided not to cover melts, there wasn't much to talk about. I have the distinct suspicion that sooner or later someone is going to send a polite email: here is where you should have looked; here is the obvious set of search words to use in Web of Science, and you would have found a whole chapter full of results. Well, I didn't. It was almost as frustrating as pulsed-field-gradient NMR, where in principle you can use NMR studies to look not only at translational diffusion but also at segment motions, and you do find the discussion way back in about chapter six, where there are some NMR studies of segmental motions, but I have the distinct impression that once again there was something I didn't quite find. Nonetheless, there is such a technique, and there is a section on it. Okay, what is next? General results. Well, first I am going to talk about a picture, and I am going to point out some terminology. And we will start by talking about G of t, the stress relaxation as a function of time. And so we do something to the system and we look at the response, and we ask what the response shows us. And what we find for G of t is there is what we call a glassy region, and then there is a transition, and then there is what is called the plateau. And eventually, if you go out to sufficiently long times, there is a terminal region, and the terminal region is the response going down to the point where basically what you see is the solvent. Let's turn this around and let us put this in the frequency domain. And I will show you the plot as it is traditionally done, where you plot G prime or G double prime itself, not normalized by frequency. And you have a take-off, and the take-off is what is called the terminal regime. And if you are plotting it in the frequency domain you might wonder, why is it called terminal? The answer is that in the time domain it is terminal, and in the frequency domain — well, you know, long times correspond to low frequencies.
And then there is a rollover and we get into the plateau. And if we get out here far enough there is a transition, and then there is the glass. And if you refer back to the previous lecture, I show a figure where I take the ansatz and re-plot everything in this form. And I show that the low frequency stretched exponential regime gives me a curve that looks like that. And some place in here is omega c, and beyond omega c the power law regime, plus additive stuff at high frequencies, gives me a curve that actually matches the measurements quite nicely. And the curve uses a small number of parameters whose behavior is known. So this glassy-transition-plateau-terminal terminology describes the time domain nicely. The next point I would make is that if you start off with a polymer solution and you make the concentration and molecular weight higher and higher, this plateau gets longer and longer. That is, you have a final relaxation. Oh, I should emphasize this is log t. If you plot linear in t it's much less impressive, because you cannot both see the fine detail at short times and see what's happening at long times. And therefore what happens is you increase the molecular weight, and the terminal time moves out as a power of the molecular weight. And that is a general behavior of the system. Okay, so there are the time and frequency domain pictures. It's the same picture, in a sort of mirror symmetry, and in addition there is the traditional nomenclature that's used for the regimes. Okay, so having done that, what do we find experimentally? Well, the first point — we're now going to advance to section 13.7, general results. And the first point I would make is the temporal scaling works. It works extremely well. If you go back to the early figures in the chapter, there are plots of the storage and loss modulus which cover a considerable number of powers of 10 in frequency and which show considerable decay over a number of powers of 10. And over that you have the lines, and the lines go right through the points. It's not that there's a line through a set of points which, say, has a smooth, very gentle curve, and we try to fit a power law to it. The lines go through the points, so temporal scaling works very well. Furthermore, there was the prediction: here is the behavior at low frequencies; here is the crossover to higher frequencies. The stretched exponential curve continues like this. The power law curve continues like this. And the transition is very clearly smooth. There's no bump. And it's analytic, that is, the first derivative is quite clearly continuous where we cross over from one to the other. There are a few cases in which you see a little bit of a gap, the power law being higher. And the question is, why do you see a gap? And one answer is, well, maybe you need to improve the model in some cases slightly and insert a bridge function, a very small bridge function. And the other answer is that you have something that is doing something at large frequencies, and as a result your curve is not quite as simple as you thought it was, so when you get up to here there will be a very small deviation. For example, there's extra activity in the data, and that displaces the best-fit power law a tiny bit. And I'm not going to make any explanation as to which it is or try to rationalize it away. I'm just saying there is a limit. At large frequency we do see some additional phenomena.
One choice is that we curve over to an additive constant, which more or less corresponds to the solvent. Another possibility is that we see two power laws, additive, but each of those power laws is covered over a decent number of orders of magnitude, so it's fairly clear they really are power laws, and the region right at the crossover really is well described if you assume the two power laws are simply additive. And finally, in some systems we see what appears to be an additive exponential bump, as if something were happening that gave you an extra relaxation out at higher frequencies. Oh, and there isn't a lot of continuation of the data down here to say exactly what happens at really large frequencies. We see the additive, say, exponential relaxation, but I can't tell you whether it relaxes into an increasing power law or comes back to the original power law curve; the measurements just aren't quite there. After all, if you didn't know they were interesting, why would you do them? It's very hard work. Okay, so that's the frequency dependence. There is a series of fitting parameters: g10, g20, an alpha for each of the storage and loss moduli, a delta for each, an omega cutoff for each, and then there is a g bar i zero and an x. So there are roughly six parameters; well, maybe we should count this as one parameter, in which case there are five parameters, one set each for the storage and loss modulus. The cutoff frequency is not exactly an independent parameter. The reason it's not exactly independent is that if I tell you what these three are, and I tell you the slope here, and I tell you that when we cross over from one form to the other the slope and the altitude have to be consistent, that constrains what some of these parameters are. So there really aren't quite that many free parameters, but I show all of them because it's not quite clear which should be viewed as the independent free parameters and which are sort of along for the ride. The dependence of these parameters on solution properties is that we see power laws, and therefore from the power laws we have, in many cases, exponents which become a theoretical target. We don't always see power laws: x is a straight line with gentle slope when plotted against log M. So there is a behavior which is at least smooth and consistent. Okay, is this good or bad? Well, I am going to quote two passages from the book. So we have a quote, this is now going back a quarter of a century, from Ferry, discussing the dependence on concentration and molecular weight: it is evident that the concentration reduction scheme for the transition zone described above (that's in his review article, referenced in the introduction) cannot be applied in the plateau zone, and indeed no simple method for combining data at different concentrations can exist; the shapes of the viscoelastic functions change significantly with dilution. Well, as I have just shown you, while there are changes in shape, it is possible to describe the concentration and molecular weight dependences with the fitting parameters, and therefore it is indeed possible to unite data at different concentrations. Now, 'combined', when Ferry wrote it, had a slightly different meaning. Ferry was thinking in terms of reduction plots, that is, you change the scale of the frequency with a concentration-dependent function, or you change the vertical scale with a concentration-dependent function, and the curves just lie on top of each other.
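As a purely illustrative rendering of the fitting form just described, here is a minimal Python sketch of a stretched exponential in frequency that crosses over to a power law above a cutoff, with the power-law prefactor fixed by continuity at the cutoff, which is the sense in which the cutoff frequency is not a fully independent parameter. The function and parameter names are mine, not the book's, and the numbers are made up.

```python
import numpy as np

def temporal_scaling(omega, g0, alpha, delta, x, omega_c):
    """Stretched exponential below omega_c, power law above it.

    Continuity at omega_c fixes the power-law prefactor; demanding that the
    log-log slopes also match there would further tie x to alpha, delta and
    omega_c (x = alpha * delta * omega_c**delta).
    """
    omega = np.asarray(omega, dtype=float)
    low = g0 * np.exp(-alpha * omega**delta)                    # low-frequency branch
    prefactor = g0 * np.exp(-alpha * omega_c**delta) * omega_c**x
    high = prefactor * omega**(-x)                              # high-frequency branch
    return np.where(omega <= omega_c, low, high)

# illustrative evaluation with invented parameters
w = np.logspace(-2, 3, 11)
print(temporal_scaling(w, g0=1.0e3, alpha=0.5, delta=0.6, x=1.5, omega_c=10.0))
```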
That reduction approach actually works reasonably well for the viscosity, for example, as a function of molecular weight. While Ferry was saying the reduction approach does not work here, we've shown something better than the reduction approach that really does work. Okay, there is, if I remembered to bring it, another quote here. Where did it go? It's right in front of me. Well, I thought it was right in front of me. The other quote, which is from the Pearson review, says in essence that if you dilute things it is obscure what is going on. The obscurity has now been lifted; we can now show you, at least in a descriptive sense, what is happening. Okay, there is one difference between what has been done here and what has been done in all earlier parts of the book that I really want to emphasize. In earlier parts of the book, where we were talking about, for example, the self-diffusion coefficient or the viscosity, we took the standard transport parameters as actually found in the literature, and we did an analysis in terms of universal scaling forms. Here we said we are going to replace the standard literature functions with two new functions, G prime over omega squared and G double prime over omega, and we are going to analyze our new forms in terms of what we know. Now of course this isn't exactly new, because G double prime over omega, as omega goes to zero, is the viscosity. And if we're interested in the viscosity, you'd think that we would like the viscosity and G double prime over omega to be consistent, which means you have to analyze in terms of G double prime over omega. And once you've done that, it's also sensible to analyze in terms of G prime over omega squared. The additional motivation for using these, though, is slightly different. Namely, we have something plotted against omega. The reasonable expectation, if you have a transport parameter that is dependent on frequency, is that the intrinsic transport parameter at very low frequencies shows quasi-static behavior, just as you can do electrostatics and not worry about radiation if you're at very low electromagnetic frequencies. Well, in order for that to be true, the slope as you head into zero frequency ought to be zero, and these functions have that property. The traditional G prime and G double prime functions themselves both go to zero at zero frequency. Okay, so what else can we say? Well, we can say one last thing, and it's an interesting correlation, a correlation between alpha and the corresponding g i zero. What we find is that alpha, this is alpha for one modulus or the other, goes as the zero-frequency limit of the corresponding modulus function to some power x. The correlation is somewhat less impressive for the storage and loss moduli than it turns out to be for the shear thinning. However, for G prime, x is about a quarter, and for G double prime, x is roughly speaking a half. Now that's actually a very important correlation. The reason it's an important correlation is that g i zero, which, if we're talking about g two zero, is the zero-shear viscosity, is simply a number determined by concentration and molecular weight, and in a certain sense it doesn't depend on time-dependent parameters. Alpha is the parameter that gives us the leading frequency dependence, and therefore it too is being determined by polymer concentration and molecular weight.
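A compact restatement of why these two combinations are the natural ones; the small-frequency limits quoted here are standard linear-response results for a well-behaved viscoelastic liquid, not something new to this book:

\[
G''(\omega) \xrightarrow{\;\omega\to 0\;} \eta\,\omega, \qquad
G'(\omega) \xrightarrow{\;\omega\to 0\;} c_{1}\,\omega^{2},
\]

so

\[
\lim_{\omega\to 0}\frac{G''(\omega)}{\omega} = \eta, \qquad
\lim_{\omega\to 0}\frac{G'(\omega)}{\omega^{2}} = c_{1}\ \ (\text{a finite constant}),
\]

and both ratios approach their zero-frequency limits with zero slope, i.e. quasi-statically, whereas G prime and G double prime themselves vanish there.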
And therefore the low-frequency behavior is seemingly being determined by quasi-static properties; not entirely, because there's also that parameter delta, though delta is close to one, at least for smaller polymers. Nonetheless, there is this very peculiar correlation between the zero-frequency limit, which is some function of c and M, and the leading frequency dependence parameter. That's about how far I've gotten on that one. Now, you could repeat this; yes, we have an alpha for shear thinning. The shear thinning measurements show the same general properties that the other viscoelastic properties do. In fact, I talked about the Cox-Merz rule, and the Cox-Merz rule tells us that eta of kappa and eta of omega, that is, G double prime of omega over omega, should be the same or very close, and therefore the parameters should be the same. And for these we get an interesting feature. We refer to figure 13.42, and figure 13.42 compares alpha with eta zero, and there are very nice straight lines, the points right on the lines. The limitation, of course, is that there's a material-dependent part which, since different materials are different, we don't have a universal value for. We find that alpha is proportional to eta zero to the two-thirds, and therefore we can write eta of kappa, this is the shear thinning, as eta zero times the exponential of some constant b, which is material dependent but is the same at all concentrations and molecular weights, times eta zero to the two-thirds, times kappa, and the exponent of kappa is pretty close to one. So we actually have a formula for the shear thinning entirely in terms of the zero-shear viscosity. That ought to be a theoretical clue, but I can't really tell you what it is. Now, there's another clue on that figure, though it's most visible on the top line, which is made of circles, and if you look you'll notice that some of the circles are open and some of the circles are filled. This is explained in the legend: the open and filled circles correspond to linear polymers and star polymers. You see that they correspond to linear and star polymers, and the curves for linear and star polymers, we could say the values of b, lie on the same line. That is, whether you replace a linear polymer with a star polymer, so far as shear thinning is concerned, nothing happens. You have a set of points that covers stars and a set of points that covers linear polymers, and there's only one line there, so the topology doesn't do anything to the best fit; this, of course, is for the figures for poly alpha methyl styrene. So you have now seen a discussion of shear thinning and this very peculiar correlation, which once again must be a hint on how to do the calculation, if only you knew what it meant. I have about 15 minutes left, so what I am going to do is make a few remarks leading into the next chapter, and then I'll discuss a bit how we got here. The remark on the next chapter is that the next chapter is the edge of where the book was going, and the edge of where the book was going was nonlinear viscoelastic properties. So far we've talked about linear viscoelastic properties, where if you combine a series of displacements they add linearly, and therefore what you see is fairly transparent, and you see things which may be frequency or shear rate dependent. In the next part we're going to take up what are loosely described as nonlinear properties, and these actually fall into two sets. The first set arises because the pressure of the liquid, the pressure tensor, actually displays its tensor properties, and therefore in a polymer
liquid being sheared the pressure this way the pressure this way and the pressure that way are not necessarily the same and you actually have a pressure tensor not a simple linear pressure and there are a whole series of slightly odd phenomenon appear that appear that are in some sense seem to be related to this piece of the issue there are then secondly a set of what I will describe as modern nonlinear phenomena and we'll actually show some of these and these are things such as large angle oscillatory shear where we have two plates and instead of doing this they're doing that or we shear a fluid and stop and we look in and ask what happens and we see a phenomenon known as shear banding in which there are displacements in the fluid but they're not uniform across the height of the container or we do some experiments and there's an amusing background story here that we'll get to where you for example apply a shear sit around for a while apply a second shear in the same and up or opposite direction and ask how the system responds and you look for deviations which from linear behavior deviations are quite substantial we could also drop in it turns out to be a nonlinear issue discussions of what is called extensional viscosity okay so I have now given the lead and I now have some modest number of minutes to explain how I got here and the front end is that one front and I suppose there's several front ends and one will get front end was that I had for a very long time been interested in diffusion of colloids as a theoretical problem and the traditional core question is how does how do the diffusion coefficients the self-diffusion coefficient and the mutual diffusion coefficient depend on concentration if you go way back in time 40 years you can find people who believe that light scattering spectroscopy measured the self-diffusion coefficient a mean square motion of particles well that's entirely wrong and one of the key results of my doctoral thesis was to make emphatically clear the diffusion coefficients depend on concentration light scattering theory experimenters hadn't all known this and further more light scattering spectroscopy unambiguously measures the mutual diffusion coefficient so there's then a question of how do you calculate the mutual diffusion coefficient from the known properties of the liquid and that was something that I worked on for several decades with increasing refinement and improvement of the calculations some of the details of that you can find in my other book elementary lectures and statistical mechanics is the last few chapters however that meant if in order to do this you needed a fairly solid background in hydrodynamic interactions which are important in these systems now hydrodynamics had also been introduced in polymer theory and the introduction in polymer theory is due to the Kirkwood-Reisman model for the dynamics of a single polymer chain that is a dilute polymer chain and the notion of the dilute polymer chain if we have a chain and if a piece of it moves it creates a hydrodynamic wake on other parts of the chain and that modifies how the chain moves so that's the Kirkwood-Reisman model one natural thing to do at the time was to take the Kirkwood-Reisman model and extend it to treat interacting polymer coils and you can find some people who did this unfortunately a major part of the emphasis it was a very reasonable part of the emphasis was to try to do a calculation of the viscosity is a function of concentration by treating interactions between pairs of 
polymer coils if you try to do this what you run into if you are not careful is that the long-range nature of the hydrodynamic interaction causes the integrals you rationally set up to be divergent and therefore the theory seemed to hit a dead end that was in the early 50s and people didn't pursue it for okay in any event coming back to the work I was doing I we did quasi-elastic light scattering and very early on about 1980 we started doing probe diffusion and the original idea was we can measure the d of the probes we can compare with the viscosity of simple liquids and we can look and see if there's anything interesting um I didn't I was doing this in terms of oh water water glycerol temperature dependence and before we got very far into this um one of the graduate students who was doing her research rotation came by and spoke to various professors and spoke to me and I'm quite sure though I didn't think of it for quite a long time afterwards it was Allison White who suggested well why don't you add polymers it's a very good idea now the particular polymer she suggested would not have done well with the particular polystyrene sphere probes we had because it was the wrong charge but that's easily fixed and you can find poly as she would have readily told me you can easily find poly electrolyte polymers of either charge you can find neutral water soluble polymers and so we did measurements of probe diffusion and after a piece it became apparent the probe diffusion follows a universal scaling form e to the minus a c to the new this is for probe diffusion you can also compare the diffusion of the probes smaller large with the viscosity of the liquid and what is found is that the um viscosity of the liquid does not determine the probe diffusion coefficient even for large probes that is if you look at the product dp eta over the zero concentration values goes up the probes diffuse faster than you would expect from the viscosity the faster than part as a practical matter is very important if you found that the probes diffused more slowly you would think well the polymer is sticking to the probes and it's increasing their size the polymer is causing the probes to stick to each other and you are looking at the diffusion of increasingly large aggregates but all of those effects would cause a diffusion to be too slow and you'd see deviation in the other direction fortunately nature was not arranged like this and it was clear we were seeing something physical rather than some dull and pointless artifact at some point in here about 85 uh tim lodge sent me a preprint of one of his studies on polymer self diffusion and i looked at his data and it was immediately evident that d versus c was again a stretched exponential and while it took me a while to get around to i after a piece was able to sit down and analyze d versus c for a substantial number of different polymers uh this got this got considerably easier this was after i got here to wpi you had micro computers that were available you had nice computer resources you had fitting programs that helped you do all of this and therefore you could combine all of these things and by combining all of these things it was apparent that the cell polymer self diffusion coefficient followed a stretched exponential and viscosity and furthermore if you looked at the viscosity you also in general saw stretched exponential behavior now we had actually seen tyho lin like one michigan grad student had actually seen the the solution like melt like transition and we 
noted, oh, it happens: the concentration dependence goes from a stretched exponential to a power law, and that is roughly the point where the probe diffusion spectrum becomes, probably, bimodal, which was outside of what we could do. The dynamic range of our correlator, a 64-channel linear correlator, was the limit, and we hit our dynamic range. So I put all that aside, because I didn't have the instrument, and to a certain extent I never completely got back to it, but it was clear there was a regime, at least for polyacrylic acid, where if you went to too large a molecular weight and too high a concentration, something complicated happened. In any event, about '87 I published the universal scaling equation paper, which attracted some attention, and since it was a bit controversial I made a point, for viscosity in particular and also for probe diffusion data, more so for viscosity, of going through the literature result after result and showing that if you plot log eta versus log c, you see a nice smooth curve through which I could draw a line, and the nice smooth curve did not have a straight region, a power law; there weren't power law regions. It would eventually be discovered, for hydroxypropylcellulose, that there was a system that showed the straight-line region. We got into hydroxypropylcellulose for a considerably different reason, namely that you can change the temperature and thereby change the solvent quality from good to theta, and so our original interest was that we could look at the good-to-theta polymer transition, and that's straightforward to do. Ben Ware at Syracuse prodded me a bit that if there is an equation, there ought to be a derivation for it. One week I sat down, I sort of had hints on where to go, and I found the derivation. Part of it was a self-similarity argument, that is, d eta d c proportional to a constant times eta, and showing where that came from, and then the constant was proportional to the polymer size to a power. But that only gave me pure exponential behavior. However, back while I was at UCLA, in about '78, I had sat through a series of seminars with Phil Pincus, who was studying polymers and was working through the de Gennes papers, which he thought were very important, and Dan Kivelson, my postdoctoral advisor, had me get involved, which I dutifully did, though I hadn't gotten deeply involved in it. The factoid I pulled up and finally remembered was that it was known that if you took a polymer and increased its concentration, it contracted; it's not much, but it contracted, and it all did this if it was a big enough polymer, and it didn't if it was small. And if you put this into here, you got stretched exponential behavior with about the right exponent. So, irrespective of increasing or decreasing concentration, it would still come back: if you increase the concentration of the polymer, it gets smaller; it doesn't get a lot smaller, but it gets a little smaller. So we put this together, and one of the things that we did, and it was sort of the front end that led to this book, was to analyze all sorts of polymer transport properties, and you can now see them in the book chapters here. The other thing I did: I happened to run into a paper by Andy Altenberger where he discusses the positive-function renormalization group, and the point that drew my attention was that depending on where the fixed point was, you could get either a stretched exponential or a power law. My interest at that point was very strictly the stretched exponential, and therefore
in about '98 I did a renormalization group derivation that leads to the stretched exponential, using the first few terms of the hydrodynamic form as inputs, and that leads to the renormalization group derivation of the concentration dependence, long since published. I was sort of aware of viscoelastic properties, and at some point, I don't really recall all the details, I tried reducing these and I found the temporal scaling analysis, and eventually, though it was a bit later, I found the rationale that leads to it. And it really is an ansatz; I do not have a derivation. I happened to have been fortunate, when I was doing this and doing the calculations, for example viscosity versus concentration, in that I had come in from the hydrodynamic side through colloids; that's why we have a colloid chapter. I had had some exposure to biology, and somewhat after I heard Annelise Barron give a talk at a Gordon conference, I happened to run into a paper by Radko and Chrambach, that's the electrophoresis chapter, where they referenced one of mine, and I suddenly realized, gee, this is another way to do driven-motion studies. Before then, the major driven-motion studies I knew about were the ultracentrifuge studies, which are now a half century in the past. So in any event, things sort of came together because I happened to be in the right place at the right time and was able to bring things together. We are out of time, but I have given a little bit of historical background. In our next discussion, which will be next week, we will take up nonlinear viscoelastic properties, and that is it for today.
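Since the universal scaling form comes up repeatedly in this historical summary, here is a minimal, self-contained Python sketch of fitting a stretched exponential in concentration to data. The synthetic data and every parameter value below are invented purely for illustration, and the function name is mine, not a name from the book.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exponential(c, x0, a, nu):
    """Universal scaling form discussed in the lecture: X(c) = X0 * exp(-a * c**nu)."""
    return x0 * np.exp(-a * c**nu)

# synthetic illustration only -- not real probe-diffusion or viscosity data
rng = np.random.default_rng(0)
c = np.linspace(0.5, 50.0, 25)                               # concentration, arbitrary units
x_true = stretched_exponential(c, 1.0, 0.15, 0.8)            # made-up "true" parameters
x_obs = x_true * (1.0 + 0.03 * rng.standard_normal(c.size))  # add 3% noise

popt, pcov = curve_fit(stretched_exponential, c, x_obs, p0=(1.0, 0.1, 1.0))
print("fitted X0, a, nu:", popt)
```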
Lecture 25 - Viscoelasticity. George Phillies Lectures from his book "Phenomenology of Polymer Solution Dynamics"
10.5446/16224 (DOI)
Classes in polymer dynamics, based on George Phillies' book, Phenomenology of Polymer Solution Dynamics, Cambridge University Press, 2011. Today's lecture is lecture 24: viscoelasticity, the storage modulus, the loss modulus, and shear thinning behavior. I'm Professor Phillies, and today we're going to finish off our discussion of viscosity and push on to discuss viscoelasticity. Let me recall where we left off, roughly speaking. We left off in chapter 12, discussing figures 12.22 to 12.24, which are treatments of the molecular weight dependence of the viscosity of the solution. Now, there's actually a fairly considerable number of measurements of these, and they're actually fairly consistent; namely, what is found with considerable consistency in many systems is that the viscosity depends on the molecular weight as a stretched exponential in molecular weight. That's brought out quite emphatically by, for example, figure 12.24. Figure 12.24 shows measurements that actually appear in Doi and Edwards, who report them as measurements done by Onogi et al.; the measurements there are a little more complete than the measurements in the original paper, actually. Having said that, an interesting feature here is that if you go back into the literature and ask what people said about these measurements, you can find people who say: we have measured the molecular weight dependence of the viscosity and, as predicted, we see a power law dependence on the molecular weight. There actually are measurements that find power law molecular weight dependence. For example, as we discussed earlier, Tao et al.'s measurements, which push out to the melt, find close to the melt a power law dependence of viscosity on molecular weight: at volume fraction one, on a log-log plot, what are rather clearly straight lines, viscosity changing as a power law in M. And if you say we will stay fairly close to the melt and exclude points that seem to be deviating from the original straight line, you actually can find in Tao et al. M to the three behavior. However, most of the measurements you find, which are done at considerably lower concentrations, do not find power law behavior; they find stretched exponential behavior. An interesting example of this is provided, and this is also in essence stretching over to the next section, which is a discussion of topology, by the measurements of Graessley et al. Graessley and collaborators looked at linear chains, four-armed stars, and six-armed stars. If you look at the results they show as a function of concentration for the linear chain, and their linear chain has a molecular weight of 1.66 megadaltons, you find, on a log-log plot, what is very clearly a straight line; you look at it, it's clearly a straight line, and it's quite clear that you are seeing power law behavior. That is for the linear chain, the two-armed star if you like. On the other hand, for f equals four and for f equals six, very clearly on the same plots, and these are chains of about the same molecular weight, one is about a 1.95 megadalton polymer and the other is about, what was it, a 1.44 megadalton polymer, you very clearly see stretched exponential concentration dependences. These are at fixed molecular weight; you're looking at concentration dependences.
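The contrast just described can be written as formulas; the exponents are the ones quoted in the lecture, and the symbols (a, gamma, x) are mine, introduced only for compactness:

\[
\frac{\eta(M)}{\eta_{\mathrm{solvent}}} \;=\; \exp\!\big(a\,M^{\gamma}\big)
\quad \text{(stretched exponential, most solution measurements)},
\]

versus, close to the melt (for example the Tao et al. measurements),

\[
\eta(M) \;\propto\; M^{x}, \qquad x \approx 3 .
\]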
If you look at the molecular weight dependence where they had a considerable number of samples, they were looking at concentrations of about 300 gram per liter of polymer, extremely concentrated systems. And the molecular weight dependence was a straight line for the linear polymer. And was a stretched exponential, two stretched exponentials, actually, for the star polymers. As a function, taken as a function of concentration, the upper figure, the linear chain showed the power law behavior. The stars showed a stretched exponential behavior. The viscosity of the star was, for the most part, less than the viscosity of the linear polymer. However, if you look carefully at the measurements against concentration, you can see a point up here. And so if the concentration was taken to be high enough, eventually some of the star polymers, as they're obliged to do, given the function of the star polymers, are obliged to do, given the functional behavior, did show a viscosity that was higher than the viscosity of the stars, of the linear chain, I mean. However, for the most part, star polymers, comparable molecular weights, are up to fairly high concentrations, less viscous than linear chains. And at fixed concentration as a function of molecular weight, again, the stars are less viscous than the linear chain is. You can also look at results of, we'll get this erased in a second. You can also look at results of Casura and collaborators. They looked at three-arm stars. Their stars were a poly-alpha methyl styrene. They look at a series of concentrations and molecular weights. You can see the results in figure 12.26. And what they found was that, again, if you're at about the same concentration and about the same molecular weight, the viscosity of the star polymer, this is the F equal 3 star, is less than the viscosity of the linear chain. Now, why is the statement that the star polymers have a lower viscosity than the linear chain of some interest? Well, there are a number of models of polymer dynamics which assert that linear chains have extra modes of motion that are, in principle, not accessible to stars. In particular, for example, the reputation-type models say that if you have a linear polymer, it can creep through holes in a transient lattice, like the snake in a bamboo grove. But if I am a star polymer, in order to make forward progress, what I have to do is retract by fluctuation one of my arms. That's difficult to do. Push it out in a new direction, and now I can take a very modest step forwards. I mean, that result is that one would expect that it is harder for star polymers to move with respect to each other than, at least if they're very concentrated, than it is for linear chains. That does not appear to be quite what we are observing. Question? Can it also be possible that since linear molecules are linear in structure, they have better packing systems than they are for linear closer to each other? That could be an issue in the melt. However, in the solution, the amount of packing is determined by the concentration. And the answer is that we make comparison and equal concentration for linear and star. The other thing you should recall with respect to, let us pack chains into a melt. Is that I am a star polymer. I have one point where there are three chains coming together. I'm an F equals 3 star. There is a narrow region around this where there are three of my chains close together. However, if I'm a large star polymer over most of my length, I look exactly like a linear chain. 
And therefore, this is the effect. You might find interesting effects with small chains, but with large ones, you would not expect that to arise. OK. We advanced to figure a discussion around figure 12.25. And the issue around figure 12.28 and figure 12.27 is, suppose we take a polymer chain and we change the solvent quality. What happens if we change the solvent quality? Well, the expectation is that if we are in a theta solvent or close third to, which means we also have to be at the right temperature, we have a polymer chain in the solution. It would rather see itself as its neighbor than it would see something else. And therefore, if we increase the concentration, nothing terribly interesting will happen to the chain radius. I didn't say nothing at all. Because theta, you can't get perfect cancellation of effects. It's an approximation. On the other hand, if you were in a good solvent, we see what we saw in the chapter on dielectric relaxation, which is what has been used for most of these measurements. Namely, as we increase the concentration and look at the chain radius, the chain radius falls with increasing concentration. So these, however, are static properties. They're how big is the chain if we take a single snapshot? We're talking about viscosity here. Viscosity is a dynamic property. And we ask what happens to the behavior of the system if we change solvent quality and such and on. The first answer is, if we look at a series of chains of different molecular weight, the viscosity, as has been the case in most other systems, not all other systems and not at sufficiently high molecular weight concentration, rather than those systems, the concentration dependence is a stretched exponential. And what we happen, fine, as we increase the polynomial molecular weight and do our measurements, we find that the parameter alpha increases and the parameter nu falls. So there is alpha and there is nu. And both parameters are molecular weight dependent. The second thing we can do is to compare these parameters in a good solvent and in a theta solvent. And what we find is that in a theta solvent, measurements discussed here, the parameter nu is from 0.8 to 0.954. And if we go, however, take the same material over to a good solvent, the parameter nu is 0.6 to 0.73. That is, we are going in and when we change the solvent quality, when we go into a better solvent, the parameter nu falls. The model that derives this equation, in fact, predicts this behavior, we will eventually get to it in later lectures. If you look at the figures, especially 12.28, this change of nu is physically quite visible. If you look at a semi-log plot, log viscosity versus concentration, for a theta solvent, the concentration dependence of eta is fairly close to a straight line, because this is fairly close to a stretched exponential. But if you look at the measurements in the good solvents, you typically see a more pronounced curvature because you are looking at a stretched exponential rather than a pure exponential. Now, having said this, let us push ahead and consider the general features of viscosity. We've looked at lots of specific measurements. And we did, of course, find that the viscosity depends on the polymer concentration and the polymer molecular weight, not to mention the one could put in the topology and one could put in solvent quality. With respect to topology, there was a very long dispute, this goes back a considerable piece, about the viscosity of ring polymers. 
But that was mostly melts, which the book does not cover. The issue there was, well, you can try to make melts, but you have to ask, are the, you can try to make rings and put them into melts. Here is a ring, is it a nice open loop, or during your synthesis process, did it get itself all tied and knots so that it's a bolt? Under modern conditions, if you want to answer that question, you can resort to using DNAs, since you can synthesize DNAs with absolute molecular weight control. But absolute, we mean, we have something that's 180,000 base pair, meaning its molecular weight is tens of millions, and the molecular weight is, well, there are fluctuations because there's several isotopes of carbon and deuterium in nature. A carbon and deuterium hydrogen, I should say, in nature. But the molecular weights are totally exactly controlled, no fluctuation. You can also synthesize things, this is synthetic DNA, not stuff that occurs in nature, that is a star and completely mono-dispersed. You can also synthesize, it happens naturally, some viral DNAs do this, DNAs that are ring polymers that are, again, completely mono-dispersed. Electron micrographs of those rings indicate that the rings are completely open, they are tied up and knots at all. However, having said, the ring polymers are mono-dispersed, the star polymers are truly mono-dispersed, synthetic DNA comes that way, there is the significant practical issue that people have not done studies of DNA viscosity that are as extensive as the viscosity studies that have been done on synthetic polymers. OK, having said this, let us look at the question of what you find. And the first point, which I am going to emphasize significantly more than was emphasized in the book, is that eta is not a universal function of C and M. That is, if you think that you can take your viscosity measurements and yes, there will be some chemical dependent constants, and there will be things that depend on the concentration and the molecular weight, or maybe the length of strands and the molecular weight, some representation of amount of material and size of a chain. If you think that all of the chains are the same because of that, and therefore there should be a universal function that gives you viscosity as a function of concentration, you are going to be disappointed. Instead, it is quite clear from the literature that viscosity is not a universal function of C and M. Instead, there are two distinct phenomenologies. And in one of the phenomenologies, you see simply a stretched exponential, which in measurements of Tager goes up. Tager and a few other people go out more or less to melt. And in another set of systems, you see a stretched exponential up to a point, and then beyond the point, you see a power law. It is worthwhile to recall that the same phenomenology, this class, is occupied by spherical colloids. Spherical colloids show the same behavior. The slope of the power law depends on the exact material. For spherical colloids, the slope is appreciably steeper than for linear chains. And if you look at many arm star polymers, the slope creeps up and makes a transition from linear chain behavior up to very nearly heart-sphere behavior. It should be clear that any model that explains this phenomenon is subject to the constraint that it must be of work for spherical colloids. Spheres can do a number of things, but form entanglement lattices is not really one of them. And form and reptate is certainly not one of them. 
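The two distinct phenomenologies just described, written out schematically; the labels follow the "solution-like, melt-like" usage that appears later in the lecture, and the numerical ranges for nu are the ones quoted for theta versus good solvents:

Class 1 (no transition):
\[
\eta(c) \;=\; \eta_{0}\,\exp\!\big(\alpha\,c^{\nu}\big) \quad \text{over the whole measured range,}
\qquad \nu \ \text{typically between about } 0.5 \text{ and } 1,
\text{ smaller in good solvents than in theta solvents.}
\]

Class 2 (solution-like to melt-like transition at a concentration \(c_{t}\)):
\[
\eta(c) \;=\; \eta_{0}\,\exp\!\big(\alpha\,c^{\nu}\big)\ \ (c < c_{t}),
\qquad
\eta(c) \;\propto\; c^{x}\ \ (c > c_{t}),
\]

with the same qualitative two-regime behavior seen for spherical colloids, where the power-law slope is appreciably steeper.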
And so we have the issue that the viscosity behavior might not be quite what you would have expected from some of your reading of some theorists. OK. Furthermore, if you march up to viscosities above about 10 to the 8, that's eta over eta zero above about 10 to the 8, and most of those measurements are literature studies due to Dreval, though there's some nice work due to Colby, it does appear that if you get above here, instead of continuing along on the power law, there is upwards deviation. The nature of the upwards deviation is not quite certain. One might, however, propose, since we're at extremely high concentrations, that it's some sort of a jamming transition in which things actually mechanically get in each other's way, and the chains can't slide across each other because they can't move; after all, they're rigid over short distances, and they can't move sideways easily enough to get through each other. However, that is a fairly rare exception, because there are not a lot of studies that have looked at this. In particular, if you look for predictions that eta should be some c to the x, m to the y, that prediction is very generally not true in polymer solutions. There are some exceptions; there is this family. But predictions that you can simply assume scaling are not sustained. If you want to say we have scaling, you need to explain why sometimes you have scaling and sometimes you don't; otherwise, you can't just assume it. OK, so that is the sort of behavior we get. We might then ask, well, what other generalizations can we find? One answer is that for a very large family of systems, we have eta over eta zero equal to e to the alpha c to the nu, for some constant alpha. And now I'm pulling the molecular weight dependence of alpha out; we'll actually see graphs of that dependence in a few moments. But if we actually look at the measurements and ask, OK, we get this thing to fit, what do we see? Well, gamma is usually in a range around one half, and I can be more precise than that, but that's a reasonable approximation. In point of fact, oh, I should add something. As we go from theta solvents to good solvents, the parameter nu goes down, and what I will call alpha m, because it's alpha with the m dependence pulled out, alpha m times m to the gamma, what's left, what's not c to the nu, increases as we go from theta solvents to good solvents. Generalization. OK, so what behavior do we see for gamma? The answer is seen in figure 12.30, where we plot alpha, which in terms of this equation is alpha m times m to the gamma; it's the coefficient of c to the nu. And we plot alpha against polymer molecular weight. Now, if I just showed you the points, your reaction would be, gee, that was a nice piece of graph paper, Professor Phillies; at least it was very nice before you stood way back and started firing the shotgun at it. The points are all over the place. However, if you look carefully at the points, I have labeled the points by the chemical family of the polymers. So we have identified separately each set of homologous polymers, polymers that are chemically identical but of different lengths. And if you do that, you see that for each single family of homologous polymers, alpha lies very much on a straight line. The straight line has a slope; this is a log-log plot, so we say alpha goes as m to the power gamma.
And what can we say about gamma if we look at all of the measurements? That is, all of the measurements where the same lab looked at a large number of different polymer molecular weights. And other than that, things are consistent. And the answer is gamma is typically in the range 0.6 to 0.67. One can find a case where it's 0.5, or I'll call it correctly, 0.94, where it's quite large. You can find a case where it's 0.51. If you look at this, you notice there are different slopes for different materials. And therefore, it's not quite a universal. The slope is not quite a universal constant. But for a given set of samples, the power law dependence is actually fairly sharp. This result is also implicit in the systematic review of Dervol and many collaborators and all of the wonderful Russian data he compiled in one place for us. Because what Dervol says is that C eta is a good reducing variable. That is, if you take all of these measurements and you plot them as functions of C times the intrinsic viscosity rather than C, so we have eta log log again, C eta here, you find the whole series of different polymer molecular weights. But the measurements lie on the same curve. Now, in order for the experiments to lie on the same curve, when you are doing this plot, it must be the case that eta is some function of C eta, which means that all of the molecular weight dependence of the viscosity has to be built into the molecular weight dependence of the intrinsic viscosity. You can't have any other molecular weight dependence out here someplace. Because if you did, when you plotted viscosity for a series of polymers of different molecular weights, they wouldn't lie on the same line. They lie on the same line because viscosity is a function of C times the intrinsic viscosity. That statement, universal function, goes all the way up to the melt. However, it is an important fact that the intrinsic viscosity depends on m as a power. And therefore, this equation says that the viscosity is a function of, what is it a function of? C times, and there's some constants, m to the a. An example of a function that has the property that viscosity has this dependence is given by this function. Now, you would have to say, what is the function of this? This is e to the alpha m, Cm to the a to the power nu. And therefore, gamma, this gamma, is equal to a times nu, the exponent of an exponent you obtained by multiplying the exponents. And therefore, this equation and that equation are totally consistent so long as gamma, a, and nu are related like this. Nonetheless, your vol's results are entirely consistent with everything we've said. I would be delighted to say that I can draw a pretty picture of how nu depends on molecular weight or solvent quality, but that really isn't quite true. The first obstacle is that if you look at this formula for the viscosity, a to the proportional to e to the alpha C to the nu, nu is the exponent of an exponent, which means it's a bit hard to measure. Furthermore, nu doesn't change a whole lot. If we said nu is almost always in the range 1 half to 1, that's correct. And therefore, there's not a lot of range. And if there's any difficulty at all measuring nu accurately, you run into a problem that graphs are a bit scattered. There are some other parameters, however, that have been sort of hidden. Tonight, we can point at a few of them. One we can talk about, we now advance to figure 12.33, is the transition concentration C sub t. Transition concentration? 
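(Before turning to the transition concentration, here is the consistency argument just given, written out as equations; this is only the lecture's own algebra, using the power-law intrinsic viscosity it quotes.)

\[
\frac{\eta}{\eta_{0}} = f\big(c\,[\eta]\big), \qquad [\eta] \propto M^{a},
\]

is consistent with

\[
\frac{\eta}{\eta_{0}} = \exp\!\big(\bar{\alpha}\,(c\,M^{a})^{\nu}\big)
= \exp\!\big(\bar{\alpha}\,c^{\nu}\,M^{a\nu}\big),
\]

so the molecular-weight exponent of the prefactor is \(\gamma = a\,\nu\), an exponent of an exponent obtained by multiplying the exponents.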
Yes, we are now limiting ourselves to systems that show the solution-like, melt-like transition. That's the transition we actually found. And we charge up like this. And there is a concentration, I'll use it as a subscript, C sub t, at which the transition occurs. And correspondingly, there is a viscosity, eta sub t. When we discuss this for colloids, well, for spheres, it's always, there's really only one curve, because the volume fraction is the only variable. And phi t was about 0.41 or a bit higher. And the viscosity of the transition divided by the viscosity of the solvent was about 10 plus or minus 5. That's very clearly not the concentration and viscosity at which you start getting this phase effect. And the question is now, for linear polymers, how does the transition concentration behave? And the answer given in 12.33 is there a bunch of points, a bunch of points, or more or less on a straight line. And the transition concentration goes as m to the minus 0, 1.1 or so. That is, as you increase the polymer molecular weight, the concentration at the transition falls. Of course, you need a set of measurements where the same material that shows the transition has been studied for a lot of different polymer molecular weights. Otherwise, you should be hesitant about believing the fit. And the results that do this are on poly and hexalisocyanate, which we've discussed before. And you look at those, and g, you get roughly this behavior. So you might, OK, so that's the transition concentration. You might also say, well, g, that's the transition concentration. And maybe we should re-express the transition concentration in natural units. When I say a natural concentration unit for polymer solutions, what people generally mean is c times the intrinsic viscosity. Now, there are people who will say c over c star, where c star is the overlap concentration. But in fact, the overlap concentration is usually inferred from the intrinsic viscosity. Namely, c star equals some number in the range. I've seen things as small as one, as large as four in the literature, over eta. So this is a distinction, but not a difference. OK, so we look at c eta, and we will ask at what transition concentration in natural units do we see the transition? And if you go through all of the accumulated measurements, you can find transitions as high as 80. You can find things that occurred at about 35, and that is a more typical value. You can also find transition concentrations as low. For example, hydroxychropylcellulose, which Carol Streletsky and I and others studied, four. And so if you look at the transition concentration in natural units, it covers a very wide range of values. You should realize this is a significant part of the way to melt. The melt concentrations in deralsmet and all's analysis were not more than about 300. The transition concentration for hydroxychropylcellulose, the intrinsic viscosity was one over, oh, one or two grams per liter. And the transition occurred at around six grams per liter. So if we say we'll put the concentration in natural units, you don't see any tendency of the measurements to collapse onto each other. There is, however, something you can do which gets the measurements to match. Not perfectly, but actually pretty well. 
And the answer is, if you look at all of those curves where you see something like this for linear polymers, not for spheres, for the measurements where people looked at a lot of different molecular weights and you look at their graph, you realize that the crossovers occur at about the same viscosity. And in fact, it is approximately but not exactly universally true that eta t over eta is some number like two or 300. That is, there is not a concentration of which you see the crossover, but there does appear to be a universal viscosity at which the crossover sets in except in hard sphere systems, which may be different. And the crossover at viscosity occurs when eta over eta 0, the reduced viscosity is a few hundred. So you produce something that's fairly viscous, but not completely impossible to work with. Oh, yes. You remember we said there was power law behavior out here? You might legitimately ask, well, that's very nice. What is the exponent? And the answer for a fair number of systems is that x is about 3 and 1 half. However, you can find systems in which x is much larger. Well, I don't know, much larger, but somewhat larger. For example, in hydroxypropylcellulose, x is about 4.4. And therefore, there is a power law exponent, and I can even tell you what the power law exponent is. Let us skip back from the systems that do show the transition, which, well, it is universal. A universal transition is a function of the transition viscosity. It's not a universal transition viewed as a function of the concentration. And it doesn't even happen in some systems. So let us go back and let us look at the system in which we do not see a transition, which is a large number of things. Going out in some cases to very high viscosities, I seem to recall reduced viscosities up to about 10 to the 7 in a few cases in which no transition is seen. So there is the curve. And one of the things you might legitimately ask is, well, G, couldn't we manage to describe this measurement, perhaps, with a couple of power laws? And if you go through the literature, you find people who draw a power law curve up here, which is sort of tangential or goes through the points over some distance. And you find down here, typically, a linear curve. And in between, there's some transition, which is sort of talked over. As G, there's a transition that's broad. And therefore, you might propose, yes, the measurements actually show a crossover from one type of behavior to the other. But the crossover is a little hard to see because there are other things going on at the same time. So there's not a third one. There's not a clear transition the way there would be if there was a phase transition, which I don't think anyone has proposed for. Well, that isn't quite true. Almost no one has proposed for these systems. So there's a transition, but it's clearly not a phase transition. And the question is, is this a reasonable description? And the answer, I think, is that if we look at e to the alpha c to the nu behavior, the stretched exponential behavior marches out alpha and nu are independent of concentration. They're proportional to c to the 0, independent of concentration. And that's true along the whole curve. And therefore, viewed from this perspective, there is no transition. There is a single smooth curve that just chugs ahead. So far, so good. OK, I have just run us out of chapter 12. We have now come to the end of the discussion in viscosity. I am now going to advance to discuss viscoelasticity. 
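To close out the viscosity discussion, the numbers just quoted for the systems that do show a transition, collected in one place (these are the lecture's own values, restated, not new results):

\[
c_{t} \;\propto\; M^{-1.1}\ \text{(approximately)}, \qquad
c_{t}\,[\eta] \;\approx\; 4 \ \text{to} \ 80 \ \text{(no universal value)}, \qquad
\frac{\eta_{t}}{\eta_{0}} \;\approx\; \text{two to three hundred (roughly universal)},
\]

and above the transition \(\eta \propto c^{x}\) with \(x \approx 3.5\) typically, and \(x \approx 4.4\) for hydroxypropylcellulose; below the transition, and over the whole range in the systems with no transition, the stretched exponential \(\eta = \eta_{0}\exp(\alpha c^{\nu})\) with concentration-independent \(\alpha\) and \(\nu\) describes the data.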
And in order to do that, I'm going to have to draw a picture. The idealized picture is: here are two infinitely long plates. Now, that's a little dangerous, because plates aren't infinitely long, and there are several ways around this. For example, you bend the plates into circles and you have two concentric cylinders, or you have a flat plate and a rotating cone. You may say, why would you want a rotating cone? The answer is that the velocity up here, v, is omega r; that's the velocity parallel to the plate, and this distance, the distance out from the center, increases as r. And because it's a cone, the gap between the cone and the bottom plate is also proportional to r. Therefore, if I look at the velocity of the plate divided by the distance from the plate to the bottom, there is a velocity gradient here, and the velocity gradient is uniform over the gap. Now, you have to be a little careful with this, because there's going to be some question of whether things are uniform as you move this way; there are potential complications. Nonetheless, to a good approximation the velocity gradient is uniform. And I shall draw this pretty picture. The bottom plate is stationary, and the top plate is being moved back and forth. The top plate has a displacement, which I'll call d for the moment, and the displacement is some amplitude a times cosine omega t. That is, we have a displacement, and the extent of the displacement oscillates as a harmonic in time. You can do that equally with either that or that. The displacements are fairly small. Now, you might say, well, gee, could you get information out if you made the displacements large? The answer is yes, and there is an experimental method called large angle oscillatory shear; we shall reach LAOS in the next chapter, chapter 14. However, we have just reached chapter 13, which talks about linear viscoelastic behavior. OK, so we have this displacement, and in order to get this displacement, we have to apply a force on the upper plate to move it back and forth. Now, if you think about this for a moment, you realize that if the plate has inertia, you're also accelerating the plate at the same time, and you have to deal with this. But that's basically an experimental question: the inertia of the plate has to be made small enough, in some sense, that you can measure the part of the force that is due to the fact that there's a liquid in here. I'm going to call this A0, the amplitude of oscillation; we'll come back to other notation for it in a moment. The reason is that if we look at this system, if you make the plate bigger and bigger, the amount of force you need goes up, but it's basically a linear process, and therefore the interesting thing is the force per unit area. The force per unit area is known as the stress. And the force per unit area, which, by the way, is going to be oscillating as a function of time, the stress, is determined by the amount of oscillation, namely the displacement divided by the distance L between the plates. That is, if I take this whole apparatus and make it twice as big vertically, the amount of oscillation seen locally by a molecule: here's a molecule, and it starts out and has some neighbors along this line, and when we increase d to its maximum, this is oscillating, the neighbors are displaced sideways one way or the other, in an oscillating back-and-forth manner.
And the amount of local displacement is determined by the ratio of the linear displacement of the plate to the distance between the plates. And this is known as the strain. So far so good. Now, suppose we actually do this experiment. There are a couple of different ways we could do it. One thing we could do is to apply an oscillatory force up here and measure the motion. And so we have a force which is very precisely controlled and shifts things back and forth. And we ask how the plate moves. We could also have a driver and feedback electronics and we move the plate back and forth through a fixed distance. And we instrumentally determine how much force we're having to exert so that the plate actually does this. Those two experiments are equivalent. That is, they measure a series of curves where a certain amount of force per unit area, a certain stress and a certain strain are matched with each other one for one. But whether you are actually experimentally creating the stress and measuring the strain or vice versa, doesn't matter. There's one curve and the distinction there is purely how you built your machine. Okay? Now, having said all this, we now go in and we measure this question. This is assuming that it's linear. When you're basically saying that the stress, when you measure the stress, it's equivalent to measuring the strain. It's basically assuming that it's equi-relative. It seems to me the answer is no. That statement is more general than its linear response. That is, you have a stress, you have a strain. If I double the stress, what happens to the strain? Well, the strain may or may not double in the linear response it does. But however the statement is, there's a stress-strain relationship would be true whether it was linear or not. The only place where you would get into trouble with this is that you could have a hysteresis issue where if the, you apply different, for example, different strains and the stress changes someplace or vice versa, you might not be able to reach every point on the curve instrumentally, very easily, by doing one control or the other. Now this is linear response in the sense that the theoretical analysis that is done of linear viscoelasticity assumes that if I apply a series of forces at different times, the fluid has a response at later times, and I can just do linear response that is add up all of the responses of the fluid. Of course, if I've applied the force at several times, there's several time delays, and I will get the answer. And we'll get to linear in a bit. So what is the net result though? The net result is that if we actually look at the strain, the stress, the force bringing in that area, and we compare it with the strain, we find something. And what we can divide up, because this is linear, we can get force, we can divide out the A0, and we have force per unit area per unit of strain. And that force turns out to have two pieces. There is one piece that is determined by a function g prime of omega, and g prime of omega is now multiplying the displacement. Traditionally, it's sine rather than cosine. Oh, I'll write it as cosine to avoid confusion. This is cosine omega t. There is one component where the response has a response that is in phase with the displacement. So you have a displacement, and there is a restoring force. That's what it is. It grows linearly in the displacement. Here's the oscillating displacement. And then we have another component. And the other component is 90 degrees out of phase. 
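Written out in one standard convention, consistent with the in-phase and out-of-phase description above; sign and phase conventions vary between texts, and sigma here is just my symbol for the stress:

\[
\gamma(t) \;=\; \gamma_{0}\cos\omega t, \qquad \gamma_{0} = \frac{A_{0}}{L},
\]
\[
\frac{\sigma(t)}{\gamma_{0}} \;=\; G'(\omega)\,\cos\omega t \;-\; G''(\omega)\,\sin\omega t,
\]

with the G prime piece in phase with the strain and the G double prime piece ninety degrees out of phase with the strain, that is, in phase with the strain rate.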
And so there is a response that is 90 degrees out of phase to the displacement, meaning this is a force that is largest when the displacement is at zero. That may look very odd, but you have in phase and out of phase responses. And you might ask, what does this all mean? Why is there a force that's in or out of phase with the displacement? This is a liquid, after all. If you made the displacement stop and sat there after a while, the force would disappear because the liquid would just flow. That's a little more complicated than it sounds. And the answer is that the low shear viscosity is... Well, it's really a low frequency limit of g double prime of omega over omega. These two objects have names, by the way. G prime is the storage modulus, and g double prime is the loss modulus. And you might legitimately wonder, gee, that seems like a peculiar way to represent things. And I will now show something that makes much clearer from a physicist's point of view what you are looking at. I am going to multiply each of these objects by one. And of course, whenever a physicist says, I am going to multiply by one, you know it's going to be some huge object that is in fact one, but if you look hard, but not if you don't look hard. And I am going to multiply this by one in the form omega square over omega square. And I am going to apply this one by one in the form omega over omega. That's perfectly legitimate. You notice this now looks exactly like the viscosity. But now I ask you, okay, I have something that goes as omega times the displacement, and it's 90 degrees out of phase with the displacement. And I have something that goes as omega square times the displacement. What am I looking at here? Well, if you think of a harmonic oscillator, you realize that this object is the velocity, and I have a term in the force that is determined by the velocity. Gee, that's a lost term, isn't it? And here I have a term that goes as omega square times the displacement. That's the acceleration. And the thing that goes as the acceleration is a restoring force. The system looks like it has little springs in it. And in fact, there are a set of math models that I will blame on, if I recall correctly, Maxwell. And this whole picture can be modeled as, here is a system, and it has a spring. And coupled to the spring is something known as a dashpot. Dashpots were very important back when people had teletypes. The dashpot, the plating of the moving object in the teletype comes back very fast. And you don't want it to whack the far end hard because if it does, the vibration will damage things. So you have this little piston-like object, and the moving object coming this way has an arm coming out. There's a rubber disc here, that's the traditional image. And the rubber disc hits this thing, and there is a little hole here. And the, this is a piston, but it's a leaky piston. And so you compress the air and it slows things down. And then when it's done compressing the air, the air blows out the end and you can tune the opening because there's a little sliding arm. And so eventually, after a very small fraction of a second, the moving object on the teletype is brought to an end here. It's brought to a stop at exactly the right location and with no unpleasant sharp acceleration that would damage anything. Well, yeah, it does that. And it does it via high-quality Victorian engineering. There are no, essentially no moving parts in this other than the object you're trying to stop. 
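The bookkeeping in these remarks, reading a viscosity off as the low-frequency limit of G''(omega)/omega, is easy to check numerically on a single spring-plus-dashpot element. The sketch below uses the standard single Maxwell element expressions for G' and G'', which may or may not be exactly the arrangement drawn on the board; the modulus G and relaxation time tau are invented illustrative numbers, not values from the lecture.

```python
import numpy as np

# Single Maxwell element: a spring of modulus G in series with a dashpot,
# relaxation time tau, zero-shear viscosity eta_0 = G * tau.
# Standard oscillatory-shear results for this element:
#   G'(w)  = G * (w*tau)**2 / (1 + (w*tau)**2)    (storage modulus, in phase)
#   G''(w) = G * (w*tau)    / (1 + (w*tau)**2)    (loss modulus, out of phase)
G, tau = 1.0e3, 0.1          # illustrative values only

def G_prime(w):
    return G * (w * tau) ** 2 / (1.0 + (w * tau) ** 2)

def G_double_prime(w):
    return G * (w * tau) / (1.0 + (w * tau) ** 2)

w = np.logspace(-3, 3, 7)    # frequencies in rad/s

# The "multiply by one" replotting: G''/w is viscosity-like and G'/w**2 is
# spring-like; both flatten out at low frequency instead of going to zero.
eta_like = G_double_prime(w) / w
spring_like = G_prime(w) / w ** 2

print("G''/w at the lowest frequency :", eta_like[0], "(compare G*tau =", G * tau, ")")
print("G'/w^2 at the lowest frequency:", spring_like[0], "(compare G*tau^2 =", G * tau ** 2, ")")
```

The same division is what turns the loss modulus curve into something that, at low frequency, reads directly as the viscosity discussed in the previous chapter.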
There's no active feedback system, no electronics, it's all done mechanically. And that is a dashpot. And you can fill the dashpot in this picture with some liquid. So you have this hanging object in the liquid. And the hanging object in the liquid, it's just a liquid with a viscosity and no frequency dependence. And so you, or it has a frequency dependence. And this object represents the storage modulus and this piece represents the loss modulus. So that is a picture of the viscoelastic parameters, the linear in terms of a storage modulus and a loss modulus. And you notice that I have reparameterized them starting here and then doing this for consistency, basically. And once I've done this, the loss modulus looks exactly like the viscosity. There is, however, and I'm just going to show the qualitative picture first. Here is, however, a consequence of reparameterizing things. If I plot G prime or G double prime themselves, you remember they contain something, I divide out an omega which I can do safely. And therefore G prime and G double prime both go to zero. This is a little hard to do on a log log plot. They both go down and at low frequencies, this is more typically a log plot, so you never really get to zero, there is a linear behavior which is omega to a power. And then there are things that happen out here and perhaps the curves cross. And if I keep making omega larger and larger, eventually the curves turn up again. I am not going to draw this in any detail, you can find it in standard texts. The important issue is that if you talk about G prime or G double prime themselves, you find that they have curves that look sort of like this and there is a region where things are reasonably flat and there is a low frequency region and a high frequency region I could introduce terminology. We'll get to that next time. The main issue is though, there is a shape for the curves. If I instead say I am going to go in and I am going to plot G prime over omega square or G double prime over omega, if I do that I have divided out the leading slope. And if I divide out the leading slope, that says that at low frequency the behavior that I am looking at is frequency independent, which is sort of what you would naturally expect. And these curves all look, as you will see from the book, about the same, namely there is a flat region, there is a rollover, and then down here there is at least at first a power wall. And then maybe something interesting happens at larger frequencies. And the question we ask, given that I have drawn these pictures is, well, how do I derive or rationalize a curve that explains this? And so I will give an onsatz based on a theoretical, the theoretical model we have not yet discussed in detail, and a renormalization group argument. And so we are going to actually invoke, which we have not actually done before, the hydrodynamic scaling model, which is my model. And we are going to invoke a renormalization group argument. The renormalization group argument being invoked is much more general and qualitative than many renormalization group arguments, but it will be clear what it is. So we are going to start out, and here is a straight line representing concentration. And we will imagine that the viscosity is plotted perpendicular to the blackboard out towards you. And what we find is the viscosity increases as I go to higher concentration. And eventually, at least in some systems, we reach the transition concentration C sub t. 
And down here is a solution-like regime, and in the solution-like regime, you see stretched exponential in concentration behavior. And up here is the melt-like regime M, where you see C to the X behavior. That is the pure phenomenology. Now what I say is, I have used something called the Altenberger-Dahler positive function renormalization group method. And I applied this in the context of the hydrodynamic scaling model. And I said I can calculate a certain leading behavior at lower concentrations. And the Altenberger-Dahler renormalization group method lets me turn the lower concentration calculation into the stretched exponential. And we are going to assume that is true. Now I am going to put another little bit on top of it. The PFRG, the Altenberger-Dahler positive function renormalization group method, actually predicts two sorts of functional behaviors. The functional behaviors it predicts depend on where what are called the fixed points of the renormalization group are located. In particular, if you have a fixed point at zero, then near zero, you predict stretched exponential behavior, as is found. On the other hand, out here someplace, if you have a fixed point that is way out there, and that is the dominant fixed point; there can be several fixed points, but as we go along here, there is always a dominant fixed point. If the dominant fixed point is out here, you would get power law behavior, as is observed. Next trick. The viscosity is the low frequency limit of G double prime of omega over omega. The viscosity is a function of concentration, and so is the dynamic loss modulus. And therefore, when we say we are looking along this curve at viscosity, I could also say, I'm going to put another axis on here. I'm going to insert the vertical axis I've skipped over. I'm going to call the vertical axis omega, frequency. And I'm going to say, viscosity is what we measure along the omega, more or less, maybe not quite, zero line. So we measure at low frequencies, and we get the viscosity. This is the zero frequency limit. However, it's the zero frequency limit, and while I wrote it as eta, I could just as well have written it as G double prime of omega, and, by the way, concentration, over omega. And therefore, I now have a two variable function, and I have used the renormalization group argument to work along at omega equals zero. Okay? Now comes the somewhat bold part. I will start here. Yes. And I will march sideways. And at first I'm in a regime where this fixed point at zero is presumably still there and dominant. And then at some point, I move sideways enough, and these other fixed points, about which I know very little, become dominant. And therefore, I have moved from a solution-like regime to a melt-like regime. However, I am marching along at C fixed with omega as the variable, because I'm moving sideways along this graph. And I am looking at something that is really only a function of omega. If this picture is correct, and there is a great deal of extrapolation in this, it's an ansatz, not a derivation. At lower frequencies, this fixed point is dominant, and therefore I should have a stretched exponential in frequency. However, eventually I cross this line, and once I am across this line, I am in a region where the melt-like fixed points out there are dominant, and I should have omega to a power behavior. This doesn't really calculate what the power is at all.
And in order to do the extrapolation even at low frequency, I would need something that gives me: viscosity is eta zero, plus, presumably, some intrinsic viscosity times eta zero times C, plus some beta omega. That is, I would need some way to generate a model that gives me the low frequency, the frequency dependence, so I could move off zero at all. And after I have moved off zero at all, I can then use the Altenberger-Dahler renormalization group to say what happens with frequency. However, I have not found, I have been working on other things, truthfully, I have not yet found that frequency dependence, yes, and therefore the assertion is that you find this linear step and you can then fill in the low frequency behavior. That has not yet been done, and the crossover and the fixed points out there have not been done even for concentration. Okay? So that is the rationale, and you notice what it predicts. It predicts stretched exponentials in frequency at lower frequencies, and it predicts power laws in frequency at higher frequencies. However, there are some interesting, if you look carefully, there are some little bits I have skipped around. And one bit I have skipped around is what it says about the frequency dependence way out here. That is, if you are way out there, you are in the melt-like regime all the way, and therefore you see simply power-law-in-frequency behavior. Well, you can actually point at data that looks like this, and the question is, are you really going to see that, or is it the case that there is a very low frequency, stretched-exponential-in-frequency regime? And corresponding to that, is it possible that this curve actually bends over very close to zero frequency, so that if you get down to sufficiently low frequencies, the c-to-the-x behavior would disappear? I don't have an answer to that. The other point is that if we go up to very high frequencies, is it going to be the case that we have gotten into the omega-to-the-x regime at very low concentrations, or does this curve bend over so that at very, very low concentrations, where we can just barely see particle interactions at all, we are still in the concentration-to-a-stretched-exponential regime? That is, the model does not really handle this, does not discuss this; the model sort of dodges around that; it covers this large area. Now, you could say, of course, that, gee, this picture explains why some systems show the solution-like, melt-like transition, and some do not. Namely, depending on exactly where the fixed points out here are, it might be the case that in some of these systems, you chug along the concentration axis all the way to the melt. This curve does not go to infinity; it stops at the melt. And even at the melt, the detailed numerical parameters going into the calculation are such that you can get all the way out here, and the fixed point at zero is still dominant. But there are other systems, and all you need are differences in chemically dependent parameters, such that these fixed points become dominant at some concentration, and out here you see a power law. And therefore, one universal physical model plus chemically dependent parameters that determine which fixed point is dominant predicts both behaviors. Well, that's fine for the hydrodynamic scaling model, which actually allows you to have two behaviors in one general model. It is not at all so good for models that assume scaling, because they assume that you always get scaling out here, which it seems that you do not. Okay, that is the Ansatz.
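Because the argument is geometrical, a picture in the concentration-frequency plane, it may help to see its structure as a tiny schematic. The boundary curve below is invented purely for illustration; nothing in the ansatz specifies its shape, it only asserts that crossing it changes which fixed point, and therefore which functional form, dominates.

```python
# Schematic of the two-variable picture drawn on the board: a point in the
# (concentration, frequency) plane is assigned to the solution-like regime
# (fixed point at zero dominant, stretched-exponential forms) or the melt-like
# regime (distant fixed point dominant, power-law forms), depending on which
# side of a boundary curve it falls.  The boundary here is made up.
def regime(c, omega, c_t=3.0, omega_t=10.0):
    """Toy classifier; the ansatz walks horizontally (fixed c, increasing
    omega) instead of vertically (omega near zero, increasing c)."""
    if c / c_t + omega / omega_t < 1.0:     # inside the invented boundary
        return "solution-like (stretched exponential expected)"
    return "melt-like (power law expected)"

print(regime(1.0, 0.1))    # low c, low frequency: a viscosity measurement, solution-like
print(regime(5.0, 0.1))    # high c, low frequency: melt-like viscosity (c to the x)
print(regime(1.0, 50.0))   # low c, high frequency: melt-like in frequency (omega to a power)
```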
And having given you the Anzatz, we have now reached experiment. We have also reached approximately the end of the hour, and therefore we have reached a natural point to stop. Therefore, I am going to stop. So this has been a lecture finishing off my treatment of viscosity, and advancing to my treatment of the hydrodynamic scaling model, and its Anzatz for predicting linear viscoelasticity. I am George Phillies. This is the end of the lecture.
Lecture 24 - Linear viscoelasticity. George Phillies lectures on polymer dynamics based on his book "Phenomenology of Polymer Solution Dynamics".
10.5446/16223 (DOI)
Classes in Polymer Dynamics. Based on George Phillies' book, Phenomenology of Polymer Solution Dynamics, Cambridge University Press, 2011. And today, this lecture is lecture 23, Viscosity, Introduction to Viscoelasticity, the Temporal Scaling Ansatz. I'm Professor Phillies, and today we're going to finish off our discussion of viscosity and push on to discuss viscoelasticity. Let me recall where we left off, roughly speaking. We left off in chapter 12 discussing figures 12.22 to 12.24, which are treatments of the molecular weight dependence of the viscosity of the solution. Now, there's actually a fairly considerable number of measurements of these, and they're actually fairly consistent, namely what they find, what is found with considerable consistency in many systems, is that the viscosity depends on the molecular weight as a stretched exponential in molecular weight. And that's brought out quite emphatically by, for example, figure 12.24. 12.24 are measurements that actually appear in Doi and Edwards, who report them as measurements done by Onogi et al. The measurements are a little more complete than the measurements in the original paper, actually. Having said that, an interesting feature here is that if you go back into the literature and ask what people said about these measurements, you can find people who say, we have measured the molecular weight dependence of the viscosity, and, as predicted, we see a power law dependence on the molecular weight. There actually are measurements that find power law molecular weight dependence. For example, as we discussed earlier, the Pow et al. measurements, which push out to the melt, find close to the melt a power law dependence of viscosity on molecular weight, at volume fraction one, on a log-log plot, what are rather clearly straight lines, viscosity changing as a power law in M, and if you actually say, we'll stay fairly close to the melt and we will exclude points that seem to be deviating from the original straight line, you actually could find, in Pow et al., M to the 3.4. However, most of the measurements you find, which are done at considerably lower concentrations, do not find power law behavior; they find stretched exponential behavior. An interesting example of this is provided, and this is also, in essence, stretching over to the next section, which is a discussion of topology, by the measurements of Graessley et al., and Graessley et al. look at polymers: they look at linear chains, and they look at four-armed stars, and they look at six-armed stars. And if you look at the results they show as a function of concentration, for the linear chain, and their linear chain is a molecular weight of 1.66 megadaltons, as a function of concentration, on a log-log plot, they find what is very clearly a straight line; you look at it, it's clearly a straight line, and it's quite clear that you are seeing power law behavior. This is for the two-armed star, the linear chain, a two-armed star. On the other hand, for F equals 4, and for F equals 6, very clearly on the same plots, these are chains of about the same molecular weight. This is about a 1.95 megadalton polymer, and the other is about a, what was it, it was about a 1.44 megadalton polymer. For the other two, you very clearly see stretched exponential concentration dependences; these are at fixed molecular weight, you're looking at concentration dependences.
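The contrast being drawn, stretched exponential in molecular weight versus power law in molecular weight, amounts to comparing two fitting forms. A minimal numerical sketch of the two forms, with made-up parameter values that are fitting parameters rather than numbers from the figures:

```python
import numpy as np

# Two candidate forms for the molecular weight dependence of the viscosity
# at fixed concentration.  All prefactors and exponents below are invented
# for illustration.
def eta_stretched(M, eta0=1.0, alpha=0.02, gamma=0.5):
    """Stretched exponential in M:  eta = eta0 * exp(alpha * M**gamma)."""
    return eta0 * np.exp(alpha * M**gamma)

def eta_powerlaw(M, k=1e-12, x=3.4):
    """Power law in M:  eta = k * M**x  (the form expected close to the melt)."""
    return k * M**x

for M in np.logspace(4, 6, 5):      # molecular weights
    print(f"M = {M:9.3g}   stretched: {eta_stretched(M):10.3g}   power law: {eta_powerlaw(M):10.3g}")
```

On a log-log plot, the second form is a straight line and the first is not, which is exactly the distinction being read off figures 12.22 to 12.24.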
If you look at the molecular weight dependence, where they had a considerable number of samples, they were looking at concentrations of about 300 grams per liter of polymer, extremely concentrated systems, and the molecular weight dependence was a straight line for the linear polymer, and was a stretched exponential, two stretched exponentials actually, for the star polymers. Taken as a function of concentration, the upper figure, the linear chain showed the power law behavior, the stars showed stretched exponential behavior, and the viscosity of the star was, for the most part, less than the viscosity of the linear polymer. However, if you look carefully at the measurements against concentration, you can see a point up here, and so if the concentration was taken to be high enough, eventually some of the star polymers, as they're obliged to do given the functional behavior, did show a viscosity that was higher than the viscosity of the stars, of the linear chain, I mean. However, for the most part, star polymers of comparable molecular weights are, up to fairly high concentrations, less viscous than linear chains, and at fixed concentration as a function of molecular weight, again, the stars are less viscous than the linear chain is. Okay. You can also look at results of... We'll get this erased in a second. You can also look at results of Kajiura and collaborators. They looked at three-arm stars. Their stars were poly(alpha-methylstyrene). They look at a series of concentrations and molecular weights. You can see the results in figure 12.26. And what they found was that, again, if you're at about the same concentration and about the same molecular weight, the viscosity of the star polymer, this is the F equals 3 star, is less than the viscosity of the linear chain. Now, why is the statement that the star polymers have a lower viscosity than the linear chain of some interest? Well, there are a number of models of polymer dynamics which assert that linear chains have extra modes of motion that are, in principle, not accessible to stars. In particular, for example, the reptation-type models say that if you have a linear polymer, it can creep through holes in a transient lattice like the snake in a bamboo grove. But if I am a star polymer, in order to make forward progress, what I have to do is retract by fluctuation one of my arms. That's difficult to do. I can push it out in a new direction, and now I can take a very modest step forwards. The net result is that one would expect that it is harder for star polymers to move with respect to each other, at least if they're very concentrated, than it is for linear chains. That does not appear to be quite what we are observing. Question? Can it also be possible that since linear molecules are a linear structure, they have better packing, and therefore they adhere closer to each other? That could be an issue in the melt. However, in the solution, the amount of packing is determined by the concentration. And the answer is that, in this comparison, we make the comparison at equal concentration for linear and star. The other thing you should recall with respect to, let us pack chains into a melt, is that if I am a star polymer, I have one point where there are three chains coming together. There, if I'm an F equals three star, there is a narrow region around this where there are three of my chains close together. However, if I'm a large star polymer, over most of my length I look exactly like a linear chain.
And therefore, this is the sort of effect. You might find interesting effects with small chains, but with large ones, you would not expect that to arise. Thank you. Okay. So we advanced figure, oh, discussion around figure 12.25. And the issue around figure 12.28 and figure 12.27 is suppose we take a polymer chain and we change the solvent quality. What happens if we change the solvent quality? Well, the expectation is that if we are in a theta solvent or close there too, which means we also have to be at the right temperature, we have a polymer chain in the solution. It would rather see itself as its neighbor than it would see something else. And therefore, if we increase the concentration, nothing terribly interesting will happen to the chain radius. I didn't say nothing at all. Because theta, you can't get perfect cancellation of effects. It's an approximation. On the other hand, if you were in a good solvent, we see what we saw in the chapter on dielectric relaxation, which is what has been used for most of these measurements. Namely, as we increase the concentration and look at the chain radius, the chain radius falls with increasing concentration. So these, however, are static properties. They're how big is the chain if we take a single snapshot? We're talking about viscosity here. Viscosity is a dynamic property. And we ask what happens to the behavior of the system if we change solvent quality and such and on. The first answer is, if we look at a series of chains of different molecular weight, the viscosity, as has been the case in most other systems, not all other systems and not at sufficiently high molecular weight, but concentration rather in those systems, the concentration dependence is a stretched exponential. And what we happen, fine, as we increase the polymer molecular weight and do our measurements, we find that the parameter alpha increases and the parameter nu falls. So there is alpha and there is nu. And both parameters are molecular weight dependent. The second thing we can do is to compare these parameters in a good solvent and in a theta solvent. And what we find is that in a theta solvent as discussed here, the parameter nu is from 0.8 to 0.954. And if we go, however, take the same material over to a good solvent, the parameter nu is 0.6 to 0.73. That is, we are going in and when we change the solvent quality, when we go into a better solvent, the parameter nu falls. The model that derives this equation, in fact, predicts this behavior, we will eventually get to it in later lectures. If you look at the figures, especially 12.28, this change of nu is physically quite visible. If you look at a semi-log plot, log viscosity versus concentration, for a theta solvent, the concentration dependence of eta is fairly close to a straight line because this is fairly close to a stretched exponential. But if you look at the measurements in the good solvents, you typically see a more pronounced curvature because you are looking at a stretched exponential rather than a pure exponential. Now, having said this, let us push ahead and consider the general features of viscosity. We have looked at lots of specific measurements and we did, of course, find that the viscosity depends on the polymer concentration and the polymer molecular weight, not to mention the one could put in the topology and one could put in the solvent quality. 
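Collecting the solvent-quality trend just described in one line, with the numerical ranges quoted above:

$$ \frac{\eta}{\eta_0}=\exp\!\left(\alpha\,c^{\nu}\right), \qquad \nu\approx 0.8\ \text{to}\ 0.95\ (\theta\ \text{solvent}), \qquad \nu\approx 0.6\ \text{to}\ 0.73\ (\text{good solvent}), $$

with alpha increasing and nu decreasing as the polymer molecular weight is raised, and with the better solvent giving the smaller nu, hence the more pronounced curvature on a semi-log plot.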
With respect to topology, there was a very long dispute, this goes back a considerable period, about the viscosity of ring polymers, but that was mostly melts, which the book does not cover. The issue there was, well, you can try to make rings and put them into melts, and you have to ask: here is a ring, is it a nice open loop, or, during your synthesis process, did it get itself all tied in knots so that it's a ball? Under modern conditions, if you want to answer that question, you can resort to using DNAs, since you can synthesize DNAs with absolute molecular weight control. By absolute I mean we have something that's 180,000 base pairs, meaning its molecular weight is tens of millions, and the molecular weight is, well, there are fluctuations because there are several isotopes of carbon and hydrogen, deuterium, I should say, in nature, but the molecular weights are totally exactly controlled, no fluctuation. You can also synthesize things, this is synthetic DNA, not stuff that occurs in nature, that is a star and completely monodisperse. You can also synthesize, it happens naturally, some viral DNAs do this, DNAs that are ring polymers that are again completely monodisperse. Electron micrographs of those rings indicate that the rings are completely open; they are not tied up in knots at all. However, having said the ring polymers are monodisperse, the star polymers are truly monodisperse, synthetic DNA comes that way, there is the significant practical issue that people have not done studies of DNA viscosity that are as extensive as the viscosity studies that have been done on synthetic polymers. Okay, having said this, let us look at the question of what we find. And the first point, which I am going to emphasize significantly more than was emphasized in the book, is that eta is not a universal function of C and M. That is, if you think that you can take your viscosity measurements, and yes, there will be some chemically dependent constants, and there will be things that depend on the concentration and the molecular weight, or maybe the length of strands and the molecular weight, some representation of amount of material and size of the chain; if you think that all of the chains are the same because of that, and therefore there should be a universal function that gives you viscosity as a function of concentration, you are going to be disappointed. Instead, it is quite clear from the literature that viscosity is not a universal function of C and M. Instead, there are two distinct phenomenologies, and in one of the phenomenologies, you see simply a stretched exponential, which in measurements of Tager goes up; Tager and a few other people go out more or less to the melt. And in another set of systems, you see a stretched exponential up to a point, and then beyond the point, you see a power law. It is worthwhile to recall that the same phenomenological class is occupied by spherical colloids. Spherical colloids show the same behavior. The slope of the power law depends on the exact material. For spherical colloids, the slope is appreciably steeper than for linear chains. And if you look at many-arm star polymers, the slope creeps up and makes a transition from linear chain behavior up to very nearly hard-sphere behavior. It should be clear that any model that explains this phenomenon is subject to the constraint that it must also work for spherical colloids.
Spheres can do a number of things, but forming entanglement lattices is not really one of them, and reptating is certainly not one of them. And so we have the issue that the viscosity behavior might not be quite what you would have expected from some of your readings from some theorists. Okay. Furthermore, if you march up to viscosities above about 10 to the 8, that's eta over eta-zero above about 10 to the 8, and most of those measurements are literature studies due to Dreval, though there's some nice work due to Colby. However, if you look at that, it does appear that if you get above here, instead of continuing along on the power law, there is an upwards deviation. The nature of the upwards deviation is not quite certain. One might, however, for example, propose, since we're at extremely high concentrations, that it's some sort of a jamming transition, in which things actually mechanically get in each other's way, and the chains can't slide across each other because they can't move; after all, they're rigid over short distances, and they can't move sideways easily enough to get through each other. However, that is a fairly specific, that's a fairly rare exception, because there are not a lot of studies that have looked at this. In particular, if you look at predictions that eta should be some c to the x, m to the y, that prediction is very generally not true in polymer solutions. There are some exceptions, there is this family, but predictions that you can simply assume scaling are not sustained. If you want to say we have scaling, you need to explain why sometimes you have scaling and sometimes you don't. Otherwise, you can't just assume. Okay. So that is the sort of behavior we get. We might then ask, well, what other generalizations can we find? And one answer is, for a very large family of polymers, we have eta over eta zero is e to the, some constant, alpha c to the nu, and now I'm pulling out the molecular weight dependence of alpha. We'll actually see graphs of that dependence in a few moments. But if we actually look at the measurements and ask, okay, we get this thing to fit, what do we see? Well, gamma is usually in a range around a half. And I can be more precise than that, but that's a reasonable approximation. In point of fact, if we say, oh, I should add something. We go from theta to good solvents. We go from theta solvents to good solvents. The parameter nu goes down, and what I will call alpha m, because it's alpha with m pulled out, what I will call alpha m, m to the gamma, what's left, what's not c to the nu, increases as we go from theta to good solvents. Generalization. Okay, so what behavior do we see for gamma? And the answer is seen in figure 12.30, where we plot alpha, which in terms of this equation is alpha m, m to the gamma. It's the coefficient of c. And we plot alpha against polymer molecular weight. Now, if I just showed you the points, your reaction would be, gee, that was a nice piece of graph paper, Professor Phillies. At least it was very nice before you stood way back and started firing the shotgun at it, and the points are all over the place. However, if you look carefully at the points, I have labeled the points by the chemical family of the polymer. So we have identified separately each set of homologous polymers, polymers that are chemically identical with different lengths. And if you do that, you see that for each single family of homologous polymers, alpha lies very much on a straight line. The straight line has a slope.
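The fitting form just described can be collected into one function, with the molecular weight dependence of the prefactor pulled out as alpha = alpha_M * M**gamma. This is only a sketch of the functional form; alpha_M, gamma, and nu below are placeholder numbers, not fitted values from figure 12.30.

```python
import numpy as np

def reduced_viscosity(c, M, alpha_M=0.002, gamma=0.65, nu=0.75):
    """eta/eta0 = exp(alpha_M * M**gamma * c**nu); all parameters are placeholders."""
    alpha = alpha_M * M ** gamma      # the coefficient plotted against M in figure 12.30
    return np.exp(alpha * c ** nu)

# For a homologous series, gamma is read off as the slope of log(alpha) versus
# log(M); here it is put in by hand, so the printout just shows how alpha, and
# with it the whole concentration curve, shifts as the molecular weight goes up.
for M in (1e4, 1e5, 1e6):
    alpha = 0.002 * M ** 0.65
    print(f"M = {M:8.0e}   alpha = {alpha:8.3g}   eta/eta0 at c = 1: {reduced_viscosity(1.0, M):10.3g}")
```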
So this is a log-log plot, so we say alpha goes as m to the power gamma. And what can we say about gamma if we look at all of the measurements? That is, all of the measurements where, in the same way, I've looked at a large number of different polymer molecular weights, and other than that, things are consistent. And the answer is that gamma is typically in the range 0.6 to 0.67. One can find a case where it's 0.5, if I recall correctly, 0.94, where it's quite large, and you can find a case where it's 0.51. If you look at this, you notice there are different slopes for different materials, and therefore it's not quite a universal constant. But for a given set of samples, the power law dependence is actually fairly sharp. This result is also implicit in the systematic review of Dreval and many collaborators, and all of the wonderful Russian data he compiled in one place for us, because what Dreval says is that C eta is a good reducing variable. That is, if you take all of these measurements and you plot them as functions of C times the intrinsic viscosity rather than C, so we have eta, log-log again, C eta here, we find, for a whole series of different polymer molecular weights, that the measurements lie on the same curve. Now, in order for the measurements to lie on the same curve when you are doing this plot, it must be the case that eta is some function of C eta, which means that all of the molecular weight dependence of the viscosity has to be built into the molecular weight dependence of the intrinsic viscosity. You can't have any other molecular weight dependence out here someplace, because if you did, when you plotted viscosity for a series of polymers of different molecular weights, they wouldn't lie on the same line. They lie on the same line because viscosity is a function of C times the intrinsic viscosity. That statement, universal function, goes all the way up to the melt. However, it is an important fact that the intrinsic viscosity depends on M as a power, and therefore this equation says that the viscosity is a function of, what is it a function of? C times, and there's some constants, M to the a. An example of a function that has the property that viscosity has this dependence is given by this function. Now, you would have to say this is e to the alpha-M, C M to the a, to the power of nu, and therefore gamma, this gamma, is equal to a times nu, the exponent of an exponent; you obtain it by multiplying the exponents. And therefore this equation and that equation are totally consistent so long as gamma, a, and nu are related like this. Nonetheless, Dreval's results are entirely consistent with everything we've said. I would be delighted to say that I can draw a pretty picture of how nu depends on molecular weight and solvent quality, but that really isn't quite true. The first obstacle is that if you look at this formula for the viscosity, eta proportional to e to the alpha C to the nu, nu is the exponent of an exponent, which means it's a bit hard to measure. Nu doesn't change a whole lot; if we said nu is almost always in the range one-half to 1, that's correct. And therefore there's not a lot of range, and if there's any difficulty at all measuring nu accurately, you run into a problem that the graphs are a bit scattered. There are some other parameters, however, that have been sort of hidden, and we can point at a few of them. One we can talk about, we now advance to figure 12.33, is the transition concentration C sub t.
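The consistency argument just made fits in one line; with the intrinsic viscosity scaling as a power of molecular weight, the reduced-variable statement and the stretched exponential together fix the relation between the exponents:

$$ \eta=f\!\left(c[\eta]\right),\quad [\eta]\propto M^{a} \;\;\Longrightarrow\;\; \frac{\eta}{\eta_0}=\exp\!\left[\bar\alpha\,\big(c\,M^{a}\big)^{\nu}\right]=\exp\!\left[\bar\alpha\,M^{a\nu}\,c^{\nu}\right], \qquad \gamma=a\,\nu , $$

where the molecular-weight-independent prefactor written here as alpha-bar plays the role of the alpha_M defined a moment ago.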
Transition concentration: yes, we are now limiting ourselves to systems that show the solution-like, melt-like transition; that's the transition we actually found. And we charge up like this, and there is a concentration, I'll use t as a subscript, C sub t, at which the transition occurs, and correspondingly there is a viscosity, eta sub t. When we discussed this for colloids, well, for spheres, there's really only one curve, because the volume fraction is the only variable, and phi t was about 0.41 or a bit higher, and the viscosity at the transition divided by the viscosity of the solvent was about 10 plus or minus 5. That's very clearly not the concentration and viscosity at which you start getting this phase effect. And the question is now, for linear polymers, how does the transition concentration behave? And the answer given in 12.33 is there are a bunch of points, a bunch of points are more or less on a straight line, and the transition concentration goes as M to the minus, oh, 1.1 or so. And the answer is, as you increase the polymer molecular weight, the concentration at the transition falls. Of course, you need a set of measurements where the same material that shows the transition has been studied for a lot of different polymer molecular weights; otherwise you should be hesitant about believing the fit. The results that do this are on poly-n-hexyl isocyanate, which we've discussed before, and you look at those and, gee, you get roughly this behavior. So you might, okay, so that's the transition concentration. You might also say, well, gee, that's the transition concentration, and maybe we should re-express the transition concentration in natural units. When I say a natural concentration unit for polymer solutions, what people generally mean is c times the intrinsic viscosity. Now, there are people who will say c over c star, where c star is the overlap concentration, but in fact the overlap concentration is usually inferred from the intrinsic viscosity, namely c star equals some number, I've seen things as small as one and as large as four in the literature, over the intrinsic viscosity. So this is a distinction but not a difference. Okay, so we look at c eta and we will ask, at what transition concentration in natural units do we see the transition? And if you go through all of the accumulated measurements, you can find transitions as high as 80. You can find things that occurred at about 35, and that is a more typical value. You can also find transition concentrations that are much lower, for example in hydroxypropylcellulose, which Carol Streletsky and I and others studied. And so if you look at the transition concentration in natural units, it covers a very wide range of values. You should realize this is a significant part of the way to the melt. The melt concentrations in Dreval's meta-analysis were not more than about 300. For the transition in hydroxypropylcellulose, the intrinsic viscosity was one over, oh, one or two grams per liter, and the transition occurred at around six grams per liter. So if we say we'll put the concentration in natural units, you don't see any tendency of the measurements to collapse onto each other. There is, however, something you can do which gets the measurements to match. Not perfectly, but actually pretty well.
And the answer is, if you look at all of those curves where you see something like this for linear polymers, not for spheres, for the measurements where people looked at a lot of different molecular weights, and you look at their graph, you realize that the crossovers occur at about the same viscosity. And in fact, it is approximately but not exactly universally true that eta t over eta zero is some number like two or three hundred. That is, there is not a concentration at which you see the crossover, but there does appear to be a universal viscosity at which the crossover sets in, except in hard sphere systems, which may be different. And the crossover in viscosity occurs when eta over eta zero, the reduced viscosity, is a few hundred. So you produce something that's fairly viscous but not completely impossible to work with. Oh yes, you remember we said there was power law behavior out here? You might legitimately ask, well, that's very nice, what is the exponent? And the answer for a fair number of systems is that x is about three and a half. However, you can find systems in which x is much larger, well, I don't know, much larger, but somewhat larger. For example, in hydroxypropylcellulose, x is about four point four. And therefore there is a power law exponent, and I can even tell you what the power law exponent is. Let us step back from the systems that do show the transition, which, well, it is universal, a universal transition, viewed as a function of the transition viscosity. It's not a universal transition viewed as a function of the concentration. And it doesn't even happen in some systems. So let us go back and let us look at the systems in which we do not see a transition, which is a large number of them, going out in some cases to very high viscosities; I seem to recall reduced viscosities up to about ten to the seven in a few cases in which no transition is seen. Okay, so there is the curve, and one of the things you might legitimately ask is, well, gee, couldn't we manage to describe this measurement perhaps with a couple of power laws? And if you go through the literature, you find people who draw a power law curve up here, which is sort of tangential or goes through the points over some distance. And you find down here typically a linear curve. And in between there is some transition which is sort of talked over. That is, gee, there is a transition that is broad. And therefore you might propose, yes, the measurements actually show a crossover from one type of behavior to the other, but the crossover is a little hard to see because there are other things going on at the same time. So there is not a clear transition, the way there would be if there was a phase transition, which I don't think anyone has proposed, well, that isn't quite true, almost no one has proposed, for these systems. So there is a transition, but it's clearly not a phase transition. And the question is, is this a reasonable description? And the answer, I think, is that if we look at the e to the alpha c to the nu behavior, the stretched exponential behavior, as it marches out, alpha and nu are independent of concentration. They're proportional to c to the zero, independent of concentration. And that's true along the whole curve. And therefore, viewed from this perspective, there is no transition. There is a single smooth curve that just chugs ahead. So far so good. Okay. I have just run us out of chapter 12. We have now come to the end of the discussion of viscosity. I am now going to advance to discuss viscoelasticity.
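One consequence of a roughly universal crossover viscosity, rather than a universal crossover concentration, is that the transition concentration follows by inverting the stretched exponential. A minimal sketch, with invented alpha and nu and the few-hundred crossover level just quoted:

```python
import numpy as np

def c_transition(alpha, nu, eta_t_over_eta0=300.0):
    """Concentration at which eta/eta0 = exp(alpha * c**nu) reaches the
    crossover level eta_t_over_eta0 (taken here as a few hundred)."""
    return (np.log(eta_t_over_eta0) / alpha) ** (1.0 / nu)

# Invented (alpha, nu) pairs standing in for different polymers; systems with
# larger alpha (for example, higher molecular weight) cross over at lower c,
# consistent with the transition concentration falling as M increases.
for alpha, nu in [(0.5, 1.0), (1.0, 0.75), (2.0, 0.5)]:
    print(f"alpha = {alpha:4.1f}, nu = {nu:4.2f}  ->  c_t ~ {c_transition(alpha, nu):6.2f}")
```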
And in order to do that, I'm going to have to draw a picture. The idealized picture is here are two infinitely long plates. Now, that's a little dangerous because plates aren't infinitely long. And there are several ways around this, for example, bend plates in circles. And you have two concentric cylinders, or you have a flat plate, and you have a rotating cone. You may say, why would you want a rotating cone? And the answer is that the velocity up here, v, is omega r. That's velocity, oh, parallel to the plate. This distance, distance out from the center, increases as r. And therefore, this distance, because it's a cone, this distance is proportional to r. And therefore, if I look at the velocity of the plate over the distance from the plate to the bottom, and there is a velocity gradient here, and the velocity gradient is uniform over the distance. Now, you have to be a little careful with this, because there's going to be some question whether things are uniform as you move this way. There are potential complications. Nonetheless, we shall persevere, and I shall draw this pretty picture. And the bottom plate is stationary, and the top plate is being moved back and forth. And the top plate has a displacement, which I'll call d for the moment, and the displacement is some amplitude A cosine omega t. That is, we have a displacement, and the extent of the displacement oscillates as a harmonic in time. So you can do that equally with either that or that. The displacements are fairly small. Now, you might say, well, gee, could you get information out if you made the displacements large? And the answer is yes, and there is an object, an experimental method called large angle oscillatory shear. And we shall reach Laos in the next chapter, chapter 14. However, we have just reached chapter 13, which talks about linear viscoelastic behavior. OK, so we have this displacement, and in order to get this displacement, we have to apply a force on the upper plate to move it back and forth. Now, if you think about this for a moment, you realize that if the plate has inertia, you're also accelerating the plate at the same time, and you have to deal with this, but that's basically an experimental question. And the inertia of the plate has to be made small enough, in some sense, that you can measure the part of the force that is due to the fact there's a liquid in here. I'm going to call this A0, the amplitude of oscillation. We'll come back to other notation for it in a moment. And the reason is that if we look at this system, if you make the plate bigger and bigger, the amount of force you need goes up. But it's basically a linear process, and therefore the interesting thing is the force per unit area. And the force per unit area is known as the stress. And the first force per unit area, which by the way is going to be oscillating as a function of time, the stress is determined by the amount of oscillation, the displacement divided by the distance L between the plates. That is, if I take this whole apparatus and make it twice as big vertically, the amount of oscillation seen locally by a molecule, here's a molecule, and it starts out and has some neighbors along this line, and when we increase D to its maximum, this is oscillating, the neighbors are displaced sideways one way or the other in an back-and-forth manner, and the amount of local displacement is determined by the ratio of the linear displacement of the plate to the distance between the plates. And this is known as the strain. So far so good. 
Now, suppose we actually do this experiment. There are a couple of different ways we could do it. One thing we could do is to apply an oscillatory force up here and measure the motion. And so we have a force which is very precisely controlled and shifts things back and forth, and we ask how the plate moves. We could also have a driver and feedback electronics, and we move the plate back and forth through a fixed distance, and we instrumentally determine how much force we're having to exert so that the plate actually does this. Those two experiments are equivalent. That is, they measure a series of curves where a certain amount of force per unit area, a certain stress, and a certain strain are matched with each other one for one. But whether you are actually experimentally creating the stress and measuring the strain or vice versa, doesn't matter. There's one curve, and the distinction there is purely how you built your machine. Okay? Now, having said all this, we now go in and we measure this question. This is assuming that it's linear response. When you're basically saying that the stress, when you measure the stress, it's equivalent to measuring the strain. It's basically assuming that it's linear response. It seems to me the answer is no. That statement is more general than its linear response. That is, you have a stress, you have a strain. If I double the stress, what happens to the strain? Well, the strain may or may not double in the linear response, it does. But however, the statement is there's a stress-strain relationship would be true whether it was linear or not. The only place where you would get into trouble with this is that you could have a hysteresis issue where if the... you apply different, for example, different strains and the stress changes some place or vice versa, you might not be able to reach every point on the curve instrumentally very easily by doing one controller or the other. Now, this is linear response in the sense that the theoretical analysis that is done of linear viscoelasticity assumes that if I apply a series of forces at different times, the fluid has a response at later times, and I can just do linear response that is add up all of the responses of the fluid. Of course, if I've applied the force at several times, there's several time delays, and I will get the answer. And we'll get to linear in a bit. So what is the net result, though? The net result is that if we actually look at the strain, the stress, the force bringing in that area, and we compare it with the strain, we find something. And what we can divide up, because this is linear, we can get force, we can divide out the A0, and we have force per unit area per unit of strain. And that force turns out to have two pieces. There is one piece that is determined by a function g' of omega, and g' of omega is now multiplying the displacement. Traditionally, it's sine rather than cosine. Oh, I'll write it as cosine to avoid confusion. This is cosine omega t. There is one component where the response has a response that is in phase with the displacement. So you have a displacement, and there is a restoring force. That's what it is. It grows linearly in the displacement. That is the oscillating displacement. And then we have another component. Another component is 90 degrees out of phase, and so there is a response that is 90 degrees out of phase to the displacement, meaning this is a force that is largest when the displacement is at zero. 
Now that may look very odd, but you have in phase and out of phase responses. And you might ask, what does this all mean? Why is there a force that's in or out of phase with the displacement? This is a liquid, after all. If you made the displacement stop and sat there after a while, the force would disappear because the liquid would just flow. That's a little more complicated than it sounds. And the answer is that the low shear viscosity is......is actually a low frequency limit of G double prime of omega over omega. These two objects have names, by the way. G prime is the storage modulus, and G double prime is the loss modulus. And I might legitimately wonder, gee, that seems like a peculiar way to represent things. And I will now show something that makes much clearer from a physicist's point of view what you are looking at. I am going to multiply each of these objects by one. And of course, whenever a physicist says, I am going to multiply by one, you know it's going to be some huge object that is in fact one, but not if you don't look hard. And I am going to multiply this by one in the form omega square over omega square. And I am going to apply this one by one in the form omega over omega. That's perfectly legitimate. You notice this now looks exactly like the viscosity. But now I ask you, okay, I have something that goes as omega times the displacement, and it's 90 degrees out of phase with the displacement. And I have something that goes as omega square times the displacement. What am I looking at here? Well, if you think of a harmonic oscillator, you realize that this object is the velocity, and I have a term in the force that is determined by the velocity. Gee, that's a lost term, isn't it? And here I have a term that goes as omega square times the displacement. That's the acceleration. And the thing that goes as the acceleration is a restoring force. The system looks like it has little springs in it. And in fact, there are a set of math models blamed on, if I recall correctly, Maxwell. And this whole picture can be modeled as, here is a system, and it has a spring. And coupled to the spring is something known as a dashpot. Dashpots were very important back when people had teletypes. The dashpot, the plating of the moving object in the teletype comes back very fast, and you don't want it to whack the far end hard because if it does, the vibration will damage things. So you have this little piston-like object, and the moving object coming this way has an arm coming out. There's a rubber disk here, that's the traditional image. And the rubber disk hits this thing, and there is a little hole here. And this is a piston, but it's a leaky piston, and so you compress the air and it slows things down. And then when it's done compressing the air, the air blows out the end, and you contume the opening because there's a little sliding arm. And so eventually, after a very small fraction of a second, the moving object on the teletype is brought to an end here. It's brought to a stop at exactly the right location, and with no unpleasant, sharp acceleration that would damage anything. Well, yeah, it does that, and it does it via high-quality Victorian engineering. There are essentially no moving parts in this other than the object you're trying to stop. There's no active feedback system, no electronics, it's all done mechanically. And that is a dashpot. And you can fill the dashpot in this picture with some liquid. 
So you have this hanging object in the liquid, and the hanging object in the liquid, it's just a liquid with a viscosity and no frequency dependence. And so you, or it has a frequency dependence, and this object represents the storage modulus, and this piece represents the loss modulus. So that is a picture of the viscoelastic parameters, the linear, in terms of a storage modulus and a loss modulus. And you notice that I have reparameterized them starting here and then doing this for consistency, basically. And once I've done this, the loss modulus looks exactly like the viscosity. There is, however, and I'm just going to show the qualitative picture first. Here's, however, a consequence of reparameterizing things. If I plot G prime or G double prime themselves, you remember they contain something, I divide out an omega, which I can do safely. And therefore G prime and G double prime both go to zero. So it's a little hard to do on a log-log plot, but they both go down. And at low frequencies, this is more typically a log plot, so you never really get to zero, there is a linear behavior, which is omega to the power. And then there are things that happen out here, and perhaps the curves cross. So if you're making omega larger and larger, eventually the curves turn up again. I am not going to draw this in any detail, you can find it in standard texts. The important issue is that if you talk about G prime or G double prime itself, themselves, you find that they have curves that look sort of like this. And there's a region where things are reasonably flat, and there's a low frequency region and a high frequency region I could introduce terminology. We'll get to that next time. The main issue is, though, there's a shape for the curves. If I instead say I am going to go in, and I am going to plot G prime over omega square, or G double prime over omega, if I do that, I have divided out the leading slope. And if I divide out the leading slope, that says that at low frequency, the behavior that I am looking at is frequency independent, which is sort of what you would naturally expect. And these curves all look, as you will see from the book, about the same, namely there's a flat region, there's a rollover, and then down here there is at least at first a power wall. And then maybe something interesting happens at larger frequencies. The question we ask, given that I have drawn these pictures, is, well, how do I derive or rationalize a curve that explains this? And so I will give an Anzatz based on the theoretical model we have not yet discussed in detail, and a renormalization group argument. We are going to actually invoke, which we have not actually done before, the hydrodynamic scaling model, which is my model, and we are going to invoke a renormalization group argument. The renormalization group argument being invoked is much more general and qualitative than many renormalization group arguments, but it will be clear what it is. So we are going to start out, and here is a straight line representing concentration. And we will imagine that the viscosity is plotted perpendicular to the blackboard out towards you. And what we find is the viscosity increases as I go to higher concentration, and eventually, at least in some systems, we reach the transition concentration C sub t. And down here is a solution-like regime, and in the solution-like regime, you see stretched exponential in concentration behavior. And up here is the melt-like regime M, where you see C to the X behavior. 
That is the pure phenomenology. Now what I say is I have used something called the Altenberger dollar positive function renormalization group method, and I applied this in the context of the hydrodynamic scaling model. And I said I can calculate a certain lead behavior at lower concentrations, and the Altenberger dollar renormalization group method lets me turn the lower concentration calculation into the stretched exponential, and we are going to assume that is true. Now I am going to put another little bit on top of it. The PFRG, the Altenberger dollar positive function renormalization group method, actually predicts two sorts of functional behaviors. The functional behaviors it predicts depend on where what are called the fixed points of the renormalization group are located. In particular, if you have a fixed point at zero, then near zero you predict stretched exponential behavior as is found. On the other hand, out here someplace, if you have a fixed point that is way out there, and that is the dominant fixed point. There can be several fixed points, but as we go along here, there is always a dominant fixed point. If the dominant fixed point is out here, you would get power law behavior as is observed. Next trick. The viscosity is the low frequency limit of double prime omega over omega. Viscosity is a function of concentration, and so is the dynamic loss modulus. Therefore, when we say we are looking along this curve at viscosity, I could also say, I am going to put another axis on here. I am going to insert the vertical axis I have skipped over. I am going to call the vertical axis omega frequency. I am going to say viscosity is what we measure along the omega more or less, maybe not quite, zero one. We measure at low frequencies, and we get the viscosity, zero frequency limit. However, it is the zero frequency limit, and while I wrote it as eta, I could just as well have written it as Gw prime of omega, and by the way, concentration, over omega. And therefore, I now have a two variable function, and I have used the renormalization group argument to work along at omega is zero. Okay? Now comes the somewhat bold part. I will start here. Yes. And I will march sideways. And at first I am in a regime where this fixed point at zero is presumably still there and dominant. And then at some point, I move sideways enough, and these other fixed points, about which I know very little, become dominant. And therefore, I have moved from a solution-like regime to a melt-like regime. However, I am marching along at C fixed with omega as the variable, because I am moving sideways along this graph. And I am looking at something that is really only a function of omega. If this picture is correct, and there is a great deal of extrapolation in this, it's an anzotz, not a derivation. At lower frequencies, this fixed point is dominant, and therefore I should have a stretched exponential in frequency. However, eventually I cross this line, and once I am across this line, I am in a region where the melt-like fixed points out there are dominant, and I should have omega to a power behavior. This doesn't really calculate what the power is at all. And in order to do the extrapolation even at low frequency, I would need something that gives me viscosity is 8 to 0 plus presumably, some intrinsic viscosity, 8 and a lot C, plus some beta omega. That is, I would need some way to generate a model that gives me the low frequency, so I could move off 0 at all. 
And after I have moved off zero at all, I can then use the Altenberger-Dahler renormalization group to say what happens with frequency. However, I have not found, I have been working on other things truthfully, I have not yet found that frequency term, and therefore the assertion is that you would find this linear term and could then fill in the low frequency behavior. That has not yet been done, and the crossover and the fixed points out there have not been worked out even for concentration. Okay? So that is the rationale, and you notice what it predicts. It predicts stretched exponentials in frequency at lower frequencies, and it predicts power laws in frequency at higher frequencies. However, there are some interesting, if you look carefully, some little bits I have skipped around. And one bit I have skipped around is what it says about frequency dependence way out here. That is, if you are way out there, you're in the melt-like regime all the way, and therefore you see simply power law in frequency behavior. Well, you can actually point at data that looks like this, and the question is, are you really going to see that, or is it the case that there is a very low frequency, stretched exponential in frequency regime? And corresponding to that, is it possible that this curve actually bends over very close to zero frequency, so that if you get down to sufficiently low frequencies, the C to the X behavior would disappear? I don't have an answer to that. The other point is that if we go up to very high frequencies, is it going to be the case that we have gotten into the omega to the X regime at very low concentrations, or does this curve bend over so that at very, very low concentrations, where we can just barely see particle interactions at all, we are still in the concentration stretched exponential regime? That is, the model does not discuss this; the model sort of dodges around that, and it covers this large area. Now, you could say, of course, gee, this picture explains why some systems show the solution-like, melt-like transition, and some do not. Namely, depending on exactly where the fixed points out here are, it might be the case that in some of these systems, you chug along the concentration axis all the way to the melt. This curve does not go to infinity, it stops at the melt. And even at the melt, the detailed numerical parameters going into the calculation are such that you can get all the way out here, and the fixed point at zero is still dominant. But there are other systems, and all you need are differences in chemically dependent parameters, such that these fixed points become dominant at some concentration, and out here you see a power law. And therefore, one universal physical model plus chemically dependent parameters that determine which fixed point is dominant predicts both behaviors. Well, that's fine for the hydrodynamic scaling model, which actually allows you to have two behaviors in one general model. It is not at all so good for models that assume scaling, because they assume that you always get scaling out here, which it seems that you do not. Okay, that is the Ansatz. And having given you the Ansatz, we have now reached experiment. We have also reached approximately the end of the hour, and therefore we have reached a natural point to stop. Therefore, I am going to stop.
So this has been a lecture finishing off my treatment of viscosity and advancing to my treatment of the hydrodynamic scaling model and its Ansatz for predicting linear viscoelasticity. I'm George Phillies. This is the end of the lecture.
Lecture 23 - Viscosity, viscoelasticity. George Phillies lectures from his book "Phenomenology of Polymer Solution Dynamics".
10.5446/16221 (DOI)
Classes in polymer dynamics based on George Phillies' book, Phenomenology of Polymer Solution Dynamics, Cambridge University Press, 2011. And today, this lecture is lecture 21, Polyelectrolyte Slow Mode, Thermal Diffusion and Soret Coefficients. I'm Professor Phillies, and this is the continuation of the series of lectures on polymer dynamics and their phenomenology. Today, we're going to be continuing and probably concluding our discussion of chapter 11 on dynamic light scattering and the relaxation of concentration fluctuations in polymer solutions. We go back first to the discussion of the neutral polymer slow mode, and I'm going to bring out a few particular features that we didn't entirely discuss last time. The first issue is that if you look at polymers in solution, and there is, for example, work on this, there are a couple of things you can do. One thing you can do is to look at the first cumulant of the spectrum, which is a light scattering intensity weighted average of all of the modes. And another thing you can do is to do a mode decomposition. And what is found, if we look, is that at low concentration there is a mode, and its relaxation rate, this is gamma, the relaxation rate, increases with increasing concentration. And then at some concentration you start to see a slow mode; we're not on the same scale. The important issue with the drawing is that the gamma of the slow mode decreases as you increase the polymer concentration. So you have two modes, and they both show this behavior. The other important feature of this, we're saying there are modes, is that the modes are both found to show a Q squared dependence. The significance of Q squared dependence is that it corresponds to diffusion. That is, you have something where there is a diffusion current which is proportional to a diffusion coefficient and a concentration gradient. And then you have a continuity equation, dc/dt equals minus del dot J, the divergence of the current. And if you stick the diffusion current in here, you get dc/dt goes as del dot D grad c. Now, if you approach this by saying we will take a spatial Fourier transform, equivalently we will look at the relaxation of a spatial cosine wave, each of the grads gets replaced by an iQ, and you have dc/dt proportional to minus D Q squared c. And that is a mark of normal, conventional diffusion, namely that the modes have a Q squared dependence. If you however go into systems that show a slow mode, there are a variety of ways of breaking down how many modes there are and what you're seeing. There's some question of how many parameters you can really pull out of a light scattering spectrum. And I've presented simulational evidence that a reasonable number is, oh, six or eight, not 20; nonetheless there are alternative approaches for doing this. And for example, I note the work of Wyn Brown, who of course was looking at light scattering spectra that had very wide ranges of decay times, meaning it's easier to pull out more parameters, and who found in the slow mode a mixture of Q squared dependent modes and Q to the zero, that is, wave vector independent, modes. How can you get something that's Q to the zero? You're looking at some internal relaxation, and the particles do not have to move to change how much light they scatter. So there's no particle motion involved. Instead, the particles, for example, change their ability to scatter light in some manner, which is not at all specified, and they're just sitting there. So this is seen the same at all scattering vectors.
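The statement that a Q squared dependence is the mark of ordinary diffusion can be checked directly. The following is a minimal finite-difference sketch of my own, with arbitrary values of D and of the mode index: it evolves a cosine-wave concentration fluctuation under dc/dt = D d2c/dx2 and compares the surviving amplitude with exp(-D q^2 t).

```python
import numpy as np

# Relaxation of a single spatial Fourier mode under the diffusion equation.
D, L, q_mode = 1.0e-2, 1.0, 3          # diffusivity, box length, mode index (all arbitrary)
q = 2.0 * np.pi * q_mode / L

N = 200
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N
c = np.cos(q * x)                       # initial cosine-wave fluctuation about the mean

dt = 0.2 * dx * dx / D                  # stable explicit time step
t, t_end = 0.0, 2.0 / (D * q * q)       # run for roughly two decay times
while t < t_end:
    lap = (np.roll(c, 1) - 2.0 * c + np.roll(c, -1)) / dx ** 2   # periodic Laplacian
    c = c + dt * D * lap
    t += dt

amplitude = 2.0 * np.abs(np.fft.rfft(c))[q_mode] / N
print("numerical mode amplitude:", amplitude)
print("exp(-D q^2 t)           :", np.exp(-D * q * q * t))
```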
So there's another way to push beyond this, which was discussed when we talked about probe diffusion in hydroxypropylcellulose, and that is to take spectra, and if you don't think that the modes are well characterized, you try to fit the spectra to something. One choice is to fit the modes to sums of a few cumulant series, which as far as I know is perfectly legitimate, but that has never been explored very much. Another alternative, though, is to fit the dynamic structure factor to something of the form A e to the minus theta t to the beta, plus A-slow e to the minus theta-slow t to the beta-slow. Printers hate you if you give them a form like that, and so you fit to a sum of stretched exponentials in time. The point of fitting to stretched exponentials in time is that stretched exponentials in time appear to describe the time dependence well, accommodate well to things that really aren't simple exponentials, and only use a few parameters. However, based on the comparison with concentration stretched exponentials, where you can pull out a derivation based on renormalization group arguments, there's no reason to suppose that those aren't the fundamental forms. So that is the sort of thing you have. Now, if you then go in and ask what sort of behavior you have, there are a few slightly odd things that you sometimes find. For example, there have been found slow modes in which the slow mode goes as q squared up to a point, then rolls over and is basically independent of q. So at short distances you're seeing something that's q independent, some sort of structural whatever, and at small q, and small q corresponds to large distances, you're seeing something diffusive. There is also, one perhaps should note, an odd note due to Nicolai et al. They look at mode relaxation times, and they find the slowest mode, and it has a relaxation rate. And then they go and look at mechanical relaxations. That is, you take the liquid, you apply an oscillating shear and vary the frequency, and there are internal relaxations that contribute one way or another to what you're seeing, but those have a characteristic time. And if you measure the characteristic mechanical time, it corresponds exactly, or at least approximately, to the slowest time you see in the optical spectrum. However, there is a complication. The complication is that if you do this in a theta solvent and you take up the temperature, the optical mode disappears. The mechanical mode is still there. So apparently the mechanical mode was coupled to something that scattered light, and whatever the coupling was, the coupling may not have disappeared, but the light scattering did. However, the mechanical slow mode is still there. Now, one thing you might do is to ask: well, you have a fast mode which gets faster and faster as you increase the polymer concentration. You have a slow mode which, when it appears, slows down as you increase the polymer concentration. And you reasonably ask, how is this to be interpreted? What are we seeing? An indication is obtained by crossing over and looking at polyelectrolytes. It has been known for a rather long time that polyelectrolytes show a slow mode. And the issue with the polyelectrolyte slow mode was asking what it was or was not. Some of the same debates that took place with the neutral polymer slow mode were present.
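As an illustration of the fitting strategy just described, and not of any particular data set, here is a sketch that generates a synthetic two-mode correlation function and fits it to a sum of two stretched exponentials in time with scipy; all amplitudes, rates, and stretching exponents are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_stretched(t, a1, th1, b1, a2, th2, b2):
    """Sum of two stretched exponentials in time."""
    return a1 * np.exp(-th1 * t ** b1) + a2 * np.exp(-th2 * t ** b2)

t = np.logspace(-6, 1, 200)                       # synthetic delay times, seconds
true = (0.7, 2.0e3, 0.9, 0.3, 5.0, 0.6)           # invented "fast" and "slow" parameters
rng = np.random.default_rng(0)
g1 = two_stretched(t, *true) + 1.0e-3 * rng.standard_normal(t.size)

p0 = (0.6, 1.5e3, 0.8, 0.4, 3.0, 0.7)             # rough starting guess
popt, _ = curve_fit(two_stretched, t, g1, p0=p0, maxfev=20000)
print("fitted parameters:", np.round(popt, 3))
```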
And the issue is that if you go back to, oh, 1980, digital correlator technology was really at or beyond the edge of what it could handle. I do recall when we were working at Michigan with polyacrylic acid, probe diffusion in polyacrylic acid, you hit a point at which there was a viscosity transition of the sort we'll be discussing. The diffusion coefficient of the probes decoupled more and more from the viscosity of the solution. And there were issues with the scattering spectra of the probes. And since what we had at the time was a fairly limited 128 channel linear correlator, it really appeared as though you were getting additional slow modes, but we knew we couldn't get at it with the technology of the time. So we sort of cut off where we were going about the time the slow mode cut in. OK. So most of this text does not treat polyelectrolytes, which are a very complicated problem. The reason we are discussing polyelectrolytes here is that we have a specific set of experiments due to Sedlak, which clarify what the polyelectrolyte slow mode is, and do so in a way which makes reasonably clear that similar sorts of things are reasonably interpreted as happening for the neutral polymer slow modes. Sedlak worked with 50 and 710 kilodalton sodium polystyrene sulfonate. A polystyrene sulfonate is a polystyrene that has been chemically modified, and out at the edge you have a group that can be neutralized, and at this point the polystyrene sulfonate is a polyelectrolyte, and this makes it water soluble. And so what Sedlak did was to work with these, and the first thing he did was to say, well, we'll dissolve the powder, it typically comes as a powder, and we will look at the spectrum, and since there's some question of whether we're in equilibrium or not, we will simply keep looking at the same spectrum. And he did this out to a one-year time scale. It just requires patience. And what he observed was that S of Q, which is the light scattering intensity versus angle, which gives you the distribution of sizes of whatever they are in solution, was independent of time, proportional to time to the zero. And therefore the distribution of cluster sizes was not changing. However, as time went on, the scattering intensity fell, and the diffusion coefficient increased. Now that's a little peculiar, and the question is how can the diffusion coefficient be increasing and the intensity falling if the size distribution isn't changing? The following picture appears, which is consistent with everything else you're going to hear, and it appears to explain this. We start out, and there are some number of fairly dense clusters, and the fairly dense clusters will occasionally have a single polymer strand sticking out. This is not yet an equilibrium arrangement. Over a year, what happens is that these very exposed chains, which are not very well attached to whatever it is, fall off. The number of chains in a cluster decreases because chains get out of the cluster and do not come back into it. Because the cluster has less matter in it, it scatters less light. Because the cluster is more porous, because you've taken parts out, it has a higher diffusion coefficient. And this picture, from the initial cluster to the final cluster, explains all of the experimental measurements.
Now that doesn't prove it's right, and one of the questions you ask is, well, is it possible you're just looking at the fact you didn't really dissolve things completely, and if you had waited to a 10-year or a century timescale, is it possible that you would have found better data? And indeed, on glass problems, there are several Dutch groups that have set up samples that need to be watched on long timescales, and the working timescale of the experiment, which will not be completed by the current people, is out to a century. Of course, they have to be very careful with their samples. So having asked, is this equilibrium or not, Sedlak tried another set of experiments. Now let me back off a step in a historical note. The polyelectrolyte slow mode, the early studies, should probably all be credited to Schurr, who very recently retired from Washington, and his collaborators. And the feature that is observed is that if you have a polyelectrolyte at high salt, there is no slow mode. But if you take the polyelectrolyte down to low salt, very little dissolved salt, you see the slow mode. And the fact that it's salt sensitive shows that it's a polyelectrolyte effect in some sense. The fact that you see odd effects for polyelectrolytes in solution in the absence of background ions is a general issue, and low ionic strength polyelectrolytes are an even more complicated problem. Nonetheless, what Sedlak did, which was in line with this, was to say, we will take sodium polystyrene sulfonate at high salt, and we will pass it through an 0.05 micron filter. 0.05 micron is about the finest filter you can find commercially to filter water-soluble whatever. Getting anything through it is a feat of great patience, but he did it. And having done all this, he found, high salt, very well filtered, there was absolutely no slow mode. Now, when Sedlak said no slow mode, his experiments did one thing that a lot of others do not. That is, there are a lot of experiments that will report, effectively, a ratio of the intensities of the fast and slow modes. What Sedlak did was to use light scattering standards to calibrate, and he was therefore able to measure the absolute intensity of the fast mode and the slow mode separately. And so he could actually say how much fast mode there was. The reason this is of interest is revealed by what happens when you take these samples and dialyze to remove salt, or equivalently, if you just make samples at different salt concentrations, namely, you do dialysis, which extracts the salt from solution, and after you have done this, you find the slow mode. Now, the question is, why are you seeing the slow mode? And there are, in fact, two sorts of explanations, one of which is ruled out by Sedlak's experiments. One notion relies on the statement: I have a polymer solution, a polyelectrolyte solution. I pull the salt out. I measure the intensity of the scattering, or the intensity of the observed fast mode, as I take the salt out. And what I find is that as I pull the salt out, the intensity plummets. The particles repel each other, it's much harder for them to form concentration fluctuations, and so the concentration fluctuations are small, and therefore the amount of light scattering is small. Now, one thing that could happen, if the intensity of the fast mode is falling rapidly, is that there actually was a slow mode here all along. However, at high salt, the slow mode was so weak that it could not be seen due to the scattering by the fast mode.
As you pull the salt out, though, the intensity of the slow mode is perhaps independent of salt concentration, and therefore the slow mode rises up out of the deeps like a rock left behind on a beach as the tide recedes. Well, that's very clever. However, you can rule that out if you do absolute intensity measurements, because if you know the absolute intensity of the light here, you can work out how intense the slow mode would have been here, you know how intense the fast mode is here, and you can simply ask, if there was actually this much slow mode present, would it have been hidden or not, and the answer is no, it would not have been hidden, it would have been quite visible. And so what Sedlak showed by measuring scattering intensity is that quite clearly the slow mode actually does appear when you pull the salt out. Now, the other thing that Sedlak did was to sit there and look at the scattering due to the slow mode as time went on, and what he found was that S of Q, scattering versus angle, was fixed. That is, you had some distribution of objects in solution that were contributing to the slow mode, and they were large enough to be comparable in size with a light wavelength, and their size distribution did not change if you let the sample sit for very long times. On the other hand, if you let things sit, the intensity of the slow mode rose, this is the one-year experiment again, and the diffusion coefficient fell. And if you looked at intensity of the slow mode versus time, you could start with, let's dissolve things in pure water, and the intensity of the slow mode fell as time went on. You could start with no clusters, let's dissolve things in pure water, let's pull the salt out, and the intensity changed the other way, and the two attempted to converge. There is a reasonable interpretation of this observation, that you are looking at equilibrium clusters. If you have them and their innards are too concentrated, the innards empty out; if you have them and their innards are too dilute, they pull in more chains, and they tend towards an equilibrium size. And therefore, the reasonable interpretation is that you are looking at equilibrium domains in solution, and you can approach the domains either from a side where you have them initially too concentrated, or a side where they are initially too dilute, and they converge to the same point. We can also note two other sets of experiments. One set is due to Kong et al., and this is the Russo group in Louisiana, and what they did was to say, well, let us look at this accusation that the domains are due to problems with dissolving the powder. We will synthesize our polymers from scratch, and we will never take them out of solution. They were made in solution, they have always been in solution, there is no dissolution step in their history to cause any difficulty. The second experiment, quite different, is due to Tanito and Pryold, and they did, in essence, optical microscopy, and they looked at systems, among other things, systems that show domains. Well, they look at the systems that show domains microscopically, and they can actually visualize things that are about the right size. They are not very distinct, because after all, these are solution structures, they are not solid bodies, and the solution structures, whatever they are, are about, if I said they were about a half a micron or a bit less, that is about it.
So they are quite large, they are much larger than an individual polymer chain, and you can actually see them. Maybe you can't see them very well, but you can actually see them. Oh, last experiment, Sedlak. Sedlak started out again, and this time he started out with polystyrene sulfonate that was non-neutralized, so that, well, non-neutralized is not precisely correct, because if you have any acid group in water, organic acid, whatever, it ionizes, auto-ionizes to some extent, so it has some charge on it, but mostly it's not charged. And what he then did was to say, we will now add sodium hydroxide, and we will neutralize the polymer, meaning we will pull off the protons from the acidic groups, and the polymer will now become extremely heavily charged. And what was found was, as you add NaOH, the intensity of the fast mode drops dramatically, and the intensity of the slow mode keeps climbing. That is, you can approach this along one more axis, where the solvent basically doesn't change. The polymer is dissolved in the solution, and all you are doing is changing the charge on the polymer molecule, and you see the same behavior this other way. Okay, so we have seen all of these alternatives. Question? So both of these experiments show that what we are observing is the equilibrium structure you mentioned. Yes, the experiments all converge to agree that you are looking at an equilibrium structure in solution, a structure that is considerably larger than a single polymer molecule. And the various properties, some of which I have skipped over, of this picture agree completely with the slow mode seen for neutral polymers. Namely, for neutral polymers, both modes are Q squared dependent. The fast mode, which corresponds nearby to chains interacting with each other as single chains, has a D that increases as you increase the concentration because the chains repel each other. The slow mode, the vitrified region, whatever it is, acts like a diffusing large object, and it behaves as though you were doing probe diffusion, and therefore as you increase the polymer concentration, the probes are slowed down. Okay, so you might reasonably ask, how does this compare with what we know about glasses or things where you might see this phenomenon? And there is an interesting analogy which I will pursue for a few minutes. And the interesting analogy is with the Kivelson glass model. That is Dan Kivelson, mostly. He was one of my postdoctoral supervisors. He's since passed away. The issue is as follows. We'll plot this versus temperature. We look at the behavior of a liquid versus temperature, and we look at the viscosity. And if we look at the viscosity versus temperature as we cool something off, the simple behavior is you cool it off, you cool it off, you get to the melting point, and at the melting point you go from a liquid to a solid, a crystal, and, gee, there's no more viscosity. The thing just sits there. That's simple freezing. However, many substances, if you take them and cool them off, you cross the melting point, and you can just keep on going. And the viscosity goes up and up. Now there are some substances in which you cool and cool, and you eventually hit a lower limit, below which you cannot have a liquid. And at the lower limit what happens is that the likelihood of forming crystallites in solution goes up very fast with decreasing temperature. And the material turns into a crystalline solid.
But there are other liquids where you cool things off and cool things off and cool things off, and in the end you get an amorphous solid. But it really does seem to be solid. It's a glass. The question of the nature of the glass transition is very controversial and very complicated. And it's mostly beyond what we're going to talk about. There are, however, a few other peculiar features of the glass transition. One of which is: you cool the liquid off, you now heat it up again, you heat it up to out here someplace, and you see excess light scattering relative to the amount of light scattering you expected in solution. And the excess light scattering stays around for very long times. It wasn't here originally, but once you've run the liquid down and back up again, you get this excess light scattering. And the question is how are we to interpret all of these different phenomena? What Kivelson proposed was: you cool the liquid off, and at some temperature it starts forming clusters. He talks about, and his collaborators talk about, clusters that are icosahedral packings of atoms. And the clusters are thermodynamically stable. However, the clusters cannot lead to crystallization. Why not? Because icosahedra aren't space-filling. If you get 20-sided dice and try packing them together, you cannot do it; you can pack cubes and make nice crystals, but you cannot persuade icosahedra to pack, because it's not a space-filling geometry in three-dimensional space. And as a result, the icosahedral objects try to grow, but as they grow, because the packing is not space-filling, they have to distort, and there's strain energy coming in. And at some size, you have a frustration limit, and the clusters cannot get any bigger. That's the front part of the model. Now, the fact you make more and more of these clusters down here sort of explains why the viscosity goes up. The viscosity is going up for the same reason that the viscosity of a suspension of icosahedra would be higher than the viscosity of liquid water. Namely, there are these little unbending things in solution, and they get in each other's way. Now we come to the truly brilliantly creative contribution. And the creatively brilliant contribution is the statement: these structures are not space-filling, they can't give you the crystal, but they're thermodynamically stable. That is, the clusters have a melting temperature, and the melting temperature of the clusters is higher than the melting temperature of the pure crystal. As a result, they're still stable in the liquid out here. Once you've made them, you heat up beyond the melting temperature, and the clusters stay around to some significantly higher temperature. And they then contribute to the viscosity increase, the large viscosity increase, and the extra light scattering. They explain why you form a glass, and there's one other thing. Suppose you've made these, and you would like to crystallize. You would like to rearrange the atoms into a nice crystal lattice. In order to get from here to here, you have to break up this structure and form the preferred structure. Well, the potential energy, the free energy barrier in between, can be quite large. So even though this is the preferred structure, the stable structure at low temperature, in order to get from here to here you'd need to supply very large amounts of thermal energy to the transition state. And guess what? That thermal energy is not available.
And therefore, even though at low temperatures the frustrated crystal may be less stable than the real crystal, once you've made them, you can't make them go away easily, not just by sitting and waiting. So that is the Kivelson glass model. I have done, with Paul Wittford, molecular dynamics simulations which appear to reveal the presence of Kivelson clusters that have exactly the properties that Kivelson would ascribe to them, including a few that I haven't gone into. The only difference between the clusters we found and the ones he described is that the clusters Wittford and I found show septahedral, seven-fold, ordering, not icosahedral, twenty-fold, ordering. Septahedral ordering is extremely unusual in nature. If you didn't think there was a reason to look for it, you might not have done so, but that is what we found. Okay, what does this have to do with the slow mode? And the answer is: we have the polymer molecules, and they form these objects, which in polyelectrolytes have a higher density than the surrounding solution, and in neutral polymers apparently do not. And these objects are frustration limited clusters, meaning they can't grow more than they do. They form at higher concentration and contribute to the viscosity because you've got clustering. And so these are the glassy objects of the Kivelson glass model, except we have a working experimental case where they actually exist. Okay, we are now approximately done with the discussion of the slow mode, and the question is where we push on to next, and one answer is Rayleigh-Brillouin scattering. So we will talk for a piece about Rayleigh-Brillouin scattering. This is, in some sense, the scattering that answers the question, why does the sky glow bluish in daylight? In any event, the answer is that we have set this thing up, we have a liquid, we scatter light from it, and the light to some extent changes frequency when it's scattered. We can do this with a simple liquid that does not contain diffusing macromolecules. The changes in frequency of the scattered line are quite large, like 10 to the 9 hertz or more, meaning you don't use a digital correlator to study them, you use a Fabry-Perot interferometer or one of its several relatives. And if we put in monochromatic laser light, here's frequency, I will put in laser light of one frequency, omega zero, and I will ask what frequencies of light are scattered out. And the answer is, you see a spectrum sort of like this, and there is a central peak, and there are two shifted peaks, one shifted up in frequency, one shifted down in frequency. And how do we interpret this? Well, this central peak is due to heat diffusion. You have fluctuations in the local energy density in solution. The energy fluctuations create mass density fluctuations, so they scatter light. The way the fluctuations go away is that the energy diffuses out of them, which is heat diffusion, and so the width here is determined by the diffusion coefficient for heat. These two side peaks are out at frequency shifts c times q, where c is not the speed of light; c is the local speed of sound waves in the solution. Why are there sound waves in the solution? They're thermally excited, just the way diffusive motion is thermally excited. And the sound wave peaks also have a width, and the width is determined by the attenuation that kills off sound waves. So you have three peaks there, and one of the things you can imagine doing is to add a polymer to the solution and ask what happens.
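Before the lecture answers that question, here is a hedged sketch of the triplet line shape just described: a central Rayleigh line whose width is set by the thermal diffusivity, plus two Brillouin lines shifted by plus and minus the sound speed times q and broadened by sound attenuation, modeled as three Lorentzians with invented, order-of-magnitude parameters.

```python
import numpy as np

def lorentzian(w, w0, gamma):
    """Unit-area Lorentzian centered at w0 with half-width gamma."""
    return (gamma / np.pi) / ((w - w0) ** 2 + gamma ** 2)

# Invented, order-of-magnitude parameters for a simple liquid at optical q.
q = 2.0e7            # scattering vector, 1/m
D_T = 1.0e-7         # thermal diffusivity, m^2/s
c_sound = 1.5e3      # speed of sound, m/s
Gamma_s = 5.0e7      # Brillouin half-width from sound attenuation, rad/s (placeholder)

w_shift = c_sound * q          # Brillouin shift, roughly 3e10 rad/s
w_central = D_T * q * q        # Rayleigh half-width, roughly 4e7 rad/s

w = np.linspace(-2 * w_shift, 2 * w_shift, 9)
S = (lorentzian(w, 0.0, w_central)
     + 0.5 * lorentzian(w, +w_shift, Gamma_s)
     + 0.5 * lorentzian(w, -w_shift, Gamma_s))
for wi, si in zip(w, S):
    print(f"omega = {wi:+.3e} rad/s   S = {si:.3e}")
```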
And the short form answer is that polymer molecule motions occur on a very long time scale. This material is in the gigahertz range, a very short time scale, and therefore the two don't couple to each other a lot. However, if you go from low polymer concentration out to the melt, what you find is that the frequency shift here changes with polymer concentration. The widths change, and therefore there is some sort of weak sensitivity of the Rayleigh-Brillouin spectrum to the fact that you're replacing the solvent with polymer. That shouldn't be very surprising. After all, there's no reason for the polymer to have the same thermal diffusion coefficient and speed of sound as the solvent. And as you move from pure solvent to pure polymer, something ought to happen, and it does. Another thing you can study is the Soret coefficient. The notion of the Soret coefficient is revealed by the experiment used to study it. You use laser interferometry to create an interference grating in the solution. That is, you send in two beams of light, same source, so they're coherent. They come in at two different angles, and they interfere. And because they interfere, they appear to produce a brightness grating. Well, if you do this with a high power source, you don't just have a brightness grating. You have locally heated material. And now there are several things that happen. And the first thing that happens is you produce a fluctuation in the local energy density, and that diffuses out due to thermal diffusion. The second thing that happens is that you have the Soret diffusion, and the notion here is that you have a matter current which is proportional to the Soret coefficient and the temperature gradient. That is, if you create a temperature gradient in the solution, or in a gas, objects diffuse parallel to the temperature gradient. There are a couple of minor complications here. First of all, the effect is also known in gases, and the mechanism in gases would appear to be quite different from that in liquids. Second, unlike some other diffusion coefficients, the Soret diffusion coefficient can have either sign. That is, if I produce a temperature gradient like this, the temperature gradient will drive the motion of the macromolecules, but it may drive them one way, or it can drive them the other way, and each is an allowed outcome. That's the Soret diffusion coefficient. People have actually observed this in polymer solutions, and you can actually see things. Okay, that's the Soret... Oh, I was describing modes. So we have an energy mode that relaxes. We have a concentration mode which relaxes with the diffusion coefficient for mass concentration. And then in some systems, there's also an intermediate mode, which is sometimes described as an alpha mode, and sometimes described as a structural mode. However, if I ask you what is the structure that is doing its structuring, there's no real answer; it is just described as a structural mode. It is a reasonable interpretation by comparison with viscoelasticity. But if I ask you what is the structure, well, we're still working on that one. Okay, another set of experiments, a different set of experiments. And the set of experiments are: here is a solution, and here are some A polymers, and here are some B polymers, and we take the A's and the B's, and we just put all of them in a solvent, and we say that in the mixture, that is, the polymers are such that both polymers are concentrated.
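Backing up briefly to the Soret coefficient before the two-polymer problem: in the convention I am assuming here (the lecture leaves the notation loose), the mass flux is J = -D grad c - c D_T grad T with Soret coefficient S_T = D_T / D, so in steady state grad c over c equals minus S_T grad T. The sketch below evaluates the resulting concentration profile across a thin cell; the sign of S_T can be either positive or negative, as noted above.

```python
import numpy as np

# Steady-state Soret effect in a slab with a linear temperature profile.
# Flux: J = -D dc/dx - c * D_T * dT/dx ;  S_T = D_T / D  (either sign allowed).
# Setting J = 0 gives c(x) = c0 * exp(-S_T * (T(x) - T0)).
S_T = +0.05          # Soret coefficient, 1/K (placeholder value)
T0, T1 = 290.0, 310.0
L = 1.0e-3           # cell thickness, m

x = np.linspace(0.0, L, 6)
T = T0 + (T1 - T0) * x / L
c = 1.0 * np.exp(-S_T * (T - T0))      # relative concentration, normalized at x = 0

for xi, Ti, ci in zip(x, T, c):
    print(f"x = {xi:8.2e} m   T = {Ti:6.1f} K   c/c0 = {ci:.4f}")
```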
Now, we had previously talked about the case in which there was a matrix polymer that was allowed to be concentrated or not, and a tracer polymer, which was always dilute, and we looked at the single particle diffusion of the tracer polymers through the matrix solution. Here, matrix and tracer are both potentially concentrated. There is an extensive series of experiments to test theory by Benmouna. The agreement is reasonable. There are a number of tricks you can pull in this system. First of all, if I am clever, I can arrange things to do index matching, and if I index match, perhaps I can only see one of the two polymers and not the other. Second, I can arrange things at what is called zero average contrast, so that if I increase the concentration of A, and I increase the concentration of B, I can keep the ratio constant. That is, if I have a concentration fluctuation that moves more mixture into the solution, that's a fluctuation, I can arrange the solvent so that there is no change in the index of refraction. And therefore, fluctuations in the total concentration of polymer that are uniform in composition do not scatter light. Under this condition, what scatters light is something that changes the concentration of A relative to the concentration of B. So you do a theory for this and get out answers. What does the theory look like? The core issue is as follows. There is a diffusion coefficient for the motion of A as driven by a concentration gradient in A, and that causes the concentration of A to change. Also, there is a diffusion coefficient that couples to the concentration gradient of B, and that causes the concentration of B to change. I should actually be consistent and write these as the currents, not the time derivatives of the concentrations. Yes. However, in a non-dilute solution these are cross-coupled. There is a D-BA, which is the diffusion of B as driven by the concentration gradient of A, and there is a D-AB, which is the diffusion of A driven by the concentration gradient of B. So they're cross-diffusion coefficients. Furthermore, there are reference frame effects. The basic issue in reference frame effects is that if I say there is a current of A, there are A particles moving that way, there are two sensible ways to measure the current of A, and one is to look at the motion of A relative to the solvent. The other is to say we are in a closed container, here is the closed container. If there is an A particle moving that way, it must be, because it's a closed container, displacing the solvent, so solvent moves the other way, and it just displaces everything in a non-preferential way, and therefore the B particles get pushed in the other direction because A is displacing them. This is the reference frame effect. The reference frame description is due to Kirkwood and collaborators, and reference frame motions are independent of these cross-diffusion coefficients, that is, the two effects add independently. The consequence of the reference frame effects is that if I write the diffusion equation in a reference frame fixed on the scattering cell rather than the solvent, these various D's get mixed up in different ways to some extent. So D-AA, the concentration gradient of A driving a motion of A, will also, directly, due to reference frames, contribute to the motion of B, and vice versa. And so there are some interactions. The problem was solved for polymers by Benmouna. There have been a series of experiments to test his theoretical models, and the models worked pretty well. They really do.
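The coupled equations just sketched can be written compactly in Fourier space as d/dt (cA, cB) = -q^2 D (cA, cB), with D the two-by-two matrix of main and cross diffusion coefficients; the relaxation rates seen in a light scattering spectrum are then q^2 times the eigenvalues of that matrix. A minimal sketch with invented coefficients:

```python
import numpy as np

# Two-component diffusion in Fourier space: d/dt [cA, cB] = -q^2 * D @ [cA, cB]
# Main terms D_AA, D_BB and cross terms D_AB, D_BA (all values invented for the sketch).
D = np.array([[2.0e-11, 0.5e-11],
              [0.3e-11, 1.0e-11]])      # m^2/s
q = 2.3e7                               # scattering vector, 1/m

eigvals, eigvecs = np.linalg.eig(D)
for lam, vec in zip(eigvals, eigvecs.T):
    # Each eigenmode is a particular combination of A and B fluctuations
    # relaxing at a single rate q^2 * lambda.
    print(f"mode: relaxation rate = {q * q * lam:.3e} 1/s,  composition eigenvector = {np.round(vec, 3)}")
```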
Benmouna's models are somewhat coarse-grained, in the sense that their description of polymers is not at the level where we are looking at single polymer chains and polymer hydrodynamics in the Kirkwood-Riseman sense. It's a very coarse-grained description of what's going on, but at that level it works quite well. I shall very briefly note an alternative to light scattering, neutron scattering. Now the core issue in neutron scattering is, in particular, inelastic neutron scattering. We send in neutrons of one energy. They are scattered by the sample, in first Born approximation scattering. They come out in some direction, and they come out with some change in energy. The change in energy is equivalent to a change of frequency for visible light, and it corresponds to the fact that they are scattering off something that is moving in the system. Now the one difficulty is, in order to use this to study, say, diffusion: the energy changes are extremely small, and therefore you have to do something very clever in order to measure them. The clever thing is a result due to Mezei, known as neutron spin echo. This is an experiment which is very clever. It's been done at a small number of laboratories, and it, in fact, gives you the energy shifts to very high precision. Okay, so you scatter neutrons. It's like scattering light. There's one minor difference. If I have a polymer, let's say polyethylene, it might be regular polyethylene. We could also replace the protons with deuterons, and now we have perdeuterated polyethylene in which all of the hydrogen has been replaced with D, and we could also do this partially, so there are different amounts of H and D. The neutron scattering properties of hydrogen-1 and deuterium are radically different. And as a result, by changing the isotopic substitution in what is otherwise exactly the same system, you can see some fairly interesting results. Of particular note, if you adjust the H-D ratio just right, you have, say, a polymer and it's randomly substituted with H and D. If you get the ratio just right, the polymer becomes very hard to see with neutron scattering, and you can focus your attention on other things. And this allows you to do what are, in essence, tracer diffusion, self-diffusion measurements in polymer solutions. What has been done is also to do experiments on dilute chains. Dilute chains have been studied. The feature here is, and this goes back to what we said a lecture or two ago, that Pecora made fairly specific predictions, namely, there is a mode that is diffusive, and there is a second mode that includes diffusion and also an internal relaxation whose rate goes roughly as Q cubed. And the issue there is you can actually see the two modes. This has also been done before using light scattering. However, with neutron scattering, a representative wavelength at which you might be working is, say, eight angstroms, so you are looking at motion on a much shorter distance scale with neutrons than with visible light. That brings us to the end of our discussion of light scattering. And I'm going to put a break in the tape at this point. And greetings to the second part of today's lecture. We're now going to advance from a discussion of quasi-elastic light scattering, neutron scattering, and suchlike, to a discussion of viscosity. The physical notion of viscosity is represented by a single sketch. We have two extremely large flat plates. One of the plates is stationary. One of the plates is moving with some speed v sub x.
The two plates are separated by some distance l in the z direction. And therefore, if I plot v sub x versus position, the traditional assumption was that v sub x is linear in position, and therefore dvx/dz equals this v0x, the actual velocity at the top, over l. You see, it's a velocity gradient. Velocity that way, gradient that way. The assertion that the velocity gradient is the same everywhere across has recently been shown to run into some severe problems if you push hard on the system. Now, of course, there's an obvious alternative assumption. If I take two parts and move one with respect to the other, if I have a solid block and I start moving the top of the solid block with respect to the bottom, I get crazing and shear planes and the thing severs. Well, polymers are somewhat in between simple liquids, which do this, and solids, which break if you twist them hard enough, and we'll come back to that late in the course. Having said that, in order to keep the upper plate moving, you have to apply a force per unit area. And the force per unit area is determined by the velocity gradient, assumed to be the same everywhere across, and the constant eta; eta is the Greek letter, and it stands for viscosity. It's the resistance to pouring. Now, this is what is called shear, and this is the shear viscosity. There are two other viscosities that arise somewhat. One is, if we imagine a little volume of material here, the viscosity that comes in if we imagine compressing or decompressing the fluid, for example at some frequency, and changing its volume. And this is what is called bulk viscosity. The polymer solutions we're talking about are mostly dissolved in substantially incompressible solvents, and therefore you can't do this a great deal. There is, however, a third viscosity, which is extremely important in polymer engineering. And the third viscosity arises if we take a thin long piece of polymer, that's usually done with the melt, and you grab the two ends of it and pull them in opposite directions. And as you stretch, there is a resistance to stretching. The resistance to stretching is known as extensional viscosity. The point of extensional viscosity is that there is a resistance to stretching. Well, this is in part how you make a lot of polymer threads. You push the polymer through holes, and then if you want a really fine thread, you may, for example, stretch it, as shown here. Well, stretching is not quite trivial, and I've sort of indicated how it's done. Okay, so there are several different viscosities that you can imagine measuring, and what is found is that the viscosity of a polymer solution depends on concentration. And so as you add polymer, the viscosity of the solution changes, and if there's no polymer there at all, you have eta zero, the viscosity of the solvent, and then you have something of the form eta zero times 1 plus k1 c plus k2 c squared plus dot, dot, dot. k1 has a name. k1, which is usually given the symbol eta in square brackets, [eta], is the intrinsic viscosity. It's the low concentration leading linear slope. Yes? The low concentration leading linear slope. We can rewrite this equation. We can actually rewrite it as 1 plus c[eta] plus k sub H times (c[eta]) squared. So k1 is [eta], and in this form I have factored k2 as k sub H [eta] squared. k sub H is the Huggins coefficient. The Huggins coefficient is the lowest concentration term that reflects the interaction of polymer coils with each other.
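In practice the expansion above is handled through the reduced viscosity, (eta - eta0)/(eta0 c) = [eta] + k_H [eta]^2 c + ..., whose intercept gives the intrinsic viscosity and whose initial slope gives the Huggins coefficient. A short sketch with synthetic data (the numbers are invented, not from any measurement):

```python
import numpy as np

# Extract intrinsic viscosity [eta] and Huggins coefficient k_H from dilute-solution data.
#   eta/eta0 = 1 + [eta] c + k_H ([eta] c)^2 + ...
#   reduced viscosity: (eta - eta0)/(eta0 c) = [eta] + k_H [eta]^2 c + ...
eta0 = 1.0e-3                                   # solvent viscosity, Pa s (placeholder)
intrinsic, kH = 0.12, 0.35                      # "true" values used to make fake data (L/g, dimensionless)

c = np.array([0.5, 1.0, 2.0, 4.0, 6.0])         # g/L, dilute regime
eta = eta0 * (1.0 + intrinsic * c + kH * (intrinsic * c) ** 2)

eta_red = (eta - eta0) / (eta0 * c)             # reduced viscosity
slope, intercept = np.polyfit(c, eta_red, 1)    # linear Huggins plot
print("intrinsic viscosity [eta] =", round(intercept, 4), "L/g")
print("Huggins coefficient k_H   =", round(slope / intercept ** 2, 4))
```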
The linear term here appears simply because there are polymer coils, and you can calculate it to some level of precision by calculating how a single polymer chain moves and then saying, well, there are a lot of them, so there is that many times as large an effect. The Huggins coefficient describes interaction between polymer chains. So that's k sub H. Okay, so having said that, that's k sub H, and we have an intrinsic viscosity. What do we do next? Well, one thing we could do is to ask what happens at higher concentrations. And the classical answer to that is the Martin equation. Martin presented a paper in 1943. My footnotes give the paper's title thanks to our very hard-working library. I have not found another source that supplies the title other than the original conference meeting report. And what Martin proposed was that eta would be eta zero e to a constant times the concentration. And he had data that supported this. And he had a very nice approximate formula, and that is the Martin equation. Now, one can do better than that, but I am going to pause for a second and point out something about the intrinsic viscosity. This has dimension one; therefore this product must have dimension one, and in units the intrinsic viscosity must be one over concentration. So c[eta] is dimensionless. c[eta] can be described as the natural unit of concentration for discussing viscosity. Where is c[eta] of one? Well, for a large industrial type polymer, meaning the molecular weight is 100,000 or something in there, c[eta] is one when the concentration is one or 10 grams per liter, something like that. It is a fairly low concentration. It is not like a volume fraction, where phi of one is unattainable because you cannot pack it better than 0.72. c[eta] you can get up to 100 or a couple hundred as you approach the melt. Now, there are two other sets of predictions of viscosity at higher concentration. And one, there are some nice papers due to Dale Schaefer who put this together. And the prediction is that viscosity at elevated concentrations goes as C to the x, M to the y, where x and y are powers. This is a reptation-scaling prediction. Now for polymer melts, and I emphasize for melts, for polymer melts the theory derives not only the y, but also the fact that it is a power law. That is, for melts the model actually predicts the functional form. For solutions that is not the case, but in solutions you can predict the x and the y. Can we put in some numbers here? Sure we can. Let's get some more board space. You can actually measure the viscosity of polymer melts for different molecular weights. It's not quite as easy as it sounds, and you have the serious issue that if you have a polymer, how do you determine exactly what its molecular weight is? And the answer is there are a bunch of methods that give you information, but they are several-to-ten-percent methods, not 0.01 percent methods. In any event, the reptation prediction, looking at C to the x, M to the y, is that y is some number, the traditional number being three. Experimentally, y is about 3.4. This is for melts. And there are some theoretical treatments that endeavor to resolve the difference. For solutions, you can also predict x, and x is some number in the range of 3.75 to 5, depending on solvent quality; that's the prediction. And so you can actually predict some exponents. The alternative to this is eta proportional to eta zero e to the alpha c to the nu, where alpha is proportional to M to a power.
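Since the power law and the stretched exponential both recur in the figures discussed next, here is a small sketch, with invented parameters, of how one distinguishes them: a power law in c is a straight line in log eta versus log c, while the stretched exponential is a straight line in log(eta/eta0) versus c to the nu.

```python
import numpy as np

# Distinguishing a power law from a stretched exponential in concentration.
eta0, alpha, nu = 1.0, 1.5, 0.8        # invented stretched-exponential parameters
c = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
eta = eta0 * np.exp(alpha * c ** nu)   # synthetic "data" that is truly stretched-exponential

# Test 1: slope of log(eta) vs log(c) is constant only if eta ~ c^x
slopes_loglog = np.diff(np.log(eta)) / np.diff(np.log(c))
# Test 2: slope of log(eta/eta0) vs c^nu is constant if eta is a stretched exponential
slopes_stretched = np.diff(np.log(eta / eta0)) / np.diff(c ** nu)

print("apparent power-law exponents (they drift, so not a power law):", np.round(slopes_loglog, 2))
print("slopes in stretched-exponential coordinates (constant, = alpha):", np.round(slopes_stretched, 3))
```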
And that last expression is the stretched exponential form for viscosity, based on the Kirkwood-Riseman hydrodynamics and based also on the positive function renormalization group method. You can actually predict this form, you can predict numerical values with no free prefactors for alpha, and the agreement is, well, the cup is half empty or half full; it's actually pretty full. So those are predictions. Let me just start on measurements, since we are almost out of time. And we will start with figure 12.1. 12.1 shows measurements of Jameson and Pelliford. The measurements are on a 7.8 megadalton polystyrene in, if I recall correctly, tetrahydrofuran. Now 7.8 megadalton is an extremely large polymer. They looked at viscosity versus concentration; they did some other things which are in the book. However, if you look at viscosity, this is a log plot, you see a nice stretched exponential form. The stretched exponential form works well at all concentrations. You can also look, for example, at figure 12.3. 12.3 shows measurements of Enomoto, and these are on a schizophyllan. And again, if you plot viscosity versus concentration, you see these nice curves, and the lines are at least decently close to them. If you look very carefully at the curves for the highest molecular weight polymer in dilute solution, the measurements actually show a viscosity which is a bit lower than you would expect from the curve, which describes everything else quite well. But if you look at those curves, you notice the viscosities up here are extremely large indeed. You are looking at increases in the viscosity of multiple orders of magnitude, in some cases up to 5 or 7, and the stretched exponential describes nicely, over the full range of concentrations, what the viscosity is. I see, however, we are out of time, and therefore we will continue this in the next lecture.
Lecture 21 - the polymer slow mode; thermal diffusion and Soret coefficients. George Phillies lectures on polymer dynamics based on his book "Phenomenology of Polymer Solution Dynamics".
10.5446/16220 (DOI)
Classes in Polymer Dynamics based on George Phillies' book, Phenomenology of Polymer Solution Dynamics, Cambridge University Press, 2011. And today, this lecture is lecture 19, Quasi-Elastic Light Scattering Spectroscopy and the Light Scattering Spectrum. In any event, today we are going to finish off our discussion of colloids. There's one little bit that's going to be postponed until later in the course. And we will take up our discussion of quasi-elastic light scattering from non-dilute polymer systems. Let me remind you of what we found when we discussed colloid systems. First of all, we have a series of parameters like the self-diffusion coefficient and the mutual diffusion coefficient and the rotational diffusion coefficient. And each of these has some concentration dependence, the useful concentration variable being the volume fraction. The important point I want to make is that this constant is different for these three parameters. But in point of fact, if one does reasonable calculations, one can calculate what the slope is for all three parameters. And you get reasonable answers. The statement that you get reasonable answers tends to say that you understand what the forces are. You could by accident get one of these right, but this parameter depends on the hydrodynamic interactions and also the direct interactions in very different ways for these three constants. And the fact that you can get all three of them more or less right tends to say that you really do have an understanding of how hydrodynamic interactions work in colloid systems. If you look at the light scattering spectrum of these systems, and this is actually true not only for the light scattering spectrum from a dilute monodisperse system, but also from a tracer system, you find that the spectrum is bimodal. That is, there's something that approximates being a fast decaying mode, and there's something that approximates a slow decaying mode. Now, what does light scattering spectroscopy do? It looks for fluctuations in the concentrations of the particles. If the concentration were absolutely uniform, there would be no light scattering. You get light scattering because there are regions where momentarily, due to Brownian motion, the concentration is unusually large or unusually small. It is the fluctuations in the concentration of particles that lead to the scattering of light, and it is the relaxation of those fluctuations that leads to a scattering spectrum. Now, the one thing that has to be remembered, and we'll be doing this in more detail in a bit in this lecture, is that light scattering spectroscopy is driven by a single spatial Fourier component of the concentration of particles. And therefore, you aren't looking at, here is a lump of particles; you are looking at, here is a cosine wave fluctuation of some size. Now, all of the cosine waves, all different wave vectors, are fluctuating at the same time, but light scattering picks out one of them. The one it picks out is determined by the wavelength of the laser and by the scattering angle. And the picking-out detail is the same as the first order Born approximation found in scattering physics. So you say you have two modes, fast and slow, and there is an automatic and totally incorrect response, which is to say that you're looking at particles moving over small distances and large distances. That's completely and totally wrong. The reason it's completely and totally wrong is that you are working at fixed Q.
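For reference, the fixed Q set by the geometry is Q = (4 pi n / lambda) sin(theta/2), n being the solvent refractive index and lambda the vacuum wavelength; the quick sketch below, with typical values of my own choosing, shows the length scale 2 pi / Q being probed at several angles.

```python
import numpy as np

# Scattering vector probed by quasi-elastic light scattering:
#   Q = (4 pi n / lambda) * sin(theta / 2)
n_solvent = 1.33            # refractive index of water (typical value)
lam = 633.0e-9              # He-Ne laser vacuum wavelength, m

for theta_deg in (30.0, 60.0, 90.0, 150.0):
    theta = np.radians(theta_deg)
    q = 4.0 * np.pi * n_solvent / lam * np.sin(theta / 2.0)
    print(f"theta = {theta_deg:5.1f} deg   Q = {q:.3e} 1/m   2*pi/Q = {2 * np.pi / q * 1e9:7.1f} nm")
```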
The distance scale for particle motion to which you are sensitive is something like R, something like the inverse of Q. And so at all times you are looking at motion at the same distance scale. What you are saying is that on this distance scale, there are some fluctuations that are transient and don't last very long, and there are some fluctuations that are persistent. The particles have either packed up close together or have spread out and made a bubble in solution in which there's a low concentration of particles, and whatever they are doing, the slow mode corresponds to the persistent fluctuations, and the fast mode corresponds to the transient fluctuations. Now, if you actually do particle tracking in colloid systems, you can actually see somewhat what's going on, and what you see corresponds to what is going on in computer simulations. That is, you form some sort of dynamic structure in which the particles stay together for a long time and don't move very rapidly, and these are lumps; and you also have ribbons, this is by direct observation, doing particle tracking, along which the particles can move fairly rapidly, but only parallel to the ribbon. I emphasize these are not polymer coils, these are spheres, but you get trails in which the lead particle can move rapidly, and then other particles can follow in its wake, so to speak. If you look at the slow fluctuation, even for the slow fluctuation, we find that the diffusion coefficient and the viscosity are both concentration dependent, but their product goes up as the concentration goes up. So what we are saying is that you do see non-Stokes-Einsteinian effects even in colloid systems. In addition to this, we talked briefly about concentration and how it drives the viscosity, eta, and what we find is that at lower concentrations there's a stretched exponential concentration dependence. There's then a crossover to a power law. The crossover is quite sharp. How sharp? Well, there are a few experiments where people have managed to get an experimental point very close to here, and it appears to be right at the intersection of the two lines. This region is a power law. This point occurs when the viscosity divided by the viscosity of the solvent is about 10 plus or minus five, and the volume fraction is about 0.4 to 0.45. Different workers' data (the analysis showing this transition is mine; the data are from the literature) find the transition at modestly different locations, and it's not quite clear why. One might propose that this effect is very sensitive to short range interactions between the colloids, and different samples give slightly different results. This point is very different from another point up here, which is at phi of about 0.49 and an eta over eta zero of about 50, and that is the concentration at which, according to computer studies that are actually quite old, you start to form a biphasic region in the system, and since you are forming a biphasic region, well, it's interesting to ask what happens at the edge of the biphasic region. The statement that you're forming a biphasic region undoubtedly could use additional experimental and calculational work. Okay, so we have discussed this. The last point I want to make in summary: we will briefly talk about, in fair part, protein systems. The reason we talk about protein systems is that proteins are globular, they are highly monodisperse, and they are charged. Why are proteins charged?
Because they have on their surface groups like carboxylic acid, or amine — there are also secondary amines; that's a primary — and the carboxylic acid can ionize, and here comes a proton out into solution, and protons bind to the amine, so at neutral pH, most of these things are charged up. If you go to very acid conditions, you can force protons down onto these. If you go to very basic conditions, you can strip the protons off of here, and if you plot the charge of a protein molecule versus the pH, you get a smooth curve. Of course, eventually the curve has to flatten out because everything is either ionized or not. The shape of that curve can be calculated with great precision using statistical mechanics and electrostatics. You can use nuclear magnetic resonance to track, for an identifiable one of these groups, one at a time, whether the group is neutral or charged, and look, statistical mechanics works: you can calculate the charging curve for each group in a protein molecule if you know the structure, and you can do that with great precision. And so you can calculate this curve, which is quite smooth, and you can get charge versus pH. Well, having said that, suppose you measure the light scattering and measure the mutual diffusion of proteins in solution. We have proteins in solution, and so we get concentration fluctuations, more here, less there, and the mutual diffusion coefficient describes the relaxation of the concentration gradient. What happens? Well, the forces between the protein molecules drive the relaxation, and there are direct interactions, for example, screened electrostatics. There are hydrodynamic interactions: as a particle moves, it sets up a wake, fluid flow in the surrounding solution that drags the proteins along. You can actually calculate this; we talked about it for hard spheres. Now, what happens is that if you have proteins that are fairly highly charged and you start reducing the salt concentration, or you change the pH to drive up the charge, the forces between the protein molecules increase — they're all charged the same, so the forces are repulsive — and the net result is that the interactions drive up the diffusion coefficient. And you can actually describe this in terms of something that you can relate to the osmotic pressure, a drag coefficient which you can also calculate, and then, as I discussed the last time, a reference frame correction, which refers to the fact that the proteins are moving this way, and fluid is incompressible; in the simplest case, if the proteins are headed as indicated, some of the solvent gets pushed the other way, and this modest effect can actually be calculated. Having said that, we can do this calculation. There is an interesting bit here, namely there is an alternative form that is sometimes used to describe mutual diffusion of macromolecules, which is D is kT over six pi eta psi, and psi is called the dynamic scaling length. This formula comes out of critical phenomena. If you have a mixture of two liquids that phase separate, and you change the temperature and the pressure enough, you can often arrange things so that there's a temperature and pressure beyond which the liquids are miscible in all proportions. Right at the point where the two liquids become miscible, there is what is known as a critical point, and at the critical point, the concentration fluctuations become very large, and at the same time that the concentration fluctuations become large, the diffusion slows down a great deal.
And therefore, what one observes is the dynamic scaling length becomes long, the length over which particle motions are correlated becomes long, and in this system, correspondingly, diffusion slows down. Indeed, it slows down asymptotically as the critical point is approached. In these systems, precisely the opposite thing occurs. If you take the proteins and you charge them up, or you reduce the salt concentration, their interactions are made stronger. Because the interactions are made stronger and longer range, as you do the things I just described, the distance over which particle motions are correlated increases. However, at the same time that this correlation length for dynamic motions increases, diffusion speeds up. It speeds up a great deal. For example, there are nice experiments by Doherty and Benedek, and they observed up to a three-fold increase in the diffusion coefficient of bovine serum albumin in water as they did various things to change the pH and salt concentration. Calculating that increase, doing the calculation for a charged system, is not trivial. However, experimentally, phenomenologically, you make the interactions stronger, the repulsive interactions, and therefore the diffusion coefficient goes up. At the same time, the correlation length for interactions goes up also, and therefore this formula, which comes out of the critical phenomena, is totally and completely wrong as applied to macromolecule diffusion. It gives a completely wrong physical impression of what is happening. Okay, so that is the sort of thing we have found by dealing with diffusion coefficients, and we're now going to push on to the next chapter. And we're now going to talk in the next chapter about quasi-elastic light scattering, and we are mostly going to be talking about quasi-elastic light scattering as applied to binary systems. That is, we will have a system in which we have a solvent — and you notice this goes on for a piece, so there is an abbreviation. There's the abbreviation, and having put down the abbreviation, the notion is we have a mixture, and there is a solvent, which is more or less invisible, and there are things — I'm drawing colloids, they're faster to draw, but this applies just as well to polymer coils — and we study the light scattering spectroscopy. We send in a laser beam, and here is k incident, the wave vector of the incident light. We observe scattering through some angle; there's the wave vector of the scattered light. We have quasi-elastic light scattering because the light does not change frequency — color — significantly during, well, one part in 10 to the 10. The frequency doesn't change, so the two wave vectors are equal in magnitude, but point in different directions, and the amount of scattering — does this look familiar? If you think back to quantum theory of scattering, did you see the Born approximation? Good, well, this is just the Born approximation, except things are a little bigger than the inside of a proton, and it takes longer to happen, but the math is the same, and therefore the scattered field is proportional to some scattering cross-section — and in a binary system, the scattering cross-section is the same for all the scattering particles, so we drop it out — and it's proportional to how bright the incident light is. If you make the incident light brighter, make the incident field E0 larger, the scattering gets larger. And then there is a sum over all the particles in the system.
E to the i — i being the square root of minus one — k scattered (what's k scattered? It's usually called k; it's k final minus k incident) dot the position of particle i at time t. The reason I put in the t is that that term includes interference between scattering from one particle and another. It includes interference because of the scattering volume: in a typical experiment, you may have a focusing lens here, you have collecting optics here with some pinholes, there is a region in which you are collecting scattering, which might be, oh, 100 microns across, and it's surely less than one millimeter long. The visible light — you're using a laser — has a coherence length of kilometers in a typical experiment. Across this distance, the visible light is coherent, and therefore you can see interference between scattering from different particles. Another way to interpret this picture is to say this is a Michelson interferometer. Each of the molecules acts like a little mirror. You have a very large number of teeny, teeny, tiny, tiny little mirrors in there, and the intensity of the scattered light is determined by the amount of interference. As the particles move, the amount of interference changes, and so does the intensity of the scattered light. So having said that the intensity of the scattered light is going to vary on very long time scales, because the particles don't move that fast, how do you characterize them? And the answer is that we measure the scattering spectrum. So here is the spectrum of the scattered light. It is both theoretically and experimentally more convenient to work in the time rather than the frequency domain, but the time domain and the frequency domain — it's a math issue. It's Fourier's theorem, basically. The time and frequency domain are obliged to give you the same information, and even though this is time domain, we call it the spectrum. We take the intensity of the scattered light scattered with one Q or K at time T, and the intensity at some later T plus tau, we multiply these together, and we add up over a lot of separate times T, and we end up with something — I should be notationally consistent — in which, in the end, the scattering spectrum is only determined by the time separation between these two. Well, okay, so we have the intensity. The intensity is proportional to the square of the field, and so if you wrote the intensities out in terms of the field, you would get four sums. Fortunately, there's a physics simplification. Here is the system; it's very big. Here is the distance over which particle positions and motions are correlated; it's very small. As a result, you have a vast number of terms with four particles in them, and you have a reasonable number of terms in which two of the particles are within the same volume, but the number of terms in which you have two particles in the same volume is huge relative to the number of terms, out of this to the fourth power in there, that put four particles in one volume. As a result, this object is dominated by two-particle terms, and we can write S of q t is equal to — well, there's a constant a, and there's something called g1 of q (q and k are the two symbols for the scattering vector, and I will wander between them), squared, plus b. And g1 of q t is the dynamic structure factor. It's also known as the intermediate scattering function. And g1 of q t is equal to — what's it equal to?
I'm going to do a sum over all of the particles in the system, all capital N of them, and I'm going to do a double sum because g1 of q and t is a product of two of those, so there's an e0 square and an alpha square. All the particles are the same, so they have the same scattering. Alpha square is the scattering cross-section. e to the i q dot rm at some time tau minus i q dot rn at some time tau plus t. And you're seeing interference between pairs of particles. Now, you have to be a little careful because this sum includes m equals m terms, and in that case, you're seeing one particle at one time and the same particle at a different time, but it also includes m not equal to n terms, in which you're looking at one particle at one time and a different particle at a second time. So this is how g1 is related to particle positions. There are also here two constants, a and b. The constant a is a normalizing factor. If you go through the literature, you'll discover there are lots of people who are happy to quote normalizing factors. They're different. It doesn't matter, within reason, how you quote the normalizing factor because it's a constant. The constant has no effect on the time dependence of this function. The only place where it matters a bit what you're doing is that if you say, I have some external means of calculating a, and you put in the wrong value for a, or there's noise so your value of a is off a bit. This object, how does this object behave at zero time? At zero time, tau and t plus tau become equal, and there's some quantity that you reach at time zero. And if you put in the wrong normalizing constant a, there's some minor hazard of messing yourself up a bit. b is the long-term limit. How do you measure the long-term limit? Well, you have a device that actually calculates this quorum. This thing, I've said spectrum. This is actually also a correlation function. It's a function of the variable at one time, the variable at a second time. You see that? And the correlation function is a function of the time separation. Well, you can actually measure these experimentally. They're complicated machines for doing that. And if you actually do this thing, do this calculation, you can measure out to very long times, and you discover that the spectrum goes out and becomes a constant at very long times. There are methods of calculating what the constant should be with your worthwhile checks, but you can actually measure this directly, and when you measure it directly, there are certain fluctuations that give you noise here that is repeated there, and if you measure directly, that noise gets subtracted out. So, you're actually measuring the dynamic structure factor. The dynamic structure factor can be split into two terms. So, g1 of qt is equal to g1s of qt. This is the self part of the dynamic structure factor, plus g1d, those are superscripts, same function of q in time, and the other part is the distinct. What can we say about the self and distinct parts? Where do they come from? They both come from this double sum. The self part are the terms in which m is equal to n, so you're looking at the same particle at two times. The distinct terms are the terms in which m is not equal to n. Now, you might say, aren't there going to be a whole pile of more distinct terms than there are self terms? And the answer is, yeah, there are going to be a lot more distinct terms than there are self terms. 
However, in the distinct terms, what has to be the case is the particles m and n have to start out rather close to each other, and because if they aren't, their positions at, say, the start time are uncorrelated. If they're halfway across the system from each other, the two positions are uncorrelated, and the contribution to the distinct term averages to zero. Why? Well, if the two particles are out here, they could be out here, or they could be half a wavelength closer, and those two states are very nearly equally likely because they're way far apart from each other, but while the particles are being way far apart from each other, the contributions to this object average to zero. We have a particle, and only the particle near neighbors contribute to the distinct part of the correlation function. Well, the number of near neighbors is very small, you know, 3, 12, whatever, and therefore this term, the m not equal to n term, and the self term, the m equal to n term, are at least vaguely the same size. There is a specific exception, and the specific exception are tracer experiments. In tracer experiments, we have a background fluid, which could be a polymer solution, invisible polymers mixed with solvent, things we do not see, and dropped into there are dilute particles. The particles that are doing the scattering are almost always very far apart from each other, and because they're almost always very far apart from each other, the position of pairs of particles, pairs of scatters, are uncorrelated. And as a result, under tracer conditions, the self term is, gives us a, essentially, all of the contribution to G1, and the distinct term, in which you're looking at correlations and positions of pairs of scatters, averages very nearly to zero. This is how tracer diffusion experiments get done with light scatters spectroscopy. So under tracer conditions, the distinct terms vanish. Well, having said the distinct terms vanish, you then ask, well, what information do G1S and G1D give us about the system? I shall skip the math, but the core answer is, there is a probability distribution. And this is the distribution, the tracer particle of interest will take a step delta x during time t. Well, this object for Brownian particles in a simple solvent is a Gaussian and delta x. If we are in a polymer system, we find, and we're doing tracer, we find G1 is typically not unimodal. This is a plot of log G1 versus t, and G1 is not unimodal, and correspondingly, Dukes theorem, which I've mentioned before, p of delta x and t is not a Gaussian. It's something more complicated. However, the self-distribution function, the self-part of the dynamic structure factor, gives us the likelihood that a given particle will take a step delta x during time t. Now, actually inverting the spectrum to get to delta this function requires that you do the Q, the dependence on Q for a series of a lot of Qs. It requires that you have high precision measurements, and what you can actually determine are the even moments, the average over p of the even moments of delta x. The odd moments average to zero. That's a symmetry outcome. But you can measure the even moments, or at least some of them, and you can measure them by doing the Q dependence at any time you want. So you could actually get approximate information about this function. If the particles are not dilute, life becomes less pleasant. Let's take a finer detailed view on what p is. 
And p, the likelihood that a particle will displace some distance of delta during time t, it actually is modulated by the position of all of the m particles in the system. Here is a list, it's a list of particle or particle. One is particle, two is out to particle, I guess I called it m, all of the particles in the system at time t. T of delta and t is equal to an average over all the starting positions. So they're going to add up over all the starting positions of all of the particles. And there is a p of delta t, rm, there's the list of all the starting positions. And there's an e to the minus beta, potential energy minus a normalizing factor. And that is the actual p of delta t, the thing that the self-distribute part of the distribution function talks about. And this is in terms of where all the starting positions are, why do the starting positions matter? Well, here is a particle, and at time zero, if I give you no other information, it's more or less equally likely to move in any direction at once. On the other hand, at time zero, if there are three neighboring particles here, and the particles all repel each other, these steps tend to turn into those motions, bounce back. These steps get enhanced, and therefore, if I give you information about where all of the particles are at time zero, I've given you information about what steps this one particle, the particle of interest is likely to do, what the step is likely to be over the time t. Well, there is a fair amount of crank turning, and it turns out that the distinct part of the correlation function is determined by the probability that a particle one will take a step during time t. That's what it is. And it's determined in part by the vector from the particle of interest to one neighboring particle during time t. And it comes through as an e to the i q dot r12. And then you average over where the last particle is. And this effect, this term, which comes out of the exponential in here, says that the probability of displacement during time t is due to spatial Fourier transform of a complicated distribution function with respect to the position of one other variable. Now, you might say, gee, could we make that term go away by going to small q? And the answer is that if you go to small q, if you really go to small q, the g's go from 1 or 0 and have not yet started to decay. You can look at the initial slope of g1, self, and g1 distinct. You can look at the initial slopes of these two functions. That's the simplest time dependence to calculate. And if you do that, you discover that this object contributes substantially to how fast the relaxations take place. OK, so that is a discussion of light scattering spectroscopy. And now we chug ahead and we're going to apply this to a polymer solution. And if we apply it to a polymer solution, there is a complication. And the complication is, here is a colloid. It's a rigid body. If you scatter light from the colloid, you can get interference between light rays scattered from different parts of the same colloid. But as long as the colloid is a sphere, that interference modulates the intensity of the scattered light, but it has no effect on the time dependence. Why? Because that interference happens even if there's a sphere. It doesn't change if I rotate the sphere. It's just that there is scattering from different parts of the sphere. For a polymer, life is a little more of something. Here's a polymer coil on sort of the same scale. 
And we have a light ray that chugs along and a scattered off the polymer bead here and heads off towards the detector. And we have a light ray that chugs along and gets here and is scattered off towards the detector. And gee, the polymer is flexible, isn't it? That means that as time goes on, I'll draw it as a dashed line, the polymer coil can change shape. There's the new shape. This piece has now moved from here to here. And the scattering that reaches the detector has a different path length than it did before. As a result, the internal motions of the polymer contribute to the time dependence of the scattered light. In order to do that, this distance has to be sort of comparable with a light wavelength. If the polymer, the traditional distance scale, which I've just stuck in like that, is the radius of gyration, if the polymer radius of gyration is much smaller than a light wavelength, that is, if rg times the scattering vector 2 is much less than 1, the polymer can change shape and the interference doesn't change because it's almost zero. The distance from here to here is tiny relative to a light wavelength. And so light scattered from here and from here comes off with about the same phase. However, you can make really big polymers. And because you can make really big polymers, as the polymer changes shape, you get a contribution to the time dependence of the spectrum. I see, however, because of time constraints, I am out of time today, and therefore in the next lecture, we will continue to discuss how this effect contributes to light scattering from polymers. But that's it for today. We're done.
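To tie the pieces of this lecture together, here is a minimal numerical sketch of the self part of the dynamic structure factor for simple Brownian spheres: simulate displacements along the direction of q, form the average of exp(i q Δx), and compare with the expected exp(-q² D t). The diffusion coefficient, q, and time step are invented for illustration; they are not values from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not taken from the lecture): dilute Brownian spheres.
D = 1.0e-12        # diffusion coefficient, m^2/s
q = 1.87e7         # scattering vector, 1/m
dt = 1.0e-4        # time step, s
n_steps, n_part = 400, 20_000

# Only displacements along the direction of q matter for the self part.
steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(n_steps, n_part))
dx = np.cumsum(steps, axis=0)                 # x_i(t) - x_i(0) for each particle

for k in (1, 5, 20, 50, 100, 200, 400):
    g1s = np.mean(np.cos(q * dx[k - 1]))      # real part of <exp(i q dx)>
    expected = np.exp(-q * q * D * k * dt)
    print(f"t = {k*dt:.1e} s   g1s = {g1s:6.3f}   exp(-q^2 D t) = {expected:6.3f}")
```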
Lecture 19 - light scattering spectroscopy and the light scattering spectrum. George Phillies lectures on polymer dynamics based on his book "Phenomenology of Polymer Solution Dynamics" (Cambridge, 2011).
10.5446/16219 (DOI)
Lecture 18, Colloid Dynamics, based on George Phillies' book, Phenomenology of Polymer Solution Dynamics, Cambridge University Press, 2011. And today, this lecture is lecture 18, Colloid Dynamics. I'm Professor Phillies, and this is the next lecture in my series on polymer dynamics. Our last lecture finished off with a discussion of optical probe diffusion. What I'm going to do today is to push on to an entirely new topic and chapter. The new chapter is 10, and the new topic of chapter 10 is colloid suspensions and colloid solutions. If you go through the reviews found in the early part of the book, the reviews by people like Tirrell and Lodge, and a whole variety of other books on polymer dynamics, you'll make an interesting observation. There are lots of discussions of polymer diffusion, polymer viscosity. Almost none of the other sources you look at discuss probe diffusion at any length, and there's absolutely no discussion of colloid dynamics at all. Now, why might you be interested in looking at how colloid particles move? Well, let's remember what a colloid particle is. The simplest case, we have objects in solution, and the objects in solution are spheres that are quite small, like a half a nanometer or tens or hundreds of nanometers. They've been surface treated in some way in most cases, and as a result — well, you say this is a suspension of particles, they're big particles, but they can be density matched, and even if they're not, they can be charged or coated, given polymer coats on the surface, so that they sit in solution. Why would we be interested in these round hard objects? They're quite rigid. I've drawn a sphere, but some of them aren't spheres, they're ellipsoids or whatever. Why would we be interested in these in a discussion of colloids? Why would we be interested in these in a discussion of polymer dynamics? Why are we talking about colloid dynamics? The answer is, these are sitting in solution, and there are three sorts of forces on them. There are random thermal forces due to fluctuations in the solvent. That's what drives Brownian motion. There are hydrodynamic interactions. I'll talk about those in a bit. Namely, as one particle moves, it sets up a wake in the surrounding solution, like the wake of a motorboat. Well, not really exactly like, but somewhat similar. And the wake drags neighboring particles along with it. And then there are direct interactions, of which the most important is volume exclusion. Because these objects are solids, they cannot pass through each other. I could also discuss a polymer solution. Here is a non-dilute polymer solution. The important feature of the non-dilute polymer solution is that the forces between the polymer coils are thermal interactions, which are between the polymer coils and the solvent, hydrodynamic interactions between polymer coils, and volume exclusion interactions between polymer coils. Those forces are precisely the same list of forces that act between colloids. So we have two systems. We have the same sets of forces acting. We're looking on the same time scales, so the same basic dynamic equations apply. However, there is one substantial difference between polymer dynamics and colloid dynamics. And the substantial difference is shape, or as some people would call it, topology. My apologies to real topologists, who will point out that this, if linear, and that are both simply connected volumes, end of discussion. They're the same, aren't they? Well, no, not really. However, the core difference is, this is a sphere.
That's something that's long and stringy. And you can certainly imagine this object having modes of motion and interactions that are not stunningly the same as spheres. For example, if you have two polymer coils, you can certainly imagine one of the polymer coils wrapping itself around the other, like two rattlesnakes in a rattlesnake ball. Spheres, however, just sit there. They can't wrap around each other. And if you believe in the arguments that propose there are entanglement constraints — though the entanglement models don't say what an entanglement is — then you would say that a polymer coil plausibly entangles, at least for many pictures of what entanglement means, but spheres certainly do not do so. Okay, so there is a fundamental difference in shape and in possible dynamics, but the basic forces are the same, and the basic — gee, this is over-damped motion — f equals ma equations are the same. There's some similarities, some differences. One thing one might, however, propose is that phenomena that are common to colloid dynamics and to polymer dynamics cannot be explained on the grounds that polymers can wrap up in knots, and polymers can perform reptation, snake-like motion like the boa constrictor going through the bamboo grove. Anything that is common to these two systems very certainly cannot be assigned to topology. So that is our perspective here, and what we are going to do is to look at a variety of discussions of how spheres move as opposed to how polymers move, and we are going to look at colloid phenomenology, and that will tell us something about, gee, the importance of topology. The short discussion on how does this compare with other works is, there is no comparison, and the reason there is no comparison is that if you look at other treatments of polymer dynamics, you will be very hard pressed to find another that does a substantial discussion of colloid motions. So let us start out, and we will say a bit about hydrodynamic interactions. The front-end piece is as follows. We have — here is a colloid sphere. It is small enough that we can approximate it as a point in terms of the other distance scales. It moves through a polymer solution. As it moves through a polymer solution, it is exerting a force — there is the force, f — being exerted on the solution, and this causes the liquid to move, and so you get a flow field. And if out here the flow field has a velocity v, the coupling at low frequencies between v and f is v equals T dot f. T is a tensor. It is a 3 by 3 matrix. The reason it is a tensor is that we are starting off with a vector here, we are ending up with a vector here, and the two vectors are not parallel. The mathematical structure — this is linear — that we can use to transform one vector into the other is a 3 by 3 tensor, the Oseen tensor, whose constants I will strip out. The Oseen tensor falls off as 1 over r, where r is the distance from the point particle to the point of interest in the fluid. And then the tensor dependence is the identity tensor, plus — this is an outer product — r hat, r hat. r hat is the unit vector pointing from the sphere to the place in the fluid. There are some constants which you can put in differently depending on whether you describe here the force or the velocity of this point. What are the important features of the Oseen tensor? First of all, the flow field is not simply parallel to the velocity of the original object. It has this motion that I have sort of sketched. Second, the Oseen tensor falls off as 1 over r.
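A minimal numerical sketch of this tensor may help. The lecture strips out the constants; the version below uses the standard 1/(8 pi eta r) prefactor for the velocity field of a point force in a viscous fluid, and the force, viscosity, and separations are illustrative numbers only. It shows both features just named: the induced velocity is not parallel to the applied force, and its magnitude falls off as 1 over r.

```python
import numpy as np

def oseen_tensor(r_vec, eta):
    """Oseen tensor T(r) = (1/(8*pi*eta*r)) * (I + r_hat r_hat), standard convention."""
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    return (np.eye(3) + np.outer(r_hat, r_hat)) / (8.0 * np.pi * eta * r)

eta = 1.0e-3                       # Pa*s, roughly water
F = np.array([1.0e-12, 0.0, 0.0])  # a piconewton force on the central particle

for d in (1e-6, 2e-6, 4e-6, 8e-6):                 # separations in meters
    r_vec = np.array([d, d, 0.0]) / np.sqrt(2.0)   # direction oblique to F
    v = oseen_tensor(r_vec, eta) @ F               # induced fluid velocity, v = T . F
    angle = np.degrees(np.arccos(v @ F / (np.linalg.norm(v) * np.linalg.norm(F))))
    print(f"r = {d*1e6:4.1f} um   |v| = {np.linalg.norm(v):.3e} m/s   angle(v, F) = {angle:.1f} deg")
```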
That is not a potential energy falling off as 1 over r. If I have an object which I try to hold still at this point, the flow field creates a force on the object, f prime. And the force — we have a force here creating a force there — the force falls off as 1 over r. This is the longest range force in nature. Now you may worry a bit, oh gee, is it really infinitely long-ranged? And I will now point out the constraints, because Oseen is an approximation. The first constraint is that if I apply a force here for a brief period of time, you have to set up the Oseen flow field, and that takes a while. And until you have set up the flow field completely, there are a more complicated set of equations, due to Boussinesq, to describe how a force here for some short period of time creates a flow there. The flow field has to propagate outwards, and momentum propagation in a fluid occurs at the speed of sound. So if you are looking at very long ranges or very short times, this is not perfect. But we aren't at very long ranges, we aren't at very short times. There is another constraint that is sometimes lost, which is as follows. We are actually doing this experiment inside a container. And the container is filled with a solvent, and the solvents we are talking about are mostly pretty incompressible. As a result, if I draw a line, an imaginary mathematical surface, across the container at some point, the total volume flow of fluid across that line has to be zero. If it wasn't, we'd be compressing the fluid, and this can't be done to any significant extent. And therefore, as we approach the walls of the container, somehow there has to be a backflow, which one could in principle calculate, and the backflow has the effect that the total volume flow across the system is zero. Now you have to put one qualification in. Suppose I drew the mathematical line here. In that case, the mathematical line is intercepting the solid object. And if we have a flow delta V of object into this volume, we must have a net flow minus delta V of solvent out of it. This is incompressibility when you remember this object has a volume. This is a significant effect in real experiments. At a certain reasonable level of approximation, the whole thing is known as reference frame corrections. The core issue in reference frame corrections is that the reference frame that is stationary with respect to the walls of the container — that's an obvious inertial reference frame — and the reference frame that is stationary with respect to the solvent, because the solvent is moving this way, are not the same. And reference frame corrections are treated in great detail in a paper by John Gamble Kirkwood and collaborators, a 1960 paper. The paper actually appears after Kirkwood's death in the Kirkwood memorial volume, and his four co-authors insisted he'd done the work so he should be the lead author. So there is something called a reference frame correction. However, the Oseen tensor says there are these interactions, and one moving particle puts a force on its neighbors, unless the neighbors simply are bobbing along with the solvent. And if the neighbors are bobbing along with the solvent moving at this speed V, they're moving, and this velocity and that velocity are correlated. Okay, so that was the Oseen tensor, and now we're going to put down something else. And the next piece comes out of what is known as a fluctuation dissipation argument. Fluctuation dissipation arguments are potentially tricky and hazardous.
That is, it is possible to charge in, make a set of statements, all of which seem reasonable, and you can get surprised by what you predict — and, by the way, you were correct to be surprised, because the prediction somehow went a bit astray. However, in this case, it's fairly clear what you're saying, and it's been tested experimentally. The statement is, if I have a particle here that does this under the influence of an external force — say there are little magnetic bits inside the particle and I apply a magnetic field gradient — a nearby particle will be dragged along, and the correlation between the displacement of particle one and the displacement of particle two, the correlation, is given by the Oseen tensor. That is, it's something that falls off as one over R, and it has this tensor nature rather than everything simply being parallel. So that's the fluctuation dissipation argument. And what we say is that if you see this behavior for driven motion, and if you're in the low speed linear regime, which we are, you will also see it for diffusive motion, and therefore, if you look at the random walk of two particles in a liquid, their motions are correlated as described by the Oseen tensor. Let me say, if you think back to what you may have read about Brownian motion in undergraduate thermal physics, you may have thought, well, gee, aren't the motions of Brownian particles uncorrelated? And the answer is, if they're way far apart, or if you don't know what questions to ask, they appear to be uncorrelated. And the whole study of Brownian motion for a very long time was that you could do all of these statistical correlations, but until Langevin did his very clever equation, until Langevin did his work on Brownian motion, it wasn't clear what questions to ask about Brownian motion, and therefore, you got lots of answers that weren't stunningly helpful. And afterwards, everything worked. So there is the statement: at the front end, we have driven motion, and then if you look at diffusion, you get the same result. Notice, however, we've said the effect falls off as inverse distance, and the basic unit is the size of the particle over the distance, so we have a dimensionless number here. So if the particles are tiny and way far apart — that's what far apart means — the interaction falls off a lot. Nonetheless, it doesn't fall off to zero, and there have been a series of experiments due to Crocker, a very clever fellow, and to Meiners and Quake. And what they do is to say, well, let us look into a system, let us actually, one way or another, track the motion of parallel particles, pairs of particles; we get the particles close together, and we can then actually look at diffusive motion, and we can ask, how does diffusive motion work? And the short-format answer is, you can actually do the experiment, and you find that displacement one, displacement two are correlated. This defines what is known as a cross-diffusion tensor. The cross-diffusion tensor describes the relative motion of pairs of particles, and the cross-diffusion tensor, as is predicted by the fluctuation dissipation arguments, is proportional to the Oseen tensor, and falls off as one over R at long distances. So those are hydrodynamic interactions. Hydrodynamic interactions were inserted into polymer dynamics very early on by the original treatment of Kirkwood and Riseman.
Same Kirkwood as the reference frame effect, a very clever man, the leading American statistical mechanician of the first half of the century — of the last century — Kirkwood and Riseman. And what they did is to say, here is a polymer chain, and we can model the polymer chain as a string of frictional beads, and the frictional beads are attached to each other by little springs, or perhaps the frictional beads are monomers and they're bonded together. And if the chain moves in solution, well, the chain has some center of mass velocity, V. If I apply an external flow field, the polymer flips head over tail, and has — this is coming out perpendicular to the board — it has a rotational velocity omega, and then the individual beads have fluctuation velocities relative to the general drift motion. But you know, these things are all attached to each other. It's like a very long snake. If it moves a large distance, the head and the tail have to stay attached. And therefore, this object will behave sort of like a bag of beads with internal fluctuations. What they put in, though — this is the key issue — is that these beads, or at least some of them, are obliged to be moving with respect to the flow field. Why? Well, let us draw a polymer coil again. It has a center of mass velocity this way. I've applied a velocity shear to the fluid, so the fluid is going in opposite directions. The polymer is tumbling head over tail as it does this. And so up here, the beads try to move with the fluid. And down here, the beads try to move with the fluid. But if you think of it — recall, this is circular motion — that means that here, and, not so probable, let's extend this a bit, here, the beads have to be moving sideways. So at least some of the beads, to some extent, have to be moving with respect to the fluid. If they weren't, the shear would tear the polymer apart. And these motions with respect to the fluid — we have a bead, it's moving with respect to the fluid, it is exerting a force on the fluid. And so all of these beads, moving with respect to the fluid, each create — and there are lots of beads, but they each create — their own flow fields, which act on all of the other beads. And then you have to be a little clever in the math to re-sum things to get a self-consistent solution. And you do, and this is the Kirkwood-Riseman model for polymer dynamics. If you are clever and careful, you can take this model, and you can use it to treat pairs of interacting polymer chains. And when you put it in to treat pairs of interacting polymer chains, which can be done, you get out the concentration dependence of the self-diffusion coefficient or the viscosity. You have to work a bit harder to do that, but it's possible. Okay, I've talked about hydrodynamic interactions. For most of the objects we're talking about, the direct interaction is basically excluded volume. That is, we have a sphere here, and we have a sphere there, and they cannot actually — they can touch, but they can't interpenetrate, because they're solids. Now, there is a bit of a cheat at this point. The bit of a cheat at this point is that the real spheres you're talking about, well, they might be charged, because you put them into water and you suspend them by charging them. They might have very short polymer molecules adhered to their surface to keep them from sticking. They may have van der Waals forces between them, and the net result of all of those is that at very short distances there's some extra interactions that are not exactly hard sphere.
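Just to make the phrase "extra interactions that are not exactly hard sphere" concrete, here is one common cartoon — a hard core plus a screened (Yukawa-type) repulsion. This is not a model taken from the book or the lecture, only an illustration of the kind of short-range potential that surface charge or a thin polymer coat can add; every number in it is made up for display.

```python
import numpy as np

def screened_repulsion(r, sigma, eps_contact, kappa):
    """Hard core of diameter sigma plus a screened (Yukawa) repulsion, in units of kT.
    A generic cartoon of 'extra' short-range colloid interactions, not a fitted model."""
    u = np.full_like(r, np.inf, dtype=float)          # overlap is forbidden
    outside = r >= sigma
    u[outside] = eps_contact * sigma * np.exp(-kappa * (r[outside] - sigma)) / r[outside]
    return u

sigma = 100e-9                 # sphere diameter, m (illustrative)
kappa = 1.0 / 10e-9            # inverse screening length, 1/m (illustrative)
r = np.linspace(1.0, 2.0, 6) * sigma
for ri, ui in zip(r, screened_repulsion(r, sigma, eps_contact=5.0, kappa=kappa)):
    print(f"r/sigma = {ri/sigma:.2f}   U/kT = {ui:.3f}")
```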
Now, approximately speaking, if you're just talking about hydrodynamics, this is not serious, but the real hydrodynamic interaction between two spheres is not exactly Oseen; that is, the real interaction tensor is the Oseen tensor, which is 1 over r, and then there are 1 over r to the n terms, a bunch of higher order corrections. Higher order corrections are short range. Their importance depends on how likely it is to find two of these objects real close together, and that likelihood of being real close together is modulated by these short range potentials, which are a little harder to determine in great detail. So life gets a little trickier than we would like, but approximately speaking, it can be made to work. Suppose, however, we had real hard spheres — and can I get real hard spheres? Sure I can. I can do a computer simulation. Real hard spheres have thermodynamic properties. The thermodynamic property is described by phi, their volume fraction, and what is phi? Phi is equal to the number of spheres in some volume times 4 pi R-sphere-cubed over 3, the volume of a single sphere — this is the total volume of all of the spheres — divided by the total volume of the system. So we can plot from 0 out to — I suppose you could say 1, but you can't get to 1; I'll point out why not in a second — 0 to, well, you actually don't get out here. Because the spheres are hard, their distribution functions, their statistical mechanical distribution functions, don't depend on temperature. That is, either a configuration of spheres is allowed, because no sphere is overlapped, or it's forbidden, because you've got two spheres trying to be in the same place at the same time. And guess what, no matter what you do to the temperature, it's allowed or forbidden. The highest volume fraction you can actually get is close packed, and that's about 0.74 volume fraction. You can be a little more precise, and there are actually several close packed configurations, but they all have the key feature: you can't get more than 0.74. As a historical aside, this result was believed, but had not actually been proven when I was a graduate student. The issue, which is a math issue, is that you couldn't find any configurations that were closer packed than this, but what the mathematicians had to do was to prove there was no way to pack a sphere — here is a sphere — so that the sphere had 13 other spheres touching it. If you could do that, which you can't, you could create configurations that were regular and all these other things, and had densities higher than hexagonal close packed. Well, at some point, fairly recently, the mathematicians were able to prove this result, which most people believed. And we can now say, this is the upper limit, that's how many spheres you can fit, because that's all the packing. On the other hand, you can imagine — our spheres are like ball bearings or toy marbles. So you take the toy marbles or the ball bearings, you put them in a cloth sack, and you shake them. And you really want a cloth sack, not a box, because the ordering from the walls of a nice rectangular box propagates into the inside. So what happens if you put ball bearings or old flour or salt, anything that's nice and regular, in a bag, and you shake it? Well, the first thing that happens if you shake it, it starts to settle. And if you shake it longer, it settles more. And after a while, you shake more, and you just don't make more progress.
And it is fairly hard to pack things so that you get a volume fraction that is much above 0.64. Now, that's not an exact result, because if you keep shaking, you can find better packing. But there's an upper limit about here to what an irregular packing will look like. There are two more pieces. The two more pieces are due to a set of computer experiments by Ree and Hoover. The computer experiments go back almost half a century at this point. And what they did was to study hard spheres and the behavior of hard spheres at different densities. And what they demonstrated is that there is a boundary at about 0.49 — people will quote 0.494; I think that terminal four is a little enthusiastic, given the size of the systems that were studied — and coming up here to about 0.55. The issue with hard spheres is as follows. Suppose I have less dense hard spheres here and more dense hard spheres there. Well, if these were small molecules that could form a liquid, we would have, for example, a gas up here, a liquid down there. And there would be a free surface that would be supported by the attractive interactions between the molecules. So you have a well-defined liquid and you have a well-defined gas. Hard spheres have no attractive interactions. That's why they're called hard. And therefore, there is nothing to keep, to maintain, this boundary between two regions of different densities. Nonetheless, if you do the computer experiments, what you discover is that you have one phase out to about this density. You then have a region in which you have two phases; you have a biphasic region. And in the biphasic region, there are less dense zones and more dense zones. And then up above here, going out to here anyhow, you have a single phase. And what is said is the lower density phase is a fluid — a gas, basically. And the upper density phase is claimed to be an expanded solid, because its structure is more or less solid-like, but the density is too low. But if you think about things, the argument I gave as to why hard spheres cannot have a gas-liquid boundary works just as well for a gas-solid boundary. And therefore, what you presumably actually have here — this must be a gas because it can't be anything else. And this is actually really also a gas, because there's no attractive potential to hold the things into the solid. And therefore, we have here what would appear to be an example of a gas-gas phase transition. But one of the two gases is really very much solid-like. And the solid crystal structure, if you really get up there, versus the closely packed amorphous structures — gee, there are more gaps in it. My search of the literature says this issue has not been studied very recently. And if you studied things with much larger systems, you might get a more interesting result, or a clearer result anyhow. The important issue is, if you are packing hard spheres at about half by volume, there is expected to be a thermodynamic phase transition leading up, at a density of 0.55, to what is a distinct phase. And in between, you have two phases present. And that's worth keeping in mind as we chug through this. Okay, so what have people done? Well, people have done on colloids more or less all of the same experiments that have been done on polymer solutions. We haven't talked about all of the polymer measurements yet. We'll get to them. What sort of things can you do? Well, one thing you can do for a colloid solution is to measure the single particle, self or tracer, diffusion coefficient.
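Before turning to those measurements, it may be handy to collect the volume-fraction landmarks just quoted in one place. The sketch below simply evaluates the phi defined earlier and compares it with the approximate boundaries from this discussion (about 0.49, 0.55, 0.64, and 0.74); the particle number, radius, and sample volume are invented for illustration.

```python
import math

def volume_fraction(n_spheres, radius, box_volume):
    """phi = N * (4/3) * pi * R^3 / V, as defined in the lecture."""
    return n_spheres * (4.0 / 3.0) * math.pi * radius**3 / box_volume

def hard_sphere_regime(phi):
    """Classify phi against the approximate boundaries quoted in the lecture."""
    if phi < 0.49:
        return "single fluid phase"
    if phi < 0.55:
        return "biphasic region (two densities coexist)"
    if phi <= 0.64:
        return "single dense phase (the 'expanded solid'), below random packing"
    if phi <= 0.74:
        return "beyond random (shaken-bag) packing; needs ordered close packing"
    return "impossible for hard spheres"

# Illustrative numbers only: 5e14 spheres of 50 nm radius in one milliliter.
phi = volume_fraction(n_spheres=5.0e14, radius=50e-9, box_volume=1.0e-6)
print(f"phi = {phi:.3f} -> {hard_sphere_regime(phi)}")
```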
And if you do that — this is, for example, figure 10.1 — you measure the diffusion coefficient using any of a variety of methods. And it's some function of sphere concentration, and the diffusion coefficient falls. You have one interesting little bit which has sort of been brought up also with polymers. That is, you can measure the diffusion coefficient over fairly short time periods. You can measure it using light scattering spectroscopy. And you can measure the diffusion rate over very long time periods using, for example, fluorescence recovery after photobleaching. Or you can measure the mutual diffusion coefficient, concentrated system, and use a very large scattering vector. This works too. That's due to Pusey and co-workers. And the net result is you can measure two diffusion coefficients, more or less. You can measure a short time diffusion coefficient, which does roughly this. You can measure also a long time diffusion coefficient. And the long time diffusion coefficient drops off significantly more rapidly than the short time diffusion coefficient that stays down there. For the short time diffusion coefficient, we can write dS is d0 times 1 plus some constant, k1S, times the volume fraction. And you can actually measure k1S, and you get, oh, what's the magic number, minus 1.8 plus or minus 0.06. And there is a theoretical result which is about minus 1.73. Those two are in quite satisfactory agreement, because part of this k1S depends on fairly short range interactions. And any deviation of the spheres from perfectly hard spheres would give you a slightly different number here. So you can actually predict this concentration dependence. You can also do a completely different set of experiments. And the completely different set of experiments are due to Degiorgio. And the notion here is we will take spheres. And because we have a wonderful supplier, we have spheres that are optically anisotropic inside. And as a result, if I send in polarized light and look at the scattered light, some of the scattered light is polarized the same way as the incident light. And that basically just says the light sees a sphere. And some of the light is depolarized. So I put in vertically polarized light, light polarized perpendicular to the scattering plane. I look at — I'm putting in prisms here and here — horizontally polarized light. And that light, the amount of that light, is determined by the sphere orientation. Now there are two features of the VH scattering. First of all, if here's a depolarizing sphere, if it rotates — the sphere rotates — the amount of VH scattering changes. The sphere is doing Brownian random rotation, so the intensity of the VH scattering fluctuates in time. And the time scale of the VH intensity fluctuations exactly tracks the time scale on which the sphere reorients. So after a great deal of clever math, I can use VH scattering to measure rotational diffusion. Question? Is this an average measure of the light? What you do is to measure the time correlation function of the scattered light. So we have the intensity of the VH scattered light at time t. We now measure it at a later time, t plus tau. We do this, repeat this, for lots and lots of times t to get an average behavior. And what is left is a function of tau, and tau is the time separation. And so what it means is, if the light is bright at one instant in time, so IVH of t is large, because this is pointing exactly the right way, after a while the sphere forgets which way it's pointing, and IVH relaxes back to some average value.
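The "multiply intensities at two times and average over many start times" operation just described is easy to demonstrate on synthetic data. The sketch below builds a fake intensity record with a built-in memory time (an Ornstein-Uhlenbeck process shifted to positive values — purely illustrative, not real VH data) and then forms <I(t)I(t+tau)> for a few lags; all parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic intensity trace with an exponential memory time tau_c.
# (Ornstein-Uhlenbeck noise shifted to positive values; illustrative only.)
n, dt, tau_c = 200_000, 1.0e-5, 2.0e-3
a = np.exp(-dt / tau_c)
x = np.zeros(n)
for i in range(1, n):
    x[i] = a * x[i - 1] + np.sqrt(1.0 - a * a) * rng.normal()
intensity = 1.0 + 0.3 * x            # mean near 1, fluctuating part 0.3*x

def intensity_correlation(sig, lag):
    """<I(t) I(t + tau)>, averaged over many starting times t."""
    return np.mean(sig * sig) if lag == 0 else np.mean(sig[:-lag] * sig[lag:])

for lag in (0, 50, 200, 800, 3200):
    c = intensity_correlation(intensity, lag)
    print(f"tau = {lag*dt:.1e} s   <I(t)I(t+tau)> = {c:.4f}")
print(f"<I>^2 = {np.mean(intensity)**2:.4f}   (the long-time limit)")
```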
The correlation function, however — this is why you have stat mech classes discuss correlation functions — does not average to zero; instead, if the light is initially bright, it remembers that it's bright and stays bright for a while. If the light is initially dim, it remembers it's dim, and it stays dim for a while, and these products do not cancel out if you form them properly. So the first thing is you can see rotational motion. The second issue is, if I have two spheres, the directions in which they point — the static correlation averages to zero. That is, the spheres will bump into each other, they won't get closer than a certain distance, but these optical anisotropies are hiding inside the spheres, and so one sphere does not know which way the other sphere is pointing. As a result, the VH scattering from this sphere and the VH scattering from that sphere are uncorrelated, and when we do quasi-elastic light scattering, we see the single particle correlation function, we measure how fast single particles are moving, and we don't see any of the effects due to the position of this particle and that particle being correlated. This is not the same as regular quasi-elastic light scattering, in which the relative displacements of these two particles contribute to the light scattering spectrum. So in any event, Degiorgio was able to do this, and he was able in one experiment to measure both how dS depends on concentration — and he found a slope of, oh, minus 1.83 — and he was also able to measure how the rotational diffusion coefficient depends on concentration, and he found a slope of minus, oh what was it, about minus 0.55. So sphere-sphere interactions are less effective at affecting rotation than they are at affecting self-diffusion. He also was able to measure the curvatures of these curves. They both curve upwards; that is, you have something that's headed down, but since it can't go negative, it sort of has to pull away as it gets to larger volume fractions, and it does. Okay. What sort of other experiments can you do? Well, one thing you could do is say, I can measure the self-diffusion coefficient of the sphere. I can measure the viscosity of the sphere system. This was done by van Blaaderen. And what van Blaaderen did was to show that you have a long-time diffusion coefficient, and that at large concentration — not small concentration, just large concentration — D-long times the viscosity was approximately constant. So you had a pseudo-Stokes-Einstein behavior. It's pseudo because the product isn't necessarily what it was at dilute solution, but there's a region in which self-diffusion, the slow self-diffusion coefficient, times the viscosity is more or less independent of concentration. Okay. So that's those experiments. Another set of experiments you can do is to measure mutual diffusion. The mutual diffusion coefficient, d sub m, describes the relaxation of a concentration gradient. Now you would correctly infer that the single particle diffusion, particles each individually moving around at random, also contributes in part to the relaxation of a concentration gradient. If we have interacting objects in a concentration gradient, more particles here, fewer particles there, the interactions between the particles contribute to d sub m. And you can then say that d sub m should depend on concentration as 1 plus k 1m times phi. So you can measure the slope. There are a couple of challenges here. The first challenge is the slope isn't very big. Big issue. The second issue is you can't go out to very large phi.
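To see why a small slope is such a nuisance, here is a toy fit: a made-up D_m(phi) = D_0 (1 + k_1m phi) with a modest negative k_1m, half-a-percent measurement noise, and a narrow range of phi. None of these numbers come from the experiments being discussed; the point is only that the fractional change in D_m over the accessible range is a few percent, so the recovered slope scatters noticeably.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical values: D0 and k1m are invented, not the measured ones.
D0, k1m = 4.0e-12, -0.5
phi = np.linspace(0.005, 0.06, 12)               # accessible volume fractions
noise = 1.0 + rng.normal(0.0, 0.005, phi.size)   # 0.5% measurement scatter
Dm = D0 * (1.0 + k1m * phi) * noise

slope, intercept = np.polyfit(phi, Dm, 1)        # straight-line fit D_m = a*phi + b
print(f"recovered k1m = {slope / intercept:+.2f}   (input value {k1m:+.2f})")
print(f"total change in D_m over this range: {100 * abs(k1m) * phi[-1]:.1f} percent")
```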
Remember phi is always less than 1, but you can't go out to phi of a half or a tenth or whatever, because if you do, approximating this as a straight line isn't adequate. Furthermore, if you have concentrated spheres in a solvent, unless you're very careful, the spheres and the solvent are not perfectly, or nearly perfectly, index matched, and you get multiple scattering, in which the photon bounces off several spheres before it gets to your detector. You can describe the diffusion measurement as a Doppler shift, but it's not straight line motion at constant speed. And these multiple Doppler shifts make the particles appear to diffuse faster than they're really moving. Nonetheless, Mos and collaborators, using homodyne coincidence spectroscopy — if you want to know what it is, you read the two papers where I first invent the technique and then demonstrate that it works; you can read my two papers doing that — were able to measure k 1m. There is, however, another complication, which is not trivial, which is you would like to know what the volume fraction of spheres is. And the problem with — there are several problems with this — namely, if you actually have spheres in solvent, they may imbibe a bit of solvent. If you drop the spheres into vacuum and measure their radius with an electron microscope — gee, you measure it with an electron microscope — there may be artifacts that change the shape a bit. And so you know the volume fraction, but you don't quite know it perfectly. Nonetheless, k 1m was something like minus 0.3 or minus 0.8, approximately. Is that a large error bar? Well, you should realize that if you look at the plot of d sub m versus phi, over the observed range of phi, d sub m only changes by a modest number of percent. The slope then, to be accurate, requires that you have a measurement accuracy which is small relative to a modest number of percent. You know, that's pretty difficult. These are wonderful measurements, and this is in reasonable agreement with the theory. At least if you read Phillies and Carter — Carter was one of my Michigan undergraduates, a very smart woman — the predicted value was a little smaller than that, maybe minus 0.9, but the number is certainly in the right ballpark. Okay. If you increase the concentration of spheres, you can do light scattering spectroscopy from a uniform system, or you can do light scattering spectroscopy from a system in which a small number of the spheres are different from the rest, so you only get light scattering from a few spheres. The first experiment gives you the mutual diffusion coefficient. The second experiment actually gives you self-diffusion. The important issue is that both of these measurements, though not at the same concentration, find a multimodal relaxation. Multimodal — well, if we plot intensity at one time, intensity at the other, if we plot the correlation function of the scattered light versus time, for simple Brownian motion, you get a straight line of constant slope. For these experiments, what you find is an initial decay, and then you get a second decay, which is also more or less exponential. And you can then pull out, approximately speaking, two diffusion coefficients: one diffusion coefficient that describes short-term motion, and one diffusion coefficient that describes long-term motion. So at long times, you have a long-time diffusion coefficient.
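Pulling two relaxation rates out of a spectrum like that is usually done by fitting a sum of two exponentials. Below is a minimal sketch of that step on synthetic data — the q, the two diffusion coefficients, the amplitude split, and the noise level are all invented for illustration, and real analyses need rather more care about baselines and weighting.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

# Synthetic bimodal field correlation function (illustrative numbers only).
q = 2.0e7                                        # scattering vector, 1/m
D_fast, D_slow, A_fast = 3.0e-12, 2.0e-13, 0.6   # m^2/s, m^2/s, fast amplitude
t = np.logspace(-5, 0, 60)                       # delay times, s
g1 = A_fast * np.exp(-q * q * D_fast * t) + (1.0 - A_fast) * np.exp(-q * q * D_slow * t)
g1_noisy = g1 + rng.normal(0.0, 0.003, t.size)

def bimodal(t, a, gamma_fast, gamma_slow):
    """Two-exponential model: a*exp(-gamma_fast*t) + (1-a)*exp(-gamma_slow*t)."""
    return a * np.exp(-gamma_fast * t) + (1.0 - a) * np.exp(-gamma_slow * t)

(a, gf, gs), _ = curve_fit(bimodal, t, g1_noisy, p0=(0.5, 1.0e3, 1.0e2))
print(f"amplitude of fast mode = {a:.2f}")
print(f"D_fast = {gf / q**2:.2e} m^2/s   D_slow = {gs / q**2:.2e} m^2/s")
```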
At short times, both processes are presumably active, and you see an average relaxation, which is typically pretty close to the short-time relaxation rate, or you do clever high-accuracy measurements and curve fitting, and you can pull out the two diffusion coefficients. Now we come to an odd result of Segre and Pusey. And what Segre and Pusey did is to do this experiment, and they did this experiment at a series of different scattering angles. So you have an incident light beam. The scattered light goes out to the side and is measured. This is where you measure I of t. And there is a scattering angle theta that by convention is defined the way I have described it. As you change theta, you change the distance over which you are measuring particle motion. That is, at low theta, you are looking at very long-distance motion. At larger theta, the distance becomes shorter. Have you seen this before? Well, if you have sat through a quantum course, you first probably saw the first-order Born approximation, yes, or you had a description of scattering in solids. In all of these, you had scattering of something with wave vector coming in, k being 2 pi over lambda. And the scattered light, or x-rays or whatever, comes out at some k which is very close to the same but not quite. K initial, k final. And k final minus k initial defines a scattering vector q. Well, q does not measure particle motions over a single distance to see how long the motion takes. You are looking at spatial Fourier components of the density. And in this case, because it is a liquid, the spatial Fourier components relax as time goes on. The relaxation that Segre and Pusey found was bimodal. The other thing they found was that if you plot the intensity of scattering versus q, you find the curve has bumps. And in particular, they found at some scattering vector a maximum in the scattering. And the maximum corresponds to a typical spacing. We have spheres. There are packing constraints between the spheres. The concentration here, after all, is pretty high. And therefore, there is a typical distance between the spheres. And if you use a q that corresponds to 1 over this distance, you get a lot of scattering because there are a lot of pairs of spheres separated by that distance. Boy, did I just oversimplify, but that's the general idea. In any event, the first thing they found was that if you compare d-short and d-long, yes, both of these diffusion coefficients depend on q, at least for some q, but their ratio goes as q to the zero. The ratio is approximately q-independent. The other thing they found: here is a wave vector q sub m at which S of q is a maximum. Now, if I say S of q is a maximum, that means that it's relatively easy to make fluctuations of that wave vector, and therefore, at that wave vector the fluid in some sense is relatively soft, because it's easier to make fluctuations of that wave vector. And what they found is, if you compare d-zero with the diffusion coefficient you measure at q sub m, of course it's going to involve inverses, but the ratio scales as eta-zero over eta at this concentration. The viscosity does determine the diffusion coefficient, but it does it at a specific wave vector, namely the wave vector at which it is easy to make fluctuations. How can you explain that? Well, S of q is a maximum where it's relatively easy to make fluctuations, so the fluid is soft. And if you apply a shear to the fluid, the fluid tends to yield by yielding where it's softest, namely here, in some sense.
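As an illustration of how a fast and a slow mode are pulled out of a measured correlation function, here is a minimal Python sketch. The two-exponential model, the scattering vector, and the synthetic data are all assumptions made for the example; a real analysis would worry about baselines, noise statistics, and whether two exponentials are really the right description.

```python
# Sketch: fitting a bimodal field correlation function with two exponentials and
# converting the relaxation rates to diffusion coefficients via D = Gamma / q^2.
# The q value, the two-mode model, and the synthetic "data" are all assumptions.
import numpy as np
from scipy.optimize import curve_fit

q = 2.0e5                                  # scattering vector, cm^-1 (assumed)
t = np.logspace(-5, 0, 80)                 # delay times, s

def g1(t, A, gamma_fast, gamma_slow):
    # two-exponential model for the field correlation function
    return A * np.exp(-gamma_fast * t) + (1 - A) * np.exp(-gamma_slow * t)

rng = np.random.default_rng(0)
data = g1(t, 0.6, 2000.0, 80.0) + 1e-3 * rng.standard_normal(t.size)  # fake data

popt, _ = curve_fit(g1, t, data, p0=[0.5, 3000.0, 50.0])
A, gf, gs = popt
print(f"D_short ~ {gf / q**2:.2e} cm^2/s,  D_long ~ {gs / q**2:.2e} cm^2/s")
```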
Of course, that yield-where-it-is-softest picture is only a qualitative explanation of a microscopic result; more precise we cannot be. I should, however, also note the results of Delsanti. Very nice person. And if we put in d sub s, there's a gamma fast, there's a gamma slow, there are two relaxation rates, and as we increase the concentration, the fast mode relaxes more rapidly, and the slow mode relaxes more slowly. Okay, why is this interesting? Well, if you do the same experiment on a polymer solution, you see the same thing. You see a fast mode and a slow mode, and their concentration dependences track each other. If you put probes into the polymer solution, you get other results, and we'll consider this in much more detail later on in the course. So that is, you see multiple modes and their concentration dependence. Okay. And now we will shove ahead; we've discussed light scattering, and I have skipped some details here, but they're useful to talk about after we've talked about polymer systems. We will now skip ahead and talk about particle tracking. And the issue is that you can look in with a video camera, like the video camera that's recording, only fancier, and you can actually in a microscope see particles and you can watch them move. And therefore you can pull out full details of the statistics of the motion. Now there are a couple of disadvantages to this experiment that aren't quite as obvious. First is you're limited by the video scan rate, and you cannot do video scanning with current technology on a microsecond time scale. The second thing is you're only looking at a small number of particles at once, so you have to spend a lot of time and do a lot of computation if you want to pull out considerable results. The time scale problem is not as serious as it sounds. That is, instead of speeding the camera up, what you can do is to say we will increase the viscosity of the liquid, say replace water with water-glycerol, and presumably all of the same things will happen, but they will happen on a time scale described by the viscosity, which is one or two or three orders of magnitude longer. And therefore instead of speeding up the camera, we slow down the system. And the nice set of experiments are due to Kasper. And what was done was to watch this motion as you increase the polymer concentration, and the first thing that was found was the particle motion slows down. And the next thing you find is that the particle motion becomes saltatory. Saltation is moving by jumping. A cricket spends much of its time moving by saltation. So what happens is the spheres sort of sit here, and then move, and sit there, and then move. This is not an actual picture. And that's saltatory motion. And if you run up the concentration enough, eventually you get to the point where there is relatively little motion. The spheres sort of are trapped at their current locations. Okay. Now you might like to interpret this. You could do this measurement, and that measures diffusion. You can also do quasi-elastic light scattering spectroscopy. And that measures a diffusion rate, and you can compare the two. And what was done was to interpret the light scattering spectra. This interpretation is not always correct. You have probably seen it before as e to the minus q squared times the mean-square displacement, from which you could get out a mean-square motion that depends on how far apart the two times are. That's according to this equation, which is only true under certain conditions.
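To see numerically what "only true under certain conditions" means, here is a small Python sketch comparing the exact spectral average, the average of exp(i q delta x), with the Gaussian formula exp(minus q squared times the mean-square displacement over two). The one-population and two-population displacement distributions below are toy assumptions, not data.

```python
# Sketch: compare the exact spectral average <exp(i q dx)> with the Gaussian
# approximation exp(-q^2 <dx^2> / 2).  The one- and two-population displacement
# distributions are toy assumptions, not measurements.
import numpy as np

rng = np.random.default_rng(1)
q, N = 1.0, 200_000

def compare(dx):
    exact = np.real(np.mean(np.exp(1j * q * dx)))   # direct ensemble average
    gauss = np.exp(-q**2 * np.mean(dx**2) / 2)      # Gaussian-assumption formula
    return exact, gauss

# homogeneous Brownian case: a single Gaussian, the two expressions agree
print("homogeneous  :", compare(rng.normal(0.0, 1.0, N)))

# heterogeneous case: a mix of slow and fast particles, the two disagree
dx = np.where(rng.random(N) < 0.5,
              rng.normal(0.0, 0.3, N),
              rng.normal(0.0, 2.0, N))
print("heterogeneous:", compare(dx))
```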
You could also measure mean square displacements microscopically. Yes. At a volume fraction of about 0.6, these two agreed. At smaller concentrations, they didn't. However, at the volume fraction of 0.6, the light scattering spectrum was a single exponential. And as we know from Doob's theorem, this result is not true if the spectrum is not a single exponential. If the spectrum is a single exponential, Doob's theorem says this result is true. And so when Doob's theorem said you were in good shape with this equation, which it is at only the one volume fraction, the indirect and direct measurements of particle displacements agreed with each other. You can, however, because you're doing video tracking, do measurements that you cannot pull out of quasi-elastic light scattering. For example, you could measure the particle displacement between times t and t plus tau, and the particle displacement during the adjoining time interval starting at t plus tau. You do both of those. And this was actually done by Gao and Kilfoil. And what they demonstrated, approximately speaking, was that, first of all, if you had a substantial motion this way in one time interval, in the next time interval you tended to have some recoil, as the particle more often than not would drift back. You could also, and a series of people have done this, look at how neighboring particles move. And what happens if you look in at spheres and look at their motion? Well, what you find is that there are groups of spheres that form somewhat ordered clusters, I'm being imprecise about that, and that move more or less as a group, sort of as a ball, in a rather slow manner. You find other groups of particles, and the other groups of particles lie on ribbons, typically, that move rapidly. You may say, okay, why is that happening? What is going on? And I can give one piece of an explanation. Namely, there are also large-scale computer simulations of glass formation. And there are a number of people, a large group is at Michigan, that have done this sort of work. And they find, simulationally, that as you approach the glass, you get groups of more mobile particles and groups of less mobile particles. The less mobile particles tend to lie in lumps; the more mobile particles tend to lie on lines and ribbons. So we have now seen experiments that validate the computer simulation work. Okay. What else can we do? Well, as I was saying, we can observe single particle diffusion. And so here is a probe, radius rp. Here is a matrix particle, radius rm. rm may be larger than or less than rp. And you can measure the self-diffusion coefficient of the probe as you change the concentration of the probe. And you can do this for tracer particles that are small or large relative to the matrix particle. And here are small probes, small relative to the matrix. And here are middle size. And here are large probes, meaning small matrix particles. And you can actually do these experiments and you can measure how the probe diffusion coefficient depends on the relative size of the probe particle and the matrix particle. Okay. We are almost out of time, but I have enough time to discuss viscosity and viscoelasticity. For hard spheres, viscosity as a function of concentration has a fairly consistent experimental form that everyone has found. And here, with hypothesized data points on it, is the lower concentration region, e to the alpha phi to the nu.
And we chug along and we get to some point, and at some point we have a power law behavior, phi to the x. You can continue the mathematical curve of the stretched exponential, but the data is doing this. In at least a few experiments people have gotten points that are very close to here, and it does not appear that there is any crossover regime in most systems. We talked about a similar behavior for polymer solution viscosity, but for polymer solution viscosity at the crossover, the slope of the power law line and the slope of the stretched exponential line were the same. For hard spheres the crossover is not analytic; the power law line just takes off. There has been a substantial amount of work to study this, or rather to study the viscosity of hard spheres. Most people simply noted that the viscosity goes up very quickly; they didn't have the functional form description that would tell you sharply what you were seeing. You can measure approximately where this is, and this crossover occurs at a volume fraction of about 0.4 to 0.45. It's a little hard to measure more exactly than that, and it's slightly different in different systems. And the crossover occurs at a viscosity eta over eta-zero, where eta-zero is the solvent viscosity, of 5 to 15. Also slightly different in different systems. Now most of these studies were interested in something completely different. Namely, if you continue this curve up to 0.494, give or take, there is a viscosity at the point that the thermodynamic phase transition sets in. And that viscosity is about fifty, plus or minus. It's a little hard to be more precise, because this is an extremely steep curve. A small error in determining or deciding what the concentration is leads to a significant difference in viscosity. The core point I would make is that this crossover is down here. So what happens at the phase transition is up here, and very, very certainly the stretched exponential to power law transition is not even vaguely at the same location as is the thermodynamic phase transition. We are out of time. We're pretty close. And that's it for this lecture.
Lecture 18 - colloid dynamics, a topic rarely included in discussions of polymer dynamics. The forces are the same; the particle shapes are different. George Phillies lectures on polymer dynamics, based on his book "Phenomenology of Polymer Solution Dynamics".
10.5446/16213 (DOI)
Classes in Polymer Dynamics. Based on George Phillies' book, Phenomenology of Polymer Solution Dynamics, Cambridge University Press, 2011. And today, this lecture is lecture 17, Final Discussion of Probe Diffusion. Good morning. Today will be our last lecture on probe diffusion. We're going to consider a few particular special cases and sorts of experiments that also look at probes, mesoscopic particles moving through polymer solutions, and we'll see what information we get out of those measurements. We'll also consider, at least at short length, a summary of the sorts of experiments that we have looked at and what we get out of them. So the first thing we're going to consider is rotational diffusion. That is, we have a polymer solution. We put into it a probe. Even if the probe is a sphere, we can arrange its innards to be ordered in some sense. And if the innards are ordered, when light impinges on the sphere, the scattered light is depolarized. The intensity of the depolarized light depends on the scattering angle and the orientation of some internal axis of the sphere with respect to the incident light and the scattered light, and in that relationship there is a description of the sphere's orientation. If the sphere rotates, the intensity of the depolarized light changes, and therefore, if we have polarized light in, depolarized light out, and we monitor the intensity-intensity correlation function of the scattered light, we can actually measure rotational diffusion times for nominally spherical particles in a polymer solution. This has been done by several authors. I will note here, for example, a study worth pondering, which looked at spheres in a 4-megadalton xanthan and which did a whole bunch of different experiments. The bunch of different experiments is useful because you have one system, you look at a series of different parameters, and by comparing the behavior of the parameters, you get out considerably more information than you would out of the same four parameters being measured in four different systems. What is being varied in the experiment is the concentration of the polymer. Varying the concentration of the polymer, what was found was that the probe diffusion coefficient and the rotational diffusion coefficient fall, but not very much. You can follow along in the book in Figure 9.37. Also measured was the sedimentation coefficient, which was affected considerably more dramatically than the probe or rotational diffusion coefficients were, and also measured was the viscosity; I plot here the inverse of the viscosity, the fluidity, and the fluidity of the solution was much more affected by the polymer than were any of the probe transport coefficients. What we can infer from this, first of all, is that you're seeing here very transparently non-Stokes-Einsteinian motion. That is, the viscosity of the solution, the macroscopic viscosity, is not determining the transport coefficients. Furthermore, the sedimentation measurement is what can be called a microrheology measurement; that is, you have an object, you are applying a true external force to the object, you're watching the object fall through solution. And the apparent viscosity that you can infer from the sedimentation coefficient is not the same as the macroscopic viscosity of the fluid by direct measurement. Similar measurements have been done by any number of other people. For example, we have Figure, let's erase this, we have Figure 9.38, which looks at tobacco mosaic virus.
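As a worked illustration of how rotational and translational diffusion coefficients are separated in such a depolarized-scattering measurement, here is a minimal Python sketch. It assumes the standard result for an optically anisotropic sphere, that the VH decay rate is Gamma = q squared D_t plus 6 D_r, and the decay-rate "data" are invented numbers.

```python
# Sketch: separating translational and rotational diffusion from depolarized (VH)
# decay rates, using the standard result Gamma_VH(q) = q**2 * D_t + 6 * D_r for an
# optically anisotropic sphere.  The decay-rate "data" are invented numbers.
import numpy as np

q     = np.array([0.8, 1.2, 1.6, 2.0, 2.4]) * 1e5    # scattering vectors, cm^-1
gamma = np.array([610, 770, 990, 1280, 1630])        # VH decay rates, s^-1 (made up)

slope, intercept = np.polyfit(q**2, gamma, 1)
D_t = slope              # translational diffusion coefficient, cm^2/s
D_r = intercept / 6.0    # rotational diffusion coefficient, s^-1
print(f"D_t ~ {D_t:.2e} cm^2/s,  D_r ~ {D_r:.0f} s^-1")
```

Now, back to Figure 9.38 and tobacco mosaic virus.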
The virtue of tobacco-mosaic virus is that it's a rod, it's a very uniform rod, all of the TMV molecules are the same size. The disadvantage of a TMV and many other biological probes is that they're charged and you have to worry a little bit about electrostatic interactions between the probe and the environment. Well, you can't always have everything. Nonetheless, what was done was to measure dr again. And once again, on an appropriate plot of dr versus concentration, what was found was a stretched exponential concentration dependence. The interaction between the polymers and the probe giving us a decay of the decrease of the rotational diffusion coefficient as polymer concentration was increased. An interesting variation on this, see the text again, was to look at polybenzyl-L-lutamic acid. As the probe, it's a nice, another rod-like polymer, it's like TMV. The background material, however, in this case, were little spheres. And so you are looking at a rod diffusing through spherical particles. That's a very clever additional experiment. We'll come back to spheres diffusing when we get along to look at, um, G, colloids. In fact, there's a whole chapter on colloids and we are coming to it quite soon. What sort of other experiments can we do? An extremely important whole set of experiments is provided by particle tracking. The notion of particle tracking is that instead of looking at light scattering spectroscopy, which measures the relaxation of a spatial Fourier component, a fluctuating component in the concentration, or looking at fluorescence relaxation after photobleaching or some similar method, which gives you a diffusion measurement, but only over fairly long distances, we will actually look at the particles through a microscope. We'll use modern video technology to measure position versus time of the particle, and we can actually get detailed statistical distribution of the particle position as a series of times, and therefore we can get detailed statistics for displacement factors r of tau minus r of tau plus t. And from the particle tracking, what we can do is to actually talk about in somewhat more detail how particles move in solution. Now, the limitation at the time I speak, I suspect this is going to be obsolete in a few decades or sooner, the limitation on the technique is that if you want to measure how the particles move, you actually have to record the positions. This is done on a large scale in many cases using video technology, using video cameras, and if you do this, you hit a limitation, namely the frame rate of the video camera. You are taking an image of the system, but you can take the image of the system. What? 30 times a second, 60, 100 times a second. On the other hand, light scattering spectroscopy, you get particle position information going down to the certainly 50 or 10 nanosecond time scale and routinely down to the microsecond time scale. So there are still some technical limitations, but particle tracking is really useful because you actually measure directly how the particles move. Now, there is a historical theoretical basis for this, which is the Langevon equation, which describes particle motion. The Langevon equation says, we'll keep this very simple, we have a particle in solution, it is moving, it experiences a drag force minus Fv, which in the original model was the Stokes law drag coefficient. We'll come to the problem with that in a second. The particle is also subject to what is called a random force F. 
Now, the force F is not random in the sense that there's no physical determination. It's due to the solvent molecules around the particle we're interested in. We can't see the solvent molecules directly, so we don't know what that random force is going to be. Nor, since we can't see all the solvent molecules and track all of them and all these other things, can we predict what F of t is, in the sense we can predict what the sun's gravitational force on the Earth will be for the next year. So there's a force we can't measure directly or predict. However, it is a force, and we can therefore use the random force and the drag force to write F equals ma for a Brownian particle. The random force in the Langevin model has the important feature that if I tell you the value of the force at one time, that gives me no information whatsoever about the value of the force at any other time. The force fluctuates, but the correlation time in the force is approximately zero. From this, and a great deal of work described in my other book, Elementary Lectures in Statistical Mechanics, and there's an accompanying set of video lectures on that now being produced too, from this we can say, gee, we can calculate how the particle moves and we can calculate what the particle's displacements look like. And for a one-dimensional component of the random displacement, say displacement along the x-axis, the probability of some displacement delta x during time t is proportional to e to the minus delta x squared over, in one dimension, 4 D t. Isn't it? Yes. Well, in any event, the key point is, it is a Gaussian in delta x. Now, the proof that the form is a Gaussian relies on two pieces, and one piece is called the central limit theorem, and the central limit theorem says that if you add up a large number of random events, the distribution of the sum of the random events, the statistical distribution, is a Gaussian. So if I roll one die, it would be a d6, right? You get a 1, 2, 3, 4, 5, or 6, equally likely. If I take 100 dice, roll them all, and add up the numbers I get, the average I get for 100 dice is 350 as the sum of all of the points showing; that is, 100 times 3.5. However, if I look at the distribution of die rolls, the distribution of the sum of 100 rolls around this average, which is 350 for 100 dice, I get a Gaussian of some width. The other piece of this, which is just as important as the central limit theorem, is that this is what is known as a Markov process. A Markov process is a process with no memory, so that the velocity of the particle over, or displacement of the particle over, one piece of time, and the displacement over the next piece of time, and the displacement over the third piece of time are all uncorrelated. Now, obviously at very short times this is not a Markov process, but over reasonable time periods, it's in my other book, you can show that the behavior from the Langevin equation is a Markov process, and therefore you get this Gaussian distribution of displacements versus time. Okay, now having said that, there are then people who say, well, we have this nice result; you can find the result in the beautiful book of Berne and Pecora.
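As a quick numerical illustration of the central-limit statement above, here is a Python sketch: sum many independent, decidedly non-Gaussian steps and compare the resulting displacement histogram with the Gaussian form P(delta x, t) proportional to exp(minus delta x squared over 4 D t). The walk is synthetic, and identifying the variance of the sum with 2 D t is just the usual bookkeeping.

```python
# Sketch: the central-limit statement in action.  Sum many independent steps and
# compare the displacement histogram with the Gaussian of the same variance,
# identifying that variance with 2*D*t in the usual way.  Everything is synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_steps, n_walkers = 1000, 50_000
steps = rng.uniform(-1.0, 1.0, size=(n_walkers, n_steps))   # non-Gaussian single steps
dx = steps.sum(axis=1)                                      # net displacement of each walker

var = n_steps / 3.0                          # variance of the sum; plays the role of 2*D*t
edges = np.linspace(-4 * np.sqrt(var), 4 * np.sqrt(var), 41)
hist, _ = np.histogram(dx, bins=edges, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
gauss = np.exp(-centers**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
print("largest deviation from the Gaussian form:", np.abs(hist - gauss).max())
```

Back to Berne and Pecora.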
And they actually show you a considerable number of details, and the Berne and Pecora book on light scattering then shows that, well, if you have a Langevin particle, the displacement distribution is a Gaussian, and therefore the light scattering spectrum, and I will write the field correlation function, is some constant times e to the minus q squared times the average square displacement during the time t, over two. This result is perfectly true for the system in which it was derived, namely a system in which the particle motions are a Markov process, there's no memory here, and in which the drag force on the particle is minus f v, where f is the Stokes law drag coefficient, if you have spheres. Now we come to the minor technical difficulties with this result. The first technical difficulty, which goes back, oh, to the 1970s, is that if you have a Brownian particle in a liquid, the Brownian particle in its motion does not move at a constant speed in a constant direction; its velocity is not constant, which is what Stokes law assumes. Instead, Brownian particles move like drunkards, I will not attempt to act out drunkenness too much, but it is a drunkard's walk, and if you have a particle moving with an irregular speed through a fluid, rather than in simple driven motion, Stokes law is not the correct form, and the correct form is the Boussinesq equation. A significant effort in statistical mechanics back in the 1970s, there are several very nice papers about it, you can look up for example Chow and Hermans, and there's a bunch of others, but that's the first that comes to my memory, demonstrated that if you have a particle that obeys the Boussinesq equation, this is spheres in water doing Brownian motion, the calculated diffusion coefficient is the same as the diffusion coefficient that you get assuming Stokes law. However, we now have two more technical obstacles. The first technical obstacle is, of course, that we're talking about complex fluids: polymer solutions, colloid solutions, concentrated micelle systems; you can come up with a very long list of these. All of the very long list of these have in common, and it is why we are interested in them as complex fluids, that rather consistently the fluids are viscoelastic. The statement that the fluids are viscoelastic means Stokes law does not apply. It is possible you could rejigger the Boussinesq calculation to handle the statement that the viscosity is frequency dependent; there are several other ways of saying the same thing. However, the important result is the system has memory. If you have a viscoelastic fluid, thanks to the fluctuation-dissipation theorem, we can say that if we look at what we were calling the random force, the random force correlation has some dependence on tau, so the random force at one time and the random force at another time are partly correlated with each other, and thus the Langevin description does not apply. Now you might start to worry, gee, how can I tell if Langevin is applicable? How can I tell if I'm in a system in which I can use this result and that result, as opposed to a system in which I can't? Fortunately there is a wonderful answer to this, namely Doob's theorem; let's see, it's a nice mid-1940s result, it's an analysis of random motion, it's an analysis of random walks that show this sort of behavior.
And the important answer is that if you have a Brownian Markov process, a process for which the whole development leads to this result, then we can say with absolute mathematical certainty that the light scattering spectrum, gamma being a number, is a single pure exponential. If the light scattering spectrum is not a pure exponential, then you aren't looking at something described by the simple Langevin equation. The displacement distribution function is not a simple Gaussian; in fact, successive displacements are correlated. The spectrum is not determined by the mean square particle displacement, and you actually have to know what you're doing somewhat to get any further. You might ask, well, what do we do to replace the Langevin equation? And the answer is there is an exact result due to Hazime Mori and Bob Zwanzig, known as the Mori-Zwanzig theorem, that is also covered in great detail in my book, Elementary Lectures in Statistical Mechanics. This is an exact result and it tells you what happens next. Of course, evaluating it, you may correctly infer, is a little more tricky. Okay, so I have now said, gee, there's this question of what is going on, and you have to realize that there is something of an intellectual minefield hiding here, and people who say, oh, Gaussian, Langevin equation, well, that's unlikely to be correct; in fact, as I'll show you in a second, we know it's wrong in polymer solutions. And now we come to several extremely important works. The first of which is the nice paper by Apgar, by Tseng, and a list of other authors. The paper, as an aside, is noteworthy as a superb illustration of professional ethics. Namely, I have listed two names because the paper has two first authors, and there is even a footnote telling people there are two first authors, and they equally deserve credit for their incredibly important piece of work. And what they did was to do particle tracking of 0.43 to 0.6 micron spheres, and they looked in a series of solutions, and they started out with, okay, does the experiment work? Well, let's look in something like water or water-glycerol. They also looked in solutions of actin, and solutions of actin plus fascin. These are proteins. Actin polymerizes and makes long threads. Indeed, if you're trying to work with it, one of the problems is that it will do so under all sorts of different conditions, and controlling the polymerization is a bit tricky. And what they actually did was to measure, not talk about, but measure, the displacement distribution at different times. Now they were doing this with particle tracking, and the limitation of doing it with particle tracking and computer analysis, you can do computer analysis of images these days, is that the number of measurements you get out is a bit less than you would get out if you were doing quasi-elastic light scattering. And so there's a certain amount of signal-to-noise question, which they obviously worked very hard on. The important result is that P of delta x t, for these spheres and these protein solutions, which have been very heavily studied by variations on this technique, is not a Gaussian in delta x. Well, since we believe that polymer solutions are viscoelastic, that's not a big surprise. If you understood the underlying theory, that would be an immediately obvious result.
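If you have tracking data in hand, one standard way to quantify "not a Gaussian" is the non-Gaussian parameter alpha_2 = <dx^4> / (3 <dx^2>^2) - 1, which is zero for a Gaussian displacement distribution. The sketch below computes it for a synthetic, deliberately heterogeneous ensemble of trajectories; the slow/fast mixture is an assumption made for illustration, not a model of any particular experiment.

```python
# Sketch: the non-Gaussian parameter alpha_2(t) = <dx^4> / (3 <dx^2>^2) - 1, which
# vanishes for a Gaussian displacement distribution.  The trajectories are synthetic,
# a deliberately heterogeneous mixture of slow and fast particles, not real data.
import numpy as np

rng = np.random.default_rng(3)
n_particles, n_frames = 2000, 500
sigma = np.where(np.arange(n_particles) < n_particles // 2, 0.05, 0.25)  # slow / fast
steps = rng.normal(0.0, 1.0, (n_particles, n_frames)) * sigma[:, None]
x = np.cumsum(steps, axis=1)                 # one-dimensional trajectories

for lag in (1, 10, 100):
    dx = x[:, lag:] - x[:, :-lag]            # displacements at this lag time
    a2 = np.mean(dx**4) / (3 * np.mean(dx**2)**2) - 1
    print(f"lag {lag:4d}: alpha_2 = {a2:.2f}")
```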
There's absolutely no reason to expect a Gaussian for P of delta x. The core issue is that in order to have Gaussian behavior, the central limit theorem won't do it for you. You also need that the system be a Markov process in its motions, which in viscoelastic systems you do not have. Okay, so we do not have a Gaussian. This was not the only experiment to look at this. There is also another nice paper, by several of the same co-authors, looking at the same sort of thing. They also looked at delta x squared at fairly large times. And what they found is that if you go to large times, delta x squared grows as a power in time, and the power a is less than 1. That is, you have what could be described as subdiffusive behavior. That is Tseng and Wirtz. Well, having said that it's not a Gaussian, there's an important aside on this. If you go through the literature, you can find bunches of experiments under the title microrheology, where people actually assume that P of delta x is a Gaussian. Set aside the techniques which say they're going to assume it and then do something quite different that doesn't really rely on it; but all of the techniques that really rely on the assumption that P of delta x, t is a Gaussian in delta x, the outcomes of those methods are invalid. They're invalid because the underlying ground assumption is incorrect. Can they be patched up? Well, if you're clever, you can do a lot of patching. Now, that is Apgar and Tseng. There are a series of other people who have done microrheology, let's see, Crocker and collaborators, and who else, Shen, and there are a series of other references in the text. The idea is the same data in a sense, but two experiments. And one is to say, here is a particle, and it has a displacement delta 1, and we can look at, for example, delta 1 squared of t, and its average behavior. And the other is to say, we have here two particles, delta 1 and delta 2, and if the two particles are moving, we can look at the cross correlation. The cross correlation is the displacement of particle 1 during time t times the displacement of particle 2 during time t, and this object describes a cross diffusion object. These are both vectors, and their combination is legitimately an outer product, a tensor product, so D-one-two, in the general case, is a cross diffusion tensor, and it depends on the vector displacement between the two particles, and in a viscoelastic system you might discover there is in principle a time dependence. Now, we've discussed at some length, we have a mean square displacement here, and therefore, if we pretend this is a diffusion coefficient, we can extract from it some sort of a microscopic viscosity. We can also go in here, this is a different diffusion coefficient, it's a cross diffusion tensor; however, the same general analysis says we can extract from here another microviscosity, D going as 1 over the viscosity, and then there are a bunch of constants and numbers, and we can extract from each of these a microviscosity. The interest in doing this is that if you go in and you do this twice, well, there are two key features. One is there is a time or a frequency dependence here; that is, the apparent microviscosity depends on the time step we use to measure the displacement, and this corresponds to the system having, in some sense, a G prime, a dynamic storage modulus, and a G double prime, a dynamic loss modulus, and so you have two moduli, and you can look at both of them. And what comes out of this?
Well, the first thing that comes out: you can say these are in essence a frequency dependent viscosity, we will be considerably more precise in a later chapter, and the important issue is that this is sort of eta-mu-one, this is eta-mu-two, and eta-mu-one over eta-mu-two, the two viscosities, are not equal to each other; their ratio can differ from one by a factor of 2 or a factor of 3. So the viscoelastic properties that govern single particle motion and that govern two particle correlations are not the same. You also could find, at least in the system that was studied, that the two particle microviscosity is something like, or quite close to, the macroscopic viscosity of the liquid. Well, that's very interesting. Of course, there's some complication: when I said macroscopic viscosity, I mean the viscosity you measure if you actually get a rheometer, which is a box of some size, and measure fluid flow over distances you can see with the naked eye. Okay, these are the nice experiments of Crocker and a series of others, and the core issue is they do what is called two particle microrheology. And having done two particle microrheology, they infer a viscosity. Now they actually did something else, which is sort of, you read the paper very carefully and you notice it: this is a cross diffusion tensor, and the cross diffusion tensor tracks the Oseen tensor, and the Oseen tensor describes hydrodynamic interactions between pairs of spheres. There's a power series in one over the distance to various powers; the Oseen tensor is the lead term. There's an interesting implication of this which we come back to very late in the course. So these are the studies of Crocker and many other people on two particle microrheology. Okay, very pretty experiment. We're going to look at microrheology again, but we're going to look at the segmental scale, and this is a wonderful experiment by Dichtl and Sackmann. And what they did was take a long polymer, so there is our long polymer, and what they do is to attach to the long polymer a bunch of, if I recall correctly, little gold particles. And these little particles are attached at various points along the chain, and therefore you can see the chain and where it is, and you're taking pictures of it and you can watch the chain move. And one thing you can do, beyond measuring the center of mass motion, is to say, I can look at this piece of the chain, and this piece of the chain has a motional component parallel to the apparent backbone and a component perpendicular to it, so there's parallel and there's perpendicular, and this piece of the chain, and every other piece by the way, has a parallel component and a perpendicular component. Of course parallel and perpendicular are now pointing in completely different ways at different places along the chain, but because they very cleverly attached all these beads, I can see a coarse outline of where the chain is, and I can tell where parallel and perpendicular point at each place along the chain. Isn't that clever? And they can now measure a parallel diffusion coefficient and a perpendicular diffusion coefficient. They actually did the experiment. So having done the experiment, what did they find? Well, the first thing they found was that at very short times, typical displacements parallel and perpendicular are about the same. But if you wait a while and look at significantly longer times, the parallel displacements can be rather rapid; that's parallel to the chain.
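As an aside before the rest of their results, here is a minimal Python sketch of how such a parallel/perpendicular decomposition can be done once you have tracked bead positions along a chain: estimate the local tangent from neighboring beads and project each interior bead's displacement onto it. The wiggling-chain coordinates below are synthetic, and because the fake motion is isotropic the two components come out comparable, unlike the real experiment.

```python
# Sketch: parallel/perpendicular decomposition of bead displacements along a chain.
# Estimate the local tangent from neighboring beads, then project each interior
# bead's displacement onto it.  The wiggling-chain coordinates are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n_beads, n_frames, lag = 20, 400, 20
s = np.linspace(0.0, 2.0 * np.pi, n_beads)
base = np.stack([s, 0.3 * np.sin(s)], axis=1)                   # gently curved backbone
traj = base + np.cumsum(0.01 * rng.normal(size=(n_frames, n_beads, 2)), axis=0)

disp = traj[lag:] - traj[:-lag]                                 # bead displacements
tang = traj[:-lag, 2:, :] - traj[:-lag, :-2, :]                 # central-difference tangent
tang /= np.linalg.norm(tang, axis=-1, keepdims=True)
d_mid = disp[:, 1:-1, :]                                        # interior beads only

par   = np.einsum('fbk,fbk->fb', d_mid, tang)                   # component along the backbone
perp2 = np.maximum(np.einsum('fbk,fbk->fb', d_mid, d_mid) - par**2, 0.0)

# with isotropic fake motion the two are comparable; the real experiment found otherwise
print("mean square parallel     :", np.mean(par**2))
print("mean square perpendicular:", np.mean(perp2))
```

That is the bookkeeping; now back to what they actually found.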
The perpendicular displacements are now much slower than the parallel displacements. I should stress the timescales on which these observations are being made are much shorter than the timescale on which this polymer molecule, in terms of the repetition model, has escaped from its tube and moved to a new location site over here. So we are looking at measurements on still fairly short timescales. However, what we find is the parallel motion is faster than the perpendicular, and this is actually a prediction of models that on short timescales you have local chain motions which can get fairly fast. There is, however, another piece of this. Namely, if you look at the parallel motion, you can extract from the parallel motion, which is after all diffusive, a microviscocity, and the microviscocity is something like an order of magnitude larger than the solvent viscosity. There is a microviscocity in there, but the polymer segments do not think they are moving through playing the solvent. They think they are moving through something that is quite resistive to their motion. Okay, well this is a very pretty experiment. It is all done microscopically. It is an experiment, it would be wonderful. Gee, this is a computer technology issue. If it could be carried out and you observe the motion out for very long times, and perhaps by the time this lecture is over, I will discover that it is in fact now been done, but it did not seem to have been done at the time I wrote the book. But it was very nice. Okay, let's look at another experiment. And the other experiment we will look at is due to Goodman. And what was done was to look at diffusion and motion in DNA solutions. Now why are we interested in DNA solutions? Well, there is all this biotechnology stuff. But from the polymer physics standpoint, there is something very important here. There are a number of DNAs you can procure that are rings. And they can be quite large. The largest they looked at was greater than 1 times 10 to the 4 base pair. They were essentially totally mono-dispersed. And thanks to modern biotechnology, you can go into the ring and cut the ring, and be sure you are cutting the ring exactly once. So you get a linear polymer whose molecular weight is identical to 1 part and 10 to the 5, or 1 part and 10 to the 6. After all, you probably did insert a hydrogen atom or an OH or something in here. And you have split the ring, and these two are exactly the same molecular weight, except this is now a linear polymer. And having said that, okay, well, we will now do spheres and microbiology in these two systems. They got up to concentrations in natural units only about 6, unfortunately. However, what they found was that the microviscocity of the ring system, yeah, that is related to the microviscocity of the linear system, but in elevated concentrations, the ring system was considerably less viscous by several fold. Now, the reason this is of some interest is that there are theoretical treatments of rings. It is always possible to claim that those theoretical treatments are so brilliant that they do not yet apply to this system because the system is not concentrated enough. But what the SIT predictions say is that linear chains have modes of motion, for example, translation along their whole length, that are not usefully available to ring polymers. And therefore, the viscosity of a ring solution ought to be considerably larger than the viscosity of a solution of linear chains. Such a behavior has never been observed. 
Historically, when you talked about ring polymers, though, there were always objections that, well, the rings, instead of being the pretty picture I've drawn, could be tangled up in knots and wrapped, and so the thing not only is biting its own tail, but it's doing loop-the-loops around itself. There could be concatenated rings, and there were all sorts of complaints if you discussed synthetic ring polymers. With biopolymers, these objections are invalid. With biopolymers, you can say, as a result of the way they're synthesized, there are essentially zero concatenated chains. Furthermore, the chains are all wonderfully monodisperse. There are no linear chains mixed in there until you do the cutting. Furthermore, you can do electron microscopy, and you can see the rings as they come out of various objects. You see rings; you don't see balls. And therefore, with great certainty, we can say we actually have a ring polymer system. It's monodisperse. It's not concatenated. It doesn't have all of those nasty criticisms that have been made of such systems. And this is a legitimate test of those predictions, up to the limitation that you wish the concentration had been taken to some higher number. Last experiment. And you might say, well, this system, we're about to talk about the very pretty work of Shu et al. And we are looking at wheat gliadin, which is a natural product. And we are looking at the diffusion of objects through wheat gliadin. And gee, there are some very odd results that come out. They're quite solid. And one result is that we have spatial heterogeneity. That is, if you measure the motion of probes through the solution in different parts of the solution, you can sort out that there are regions where the properties of the liquid are a bit different. And one can imagine saying, well, as a qualitative description, this is not going to be shown directly, there are regions which are relatively fluid, and there are regions where particle motion is more slowed down, which are in some sense vitrified. Those of you who are familiar with the Kivelson glass model and other recent work on glasses may realize that I am describing something very much like that picture. A simple solution should be homogeneous, shouldn't it? Well, no, this one is not. And the implication is, well, let's look at the last piece of the experiment and then I will say what you might conclude. They looked at a series of concentrations, such as 250 grams per liter and 400 grams per liter, and at 250 grams per liter P of delta x, the displacement distribution function, was a Gaussian, and at 400 grams per liter P of delta x was very definitely not a Gaussian. Gee, the system, it's a liquid, but it's not homogeneous; it's thickening up. We are looking at the Kivelson glass model, which proposes that vitrification occurs because you start forming heterogeneous regions which are in some sense vitrified and resist motion. Well, here's the experiment, and you're seeing it directly. Okay, so much for the pretty work on wheat gliadins. Let us chug ahead, and we have another section. In the next section of the chapter, we are going to talk about true microrheology. What do I mean by true microrheology, as opposed to diffusion and a microviscosity inferred from diffusion measurements? I mean we're going to do an experiment in which we take a mesoscopic particle. We apply to the mesoscopic particle a force F. We discover the particle has an associated velocity v. We can do such things as applying an oscillating force, and then we get a velocity at frequency omega.
This is a real viscosity measurement in the sense that we're actually applying an external force to the system. Okay, a whole bunch of experiments have studied this. And so, for example, we have Amblard and collaborators. What is done is to apply a force to the system, and they did something careful. They both applied a force and they looked at diffusive motion, and they asked, okay, what happens to, for example, the mean square displacement at long times, or what happens to the displacement if I apply a force, at long times? And both of these were found to depend on t to the a, for a some number around three-quarters. And so, what they did is, at long times, at least as far out as they went, they applied a force, and the response, the displacement, increased as a power law in time, but not linearly. And the diffusion measurement did what you would sort of expect from a fluctuation-dissipation argument. The diffusion measurement showed the same behavior, namely, we have t to the a, for a being some power. Okay. One can also, and there are a series of papers in the book, look at such things as, let's look at spheres that are bigger and bigger and bigger. There is sometimes a hand-waving argument. Hand-waving is very dangerous. There is sometimes a hand-waving argument that for small spheres, you might expect that the Stokes-Einstein equation could fail. What does small mean? Small relative to some length scale, unspecified, in the liquid. But if you have really big spheres, to the really big spheres the fluid outside looks like a continuum, and therefore, say the hand-wavers, if you make the sphere big enough, you will see Stokes-Einstein type diffusive and driven behavior. Well, that doesn't appear to be the case. The experiment has been done, and if you go to large particles, you rather definitely do not move over to the microviscosity agreeing with the macroviscosity. Okay. There are also a series of other experiments. Let's put a few names on it: Schmidt, among others. And so there are a series of experiments in which real microrheology is done. We have a mesoscopic particle, a force is applied, a viscosity is inferred. And what is found is that the microviscosity is not at all the same as the viscosity measured macroscopically. And if you look at frequency dependence, well, that doesn't help you. Okay, so what is the importance of that result? Let's step back a bit. Do you remember I mentioned single particle microrheology and two particle microrheology, and people said that the microviscosity inferred by looking at the correlation in the motion of two particles was the same as the viscosity measured macroscopically, the true viscosity measured macroscopically. Well, yeah, the true viscosity measured macroscopically, let's just call it eta, may match the two particle microviscosity. But there's an interesting issue here. That is, if you do a real viscosity measurement on mesoscopic particles, the real viscosity measurement, and this is a real microrheology experiment using mesoscopic distance scales, does not equal the macroscopic viscosity. And therefore, the diffusion viscosity inferred from two particle tracking, the fact that it's equal to the macroscopic viscosity, gee, what does that mean? Shouldn't the diffusion microviscosity agree with the real-rheology experiments made on the same distance scales? There are some interesting issues here that are still under investigation. Okay.
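Before moving on, here is the simple log-log fit behind statements like "the displacement grows as t to the three-quarters." It is a sketch run on invented mean-square-displacement numbers, not a reanalysis of any of the experiments just described.

```python
# Sketch: estimating the exponent a in <dx^2(t)> ~ t**a (a < 1 means subdiffusion)
# by a straight-line fit in log-log coordinates.  The "measurements" are invented.
import numpy as np

rng = np.random.default_rng(6)
t   = np.logspace(-2, 1, 12)                                     # times, s
msd = 0.04 * t**0.75 * (1 + 0.05 * rng.standard_normal(t.size))  # fake mean square displacements

a, log_prefactor = np.polyfit(np.log(t), np.log(msd), 1)
print(f"exponent a ~ {a:.2f},  prefactor ~ {np.exp(log_prefactor):.3f}")
```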
We shall advance a piece, and we shall advance to gels. For the most part, this book does not discuss true gels. When I say a true gel, I mean we have long pieces, and the long pieces are covalently cross-linked, so that for essentially all purposes those are permanent bonds, and you have a meshwork. The book does not cover true gels a great deal. Nonetheless, we do look at them a little bit, and we can make a few observations. The first observation is that a true gel, not a concentrated solution, but a true gel, is a size filter. That is, a true gel really does let small particles through and really does stop large particles so they cannot enter. This physical result has been the basis of gel permeation chromatography since I was an undergraduate, more years ago than I care to emphasize in detail, but there is the result. True gels are size filters. Real polymer solutions are not size filters. They may retard large and small particles to different extents, at least modestly. However, particles of all sizes can pass through polymer solutions. They can't pass through gels. For small particles, there are some experiments by Park et al. on how the diffusion coefficient depends on solution properties, and you do still get a stretched exponential. The stretched exponential has a concentration exponent, which is slightly less than 1, and it has a size exponent for the probe, which is about three-fifths, but you should realize this is all for probes that are small enough to get through the holes. If you present a large object, it gets trapped. It does not get through. Okay. One more set of experiments. These are all due to Luby-Phelps and collaborators, and I am not going to sort out exactly who did what. We can talk about Kathy Luby-Phelps and her work, and the core issue is: let us attempt to model how objects move within a cell. There are two sorts of issues, and one is, looking at a diffusion coefficient, there's a concentration dependence, and the other is, one can look at the limit for very small particles of the diffusion coefficient, and ask what the effect of the system on very small particles is. And there is a great deal of experimentation done, and the net result of all of this, you can say, is sort of what you might have expected, except this is the actual experiment showing it: you have the interior of a cell, and there are sort of long, thready things that actually do trap some, but not all, large particles, and mixed in with the long, thready things are smaller macromolecules, and the smaller macromolecules also act to retard diffusion, but not in the same way that the matrix meshwork does. And so this very pretty set of experiments says we can construct physical models that are made of materials we understand, and the diffusion behavior of probes can be shown to be the same as the diffusion behavior within a cell, and we can then sort out what's going on. Very beautiful piece of basic physics. Okay. We now hit a section of the chapter which, in an original draft of the manuscript, was much longer, except in the end I basically scrapped it.
And the section we are going to talk about has the name microrheology, and the reference here is to a series of experimental methods which assume, and I really mean assume, that if we are looking at a probe in solution, let's use delta x for its displacement, the light scattering spectrum, and I'll write that in terms of the field correlation function, is e to the minus q squared times the mean square displacement, which is of course a function of time, over a constant. As we have seen, going back to the experiments of Apgar and Tseng, and as we can see directly, namely if you look at the system the spectra are not pure single exponentials, this assumption has one basic problem: it's wrong. As a result, experimental analyses that rely on that assumption, with a few exceptions, are unreliable in their determinations of what's going on. Now there are some very clever exceptions. One of the very clever exceptions is an experiment due to Popescu. It's a light scattering experiment, except what was done was to use a diode laser that gave light whose coherence length was extremely short. That is, if I look along the light that has come out of the laser at a series of points, the light is more or less, but not quite, monochromatic. However, if I look at the phase at a series of points: over very tiny distances, the phase of the light here tells me the phase of the light there, but if I go out any distance at all, the phase of the light here and the phase of the light there are independent of each other. What's the experimental consequence of that? Well, suppose I do scattering off two particles. If the two particles are separated by this sort of distance, the phase of the light here and the phase of the light there are independent of each other, and there's no interference. There's no way to tell the relative motion of particles, with one exception. The light is going in through a window, and we have some number of particles which are very close to the glass surface. And now we have scattering off the glass surface, and we have scattering off particles that are incredibly close to the surface, close enough that they are within a coherence length of the light of the surface. And the light scattered by these few particles and the light scattered by the window are coherent, and you get interference. And you can then look at the motion of a few particles very close to the window surface. This is very useful if, for example, you have a basically opaque, turbid solution, a milk-like solution; regular light scattering is disappointing, notwithstanding heterodyne coincidence spectroscopy and related techniques. Light scattering spectroscopy is disappointing, but the Popescu technique, and there's a theoretical analysis of it if I recall correctly, lets you measure the diffusion of the particles close to the window. However, Popescu et al. are very careful to emphasize one thing, namely, they are looking at the diffusion of particles over very short times, in which they are only looking at the lead term of this hypothetical exponential. The lead linear term behaves the way it's supposed to. It really does give you the mean-square displacement. All of the difficulties arise if you try to carry it out to longer times. So there is this very pretty experiment. It really does these nice things, and it works. Okay, those are the experiments of Popescu et al. You're a little limited in how far out in time you can get, but it works. And we have reached the end of the chapter.
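To make that "short times only" point concrete, here is a Python sketch of a first-cumulant analysis: fit only the initial decay of ln g1(t) and convert the slope to a short-time diffusion coefficient, without assuming the whole spectrum is a single exponential. The correlation function, the q value, and the two-mode model generating it are all invented for the example.

```python
# Sketch: a first-cumulant (initial slope) analysis.  Only the earliest part of
# ln g1(t) is fit, so no assumption is made that the whole spectrum is a single
# exponential.  The q value and the two-mode correlation function are invented.
import numpy as np

q  = 1.5e5                                                    # cm^-1 (assumed)
t  = np.linspace(0.0, 2e-4, 21)                               # short delay times, s
g1 = 0.7 * np.exp(-q**2 * 4e-8 * t) + 0.3 * np.exp(-q**2 * 4e-9 * t)  # not one exponential

K1 = -np.polyfit(t[:7], np.log(g1[:7]), 1)[0]   # minus the initial slope of ln g1
D_short = K1 / q**2
print(f"first cumulant ~ {K1:.0f} s^-1  ->  D_short ~ {D_short:.2e} cm^2/s")
```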
That is all the chapter there is except for a summary. The summary is we have looked at the diffusion of probes through polymer solutions. There is an enormous literature on probes in polymer solutions, and that enormous literature is mostly non-communicating with the so-called micro-reology literature based on our friend here. So in two literatures, they don't talk to each other much. However, we're going to be talking about the optical probe diffusion literature, optical including some other techniques like particle tracking. And the first point we can say is that if we measure diffusion using light scattering, we find three sorts of behaviors. And one is a, yeah, there's a diffusion coefficient there and it falls as a stretched exponential in concentration. The second behavior which we see on occasion is re-entrance where there is some regime in which you're usually quite limited in which you do not see this behavior, but then after you leave the regime, the behavior emerges. The third sort of thing we have seen, and this has been studied in detail in hydroxy-prol-cellulose solutions, is that on occasion you also get multiple modes, that is, you see several relaxations going on at the same time in the same system. How the system manages to do this is a little less clear, but it clearly does do it. Figure 943 turns to a study of this behavior, and what is plotted in there are alpha and nu as a function of polymer molecular weight. For each polymer molecular weight, there's a concentration dependence. You measure d at a whole series of concentrations and you get out of that two numbers, an alpha and a nu, which correspond to one molecular weight. If you look at the figure, you see that from 1 times 10 to the 4, up to something like 5 times 10 to the 5, over an order of magnitude and a half, there is a fairly nice line where alpha depends on molecular weight to a power. The reason I show the dextran measurements first is that historically these were the first measurements in which on one hand the values of alpha and nu weren't very noisy, and on the other hand there were a whole bunch of different molecular weights, and you can see rather cleanly a linear behavior here, there is also for nu a similar behavior in which nu goes from one bound to about a lot of 6 over the same molecular weight range, and you see the parameters have a fairly clean polymer molecular weight dependence, which is what you should have expected if the theory actually meant anything. Okay, another piece, these are experiments of James Yambert and Mickey McCroy and I, they did all the work. We will look at alpha against polymer molecular weight, these are for probes and polystyrene sulfonate, and there are some fairly short polymers that really aren't random coils, and then you have a straight line and some points more or less on the straight line. The point of the experiment was to test a theoretical calculation of alpha, and there is the line. The reason the theoretical calculation of alpha is of interest is that for big spheres and polymer coils in which the beads are quite small, there are no free parameters in the line. The model actually gives a number out, not a number with a fitting parameter, and the line really is where it belongs, not over here or over here. So that was very satisfactory because it was a direct test of can we calculate interactions between probe molecules and polymer coils in polymer solutions, and the answer was a rather definite, yep, we can. 
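For concreteness, here is the kind of fit that turns a set of probe diffusion coefficients measured at a series of concentrations into "an alpha and a nu": a least-squares fit of D(c) = D0 exp(minus alpha c to the nu). The concentrations and diffusion coefficients below are invented, and a real analysis would also propagate the measurement uncertainties.

```python
# Sketch: turning D(c) measured at a series of concentrations into "an alpha and
# a nu" by fitting the stretched exponential D(c) = D0 * exp(-alpha * c**nu).
# The concentrations and diffusion coefficients are invented placeholders.
import numpy as np
from scipy.optimize import curve_fit

c = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 40.0])        # g/L (made up)
D = np.array([2.70, 2.52, 2.04, 1.51, 0.96, 0.46])     # units of 1e-7 cm^2/s (made up)

def stretched(c, D0, alpha, nu):
    return D0 * np.exp(-alpha * c**nu)

(D0, alpha, nu), _ = curve_fit(stretched, c, D, p0=[3.0, 0.1, 0.8])
print(f"D0 ~ {D0:.2f}e-7 cm^2/s,  alpha ~ {alpha:.3f},  nu ~ {nu:.2f}")
```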
Figure 9.45 continues this, and now for polymers with a wide range of molecular weights: if you plot alpha versus M, you get data that pretty much lie on the line. Of peculiar note, way at the bottom of the line there are a couple of points where the polymer is in fact bovine serum albumin. Now bovine serum albumin is not a random coil, it's a ball with a well-defined structure, and so to plot bovine serum albumin on this graph we didn't put in its actual molecular weight; we plotted it as though it were a random coil polymer of some molecular weight, a molecular weight that would give us a random coil polymer of the size of a BSA molecule, and those points sit right on the line. Okay, what else can we do? We discussed comparison with gels, and the important point is that a real gel, a cross-linked gel, is a size filter: it really does block the motion of objects that are bigger than the holes in the gel. A polymer solution is not a size filter; its effect on moving probes may depend somewhat on the size of the probes, but even if you have really big particles, they can diffuse through polymer solutions. We also emphasized the question of whether you can infer a microviscosity from the diffusion coefficient. There is a very complicated story here, but in general we find that the microviscosity is not the same as the viscosity, except for polymer solutions of very small molecular weight. If the polymers in the solution are very small, the microviscosity and the macroviscosity are pretty much close to each other. There are a variety of tests of the so-called Langevin–Rondelez picture, which was derived for centrifugation — and in defense of the original authors, there are people who apply this to diffusion, which is perhaps not quite fair to the original work. Nonetheless the original picture, which was for sedimentation, was that this should be some e to the minus a concentration to a power, times probe size to a power, times polymer molecular weight to the power zero, plus an eta zero over eta, solvent viscosity over solution viscosity. If the polymer solution is much more viscous than water — say if the viscosity of the solution is 50, or even 20, centipoise — this term in concentrated polymer solutions disappears. Russo has given a very pretty test where they looked at fluorescein and labeled dextrans and suchlike in various polymer solutions, and they found that including this term was useful, but they didn't get to go to very high viscosity. So when eta is not much larger than eta zero, this term is indeed significant. The important piece here as a conclusion, the one point I want to stress, is that the original theoretical model claims that there should be an M to the zero dependence, that is, the resistance of the polymer solution to particle diffusion or sedimentation should be independent of the molecular weight of the matrix. And the basis for that is the notion that here is a polymer solution, and the polymer coils are very large. Now if I go in and, say, cut the molecular weight of the polymers in half by splitting them up, it is possible that I will have made a hole that is twice as big by cutting the polymer there. But if the polymers are really big, and I cut the molecular weight of the polymer in half, there are a very small number of cuts, there are a very large number of these hypothesized gaps through which it is claimed the probes move, and almost none of the gaps have a polymer cut at their side.
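To spell out the form just described in symbols — a hedged sketch; the letters a, ν and δ for the prefactor and exponents are mine, and only the structure of the expression is taken from what was said:

\[
  \frac{S}{S_{0}}\ \ \Big(\text{or } \tfrac{D}{D_{0}} \text{ when applied to diffusion}\Big)
  \;\approx\; \exp\!\big(-a\,c^{\nu}\,R^{\delta}\,M^{0}\big) \;+\; \frac{\eta_{0}}{\eta},
\]

where c is the polymer concentration, R the probe size, M the matrix molecular weight, and η0/η the solvent-to-solution viscosity ratio. The M to the zero factor is the claimed independence from the matrix molecular weight, which is the point at issue in what follows.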
And therefore S and D, in terms of this picture, would be expected to be independent of the polymer molecular weight. Well, the probe diffusion is quite sensitive to the matrix molecular weight. That was the pretty graph that showed alpha versus M, and we saw quite strong dependence, like M to the 5/6 or M to the first. And the reason for that must be that this picture of probes as advancing through a polymer solution by looking for gaps is incorrect. Instead, what presumably happens in a polymer solution is that as the probe advances, the polymer chains move out of its way; they're dragged with it, they move along with the probe because there are strong hydrodynamic interactions, and therefore the probe particle in a polymer solution does not look for holes between the chains, because the chains aren't anchored to specific spatial coordinates — they're free to move. And because they're free to move, this mesh picture is simply incorrect. That's it for today, and we have reached the end of chapter 9. We'll discuss something else in the next lecture.
Lecture 17 - probe diffusion, part the last. George Phillies lectures from his book "Phenomenology of Polymer Solution Dynamics".
10.5446/15976 (DOI)
So good evening. Welcome to the closing round table. First I would like to introduce our panel, which consists of five very distinguished mathematicians. On my far right, Lennart Carleson, Professor Emeritus at Uppsala University and a former president of the IMU. His research interests are in harmonic analysis and dynamical systems. Now, all of our panelists are recipients of many awards, and I decided that it would take far too much time to list them all, but I make an exception in reminding you that Lennart Carleson was awarded this year's Abel Prize, for which we offer many congratulations. I would like to say how much the IMU values its collaboration on several fronts with the Norwegian Academy of Science and Letters and the Abel Fund. Next, Ronald Coifman, who is Professor of Mathematics and Computer Science at Yale University; his research interests are in analysis, in particular harmonic analysis and wavelets, and applications to information processing. On my immediate right, Yuri Manin, who is Professor of Mathematics at Northwestern University, a former director of the Max Planck Institute for Mathematics in Bonn, and a former chair of the Fields Medal and Program Committees of the ICM. His research interests lie in algebraic geometry, number theory, differential equations, and mathematical physics. On my immediate left, Helmut Neunzert, Professor Emeritus at the University of Kaiserslautern; he's a founding member and former president of the European Consortium for Mathematics in Industry, and his research interests are in kinetic theory and fluid dynamics. And finally, on my far left, Peter Sarnak, Professor of Mathematics at Princeton University; his research interests are in number theory and analysis. Just in case I betray at some point my own views on the subject of the round table: I work in nonlinear analysis, especially the calculus of variations and its applications to materials science. By way of introduction, perhaps I can show the two early instances I know of in which the terms pure and applied mathematics feature in the literature. These were — ah, the panel, I forgot to do this. Here's the first one, the first issue of the Journal für die reine und angewandte Mathematik, Crelle's Journal, which appeared in 1826. The contents of the first issue are shown, and actually you can see several papers there by Abel. And ten years later, the first volume of the Journal de Mathématiques Pures et Appliquées, Liouville's journal, appeared, and here you see the papers are much more applied. The authors include Coriolis, Liouville, Ampère, Lamé, Jacobi and Sturm. So these were not bad first issues for these journals. The old volumes of these journals, incidentally, are retro-digitized and freely accessible, which is where I obtained these images. So pure and applied mathematics have been explicitly mentioned for nearly 200 years, and were doubtless recognized as being in some way different before that. And our topic is whether they are drifting apart. Each of our panelists will give a ten-minute presentation, and then the subject will be open to the floor, and I hope we will have a lively discussion. So I ask Lennart Carleson to begin. So mathematics really has different aspects. The first concerns general education, and there mathematics is, of course, just as important as learning to read, and this is a very important part of society.
And the second relation to the outside world is mathematics as the language of science, and this is the way in which I'm going to use the words applied mathematics: as the language of science. And the third aspect is, of course, mathematics as a subject in its own right, a logical system. And that is what most of us who are here right now represent, and we must clearly understand that of the three, we are the weak part, and that it is absolutely vital for the continuation of this science that we love so much to stay on good relations with the other two aspects. So to the question of whether mathematics and applied mathematics are drifting apart, I would say that we should make every effort that it doesn't happen. And I would like to object somewhat to the word drift also: we are not really jellyfish, and we can do something about this ourselves. So I should like to concentrate on the aspect of the issue as far as it concerns the teaching of mathematics. We like to talk about mathematics and applied mathematics in this order, which seems to indicate that applied mathematics is some kind of corollary of mathematics and that we are looking for ways of applying it. This of course is completely wrong from the point of view of history. Through the years, mathematics has slowly been built from nature. And we have observed the remarkable fact that the laws of nature can be coordinated into groups and they follow rules. This started with geometry, of course, and numbers. And then we all know how difficult it has been to make the continuous — that is, movement — into something logically reasonable. And it's been around only for about 200 years in a logical setting. If we take a subject like probability — well, it may show that I am old, but anyway, it has been built as a mathematical subject in my lifetime, really. And looking into the future, we can see new areas emerging where the mathematics is missing. The most spectacular there is probably computer science. Nevertheless, the teaching of mathematics has always been done in a deductive way, going from the general to the special, either as a logical system or as being applied. And this of course is contrary to the traditional way of how things should be taught. Let me mention to you that this also has happened in my lifetime. When I started studying at the University of Uppsala in 1945, the first lecture was devoted to the Dedekind cut. And we defined continuous functions with epsilons and deltas. And we had axioms and we had definitions and we had Riemann integrability and I don't know what. As the number of students has increased and their interest in the logical structure of the field has decreased, one has successively been cutting off these typically mathematical aspects of the mathematics teaching. And to put it in a striking way, I would like to say that it is only applied mathematics that remains. And we have, I guess all of us, experienced how there has been pressure from other fields — I mean from physics or technical subjects or even biology — that they want to teach their own mathematics and that we don't teach in a relevant way. And I would like to say that I can somehow see that point, because we have not really made any real effort to implement any kind of inductive way of teaching, that is, going from examples and cases and applications to the concepts. And you would think that the use of computers would have changed this in a drastic way, but that doesn't seem to be the case at all.
We are still fumbling for ways of using computers in the teaching. So my thesis here today would be that we should make a really concentrated effort to make our teaching into inductive teaching. We can think of different ways of accommodating students with different interests. So I have made a short list of what one could possibly do to change here. One of the essential points is clearly the attitude of ourselves, so to say, and also of our colleagues. Everybody knows that most — well, many — mathematicians are very uninterested in things which do not lead to theorems or new statements. And there is a skepticism among our colleagues in other areas that anything useful can come out of contact with mathematicians. And it is my real wish that we would all try to remedy this situation. So, what could be done? I have made three points here. The first is that one should have closer contact between the teaching and the applied areas. At least in Sweden, most departments which are applied have separate buildings and we don't really see them; I mean, the pure people stay in one area and the applied people stay in another area. And it would be my wish that people with applied interests would be involved already in the construction and the teaching of the basic courses. Also, one would need to change the curricula in some suitable way and try to speed up the use of computers in the teaching. We should really accept the fact that most students are not really interested in mathematics. I mean, they are interested — well, many are interested in their lives, but some of them are also interested in other areas — and one should accept that, and we shouldn't try to put our values on people who don't really want them. You know, one could compare this with how you learn to drive. Most people have no idea how the car works and why it works, but you can still use it. And it's similar with the people who learn mathematics. They only want to be able to read books or understand the formulas that they are taught in the other courses. And I think one shouldn't criticize this; one should accept that this is a very natural attitude. After all, mathematics as we know it is a rather sophisticated and not very applicable field. Also, there should be, for example, something like partial differential equations — everybody should have heard about that, they are going to meet it somewhere else. And finally, there is a movement in the world around us to make different sections of mathematics. There is pure mathematics and there is industrial mathematics and there is applied mathematics and there is the teaching of mathematics, which have different organizations and different meetings and live their own lives. And I think that makes sense, because it's so big and they have their special interests. But nevertheless, there should be places where you meet and where the people from these different areas come together and can exchange experiences. Okay, thank you. So, one little question. All right. Well, I'll try to address some of the issues of drifting apart. You know, mathematics is a big ecological system of different species of mathematicians, and each species likes to think of itself as better than the others. The issue, though, is that the world of mathematics has expanded dramatically. Our universe is so much bigger that everybody is drifting apart from everybody else.
But in reality, we enrich our lives substantially. What we have seen, I would say, over the last two decades is the insertion of the computer into our lives, the digital age. Now, that insertion is occurring at a variety of levels — from the sort of everyday ability to collect numbers and collect data, to the ability for the mathematician to actually run experiments in mathematics. And I would say that if Gauss were here, he would probably run experiments like crazy, like Leibniz too, and all those people. And if you asked them the question, are they pure or applied, they would just laugh at you. In a way, the drift that we seem to see is mostly social and not necessarily intellectual. We have seen in this Congress that many, many people and many of the talks are related to outside scientific fields or inspired by outside scientific fields and so on. The way I see it now is that, in fact, the need for mathematicians, pure mathematicians, not necessarily in the areas of applications, is actually much greater than it ever was. This is sort of a pre-Newtonian time in a way, and we don't have the mathematics to do the simplest of all things. We don't have a descriptive language to describe various things, and we don't even have the ability to define the geometries that need to be defined in the real world. So I think there is a serious opportunity here for a mathematician. To realize that opportunity, we need to follow what Lennart just said and sort of revamp our teaching style, I would say. Not that I'm advocating changing what we teach, just the way that we do it, in a way that makes it more transparent for people who don't necessarily want to invest the same effort as somebody who was born with mathematics in his blood. The opportunity is really the same one that occurred in the scientific revolution at the time of Newton and Leibniz, which is that there is a need to quantitate and describe specifically and precisely all kinds of phenomena that surround us. And the number of phenomena and their complexity is really growing exponentially, just because we can. And so digital data is generated in overwhelming quantities all over the place, whether this is web data, document data, sensor data, and we're stuck. So let me give you an example. The data may be the results of some medical tests, a blood test, some numbers that you get, and you want to evaluate a function which is how healthy you are, what health score you have. We are dealing with a very simple object which depends on 10, 20 parameters, and we don't have the tools to approximate it. We heard, in one of the talks today, something about some potential tools, but this is the most elementary object of mathematics, which is the function — except unfortunately the function depends on many more parameters than we were used to before computers. The number of parameters may be 10, 20. In reality, you may have 10,000 or 10 million of them, and the tools are not there. So what is needed in this context is for somebody to think very deeply and come up with potential solutions. So mathematicians, pure mathematicians, and their modes of thought are necessary. Computer scientists are not trained for the job. I know of a multitude of examples like that, having to do with acoustic calculations, electromagnetic calculations.
Unless you completely revamp the mathematics and reorganize everything you need to do, rebuild the language for describing the objects, you can't go anywhere. So it doesn't do us any good to just throw a big matrix at some problem and say this is a linear problem, we can invert the matrix or do this — that doesn't do anything. So the obstacles confronting us are actually much more monumental than they ever were, and they require the ability to build a language to organize very complex objects, to organize them in a variety of geometries. Take what I just described a minute ago, say, the acoustics in this whole room. That's a problem that, say, 20 years ago nobody could calculate, and even now I doubt that there are more than maybe 10 people in the world who can actually calculate anything, because the object — you hear the echoes and everything, the acoustics here — is so complex that unless you build a language you cannot do it; you cannot use formulas, because formulas will not deal with that. Unless you build a new language to describe it, you're dead. So that's one opportunity. Similarly, by the way, if you go to the social sciences, or to, say, documents or machine learning or other fields of that sort, the language and the geometry to describe the objects that you want to manipulate, and the internal relations between them — all of that is yet to be invented. I mean, we need people of the kind we had at the beginning. We had a few of them last century, like Shannon, von Neumann, Benoit Mandelbrot, who is here, who recognized certain geometries that people consistently ignored. All of those are opportunities for mathematics, and that mathematics is pure, although the opportunities and the challenges are coming from the outside world. But in the past it has always been that the outside world was probably the most inspirational in actually pushing us towards discovering structures. It's very nice to be motivated by internal ideas, but I don't think one should be so arrogant as to think that we know everything that needs to be done. We should let the world tell us. And as I said, invention is really what's needed, and that's the crafting of tools. And the people who craft the mathematical tools are people who are interested in the tool. The application is a test, if you wish, that the tool is effective. But the people who build tools are mathematicians. Call them what you like — they may be working, like Shannon, as an engineer, but he built mathematics. And it was, it is, pure mathematics, no matter what we say. In fact, it's being used consistently everywhere in pure mathematics. Is probability an applied field? Of course not. It is motivated by application. So I think, basically, we see in various communities — say, the machine learning community, the bioinformatics community, the computer science community — a variety of methods emerging which are mysterious, somewhat ad hoc, but extraordinarily successful. And the question really is what are the underlying structures that enable us to assert that certain methods will work or will not work, and what they are capable of achieving. What are the real deep structures underlying it? This is a job for pure mathematicians. Thank you. Yuri Manin. Thank you. I am certainly a pure mathematician.
And what I would like to discuss here is the implicit presupposition that lies at the base of our distinction between pure and applied mathematics, namely that mathematics can tell us something about the external world, that mathematics can be a cognitive tool, although it doesn't look like a cognitive tool — it doesn't study anything specific in the surrounding world. So in order to understand how mathematics is applied to the understanding of the real world, it will be convenient for me to subdivide it into the following three modes of functioning: model, theory, and metaphor. A mathematical model describes a certain range of phenomena qualitatively or quantitatively, but feels uneasy pretending to be something more. Probably one of the most successful early models is Ptolemy's model of epicycles describing planetary motions, from about 150 of our era. And one of the latest models, which does call itself a model, is the Standard Model describing the interactions of elementary particles, from around the 1960s and 1970s. Generally, quantitative models cling to the observable reality by adjusting the numerical values of sometimes dozens of free parameters — at least 20 in the Standard Model. And such models can be remarkably precise. And there are, of course, qualitative models offering insights into stability, instability, attractors, critical phenomena. As an example, I quote a recent report which is dedicated to predicting a surge of homicides in Los Angeles. As a methodology, it uses pattern recognition of infrequent events. Result: "We have found that the upward turn of the homicide rate is preceded within 11 months by a specific pattern of the crime statistics. Both burglaries and assaults simultaneously escalate, while robberies and homicides decline. Both changes, the escalation and the decline, are not monotonic but rather occur sporadically, each lasting some two to six months." Now, the age of computers has seen the proliferation of models, which are now produced on an industrial scale, solved numerically, and very often used as black boxes with hidden computerized input procedures and oracular outputs prescribing the behavior of human users, for example in financial transactions. What distinguishes a mathematically formulated theory from a model is primarily its higher aspirations. A theory is, so to speak, an aristocratic model — or, if you wish, a model is a democratic theory. A modern physical theory, and all physical theories generally, purports that it would describe the world with absolute precision if only it, the world, consisted of some restricted variety of stuff: massive point particles obeying only the law of gravity, and things like that. The recurring driving force generating theories is a concept of reality beyond and above the material world, a reality which may be grasped only by mathematical tools, from Plato's solids to Galileo's language of nature to quantum superstrings. A mathematical metaphor, when it aspires to be a cognitive tool, postulates that some complex range of phenomena might be compared to a mathematical construction. Probably the best known mathematical metaphor now is artificial intelligence. We know very complex systems which process information, because we have constructed them, and we are trying to compare them with the human brain, which we do not understand very well, or do not understand almost at all. So at the moment it is a very interesting mathematical metaphor, and what it allows us to do mostly is to sort of cut out our wrong assumptions.
If we start comparing them with some very well known reality, it turns out that they would not work. My feeling is that mathematical metaphors — and more often than not some models and theories are also used as mathematical metaphors — as such contribute to changing our value systems, or at least influence our value systems. I am a little bit concerned about the proliferation of mathematical models which are hidden inside computer hardware and software. Also, I am concerned about the moral issues that are not often addressed in discussing the implications and the utility of mathematics in our society. Just to show you very briefly what I am concerned about, I will quote a recent sentence — two sentences, actually — from a recent book, Mathematics and War. I think this sentence is written with bitter irony: "Mathematics can also be an indispensable tool. Thus when the effect of fragmentation bombs on human bodies was to be tested, but humanitarian concerns prohibited testing on pigs, mathematical simulation was put into play." Thank you. Helmut Neunzert. Oh, oh, oh. Now you'll get a little bit of a contrast program. After a metatheory of applied mathematics, we come back down to Earth. Maybe there is a difference between a pure and an applied mathematician; you'll see it now, live. But I must say we are not drifting apart with respect to his last sentence — I totally agree with him. But I would like to change a little bit our point of view. Now, when I have spoken with people about whether pure and applied mathematics are drifting apart, some said, oh, this is this old question; some say yes, some say no. I believe it is really a question of the department: if the people, the pure and the applied, like each other, then it's fine; if they don't, you have a drifting apart. But I would really like to change the point of view. We always act as if mathematics were the mathematics we do — as if we, academic mathematicians, were the world of mathematics. Are we really? There is a second world, in my opinion. There's a second world of mathematics. Now let's try — is it working now? Oh, yes. There it is. There's a second world of mathematics, and in this second world of mathematics almost all our graduates live. These people we educate are not, in general, entering our world of academic mathematics. They go somewhere else. They go into industry. They go into banks, into insurance companies. They go into R&D departments. There is a second world of mathematics outside of our world, outside of academia, in industry. And this is what I would call mathematics as a technology. And we should all be very happy that mathematics has become a technology, as Ronald Coifman has already described. It really helps us a lot, also, even if we are pure mathematicians — I will come to this point. So this mathematics as a technology, this second world of mathematics: is it pure or is it applied? Now let me describe to you a little bit the results of a project I had together with a psychologist and a historian. It's nice for a mathematician to work with other people. It was a Volkswagen Foundation project, and we were trying to find out what happened to all the graduates in mathematics in Germany in 1998. This is eight years ago. And you know, these psychologists are unbelievable. They are really asking these people, in questionnaires, unbelievable questions.
I would have never dared to ask: are you planning to have children, and how will you reconcile your profession and your family, and so on. But she did. And the people answered. And the question is: what have they done in the following eight years? What happened to them? Did their dreams and wishes come true or not? You must understand, we had 3,000 graduates in mathematics in Germany in 1998. That's quite a lot, I think — I think the number today is even higher. Mathematics is very attractive in Germany; you may ask why. And of these 3,000, 1,420 went into high schools, so they normally become high school teachers. And the other ones, 1,600, made a diploma, or nowadays a master's. And we asked these people, and 600 of these 1,600 were willing to answer a questionnaire. This is a very good sample — I mean, be aware that 600 out of 1,600 altogether is a rather good sample. And we asked these people again in the following years, 2001, 2003 and 2006. And now, what happened? What came out? First of all, of these 1,600, if you take this as a sample, only 10% became academic people, I mean, they entered universities or research centers. So we speak always about these 10%, and we forget the other ones. 10% disappeared somehow, and 80% work as software designers, in R&D, in banks, in insurance, in consulting, and so on. Do they do mathematics? They don't do much pure mathematics, I must say. I have asked 20 former PhD students of mine who work now in industry, and they were laughing and saying: are you kidding, if you ask us whether we do pure mathematics or applied? Of course not metaphors, but models and algorithms, if we do mathematics at all. Not all of them do real mathematics. So, if I saw it correctly from the data, 25% of all our graduates are doing mathematics in industry. The rest have changed: they do management, they do something which is not really mathematics. But 25% — now compare that 25% to the 10% which go into academia. I claim that the second world of mathematics is a little bit larger than the first world, and we should keep that in mind. There was a citation in the German Mathematical Society's news of a mathematician who works at IBM. He said that we should not overestimate the value of mathematics in industry; it is the midwife, but not the mother, of innovation. But maybe it's good to be a midwife — a very active midwife which helps give birth to so many good innovations. So you see what the result is: in the second world, at least as many people do mathematics as in the first world. But of course, they do mainly applied mathematics. Now, are these people drifting apart from pure mathematics? What would you say? I mean, there is this second world, and the first world of pure mathematics. I don't think they know whether they are drifting apart — they don't see each other, it's so far away. How many of this second world are at this conference here? If I am very optimistic, I would say 10 out of 4,000. So you see, there is a real big difference between this applied mathematics in industry and the pure mathematics happening here. And this, of course, is very, very bad. I think this is a damage for both worlds. It's a damage for the second world, for the world of mathematics in industry. Of course, there were some arguments we heard already from Coifman and others: of course we need more mathematics to make it better. Things are not at all good in medicine.
For example, we are totally missing good models which really describe the complex system of a body. So we would urgently need good mathematics which really deals with their problems. But are mathematicians really dealing with their problems? Yes, if they fit their own ways; if not, I am doubtful. And for the first world, of course, for our academic world, mathematics as a technology offers a lot. I think it offers good new challenges — that was also already said. Many, many good problems come from this outer world. They certainly add public prestige. The second world adds money, if we have contacts with them. And it attracts students, and that's not such a minor thing. So I think really that both worlds need each other very urgently. Need each other — but we have to do something. We in the first world have to have open minds. We have to go into industry and see their problems, to speak with them, to get in contact with them, so that they know that we care for them, and vice versa, so that they become interested in what we are doing. Thank you very much. Thank you. Finally, Peter Sarnak. Okay, I speak as a pure mathematician. I have a very keen interest in mathematics broadly, so I try to follow most mathematical topics. And I've tried applied maths too — it's much more difficult, I find — though my main credit is that I'm actually a double major in math and applied math. I had a difficult choice choosing between pure and applied math. My views are going to be, I think, a little extreme from the pure math side, but I think that's a good proportion of the people in the audience here, so maybe I'll try that angle. And also, as Helmut says, I think one's views are highly influenced by one's daily local interactions, so what happens in your department and the discussions you have with your colleagues impact you, and you'll see my views are impacted by my colleagues. Alright, does this work? Am I supposed to get out of the way, or what? Thank you — that's applied math, yeah. Okay, I'm just going to move on. So firstly, are they drifting apart? Well, certainly we have to take into account this inflationary process of everything drifting apart, but even given that, it is my feeling that they are drifting apart. I've been in mathematics for 30 years, and my own experience over this rather short period is that it's not exactly what it used to be 30 years ago, and I think one of the big impacts is the computer changing our views on many things. Is it a problem? I think it is a problem, but not one that's serious. I think that these kinds of matters should evolve naturally, with good science surviving and not-so-good science going away, and I think that's what'll happen here, too. However, there are alarms, and we've heard a few suggestions which sound very good. I will mention some alarms sounded by some of my younger colleagues, who I think we should definitely listen to. Anyway, I'm going to take a maybe controversial way of dealing with this question of whether they're drifting apart, by trying to see what is good math, what's good applied math — is there anything in common, in fact, between these two activities? All right, to give a formal definition of what pure math is would be very dangerous here; I'm sure that I wouldn't get out of the hole by the end. But I think there's one thing: most pure mathematicians — I'm talking here about pure mathematicians — we all sort of recognize it when we see it, like a fox when it sees a rabbit.
You can see something that's really good, exciting, and cuts to the bottom of the problem. I think the key ingredients — now, does this one work here? Should I only use one? Can I use it? Okay, there's a light somewhere. Yeah. So the key cycle and ingredients in mathematics are, firstly, insights: mathematical insights often become conjectures, theories, language — these are crucial. But to me, the holy grail of mathematics, and something we can never give up, is that of proof. To me, once there's no proof, I'm not sure it's mathematics; at least that's my take on the difference between mathematics and any other science. And given the exciting week we have here, let me give Thurston's geometrization conjecture as the epitome of this sort of conjecture. It's a conjecture which, when put forward, immediately clarified what one is looking for. As with all great conjectures, if they're true, they're great. Of course, if it turned out to be false, it would be much less interesting. But since it appears to have turned out to be true, it's a unifying conjecture: it clarifies the shapes of three-dimensional topological spaces. And it's not something that was obvious; it was something that was built up with many examples and theories that he developed in order to come to that conjecture. So conjecturing is of course a major part of our subject. But Thurston, thinking about this — I can't really say, I haven't spoken to him recently — but I think you could say this was internally driven rather than driven by applications. And it's damn good even though it's driven internally. Many fields have such powerful conjectures that unify the theories. And I'm not talking just about these great conjectures; I'm not talking only about mathematics that's unique and comes once in a blue moon, let's say. So there's that part of the cycle which is conjecture, and then of course, as I said before, and I'll repeat: without proof, it's not our subject. So we really need that part, and as it seems clear now, Perelman has proved this conjecture, and this is as good as it gets. There have been other successes of this magnitude, as I said. We can't judge all of mathematics by such great successes, but this is what we strive for. And I think many people, young people, very strong people, go into math with this sort of aim in mind. Of course, we all get disappointed and a few succeed. But I do believe these driving central conjectures are what drives the subject. So we have the cycle of conjecture, theories built around the conjecture, solution; and then good solutions, good problems, always develop further conjectures and further theories. And this cycle seems to just go around, and to someone from the outside it might look like a recipe for disaster. This is completely internally driven; it looks like a recipe for a sterile subject. In fact, even within pure mathematics, subjects that are introspective, that interact with no other subject, where just three experts in the world talk to each other, and then they submit their paper to the Annals, and you get the second expert's opinion — and of course this is the best thing ever written, but you can't get a third opinion — that's a problem. And such subjects naturally shrink, and I think evolution is the best way to let these things run. However, I don't believe that mathematics is purely internally driven, and I strongly believe that we are impacted from the outside.
So I want to argue that pure math needs other sciences — this is my point — as badly as they need us. Now, everybody here will tell you that we need to develop more theories for more applications, and there are beautiful applications here, there, and there. These are very important to keeping mathematics as active as it is. But we are living in a golden era of pure mathematics, I believe, because of the successes we've seen in recent years, and it is not the case that this could have been purely internal. If there are initial conditions — are we at this stage only because of the giants we have around? I don't think so. I think that we are impacted from the outside, and let me continue with this Perelman example a little, to repeat what Hamilton said in his talk here last week. So Perelman's work depends heavily on Hamilton's work, which in turn is based on Ricci flow. And as Hamilton explained, his Ricci flow was motivated for him by Einstein's equations — in fact, by the very process Einstein went through in writing down his gravitational equations, in equating the only invariant tensors that are around. So when he was developing his Ricci flow 25 years ago — and at that point everything was very experimental — he relied on what he knew about Einstein's equations. This gave him a lot of confidence that he was on the right track. So this is a very indirect, you might say, instance of physics impacting this particular program, which on the face of it seems very internal, but it does give you the confidence that you're going in the right direction. And I could give you many, many examples of similar things where our input comes from outside, often from physics, but much from computer science too. Of course, the more complex the applied math, the more towards engineering, the harder the impact is to see. But we do live in a world where we impact each other, and I don't believe we are a closed cycle, and we do need applied math. Just on a sociological level, let me tell you a story. I interact a lot with, let's say, mathematical physicists, and if somebody comes into my office I want to decide whether he is a mathematician or a physicist. And this tells the difference between these two cultures, one that I think shows our weakness and a bit of their weakness. A mathematician will always come into your office and tell you how complicated what he's doing is. My proof is a thousand pages long. It's kind of a strange discipline where you have to convince someone that you have to write a thousand pages — probably it means you don't really understand what you're doing. The physicist comes into your office and he's always trying to tell you how simple what he's doing is, and he's always lying, because he's hiding 50, 60, maybe 100 pages of calculation: yeah, that's just a trivial, straightforward calculation. But this difference of culture is one that I think explains the difference between math and, sort of, theoretical physics. The idea that for something to be good it has to be complicated is something that has evolved in certain quarters of mathematics, and it seems strange to me and unfortunate. In the end, we are always looking for the simple thing, and the real truth is somewhere in between. That's a sociological thing. Let me turn to good applied math, about which I really have very little right to talk, so I asked a few people. But let me first mention a completely extreme view that you will find.
I'd always remembered this article by its title — of course, he meant that title. This is an article by Halmos called "Applied Mathematics is Bad Mathematics". I didn't bother to read it until I was asked to be on this panel, and then I thought, I wonder what he's got to say. So I went and read it, and he's very entertaining. He's a good writer, but I think he's entirely misguided. There are many bad points in the article, even if it is entertaining; there are some interesting points, as I say. But I think the one bad point is very relevant to what I'm saying. He argues that mathematics can exist without applications — I'm talking about applications generally, not just necessarily applied math, but all other sciences. And he says, of course, mathematics can exist and will exist without them. But the converse, he would argue, is false. And I don't agree with him at all. I think mathematics cannot exist without the applications. Even the purest of math would not be where it is today if it weren't for the applications. Now, of course, if you look far enough back, everything is — I mean, if you talk about Leibniz or Newton, they are philosophers, mathematicians, applied mathematicians simultaneously. But today, with everything requiring people to be very specialized, it's much harder to be universal. But even so, I strongly believe the impact of applied math, or of applications, is crucial to the development of the subject. Now, I asked a colleague of mine, Weinan E — he's quite an opinionated applied mathematician whose opinion I value — to give me a definition of what's good applied mathematics. And he responded as follows: it has to be relevant to application areas, whether the application area is in science, engineering, technology, or industry. That's one thing he demands. The second thing, and this I found interesting: it has to help put the relevant application area on a solid, sound scientific foundation. This typically requires laying out the mathematical foundation — so, emphasizing this foundational aspect that a mathematician is supposed to contribute in another science. And then he added, and this worries me: "Personally, I'm very worried that mathematics and applied mathematics are gradually drifting apart." And he says this is particularly a worry in the areas in which he works; he works in computational PDE, scientific computation. So let me end here by saying: there's obviously a common ground — it was always the common ground for mathematics and anything else — and that is what we always feel is the crucial thing: new breakthrough ideas, in applied or pure math. And when I was young, I felt there was absolutely no difference. However, I'm beginning to feel — maybe I'm just getting old — that there really are differences. So, as I said, in pure math I can't imagine pure mathematics without proof — not that I can't imagine it without proof, but where proof is not important. In applied math, the big issues are clearly insight and the explanation of some phenomenon. So it is not clear to me that proof is valued in applications, or that it will be, even if it has been so far in the past. I often go to a lecture and the person ends by saying — especially if it's someone who's got a code or something — well, my code works, why do I need a proof that it works? Well, it's a little hard to argue with something that works that it requires a proof.
Although presumably, in an ideal world, the proof will give further insight, or an insight will lead to a proof. And that was always the kind of ideal world that I thought, 25 years ago, this was all about. But I think this drifting apart is occurring, and I think you see this with the scientists involved — and I'm just an observer. So let me end by just saying: while I think the goals and the requirements of pure and applied math are diverging, taking into account inflation, I think myself that evolution will take care of things. But I'm quite concerned by the comments of Weinan E, and by the comments of my fellow panelists, who seem to be also quite concerned — or maybe not all of them, but some of them. So I am concerned. Thank you. So thank you for these interesting presentations, which I hope now can provoke a lively and, of course, good-natured discussion. So I invite contributions from the floor. These can be either comments or questions to the panel. When you speak, could you please use one of the roving microphones — I hope there are roving microphones — and say first who you are. Can we have a microphone at the front here? Okay. Yes. My name is John Neuberger. My qualifications on this subject are half a century of teaching and research and consulting. I'd say yes, pure and applied mathematics are drifting apart. It's unfortunate, and I have a practical suggestion for beginning to pull things back before they become cataclysmic. Working mathematicians are badly needed in industry — and I say industry to cover a broad spectrum — but there's a great deal to be gained by working mathematicians beginning to connect with industry. They need to know, intimately, what students are faced with. This has been touched on, but that would influence their teaching: if it's just some theoretical notion of what people do in industry, it's hard to make an informed decision. Now, most mathematical questions in industry are phrased in terms of computing, and if someone's going to be of much influence, they need to understand about computing. Now, generally, getting started in computing is quite easy — we're talking about several weeks to begin to gain some confidence in computing, and several weeks is nothing on the scale of a pure mathematical question. Now, fruitful consulting arrangements are not so easy to come by, and I'm suspicious of a bureaucratic solution that tries to pair mathematicians with industry — but that's a possibility. Each concerned individual, though, can begin to take some steps over the medium term to make some connections of their own to industry. And I think that would help modify the courses somewhat and begin to draw the two together. Real, hard, frontier scientific problems demand the abilities of pure mathematicians, in my view. And they become applied once they get involved in this. Thank you. Yes. My name is Bijan Zangeneh and I work on stochastic evolution equations. I think the labels pure and applied depend on the tradition of the country. For example, in North America my field is pure mathematics — probability, partial differential equations and nonlinear analysis are part of pure mathematics — but in the French school, probability and partial differential equations are considered applied mathematics, mathématiques appliquées.
Then I think there are different kinds of applied mathematics, and industrial mathematics. You can consider this as a spectrum: at one end only pure, what you might call purely pure mathematics, and at the other end something applied which doesn't have a proof, or something else. So it is a spectrum from pure to applied, and it depends where you stand: to your left people are more pure than you, and to your right they are more applied. Thank you very much. Yes. My question is — okay, do you hear me? — what do you think about the need to invent new mathematics to deal with self-reference or self-organization, the way you meet them in systems biology, for instance? Or, to put it otherwise, what do you see if you look at the mathematics available nowadays to deal with the input-output black box metaphor, or the snake-eating-its-own-tail metaphor? Is this working? Yeah, it's a terrific question. In terms of the kind of mathematics that you need to deal with, say, biology or social science, or the more complex structures where every piece of information you measure is sort of linked to the others, I think what seems to be emerging is somewhat what emerged in physics a long time ago, where Einstein decided that physics is geometry and that you can describe the physical equations as basically the geometry of space-time, and later on in Yang–Mills and gauge field theories. Somehow the physicists got to the point where the geometry encapsulates the relationships between all the objects around. I think we see this emerging in the analysis of data of various kinds — whether it's data on the web, where you can actually do a fast search by relating every unit of the web to every other and doing some sort of global geometry of the web in order to get a Google rank, or something like that. It's a subtle and possibly profound idea. We see it happening in biology and neurology and everywhere else. It's sort of the web of relations between objects which encapsulates their internal geometry or their content. At least that's my view at the moment; I think it's just emerging, though. Professor? I have just a little thought about this. I actually don't feel it so much that these sides of mathematics are drifting apart. Maybe that's because I sort of grew up in a branch which was considered applied — discrete mathematics and graph theory — though I never considered it that; I always considered it basically an area of pure mathematics which has good applications. Now, what I would like to point out is that applied mathematics, or the applications of mathematics, is really a very wide range. In this Congress, in the program of this Congress, I can find excellent examples. Professor Itô won the Gauss Prize for work which he did from motivations that I would consider completely pure mathematical motivations, internal motivations. Then it became extremely important in very real-life activities, like option pricing for stocks and so on. Another kind of application is where the mathematician looks at some phenomenon and then begins to think about what kind of mathematical phenomena could help to understand it. This is like the Nevanlinna Prize winner, Jon Kleinberg, how he was looking at the Internet and how it relates to the eigenvalues of the corresponding matrix. Or Shannon, who, by looking at channels of communication, came up with the fundamental ideas of information theory.
Then there is also the applied math of Professor Neunzert's second world, which he was talking about. And I just came down here from Martin Grötschel's talk, which also described settings where you actually have to produce applicable results. Now, I think that to ban any of these, or to consider any of these as inferior, would be a very serious mistake. I think it's the intellectual content of the work that should matter, and not its particular form. I think all three, and maybe other variations of how we relate to the real world, are terribly important for us. Just one more thought: the level of mathematics that other scientists need varies very much. Sometimes a thing as simple as solving a quadratic equation can be extremely useful; in other cases, of course, you really need very sophisticated and new mathematics. But I don't feel that drifting apart, really. I think in mathematical areas which are thriving there are always a lot of exciting connections within mathematics, with applications, and with ideas that come from the real world. So, my name is Bernhelm Booss, from Roskilde University in Denmark, and I was actually the author of this one sentence from the book Mathematics and War which was quoted by Yuri Manin, and I can confirm that it was meant really as a sarcastic remark and a kind of polemic against the former president of the American Mathematical Society, who felt that the time was right for seeking more funds for the National Science Foundation and for mathematics, in a hearing before a Congressional budget committee, just at the very moment when the efficiency of the mathematical technology of Professor Neunzert's second culture of mathematics was being transmitted on television around the world, with pinpoint accuracy, in the Iraq war. And this leads me to the broader question of morals, because I noticed that Professor Neunzert, for example, in your comments just now said that we should all be happy that mathematics has become a technology, it helps everybody. But are all innovations good innovations? This is perhaps a new situation for mathematics: to the degree that some of our theories become more applicable — and, as Professor Neunzert correctly pointed out, there is a second culture of mathematics developing — we are confronted with problems where physicists or medical doctors have a longer tradition of discussing ethical issues than we have in mathematics. And we should perhaps add that one special element of the ethics of mathematics should be not to overstate the role of mathematics in different connections. One of the disasters, perhaps, of this Iraq war was that there was a general belief, both in the United States and in some European countries, that deep, complicated political problems can be solved like nothing with the help of modern mathematics-based technology. And I think if we as mathematicians tried a little bit to be more modest in our own statements about the value of our mathematical applications — not only about what we really can do and where we are — then we would perhaps prevent the public from putting too many aspirations in mathematical technology. And I think this is a kind of second moral issue: that we should try to keep to the substance and try to be modest in our statements about possible mathematical applications.
Since you cited me several times, may I try a short answer. When I say mathematical technology, I mean that mathematics has become a technology; it is not that all technology is mathematics — you turn it around. I don't believe that mathematics is responsible for all the weapons in the world, though there may be some mathematics used. Can you guarantee, with any mathematics you do, that it is not used by somebody else for weapons? You can never guarantee that. We can only have the following morality: we use our abilities as mathematicians as well as possible, with full responsibility for what we can do. So we must have a morality. I would not work on weapons, for example — that's my decision — but we cannot say we should stay away from applications of mathematics just because there may be some applications which we would not like. I mean, this is an old discussion, in Germany and everywhere, but I wouldn't say that we can go so far. I am still happy that mathematics has become a technology, and I would nevertheless try to avoid any bad applications — that's another story. Well, who would like to speak? Thank you very much. My name is Burjan Zadef; my subject is homological and commutative algebra. As we all know, mathematics was part of science until the Newtonian stage. If we still consider it a part of science, we know that the mission of science is to recognize phenomena — of nature, of society, and everything can be considered a phenomenon — to predict their behavior, and to use them for the sake of human beings. If we consider mathematics as a science, I think pure math is a part of applied mathematics: its root is within the phenomena, and the divergence, if one can call it that, is the point at which we mentally like to generalize something from a very real problem to a mental problem. At that point we call it pure mathematics. For instance, we generalize the equation of degree two, which is real, to degree three, which may still be real, and then to degree n — such as the Fermat equation, x to the n plus y to the n equals z to the n, which took several hundred years to be solved but may have no immediate application. It's pure math, but it's rooted in the phenomena and in science itself, and in applied mathematics. So I don't know why we, the people doing mathematics, especially mathematicians, like to set mathematics apart from science; indeed it is a sort of science, rather than the generalization of very pure objects, pure concepts, pure theories. I remember that in the Notices of the AMS someone compared this to the Mississippi river, which in some places diverges from the main stream to somewhere apart from it; some parts of mathematics look like that — pure generalization, very theoretical — and they may not have immediate application. I think in the future those sorts of things will be very limited, maybe not even on the table of any mathematician. Thank you very much. Can we have the microphone at the front here? Okay, just take this. Okay, first you and then Martin; your question first, please. My name is Oleg Viro. I am a sort of pure mathematician, I think, and I want to ask the people leading the discussion about definitions. I do not understand who is drifting from whom — am I applied or a pure mathematician? So how do we define an applied mathematician? Really, in any science people do calculations and use formulas; I think it is impossible to call all of this mathematics applied mathematics. It is impossible, say, to hire anyone,
of even very successful to master department as a professor of applied mathematics there should be some definition of this say should those people who are applied mathematicians who are considered applied mathematicians know mathematics have mathematical education I'm asking these questions because they are practical and also make sense for this discussion thank you okay maybe I can try one of the things that air was most concerned about was that people that he defines as doing applied math be educated mathematically in the traditional way he felt this was really important that's on when he said he was very concerned he in fact added that one of his concerns in this direction of education and students is that somehow the applied math community was not attracting the very best mathematically talent people so I'm just firstly answering your question in connection with what he told me I think I agree with you I who's drifting from who that's a good question but I think there is a difference in what an applied mathematician does and what a pure mathematician does the as it when we love us mentioned itto he had a major impact on the world but he was not motivated in what he was doing by applications most pure mathematicians feel they are working on problems purely because of trying to understand number geometry the theory of equations more deeply but the application we all hope will come and if it wasn't for that it wouldn't be that important the subject but we do have a different way of going about things applied math mathematics I think again quoting air has to have applications in mind and the style is very different if you pick up a journal in pure mathematics there's a theorem there's a proof or there's an attempt of making a certain kind of discussion many applied math or scientific journals you pick up and they're talking about a phenomenon and then there's a pictures and there's a phenomenology which is all very interesting science but we do things very differently and and I think that where these things surface is in in your own department so I I would like to say that I agree with you that we all the same but I think the way we go about things is very different so this you stumped me on so maybe I can offer a definition so I'm Martin Grotschel and I would like to address one issue that has been implicit in all talks that's psychology and I would like to add psychology of institutions usually there are camps in different buildings and Leonard Carlson said well the applied mathematicians are in this other building and that basically contributes to the feeling them and us and they could be just different people and all of a sudden he is applied or not and many of us have lived different lives I've been a pure mathematician for a while and now I'm very applied but I value both sides and I think one of the greatest experience in my life is I moved to Berlin and they didn't have an institute of applied and of pure mathematics they were all together and this is actually something very valuable and now people the scientists the mathematicians and students float between the various areas which I find extremely positive for both sides and that will help to keep these pieces together and I believe that this driving out of certain areas from mathematics into applied mathematics from applied mathematics into other institutions has been a really bad historical process I mean if I look at the US situation applied mathematics is basically if you deal with differential equations the optimizers are 
sitting in industrial engineering the discrete mathematicians often in computer science statisticians sit anywhere else but in mathematics and what is the reason for this during certain periods of time there were power games and that's how they split apart and and this is a bad evolutionary process and if we can we should try to bring these groups together and I think that would be an important part on the institutional side and that would resolve many of the issues we are discussing here and I don't I cannot offer a definition of pure and applied mathematicians but I believe that this kind of space distributions that contributes a lot to this feeling that there are other parts of mathematicians and mathematics David Livermore, go to the end of the front I want to thank the panel for a very thoughtful discussion Dave Livermore from University of Maryland I have a hat that is a pure hat and an applied hat so I can try and speak both sides of the issues I think Martin raised a very important point is institutionally what can we do I think the the issue is not so much we drift apart I think that criticism is is valid because we do have control of this I think the phenomenon has to do with the expansion of human knowledge and endeavor and I think all disciplines to some measure are are confronted with this in particular universities but all institutions not just academic and I think one model from the states that's grown up in response to this balkanization that you describe actually is one to look forward to because whereas you were talking about tying people who call themselves wear different mathematical hats together really it's an issue of being aware of the intellectual landscape about us and that means not only talking to statisticians or computer scientists but also to engineers and and physicists chemists biologists and one model that does exist in the U.S. I think and is thriving at some institutions is the development of centers centers that focus around maybe an application or an idea that brings together people of mathematical paradigm or an engineering paradigm or whatever to work together learn from each other stimulate each other I think just for example the mathematics department at Maryland is tied to a Norbert Wiener Center and applied harmonic analysis which involves pure mathematicians and and and engineers we have a list of several institutes like that and I think we if we put our minds to it we can overcome these sort of intellectual barriers that separate us artificially because ultimately I think the picture the whole panel has played that this is a human endeavor is really the right one and I look forward to a very good future. Perhaps I could say something about my own experiences of applied mathematics I work in the calculus variations but also in its applications to materials and I've written papers with electron microscopists now I believe in the value of theorems in applied mathematics and I believe in the value of theorems for telling us when computer codes work. To me it seems that there's the three elements of modeling analysis and computation and and they all feed on each other to improve what goes on but I think it's an interesting process when you start working on in a new area a new scientific or some new application area where you something gets you interested in it and then you you see that there's something mathematical and you learn a bit more about it and at some point you have to have some confidence that you can offer something to this field. 
At the same time you've not done a degree in the biosciences or materials or whatever subject it is and so you have to somehow be humble and put in a lot of work to learn at least a little piece of this area so that you can break down these language barriers and I think that's a really exciting process but to start with you may encounter some resistance from the people in the area and one piece of advice I have is always to cut out the middleman or woman who talks directly to the person who's doing the experiments and try maybe to avoid some of the intervening theory so who else would like to speak? One more friend? I want to make clear what I really asked about I didn't ask really about definition. I want you to say if you really believe that applied mathematician is a mathematician that he or she should have qualification of mathematician should know basic things, should speak mathematically, should know some basic things about this. And that's it if it is defined if they are defined as people who are publishing papers in journals with applied mathematics in the title this is one thing if we have something more specific in mind it's another thing. Well my answer would definitely be yes but I think I better let some of the other people on the panel give their view on this. Would you say yes with respect to the question he must have a mathematical qualification to be a mathematician that's your answer yes to a certain extent not you know he needs to know certain basics absolutely. How many mathematicians in this room do not have a mathematical education? I mean I know many physicists who have become very good mathematicians later on would you not count them? If we follow Peter Sarnac here he would tell you that anybody who can prove theorems qualifies right the only issue is what do you mean by proving theorems and that's I know what you mean but I think you did not think about it enough. An applied mathematician would come up say with a computational algorithm then the theorem involved in that algorithm is that he can by a certain scheme compute something to some precision that's a theorem right and the the goal there is not to climb the Everest and prove some old conjectures or do something that will impress your colleagues. The goal is to achieve to solve the difficult problems to find the tools to do it and it's really it's really the intellectual challenge involved which maybe will qualify him as being a good mathematician or a good applied mathematician I don't think it makes any difference it's really the the intellectual novelty and contents that will allow you to think of the people as a person as a mathematician I think Shannon was an engineer right but there's no way you could say that he wasn't a mathematician. Hi I'm Bob Kohn from the Korn Institute. 
I actually find it very reassuring that there's some difficulty in defining this separating the applied mathematicians and the pure mathematicians here and I think that something that nobody took the time to do was to talk about how mathematicians pure and applied are really rather different in our mission in our worldview and in our in our functioning from well the other sciences I think the most important thing here is not that we worry about creating separations between pure and applied mathematicians or defining those two it's really more about making sure we don't leave a big gap between mathematics on the one hand and other areas of science on the other in the areas I work in which tend to be mainly close to the physical sciences or finance the mathematician's job is to think about whether the algorithm really works to think about what are the properties of this model to solve problem to form to look at whether opportunities have been missed by not bringing to bear the right set of tools to develop new tools sometimes if they're called for in the application area and there's nobody else out there who's going to do that if we don't fortunately I think to a large extent we're not drifting apart I sort of disagree with many panel members and I think that the talks at this meeting are the best possible proof of that there's a question in the middle same level I'm Harry Bielinki from Barolani University Israel when there is a discussion and discussion in the last four many years I like to think each time about epigraph which Carl Popper put before his book on a philosopher of science Carl Popper put before his book there were two statements one by Kant and another by Schlich Kant said that if he knows about prolonged discussion he believes that the reason serious issue in the kernel and the leader of the veneers circle Schlich says that when he knows about prolonged discussion he believes that it is a matter of words or definitions so several people pointed to the bad definition of pure and applied mathematics and I didn't hear that any of the panelists suggested even attempted to define these words but I believe that there is a word in the question which was proposed which could be defined or at least must be attempted to define and it is the word drift in fact one of the second panelists I think suggested that the matter is mostly sociological and I think drift probably could be defined in sociological terms in fact one could try to formulate a theorem in a sense or suggest a parameter which could define this drift and probably you would agree that drift between two parts of mathematics could be measured somehow in sociological terms I was pleased to hear that someone here tried to suggest military power of state as a proof that the drift between pure and applied mathematics is not great in a sense if applications are very close to kernel mathematical kernel this is a witness that military strength of countries is great so I would like to hear say maybe there is another parameters which could define drift or any other words at least in the question proposed but since I'm in the last time I'm doing history of mathematics let me also make a remark on style of presentations by panelists and I think the most interesting in this respect in respect of style where the fourth and fifth panelists the one who argued for applied mathematics suggested two numbers 10 percent for state in academia and 25 percent which are its interest and definition the second world of mathematics and I believe that 
this would be the only two numbers which would be remembered from this discussion in fact so I believe it was very successful presentation where two numbers and all of us would remember them while the fifth panelist instead of statistics he suggested to quotes and one from a famous mathematician from halmas and another from his friend of course it's good to remember what halmas said but again this is like two examples against statistics presented by fourth panelist in my view the fourth panelist argued for applied mathematics made his presentation much better in both in from any point of view even from point of view of actual presentation on the screen so I believe that the fourth panelist in a sense one they okay I think we're not we're not rating the panelists are there any more comments or questions one over there yes okay I am Hassan Bouzair from Morocco I am an engineer in school I am working on applied mathematics but in my opinion we should say mathematics there is one mathematics none pure and applied mathematics and the question is is mathematics drifting from technology this is the the question and this is the question that should be answered by mathematicians and I think that mathematicians should work on things that that are good for for technology and not only on mathematics that's my point of view thank you and as a question over there thank you thank you very much my name is Alexey Tretikov I'm from Russian state Moscow State University and Russian Academy of Science but last year I'm working in Poland and what I would like to ask as a the famous Russian mathematician andrey kalmogorov as the last years of his life said that that he thinks that there must be a new type a new mathematics new type of science that for investigation of reality and now observing the last achievements last events in mathematics I see that they have more and more general and global character and is it mean that maybe we were standing before possibility of creation so-called global calculus global differential calculus what it mean I'm trying to explain when Levenis and Newton have created their differential calculus it has been local of course differential calculus because we are investigating local behavior of function in the neighborhood of one point and but now based on the achievement of scientific community we have possibility to create new so-called global differential calculus when we can observe behavior all many fault all many fault and based on this theory we can unify your question pure and apply mathematics very powerful powerful will be unification of this two direction based on these global differential calculus maybe what your opinion about this maybe kalmogorov said about this possibility this new mathematics thank you I wish I knew the answer I don't know I think something will emerge obviously in the next 10 10 years that will allow us to see new interesting mathematical structures but more than that I don't know question champion champion again my specialities differential geometry and global analysis it seems to me that in the discussion we are meeting three different levels which I think is makes the discussion a bit confusing the three levels for me or the following one the first one is really the science itself and for me if you speak about the science itself I think the terminology mathematics and applied mathematics is not a good one as was pointed out already by some people the second level is as professionals most of us are really making a living by teaching so it means 
from that point of view that the what happens to our students is something which would be of primary importance to us and from that point of view I found the remark by professor Neunzert very adequate that is if so many of our graduates are really working in the world of technology or all sorts of technology I think we need to know as professionals a little bit what our graduates are doing and the third level is as scientists and because the impact of science on society is growing I mean bigger and bigger every year we really have also another role which is really providing answers to problems which are posed to the society in general and then we are asked to I mean interfere or interact with say other scientists but also people from working in the world of technology and it seems to me that depending which level you take then the drifting has to be measured by different means it's certainly true that if you take the first area I think there is no drift because I think and this congress is a very good proof of that we have now ample evidence of the fantastic impact of new questions coming from technology to mathematics if you take the second part of you then the growing number of our graduates who are really involved in the working in really in the society and therefore really using the skills we give them in a very applied way something which forces professionally to get a better knowledge of these applications and the third level is even another one which is in which way and that's probably the one in which the question of ethical dimension of our profession is very important in which way we can contribute to really this scientific enterprise which is more and more shaping the world and shaping the world means good and bad things at the same time the back of the microphone over there well thank you very much this discussion is extremely interesting but it is not so new it originate already in Plato where he was discussing pure and applied mathematics in respect to Archimedes implication in the defense of Syracuse but already about 40 years ago Mark Katz tried to define the difference between pure and applied mathematics by saying that pure mathematics tried to find the difficult answer to situations which were rather easy while applying mathematics was to try to give easy answers to situations where extremely complicated I think it is very hard to make difference definitions which will which will agree with everybody but the distinction probably for me is more between the pure mathematician and the applied mathematician than between pure mathematics and applied mathematics the pure mathematician has been saying is somebody who finds his motivation inside while the applied mathematician is somebody who should be more willing to find his motivation outside or in a simpler way the applied mathematician should be the mathematician who should be encouraged to answer questions asked by other scientists or the mathematician engineers in order to try to be useful to society and maybe what we should do is to encourage all mathematicians to be more receptive to dialogue with other fields and with other people thank you thank you time for one more one more question i think so anybody no it's not it's not turned on well i would like to ask professor mani here and to the closing of this scene to recall a saying from harald bore which i learned from burry yesen he they had in this discussion with hardy sometimes a little bit legia distinction for mathematics and mathematics qualities and they 
distinguished, generally for the sciences, whether a field was in an extensive phase or in a consolidating phase. As I recall from what I learned from Børge Jessen, Harald Bohr always had great admiration for the phase of consolidation in physics in the 20s and 30s, after a previous period of extension with many, many new results at the end of the 19th century, and they always claimed that mathematics at their time, that means in the 40s or 50s of the last century, was unfortunately still in a phase of extension, and that what we needed for mathematics was a new phase of consolidation. Would you agree with Sarnak when he says we had a period of 30 years or so, a kind of golden period for pure mathematics, which in this Bohr terminology was a phase of true consolidation, where various fields in pure mathematics were shown and proved to be interconnected? And could this be a good starting point for us also to make a real, valuable contribution to other fields like biology, which, in spite of the great achievements of Watson and Crick 50 years ago, is still, as was also described by other speakers, in this phenomenological stage? Would you share such an optimism that perhaps the basis of these last 30 years gives us a new impulse really to do something, also by contributing consolidation to these more phenomenological, expanding sciences? Okay, first of all, yes, I do agree with Peter Sarnak that the last 30 years were years of great consolidation and maturing of the mathematics of the 20th century. I am less sure, I mean emotionally less sure, about how to characterize in such admittedly simplistic terms the development that is connected with computers, computer science and the internet. Kolmogorov, whose name was mentioned here, introduced the notion of Kolmogorov complexity. Very roughly speaking, the Kolmogorov complexity of a piece of information is the length of the shortest program which can then be used in order to generate this piece of information. In this respect one can say that the classical laws of physics, such fantastic laws as Newton's law of gravity or Einstein's equations, are extremely short programs that generate a lot of descriptions of real physical-world situations. I am not at all sure that the Kolmogorov complexity of the data that were uncovered by, say, genetics in the Human Genome Project, or even of modern cosmology data, is sufficiently small that they can really be grasped by human minds. One should be aware that if a certain large piece of information has a very large Kolmogorov complexity, then we are bound not to understand it; we are bound to relegate the processing of this data to computers or computer networks or whatever. And I have a very strong suspicion that this is a new situation in the natural sciences with which we really do not yet know how to cope. We produce technology, and it might happen that this technology is absolutely indispensable to deal with this data. And since I believe that a good joke is the best end of everything, let me propose you a joke, I do not know whether it is good or bad, about applied mathematics, or rather about some far end of the spectrum of applied mathematics: applied mathematicians help to do well things that should not be done at all. So with that we have to draw our proceedings to a close. I wouldn't dare summarize this discussion, very interesting though it has been, but can I thank
you all for your contributions. There will be a record of this round table in the proceedings; we've yet to decide exactly how to do that, but those of you who have contributed may find yourselves getting emails asking for your permission to put your words of wisdom in the proceedings. Could I thank, first of all on your behalf, Marta Sanz-Solé, who organized the round table, and especially our panelists. Thank you.
Closing Round Table discussion of ICM 2006.
10.5446/15974 (DOI)
So it's my privilege to introduce this final plenary speaker, who is Oded Schramm from Microsoft in the United States. Schramm has a background in complex analysis. He was a student of Thurston, and he worked early on on circle packing problems and on the proof of the Riemann mapping theorem that way. And then he was interested in graphs and in probabilistic aspects of that. But the real jackpot he hit by having the idea that one should study the Loewner equation not only with smooth coefficients, as was Loewner's original idea, but also in more general cases, and that the choice of Brownian motion as the driving function in the differential equation was going to produce interesting results. And he was indeed completely right. It was later supplemented by his work together with Lawler and Werner, who was one of the recipients of the Fields Medal this year. And as you know, they were able to prove, for example, the Mandelbrot conjecture about the dimension of the outer boundary of Brownian motion, that the outer boundary has dimension four-thirds. If you had mentioned to me ten years ago that somebody was going to prove this conjecture, I would have said that you were out of your mind, that there is no chance that this can ever be proved in a strictly mathematical sense. So I think we have witnessed here one of the most interesting developments, and it would be a great opportunity for you, if you don't know it already, to learn about this in the lecture that Oded Schramm is going to deliver. I cannot help making a personal comment about the method. It is all based on the deformation of a domain in the complex plane according to the Loewner scheme, and in this way it is a flow where time plays an essential role. And it's really remarkable to think about that. The proof of the Bieberbach conjecture is based precisely on a deformation of a domain into what was supposed to be the extremal case, and in that case you could get monotonicity. And of course one can think about Perelman's proof of the Poincaré conjecture as based on completely the same idea, that you have a differential equation according to which you deform a domain, and there the great difficulties are to understand the singularities. So I think this method of deformation of domains in the complex plane is one of the greatest achievements of our time in analysis, and I think there are prospects for very interesting developments in the future also. Personally I've been involved in trying to understand what is called DLA, and there is the Hele-Shaw flow, which one doesn't really understand, which is similar to that. So I think you will see a continuation of this method. But right now we look forward to Oded Schramm's lecture on conformally invariant scaling limits. Thank you. Thank you very much, Lennart, it is a great honor to be here. Okay, well, most speakers had some slides showing a plan of the talk, and this is my plan. We're going to look at random systems such as the one you see on the slide. This one has a very simple description: all the hexagons are independently white or black with probability one-half. And we're going to study such systems, and the way we're going to learn information about them is by studying random curves that appear in these systems, such as the red curve here. Percolation is going to be the primary example, but not the only one. So this is called percolation. There are two closely related talks, actually very closely related talks.
I'd like to mention the talk by Stanislav Smirnov yesterday and Wendelin Werner's talk later today. And some things which I will omit for lack of time, you can probably learn from Wendelin Werner's talk. Let's start with something rather simple. Let's discuss one-dimensional Brownian motion. One way to think of one-dimensional Brownian motion is as a limit of simple random walk. Here you see a simple random walk where the x-axis is the time axis and the y-axis is the spatial coordinate, and at specific time units on an integer lattice the walk with probability one-half goes up or down, regardless of what has happened in the past. When you get such a graph, these rectangular steps are not drawn with the height difference equal to the time difference, because we're going to take a limit as the time and space scales and get a Brownian motion. And you have to scale time and space differently: if you scale space by delta, you have to scale time by delta squared. So this is some definition and a refresher on Brownian motion. Here are some properties Brownian motion has. It is a continuous path with probability one. It satisfies the Markov property, which means that if you are at time t and you want to predict something about the future, not with certainty but with some reliability, then the only information from the past that is relevant for you is your current position. And Brownian motion satisfies Brownian scaling: delta times the Brownian motion is the same in law, has the same distribution, as the Brownian motion with time scaled by delta squared. Let's go on to two dimensions. The rest of the talk will be entirely two-dimensional. When you take two-dimensional Brownian motion, you have two natural ways to define it. Since we defined Brownian motion as the limit of simple random walk, you can also define two-dimensional Brownian motion as a limit of simple random walk on the square grid. Alternatively, you can take two independent Brownian motions, one in the x-coordinate, one in the y-coordinate, and therefore get a two-dimensional curve. And these two definitions are equivalent. Both definitions use the coordinates, the x and y coordinates, but a nice property of two-dimensional Brownian motion and higher-dimensional Brownian motion is that Brownian motion in fact doesn't depend on the coordinate system, only on the inner product; namely, it is rotationally invariant. In two dimensions it has a higher form of symmetry: it's conformally invariant. That means the following. You have here a domain, and you have a base point, and you start Brownian motion from the base point, you stop when you hit the boundary, and you consider an analytic map, or a conformal map, to another domain. You start in the other domain a Brownian motion from the image of the base point. Well, let me put it this way: the image of this Brownian motion in this domain is the same as a Brownian motion starting at the image of the base point, except that the time parameterization changes. You should keep this in mind, because later on we will discuss various conformal invariance properties of different models. I will be rather loose, and I will not say what conformal invariance means, but you should think of something in this flavor. Maybe I should say a simple fact is just that the hitting point on the boundary is conformally invariant, but this conformal invariance means much more.
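To make the diffusive rescaling above concrete, here is a small numerical sketch, not part of the lecture itself; it is written in Python with NumPy, and the function name scaled_walk_endpoints is just an illustrative choice. It rescales a simple random walk by delta in space and delta squared in time and checks that the variance at time one stabilizes, as Brownian scaling predicts.

```python
import numpy as np

rng = np.random.default_rng(0)

def scaled_walk_endpoints(delta, t=1.0, samples=5000):
    """Endpoints at time t of simple random walks rescaled by delta in space
    and by delta**2 in time (so each walk makes t/delta**2 steps of size delta)."""
    n_steps = int(t / delta**2)
    steps = rng.choice([-1.0, 1.0], size=(samples, n_steps))
    return delta * steps.sum(axis=1)

# Brownian scaling: the variance at time t should be close to t, independently
# of the mesh size delta.
for delta in (0.2, 0.1, 0.05):
    var = scaled_walk_endpoints(delta).var()
    print(f"delta = {delta}:  sample variance at t=1 is {var:.3f}")
```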
This conformal invariance means that the whole distribution of the curve is conformally invariant. Let's move on to a more difficult topic, which is percolation. Here you have a mesh of hexagons. This is the most successful two-dimensional percolation model right now. If you're doing Bernoulli-p percolation, each hexagon is white, or open, with probability p, independently. Then, well, so far it's just a system of independent bits, but once you start studying connected components of white or red or black regions, then you're doing percolation. There is a large variety of alternative models of percolation. One alternative model is: you work on a grid and you flip a coin for each edge to decide if you keep it or throw it away. Now that's percolation. Now we will discuss critical percolation. It turns out, and that's pretty easy to see, though I will not prove it to you, that there is some number between 0 and 1. See, we have this p as our parameter, which we can choose; p is the probability for each hexagon to be white, and we can play with this parameter. There is some specific value that is crucial, and it's called p sub c, the critical value of p. It can be defined in various different ways, and it has been defined in various different ways, but the simplest definition mathematically is by saying that if p is larger than that value, you will have an infinite white component, and if p is smaller than this value, then the probability for the existence of an infinite white component is going to be 0. This is a good model for a phase transition, because the large-scale behavior of the system changes drastically when p pushes beyond p sub c. What do I mean by that? If p is slightly smaller than p sub c and you look at a very large picture, then almost nothing is connected: the white regions are small islands, or very small lakes, inside a vast area of black, and the situation reverses when p is above p sub c. This is very easy. It's harder to determine what p sub c is, and for some percolation models we will probably never know what p sub c is. For this particular model that I've described here, it has been known. Well, Harris proved one half of the theorem: he showed that at p equals one half there is no infinite white component, there is no infinite cluster. This means that p sub c is at least one half. Twenty years later, Kesten showed that p sub c actually is one half, because above one half you do have an infinite cluster. The one half here is a value which is easy to guess; this is because at one half you have a symmetry between white and black, they behave the same. If you believe my story about one color completely dominating the other when you're off of p sub c, then you should believe that one half has to be the critical value. Proving it was a significant achievement, based on an earlier important result of Russo, Seymour and Welsh. Well, Harris's theorem tells us that at one half we don't have an infinite cluster, and that is the p sub c. What happens if you condition on the rare event that the cluster of the origin is going to be large? If you sample percolation, you know that with probability one the cluster of the origin is going to be finite. Typically you'd expect it to have maybe size five, six, zero hexagons, something like that. But you can run your simulations a few times, and you can wait until you get a simulation which has many hexagons in the cluster of the origin, and then this is roughly what the cluster looks like if you wait till you get a sample with a thousand hexagons or more.
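Before moving on, the sharpness of the transition at one half described above is easy to see numerically. The following is a rough Monte Carlo sketch, not from the talk: all names are illustrative, and the n by n box with the adjacency below (square lattice plus one diagonal, a standard graph-isomorphic stand-in for the triangular lattice, which is the adjacency structure of the hexagon coloring) is only an approximation of the picture on the slide.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)

# Neighbours of a site on the triangular lattice, in square-grid coordinates.
NEIGHBOURS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1)]

def has_left_right_crossing(open_sites):
    """BFS from the open sites of the left column; True if the right column is reached."""
    n = open_sites.shape[0]
    seen = np.zeros_like(open_sites, dtype=bool)
    queue = deque((i, 0) for i in range(n) if open_sites[i, 0])
    for i, _ in queue:
        seen[i, 0] = True
    while queue:
        i, j = queue.popleft()
        if j == n - 1:
            return True
        for di, dj in NEIGHBOURS:
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n and open_sites[a, b] and not seen[a, b]:
                seen[a, b] = True
                queue.append((a, b))
    return False

def crossing_probability(n, p, trials=400):
    hits = sum(has_left_right_crossing(rng.random((n, n)) < p) for _ in range(trials))
    return hits / trials

# Below, at and above the critical value 1/2: the crossing probability drops
# towards 0, stays of order one, and climbs towards 1 as the box grows.
for p in (0.45, 0.5, 0.55):
    print(p, crossing_probability(60, p))
```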
You can go on and you can ask yourself: well, how long will I have to wait till I see a cluster of a thousand hexagons? How fast does the probability of having a large cluster decay as the notion of large increases? Such questions are governed by what are known as critical exponents, and physicists such as den Nijs, Nienhuis and Cardy have predicted the values of these exponents. In particular, the probability that the origin is in a cluster of diameter at least r behaves like r to the minus five over 48, with some lower-order corrections, as r tends to infinity. If you ask a similar question but require that the connection be inside the half plane, then the exponent you get is minus one third instead of minus five over 48. These were conjectures, and the arguments typically assume asymptotic conformal invariance and many other unproven properties. The way the physicists like to assume conformal invariance is this: they look at, say, the probability that two hexagons are in the same cluster, and they believe that this has a limit, as a measure, after you rescale appropriately, and then they say, well, this limit should have nice conformal invariance properties, and they make similar assumptions. Then once they have a few assumptions in place, they can work some machinery to calculate the values. Conformal invariance plays an important role. Now, there are also predictions which are more refined than just an exponent. See, an exponent just gives you one number, which is very important because it tells you how unlikely these rare events are and so forth. As useful as it is, there are more precise things to know about. For example, Cardy had the following prediction. If you look at a rectangle and you take the mesh very fine and you ask, what is the probability to have a connection in white hexagons from the left to the right, then in the limit as the mesh refines, this quantity approaches the expression right here. This is the gamma function, 2F1 is the hypergeometric function, and eta is the cross-ratio of these four points after you map the rectangle conformally to the upper half plane. This is a reasonably complicated formula. Lennart Carleson found a nicer form for this formula. As far as conjectures go, you want to state it as naturally as possible, because otherwise you can't even get started. His form is the following: you take a triangle and you want to connect the bottom edge to a segment on the right-hand edge. If the edge length of this triangle is 1 and the length of this segment is x, then the probability to have this connection converges to x as the mesh goes to 0. This is somewhat simpler than Cardy's form. Just so that we don't get carried away with percolation, I want to mention a few other models, because percolation, though it's a central theme, is not the only topic I want to discuss. Here I can use this picture to describe three different models. What is this picture? You have a white grid here, and you should think of the white grid as a graph. Well, it's all a graph, but you should think of an n by n square grid, and the white is a spanning tree in that grid. If you notice, the white maze here has no cycles, and it spans every vertex in the appropriate grid. If you have that n by n grid, you can think about choosing a spanning tree from that grid, and you can say, well, I want to be unbiased, I'm going to choose the spanning tree uniformly among all spanning trees in that grid.
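For reference, the crossing-probability formula mentioned a moment ago is only on the slide, so here it is written out in what I believe is the usual normalization (a reconstruction, not read off the slide): for a conformal rectangle whose four marked boundary points have cross-ratio eta after mapping to the upper half plane,

```latex
\lim_{\text{mesh}\to 0}\;
\mathbb{P}\bigl[\text{white left-to-right crossing}\bigr]
  \;=\; \frac{3\,\Gamma(2/3)}{\Gamma(1/3)^{2}}\;
        \eta^{1/3}\,
        {}_2F_1\!\left(\tfrac{1}{3},\tfrac{2}{3};\tfrac{4}{3};\eta\right),
```

while Carleson's form of the same statement, as described above, says that for an equilateral triangle of side length 1 the probability of a white crossing from one side to a boundary segment of length x on an adjacent side tends simply to x.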
Well, at first I was kind of skeptical about this uniform spanning tree model. Why would the uniform measure be a natural thing to study? Well, it turns out the uniform spanning tree has many nice connections to different types of questions, such as electricity and random walks and so forth. So it is very natural, in fact, to take the uniform measure. So our white is the uniform spanning tree. The pink is also a uniform spanning tree, of the dual of the original graph; the pink is just essentially the complement of the white. And the black curve between them is what's called the Peano path associated with the uniform spanning tree, and it's also known as the Hamiltonian path on the Manhattan lattice. So here we have a two-dimensional system; we have one curve associated with it, and in fact there's another nice curve associated to that system. If you fix two vertices, say white vertices, and you fix them before you see the tree, then you sample the tree, and there's just one path in the maze connecting the two vertices, because it's a tree. That path is what's known as the loop-erased random walk. The loop-erased random walk is a model invented by Greg Lawler, and it is very intimately connected to the uniform spanning tree. Well, so there was a belief that the uniform spanning tree and the loop-erased random walk ought to be conformally invariant, and there has been some partial progress in that direction. Rick Kenyon proved that some properties of the uniform spanning tree and the loop-erased random walk are conformally invariant in the scaling limit. Here's an example of a property which he proved. You take a domain and you fix three points on the boundary, and you take the uniform spanning tree of the grid restricted to this domain; then near these three points you can take vertices in your grid and join them up by the paths that join them in the tree, and you will have a kind of Y-shaped topological tree spanning these three points, and there will be a special meeting point, this red meeting point. And you can do the same thing in another domain, and, assuming the domains are simply connected, you can map one domain conformally to the other. What Rick Kenyon has proved is that the distribution of this red meeting point is conformally invariant, namely the image of the law of the red point here is going to be the law of the red point there, provided the three base points are mapped to the three base points. That is a good bit of progress, but it doesn't quite prove conformal invariance of the whole system; for example, you don't yet know that these paths actually have the same distribution in both pictures. Okay, so to study percolation it turns out that we want to look at an interface. I have a demo here showing this interface. You start out by declaring that you want the boundary hexagons to change color across your origin here; these are what are called the Dobrushin boundary conditions. And then you can start an interface between the blue and the yellow. You want to know how this interface continues, so you look at what this hexagon is going to do. And this hexagon, you flip the coin, you see it's yellow, so you turn right; and then you look at the next hexagon, it turned out blue, you turn left; and the next hexagon, and the next hexagon.
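The turning rule just described (yellow ahead: turn right; blue ahead: turn left; colors remembered once sampled) fits in a few lines of code; the demo itself continues below. This is a minimal whole-plane sketch, not the actual demo from the talk: in particular it omits the Dobrushin boundary conditions, which in the talk force the interface to run from 0 to infinity in the half plane. Positions are kept exact by writing points as integer combinations a + b*omega with omega = exp(i*pi/3), and all function names are illustrative.

```python
import random

random.seed(0)

# A point a + b*omega in the plane is stored as the integer pair (a, b).
def rot_left(d):   # multiply the direction by omega (turn by +60 degrees)
    a, b = d
    return (-b, a + b)

def rot_right(d):  # multiply the direction by omega**(-1) (turn by -60 degrees)
    a, b = d
    return (a + b, -a)

def add(p, q):
    return (p[0] + q[0], p[1] + q[1])

def explore(n_steps):
    """Percolation exploration path on the honeycomb lattice (whole-plane variant).

    The walker sits at a vertex `pos`, having just traversed an edge in
    direction `d`.  The hexagon straight ahead has centre pos + d; its colour
    is sampled by a fair coin on first visit and remembered afterwards.
    Yellow ahead: turn right; blue ahead: turn left.
    """
    colour = {}                       # hexagon centre -> True (yellow) / False (blue)
    pos, d = (0, 0), (1, 0)
    path = [pos]
    for _ in range(n_steps):
        centre = add(pos, d)
        if centre not in colour:
            colour[centre] = random.random() < 0.5
        d = rot_right(d) if colour[centre] else rot_left(d)
        pos = add(pos, d)
        path.append(pos)
    return path

print(explore(20))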
Now sometimes you flip one hexagon and, at this point, we flipped one hexagon and that made three steps of the interface, because when the interface hits hexagons that have been colored before, it doesn't need to sample a new one. And you continue, and so on and so on. You continue like that and you build this interface, which has no choice but to go out to infinity. You should think of it as an interface connecting the two boundary switching points, which are zero and infinity. So now, well, so far I haven't done really any math, I just told you about math. Now, for four slides roughly, we're going to actually do some math here. I'm going to try to walk you through the intuition for why you want to consider Loewner's equation with Brownian motion as the driving term, though you don't need to know any of these words. This is roughly the way the original motivation went, though the original motivation was in the context of loop-erased random walks; the picture is similar. So we take our curve, and suppose that we only look at part of it, and we stop when it reaches size epsilon. I'm not going to be very clear at this point about what size means; we'll see later what size means. And then, we believe in conformal invariance for now; the task of justifying conformal invariance we'll discuss later. We believe conformal invariance, we have sampled part of this interface, and we'd like to know what the rest looks like. Well, the trouble is that this piece here is very complicated, it's kind of fractal shaped, and we want to simplify it. But we believe in conformal invariance, and see, the boundary of this domain, after you remove the path, is colored by two colors, the white and the black here, and they change in exactly one point, which is the tip of the curve. Therefore we can simplify the domain: we can map over by a conformal map. We use Riemann's mapping theorem, which tells us that the complement of the region we've examined already is simply connected; therefore we can map it over to the upper half plane again. When we map it over to the upper half plane, there will be a special point where the image of the black curve meets the image of the white, and the red here is indicated to show what part of the boundary corresponds to things that are adjacent to the curve. This map that we're going to use has some power series expansion near infinity, and we normalize it so it fixes infinity, and the power series expansion looks like that. This is all a very easy consequence of the Riemann mapping theorem. And if we think about the continuation of the path here, that's going to look in distribution about the same as what the original path would have looked like if we started it at the tip, at the boundary point right here, because we believe conformal invariance. And now we can continue the procedure, right? We can start here, we can build the interface until it has size epsilon, and then map over by another map, F2. And then F2 has essentially the same distribution as F1, except that it's translated: the point here where black changes to white is not necessarily the same point as where black changed to white here. So, given that we know F1, F2 has the distribution of F1 except translated, therefore conjugated by this translation. And this here denotes equivalence in law. And we can continue inductively.
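The expansion and the conjugation referred to just now are on the slides rather than in the transcript; written out in the standard way (a reconstruction, with tau_w denoting translation by w), the normalized map F1 from the complement of the examined piece back to the upper half plane, fixing infinity, satisfies

```latex
F_1(z) \;=\; z \;+\; \frac{a_1}{z} \;+\; \frac{a_2}{z^{2}} \;+\;\cdots \qquad (z\to\infty),
\qquad\qquad
F_2 \;\overset{d}{=}\; \tau_{W_1}\circ F_1\circ \tau_{W_1}^{-1},
\quad \tau_w(z)=z+w,
```

where W1 is the point on the real line at which black changes to white after the first step, and the distributional identity is understood given that first step.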
If Wj denotes the place where the j-th composition takes the tip, then we have this composition GN, which is the composition of all these F's, and each Fj has essentially the same law as F1, except conjugated by a translation depending on this Wj. And this GN is just the map from the complement of the curve we've examined so far to the upper half plane, because each time we've added a piece and we've taken the map that makes it disappear. Now, each Fj is going to be very close to the identity, because our epsilon is going to be very small, and therefore we can attempt to think of this composition as a flow rather than a composition of discrete steps, because if you have a map that's close to the identity you can think of it as approximately a vector field. And to understand this vector field, we're going to essentially find out what it is. We can look again at this map F1, which has this power series representation near infinity. Now, to simplify things, we want to choose cleverly how we measure the size of the initial segment. It turns out that a1 is monotone in the path, as you extend the path a1 increases, and therefore, to make life easier, we choose a1 to be essentially 2 epsilon; that is, we stop the curve once a1 reaches size 2 epsilon. Then an easy calculation with power series and a scaling argument implies that F1 is z plus 2 epsilon z inverse, plus other terms which are bounded by epsilon to the three halves. We're not going to do this calculation now. The consequence is that Fj plus 1 is going to be the same thing except conjugated by the translation to Wj, and therefore has this expression: Fj plus 1 is the same thing except you have minus Wj. So this is essentially a vector field, where this is the vector at the point gt of z, and therefore, instead of thinking of the composition of these near-identity maps, we can rather flow according to this vector field, where this Wt is some continuous version of the Wj's. And well, what is this continuous version of the Wj's? Well, the Wj's have stationary independent increments: as you go from Wj to Wj plus 1, the distribution of that translation is going to be the same as from W1 to W2, from W2 to W3, all the same, because we've remarked that the laws of these maps are all the same except for the translation. Once you take away the translation, the law is the same, and therefore the Wj's are like a simple random walk, except the steps do not have a fixed length but a random length, the same random length each time. And so this Wt, which is a limit of these Wj's, is going to be a process that has stationary independent increments. It's symmetric, because our problem is symmetric, and it's continuous, which you have to argue by some other means, but that's not too hard. And it follows easily, once you know a little bit about Brownian motion, that it is a multiple of Brownian motion. This leads us to the definition of what SLE is. SLE is stochastic Loewner evolution. We fix the parameter kappa and we define Wt as Brownian motion at time kappa t, and this Brownian motion is one dimensional. Then you solve this differential equation, starting at g0 of z equals z. This is essentially what is supposed to be the conformal map from the complement of the interface to the upper half plane at time t. This equation is known as Loewner's equation; he used it to study the Bieberbach conjecture.
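The differential equation on the slide is the chordal Loewner equation; in the usual notation it reads

```latex
\partial_t\, g_t(z) \;=\; \frac{2}{g_t(z)-W_t},\qquad g_0(z)=z,\qquad W_t=\sqrt{\kappa}\,B_t ,
```

with B a standard one-dimensional Brownian motion. The following Python sketch is not from the talk (sle_trace is an illustrative name); it approximates the trace by the standard discretization in which the driving function is frozen on each small time step, so the flow on that step is an explicit square-root map whose inverses are composed to recover the tip g_t^{-1}(W_t).

```python
import numpy as np

rng = np.random.default_rng(2)

def sle_trace(kappa, n_steps=2000, dt=1e-3):
    """Approximate chordal SLE(kappa) trace via the discretized Loewner flow.

    W_t = sqrt(kappa) * B_t is approximated by a Gaussian random walk; on each
    interval of length dt the flow with constant driving term W is solved
    exactly by z -> W + sqrt((z - W)**2 + 4*dt), and the inverse maps are
    composed to obtain an approximation of gamma(t_n) = g_{t_n}^{-1}(W_{t_n}).
    """
    dW = rng.normal(scale=np.sqrt(kappa * dt), size=n_steps)
    W = np.concatenate([[0.0], np.cumsum(dW)])
    points = []
    for n in range(1, n_steps + 1):
        z = complex(W[n], 0.0)                  # image of the tip at time t_n
        for k in range(n, 0, -1):               # apply the inverse slit maps
            w = np.sqrt((z - W[k - 1]) ** 2 - 4 * dt)
            if w.imag < 0:                      # branch mapping into the upper half plane
                w = -w
            z = W[k - 1] + w
        points.append(z)
    return np.array(points)

trace = sle_trace(kappa=2.0, n_steps=300)
print(trace[:5])
```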
Well, Loewner mostly used a slightly different form of the equation, which is why this one is called the chordal version. And once you solve this equation, you can recover the path itself by taking gt inverse of Wt. So what is the basic idea here? The idea is that we want to invoke differential equations to study our percolation. Now, the percolation interface is a fractal object, and it's hard to study fractals with differential equations; differential equations are better suited to smooth objects. Well, we have a smooth object at our disposal: the complement of the interface. That's a domain, and therefore a smooth object. So instead of studying the evolution of the interface, we study the evolution of the complement, and that's amenable to differential equations, and therefore we can not only find out what we guess it to be, but we can then do calculations to compute properties of the complement, and therefore of the interface itself. Well, this whole discussion was conditional on having conformal invariance, but we didn't have conformal invariance, so what do we do? It seems to be just a nice physics-like prediction, not so satisfying for a mathematician. At this point Wendelin Werner came up with a brilliant idea: well, we have a fractal at our disposal which is known to be conformally invariant but still has some nice questions that haven't been answered yet, so why don't we apply this method of understanding conformally invariant fractals to Brownian motion. If you look at Brownian motion itself, in some ways it's very simple; however, if you look at the outer boundary of Brownian motion, that's not very amenable to analysis directly from the definition of Brownian motion. What you see in this picture is Brownian motion started at the origin and stopped when it hits the boundary of the unit disc; the red is all the places visited by the motion, and the black is the outer boundary of the Brownian motion, the set of points that you can get to from the exterior without hitting any other points before that. There was a conjecture of Benoit Mandelbrot that the outer boundary of Brownian motion has dimension four thirds, and well, Wendelin was right: SLE was the right tool to attack this problem and several related problems about Brownian motion. I think Wendelin will tell you more about this later this afternoon, but let me just give one of the punch lines. Greg Lawler, Wendelin and I proved that indeed the Mandelbrot conjecture is correct, the outer boundary has dimension four thirds, and you can extract other information about the curve. For example, the set of cut points has Hausdorff dimension three fourths. The cut points are those points where, if you remove them, you disconnect the Brownian motion. And this is based on, well, this is not how it was written, but a posteriori we now know that in fact the outer boundary of this Brownian motion is essentially the same as the outer boundary of SLE six, and that is what permits the connection, why studying SLE can tell us something about Brownian motion. At first we didn't know that they're actually the same, but we knew that the dimensions should be the same. Well, okay, so that was one use, but we didn't have to wait very long, and Stas Smirnov actually proved Cardy's formula in Carleson's form, and in fact the simple form of thinking about triangles and the hexagonal lattice turned out to be an important observation leading to the proof.
The proof is quite brilliant, and amazingly simple and short. So Stas proved that in fact Cardy's formula is correct, and from this it is not hard to conclude that the percolation interface scaling limit, the same curve that I showed on the opening slide, is SLE six. So six here is the parameter kappa that you choose; remember, SLE has a free parameter. This theorem allowed the determination of many of the exponents associated with percolation. In particular, we showed that the exponent I discussed in the beginning, the probability to connect to distance r, indeed decays like r to the minus five over 48 with some lower-order corrections, and many other exponents and properties were proved by Smirnov and Werner, based on earlier work by Kesten relating all these exponents to one another. The SLE point of view only helps to determine things at p sub c, at the critical probability; there are some exponents relating to the off-critical regime, but Kesten linked these with the critical exponents. Subsequently, the loop-erased random walk was proved to converge to SLE two, and the uniform spanning tree Peano path scaling limit was proved to be SLE eight. This was with Lawler and Werner, and here the situation is kind of the opposite, in the sense that first convergence to SLE was proved, and a consequence of that was conformal invariance, rather than the other way around. Here you see a picture of what the loop-erased random walk looks like. So you see that different values of the parameter kappa give entirely different processes, and this may be a little bit surprising at first, that you can change things like the Hausdorff dimension and other properties just by changing the speed of the Brownian motion. It is like this: you can think of the Brownian motion as a driver, a taxi driver. If the taxi driver has one beer, maybe he can still drive reasonably well, but after a few more drinks he becomes erratic, and the consequences are easy to see. So there are two phase transitions in the behavior of SLE depending on kappa; this was proved with Steffen Rohde. If kappa is at most four, it is a simple path. If it is between four and eight, then it is not a simple path anymore; it kind of swallows parts of the domain as it goes along. That is roughly what the SLE six pictures look like, if you remember; we will see them shortly again. And when kappa is eight or larger, the curve is actually space filling, it visits every point. Now, the SLE path cannot cross itself, but it can touch itself; therefore, in this case, once it circles some part of the domain it can't go in there and fill that up. That is one difference from Brownian motion: Brownian motion can make a loop and then go into the loop and out again, and that is why Brownian motion is more difficult to study from this point of view than SLE. Now notice that the parameter eight is precisely what the uniform spanning tree Peano path converges to, and it is easy to see from the definition that it has to converge to a space filling curve, and that is exactly where the transition to space filling occurs.
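For reference, the phase picture just described in words (from the paper with Rohde mentioned above) can be summarized as:

```latex
\mathrm{SLE}_\kappa \ \text{trace:}\quad
\begin{cases}
\text{a simple curve,} & 0 < \kappa \le 4,\\
\text{self-touching but not space filling,} & 4 < \kappa < 8,\\
\text{space filling,} & \kappa \ge 8,
\end{cases}
\qquad \text{almost surely.}
```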
There is some information about the dimension of the SLE path. In this paper with Steffen we showed that the expected box count dimension, well, expected box count dimension, I am not going to define this concept because you can probably guess what it means from its name, the expected box count dimension of the path is one plus kappa over eight when kappa is between zero and eight; when kappa is larger than eight it is of course two, because the path is space filling. In the range where kappa is larger than four, the outer boundary of the path is not the same as the path itself, because it is not a simple path, and the expected box count dimension of the outer boundary is given by this formula, which was earlier conjectured by Rick Kenyon. Vincent Beffara, in a quite clever and technical work, managed to show that this expected box count dimension of the path is in fact the actual Hausdorff dimension. The expected box count dimension is much simpler to determine, because you just need to calculate expectations, while for the Hausdorff dimension essentially you need to calculate second moments, and that is more difficult. I think that the corresponding statement for the outer boundary is still open, but probably won't be for very long. I want to tell you about yet another model whose relation to SLE has been determined: this is the Gaussian free field. Let's discuss the discrete Gaussian free field; there is also a continuous version, which is of the same flavor, only harder to define. You should think first of a graph, and then at each vertex you have some value h of v, and you want to penalize discrepancies. You can think of it as a model for a surface which has penalties for discrepancy from smoothness. You penalize by h of v minus h of u squared for neighboring vertices, so the total weight is the exponential of minus the sum over all neighboring pairs of this squared difference, and this is a nice formula because the distribution of h under this probability density is going to be Gaussian. Kenyon showed that this Gaussian free field is the scaling limit of the domino tiling height function, which is another discrete model which I'm not going to describe to you, but this was partly a motivation to study the Gaussian free field. Here's what you do with the Gaussian free field: you again put Dobrushin-type boundary values, plus a here and minus a here, and then you can look at the set of edges. You can color the vertices one color if they're positive and another color if they're negative, and then you can take the interface; this is a graphical depiction of this interface. Scott Sheffield and I proved that this interface of the discrete Gaussian free field converges to SLE four. Now, four is the special value above which the SLE stops being a simple curve, and you can see, this is an actual simulation, that this curve is almost not a simple curve anymore; the shades of blue and red depict the heights of the field. Now, in order for this statement to hold true you need to pick this value of a, a very specific value. If you take a different value, then you get convergence to something very closely related, which we also know how to describe. And there's an analogous statement about the continuum Gaussian free field, but that's harder to formulate. There's a conjecture, and Stas Smirnov reported on much progress on this conjecture, that the critical Ising model converges to SLE three.
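Going back to the discrete Gaussian free field defined a moment ago: since its density is Gaussian, it can be sampled directly from the graph Laplacian. The sketch below is not from the talk and all names in it are illustrative; in particular, the way the plus a and minus a boundary values are laid out here is an arbitrary stand-in for the two boundary arcs in the talk.

```python
import numpy as np

rng = np.random.default_rng(3)

def discrete_gff(n=40, a=1.0):
    """Sample a discrete Gaussian free field on an n x n grid.

    The density is proportional to exp(-(1/2) * sum over edges (h_u - h_v)^2)
    (the constant in the exponent only sets the overall scale of the field), so
    the interior values form a Gaussian vector whose covariance is the inverse
    of the Dirichlet graph Laplacian, centred at the discrete harmonic
    extension of the boundary values.
    """
    interior = [(i, j) for i in range(1, n - 1) for j in range(1, n - 1)]
    idx = {v: k for k, v in enumerate(interior)}
    m = len(interior)
    L = np.zeros((m, m))
    b = np.zeros(m)

    def boundary_value(i, j):
        return a if j < n // 2 else -a          # arbitrary split into two boundary pieces

    for (i, j), k in idx.items():
        L[k, k] = 4.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (i + di, j + dj)
            if nb in idx:
                L[k, idx[nb]] = -1.0
            else:                               # neighbour lies on the boundary
                b[k] += boundary_value(*nb)

    mean = np.linalg.solve(L, b)                # harmonic extension of the boundary data
    C = np.linalg.cholesky(L)                   # L = C C^T
    fluct = np.linalg.solve(C.T, rng.standard_normal(m))   # covariance L^{-1}

    h = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if (i, j) in idx:
                h[i, j] = mean[idx[(i, j)]] + fluct[idx[(i, j)]]
            else:
                h[i, j] = boundary_value(i, j)
    return h

field = discrete_gff()
print(field.shape, float(field[1:-1, 1:-1].std()))
```

Coloring the sites by the sign of the sampled field and tracing the interface between the two boundary arcs then gives the discrete curve whose scaling limit is discussed above.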
This is this picture, and Stas also shows that with other parameters, things associated with the Ising model also converge to SLE, but with a different value of kappa; this figure is drawn by David Wilson. Here the white is plus spins and the dark is minus spins. I assume you are familiar with the Ising model; if not, then think of this as just another pretty picture, hopefully pretty. Yes. And if you take the uniform measure on lattice paths that are self-avoiding and stay in the upper half plane, the belief is that that converges to SLE eight-thirds. I cannot report any progress on this conjecture, but it is supported experimentally by simulations by Tom Kennedy, and this figure is drawn by Tom Kennedy. Contrary to all the other figures in this talk, this figure is one which you really need to work hard to produce; all the others have very simple algorithms. Well, time is running short and I am going to give a quick summary. Here is a list of things we know SLE describes the scaling limit of: critical site percolation on the triangular grid, where the value of kappa is six; the loop-erased random walk, where the value is two; the uniform spanning tree Peano curve, where the value is eight; the Gaussian free field interface, where the value is four; and the harmonic explorer, a simpler model that is kind of built, defined, to be easy for this statement, and which I am not going to discuss. And here are some conjectures regarding other models converging to SLE. There are other models of critical percolation whose interfaces should converge to SLE six, but that has not been proven. The Ising model with different parameters should converge to three and six, and we hope to see a proof by Stas pretty soon. The FK cluster boundaries, well, I have not defined them, but I mention them here because there is another parameter, q, and therefore you have a whole range of kappa between four and eight, and similarly with the O(n) models. The self-avoiding walk is of course very challenging. The double domino paths have relations to the Gaussian free field; we have mentioned this already. And there are some things that SLE does not give. It does not give DLA. There has been a Loewner-type analysis of DLA by Carleson and Makarov and by Hastings and Levitov that actually precedes the SLE definition. It appears that the paths of the minimal spanning tree are not described by SLE and are probably not conformally invariant; this is based on simulations by Wieland and Wilson. And SLE does not apply to many interesting questions in dimension higher than two. Let me mention that percolation in high dimensions is almost completely understood, due to the work of Hara and Slade and others, with a method called the lace expansion, which is very different from what I described today. And the existence of the loop-erased random walk scaling limit in R three has been proven recently by Gady Kozma; Gady has not determined what the limit is, just shown that it exists, but that is quite a remarkable achievement. Well, so we have seen things that you can do with SLE, things that hopefully will be done, and things for which you need to use other methods. Thank you very much for attending my talk.
Many mathematical models of statistical physics in two dimensions are either known or conjectured to exhibit conformal invariance. Over the years, physicists proposed predictions of various exponents describing the behavior of these models. Only recently have some of these predictions become accessible to mathematical proof. One of the new developments is the discovery of a one-parameter family of random curves called stochastic Loewner evolution or SLE. The SLE curves appear as limits of interfaces or paths occurring in a variety of statistical physics models as the mesh of the grid on which the model is defined tends to zero. The main purpose of this article is to list a collection of open problems. Some of the open problems indicate aspects of the physics knowledge that have not yet been understood mathematically. Other problems are questions about the nature of the SLE curves themselves. Before we present the open problems, the definition of SLE will be motivated and explained, and a brief sketch of recent results will be presented.
10.5446/15973 (DOI)
So good evening, I'm Bjorn Engquist and it's my honor and great pleasure to introduce the last plenary speaker, Alfio Quarteroni. He started his career in Pavia, education and early research, and then he moved around. He has been a professor at the University of Minnesota in Minneapolis, and now he holds the chair of modeling and scientific computing at EPFL, the Ecole Polytechnique Federale de Lausanne. He is also a professor of numerical analysis at the Politecnico di Milano. The research field of Alfio is the numerical analysis of partial differential equations. He has a rich body of work stretching from fundamental results in finite element and spectral methods to domain decomposition, and he has applied that to a variety of different fields. One field is what we will hear about, physiology, and another example is the optimal design of sailboats; I assume that is why he has an honorary doctorate in naval engineering. Today's talk will be on cardiovascular mathematics. Alfio, please. So thank you very much, Professor Engquist. I would like to thank the scientific committee for inviting me here, it is a great honor. So I will talk about cardiovascular mathematics, and the problem that we want to face is how to set up a mathematical model to carry out the analysis of the cardiovascular system, the circulatory system, which is made of arteries and veins and of course the heart. Now when you face a problem from applications, you basically want to set up a mathematical model on one side, but then you also need to compare the results of your mathematical model with actual experimental data, to make sure that what you are computing is meaningful. So for the mathematical model, I have to start from a geometry, the geometry of, for instance, an artery that you want to analyze. Then you need equations that describe the physics that you are considering, and these equations very often are based on partial differential equations. Then you have to carry out the analysis of these equations, because you want to make sure, for instance, that the system is well posed. And quite obviously you have to use numerical methods, because the partial differential equations do not have a closed-form solution, so you need to set up a method that allows you to go to a computer, to solve the problem and to see results. And once you have these results, which may have a very complex form, you need to post-process them; you need to visualize and represent them so that they can be, on one side, comparable with experimental results, and on the other side shown to your counterpart, say the doctor or the clinician. So a very important part is the comparison, or the assessment. And when you compare results in this business, you have different kinds of data. You have in vivo data, which are obtained from investigating a real patient, hopefully by non-invasive techniques, or in vitro data, which you can get by constructing physical models that represent arteries, for instance; then you can run fluid dynamics in such models, trying to end up with results which are similar to those that you would hopefully have in a real patient. Or otherwise you have to resort to results from the literature. So if the results of your model are satisfactory, if they compare successfully on a variety of cases, then you are happy and you think that your model is correct. Otherwise you have to iterate, and you iterate again and again, and very often, before having a reliable model, you have to iterate very many times.
Now this picture can provide a paradigm for many other applications, not only the one from medicine that I am going to describe, but many other applications in classical fields of engineering or physics, say, or chemistry. But the extra difficulty here is that the data that you want to use are data provided by real patients. So you have data that you get from medical investigation, and sometimes, or often, or almost always, these data are uncertain. So you have to filter them, you have to regularize them, you have to carry out a statistical analysis beforehand, in order to make sure that you are working with data which are correct enough to feed your models. So the first step is the geometric preprocessing. Geometric preprocessing consists of extracting, from medical images, 3D images and then 3D geometrical models, where you are going to solve the partial differential equations. So you need a computational domain where you are going to solve the equations. You need statistical analysis to classify these data according to clinical protocols; you need to understand which kind of patient you are considering, you are treating. And then you need boundary and initial conditions for the mathematical model. Boundary conditions are needed because, although the system is virtually infinite, it is not infinite, but it is basically infinite because it is made of so many pieces, you are actually considering a single part of it, and you need inflow and outflow conditions. So you have to generate these inflow and outflow conditions from the clinical data set. You also need initial conditions: you start evaluating the physiology of a patient from a specific time, and then you need to set up your initial solution. And then, before going to the computational solution, you need to generate a computational mesh, for surfaces and for volumes, for the 2D and 3D cases, say. So I go through these different pieces before showing the equations, say. You start from a patient. You can perform magnetic resonance, or CT scan, or digital angiography. In the case of magnetic resonance, you have a stack of images, at a distance of one millimeter one from the other, say, and these data here represent a carotid artery. The carotid artery is the main artery that we have in our neck, and it is responsible for bringing blood to our brain. Then from this stack of data, you have to construct contour curves over every section. You can do it in many different ways: you have to extract contours by segmentation, and in our case you use these points to generate this contour curve, say. Then you have to sample these curves by points, and these are very critical. So you have to end up with a sequence of points sitting on two-dimensional curves before generating the grid. Now, the surface, which is the external surface of an artery, for instance, is defined, for instance, by an implicit representation, as the level set of a function phi, say. Phi typically is represented by a radial basis expansion: phi of x is the sum of certain weights times radial functions, which depend on the distance from the sample points. The radial functions can have different forms: they can be phi of r equal to r, this is the linear case, or, most often, phi of r equal to r to the power three. Now, we need to extract information from this surface. In particular, what we need is this normalized Hessian.
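To make the radial basis representation concrete, here is a minimal sketch of my own, not the actual pipeline: it fits the implicit function phi(x) as a sum of weights times the cubic radial function of the distance from the sample points, as described above. The off-surface constraint points placed along the normals, the least-squares solve, and the omission of the usual low-degree polynomial term are all simplifying assumptions.

```python
import numpy as np

def fit_rbf_implicit(points, normals, offset=0.1):
    """Sketch: fit phi(x) = sum_i w_i * |x - x_i|^3 from sample points on the contours.
    To avoid the trivial solution phi = 0, auxiliary off-surface points are added along
    the (unit) normals with prescribed values +offset and -offset."""
    points = np.asarray(points, dtype=float)
    normals = np.asarray(normals, dtype=float)
    centers = np.vstack([points,
                         points + offset * normals,
                         points - offset * normals])
    values = np.concatenate([np.zeros(len(points)),
                             +offset * np.ones(len(points)),
                             -offset * np.ones(len(points))])
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    # Cubic kernel matrix; least squares is used instead of a direct solve for robustness.
    w = np.linalg.lstsq(r ** 3, values, rcond=None)[0]
    return centers, w

def phi(x, centers, w):
    """Evaluate the implicit (level-set) function at query points x, shape (m, 3)."""
    r = np.linalg.norm(np.asarray(x, float)[:, None, :] - centers[None, :, :], axis=-1)
    return (r ** 3) @ w
```

The zero level set of phi is then the reconstructed arterial surface.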
We normalize it with the modulus of the gradient, because when you carry out the spectral analysis of this normalized Hessian, you get the minimum curvature and the maximum curvature of the surface, and this is what you actually use when you have to generate the grid. So we have these curves, and we have to generate the grid. This is a preliminary grid made of little triangles that you put on the surface, and here we use the marching cubes algorithm, proposed by Ruben Tully in 1994. It is a very fast and efficient algorithm, but unfortunately you cannot control the quality of the grid, you cannot control the acuteness of the triangles. So you need some kind of optimization procedure in order to optimize the grid, and here is the actual grid that we are going to use. Once you have this grid, this is a superficial grid; you actually want to solve the flow equations inside the domain, inside the artery, so we need to construct a three-dimensional grid. So this is our carotid artery, this is a piece of the three-dimensional grid that you have inside, and this is already a numerical simulation. We have this three-dimensional domain, you can solve the equations on this grid, and you can see, for instance, these pressure pulses that are generated by the systolic activity of the heart. So this is the preprocessing: you have to end up with a domain that is suitable to be treated, and in that domain you want to solve the partial differential equations, and you need a grid which is representative of the actual complex geometry of the domain, say. Now we come to the mathematical model, which is the most essential part of my talk. For the mathematical model, you need, first of all, to identify the relevant parameters. You have to study a problem on a specific patient, and you want to see which are the parameters that most importantly affect the type of solution that you are going to have: blood viscosity, blood density, and more generally the rheological properties of the vessel wall, of the arterial wall, say. Then you need to set up the partial differential equations, and these will be based on conservation laws, say. And then you have to carry out the analysis and set up the numerical methods, and you care about the stability, the efficiency and the accuracy of the methods. You want a method which is stable, which means that it depends smoothly, regularly, on the data, in particular on the numerical data, the grid spacing and the time step, say; which is efficient, so that it allows you to obtain the solution in a reasonable time, not waiting for a month, not waiting for days, but hopefully waiting for hours, and I am not saying minutes because these are quite complex problems; and accurate enough to make sure that eventually you have a solution that resembles the real solution, the physical solution, say. Then you have to go to a computer, choose a computer to carry out the simulation. And for many problems you have a control to activate, because you want to control the actual form, for instance, of a surgery, of a bypass, or a shunt, or a stent, as you shall see, in order to make sure that the final results are optimal in a suitable metric, say, clinically optimal, of course. So I am going to talk about the local flow analysis, what happens in a single artery; about what happens concerning the interaction between the flow field and the deformation of the artery, and this is the fluid-structure interaction.
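Here is a small illustration, again my own sketch and not the actual code, of the normalized-Hessian step described above: the minimum and maximum curvatures of a level set of phi are the eigenvalues of the Hessian divided by the gradient modulus and projected onto the tangent plane. The finite-difference derivatives and the eigenvalue selection are simplifying assumptions.

```python
import numpy as np

def principal_curvatures(phi, x, h=1e-3):
    """Sketch: estimate the two principal curvatures of the level set of phi at x,
    via the normalized Hessian H/|grad phi| projected onto the tangent plane.
    phi is any callable R^3 -> R; derivatives use central finite differences."""
    x = np.asarray(x, dtype=float)
    e = np.eye(3)
    grad = np.array([(phi(x + h * e[i]) - phi(x - h * e[i])) / (2 * h) for i in range(3)])
    H = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            H[i, j] = (phi(x + h * e[i] + h * e[j]) - phi(x + h * e[i] - h * e[j])
                       - phi(x - h * e[i] + h * e[j]) + phi(x - h * e[i] - h * e[j])) / (4 * h * h)
    n = grad / np.linalg.norm(grad)
    P = np.eye(3) - np.outer(n, n)                 # projector onto the tangent plane
    W = P @ (H / np.linalg.norm(grad)) @ P         # normalized, projected Hessian
    vals, vecs = np.linalg.eigh(W)
    order = np.argsort(np.abs(vecs.T @ n))         # discard the eigenvector along the normal
    return vals[order[:2]]                         # the two principal curvatures

# Example: for the unit sphere phi(x) = |x| - 1 both curvatures should be about 1.
k1, k2 = principal_curvatures(lambda x: np.linalg.norm(x) - 1.0, [1.0, 0.0, 0.0])
```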
What should we do when we want to look at the problem in its entirety, going from a single artery to the whole set of arteries in the cardiovascular system? This is the geometric multiscale approach. And then we will try to see where we stand; in fact, we are still far from a complete understanding of what is going on in our bodies. So, local flow analysis. This is the global system that we would like to work on. But blood is not, strictly speaking, a fluid; it is rather a suspension of red cells, or erythrocytes, white cells, or leukocytes, and platelets, in a liquid which is called plasma. So this is the entire system, a simplified picture of the entire system. That is too complicated, so we want to go to the local level: go, for instance, to the local circulation between the heart and the lungs for the oxygenation of the blood, and then stick with a specific part, a single artery, say. So we want to concentrate on a single, truncated artery. The first thing to decide is which kind of representation we would like to use. As you know, in partial differential equations, in continuum mechanics, you have at least two different kinds of representation, the Lagrangian and the Eulerian. In the Lagrangian one, you follow the fluid particles. But this is unsuitable for this problem, because if you take a particle which is sitting somewhere in the ascending aorta and you follow its displacement, then you are forced to follow it all along the complex pattern of the cardiovascular system. So this is not the right approach for this problem. You could rather use the Eulerian representation: you fix a specific domain, and you just look through this window and see what happens in that specific domain. The problem is that in this case you miss the deformation of the arteries. So that is not suitable either. And indeed what you do is use the so-called arbitrary Lagrangian Eulerian, or ALE, approach. With the ALE, you target a specific domain, an artery with an inflow and an outflow, but you allow this artery to deform, because you actually want to compute the exchange of energy between the fluid and the artery, the vessel deformation. And it is called arbitrary Lagrangian Eulerian because it actually combines the Eulerian approach for the fluid with the Lagrangian approach for the vessel deformation. And not only that: it is called arbitrary because you have to retrieve at each time the actual form of the computational domain, which is a 3D volume, and for that you only need the shape of the surface. So you have some arbitrariness in reconstructing the shape of the volume, moving from the shape of the surface. This is why we call it ALE. In a very synthetic, abstract form, you may think of having a global domain omega hat at the reference time. This is the whole cardiovascular system, say, and we target a specific domain, omega hat F, where there is the fluid, for instance a single artery. Then you have a Lagrangian transformation that gives you the domain omega of t, which is the complete cardiovascular system at time t, but you have the arbitrary Lagrangian Eulerian map, the ALE map, that retrieves the actual form of the computational domain at the given time t. And what we need is the time derivative of this map A_t, the rate of the deformation, and we call it w hat. So w hat is a crucial velocity that will enter the Navier-Stokes equations that model our system, say. So these are the Navier-Stokes equations for our problem. In a very simplified manner, this is our omega F of t.
This is a branching, bifurcating artery. We have plenty of bifurcations in our body: we start from a single artery, which is the aorta, and then we have at least fourteen levels of bifurcation in the whole body. We truncate this domain artificially, so we have an inflow boundary, gamma D, where we put a Dirichlet datum, and an outflow boundary, gamma N, where we put a Neumann datum. And then we have gamma of t, which is the critical part, because gamma of t is actually the surface of the domain, which is deforming. If it were fixed, you would have a homogeneous Dirichlet boundary condition there, but this is not the case, so we will see that we have to couple this with an actual deformation law for the wall. We make the assumption, which is true in the heart and in large arteries, that the flow is homogeneous and Newtonian, so the viscosity is constant; this is not true in the capillaries, as we shall see later on. We use the Cauchy stress for the fluid, sigma F, which has the classical form of the Newtonian Navier-Stokes equations, and then we have the strain rate tensor. The incompressible Navier-Stokes equations in ALE form are here. What you see is, on one side, the momentum equation, where you have some extra terms which are due to the ALE map; in particular, you have this ALE velocity field that I was talking about before. You have the continuity equation, and then you have Dirichlet data and Neumann data on the inflow and outflow boundaries, which are artificial. Finally, you have that u F, the velocity field, is equal to u gamma on gamma, where gamma is the deformable boundary, so we need an extra equation for gamma. This is an example: a solution in a rigid domain, so with a rigid boundary, and here you simply see the velocity profiles at different sections. You see that, since the inflow is time dependent because of the pulsatility of the heart, you have flow reversal, which poses some difficulties in the analysis of the Navier-Stokes equations, because the outflow is actually not an outflow: you have a section where the integral of the normal component of the velocity field is, of course, positive, but you have flow reversal, which is critical. And this is physiological. Why is mathematics so important in this business? Well, fifteen years ago some medical doctors in the U.S. discovered the correlation between a quantity called wall shear stress and the onset of atherosclerotic plaques in arteries, so occlusions of arteries. The wall shear stress is something that you can easily get from the Navier-Stokes equations as a post-processing step, but it is something which is very difficult to measure in clinical practice. And here you see two examples. This is a congenital heart disease in neonates, and here, on the coronary arteries, this is the distribution of the wall shear stress. In particular, what is considered to be dangerous is a wall shear stress which is low with respect to the physiological values and which is highly oscillating in time. So low, small, but highly oscillating in time: this is the dangerous situation. So you are interested in looking at this kind of dynamics in time of the wall shear stress, and this is something that you cannot easily measure, but you can compute it if you have a good solver for the Navier-Stokes equations in this domain. As I was saying before, blood is not really a Newtonian fluid, because of different kinds of phenomena that occur in the arterial system, say.
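As a concrete sketch of this post-processing step, here is a minimal version of my own (not the actual solver) that computes, at one wall point, the time-averaged wall shear stress and the oscillatory shear index, the usual indicator of a "low but highly oscillating" shear stress. The inputs are assumed to come from a Navier-Stokes solution over a cardiac cycle, and the viscosity value is only indicative.

```python
import numpy as np

def wall_quantities(grad_u_series, p_series, normal, mu=0.0035):
    """Sketch: wall shear stress (WSS) and oscillatory shear index (OSI) at a single
    wall point, from a time series of velocity gradients (3x3 arrays) and pressures.
    mu ~ 3.5 mPa*s is a typical blood viscosity; units must be consistent."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    tau = []
    for G, p in zip(grad_u_series, p_series):
        sigma = -p * np.eye(3) + mu * (G + G.T)   # Cauchy stress of the Newtonian fluid
        t = sigma @ n                             # traction exerted on the wall
        tau.append(t - (t @ n) * n)               # tangential part = WSS vector
    tau = np.array(tau)
    mean_mag = np.mean(np.linalg.norm(tau, axis=1))
    # OSI = 0.5 * (1 - |time average of tau| / time average of |tau|), in [0, 0.5]
    osi = 0.5 * (1.0 - np.linalg.norm(tau.mean(axis=0)) / (mean_mag + 1e-30))
    return mean_mag, osi
```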
One of those is the so-called rouleaux aggregation: the red blood cells aggregate like stacks of coins, as you see on the left-hand part of the picture. Besides, you have the so-called Fahraeus-Lindqvist effect, which occurs in small vessels, where the red blood cells move toward the central part of the vessel and the blood viscosity shifts toward the plasma viscosity, which is much lower. So the viscosity is not constant anymore, and you have to account for this variation of the viscosity; and of course, in general it depends nonlinearly on the flow variables. So this is the standard Newtonian Navier-Stokes Cauchy stress, say, and there are several models that try to account for this dependence of the viscosity on the velocity field, and more specifically on the rate of deformation, on the shear rate, which is called gamma dot; this is a classical, conventional notation, say. And gamma dot is, up to a constant multiple, the square root of the trace of the square of the rate-of-deformation tensor, say. The most classical model is the so-called power-law model: mu of gamma dot is simply proportional to gamma dot to the power n minus one, where n is a critical exponent. If n is less than one, which is somehow accepted by everybody, then you have a decreasing function of gamma dot, as you see here; these are the different behaviors of the curves for different values of n and k, and you get a shear-thinning fluid. And it is somehow understood that blood in small vessels should behave as a shear-thinning fluid. But there are more sophisticated models; here is just a list of them: the Powell-Eyring, the Cross, the modified Cross, the Carreau, the Carreau-Yasuda. In all of them, mu of gamma dot minus mu infinity, divided by mu zero minus mu infinity, where these are constants that you can get from the blood analysis, say, is a nonlinear function depending on critical values of lambda and n, which are material constants for blood. So this is far from being the final word on these models; people are still working to provide more convincing or more effective models to describe blood flow in small arteries, in small capillaries, as non-Newtonian fluids, say. The vessel wall deforms. It deforms under the action of the heart, and it deforms because it has to store energy during the systolic phase and return energy to the blood during the diastolic phase; systole is the contraction of the ventricle, diastole is the relaxation of the ventricle, say. So first of all you have to develop a model for the vessel wall, and then consider the coupled system between the fluid and the vessel deformation, say. This is the mechanical interaction between the fluid and the vessel. That is not the only important interaction between fluid and vessel: we should also account for biochemical interactions, which regulate the different kinds of mass transfer processes that you may have. You have macromolecules, like cholesterol, for instance, passing from the lumen to the vessel wall; you have drug delivery when you take drugs; and you have the dynamics of oxygen. So what you see here is just an example of the oxygen concentration in the lumen, and then you have to set up suitable interface problems to allow this oxygen or these macromolecules to go from the lumen to the different layers of the arterial wall; you have nonlinear Kedem-Katchalsky-like equations to describe the flux across the different layers.
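For concreteness, here are the two generalized-Newtonian laws just mentioned written out as code. The parameter values are ones commonly quoted in the literature for blood and are used here purely as an illustration, not as the values used in the talk.

```python
import numpy as np

def carreau_yasuda(gamma_dot, mu0=0.056, mu_inf=0.00345, lam=3.313, a=2.0, n=0.3568):
    """Carreau-Yasuda law: (mu - mu_inf)/(mu0 - mu_inf) = [1 + (lam*gamma_dot)**a]**((n-1)/a).
    mu0, mu_inf in Pa*s, lam in s; values are commonly quoted fits for blood (illustrative)."""
    gd = np.asarray(gamma_dot, dtype=float)
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * gd) ** a) ** ((n - 1.0) / a)

def power_law(gamma_dot, k=0.017, n=0.708):
    """Power-law model: mu = k * gamma_dot**(n-1); shear-thinning for n < 1."""
    return k * np.asarray(gamma_dot, dtype=float) ** (n - 1.0)

# Viscosity decreases with the shear rate, i.e. the fluid is shear thinning.
shear_rates = np.array([1.0, 10.0, 100.0, 1000.0])
mu_cy = carreau_yasuda(shear_rates)
```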
I will not talk about that in this talk, say; I will stick to the mechanical interaction. First of all, what you need is a model for the vessel deformation, and here, again, there is not a general consensus: several models have been proposed, and some of them are very simple, even oversimplified, like one-dimensional models; very often bioengineers use zero-dimensional models, just algebraic laws. Here what we use is a three-dimensional model. Of course, we use a Lagrangian approach, which is more suitable, so you describe the deformation from an initial state. You need the deformation gradient, and you have this rho hat, which is the density of the arterial wall in the reference configuration; the deformation gradient F hat; the Jacobian; the second Piola-Kirchhoff tensor; and this sigma hat, which is obtained from a density of elastic energy: you take the derivative of W hat with respect to the Green-Lagrange strain tensor. I think this is quite classical. Of course, what is difficult is to characterize the different rheological behaviors that you want to consider in these cases. And this is the global problem that we actually have to solve if we want to retrieve the solution of the coupled fluid-structure problem. We have three components. Two of them are obvious: you see the Navier-Stokes equations that we have already seen, the blue equations, and the equations for the structure. And then you have some red terms, which are responsible for the coupling. First of all, there is an extra unknown here, which is the geometry. The geometry is an unknown of the problem: you have to deform your domain, and for that you need, first of all, to compute the deformation of the surface, of the interface between the fluid and the vessel, and then you have to extend it, for instance by a harmonic operator, or by an elastic operator, or by a Stokes operator, to get a deformation inside the domain, in order to compute this famous ALE velocity field. This is the arbitrariness: you can use any kind of extension operator here, and the exact solution at the end will not depend on this choice; the numerical solution, of course, will depend on it. Then you have the actual form of the domain omega F of t. And then you have two red equations. You have u equal to w, which is a kinematic condition: you are just stating that the blood particles that hit the wall surface have to move according to the deformation law of the wall, so there is no cavitation, there is no empty space, fortunately. And you have an extra equation, a dynamic condition, which states that the forces should be in equilibrium. This is a kind of action-reaction principle: the energy which is transferred from the blood to the artery is returned from the artery, from the vessel wall, to the blood. So this is the global system that you have to solve. And here are some references; there are very many people who have been working on this kind of problem, not necessarily on the application to blood flow, but whose results are somehow relevant to this type of application.
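Here is a toy version, my own sketch under strong simplifying assumptions (square reference domain, structured grid, plain Jacobi iteration), of the extension step just described: the interface displacement is extended harmonically into the fluid domain, and differentiating the extended displacement in time would give the ALE velocity field.

```python
import numpy as np

def harmonic_extension(boundary_disp, n=30, iters=2000):
    """Sketch: extend a given displacement of the fluid-structure interface (here the
    top edge of a unit-square reference fluid domain) into the interior by solving
    Laplace's equation with Jacobi sweeps; the other edges are kept fixed (zero)."""
    d = np.zeros((n, n))
    d[-1, :] = boundary_disp                    # interface (top edge) displacement
    for _ in range(iters):
        d[1:-1, 1:-1] = 0.25 * (d[2:, 1:-1] + d[:-2, 1:-1] +
                                d[1:-1, 2:] + d[1:-1, :-2])
    return d

# Example: a bump-shaped interface displacement extended into the domain.
x = np.linspace(0.0, 1.0, 30)
disp = 0.05 * np.exp(-((x - 0.5) / 0.15) ** 2)
field = harmonic_extension(disp)
```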
So let me recall the paper by Le Tallec and Mouro on the ALE formulation; the paper by Beirao da Veiga on existence for a simplified model of fluid-structure interaction; and, along the same lines, papers by Desjardins and Esteban and by Grandmont and Maday, with existence results for fluid-structure problems, even though not necessarily applied to this specific kind of example. Then there are techniques used by Lions and Zuazua on control; other papers, by Murea and Vazquez and by Coutand and Shkoller, on generalized Stokes and fluid-structure problems of this type on moving domains; and papers by Formaggia and Nobile and by Boffi and Gastaldi on the stability of time-dependent approximations of problems in moving domains. So there is a lot of work that has been done, and there is much more work still to be done, to come out with a complete analysis of the previous system. So this is the way we treat it numerically. We want, first of all, to see the whole problem as an interface problem. Roughly speaking, we go from three dimensions to two dimensions: we work on a two-dimensional manifold, which is the endothelial surface. So this is a sketch of the situation, a two-dimensional vertical section. You have the blood flow here, you have the structural equation in the blue part, and then you have a tiny region, the yellow one, which is the interface; this is the endothelium, the fluid-structure interface. So can we reformulate the whole problem as a problem living only on the interface? We do it through the so-called Steklov-Poincare equation, where we have an operator equation which is made of two pieces, the Steklov-Poincare operator of the fluid and the Steklov-Poincare operator of the solid, which is the arterial wall, say. Lambda is the displacement of the arterial wall, which is the unknown. To construct it, in a very formal manner, we proceed like this. Assume that we know the deformation lambda of the arterial wall. You solve the Navier-Stokes equations for u and p, and this is the resolvent of the fluid part. Then you compute the normal Cauchy stresses, and this gives the Steklov-Poincare operator of the fluid part: basically it is a Dirichlet-to-Neumann map, which is not linear in this case. You do the same for the structure: you start from lambda, which is the displacement, you compute the solution of the structural equation, and then you compute the Cauchy stresses for the structural part. So you have these two basic components, the Cauchy stresses of the structural part and of the fluid part; you have to equate them, for the equilibrium condition, and you end up with this Steklov-Poincare equation. Now we have to solve it. So far we have only been reformulating the problem as if it were a problem only on the interface. To solve it, we use an iterative method, let's say a classical preconditioned gradient method: you compute the residual and you have to precondition the residual. And this is the critical part: if you do not precondition, you have a very ill-conditioned problem, the spectrum of this operator is fairly big, numerically speaking, and you will never end up with a convergent procedure. So you have to precondition, and this is the way we precondition: we take the linearizations of the Steklov-Poincare operator for the fluid and of the Steklov-Poincare operator for the structure, we take the inverses, we sum up the inverses, and we weight them by suitable weights. And this gives the preconditioner, which plays the role of an approximate inverse of our operator.
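Schematically, the preconditioned interface iteration just described can be sketched as follows. This is my own illustration: the fluid and structure Steklov-Poincare operators are replaced by small stand-in matrices, whereas in the real problem every application of them means solving a full (nonlinear) fluid or structure subproblem; the weights, the relaxation parameter and the stand-in operators are illustrative assumptions.

```python
import numpy as np

def fsi_interface_iteration(Sf, Ss, chi, alpha=0.5, beta=0.5, omega=0.5,
                            tol=1e-10, max_iter=200):
    """Sketch of the preconditioned iteration for the interface equation
    S_f(lambda) + S_s(lambda) = chi, with preconditioner
    P^{-1} = alpha * Sf^{-1} + beta * Ss^{-1} (linearized operators)."""
    lam = np.zeros(Sf.shape[0])
    Pinv = alpha * np.linalg.inv(Sf) + beta * np.linalg.inv(Ss)
    for k in range(max_iter):
        residual = chi - (Sf @ lam + Ss @ lam)   # one fluid and one structure "solve"
        if np.linalg.norm(residual) < tol:
            break
        lam = lam + omega * (Pinv @ residual)    # preconditioned, relaxed update
    return lam, k

# Tiny synthetic example with diagonal stand-in operators (purely illustrative).
Sf = np.diag([1.0, 2.0, 3.0])
Ss = np.diag([2.0, 2.5, 1.5])
lam, iters = fsi_interface_iteration(Sf, Ss, chi=np.ones(3))
```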
So roughly speaking, at every step we have to solve independently a fluid problem on one side and a structural problem on the other side, recombine the two stresses that we obtain, and proceed according to this preconditioned gradient method. This is the solution that you get after solving this problem: a carotid bifurcation, and here is the deformation of the vessel wall under the action of the systolic phase. And this is the same solution, but now you see not only the deformation of the wall but also the flow field: the colors refer to the intensity of the flow, and the arrows indicate the velocity vectors. So it seems good, but indeed there is a major problem here, and the problem is due to the back reflection of the pulse wave. You see here a section of a cylindrical artery; we just cut it in half for the purpose of representing the results. And what you see is that there is a very strong back reflection, which is due to the fact that this treatment of the outflow is not mathematically correct. First of all, this outflow is not a physical outflow, it is an artificial outflow that we introduced to solve our problem. And then we have treated it by using a free-stress, Neumann condition for the fluid part, and absorbing boundary conditions for the vessel part; this is an elastodynamic system, so it is a hyperbolic equation, but this is not enough to prevent these waves from being reflected back. So we are solving an incompressible fluid equation, and yet we have a reflection, because of the pulsatile nature of the flow and because the domain deforms. How to cure this? You have at least two ways: either you resort to smarter boundary conditions, which is not easy, or you try to see this piece as part of the whole system, embedding it in a more general picture and trying to see how it behaves when it is part of the whole story. This is why we developed the geometric multiscale approach, which is motivated by the fact that the circulatory system is a complex one, made of the heart, of the pulmonary circulation and of the systemic, global circulation. You have different pieces that act in different manners: the fluid dynamics features of the different pieces, and also the structural features, are completely different. The Reynolds number is 3400 in the aorta, and these are indicative numbers, and it is 0.002 in the capillaries. So first of all, the flow is not turbulent, except in some specific conditions: it might be turbulent in a stenosis, or it might be turbulent when you have low blood viscosity, like in anemia or in other blood diseases. So you have a plurality of situations, and you have very many pieces, because you have a single aorta, but you have as many as three times ten to the power nine, so three billion, capillaries, say. So how to keep track of the whole story? Here the geometric multiscale approach comes to help. We would like to solve the problem in a specific domain like this one, but we want to see it as part of the global picture. We go to a global level two, where we have very few arteries and veins, the most important ones, and then we go to a global level three, where we have the capillaries. So we start from a specific environment where we want to solve the local three-dimensional problem.
Then we would like to go to a level where we have the most important arteries and veins, and then go down to the third level, where we have the capillary network. The point is that here we use Navier-Stokes in three-dimensional domains, there we want to use one-dimensional Euler-type systems, and there we want to use zero-dimensional models, ordinary differential equations. So this is the geometric multiscale idea, and this is the way it works, in a sketchy form. This is 3D: if you have a bifurcation, of course, you have to go to 3D. Here, when you have straight cylinders, you may want to use 1D, and this will be based on Euler-type equations. And here, when you have capillaries, we like to regard them as analogues of electrical circuits, because in that case we will be able to use systems of ordinary differential equations. So we go from 3D to 1D to zero-dimensional. You can carry out this type of simplification according to different mathematical strategies. One way is to use an asymptotic analysis, making some assumptions on the physical behavior, for instance in these blue parts, and then you end up with a system that is virtually the Euler system with some extra dissipative terms, say. You can do more: you can linearize and consider only averaged quantities, and in that case you obtain zero-dimensional systems, ordinary differential equations. And to do that, you have to keep in mind that you are somehow assimilating the flow field to electrical circuits. Here is the dictionary: the pressure, as a fluid dynamics quantity, is the equivalent of the voltage; the flow rate, of the current; blood viscosity, of the resistance; blood inertia, of the inductance; and wall compliance, of the capacitance. In this way you are able to describe the behavior of a very complex network of capillaries, for instance, as a system of electrical circuits. Now, what can you simulate with these reduced models? This is the one-dimensional case, as shown in the sketch after this paragraph. With one-dimensional models you can easily simulate single pieces of arteries, cylindrical ones, accounting also for narrowing, say. You can do junctions, like in this abdominal aorta: if you join different pieces, like in a Lego, then you can use one-dimensional models to account for the propagation of waves. But you can do much more, actually: you can account for this complex system of 55 arteries, which are the most important arteries, those that you have seen in the previous picture, the red arteries, say. And you can solve your network of Euler-type equations on this set of arteries and have a global picture of the circulation. You can go to a lower scale, the zero-dimensional scale, divide the whole body into compartments, and set up electrical circuits for the compartments, which ends up with a system of ordinary differential equations, actually differential-algebraic equations. And if you solve this, you can compute the solution in the peripheral parts of the body. So once again, the idea is to embed a three-dimensional or a two-dimensional system, for instance a bifurcation or a coronary bypass, then prolong it with one-dimensional models here and there, and then embed it in a big map of zero-dimensional networks, like in this simple case, where you see that you have a complete picture here, and you are just taking the pressure at some specific places.
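As a minimal example of the "electrical circuit" dictionary just given, here is a three-element Windkessel model, the simplest lumped 0D element: a proximal resistance, a distal resistance and a compliance, driven by a prescribed flow waveform. This is my own sketch; the parameter values, units and heartbeat shape are only indicative.

```python
import numpy as np

def windkessel_3element(q, dt, Rp=0.05, Rd=1.0, C=1.5, p0=80.0):
    """Sketch of a three-element Windkessel (illustrative values, mmHg / (ml/s) / s units):
    the distal pressure p obeys C dp/dt = q - p/Rd (compliance and distal resistance),
    and the inlet pressure seen by the 3D/1D model is p_in = p + Rp*q."""
    q = np.asarray(q, dtype=float)
    p = np.empty(len(q))
    p[0] = p0
    for n in range(len(q) - 1):                 # forward Euler in time
        p[n + 1] = p[n] + dt * (q[n] - p[n] / Rd) / C
    return p + Rp * q                           # add the proximal resistive drop

# Example: half-sinusoidal systolic inflow over a 0.8 s heartbeat.
t = np.arange(0.0, 0.8, 1e-3)
q = np.where(t < 0.3, 400.0 * np.sin(np.pi * t / 0.3), 0.0)
p_in = windkessel_3element(q, dt=1e-3)
```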
But this is part of the whole system, and here is a two-dimensional coronary bypass, which is solved by Navier-Stokes, and the inflows and outflows are generated by this global one-dimensional system. There is a feedback mechanism, so the system is actually coupled. In this manner, we cure those pathological, let's say, back reflections that we had in the stand-alone system, like in this case. Here is a 3D cylindrical vessel with a 1D extension, and you see that the pressure pulse propagates safely through the one-dimensional extension, say. So here is a coupling of 3D and 1D. This is a 3D bifurcating channel, a carotid artery; there is a 1D extension that you do not see here, but it is there, and you do not have spurious back reflections, say. And you see, for the same problem, the pressure wave: you have this one-dimensional extension, and there is another one here, not shown for graphical purposes, but you see the wave leaving the domain and going through the one-dimensional system to the rest of the body. A few references. First of all, I think it is important to mention that the first paper is the one by Euler, Leonhard Euler, who actually derived his system, the Euler equations which are so popular in aerodynamics and gas dynamics, on the occasion of his investigation of blood flow in one-dimensional vessels, say. Then you have a series of papers: on the continuous dependence of one-dimensional models on the data; on the existence of local-in-time regular solutions in the half-space for 1D models, by Canic, Kim and Mikelic; on the asymptotic analysis for the 1D-0D coupling; on the existence of regular global solutions on bounded domains without source term and with special boundary conditions; and on the treatment of interfaces, because you have to set up suitable models to have the different models interact with one another, the 3D with the 1D and the 1D with the 0D, say. Here we use either Lagrange multipliers or optimal control to account for this interface treatment. This is already a substantial part of the investigation, but it is only a piece of the whole investigation, because what we have been doing so far is concentrating on a circulation model. We actually have models for the heart that I have not shown here, but this is the kernel of the investigation. Now, to be more realistic, one should also add the nervous system, the other compartments, like the splanchnic system, and the respiratory system. And then, since we are not talking about inanimate structures but about living people, we have to account for the metabolism, the way the body reacts to the specific physiology or to specific external conditions. In particular, the metabolism model that we have developed should be capable of accounting for the chemoreflex effect and the baroreflex effect, which have a very strong impact on the circulation model. If you investigate the behavior of blood in people at rest, like you now, you find a completely different picture than in athletes or in people who are under severe conditions, say. The final part is post-processing and model validation: did it work correctly? So far we have seen plenty of mathematical equations, but now we would like to see whether with these equations we can try to solve some problems which are considered to be relevant by doctors, for instance. Now, let me mention again the mathematical aspects here. The first one is to develop an error analysis.
We would like to compare with an exact solution, which you seldom have, of course, but sometimes you have an exact solution of a simplified problem, or you compare with benchmark problems or other results; then comparison with experimental results, in vivo or in vitro; and then the final assessment by medical doctors and clinicians. I am not going to give any dogmatic presentation in this respect; let me just mention the series of tests that one has to pass in order to convince the doctors that the model is correct. I am simply mentioning three different kinds of applications that were suggested to us by doctors, and in which we can hopefully see the potential of these tools for solving a specific problem. The first one is the cavopulmonary shunt for congenital heart diseases, the second one is on cerebral aneurysms, and the third one is on stents for occluded arteries, say. The cavopulmonary shunt is a collaboration with the Great Ormond Street Hospital in London, the Laboratory of Biological Structures of the Politecnico di Milano, and the Fondazione Cariplo, which is the sponsor of this work. Here the problem is that, unfortunately, you may have children who are born with only one ventricle. For them it is not possible to establish the small circulation between the heart and the lungs, so the blood cannot be oxygenated, and there is no chance for these neonates to survive unless this pathology is detected at a very early stage, in the fetus, and they are immediately operated on. So here are two clinical, surgical approaches. The first one is the so-called central shunt: one creates a bridge between the ascending aorta, the heart is somewhere here, so this is the ascending aorta, and the right pulmonary artery. So we are just taking blood from the aorta to bring it to the lungs. And here is another operation that is used every day in very specialized places, the second operation: you now take the blood from the innominate artery and not from the ascending aorta. This is the modified Blalock-Taussig shunt, and this is the central shunt. So the obvious question is: which one performs better? As you can imagine, this is a fluid dynamics problem. The second question is the following: assuming that it is clear that one is better than the other, how can you shape the CS or the MBTS in an optimal way, in order to make sure that not only do you oxygenate the blood, but you also keep irrigating the other parts of the body, the brain, the coronaries, the lower part of the body? So you have to end up with a kind of flow conservation and flow redistribution of the blood, in order to guarantee that all vital processes in the body keep working properly. As a macroscopic element of decision for the clinician, there is the choice of the radius of the shunt, making sure that the balance is correct and that the coronary flux is kept at the right level. This is our computational domain, our omega F, which is deforming in time, so it is omega F of t. But the problem here is that this mathematical omega F has plenty of inflow and outflow boundaries, which are virtual boundaries. How do we provide data on these virtual boundaries? There is no way to prescribe, in an exact form, which kind of inflow and outflow conditions you should put there to be respectful of the actual nature of the problem.
You know, if you are working in fluid dynamics, how sensitive the solution of the Navier-Stokes equations is to the inflow and outflow boundary conditions, and of course also to the geometry. So we have embedded the system, the shunt, in our geometric multiscale framework. This is the shunt, and this is the global zero-dimensional system describing the rest of the circulation. So here there is an interaction between a 3D and a 0D system: 3D Navier-Stokes with a system of differential-algebraic equations. And in this way, you do not need to force boundary conditions: boundary conditions come automatically from the external environment, and it is not only an input here, it is a feedback mechanism, so the shunt is providing conditions also to the ordinary differential equation system. And we are very proud of this type of investigation, because for the first time a numerical solver was able to show that in the shunt there is flow reversal, which had been observed by the doctors, by the surgeons, but had never been reproduced before, due to the mistreatment of boundary conditions. So now we can better shape the form of the shunt, since we have more reliable physical and mathematical data to compare with. This is the second application, which has to do with cerebral aneurysms. This time the main partner is the Niguarda hospital in Milan, which is perhaps the most important hospital in Italy for the treatment of cerebral aneurysms, and Siemens is the main sponsor of this project, say. Cerebral aneurysms are lesions arising on the arteries of the brain. What you see here is the brain from below, and you see this circle of Willis, which is sketched here. The circle of Willis is there to dispatch and distribute the blood that is coming from the internal carotid arteries to the brain, so it has, of course, a vital importance. And it is a very smart system, because it can self-adjust even in those cases where you may have an occlusion of a specific piece. Quite often the aneurysms are subject to rupture, and in that case you may have a dangerous cerebral hemorrhage, say. It is estimated that five percent of the population has some type of aneurysm in the brain, and they are completely asymptomatic, so in general there is no way of detecting or monitoring them unless you undergo a specific clinical investigation, say. The project wanted to highlight the role of the vascular morphology, the correlation between vascular morphology and the risk of the presence or the development of cerebral aneurysms, say. And the method that was used is a merging of statistical investigation and numerical simulation, to end up with results which could provide doctors with a better understanding, say. So this is the geometric preprocessing. This is an arterial vessel of a real patient; we have been working on 65 real patients. It is very difficult to have real data from patients: these are in vivo data, and they are very difficult to get. There are several ways to characterize morphologically this three-dimensional domain, say, by center lines: we look at the radius of the maximal inscribed sphere, we want to identify each bifurcation, we have to retrieve the center lines of each branch and then identify the branches. So this is the preprocessing that you have to carry out in order to make sure that you can classify the kind of patient or the kind of aneurysm that you have. This is an aneurysm, this enlargement of the artery, say.
This is reconstructed from the geometrical data. This is the pressure field after the numerical simulation, and these are the velocity streamlines in a single aneurysm, and this is the way you can track the particles in the aneurysm. One very important parameter here is the residence time in the aneurysmal sac. The doctors are very happy to have a quantitative tool to compute the residence time in this part, because it provides an indirect measure of the capability, how to say, of the flow field to exert shear stress on the surface and eventually to create ruptures, say. This is one element of the investigation. And this is what you can get, and it is quite a surprising and useful result, because on the basis of this observation, what we have seen is that aneurysms do not distribute randomly: they tend to prefer two specific sites, and these sites are just downstream of the regions of maximal curvature of the internal carotid artery. So it seems that there is potentially a very clear correlation between the fluid dynamics behavior, which is affected by the curvature of the vessel, and the presence of aneurysms. And on the basis of these results, the doctors at the Niguarda hospital are now investigating their patients clinically, to see if there is a precise feedback or confirmation of this type of conjecture, say. I am going to conclude, I think I still have a few minutes, five minutes, maybe three minutes, okay, by talking about a third project, which is about drug-eluting stents. So what is a stent? It is a metallic network that you put in an artery to restore the flux, to restore the lumen, say. You have a narrowing, you insert a balloon, you inflate the balloon, and this inflates the stent, say; the stent is then deployed, and it is there to restore the original lumen, say. So stents have a very strong interaction with the endothelium of the vessel wall: they are recognized as an external body, and they are actually covered by the proliferation of endothelial cells over a few months. So in the new stent technology, the industry tends to use a coating of heparin or other anti-inflammatory substances: the new stents have a coating with an anti-inflammatory drug which is released. And this is a kind of multiscale problem, because you are working with different scales: the scale of the wall thickness, which is of the order of one millimeter, and the scale of the coating thickness, which is five micrometers, say. And you have three phases: the effective solid phase, where the drug is bound to the polymer, to the coating; the virtual solid phase, where the polymer is swollen; you have a free interface; and you have the liquid phase, where the drug is dissolved in plasma. And we have a model to describe this, let's say, in the arterial wall: it is an advection-diffusion process, where u is the velocity field provided by the Navier-Stokes equations; then you have the liquid phase, the virtual solid phase and the effective solid phase, and you have critical quantities that depend on the polymer characteristics and are determined by stochastic models. So you are trying to use a micro description based on a stochastic approach and a macro description based on differential equations, say. This is the numerical treatment. This is just a piece of a stent; this is the geometry, with the grid around it.
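Before coming to the mesh-size issue, here is a very rough sketch of my own of the release dynamics just described: one-dimensional diffusion of the drug concentration across the wall, fed by a flux condition at the coating. All material values are invented for illustration; the real model is three-dimensional, multiphase and coupled to the flow.

```python
import numpy as np

def drug_release_1d(D=7e-12, L=5e-4, T=3 * 24 * 3600.0, nx=100, c_coat=1.0, k=1e-8):
    """Sketch: diffusion of drug concentration c(x,t) across a wall of thickness L (m)
    with diffusivity D (m^2/s), a flux condition -D c_x = k*(c_coat - c) at the coating
    side (x = 0), and c = 0 at the outer side. Explicit finite differences."""
    dx = L / nx
    dt = 0.4 * dx * dx / D                      # explicit diffusion stability limit
    c = np.zeros(nx + 1)                        # c[0] at the coating, c[-1] outer side
    for _ in range(int(T / dt)):
        c[1:-1] += dt * D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx ** 2
        c[0] = (c[1] + dx * k * c_coat / D) / (1.0 + dx * k / D)  # discrete Robin condition
        c[-1] = 0.0                                               # drug washed out outside
    return c

profile_after_3_days = drug_release_1d()
```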
And only for this very simple piece of stent, you need almost one million elements in the stent coating and one million elements in the wall. That is too much, because you have very many such components in a single stent. So we have to resort to a simplified model: the coating is not considered as a three-dimensional body, but just as a place where you have to impose a flux condition. So we work with a model for the transient flux, and we avoid solving the problem inside the coating. This is the numerical simulation: this is the release of heparin from our stent, and this is the blood plasma pressure for a realistic model. This simplified model is a realistic model, and this is the way the stent expands and the drug is released. So a critical problem here is that in a standard stent the drug is released too quickly, in about one day, about 24 hours, and that is too quick with respect to the critical time of proliferation of the endothelial cells. So the challenge today is to produce coatings that allow this release in a much smoother, slower way. You have several ways to do that: you can modify the form of the coating, or create multilayers, or modify the type of drug. Here is an example with two different approaches. This is a uniform coating, and after one day, basically after two days, the concentration has reached its steady state; that is too fast. And this is what we call a coffee-cup stent coating, because you have three different layers, like a cup of coffee, with two different interfaces, and the drug is released sequentially. What you see is that after three days the diffusion process is still working properly. And this is the challenge: try to make this even slower, in order to make sure that the diffusion of heparin or other substances takes of the order of four or five weeks, to be effective enough to contrast the proliferation of cells. So the conclusion is that I think this kind of investigation is useful to understand better the basic physiological processes, and this is basic research after all; to assess risk, to produce risk indicators for pathologies, and this is useful for clinical diagnosis; and also to help develop therapeutic and surgical planning, and here optimization and control, shape optimization and control, have a lot to say. But perhaps even more important is that this is a good opportunity to develop new mathematical tools, suggested by the complex nature of the problems that we have to treat. I am going to conclude by acknowledging the main bodies, MOX at the Politecnico di Milano and the group at EPFL, with their different players, and the many external collaborators that we work with every day, say. And I would like to thank you very much for your kind attention.
We introduce some basic differential models for the description of blood flow in the circulatory system. We comment on their mathematical properties, their meaningfulness and their limitations in yielding realistic and accurate numerical simulations, and their contribution to a better understanding of cardiovascular physio-pathology.
10.5446/15972 (DOI)
So it's a great pleasure to introduce Sorin Popa. Sorin was born in Romania and became a member of the extraordinary team of mathematicians at Bucharest. He graduated in 1977 from the University of Bucharest and stayed there for ten years. In 1987 he moved to UCLA, where he has been ever since, with frequent stays in France. His work is characterized by amazing depth. Many of his results seem innocent enough, but those who know them appreciate the powerhouse behind them. Several times I have been told by colleagues that they had been stuck on a problem for a long time, and I suggested that they try using Popa's theorem such and such; the next day I hear that the problem has been solved. In the early 80s Popa began work on subfactors, a subject he still dominates today. But about five years ago he opened up an entirely new path in von Neumann algebras, looking where no one had looked before and solving some of the hardest and oldest problems in the subject, providing at the same time insights into ergodic theory and group theory. I look forward to hearing his talk on this work. Thank you very much for these nice words and this presentation. Let me first define the objects I will be talking about, namely von Neumann algebras associated with actions of groups on probability spaces. First of all, by a probability space, and I am sorry to offend you by reminding you of these definitions, we mean of course a standard Borel space with a probability measure, and one has two prototypes of such spaces: the spaces without atoms, which are all measurably identical, measurably equivalent of course, to the unit interval with the Lebesgue measure, and finite sets with probability given by weights that of course add up to one. And I remind you also that an isomorphism of probability spaces means the following: given two probability spaces, say X and Y, with measures mu and nu respectively, an isomorphism is a map from X to Y which, after removing sets of measure zero, is a bijection, with both the map and its inverse being measurable and measure preserving. Now, one way to get rid of this business of subtracting sets of measure zero is to pass to the associated function algebra, which in this case happens also to be a von Neumann algebra, namely the L-infinity space associated with the probability space. So such an isomorphism can be viewed as an isomorphism of the function algebras, from L-infinity of X to L-infinity of Y, that preserves the integral this time. And this is an observation due to von Neumann. You have an obvious correspondence, going from here to here, via this relation; I used the same notation to emphasize that this is really the same thing. So for this isomorphism of the function algebras associated to the isomorphism delta of the spaces, the map applied to a function, at a point t, is the function evaluated at delta inverse of t, and the reason you take delta inverse is because you want this correspondence to be functorial. Of course, this map, because it preserves the integral, extends to an isomorphism of the Hilbert spaces, and just taking the self-isomorphisms, in other words the automorphisms, of the probability space, functoriality of course tells you that these will form a group, and this correspondence, this functor, tells you that the two groups, the automorphism group of X and that of L-infinity of X, are the same.
So now take a group gamma, which throughout my lecture will be an at most countable discrete group. An action of gamma on the probability space X simply means a group morphism of gamma into the automorphism group of X, or, if you prefer, into the automorphism group of the function algebra, and I will use the simplified notation gamma acting on X, which is very fashionable and convenient. I remind you also of the two basic properties of a group action: freeness, also called essential freeness, which is defined here, and ergodicity; these are concepts I am sure you are familiar with, so I pass over them quickly. And I have here a list of examples. The first is the most basic possible example, a Z-action, which is of course nothing but a single transformation, a classical discrete dynamical system if you want, except that the transformation is invertible, an automorphism of the probability space, like an irrational rotation on the torus or a two-sided Bernoulli shift, and so on. Giving such a transformation T is exactly the same as giving the Z-action, by taking the powers of T, and vice versa; ergodicity of the transformation T is nothing but the ergodicity defined here for the Z-action, and freeness comes for free once you have ergodicity on a non-atomic probability space; you think a few seconds to see that, it is a triviality. Bernoulli actions are the sort of standard, universal example of a group action, and the nice thing about this construction is that you can do it with any countable group gamma. So you take gamma an arbitrary group, X-naught a probability space which I think of as a base probability space, you form the infinite product of X-naught with itself gamma times, indexed by the elements of gamma, with the product probability measure, and you let gamma act on sequences indexed by the elements of the group by simply shifting from the left. There is also a generalized version in which gamma acts on a set, you take the product of X-naught with itself indexed by that set, with the obvious shift; for instance gamma acting on the coset space gamma modulo gamma-naught, where gamma-naught is a subgroup of gamma. But these are really not geometric at all in some sense; there are also examples of a more geometric nature, and I have listed a few of them here. Now, once an action of gamma on X is given, one associates to it an algebra of operators on a Hilbert space, closed in the weak operator topology, through the so-called group measure space construction. This construction is due to Murray and von Neumann, going back to 1936, and it is very much like the usual crossed product in algebra, except that one simultaneously constructs the crossed product algebra, the way we usually do in algebra, and the space on which it acts; at the same time there is a rather subtle closure involved, which hides a lot of analysis and in fact sets up the ground for a lot of analysis. It goes like this: the Hilbert space will be the space of all Fourier-like series, which I explain in this kind of formal manner.
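As a quick sketch of the Bernoulli construction just mentioned (in one standard notation, which may differ from the slides):

\[
(X,\mu) \;=\; \prod_{h\in\Gamma}(X_0,\mu_0), \qquad (g\cdot x)_h \;=\; x_{g^{-1}h}, \qquad g,h\in\Gamma,\ \ x=(x_h)_{h\in\Gamma}\in X .
\]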
So you take Fourier-like series with coefficients in L-two of X rather than scalars, and you replace the usual Fourier basis, say on L-two of the torus or of Z to the n, by elements u indexed by the group: my Fourier basis is just a copy of the group gamma, so I denote its elements by u-sub-h with h running over the elements of the group, and rather than the integers you take an arbitrary group gamma. Of course this Hilbert space is nothing but the direct sum of L-two of X with itself gamma many times, but I present it in this manner because I want to define a multiplication, and this is a visually convenient setup for that. So here is the multiplication. I multiply two such formal Fourier-like series by defining the product on the monomials; that is what matters, and then we see how it extends. So u-g times u-h will of course be u-sub-gh: I want the u's to be a copy of the group. The issue is how to move a coefficient from the right-hand side of a u-g to its left side, and it goes through by twisting by the automorphism sigma-g: the coefficient b passes to the other side as sigma-g of b, and then of course you multiply the coefficients. Now there is an apparent problem: these are L-two functions, so the product of coefficients is only an L-one function; let us not worry about that. By Cauchy-Schwarz it is trivial to see that even if you take infinite square-summable sums, that is, vectors in the Hilbert space, you can still carry out this product and, after rearranging terms, you get a formal series with coefficients in L-one, uniformly bounded in L-one by the way. We then put, by definition, L-infinity of X crossed with gamma to be, for the moment, the set of those formal series x with the property that, multiplied with any other element of the Hilbert space, any other formal series, the result still lands in H, that is, the coefficients are again square-summable; you impose that condition. Once you have that, any such multiplier defines an operator on the Hilbert space, and a closed graph argument immediately shows it is a bounded operator. So you view L-infinity of X crossed with gamma both, in some sense, as a subspace of H and as an algebra of left multiplication operators on the Hilbert space. By associativity of the product you see immediately that this really is an algebra, the product of any two such elements being again such a multiplier, and also that it is closed under the star operation, taking adjoints on the Hilbert space, and that it is closed in the weak operator topology when viewed as an algebra of operators on the Hilbert space. And that is what one calls a von Neumann algebra.
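Roughly, the data and the multiplication rule described above can be written as follows (a sketch, with sigma-g denoting the automorphism of L-infinity of X induced by g):

\[
\mathcal H \;=\; \bigoplus_{g\in\Gamma} L^2(X,\mu)\,u_g, \qquad \Big(\sum_g a_g u_g\Big)\Big(\sum_h b_h u_h\Big) \;=\; \sum_{g,h} a_g\,\sigma_g(b_h)\,u_{gh}, \qquad L^\infty(X)\rtimes\Gamma \;=\; \{\,x \in \mathcal H \;:\; x\,\xi \in \mathcal H \ \ \forall\,\xi\in\mathcal H\,\}.
\]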
Let me quickly mention that although it was quantum mechanics that first led von Neumann to consider such algebras of operators on Hilbert space, these so-called group measure space algebras are really the first interesting examples that appear as mathematical objects in their own right, relating immediately and very visibly to group theory and ergodic theory. I should also mention that on the same occasion one constructs, besides this group measure space algebra L-infinity of X crossed with gamma associated to the action of gamma on X, the so-called group von Neumann algebra of gamma, which is just the case where the probability space is a one-point set: the same kind of Fourier series, but with scalar coefficients. I like to do it this way to emphasize that elements of these algebras have Fourier expansions, which is extremely useful for the analysis involved. Now some immediate observations. One is that L-infinity of X sits as a subalgebra in the group measure space algebra, denoted by M for simplicity, as the coefficients of degree zero. I should mention that although a priori I took the coefficient functions in L-two of X, the fact that they are multipliers immediately implies that they are in L-infinity of X; no surprise, it is a triviality, the standard qualifying exam problem. Another observation is that the integral on L-infinity of X extends to a positive linear functional tau on all of M, in this manner: tau of a formal Fourier series is just the integral of its degree-zero coefficient. And it satisfies the remarkable trace relation, tau of xy equals tau of yx for all x and y, so it is a trace on the algebra; in fact this trace is very closely related to the scalar product on the Hilbert space we started from. Let us take one very simple example, the case where the group has n elements. If the action is assumed to be free and ergodic, with the group having n elements, that automatically implies that X is the n-point set with the counting measure, and the algebra M is nothing but the n-by-n matrix algebra, with the functional tau being just the trace on matrices, normalized of course, because you want tau of one equal to one; and L-infinity of X, as it sits inside M, is nothing but the diagonal operators. Note that everything was forgotten in this construction except the cardinality of the group: the algebra forgot the group, so to say. Now, when the group is infinite and the action is free and ergodic, the algebra M is what one calls a II-one factor, which by definition means that the center is reduced to the scalars (the only elements of M that commute with all other elements of M are the scalar multiples of one) and that M has a unique trace, namely the trace we have constructed. These two features, by themselves, are nothing special: the n-by-n matrices also have center reduced to the scalars and a unique trace.
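In formulas, the trace described above reads (sketched in the same notation as before, with e the neutral element of the group):

\[
\tau\Big(\sum_{g\in\Gamma} a_g u_g\Big) \;=\; \int_X a_e \, d\mu, \qquad \tau(xy) \;=\; \tau(yx) \quad \text{for all } x,y \in L^\infty(X)\rtimes\Gamma .
\]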
The third feature, though, which makes this into a II-one factor, is that the trace on the set of projections of M (projections meaning the self-adjoint idempotents of M, or, if you prefer, the elements of M which, as operators on the Hilbert space, are orthogonal projections) takes all values in the interval from zero to one. So you have a trace which is really a dimension function, and it takes all values between zero and one. This is quite amazing, and in fact it sets up the ground for a lot of extremely interesting mathematics, in the noncommutative geometry of Connes and in the subfactor theory of Jones, where otherwise integer-valued quantities become real-valued: the index of a subgroup has an analogue for subalgebras in Jones' theory of subfactors, and this continuous dimension makes it possible to go beyond integer values. Let me quickly say that the group von Neumann algebras L of gamma are II-one factors if and only if the group has infinite conjugacy classes (ICC), meaning that for any element h different from the neutral element, the set of ghg-inverse, as g runs over the elements of the group, is infinite. We have the standard examples: the infinite symmetric group, free groups, PSL(n,Z); these are all ICC. Now, the remarkable thing about the continuous dimension is that it allows amplifications of a II-one factor by any positive real number t. Let me just define this when t is less than or equal to one: you choose a projection p in M of trace t and take, so to say, the corner of M given by that projection, the algebra pMp; this is M to the power t. It involves a choice, so it is defined only up to isomorphism. What you have done this way is, out of your II-one factor, to construct a whole one-parameter family of II-one factors M to the power t; the definition for all t is given here, and it is immediate to see that it satisfies the property that M to the t, to the s, is M to the ts. This construction of the one-parameter family is due also to Murray and von Neumann, a bit later, in 1943, and this property immediately led them to consider the set of all t with the property that M to the t is isomorphic to M, calling it the fundamental group of M. It is a subgroup of the positive reals; it is a group because of this property. On the same occasion, Murray and von Neumann proved that all II-one factors arising from the group measure space construction with the group gamma involved locally finite give you the same II-one factor all the time, and this factor is the so-called approximately finite dimensional II-one factor. They actually proved this abstractly: they have an abstract definition of approximately finite dimensional factors and prove that these are all isomorphic, so in particular all of the above are isomorphic. So again you do not recognize anything, the group disappears completely; you just get the same algebra all the time. Denote that algebra by R, by the way, and note that amplifications of AFD factors are very easily seen to be AFD as well.
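The amplification and the fundamental group just defined can be summarized as follows (a sketch, for t at most one; the general case is as on the slides):

\[
M^t \;:=\; pMp \quad (p \in M \text{ a projection},\ \tau(p)=t\le 1), \qquad (M^t)^s \;\cong\; M^{ts}, \qquad \mathcal F(M) \;:=\; \{\,t>0 \;:\; M^t \cong M\,\},
\]

the last set being a subgroup of the multiplicative group of positive reals.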
They obtained that the fundamental group of this AFD factor is equal to all of the positive reals. This is the interesting phenomenon. However, they also showed that there is a theory behind it, that it is not as if there might be just one II-one factor: they were able to prove that the free group factors, the II-one factors coming from free groups, and in fact also some group measure space factors involving the free group, are not isomorphic to this algebra. So you do have a theory behind it. Now, one of the central themes, as you can see from this construction, in the theory of II-one factors is of course the study of these algebras associated with group actions and with groups: basically, investigating how the isomorphism class of the group measure space II-one factor M, that is L-infinity of X crossed with gamma, depends on the building data, so on the group gamma and on the way it acts. Along these lines you also want to calculate the fundamental group, which is clearly a fascinating question, and another related question is to understand the automorphisms, the symmetries, of these algebras. Ideally, of course, you would like to describe all isomorphisms between such algebras, allowing perhaps some amplifications, in terms of isomorphisms between the building data, whatever that means; I will explain what I mean by it a bit below. Now we know this is not possible in general, because of the finite group case and the locally finite group case, but this will be the main theme. So perhaps it is clear that one has to specialize to certain classes of groups and group actions. The classic questions along these lines, going back to Murray and von Neumann and to Kadison, are, for instance, whether there exist II-one factors with trivial fundamental group, or questions relating to the free group factors: whether they are isomorphic or not for different numbers of generators (this, by the way, is still unknown), and more generally the group measure space version of that, with arbitrary actions of F-n and F-m: can these produce the same II-one factor for n different from m, what are the fundamental groups, and so on. Now, isomorphism of the building data, to come back to that term, means here conjugacy of actions. Conjugacy of actions is a well-known term; I am just using it for actions of possibly different groups. A conjugacy between gamma acting on X and lambda acting on Y means an isomorphism between the probability spaces together with an isomorphism between the groups, with the property that the isomorphism of probability spaces intertwines the actions, of course after you identify the groups in this way. And it is an obvious observation that conjugacy implements an isomorphism of the algebras. But as early as 1955, in an early paper of his, Singer noticed that in fact the group measure space algebra of a free ergodic action can at best remember the equivalence relation implemented by the action of the group; in other words, the equivalence relation on X given by: t is equivalent to t-prime
if t-prime equals g times t for some element g of the group, in other words, t and t-prime are on the same orbit of the action; this is, obviously, defined up to sets of measure zero. It is a very beautiful observation, and the way he proved it, by the way, was by giving a new construction of the group measure space algebra which depends only on the equivalence relation. So you can formulate it in the following equivalent form: an isomorphism of the probability spaces, viewed as an isomorphism of the function algebras, extends to the group measure space algebras if and only if it is an orbit equivalence. Orbit equivalence means what the words say: Delta is an orbit equivalence if it takes gamma-orbits onto lambda-orbits. Because of these observations we have the trivial implications: conjugacy implies orbit equivalence implies isomorphism of the von Neumann algebras. As early as 1959 these observations led Henry Dye to start studying measure preserving group actions on the probability space up to orbit equivalence. In classical ergodic theory one usually studies group actions, usually just a single transformation, up to conjugacy; with these ideas people started to look at classifying, or studying, group actions up to orbit equivalence. This was initiated in that 1959 paper of Dye: basically, how does the orbit equivalence class of gamma acting on X depend on its conjugacy class? Now the seventies saw two remarkable results clarifying the situation when the acting group is amenable. There is the by-now classic, remarkable theorem of Connes showing that all amenable II-one factors (I will not give the definition) are isomorphic to the AFD factor R. What is significant for us is that all II-one factors coming from group actions with the group involved, call it gamma, amenable, are themselves amenable, and therefore isomorphic to R. So this settles the von Neumann algebra situation when the group is amenable. And then for orbit equivalence you have a similar sort of statement: all amenable equivalence relations, and in fact, in a more restricted form, all free ergodic actions of countable amenable groups on the non-atomic probability space, are orbit equivalent. Along these lines the final results were obtained by Ornstein and Weiss around 1980 for free actions and by Connes-Feldman-Weiss for the general case; Dye had already done the abelian group case in 1959. Now, for non-amenable groups the situation is very complex. People realized this quite quickly, the essential result being that any non-amenable group has at least two non-orbit-equivalent actions.
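To record the notion and the implication chain just described (a sketch, with Delta the orbit equivalence):

\[
\Delta(\Gamma\cdot x) \;=\; \Lambda\cdot\Delta(x)\ \ \text{for a.e. } x\in X, \qquad \text{conjugacy}\ \Longrightarrow\ \text{orbit equivalence}\ \Longrightarrow\ L^\infty(X)\rtimes\Gamma \,\cong\, L^\infty(Y)\rtimes\Lambda .
\]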
Many people were involved in proving this, Connes-Weiss and Hjorth among them, and Hjorth in fact proved very recently, for property (T) groups, something left open since 1981: that any such group has uncountably many non-orbit-equivalent actions. There are by now many results which make you believe that any non-amenable group has uncountably many, and probably, in an appropriate sense, unclassifiably many, actions up to orbit equivalence. Let me mention at this point the last hope for reversing the implication mentioned before, that orbit equivalence implies isomorphism of the von Neumann algebras: one would hope to prove the converse, that any isomorphism of the von Neumann algebras comes from an orbit equivalence. Why? Because that would reduce the whole classification problem, the study of these von Neumann algebras, to a purely measure-theoretical, ergodic-theoretic problem of orbit equivalence. Well, that hope was killed, so to say, by a set of counterexamples of Connes-Jones in 1982. Now, the early eighties saw the emergence of the first rigidity results, both for II-one factors and for orbit equivalence relations. I will first mention this beautiful result of Connes, an extremely important short paper with an extremely penetrating and deep idea, from 1980: he proved that if you take an ICC group with Kazhdan's property (T), then the associated II-one factor L of gamma has a very rigid symmetry structure; by this I mean that the outer automorphism group is countable and the fundamental group is also countable. I regard these as the symmetries of the II-one factor. On that occasion he formulated a quite striking, challenging conjecture, which I refer to as Connes' rigidity conjecture, stating that two II-one group factors coming from ICC groups with property (T), say gamma and lambda, can be isomorphic if and only if the groups are isomorphic; and one can of course strengthen this type of statement to a group measure space version, taking II-one factors coming from group actions. In that decade there were several very beautiful results touching, in one way or another, on this conjecture: the result of Connes and Jones on the non-embeddability of factors coming from property (T) groups into the free group factors, which used an important idea (you will see it again a bit later), Haagerup's deformation of the free group F-n; and a result of Cowling and Haagerup which basically gives almost an answer to this obviously very hard conjecture of Connes in the case of lattices of Sp(n,1). Let me just keep to this last thing, an observation I would like to explain to you: Connes' rigidity conjecture, in its strongest possible form if you want, does hold true modulo countable sets. What do I mean by that? The conjecture says that the map sending gamma to L of gamma is one-to-one on this class of groups; proving it modulo countable sets means proving that this map is countable-to-one. Well, so
let us prove this. I will use what I call a separability argument, together with a result of Gromov and a result of Shalom. Gromov, I remind you, proved at some point that there exist uncountably many groups with property (T), showing that this is an actual statement; otherwise there would be no statement here. And Shalom proved that any such infinitely presented property (T) group still arises as a quotient of a finitely presented property (T) group. There are only countably many of those, and as usual we do not distinguish between countably many and one, so you can think of it as having a mother property (T) group, mother of them all, of which any other is a quotient. You want to prove that you cannot have uncountably many such quotients giving you the same II-one factor; so you are dealing with uncountably many representations of your mother property (T) group into the same II-one factor. I remind you at this point that property (T) means that the trivial representation is isolated. The fantastic thing about representations of property (T) groups into the unitary group of a II-one factor is that every representation is isolated, the most wonderful type of statement anybody who has worked with property (T) groups could wish for; and that is exactly what happens when your representations lie in a II-one factor. Because of that, by a trivial separability argument, if you had uncountably many representations in a separable situation, you would have some of them as close as you like. And now you are in an algebra, and this is why each representation is isolated: you are in an algebra, so you can take left and right multiplications. Say pi and rho are your representations; they land in a II-one factor, so you take the representation obtained by multiplying by pi of g to the left and rho of g inverse to the right. That is still a representation, because of the algebra situation and because M is, essentially, a Hilbert space, and the fact that the two representations are close means that the vector one is almost invariant. So you get an invariant vector by property (T), and in this context an invariant vector means an intertwiner. QED. So this is the proof of this fact. Now, in ergodic theory, in the meantime, the parallel breakthrough to Connes' result was Zimmer's rigidity results for orbit equivalence relations given by lattices in higher rank Lie groups. I just give you a simple statement of what he had shown from the things he proved at that time, in some sense in the spirit of this conjecture but on the orbit equivalence side: he showed that free ergodic actions of SL(n,Z) and SL(m,Z) can be orbit equivalent only if n is equal to m, and this is for arbitrary free ergodic actions, which is quite amazing. So the rank of the lattice is recognized by orbit equivalence, a very striking kind of result. He derived this from his cocycle superrigidity, which I remind you was a generalization of Margulis superrigidity, so it really comes from deep ergodic theory. Furman then pushed this further to show that most free ergodic actions of higher rank
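The separability step can be sketched as follows (notation chosen here for concreteness: pi and rho are two of the uncountably many representations into the unitary group of M, and the hat denotes the image of 1 in L-two of M with respect to the trace):

\[
\sigma_g(\xi) \;=\; \pi(g)\,\xi\,\rho(g)^* \ \ \big(\xi\in L^2(M,\tau)\big), \qquad \|\sigma_g(\hat 1)-\hat 1\|_2 \;=\; \|\pi(g)-\rho(g)\|_2 \ \text{ small} \ \Longrightarrow\ \exists\, v\ne 0:\ \pi(g)\,v \;=\; v\,\rho(g)\ \ \forall g ,
\]

the non-zero invariant vector v given by property (T) being precisely the intertwiner.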
lattices are orbit equivalence superrigid. So gamma here is again a higher rank lattice, and for most free ergodic actions gamma on X, the action is orbit equivalence superrigid, which is in some sense really mind-blowing. The phenomenology of superrigidity at the level of the orbit equivalence relation means the following: gamma on X is OE superrigid if any orbit equivalence between such an action and an arbitrary free ergodic action entails that the groups are isomorphic, with conjugacy of the actions. That does happen, and in the proof he used again Zimmer's cocycle superrigidity and results of Ratner. Well, this is to be taken with a grain of salt, in the sense that the statements (isomorphism, conjugacy) are actually virtual: up to subgroups of finite index, finite normal subgroups, and so on. Now, in parallel, contemporary to this result, there was a series of striking results of Damien Gaboriau, who introduced a number of invariants, including the so-called L-two Betti numbers for equivalence relations, with which he was able to prove some really amazing phenomena, for instance for actions of the free groups. I mention here only two such results from the many that he obtained: that free ergodic actions of F-n and F-m can be orbit equivalent only if the ranks, the numbers of generators, are the same; and also that the fundamental group of the equivalence relation given by a free action of a free group with finitely many generators is trivial, the definition of this fundamental group being very similar to the one for II-one factors. He did this, basically, by proving certain remarkable properties of his L-two Betti numbers: the fact that the Betti numbers of the equivalence relation implemented by a free action of a group gamma are equal to the L-two Betti numbers of the group gamma, in the sense of the Atiyah and Cheeger-Gromov notion of L-two Betti numbers for groups, together with a scaling relation under amplification; this of course entails the above statements once you have a single Betti number which is neither zero nor infinity. And there is another series of striking results, by Monod and Shalom, on the orbit equivalence side: superrigidity-type results for groups gamma that are products of two or more hyperbolic groups, such as products of free groups. Now, as you can see from this, the success was certainly very significant and important on the orbit equivalence side: in particular they calculated many of the symmetries, the fundamental groups and the outer automorphism groups (Furman and others, which I did not mention), and also settled questions about the free groups, very fine and precise types of statements, which were not quite matched on the von Neumann algebra side. Well, in the remaining time I have left, I want to explain some recent results which actually do match the success on the orbit equivalence side of the story and, in fact, in some sense even bring some new phenomenology to the orbit equivalence side itself.
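The two properties of Gaboriau's invariants mentioned above can be written, roughly, as follows (with R the orbit equivalence relation of a free measure preserving action and R to the t its amplification):

\[
\beta^{(2)}_n\big(\mathcal R_{\Gamma\curvearrowright X}\big) \;=\; \beta^{(2)}_n(\Gamma), \qquad \beta^{(2)}_n\big(\mathcal R^{\,t}\big) \;=\; \frac{\beta^{(2)}_n(\mathcal R)}{t},
\]

so a single Betti number which is neither zero nor infinite (for F-n the first one equals n minus 1) forces the fundamental group of the equivalence relation to be trivial.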
So all these results have something in common: they focus on certain classes of group actions, and on their II-one factors and their equivalence relations, which basically have some soft side, some deformability property (let me say it that way), together with some rigidity property. It is always this combination, this coexistence of the two properties, that makes the whole thing extremely rigid and recognizable, recognizable even after isomorphism: I can still recognize all the pieces, and the isomorphism class remembers everything, in certain situations down to the last detail. The first sample of this has the flavor of a rather particular statement, but it was the first real insight showing that one could run this kind of argument and get this kind of precise statement. So the first theorem says the following: take gamma and lambda any two non-amenable subgroups of SL(2,Z) (the statement is much more general, but I will just focus on this) and let them act on the two-torus in the obvious way: gamma and lambda act on Z-squared as restrictions of the SL(2,Z) action, hence on the dual of Z-squared, which is the two-torus, and that is the action I am talking about. It is one of the geometric sort of actions I listed at the beginning as examples. Well, then any isomorphism between the II-one factor of the gamma action and some amplification of the II-one factor of the lambda action comes from an orbit equivalence. In other words, remember there was that implication with a big problem because of the Connes-Jones counterexamples; well, for this class the reverse implication is true. What does that say? That you reduce the problem to the measure-theoretical level: all the invariants you have in orbit equivalence ergodic theory will tell you things about the von Neumann algebra; it is the same thing, at least for this class of II-one factors. And indeed, a consequence of this theorem, combined with Gaboriau's results, is that the fundamental group of any of these II-one factors is trivial, for subgroups of finite index and more generally for finitely generated ones, because of the Betti numbers: one Betti number of the equivalence relation is non-zero. You also get statements in which you can distinguish: take F-n embedded into SL(2,Z), in any way (by the way, it does not matter in which way); then the II-one factors associated with these actions of F-n, for different n, are non-isomorphic. The proof again uses these opposing forces, deformability and rigidity. Where is the deformability and where is the rigidity? The deformability is in the groups here: the groups are subgroups of SL(2,Z), so basically of the free group, and the free group is almost amenable, at least from my perspective, for what I am using in the proof, which is the Haagerup deformation. Let me remind you of this. For amenable groups, the positive definite function constantly equal to one on the group can be approximated pointwise by finitely supported positive definite functions.
And for the free group, Haagerup proved that you can approximate it by C-zero positive definite functions, the exponentials of minus the length function times a scalar; letting the scalar tend to zero you get the approximation of the constant function one. Now, what is the rigidity? I forgot to say. The rigidity, you see, comes from the action: I am using the action of gamma on the two-torus, which basically comes from the action of gamma on Z-squared, and the action on Z-squared has an intrinsic rigidity property, the so-called (or what I like to call) Kazhdan-Margulis relative property (T) of the pair Z-squared inside Z-squared semidirect product with SL(2,Z). This is a somewhat weaker property, a property of Kazhdan type for an inclusion of groups; perhaps I will remind you of the definition, perhaps not. Now, for the next result. Here is the next theorem I want to present. I am now focusing on another class of II-one factors where the same kind of game, deformability and rigidity, can be played, but this time the rigidity will be a property of the group, so it is exactly the opposite situation: before, the rigidity was in some sense in the action. The deformability, the softness, will now come from the way the group acts, from the action. The property required of the action, this deformability or softness property, is what I call malleability. It is an abstract property (I will give the definition) which, for instance, the Bernoulli actions have; and it was noticed recently by Furman that Gaussian actions are also malleable, so you have a huge class of group actions, which gives you lots of examples of this situation. The statement says the following. It is a von Neumann algebra statement, but in fact it gives new phenomenology even when you only look at the orbit equivalence side of part of it. Because I want to state it in the most general case, and because it is interesting, let me call a group gamma weakly rigid if it has an infinite normal subgroup with the relative property (T). You have the examples I mentioned before; Shalom, and more recently Valette, have constructed large and very beautiful classes of examples of such relative property (T) pairs of group and subgroup. But I am also happy with this example: gamma equal to H times H-prime, where H is an infinite Kazhdan group; this is an example of a weakly rigid group, the rigid subgroup being H, which satisfies the condition. Well, the theorem then says the following. I will take an isomorphism between two II-one factors which, on the left-hand side (the source side, so to say), involves an arbitrary free action of a weakly rigid group (you can just think of a property (T) group, but the action is arbitrary), and on the right-hand side a Bernoulli action of an arbitrary group. So on the target action there is a condition on the way the group acts, and on the source action a condition on the group; otherwise everything is arbitrary. I will also ask ICC; that is a technical assumption, not a big deal.
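The Haagerup deformation just mentioned can be sketched as follows (with the absolute value denoting the word length in the free group):

\[
\varphi_t(g) \;=\; e^{-t\,|g|}, \qquad t>0 ,
\]

each phi-t being positive definite on the free group and vanishing at infinity, and phi-t tending to the constant function one pointwise as t goes to zero.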
And you allow amplifications. Well, then any isomorphism between the corresponding II-one factors of these group actions, with amplification t, automatically forces the amplification t to be equal to one, and the isomorphism actually comes from a complete identification of the two actions, from a conjugacy. In particular, an orbit equivalence, which of course gives you an isomorphism of the von Neumann algebras, comes from a conjugacy; so even at the orbit equivalence level this is completely new. Of course this gives you lots of nice consequences. First of all, the fundamental groups of Bernoulli actions of such weakly rigid groups all turn out trivial; the outer automorphism groups are calculable; and you even have, just for fun, a wreath product version of Connes' rigidity conjecture in its strongest form: take such weakly rigid ICC groups, take the wreath product with a non-trivial discrete abelian group, and the associated II-one factors of the resulting groups G-i are isomorphic if and only if the groups are isomorphic. So it is not quite the functor one wanted; there is one additional functor, the wreath product, but the resulting map is one-to-one. The proof, in fact, works not only for Bernoulli actions but for any malleable action. What I call malleable is this condition involving the flip: you double the action, and an action, say lambda on Y, is malleable if the flip on Y times Y, which obviously commutes with the diagonal double action, lies in the connected component of the identity in the centralizer of this double action. It is an interesting condition. Let me skip ahead and just use the last two or three minutes. In fact, here is what I will do: let me put up this page, which tells you a lot about further results that can be obtained with very specific computations, using exactly the same methods, very concrete calculations of outer automorphism groups, and lots of problems that can be solved using either the result itself or little improvements of it for special groups. For instance, you may want to construct a II-one factor with no outer automorphisms, like the n-by-n matrices, which have no outer automorphisms; the AFD factor R, on the other hand, has a huge outer automorphism group, as you can see immediately. Well, once you get to this rigidity phenomenology you can control that, and obtain situations where there are no outer automorphisms, which is a very interesting phenomenon for II-one factors. Of course, for such a statement you want your group to have no outer automorphisms and no non-trivial characters, and that is a bit of a stretch, but it can be done, and it is explained there. And for the last two or three minutes, if I may, I will explain a result which is purely measure-theoretical, although it is obtained with II-one factor techniques. It is an orbit equivalence superrigidity result in the style of those we have seen, obtained for instance by Furman and by Monod-Shalom, but for a completely different class of group actions. Namely, I will assume that gamma is an arbitrary weakly rigid group, with no ICC assumption, and that it acts by Bernoulli shifts.
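Equivalently, and this is only a rough reformulation, malleability of lambda acting on (Y, nu) asks for a continuous path of measure preserving transformations joining the identity to the flip inside the centralizer of the double action:

\[
(\alpha_s)_{s\in[0,1]} \subset \mathrm{Aut}(Y\times Y,\ \nu\times\nu), \qquad \alpha_s\,(\sigma_g\times\sigma_g) \;=\; (\sigma_g\times\sigma_g)\,\alpha_s \ \ \forall g\in\Lambda, \qquad \alpha_0=\mathrm{id}, \quad \alpha_1(y_1,y_2)=(y_2,y_1).
\]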
So in fact Bernoulli is not necessary; malleable is sufficient. Well, then gamma on X is OE superrigid, meaning that any orbit equivalence between this group action and any other free ergodic group action comes from a conjugacy; this is what we mean by OE superrigid. Even more, and this is a new type of statement: take any embedding of the equivalence relation given by gamma on X into the equivalence relation given by an arbitrary free ergodic lambda action, with lambda an arbitrary group. Then the orbits really capture everything: there exists a subgroup of lambda on the target side, say lambda-zero, such that the restriction of the target action to lambda-zero is conjugate to your gamma action. This is obtained with the same methods, but with the philosophy, so to say, of Zimmer, which Furman also used, of course, in his results, and in which you translate as follows: if you want to prove an orbit equivalence rigidity or superrigidity result, or any kind of result about orbit equivalence, you note that any orbit equivalence gives you a cocycle for your gamma on X action with values in the target group of your orbit equivalence, the discrete group lambda. There is a general principle that if you can untwist this cocycle, then the untwisting can be interpreted as an inner perturbation: perturbing your orbit equivalence by it, the groups fit one onto the other, giving you the conjugacy of the actions. So, with these methods from II-one factor theory, a cocycle superrigidity result is what was actually behind the theorem above, and this is the statement: assume gamma is weakly rigid and gamma on X is Bernoulli, or malleable in general; then, given any discrete group lambda (here is the definition, the general terminology), any lambda-valued cocycle for gamma on X can be untwisted to a group morphism. I will stop here; I am sorry, I went fast.
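For reference, the cocycle identity and the untwisting alluded to read roughly as follows (notation chosen here for concreteness):

\[
w : \Gamma\times X \to \Lambda, \qquad w(gh,\,x) \;=\; w(g,\,h\cdot x)\,w(h,\,x), \qquad \text{untwisting:}\ \ w(g,x) \;=\; \phi(g\cdot x)\,\delta(g)\,\phi(x)^{-1}
\]

for some measurable map phi from X to Lambda and some group morphism delta from Gamma to Lambda.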
We present some recent rigidity results for von Neumann algebras (II1 factors) and equivalence relations arising from measure preserving actions of groups on probability spaces which satisfy a combination of deformation and rigidity properties. This includes strong rigidity results for factors with calculation of their fundamental group and cocycle superrigidity for actions with applications to orbit equivalence ergodic theory.
10.5446/15969 (DOI)
Good morning. I would like to welcome you to the first plenary presentation of today. The first speaker is Arkadi Nemirovski. He asked me to be very brief, so I will follow his request. Arkadi was born in Moscow, educated in Moscow, and got his PhD in Moscow, so he is from the Russian school, so to say. He moved to Israel in the 90s and a year ago went to the Georgia Institute of Technology in Atlanta. He is one of the prime movers in convex programming. He began very early on to think about optimal algorithms in convex programming and has made fundamental contributions to the ellipsoid method and to interior point methods. He will tell you about this today in his presentation, which will center around his main subject, advances in convex optimization, and in particular conic programming. Arkadi, please.

Thank you very much, Martin. Good morning to everybody. It is a great pleasure and a great honor to be here, and I am extremely grateful to those who selected me for this purpose. So the title of the talk is Advances in Convex Optimization: Conic Programming, and the topics to be covered are: What is convex programming, and why are we interested in it (the short answer being: because it is the solvable case in optimization); how to reveal the structure of a convex program in order to utilize it in efficient and powerful solution algorithms, which is what conic programming is about; what those efficient, structure-exploiting solution algorithms indeed are, namely polynomial time interior point methods; and finally a brief overview of two generic conic programs, conic quadratic and semidefinite programming, their expressive abilities and applications. What I am presenting is a joint effort of many, many people; to save space on the transparencies I skip most of the references, but everything can be found in the paper. Now, conic programming: what is it? It is a particular case of mathematical programming. Mathematical programming is about solving optimization problems of the form we see on the transparency: we optimize, with respect to a finite dimensional real vector x, an objective under a bunch of inequality constraints. Typically the objective and the constraints are assumed to be good enough, say smooth. If you compare mathematical programming problems with what is studied in the classical calculus of variations and in optimal control, these problems look pretty simple, and indeed the standard descriptive issues, like existence of solutions, uniqueness of solutions, and characterization of solutions by optimality conditions, are pretty easy for mathematical programming problems. But this does not help much, because mathematical programming is primarily operational: what we want at the end of the day is to approximate solutions numerically, and in this respect the fact that solutions exist and can be characterized by optimality conditions is of course important, but it does not help very much. So I would say that the primary subjects of interest in mathematical programming theory are the complexity of generic mathematical programming problems and the development of efficient solution algorithms for these problems. Now, in the late 70s it became clear that there is a solvable case in optimization: convex programming, where the objective and the constraints are convex. Convex programming is computationally tractable: under mild computability and boundedness assumptions, generic convex programming problems
admit provably efficient solution algorithms. In contrast to this, for typical non-convex generic problems we do not know efficient, provably efficient, solution algorithms, and unless P equals NP, which is thought to be highly improbable, no algorithms of this type exist. I have several times used the words generic optimization problem and efficient solution algorithm; what do these words actually mean? We define a generic convex problem as a family of instances, every instance being a mathematical programming problem with its own design dimension, its own number of constraints, its own objective and its own constraints. We require two things: first, we are speaking about convex problems, so we require all instances to be convex (once again, that means that the objective and the constraints are convex functions); and besides, we require that within this family every instance is identified by a finite dimensional real vector, the data vector. You can think of the entries of this data vector as the coefficients of certain analytical expressions, given by the description of the generic problem, which correspond to the particular instance. A simple example here is linear programming, where the objective and constraints are linear. In this situation, what is the data vector of a linear programming problem, what should you point out in order to specify the instance? The number of constraints, the number of variables, and the coefficients of the objective and the constraints, of those linear functions, listed in some once-for-ever prescribed order. Now, what is a solution algorithm for a generic problem? It is a code for a real arithmetic computer, an idealized computer which is capable of storing real numbers and of carrying out operations of precise real arithmetic with those numbers: the four arithmetic operations, comparisons, perhaps computation of elementary functions like the logarithm and the sine. A solution algorithm is a code for this computer such that when you input to the computer the data of an instance and a positive tolerance epsilon, the accuracy to which I am interested in solving the problem, the computer, after finitely many operations, terminates and outputs either an epsilon-solution to the problem, that is, a real vector at which the constraints are satisfied within accuracy epsilon and the objective is non-optimal by at most epsilon, or the correct claim that your instance is unbounded or infeasible. Now, we call a solution algorithm efficient, or polynomial time, if the number of operations before termination, the running time, is bounded by a polynomial in two quantities: the dimension of the data vector, which is called the size of the instance, and the number of accuracy digits in an epsilon-solution. What is this number of accuracy digits? It is the log of something divided by epsilon; what this something in the numerator is, you should not bother about, it comes from technical reasons, and it does not matter, because what we are interested in is what happens when epsilon is small, and for small epsilon this log becomes just the log of one divided by epsilon. So, in fact, this is what is natural to call the number of accuracy digits in an epsilon-solution: log of one divided by epsilon.
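In formulas, the definition just given can be sketched as follows (Theta of p stands for the unspecified "something" in the numerator, a quantity depending on the instance p):

\[
\mathrm{Compl}(p,\varepsilon)\ \le\ \mathrm{poly}\big(\mathrm{Size}(p),\ \mathrm{Digits}(p,\varepsilon)\big), \qquad \mathrm{Size}(p)\;=\;\dim\mathrm{Data}(p), \qquad \mathrm{Digits}(p,\varepsilon)\;=\;\ln\frac{\Theta(p)}{\varepsilon}\;\approx\;\ln\frac{1}{\varepsilon}\ \ \text{for small } \varepsilon .
\]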
Now, why is the definition as it is? There are good reasons: it is the adaptation, to the case of real arithmetic computations and problems with continuous variables, of the notion of a polynomial time algorithm from discrete mathematics. And here is one of the basic statements explaining what it means that generic convex programs admit polynomial time solution algorithms. Let us look at a generic convex problem whose instances have explicit bounds on the variables; this is important. Without loss of generality, assume that these bounds just say that the variables are bounded by one in absolute value, so the solution sits in the unit box; you also have convex general type constraints and a convex objective. And assume that the instance is normalized on this unit box: the objective and the constraints do not exceed one in absolute value there. Then there is an explicit algorithm, for example the ellipsoid method, which for every instance is capable of finding an epsilon-solution, or of detecting correctly that the instance is infeasible, at the following computational price: the price of computing (epsilon divided by the number of variables)-accurate approximations to the values and subgradients of the objective and the constraints along a number of successively generated points. This number is, as we see, polynomial in the design dimension of the problem and in the log of one divided by epsilon, and every one of these computations is accompanied by an amount of additional arithmetic operations polynomial in the number of variables and the number of constraints. The immediate corollary is that if we assume our generic problem to be polynomially computable, meaning that given the data of an instance, a positive tolerance delta, and a point in the corresponding unit box, one can compute delta-approximations to the values and subgradients of the objective and the constraints at that point in a number of operations polynomial in the size of the instance (recall, this is the dimension of the data vector) and in the number of accuracy digits corresponding to delta, then the problem admits a polynomial time solution algorithm. Now, if all we were interested in were academic statements like "convex programming is efficiently solvable", then a theorem like the one we have just seen would be more or less all we need. But if you are supposed to solve problems in reality, the universal methods, like the ellipsoid algorithm, underlying those general tractability statements in convex programming are not very good, because they do not utilize well the structure of the problem. What I mean is the following. A convex problem always has a lot of structure; otherwise, how would you know it is convex: you see how the objective and the constraints are obtained from evidently convex functions by convexity preserving operations, like taking maxima, sums, and so on. The universal algorithms like the ellipsoid method are black-box oriented: they use this precise knowledge of the problem's structure and data for the only purpose of computing the values and the derivatives of the objective and the constraints.
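For orientation, in the normalized setting just described the price paid by the ellipsoid method is, roughly (n the design dimension):

\[
N(\varepsilon) \;=\; O\!\Big(n^2\,\ln\frac{1}{\varepsilon}\Big) \ \ \text{oracle calls}, \qquad \text{plus } O(n^2) \text{ arithmetic operations per call for the ellipsoid update},
\]

which is consistent with the "at least the number of variables to the power four per accuracy digit" estimate quoted below.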
Now, this is poor utilization of your a priori knowledge, and in spite of the fact that the performance of these algorithms is polynomial, it is poor in practice, because the cost of an accuracy digit for all these universal polynomial time algorithms in convex programming is at least the number of variables to the power four. What does that mean? If you have a few hundred variables, you will never, in reasonable time, get even a single accuracy digit. So what we would like to do is to utilize the structure of a convex program in order to accelerate the solution algorithms, to adjust solution algorithms to well-structured problems. But of course one should first say what this structure is and how to reveal it. And the best known way, for the time being, to reveal the structure of a convex problem is given by what is called the conic reformulation of a convex program. So what is this conic reformulation? Let us start with the linear programming problem, where the objective and the constraints are linear. The traditional way to pass from linear programming to convex programming is to say: in a linear program the objective and the constraints are linear, so let us replace them with nonlinear convex functions. It turns out that a much more productive way is to introduce the nonlinearity in exactly this place: to change the interpretation of this inequality here. What is meant in linear programming by what we see here is the coordinatewise vector inequality, one specific vector inequality: one vector is greater than or equal to the other when the difference belongs to the non-negative orthant, that is, all entries of the difference are non-negative. But we can replace this specific vector inequality by another, more general vector inequality, saying that one vector is less than or equal to the other if the difference belongs to a given set K. For the coordinatewise vector inequality this set is the non-negative orthant; now it is not necessarily the non-negative orthant, but in order for this vector inequality to possess all the standard arithmetic and topological properties, it makes sense to require the set K to be a closed, pointed convex cone with non-empty interior, which we assume from now on. Now, what is the conic problem? You fix this cone K, and this is how the associated conic problem looks: you are interested in minimizing a linear objective under the restriction that an affine image of your decision vector is non-positive in terms of this cone, that is, belongs to the negation of the cone. What is good in this problem is that we can say that c and the rest of the coefficients are the data of the problem, and K is the structure. Now, every convex problem can be converted to an equivalent conic problem, but that by itself does not help much, because a general convex cone has no more transparent a structure than a general convex function. So what was the point of this exercise? The point is the following fact: essentially everything we are interested in in applications is covered by just three generic conic programs. The first is linear programming, where the underlying cones are direct products of non-negative rays, that is, non-negative orthants. The second is conic quadratic programming, where the underlying cones are direct products of Lorentz cones, and this is how the conic quadratic program looks in the usual notation.
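In one common notation (sign conventions for the affine constraint vary; this is only a sketch), the conic problem and the three families of cones just listed are:

\[
\min_x\ \{\,c^{T}x \;:\; Ax - b \ \ge_K\ 0\,\}, \qquad K \in \Big\{\ \mathbb{R}^m_{+}, \ \ \prod_i L^{n_i} \ \text{with}\ L^{n}=\{(y,t):\|y\|_2\le t\}, \ \ S^m_{+}\ \Big\},
\]

the non-negative orthant giving linear programming, products of Lorentz cones giving conic quadratic programming, and the positive semidefinite cone giving semidefinite programming.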
So in this case, you are interested in minimizing a linear objective under a bunch of conic quadratic inequalities, and every conic quadratic inequality says that the Euclidean norm of something which affinely depends on your decision variables should be less than or equal to a given affine form of those variables. And the last magic generic conic problem is semi-definite programming; the underlying cones are the cones of positive semi-definite matrices, which live in spaces of symmetric matrices. This is how a generic semi-definite program looks: you are interested in minimizing a linear objective under what is called a linear matrix inequality constraint. It says that a certain symmetric matrix, which affinely depends on your decision variables, should be positive semi-definite. Now, you can easily represent non-negative orthants as intersections of direct products of Lorentz cones with a linear subspace, and this is why linear programming is part of conic quadratic programming. For similar reasons, conic quadratic programming is part of semi-definite programming. Now, what is the good news about conic programming, primarily about the linear, conic quadratic and semi-definite ones? First, there is a fully symmetric and fully algorithmic duality, which can be used for processing problems on paper and is also heavily utilized by solution algorithms. There exist not only theoretically but also practically powerful solution algorithms — polynomial time interior point methods. And at least conic quadratic and semi-definite programming, as I said, have tremendous expressive abilities and, as a result, an extremely wide spectrum of applications. Okay, so in the sequel I'll just look at those three issues one by one. So let's start with conic duality. What is duality in mathematical programming? It is about bounding from below the optimal value of an optimization problem. In other words, it is about certifying a negative statement: that there does not exist a feasible solution with the value of the objective less than this and this. Now, if you take the standard Lagrange duality and apply it to conic problems, it becomes pretty nice and algorithmic. Let's see what happens. So this is your primal conic problem. Assuming that the kernel of the matrix A is trivial — this is in fact without loss of generality — you can pass in this problem from your original variables x to what is called the primal slack, a new vector variable which lives in the same space where the cone lives. And this is how the problem looks in terms of the primal slack. You can express your original objective in terms of the primal slack, and the problem becomes pretty simple geometrically: you are interested in minimizing a linear objective over the intersection of a cone and an affine plane; this affine plane is a shift of the linear subspace L, the image of the matrix A. Now, if you apply Lagrange duality to this problem, you get a problem of very, very similar structure. This is what you get. The dual problem is to maximize a linear objective, which comes from the shift vector in the primal problem, again over the intersection of an affine plane and a cone. But now the cone is the one which is dual to the original cone, and the affine plane is a shift, by the primal objective, of the orthogonal complement of this linear subspace L, the image of A. Now, this is how you can rewrite this geometric problem in the standard notation.
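For reference, here is the standard-notation primal–dual pair the slide presumably shows, written with the convention Ax − b ∈ K (flip the signs of A and b to match the ≤_K convention used earlier); K_* denotes the dual cone:
\[
(P)\;\; \min_x \; c^T x \;\;\text{s.t.}\;\; Ax - b \in K,
\qquad
(D)\;\; \max_\lambda \; b^T\lambda \;\;\text{s.t.}\;\; A^T\lambda = c,\;\; \lambda \in K_*,
\]
\[
\text{and for any feasible pair } (x,\lambda):\qquad c^T x - b^T\lambda \;=\; \lambda^T(Ax - b) \;\ge\; 0 .
\]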
Okay, we immediately see that the duality is symmetric. Look what happens: the dual problem in this geometric form also is conic. And what happens if we take its dual? We should replace the orthogonal complement of L by its orthogonal complement, which gives us L back; we should replace the dual cone by its dual, which gives us K; and we should swap the shift vector and the objective, which gives us exactly the shift vector and the objective of the primal. Now, this is the summary of the properties of conic duality — the conic duality theorem. The first statement we already know: this is the primal problem, this is the dual problem, and conic duality is symmetric — the dual is conic, and the dual of the dual is equivalent to the primal problem. We have weak duality: the optimal value of the dual is less than or equal to the optimal value of the primal. And we have strong duality under a qualification of constraints: if one of the problems in the primal-dual pair is strictly feasible — meaning that the corresponding affine plane intersects the interior of the corresponding cone — and bounded, then the other problem is solvable and the optimal values are equal. In particular, if both the primal and the dual are strictly feasible, they both are solvable, with equal optimal values. And we have a nice characterization of primal-dual optimal solutions: a primal-dual feasible pair (x, lambda) is comprised of optimal solutions to the respective problems if and only if the duality gap evaluated at this pair is zero — the primal objective at x equals the dual objective at lambda — and this is the same as saying that the complementary slackness condition holds: the inner product of the primal slack and the dual solution is equal to zero. It is very instructive to compare conic duality, which is a particular case of Lagrange duality, with Lagrange duality itself. So what is Lagrange duality? You have a convex program in the mathematical programming form. How do you build its Lagrange dual? You take the Lagrange function of this constrained problem and minimize it with respect to the primal variable x; you get a new objective, the dual objective L(lambda), which depends on the Lagrange multipliers, and the dual problem is to maximize this L(lambda) over non-negative Lagrange multipliers. And the duality statement is that if the primal is strictly feasible and bounded from below, then the dual is solvable and the optimal values are equal to each other. So what are the differences? Why is conic duality a much nicer particular case than the general one? First of all, conic duality indeed is fully symmetric: the dual problem remembers the primal one, which is not the case for Lagrange duality — if I give you the dual objective, you usually cannot say what the objective and the constraints in the primal problem were. And also conic duality is completely algorithmic: provided that we understand what the cone dual to the original cone is, passing from primal to dual in conic duality is a purely mechanical process. In contrast, the Lagrange dual objective is given implicitly, by a certain minimization, and usually you cannot operate with it explicitly. Okay, as a consequence of this algorithmic nature of conic duality, I said that it is useful not only for algorithms but also for processing problems on paper. Let me give you an example from truss design. So what is a truss?
A truss is a mechanical construction comprised of thin elastic bars linked to each other at nodes, like an electric mast or a railroad bridge; the best known example of a truss is probably the Eiffel Tower in Paris. In the truss topology design problem we are given a finite two-dimensional or three-dimensional nodal set — the set of points where tentative bars can be linked to each other; a set of tentative bars — we are told which pairs of nodes from the nodal set we are allowed to link by bars; and a set of loading scenarios, a loading scenario being a collection of physical forces somehow distributed along the nodes. What we are interested in is to find a construction of a given weight — to assign the tentative bars volumes, sizes. We have a restriction on the weight of the construction, and we want to get the construction which is most rigid with respect to the scenario loads. Now, how do we measure rigidity, stiffness? It is measured by compliance, and the compliance of a truss under a load is the potential energy stored in the truss as a result of its deformation under the load; the smaller the compliance, the better. Now, mathematically, the truss topology design problem is a semi-definite program, which you see on the transparency. So what are the variables? The t_i are the volumes of the tentative bars; the f_l are the data — they represent the loads, and every load lives in an M-dimensional linear space, where M is the total number of degrees of freedom of the nodes. Now, tau is an upper bound on the worst-case compliance — the maximum, over the loads from the set of loading scenarios, of the compliance. And those linear matrix inequalities — saying that a certain matrix, which affinely depends on t and tau, is positive semi-definite — say exactly that tau is an upper bound for the compliance of the truss, the load being f_l. You also have the natural restrictions that the bar volumes should be non-negative and that their sum is bounded by a given quantity; this is the weight constraint. Now, in truss topology design one starts with a dense nodal grid and allows for nearly all pairwise connections of nodes by bars; at the optimal solution, nearly all bars get zero volume, and the optimal solution reveals the structure — the optimal topology — of the construction. This is why it is called truss topology design. Here is a toy example. We start with a 9 by 9 nodal grid; the leftmost nodes are fixed to the wall — they cannot move — and the remaining ones can move. We have only one loading scenario, which is represented by a single force, this red force here. Then we allow for nearly all pairwise connections of nodes by bars, so we end up with 2000 plus bars, and then we solve the truss topology design problem. This is what we end up with: the optimal cantilever console uses only 12 nodes of the original 81 and only 32 bars instead of those 2000 plus. Now, in order for this problem to really design the topology, you should work with dense nodal grids, meaning that the total number of degrees of freedom M should be at least a few thousand, and you should also allow for nearly all pairwise connections of tentative nodes by tentative bars. That means that the number of bars — and this is the design dimension of the truss topology design problem, how many variables you have — will be of the order of M squared. This means that in real life it will be well in the range of millions.
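As a sketch of the formulation just described — my reconstruction of the standard single-load compliance LMI, up to the normalization used on the slides (for several loading scenarios one writes one such LMI per scenario); here A_i denotes the stiffness contribution of bar i and w the weight bound:
\[
\min_{\tau,\,t}\ \tau
\quad\text{s.t.}\quad
\begin{pmatrix} 2\tau & f^T \\ f & \sum_i t_i A_i \end{pmatrix} \succeq 0,
\qquad t_i \ge 0,\qquad \sum_i t_i \le w .
\]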
So, as a result, whether you use polynomial time algorithms or not, problems with millions of variables you cannot, from a practical viewpoint, solve reasonably. So what can we do? We can use semi-definite duality. It turns out that you can build the semi-definite dual of your original problem — this is a purely mechanical process — and in this semi-definite dual you can eliminate analytically most of the variables, which is pretty easy. This is what we end up with. It doesn't matter what exactly we see here; what counts is that the number of variables in this equivalent reformulation of the dual problem, where we eliminated most of the variables analytically, is just the number M — the total number of degrees of freedom of the nodes — times the number of loading scenarios, and the latter is usually a really small number, like one to three. So this design dimension is incomparably less than the design dimension of the primal problem, and you indeed can solve the dual in reality. Okay, so the dual is much better suited for numerical processing than the primal problem. But then we can do the following: let's take the problem dual to this problem. If this problem were the straightforward dual of our truss topology design problem, we would know in advance what we would get: we would recover the truss topology design problem as it is. But this problem is not the straightforward dual — it was obtained from the straightforward dual by analytic elimination of part of the variables — so we do not know in advance what we will get, but we can try. And this is what we get. It turns out that we get a very instructive and, I would say, unexpected equivalent reformulation of the primal problem, and it has a perfect mechanical sense. And after you are given this — after you guess that this problem is equivalent to the original one — you can easily prove it. But the fact is that, to the best of my knowledge, nobody was brave enough to guess that this problem is equivalent to that one. So this instructive reformulation of the original problem was obtained by applying duality twice. Okay, now the next story: interior point polynomial time algorithms — a very brief history. The very first interior point method for linear programming was invented by Karmarkar in 1984. At this moment it was for linear programming, and we already knew at that time that linear programming was polynomially solvable. But what was important in Karmarkar's discovery is that it was the first theoretically efficient algorithm for linear programming which was also practically efficient — competitive with the usual algorithm for linear programming, the simplex method, which in the worst case is not polynomial time, it is exponential, but is pretty efficient in practice. So this fact, that we have an algorithm which is both theoretically and practically efficient, created a lot of activity and initiated what is called the interior point revolution, which started in the mid 80s. In the 90s, as a result of this revolution, among other things, we came to possess a general theory of interior point methods in convex programming, and also a very advanced chapter of this theory for the case of interior point methods for conic problems on cones with a lot of symmetry — homogeneous self-dual cones — and this is exactly the case of linear, conic quadratic and semi-definite programming. So what are those polynomial time interior point methods?
The most efficient of them follow a pretty simple path-following scheme, which is well known in optimization. Assume that we want to minimize a linear objective over a closed convex domain G; this is one of the universal forms of a convex optimization problem. So what does the path-following scheme tell us to do? First of all, it tells us to equip G with a barrier — a smooth strongly convex function defined on the interior of the domain which possesses the barrier property: it blows up to infinity when you approach a boundary point of the domain from inside the domain, so that, evidently, its level sets are closed. Okay. Then you can form a single-parametric family of functions: you take your original barrier and add t times your objective, where t is a positive penalty parameter. Under mild assumptions — say, when G is bounded — every one of these functions F_t attains its minimum on the interior of G at a unique point. You get a path of minimizers, x_*(t), of these functions F_t, and as t goes to infinity it is easily seen that this path approaches the optimal set. So what you are doing is just tracing this path. How do you do it? Assume that we have an iterate (x, t) with x close to x_*(t) — so this is x_*(t), and x is a tight approximation to it. Now, what do we do at a step of the path-following scheme? We increase the penalty parameter: we replace t with a larger value t_+. This gives us a new target point on the path. And then we update x to a tight approximation x_+ of this new target point. How do we do it? Well, we know that this new target point of the path is the minimizer of the function F_{t_+}, which is known to us, and this minimization is essentially unconstrained, so we can approximate the minimizer by whatever method for unconstrained minimization — for example, by the Newton method. It makes sense to start this method at the point x, and then we run the method until we get a point which is close to the new point of the path. And then we loop. Now, in the late 80s it was discovered that when you apply this scheme with pretty specific barriers, it becomes polynomial time — in general it is not. Okay, so what are those good barriers, called self-concordant barriers? Here is the definition. Assume we are given a closed convex domain in R^n and a function which is three times continuously differentiable on the interior of the domain. It is called a self-concordant barrier with parameter theta — theta is a number — if, first of all, F is a barrier: it possesses the barrier property, it blows up to infinity when we approach a boundary point of the domain from inside; and besides, F satisfies two differential inequalities. The first is called the self-concordance inequality: the third directional derivative of the barrier is bounded by twice the second directional derivative to the power 3/2. And the second inequality quantifies the barrier property: it says that the first directional derivative is bounded by theta to the power one half — theta being the characteristic parameter of the barrier — times the square root of the second directional derivative. Okay, in fact those two inequalities admit a very transparent interpretation. If you have a smooth convex function, then its Hessian at every point defines a local Euclidean metric, as you see on the transparency. And inequality (a) means that the Hessian is Lipschitz continuous, with constant 2, in the metric which is given by F itself.
And the second inequality says that F itself is Lipschitz continuous, with Lipschitz constant square root of theta, with respect to this local Euclidean metric. Now what happens if you apply the path-following scheme equipped with a self-concordant barrier? So let's assume that the level sets of our objective on the feasible domain are bounded, and that F is a theta-self-concordant barrier for the domain. First of all, the central path, which we intend to trace, is well defined; that's the first statement. Second, we can define a notion of closeness to the path: we can introduce a proximity measure — it doesn't matter what exactly this measure is; what is important is that it can be computed at every point — and we say that (x, t) is close to the path if this proximity measure is less than or equal to 0.1. Now let's look at the implementation of the path-following scheme. We start with a pair (x_0, t_0) which is close to the path in this sense, and then we iterate like this: we have the current iterate (x_{i-1}, t_{i-1}); first we update the penalty parameter — we increase it by a once for ever fixed factor, which depends only on theta, the parameter of self-concordance of the underlying barrier — and we accompany this step in the penalty by a single Newton step in the x variable. And the statement is that all the (x_i, t_i) are well defined, the x_i are strictly feasible solutions to the problem, and along this sequence of strictly feasible solutions the accuracy, in terms of the objective, improves linearly, as you see on the transparency, so that every O(square root of theta) steps of the scheme add a new accuracy digit. Now, the conclusion: if we are smart enough to equip the domain of interest with a computable self-concordant barrier with a reasonable value of the parameter, we immediately get a polynomial time algorithm for minimizing a linear objective over this domain. So what about the existence of those barriers? The first statement — and it is not a simple one — is that a self-concordant barrier with parameter of order n exists for every n-dimensional closed convex domain G. And this is how this barrier looks when G is a pointed cone: it is the logarithm of what is called the characteristic function of the dual cone. This general result doesn't help much, though, because we are not interested in barriers that merely exist; we need a barrier which we can operate with, for which we can compute values and derivatives efficiently — barriers which are good in this sense. And it turns out that we indeed can point out good, efficiently computable self-concordant barriers for a wide variety of, let's say, standard convex domains. And moreover, we have a kind of calculus of those barriers: all convexity preserving operations with sets, like taking intersections and inverse affine images, can be equipped with this calculus — if you apply them to domains given by good self-concordant barriers, you can easily combine the self-concordant barriers of the operands into a self-concordant barrier for the result. So, as a result, essentially the entire field of convex programming is within the grasp of interior point algorithms. Now, this interior point science attains its maximal depth and flexibility when we apply it to conic problems on cones with a lot of symmetries — most notably to the cones which are homogeneous and self-dual. This is essentially linear, conic quadratic and semi-definite programming, and this theory is intrinsically linked to the theory of Euclidean Jordan algebras.
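To record in symbols what was just described (my transcription of the standard formulas, presumably what the slides show): a theta-self-concordant barrier F on the interior of G satisfies, for all x in the interior and all directions h,
\[
\big|D^3F(x)[h,h,h]\big| \;\le\; 2\,\big(D^2F(x)[h,h]\big)^{3/2},
\qquad
\big|DF(x)[h]\big| \;\le\; \sqrt{\theta}\,\big(D^2F(x)[h,h]\big)^{1/2};
\]
and the canonical efficiently computable barriers for the three generic cones, which come up next, are
\[
F(x) = -\sum_{i=1}^n \ln x_i \;(\theta = n) \text{ for the non-negative orthant},\quad
F(y,t) = -\ln\!\big(t^2 - \|y\|_2^2\big) \;(\theta = 2) \text{ for the Lorentz cone},\quad
F(X) = -\ln\det X \;(\theta = n) \text{ for the cone of } n\times n \text{ positive semi-definite matrices}.
\]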
Now, here are the self-concordant barriers which we use in linear, conic quadratic and semi-definite programming. And usually in these situations one solves the primal and the dual problem simultaneously — the two processes help each other — and the resulting primal-dual interior point methods, first, are responsible for the best complexity bounds known so far for linear, conic quadratic and semi-definite programming, and they have a number of, let's say, additional bonuses, which let me skip to save time. Now, the good news from the practical viewpoint is that the practical performance of interior point algorithms for linear, conic quadratic and semi-definite programming is typically essentially better than what is predicted by the — anyway not very disastrous — theoretical worst-case complexity analysis. Okay, I do not want to give the impression that, as far as algorithms for linear, conic quadratic and semi-definite programming are concerned, everything is fine and we have no troubles. We do have troubles, we do have challenges, and the challenge is what to do in the extremely large-scale case of conic quadratic and semi-definite programming — extremely large scale meaning from tens of thousands to millions of variables. At this size of problem, interior point methods — fine, they are polynomial — become too time-consuming to be practical. You know, it's exactly the same as when you are solving a linear system of equations: you know that it can be done efficiently, but when your linear system has millions of variables, then you might be in trouble. So what we can do at present — the best we can do — is to solve those extremely large-scale conic quadratic and semi-definite programs by computationally cheap algorithms which possess a dimension-independent rate of convergence. But unfortunately this rate of convergence is sublinear: it is polynomial in one divided by epsilon instead of in the log of one divided by epsilon. So the best we can hope for at present is to solve those extremely large-scale problems to medium accuracy. And the question is whether this is how things should be, or whether we simply do not know the really good algorithms of polynomial type. Okay, now, expressive abilities — the last story: the applications of conic quadratic and semi-definite programs. Well, first of all, how could you recognize that a given problem can be converted to a semi-definite program, or something like this? Okay, here is a simple and important fact. Consider a family F of closed convex cones which is closed with respect to taking finite direct products and passing from a cone to its dual — as is the case with the families of cones which underlie linear, conic quadratic and semi-definite programs. Then there exists a well-defined, simple notion of an F-representation of a convex function or a convex set. The definition is indeed simple, but let me skip what exactly it is. What are the properties of these F-representable entities? First of all, if you have a convex program in the standard mathematical programming form — we also have here a convex inclusion on top of the functional constraints — and you know F-representations (say, semi-definite representations) of all the functions involved, f and the g's, and also of this set X, then you can straightforwardly convert your problem into a conic program on a cone from the family F.
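As an illustration of this "straightforward conversion" in modern software terms — this is my example, not part of the talk; it uses the CVXPY modeling package, which performs exactly this kind of reduction of a structured convex program to a conic program behind the scenes, and the data A, b are random placeholders:

```python
import cvxpy as cp
import numpy as np

# A convex program stated in "mathematical programming" form:
#   minimize   ||A x - b||_2 + ||x||_1
#   subject to x in the unit box.
# The modeling layer recognizes the conic-representable pieces and hands the
# solver an equivalent conic quadratic (SOCP) problem.
np.random.seed(0)
A, b = np.random.randn(20, 5), np.random.randn(20)
x = cp.Variable(5)
objective = cp.Minimize(cp.norm(A @ x - b, 2) + cp.norm(x, 1))
constraints = [x <= 1, x >= -1]
problem = cp.Problem(objective, constraints)
problem.solve()
print(problem.value, x.value)
```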
Now, those F-representations admit a simple calculus, which shows that all basic convexity preserving operations with functions and sets, when applied to F-representable operands, yield an F-representable result. These operations include, for sets: taking intersections, direct products, arithmetic sums, convex hulls of finite unions, inverse images under affine mappings, projections, and also passing to polars. Okay. Now, the calculus of these conic representations is completely universal — independent of what the specific family of cones is, given that it is closed with respect to taking direct products and passing from a cone to its dual — and it is algorithmic: the representation of the result of an operation is readily given by the representations of the operands. Now, this calculus, plus our knowledge of the elementary, say, conic quadratic or semi-definite representable functions and sets, allows us to recognize that the problem of interest can be converted, say, to a semi-definite program. Let me give you an example. Let's look at this messy problem: we have a linear objective, linear equality and inequality constraints, those unpleasant algebraic inequalities, a kind of linear matrix inequality, a nonlinear matrix inequality, and a very unpleasant thing — a semi-infinite inequality: we want a certain trigonometric polynomial, with the variables in the role of coefficients, to be non-negative everywhere on a segment, which means an infinite family of linear inequalities. Okay, and the fact is that this problem can be converted, in a systematic way, into a semi-definite programming problem with something like 98 decision variables. Now, if you remove those red constraints, then the remaining problem turns out to be conic quadratic, with something like 48 variables, and moreover it can be reduced in a polynomial time fashion just to a linear program. So now about the expressive abilities of conic quadratic programming. This is the generic conic quadratic program. In light of what I said about the calculus, what we should understand is what the basic elementary conic quadratic representable functions and sets are, and here is a sample of examples. The p-norm with rational p is conic quadratic representable; as a result, p-norm approximation reduces to conic quadratic programming. Of course, convex quadratic forms are conic quadratic representable, so that quadratically constrained convex quadratic programming is part of conic quadratic programming. The power monomials with rational exponents which you see on the transparency are also conic quadratic representable; as a result, geometric programming in algebraic form is conic quadratic representable. Now, this specific fractional quadratic function turns out to be conic quadratic representable; as a result, the truss topology design problem, which I presented to you as a semi-definite program, can in fact be simplified to a conic quadratic problem. So the bottom line is that conic quadratic programming possesses pretty wide expressive abilities, much wider than you could guess just by looking at how simple the Lorentz cone is.
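A tiny concrete instance of such a representation, to fix ideas (a standard fact, not tied to any particular slide): the epigraph of the square, t ≥ x², is conic quadratic representable, since
\[
t \ge x^2 \;\Longleftrightarrow\; \left\| \begin{pmatrix} 2x \\ t-1 \end{pmatrix} \right\|_2 \le t+1 ,
\]
and the calculus described above then lets one build representations of much more complicated functions out of such elementary bricks.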
But the surprising fact is that all those expressive abilities of conic quadratic programming are essentially the expressive abilities of linear programming, because conic quadratic programming can be reduced, in a polynomial time fashion, to linear programming, and this is the underlying geometric fact: the Lorentz cone possesses a fast polyhedral approximation. So what does the statement say? Let's fix an accuracy epsilon and the dimension n of the Lorentz cone. Then one can point out explicitly a polyhedral cone which lives in a space of dimension like 2n log(1/epsilon), and this polyhedral cone is given by an explicit system of like 5n log(1/epsilon) linear inequalities. Along with this polyhedral cone, you can point out an explicit linear mapping from the space where this cone lives to the space where the n-dimensional Lorentz cone lives, such that if you take the image of this polyhedral cone under this mapping, what you get is in between the Lorentz cone and the (1+epsilon)-extension of the Lorentz cone. It is important here that the number of additional variables and the number of inequalities describing this polyhedral cone are proportional just to log(1/epsilon), not to 1/epsilon to some power. And, as I said, the corollary is that conic quadratic programming can be reduced in a polynomial time fashion to linear programming. It doesn't mean that you should do it in reality, but nevertheless it's pleasant that such an option exists. Now, what about semi-definite programming? Semi-definite programming has tremendous expressive abilities, and I'll give only a few examples. First of all, everything which is conic quadratic representable is also semi-definite representable. Next, with semi-definite programming we get access to functions of eigenvalues of symmetric matrices and of singular values of rectangular matrices. Here is a typical theorem. Assume that you have a function on R^n which is symmetric with respect to permutations of the variables and is semi-definite representable. Now let us plug into this function, instead of its argument, the vector of eigenvalues of a symmetric matrix X. It turns out that what we get is a function of a matrix, and it also turns out to be semi-definite representable, with a semi-definite representation readily given by the one of f. Another example: let's take the cone of univariate algebraic or trigonometric polynomials of a given degree which are non-negative on a given segment. These are finite dimensional cones, and it turns out that every one of them is the image of a semi-definite cone under an explicit affine mapping. You also have an extension of this fact. And, for example, minimization of a univariate algebraic or trigonometric polynomial over a given segment is a semi-definite problem. Now, due to its tremendous expressive abilities, semi-definite programming has a lot of applications; let me just name some: relaxations of difficult combinatorial problems, ellipsoidal approximations of convex sets, statistics, robust control, structural design, communications, signal processing. In fact, every month we get new semi-definite models for applications. I have time to focus on exactly one application, and this is semi-definite relaxation of difficult problems. So let's start with a problem with a quadratic objective and quadratic inequality constraints. We do not assume convexity here, and this generic problem can be pretty, pretty difficult.
Well, for example, because you can model Boolean restrictions on the variables just by quadratic constraints: to say that x_j should be either 0 or 1, you say that x_j satisfies the quadratic equation x_j squared equals x_j. So this family of problems contains tremendously difficult combinatorial, NP-hard problems. Now, what is the semi-definite relaxation of such a problem? What can we do? Let's pass from our original vector variable x to the matrix variable X which you see on the transparency. In terms of this matrix variable, the original objective and the original constraints become just linear. Now, what can we say about this X? It should be obtainable from a certain vector x — what does this mean? First of all, X should be positive semi-definite, this is clear; the north-western entry of X should be 1; and the rank of X should be equal to 1. Of these three restrictions, the first two are pretty nice and convex; the only troublemaking restriction is the rank constraint. So what we do when we pass from the problem to its semi-definite relaxation is just to eliminate the rank constraint. In this way we extend the feasible set of our problem, and since I'm speaking about a maximization problem, it means that we increase the optimal value. So we get a semi-definite program, which you see on the transparency, which we can solve, and its optimal value is an upper bound on the true optimal value. And those upper bounds can be used in many, many different situations — for example, in branch and bound. Now, there is a simple, useful interpretation of what this relaxation is: essentially, we pass from usual solutions of the original problem to random solutions, which satisfy the constraints only on average, and among those random solutions which on average satisfy the constraints we are looking for the one which maximizes the average value of the objective. It turns out that this is a very instructive interpretation of the relaxation, and in good situations this relaxation produces provably tight bounds. Let's look at an example. Let's look at this simply-looking problem: to maximize a quadratic form over the unit box. In general it is NP-hard; it is NP-hard even if the matrix A is positive semi-definite, and moreover it is NP-hard not only to compute the optimal value exactly but also to high accuracy: already a 4%-tight approximation of the optimal value is known to be NP-hard. Okay, now this is what the semi-definite relaxation of this problem is, and there are situations where I can say that this relaxation is not that bad. First of all, what happens if A is diagonally dominant with non-positive off-diagonal entries? This is an important case — this is what is responsible for the max-cut NP-complete combinatorial problem, maximal cut in a graph. It turns out that here the semi-definite relaxation is tight within something like a 14% margin. Now, if A is just positive semi-definite, nothing else, then this relaxation is tight up to a factor of pi over 2. And this is an important statement; for example, it yields tight approximations of matrix norms. What is the story about? Let us take, say, a square matrix and let's think of it as a mapping from R^n equipped with the p-norm into R^n equipped with the r-norm, where p and r are given parameters. Now let's ask what the induced norm of this matrix A is — so this is the definition. Now, when p equals r equals 2, this norm is easy to compute.
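(Before continuing with the matrix-norm story, let me record the lifting just described in symbols, specialized to the box problem — a standard transcription, with the rank-one requirement being the constraint that gets dropped:)
\[
\max_{x}\ x^TAx \ \ \text{s.t.}\ \ x_i^2 \le 1
\quad\rightsquigarrow\quad
\max_{X}\ \mathrm{Tr}(AX) \ \ \text{s.t.}\ \ X \succeq 0,\ \ X_{ii} \le 1\ (i=1,\dots,n),
\]
where the dropped constraint is that X be of the form xx^T.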
On the other hand, when p is greater than r, you can prove that computing this norm is NP-hard. Nevertheless, for the problem where p is greater than or equal to 2 and r is less than or equal to 2, it turns out that computing this norm admits a tight semi-definite relaxation, so that we can efficiently compute an upper bound on the matrix norm which is tight within the absolute constant factor which you see on the transparency. Now, this resembles — and actually can be linked to — the famous Grothendieck inequality of 1953, which deals with the particular case of this problem where p is infinity and r is 1. There the constant can be improved to what you see on the transparency, and I believe that, in hindsight, the construction used by Grothendieck is the very first example of semi-definite relaxation known to me. Now, in those two examples we were speaking about the situation when the matrix A is positive semi-definite. What happens if the matrix is arbitrary — it could be indefinite? It turns out that the semi-definite relaxation is still not that disastrous: it is tight up to a factor of log n, where n is the dimension of the box. So the factor now deteriorates — it's not an absolute constant, it deteriorates as the size of the box grows — but pretty slowly. And there's a kind of compensation: you can forget about the box, you can replace the box by an intersection of n concentric ellipsoids and forget about the dimension — the dimension of the space where those ellipsoids live can be whatever. Okay, now I believe I have two minutes to present the last challenge, and this is the last slide — my modern challenge: what should we do with semi-definite and conic quadratic programs with uncertain data? Typically, the data of optimization problems are not known exactly at the time when the problem is solved, and small perturbations of the data can make the solution which you computed for the nominal data heavily infeasible. So in many situations we are interested in immunizing the would-be solution against the uncertainty. When we do this for conic quadratic and semi-definite programming, then, depending on what we assume about the data uncertainty, we end up either with a semi-infinite conic constraint of the form: you have a conic inequality, but it should be satisfied for all realizations of the data from a given set — this is when we assume that the perturbations of the data are, as it is called, uncertain but bounded — or, if we assume that the data perturbations are of a stochastic nature, we end up with chance conic constraints: now we have a randomly perturbed conic inequality, and we want the inequality to be satisfied with probability at least one minus a given epsilon. Now, when K is either a Lorentz or a semi-definite cone, these two problems typically are computationally intractable. So the challenge is to find tight, in a sense, tractable approximations of these problems, and also to recognize those rare cases when these problems are computationally tractable. There are some nice results in this direction, but this is, I would say, in general an open question — just as the previous question I posed, what to do with extremely large-scale convex optimization. So, thank you for your attention. This is all.
During the last two decades, major developments in convex optimization were focusing on conic programming, primarily, on linear, conic quadratic and semidefinite optimization. Conic programming allows to reveal rich structure which usually is possessed by a convex program and to exploit this structure in order to process the program efficiently. We overview the major components of the resulting theory (conic duality and primal-dual interior point polynomial time algorithms), outline the extremely rich “expressive abilities” of conic quadratic and semidefinite programming and discuss a number of instructive applications.
10.5446/15967 (DOI)
Good afternoon. It is a really great pleasure for me to introduce Wendelin Werner for the final Fields Medal Lecture. Wendelin received his PhD in 1993 at the University Paris VI under the direction of Jean-François Le Gall. Since 1997, he has been Professor at the University Paris-Sud in Orsay. From 2001 to 2006, he was a member of the Institut Universitaire de France, and he currently also holds a position at the École Normale Supérieure in Paris. His prizes include the Rollo Davidson Prize, the European Mathematical Society Prize, the Fermat Prize, the Jacques Herbrand Prize, the Loève Prize and the Pólya Prize. On a lighter note, in trivia, he is also one of the few people I know who has finite, identical Erdős numbers and Kevin Bacon numbers — something for people to look up later. Three in both cases. From a personal perspective, I just want to say this: it was about ten years ago that I was first giving a lecture about intersection exponents — the problem of Brownian intersection exponents, one case of which was equivalent to the Mandelbrot conjecture about Brownian motion. Wendelin was in the audience, and a couple of weeks later I received an email with a very nice idea of how to approach this problem. From the start of that email, I started a collaboration that has lasted a long time, and I've seen many, many of his nice ideas. I'd only like to mention a couple of them today. One of them, a couple of years later, was — as I think was mentioned in this morning's talk by Oded — that he was talking with Oded Schramm, and he realized very early that Oded's beautiful SLE might be the key to understanding the Brownian intersection exponents. At that point, our two-person collaboration on this side became a three-person collaboration, and suddenly I had two outstanding collaborators giving nice ideas back and forth. Secondly, a more general point: although a lot of the work was driven by these particular problems, it's Wendelin who, many times, is the one who actually sat back and said that not only are we solving the problems we're asking, but in fact these structures — both SLE and Brownian motion — are really perhaps the keys to putting conformal field theory on a rigorous basis, and now that's an active area being pursued by many people. With that, I'm happy to introduce Wendelin for his talk. Thank you very much. As you might imagine, this week has been a very special week for me, and an enjoyable one. I hope I'm not too tired to deliver a reasonable talk today. Since it's the first time I get a chance to speak, I have, of course, many people to thank, from personal reasons to my PhD supervisor, but I'll do that on a personal level — that would be too long to do now. But of course I'm very happy and glad that I'm speaking after Oded's plenary lecture this morning and that Greg was able to be here, and to chair this session, because a lot of what I'm going to say — and a lot of, maybe, part of the reason that I'm standing here in front of you — is that I met these two nice people. So the topic of my talk is random planar loops and conformal restriction, and, okay, I added "a survey", because of course this is supposed to be more of an introductory lecture that gives some flavor of some of the ideas and some of the results that we obtained during the last years.
And this talk is, of course, not unrelated — rather closely related — to Oded's talk this morning, and also to Smirnov's talk yesterday, but I'm not going to assume that you all went to these two nice lectures. I'll try to make it self-contained today, maybe with some partial repetition, at some points and on some issues, of mostly what Oded said this morning. But I'll try to take another perspective, which is more to look at the description and understanding of continuous random structures in the plane, using the discrete models as a guideline but really focusing on properties of the continuous objects. So I think it's fair to start with some background about motivation and history in the physics community, because one of the main motivations for this subject of research comes from physics, and physicists have given a lot of input to these subjects before we actually started to look at them. And actually I'd like to take the opportunity to thank them — Michael Aizenman, Bertrand Duplantier and other physicists — who also made the effort to come to math departments and actually say: here we have a nice problem for you, a problem that we sort of know how to solve, but which is a nice problem for you, mathematicians, because the way we approach it is probably not the way one should do it mathematically. And then, when we came up with some ideas, they also accepted these and didn't just say, well, we knew it before; they acknowledged the fact that this was really new input. So I'd like to stress this. So I'm going to make a couple of very general statements to start with, with the danger that they are too general, so that you don't see what I mean. In general, when you learn about physics, you learn that the laws of physics are such that when you repeat the same experiment twice, you get twice the same result; that's the result of the deterministic laws of physics. And it has been observed long ago, actually experimentally, that on a macroscopic scale, when you are exactly at a point at which there is what physicists call a phase transition — that means a point where, say, for instance, you can imagine the temperature at which a liquid becomes vapor, or where there's an even competition between two possible states of the system — when you're exactly at that point, this deterministic law doesn't hold anymore, because you see some features that become random on the macroscopic scale. So you repeat the same experiment twice, and you will get different answers. And part of the story is to understand some of these macroscopic random features of complex systems that you see on the macroscopic scale. A closely related question is: how do you describe what a phase transition point is? Usually it means that some of the deterministic physical quantities that you observe on the macroscopic scale, when you are away from the critical point, go to zero or go to infinity as you approach it. And it has been observed that these quantities often obey a certain power law behavior when you approach the critical point, so that some deterministic quantity basically behaves like t minus t_c to some power gamma. This exponent gamma is called the critical exponent for this model, or for this quantity. So here you see you have a description of the random object at the critical point, and then, below, the deterministic behavior of deterministic quantities near the critical point.
And of course — well, not of course, but it has been observed, or argued, that these two phenomena are very closely related to each other. So theoretical physicists have come up with a number of inventive and clever ideas in order to describe these problems and these questions. The first of these has been developed by — okay, not to intimidate you, I just put down some names of physicists who have contributed to these issues, and of course I am omitting a lot of names here. The first idea is that of the renormalization group. And that — of course, this is a very simplified explanation — explains roughly, or gives you a convincing heuristic for, the fact that different models, or different gases, or different experiments will give rise to the same exponents, or to the same random behavior, at the critical point. The idea is basically to say that this random macroscopic behavior is to be interpreted as a fixed point of some renormalization map: you divide a large system into smaller boxes, in each box this random system is created, and you put them together to create the large system. And what you argue is that at the critical point the system is a fixed point of this transformation, because it becomes scale invariant. So here are three items that are specific to two-dimensional systems. Conformal field theory, and also what are called Coulomb gas techniques, or quantum gravity, are ideas based on analogies, on explicit computations, on many different facts, that provided mathematical tools which enabled these physicists to predict the values of these critical exponents for many different systems in the case where the dimension is two — so we are looking at planar systems. And the exponents of the models are classified according to what physicists call the central charge of the model; that means each model or system you look at has a specific central charge. This term central charge refers to the fact that some of the mathematical tools that are hidden behind the conformal field theory scene have to do with the representation theory of some infinite-dimensional algebras. And the third item is the explanation of the precise relation that exists between this random behavior at the critical point and the deterministic behavior near the critical point. And I think it's fair to say that, apart from the last item, which was treated in the mid 80s by Harry Kesten — that's one of the many things that Harry Kesten did on all these questions: he treated the case of percolation in two dimensions, where he made sense of the scaling relations and explained that if you understand the random behavior of the system at the critical point, you also understand the behavior of the system near the critical point — apart from that, even though conformal field theory was related to very deep and rigorous mathematics, the relation between these deep and rigorous mathematics and the actual questions you look at was a question mark. Okay. So I'm going to repeat very quickly, so that you have one model in mind. I chose to repeat a model that you may have seen once or twice at this ICM before, but, well, I guess it's the simplest model to explain, so that you have in mind something specific and it's not too general.
So the specific model is called two-dimensional percolation, and the idea is the following. You have a honeycomb lattice — you tile the plane with hexagons — and you toss a coin: each hexagon is going to be black with probability p and white with probability 1 minus p, and the states of the hexagons are independent of each other. And the question you're looking at — of course, here you're just doing coin tossing, so you have to explain what question you are looking at — is the connectivity properties of the picture you obtain when you do the simulation; you'll see one in a moment. And p plays the role of the temperature, if you want; that's the parameter you're allowed to play with. It happens that for these connectivity properties the phase transition occurs at p_c equal to one half. So the idea is: when p is larger than one half, you have a majority of black hexagons and you get one infinite black connected component, called the infinite cluster, and it has a positive intensity — or positive density, which is actually the right word — called theta of p, a deterministic function of p for p larger than one half. And when p is smaller than or equal to one half, you have no infinite black connected component, only small islands. And at p equal to one half, which is the critical point, you see what you would expect to be the random features: you see clusters at every scale, and the shapes of these islands appear to be random. So that's critical percolation. You should just think of it as a television screen showing random black and white pixels, and you try to detect on this screen the connectivity properties. And you realize that, well, it's not a trivial question, because our eyes are not well trained to detect whether there is a left-to-right white crossing in this picture. Actually, this is a symptom of a deeper fact: the way the randomness organizes itself in order to create the macroscopic event of a left-to-right crossing is very subtle. And this is what a percolation cluster looks like in the previous picture. So that's one island — and I surrounded it, I mean, the outer boundaries here are red, but forget about the boundaries for the moment. So this is one island, one cluster, in the previous picture, and it shows you that when you look at your television screen, you will see clusters, or islands, of size comparable to the screen you look at, but you will not see infinite clusters. Okay. So here are predictions by physicists that are now mathematical theorems, and I'm not going to explain the route to these predictions, to these theorems, now, because I would repeat a lot of what Oded said this morning. So here is the first prediction — I think this is due to Nienhuis and den Nijs, I think these are the first two who predicted these five over 36 and five over 48 numbers, maybe I'm mixing things up. The first one has to do with the behavior near the critical point: the intensity decays like p minus one half to some power when p approaches one half. And the second one has to do with a description of the random behavior at one half: it tells you that the probability that there is an open path from a given site up to distance R decays in a certain way.
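For the record, the two statements just described read (these are the now-proved forms of the predictions; the exponents are the 5/36 and 5/48 mentioned above):
\[
\theta(p) = (p - 1/2)^{5/36 + o(1)} \quad \text{as } p \downarrow 1/2,
\qquad
\mathbf{P}_{1/2}\big[\,0 \leftrightarrow \partial B(0,R)\,\big] = R^{-5/48 + o(1)} \quad \text{as } R \to \infty .
\]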
And so these are typical examples of the predictions that physicists made and that are maybe now accessible. Another model to keep in mind is the so-called Ising model. Percolation has some very specific features because of the fact that what happens here and there on your television screen is independent — and we'll come back to these very specific features a bit later. And there's a model called the Ising model, which is basically a model similar to the previous one, where you're going to bias the probability of each configuration depending on the number of disagreeing neighbors: you look at the configuration, you count how many pairs of points there are which are neighbors and such that one is black and the other one is white, and you penalize the probability of a given configuration according to this number of disagreeing neighbors. So the Ising model, compared to the disordered system of percolation, prefers neighbors that are of the same opinion, or the same color. And this induces a long-range correlation between what happens at various points. Okay. So one of the novelties of this new mathematical approach, which started in the very late nineties, is that, instead of looking at the correlation functions — and here I just give you an example of what you might call a correlation function in the case of percolation: instead of looking at the behavior, as the mesh of the lattice goes to zero, of the probability that far-away points are of a given color — you're going to try to describe the entire random object, the actual random picture you see, instead of some sort of things that look like finite-dimensional marginals, if you want. Of course this is a big simplification. If you want to oversimplify a little bit more, you might say that in complex analysis in general you always have this magic trick: on the one hand you can write an analytic function as a power series — and this has a very local feature, a sum of a_n z to the n, and this looks like an algebraic thing; when you compose analytic maps you start playing with algebraic structures — and you have this magic trick that this corresponds in fact to an actual map from a portion of the plane to some other portion of the plane, and that has some geometric insight. So we are going for the second approach, if you want, here. And now I'll try to be very quick on the second item, because that was the topic of Oded's talk this morning. So one of the main ideas is to use, or to assume, the fact that these critical systems in two dimensions behave on large scale in a conformally invariant way. So what would that mean? Well, you should imagine the following. You take a system — so either you look at the critical lattice-based system on a very, very fine mesh, or you just go directly for the actual physical system in the continuum — and you look at the system in two different domains, D1 and D2, which are conformally equivalent, which means that there is a one-to-one map that preserves angles from the domain D1 in the plane to the domain D2.
So the way to describe these random systems is to say that in the system in D1 you have a collection of clusters, say, that you call C1 indexed by k, and in D2 you have another collection of clusters that you call C2 indexed by j here, and it is these random systems that you want to describe. And you're going to say that these are conformally invariant if, basically, when you take your system in the first domain and map what you see in the first domain onto the second domain by the conformal map, you get a random system in the second domain that has exactly the same law as the system in the second domain itself. So in this morning's lecture the status was described of what is proved and what is not proved about the fact that the scaling limits of discrete lattice-based models are indeed conformally invariant or not, and if you've been to Smirnov's lecture yesterday, you know that there is some ongoing spectacular progress. Okay, so the goal now is to describe these continuous structures that you see as scaling limits — or not necessarily scaling limits, but structures that have nice conformal invariance properties in the plane. So of course there is a very easy way to create conformally invariant random structures. Just pick your favorite domain, say the upper half plane, choose any random object in the upper half plane — any random structure that is scale invariant, or invariant, say, under the Möbius transformations; that will give you one random structure in the upper half plane — and you define the random structure in any other simply connected domain just by taking the conformal image of what you've seen in the upper half plane. And that defines a family of possible observables, or possible systems, one for each domain, that satisfies conformal invariance. So conformal invariance itself is not a very restrictive condition; you need to add more conditions in order to be able to pinpoint, or to describe, all possible such systems. So the first additional condition — I mean, one possibility, which was explained by Oded this morning — is to focus on discrete interfaces. So here this is very schematic, and it has actually been drawn by my daughter, who laughed at the poor way I am handling her pictures, okay. Imagine that you have a domain with pre-cooked, nice boundary conditions: say you assume that one part of the boundary here is red and the other part of the boundary is blue — I chose red and blue instead of black and white for obvious reasons. And basically, if you assume that everything here on this part of the boundary is blue and everything here on that part of the boundary is red, you are going to have this random colored television screen with blue and red colors in there, and then it's very easy to see that you'll get one single interface — one random line — that is going to separate the blue cluster attached to this part of the boundary from the red cluster attached to the other part of the boundary. So this is a random curve, and the point is that discrete interfaces can be explored, and that's the basic starting point of Oded's approach that led to the definition of SLE: we are going to explore the system progressively, just starting here, and we start to explore this interface.
Because we are looking at a system with nearest-neighbor interaction, once you know how the interface starts, and you ask what the law is of what remains to be explored of this strange television screen, well, it is still percolation, or Ising, in the remaining domain; and now the domain is the disc with a slit, with new boundary conditions where on this part of the boundary everything is blue and here everything is red. So if you assume conformal invariance, once you start exploring this interface and ask "how do I continue?", you are asking exactly the same question again: you are looking at an interface in a simply connected domain where one part of the boundary is red and one part is blue. Using this, Oded Schramm explained this morning that by combining classical ideas from complex analysis, namely Loewner's theory, with basic probabilistic insight, you get that there exists at most a one-parameter family of random curves in a simply connected domain that satisfy both this conformal invariance property and this exploration property — the idea that you can explore the curve progressively like this. Okay, this is of course too short an explanation, but just to mention the output of this idea: the fact that you can explore interfaces progressively leads to an actual concrete description of the possible interfaces for these special, precooked boundary conditions, and you get a one-parameter family of random curves called the Schramm-Loewner evolutions — that is maybe a difference with this morning's lecture, I call it the Schramm-Loewner evolution — and this one parameter is usually called kappa. Okay, here is a list of properties that Oded described this morning. There are two types of curves: some of them are simple random curves and some of them have double points, so there is a phase transition at kappa equal to four — this number will come up later — and kappa equal to six and kappa equal to eight thirds are also very special; they turn out to have very special properties, and I will come back to that later. So now I want to focus on this so-called conformal restriction property. Conformal restriction is another idea, complementary to the previous one if you want, which gives another approach, another characterization, of these random curves or loops. Actually this idea has been developed and refined in a sequence of papers with Greg, and then with Greg and Oded, and the result I am going to present now is from a paper of mine, but it is really just a continuation of our earlier joint work. The idea now is not to explore the curve dynamically, but to explore it from far away: you ask how the curve looks from far away, and one way to say this is that you are going to compare the law of the shape of the curve when it is defined in two different domains.
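Before moving on to restriction, and only for readers who want to see the Loewner machinery in the most concrete possible form, here is a crude numerical sketch — my own, purely illustrative, and not a serious SLE simulation. It runs a forward Euler discretization of the chordal Loewner equation dg_t(z)/dt = 2/(g_t(z) - W_t) with driving function W_t = sqrt(kappa)·B_t; the grid points of the upper half-plane that the flow drives onto the real line by time T approximate the hull swallowed by the exploration. The value of kappa, the step sizes and the grid are arbitrary illustrative choices.

import numpy as np

def sle_hull(kappa=6.0, t_max=1.0, n_steps=4000, grid=60, seed=1):
    """Mark which grid points of the upper half-plane are 'swallowed' by
    time t_max under the chordal Loewner flow dg/dt = 2/(g - W_t),
    with W_t = sqrt(kappa) * Brownian motion (forward Euler scheme)."""
    rng = np.random.default_rng(seed)
    dt = t_max / n_steps
    w = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(kappa * dt), n_steps))])
    xs = np.linspace(-2.0, 2.0, grid)
    ys = np.linspace(0.01, 2.0, grid)
    g = (xs[None, :] + 1j * ys[:, None]).astype(complex)
    swallowed = np.zeros(g.shape, dtype=bool)
    for k in range(n_steps):
        alive = ~swallowed
        g[alive] += dt * 2.0 / (g[alive] - w[k])
        # a point is swallowed once the flow pushes it down to the real line
        swallowed |= (g.imag < 1e-3)
    return swallowed

print(sle_hull().sum(), "grid points swallowed")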
So here you start having this idea that you are going to play with the variation of the law of the curve with respect to the domain you are looking at — I am not going to explain this, and anyway I am not sure I fully understand it — and that is where the Lie algebras enter the game, but I am not going to say more about that. So here I will ask a specific question; the motivation will come with the answer, if you want. The question is the following. You are looking for a measure supported on the set of loops in the plane — the set of single loops that you can draw in the plane — and you want this measure to satisfy the property that I call here the strong conformal restriction property. The property is this: you take any two conformally equivalent domains, you look at the measure restricted to the first one and the measure restricted to the second one, and what you want is that if you map the measure restricted to the first domain onto the second one via the conformal map, you get exactly, without any scaling factor, the measure restricted to the second. So whatever domain you look at, you are going to see the same measure — that is the idea. If you ask this condition to hold for simply connected domains only, we call it weak conformal restriction: whatever simply connected domain you choose, you see the same measure modulo conformal equivalence. So at first you say, well, such a measure cannot exist, this is too strong a condition. The first two items here are not only rather easy but just trivial consequences of the definition. Suppose you have such a measure. Then of course it has to be scale invariant, because multiplications are conformal maps: if you take a disc and look at what you see in this disc or in twice the disc, you see the same measure. The measure is scale invariant, it is translation invariant for a similar reason, and therefore it must have infinite mass; it is going to be a measure with infinite mass, supported on both very, very small loops and very, very large loops. The third item is rather easy but not completely trivial — I ask you to believe me that it is not a difficult statement — namely that there is in fact at most one measure satisfying weak conformal restriction. So already the weak condition is very restrictive: you end up very quickly with the fact that you cannot have more than one measure satisfying it. A more difficult theorem, which involves SLE, is that such a measure in fact exists, and not only satisfies weak conformal restriction but also this stronger conformal restriction property. So there exists a measure on simple loops in the plane with the property that whatever domain you look at it in, you will see the same measure, modulo conformal invariance. And it turns out that there are three a priori completely different constructions of this measure; I am going to say a word about them. The first construction uses Brownian motion. Imagine you take a planar Brownian motion — the scaling limit of a simple random walk on a very fine mesh, or the trajectory of a crazy fly moving around at random in the plane.
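Before continuing with the Brownian construction, here is the restriction property written in symbols — my notation, not the lecture's. Here mu is a sigma-finite measure on simple loops in the plane, and mu|_D denotes its restriction to the loops contained in the domain D.

\[
\Phi_*\big(\mu|_{D}\big) \;=\; \mu|_{\Phi(D)}
\qquad \text{for every conformal map } \Phi \colon D \to \Phi(D).
\]
% "Strong" restriction: this holds for all domains D; "weak" restriction:
% only for simply connected D.  Taking Phi(z) = lambda*z and Phi(z) = z + a
% gives scale and translation invariance, which already forces mu to have
% infinite total mass.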
And you condition it to be back at its starting point at time one. This creates a loop. Of course this loop is not self-avoiding, and remember — maybe I have not insisted on that before — we are looking for a measure on loops that are self-avoiding, that have no double points, things that separate the plane into two connected components. So a planar Brownian loop looks like this; this type of trajectory has been studied extensively, and it has been known since the fifties that planar Brownian motion, because it is related to the Laplacian and to harmonic functions, is invariant in some way under conformal transformations. And it did not take too much effort, with Greg, to turn this measure on Brownian loops, in a natural way, into a scale-invariant and translation-invariant measure on Brownian paths in the plane. What you get is a scale-invariant, translation-invariant measure on Brownian loops which satisfies conformal invariance properties, and the very way you define it tells you that if you focus just on the outer boundary of such a loop, this is a self-avoiding loop. So each Brownian loop defines one self-avoiding loop, and the measure under which this self-avoiding loop is defined satisfies weak conformal restriction. Now, I will not insist on this, but if you think about the percolation model: on large scales, percolation describes very naturally a measure on clusters. Each cluster you see in the discrete picture counts as one, has mass one; you take this counting measure on clusters defined by percolation and you get a measure supported on the set of clusters. If you imagine that these clusters have a scaling limit that is conformally invariant, the independence properties of percolation imply that, as before, this measure on percolation clusters satisfies the same weak conformal restriction property, and therefore the outer boundary — again you focus just on the outer boundary of a cluster, this red outermost path — gives a measure on self-avoiding loops that satisfies weak conformal restriction. So here you see, without the help of SLE at all, that outer boundaries of percolation clusters and outer boundaries of Brownian loops are just the same in the scaling limit. Now it turns out that with the help of SLE you can say more, because one of these SLEs, the one with parameter eight thirds, turns out to have a very special property, as I mentioned before, which makes it possible to define directly a measure on SLE 8/3 loops and to prove not only that this measure satisfies weak conformal restriction but also the stronger version. And one of the big open questions is precisely to prove that this measure on self-avoiding loops is in fact the scaling limit of — I am cheating slightly here — the uniform measure on self-avoiding loops in the plane; that is one way to understand it, for the whole measure, if you want it to be invariant.
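A planar Brownian loop itself is easy to simulate — a Brownian bridge in each coordinate, B_t - t*B_1 on [0, 1]. The sketch below (mine, with an arbitrary step count) produces such a loop; extracting its outer boundary, the self-avoiding loop of the theorem, is a separate and much harder numerical task.

import numpy as np

def planar_brownian_loop(n_steps=10000, seed=2):
    """Sample a planar Brownian bridge on [0, 1]: Brownian motion
    conditioned to return to its starting point, i.e. B_t - t*B_1
    in each coordinate."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    steps = rng.normal(0.0, np.sqrt(dt), size=(n_steps, 2))
    path = np.vstack([np.zeros((1, 2)), np.cumsum(steps, axis=0)])
    t = np.linspace(0.0, 1.0, n_steps + 1)[:, None]
    return path - t * path[-1]   # pin the endpoint back to the start

loop = planar_brownian_loop()
print(loop[0], loop[-1])         # both are (exactly) the origin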
So what you end up with is that these three constructions — the outer boundary of a Brownian loop, the outer boundary of a percolation cluster in the scaling limit, and these SLE 8/3 loops — define exactly the same measure, the same random shape (well, it is an infinite measure, but the same random shape). Furthermore, since the last one satisfies not only weak but strong conformal restriction, the outer boundaries defined in the previous cases also satisfy this strong conformal restriction property. This is one explanation of why SLE is useful to solve Mandelbrot's conjecture that the outer boundary of a planar Brownian loop, or of planar Brownian motion in general, has dimension four thirds: I just told you that the outer boundary is exactly the same as an SLE 8/3 loop; with SLE 8/3 you can perform computations — you can compute probabilities that you could not compute in the Brownian motion picture — and from those computations you deduce the dimension four thirds of this outer boundary. So you see the two different ideas: first, SLE gives random continuous objects on which you can perform computations, and second, you have additional arguments telling you that two a priori different models give rise to the same object in the scaling limit. Similar ideas led to the derivation of all of what are called the Brownian intersection exponents, which had been predicted by Bertrand Duplantier before. Okay, let me mention another consequence, just to show you that strong conformal restriction is not a trivial statement. The outer boundary of this red island has a certain random shape; now instead you look at the inner boundary — you take the outer boundary of this white island inside the red island. Of course this is very different, because here the Brownian motion is not inside the loop but outside. But the previous theorem shows that the law of the shape of this white island is exactly the same as the law of the shape of the red one, so you have surprising inner-outer symmetries, if you want. So you see, we started with an abstract question — we are looking for a measure satisfying a certain property — and we end up with the fact that there is just one, and it is not a trivial one: it is a measure supported on the set of loops of dimension four thirds, and so on. However — I am not going to elaborate on this — because this measure is the only one satisfying this property, it is probably, through this very natural property, related to other features and maybe other parts of mathematics, and useful there. Now I want to spend the remaining time discussing what we call conformal loop ensembles, with Scott Sheffield; that is ongoing work, but let me give a little motivation for it. As I explained before, SLE gives you the law of one interface: you have prescribed boundary conditions — everything is blue here, everything is red there — and you get the law of the random curve between them.
Well, you might say that is enough: once we know the law of this curve we have a lot of information about the system, and indeed you get access to some critical exponents and many things. However, if you want to describe the entire picture that you are actually seeing, you need to continue: you need to say what happens here and what happens there. But once you have drawn this curve, you have a domain with monochromatically blue boundary conditions, so you cannot iterate the procedure by starting a new SLE somewhere here, because the boundary conditions no longer have these specific blue and red parts. So you have to figure out something different, and if one looks at the discrete models and their properties — well, how are you going to describe what you see in your domain? You are going to describe it via a family of loops that correspond to the boundaries of the clusters you see in this picture. And these loops possess certain properties that lead to the following definition in the continuous case. So here again this is a schematic picture: you are trying to describe the possible laws of the scaling limits of these random collections of loops. A configuration, in our case, is going to be a random collection of loops that are disjoint and not nested. You may see this picture as some sort of Cantor set, if you want: if you fill in the interiors of all these loops, the idea is that you should see some type of random Cantor set here. And these blue guys — you can think of them as the outermost loops of clusters in certain models. The conditions we are going to ask are the following. First of all we want it to be conformally invariant: we want this to be defined in any domain, in such a way that what you see in any domain is obtained by taking the conformal image of what you see in the disc. And now, if I cut something out of the disc — you take any smaller domain of the disc — you see that you have two different types of loops: those that stay in the white region and those that go out. So let us color in black the interiors of all the loops that go out of this white region. Now you have — okay, I am cheating slightly — a new domain, which is the complement of all these black loops. The condition you ask is that, given the loops that intersect the outer boundary here, the law of the loops in the remaining domain is the same as the law of what you started with. In other words — and this looks maybe a little surprising at first sight, but it is a very natural condition — this is exactly the condition you would ask if these loops did correspond to outermost interfaces in lattice-based models. So, question: what are the possible laws on collections of loops like this? Well, we have a theorem with Scott Sheffield that tells you that again you only have a one-parameter family of such objects, and what happens is that for each of these objects the loops look like SLE loops with a certain fixed parameter kappa.
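For orientation, here is the standard dictionary between the parameter kappa, the loop dimension and the central charge — stated from the general SLE/CLE literature in my own conventions, not taken from this point of the lecture.

\[
\mathrm{CLE}_\kappa,\ \ \kappa \in \big(\tfrac{8}{3},\,4\big],
\qquad
\dim_H(\text{loops}) \;=\; 1 + \frac{\kappa}{8} \;\in\; \big(\tfrac{4}{3},\,\tfrac{3}{2}\big],
\qquad
c(\kappa) \;=\; \frac{(3\kappa-8)(6-\kappa)}{2\kappa} \;\in\; (0,\,1].
\]
% kappa = 8/3 corresponds to dimension 4/3 and c = 0 (the self-avoiding
% loops above); kappa = 4 corresponds to dimension 3/2 and c = 1, the
% threshold at which the loop-soup construction described next crystallizes.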
I have not told you what the parameter kappa was in the definition of SLE, but you should imagine, roughly speaking, that for any dimension d between four thirds and three halves there exists exactly one such measure supported on disjoint collections of loops of that dimension, and that you do not have any other such random collections than the ones I have described. Okay, there are three different constructions of these collections of loops, and let me very quickly describe one of them that I like a lot — but that is just personal taste, it is not a mathematical value judgment. The idea is the following: you are going to use the previous conformally invariant measure mu on self-avoiding loops that I described, and use its properties, in order to construct a more elaborate measure — these interacting loops. You are going to imagine that it rains loops on the screen, and you let it rain with intensity given by this measure mu. For those who are probabilists or know about Poisson point processes, you know what I mean; and those who do not know also know what I mean, because what I mean is just raining loops according to the intensity given by the measure mu. And you let it rain a certain time. At the beginning you only see very small loops, and you fill them in; then you let it rain a little more, other loops fall in, and of course all these loops are going to overlap, because they are independent — it is like leaves falling on the ground, if you want. You see this picture, the picture grows — of course this is very schematic: you should think of these loops as having dimension four thirds, not as things drawn with a lot of paint on my PC — and what happens, and you can actually prove it, is that there exists a finite time such that immediately after it, all the loops that fell before hook up and form a single connected component: the union of all the loops suddenly crystallizes into one single connected component. Now this is, in a way, a version of fractal percolation, also sometimes called Mandelbrot percolation, which in short is a way of defining random two-dimensional Cantor sets: at each scale you look at certain shapes and you say, well, either we keep it or we throw it away. That is what we are doing here — we are defining a random Cantor set that has conformal invariance properties, because the shapes we remove are defined using this measure mu. Well, it turns out that the outermost loops of the clusters you get here, for any c smaller than the time at which everything crystallizes, are going to be exactly these CLE loops. So you let it rain these self-avoiding loops — or Brownian loops, or whatever you define according to the measure mu — for a certain time; they create clusters; you look at the outer boundaries of each of those clusters — here you have one loop like this, and here you have another one — and the picture you see is exactly the one describing the CLE loops.
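Here is a toy sketch of the "raining loops" mechanism — mine, not from the lecture. Only the Poissonian bookkeeping is modeled: the true intensity measure mu on SLE_{8/3} loops is replaced by a crude surrogate, a scale- and translation-invariant measure on circles, dx dy dr / r^3, truncated to a box and to radii in [r_min, r_max]; the parameter c plays the role of the raining time.

import numpy as np

def rain_loops(c, box=1.0, r_min=0.05, r_max=0.5, seed=3):
    """Poisson point process of circles with intensity c * dx dy dr / r^3
    (a scale-invariant stand-in for the loop measure mu, NOT the real
    SLE_{8/3} loop measure).  Returns centers and radii."""
    rng = np.random.default_rng(seed)
    # total mass of the truncated intensity over the box and [r_min, r_max]
    mass = c * box * box * 0.5 * (r_min**-2 - r_max**-2)
    n = rng.poisson(mass)
    centers = rng.random((n, 2)) * box
    # inverse-CDF sampling from the radius density proportional to r^{-3}
    u = rng.random(n)
    radii = (r_min**-2 - u * (r_min**-2 - r_max**-2)) ** -0.5
    return centers, radii

centers, radii = rain_loops(c=0.5)
print(len(radii), "loops fell; largest radius", radii.max() if len(radii) else None)

Clusters of overlapping circles and their outer boundaries would then be the toy analogue of the CLE loops; with the genuine measure mu, the crystallization threshold is at c = 1.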
Why am I spending a little time describing this? Because in this interpretation, the time you let it rain — instead of calling it t you call it c — is precisely what physicists call the central charge of the model. So you have a very constructive, bare-hands construction of random geometric objects that have a lot of structure, and for which you can now interpret this quantity not as the central charge of some representation but just as the time you let it rain. As in the previous case of the self-avoiding loops, there are alternative descriptions of these loop ensembles using other means — okay, I have two. One of them is the Gaussian free field construction, which is very exciting and due to Schramm and Sheffield, and partly to Sheffield alone; and here again you have the feature that two very different models in the continuum define the same object, even though a priori they are very different — just as the outer boundaries of percolation clusters and the outer boundaries of Brownian loops were the same shape, here you see that different objects define the same CLEs. So one hope, of course, is that these CLEs, these collections of random interacting loops, will help us to understand better some of the mathematics that lies behind, or the relations behind, conformal field theory or the Coulomb gas, and tie links between these two, because we can show these are the same. So on the final slide — where I actually forgot some people — I listed the names of people currently active in this area and thinking about such questions, and if they are in your department you can just knock at their door and ask them for more information about these questions. Here you have, roughly speaking, the mathematicians, here the physicists, and I also included recent PhDs and postdocs — I forgot Michael Kozdron — and actually there is one Spanish one, a student of Greg. If you want more information on this type of topic, there are many surveys, lecture notes, ICM proceedings and even a book by Greg, so there are a lot of introductions to the subject. I apologize — I realize I wanted to focus really on these self-interacting loops, and my goal was to give you an impression of this general feature: you look for a continuous object, a measure supported on continuous objects, you require certain properties that are natural if you assume these are the scaling limits of physical models, or that they actually describe physical phenomena, and you end up with a very restrictive collection of possible candidates. If you focus on one line you get one SLE curve; if you focus on these more global things you get this collection of interacting loops; and with rather simple arguments you arrive at a description of a rich random object which is — I did not have the time to describe this to you properly — probably related to various other parts of mathematics. Sorry for being over time, and thank you.
Lecture of Wendelin Werner, Fields medallist 2006.
10.5446/15966 (DOI)
Good afternoon everyone. I am very pleased to introduce Andrei Okounkov, who is one of the four winners of the Fields Medal for 2006. Okounkov obtained his doctorate in mathematics from Moscow State in 1995. He has held positions at the Russian Academy of Sciences, the Institute for Advanced Study in Princeton, the University of Chicago, the University of California at Berkeley, and he is currently a professor at Princeton University. He has won many awards prior to the Fields Medal, most recently the European Mathematical Society Prize in 2004. Okounkov is a problem solver of great breadth, power and originality. As in the case of most great problem solvers, the main weapon in his arsenal is a mastery of combinatorics, which he first learned as a student in Moscow. In the late 1990s, a connection was established between combinatorics on the one hand — in particular Ulam's problem of increasing subsequences of random permutations — and random matrix theory on the other hand. The scope of this connection was then extended greatly in the form of a conjecture. It was this conjecture that Okounkov was the first to resolve, and he did so in spectacular fashion: not only did he resolve the conjecture, but en route he gave a beautiful explanation of why combinatorics and random matrix theory are connected. Combinatorics, of course, plays a central role in Okounkov's work, together with Richard Kenyon, on the statistical mechanical problem of melting cubic crystals. Another area in which Okounkov has made fundamental contributions is enumerative algebraic geometry, the subject of counting curves under appropriate conditions. Following in the direction of Witten's work and Kontsevich's work, Okounkov and Pandharipande have obtained a particularly extraordinary and beautiful new formula to make the count in new situations. Again, combinatorics plays a key role. Can I get the video? Thank you. Thank you, Percy, for this introduction. I will always remember that summer of 1998 when we first met in your office at Courant and you gave me the preprint of your work with Baik and Johansson; among the many things that grew out of that preprint are the kind of things I will be talking about today. Of course, all participants of this congress have innumerable things to thank the organizers for, but I would particularly like to thank professors De Leon, Saint-Sole, and Buchulance for their help resolving a technical issue that makes this talk possible. So now I would like to explain the animation that you see, and what it illustrates is the following elementary geometric fact: if I take eight general points in the plane, there will be 12 rational cubics through them. Rational means that the curve has genus zero and may be parameterized by rational functions of one complex variable, and cubic means it is a curve of degree three, described by an equation of degree three. So if I choose eight general points in the plane, there will be 12 rational cubics that meet those points. This is a very classical fact, and it will be a good introduction to the kind of curve-counting problems that we will be talking about in this lecture.
So if I take the equation of a curve of degree three in the plane — just a polynomial of degree three in the variables x and y — there will be nine undetermined coefficients, and if I impose the condition that the cubic passes through eight points (to pass through a point is a linear condition, so each one cuts the dimension down by one), I will be down to a one-parameter family, which I can write like this: two fixed degree-three polynomials F1 and F2 and a parameter t. This t-dependence is what was animated: as t changes, the curve changes. At t equal to zero we get the red curve, and if we set t to infinity we get the blue curve. Now, for a general complex value of t we get a smooth cubic curve in the plane, and it has genus one — over the complex numbers it looks like a torus, the surface of a donut. But there will be 12 special values of t for which we get a node. A node is a singularity like this, where two branches intersect transversely, and it signals the fact that instead of a genus one curve we get a genus zero curve, a curve that may be parameterized by rational functions of one variable. In general there will be 12 complex values of t for which this happens, and this is an illustration of the fact that there are 12 rational cubics meeting eight points in the plane. If you have not noticed it before, computing this number 12 will make a nice homework problem for the subject of this lecture. And this is not a purely theoretical question: in some other things that I have been doing, at some point I actually had to find a rational cubic curve through eight points — it is a kind of polynomial interpolation problem. We will be interested in a similar question one dimension up: instead of studying curves in surfaces, we will be looking at spatial curves — curves in space, or more generally in some three-dimensional algebraic variety. So now we come to the proper title of the lecture, which is the enumerative geometry of curves in threefolds. The kind of question we will be asking is this: we fix some smooth projective threefold X — for the purposes of this lecture it is enough to consider the case when X is just projective space, which is already very interesting — we fix the degree of the curve and its genus, and we ask how many such curves satisfy geometric conditions of the form of being incident to points; we can ask about incidence to points, but we can also ask about incidence to a curve or to some more general cycle in our threefold. And of course we would like to match the number of constraints that we put on the curve with the amount of freedom that we have: there is some dimension count that says on how many parameters a curve of such genus and degree depends, and we impose just enough geometric conditions so that we expect finitely many curves to satisfy them. Then "how many?" is a well-defined question.
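As for the homework problem of the 12, here is one standard way to get it from the animation — a sketch, under the usual assumption that for generic points every singular member of the pencil is an irreducible cubic with a single node. Blow up P^2 at the nine base points of the pencil (the eight chosen points plus the ninth base point they force) to obtain an elliptic fibration S over P^1, and compare Euler characteristics:

\[
e(S) \;=\; e(\mathbb{P}^2) + 9 \;=\; 12,
\qquad
e(S) \;=\; e(\mathbb{P}^1)\,e(F) \;+\; \sum_{\text{singular fibers}} \big(e(F_s) - e(F)\big)
\;=\; 0 \;+\; \#\{\text{nodal fibers}\}.
\]
% e(F) = 0 for a smooth cubic (a torus) and e(F_s) = 1 for a nodal cubic
% (a sphere with two points glued together), so the pencil has exactly
% 12 nodal members.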
An example of this, in the same spirit as the illustration: if I am interested in, say, rational curves of degree five in projective space, then you can easily convince yourself — just by writing down a rational parameterization, since a rational curve is something that can be parameterized by rational functions of degree five — that such a curve depends on 20 parameters. To meet a point is a codimension-two condition, so we ask the curve to pass through 10 points, and then the count is 105. This is again a classical computation, somewhat harder than the 12 we discussed before. Now, computations of this kind go back to the early days of algebraic geometry, but the interest in them has been reinforced by the relevance such numbers have recently acquired in mathematical physics. And it is not the individual number that is so important; what is much more important is the structure that the totality of these numbers displays. So while computing that 105 is an interesting and instructive thing to do, we are much more interested in proving something about some infinite set of such numbers — some kind of structural statement about the totality of all possible curve counts. And the structure appears extremely rich. Of course I will have no time to explain this in the talk, but — well, Terry had this dichotomy between pseudorandom and structured; certainly all of the structured mathematics that I have ever seen can be found inside this problem, though this is of course only my limited experience with structured mathematics, and I would not say anything about the pseudorandom. Anyway, the structure here is extremely rich. So — what was the next transparency? Right. Why are we interested in curves in threefolds? The case of threefolds is distinguished for a variety of reasons, and one of them is that the case when the ambient space is three-dimensional is critical in the following sense. If you naively count on how many parameters a curve of given degree and genus depends in a variety X, then the dependence of this expected dimension — the number of parameters, also known as moduli — on the genus of the curve varies qualitatively according to whether your variety has dimension three, smaller than three, or bigger than three. What is special about threefolds is that this expected dimension of curves of given degree and genus is in fact independent of the genus; it grows with the genus in lower dimensions and it decays with the genus in higher dimensions. Remember, in the animation it was for special values of t that the genus dropped: there, in dimension two, the dimension was going down with the genus; in dimension three it is independent of the genus. So before we can translate this somewhat loosely formulated question into mathematics, we need a definition of what we mean by a curve in X.
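As an aside, here is the parameter count behind the 20 and the 10 points in the degree-five example, under the usual naive assumptions (a generic such parameterization has no extra automorphisms and the count is as expected):

\[
\dim\{\text{rational curves of degree } d \text{ in } \mathbb{P}^3\}
\;=\; \underbrace{4(d+1)}_{\text{coefficients of } [f_0:f_1:f_2:f_3]}
\;-\; \underbrace{1}_{\text{overall scale}}
\;-\; \underbrace{3}_{\dim \mathrm{PGL}_2 \text{ (reparameterizations)}}
\;=\; 4d.
\]
% For d = 5 this gives 20; passing through a point is a codimension-two
% condition in P^3, so 10 general points cut the family down to the
% finite count (105, as quoted above).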
And immediately we realize that there are two ways to describe curves. If we have a curve, we can either view it as a parameterized curve — that is, we take an abstract Riemann surface and a map from it to X. (I hope it is not confusing that I interchangeably use real and complex pictures; all of the theorems, or conjectures for that matter, that I am going to talk about hold for complex curves and complex three-dimensional varieties, but it is sometimes easier to draw real pictures, for obvious reasons.) So a curve may be given by its parameterization, an abstract Riemann surface and a map to X; for example, a rational curve may be given by a rational parameterization — you take the Riemann sphere and a map to X. Or it may be described by its equations. What the equations are is this: in some local chart, you take all polynomials that vanish on the curve — you do not look for some minimal set of equations, you just take them all — and this totality of polynomials vanishing on the curve forms an ideal in the ring of functions on that coordinate chart. The global object is then called an ideal sheaf. This is why I have — well, I could not think of a better picture to put here, and I will explain later why I have this staircase representing an ideal. Now this distinction is very practical. For example, if I actually have to draw some curves, just for the purposes of preparing this talk, this can be done either by giving a parameterization — in Maple that would be plot — or by giving an equation — that would be implicitplot. I think this is self-explanatory. Also, from the mathematical physics point of view, the first perspective is the string theory perspective: this is the world sheet of a string propagating in some target space, so the enumerative problems we are facing can be interpreted as dramatically simplified computations in string theory. Whereas an ideal is really an object of gauge theory: a curve defined by equations should be thought of as some sort of vortex line in an abelian gauge theory — that is the right way to approach it from the physical perspective. And these two perspectives bear different names. One is called Gromov-Witten theory, to recognize the fundamental contributions of Gromov and Witten to the subject. The other theory is much newer, and some of its essential aspects are modeled after the Donaldson theory of surfaces; the first deep results in the theory were obtained by Richard Thomas in his thesis, written under the direction of Donaldson, and this is why we call it Donaldson-Thomas theory. Now of course, if we have a nice smooth curve, we can describe it by either a parameterization or equations.
And of course, if you are plotting, the quality of the plot will depend on whether you use the equation or the parameterization, so in some sense the two are complementary. But these two points of view differ very significantly in what kind of degenerate objects they allow. For the purposes of enumerative geometry — and also for the connection with mathematical physics — one needs to consider not just nice smooth objects but to allow degenerate objects, and what is called a degenerate curve looks very different from the perspective of Gromov-Witten theory and of Donaldson-Thomas theory. Since these degenerate objects contribute to the enumerative answers just as the smooth objects do, the resulting enumerative predictions of the two theories are different. So what kind of degeneration does one allow in Gromov-Witten theory? Right — the space that parameterizes all possible objects, all possible curves, is what we commonly call a moduli space. And what are the points in the moduli space? In addition to nice smooth curves mapping to X, we also allow singular curves, but the only singularities one allows are nodes. A node, as we saw on the very first slide, is just the simplest possible singularity of a curve, where two branches intersect transversely. So one allows possibly nodal curves that map, with a holomorphic map, to X. And of course, since we are not interested in the parameterization per se but only in the image curve, we identify two objects that differ by a reparameterization: if two such maps differ by a reparameterization of the domain — which is what this red arrow schematically represents — then we identify them. There are two minor technical points here. One is that, since we consider objects up to reparameterization, every object has a natural automorphism group, namely all reparameterizations that do not change it, and the technical condition is that there should be no infinitesimal ones. And also, while I keep drawing connected pictures, it is somewhat more natural to allow disconnected domains. So this was the description — if I am able to go back, this was the description — of what kind of objects one allows on the Gromov-Witten side: a possibly nodal curve with a map to X. Now, from the point of view of equations, a curve is described by a very simple object: in some local chart you look at all polynomials that vanish on the curve, and these form an ideal in the ring of all regular functions in that coordinate chart — a linear space that is also closed under multiplication by any function. The simplest example of an ideal: suppose our coordinate chart is just C3, so the regular functions on our coordinate chart are just polynomials in x, y and z.
For example, we may look at ideals that are just spanned by monomials, and it is easy to convince ourselves — we will do this on the next transparency — that such monomial ideals correspond to these three-dimensional partitions with three infinite legs. This is the pictorial representation of the ideal that I had in the side-by-side comparison of Gromov-Witten and Donaldson-Thomas. Why do monomial ideals look like this? Well, it is easy to explain; let us do it in two dimensions, which is easy to visualize, but the principle is the same. I have to look for a subspace of all polynomials in x and y that is also closed under multiplication by anything. So here is a list of monomials in x and y — many of them — and we have to choose some of them to be in the ideal. Once we choose any one of them, say x to the sixth times y (I hope you can read it), then since we can multiply by x and y, everything below, everything to the right, everything in this general area will automatically be in the ideal. So if we shade all the monomials in the ideal in blue, the shaded area will be just the area under a staircase curve like this — a red staircase curve: all monomials below this curve lie in the ideal and the monomials above the curve lie in the complement of the ideal. I think it is clear from this picture that if, instead of polynomials in two variables, we look at polynomials in three variables, the corresponding object will look something like this: every box represents the three exponents — the exponents of x, y and z — and those which are not in the ideal form this three-dimensional partition with three infinite legs. The reason I explain this particular picture is first of all to give you an example of an object which is in the Donaldson-Thomas moduli space and does not look at all like something lying in the Gromov-Witten moduli space. It is also the case that it is through these particular monomial ideals that this subject connects to the combinatorics of random surfaces and random matrices and many other objects — but I am not going to pursue this connection further. So now, on one hand we have objects like this and on the other hand we have objects like that, and while artistically one side may be viewed as resembling the other, there is no precise mathematical relation between the two; in fact there are two simple arguments that there should not be any. First of all, the two theories have naturally very different habitats: Gromov-Witten theory can be defined for a general symplectic manifold — it does not have to be complex — whereas to define Donaldson-Thomas theory your manifold has to be projective, but on the other hand it can be defined over an arbitrary field; it does not have to be the complex numbers.
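Going back to the staircase picture for a moment, here is a small sketch of it in code — mine, purely illustrative. Given monomial generators in two variables, it lists the monomials outside the ideal, that is, the boxes under the staircase; the particular generators and the bound are arbitrary choices.

from itertools import product

def outside_ideal(generators, bound=12):
    """Exponent pairs (a, b) with a, b < bound such that x^a y^b is NOT in
    the monomial ideal generated by `generators` (each generator given as
    an exponent pair).  x^a y^b lies in the ideal iff it is divisible by
    some generator, i.e. a >= g0 and b >= g1 for some (g0, g1)."""
    return [(a, b)
            for a, b in product(range(bound), repeat=2)
            if not any(a >= g0 and b >= g1 for g0, g1 in generators)]

# the ideal (x^3, x*y^2, y^4): its complement is a finite staircase / partition
staircase = outside_ideal([(3, 0), (1, 2), (0, 4)])
print(len(staircase), sorted(staircase))

The same membership test with triples of exponents describes the three-dimensional picture; when the complement is infinite along the coordinate axes one gets exactly the three-dimensional partitions with infinite legs shown on the slide.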
And, as I already said, not only is there this philosophical difference between the theories, but if you actually compute the numbers of curves, they come out differently. What we conjectured — and this is the main set of ideas that I would like to discuss here — is that there is a certain change of variables that establishes an exact equivalence between one side and the other. To be more concrete, we fix our threefold X — some threefold, for example projective space — we fix the degree and we fix the collection of incidence conditions. Remember, maybe I should go way back to one of the first slides: here is what the setup was — we take some X, we fix the degree of the curve and we fix a certain collection of incidence conditions, for example meeting so many points in general position. So now the threefold X is fixed, the degree is fixed, but the genus is not, and what we will do is sum over all genera: we take the Gromov-Witten count of genus g curves meeting these incidence conditions, passing through these points, and form what is called a generating function. The argument of this generating function will be called u — just a letter, nothing of significance — and the exponent with which we weight genus g will be 2g minus 2; again, not particularly important. What is important is that, as I said, since the situation of curves in threefolds is critical — meaning the expected dimension of curves of given degree and genus is independent of the genus — we are able to do this summation. If the number of incidence conditions we had to impose depended on the genus, there would be no meaning to this sum. There is also a slightly technical point about removing degree-zero maps, which we will skip over. On the Donaldson-Thomas side we do the same thing: we again sum over all possible genera — well, the genus of a very non-smooth curve is not so well defined, but what is well defined is the holomorphic Euler characteristic, which is 1 minus g for a smooth curve, and which is well defined in general for anything described by an ideal. So we get a similar generating function, with u to the 2g minus 2 replaced by q to the chi, where chi is the holomorphic Euler characteristic. Before I proceed to the statement of the conjecture, I must mention that all of this is a lot more complicated than I am presenting it here. Just to define the counts that enter these generating functions requires very serious machinery, the machinery of the virtual fundamental class, developed by Li-Tian and Behrend-Fantechi in the Gromov-Witten situation, and by Richard Thomas in the Donaldson-Thomas situation — this construction of the virtual fundamental class was the subject of his PhD. And here it is extremely crucial that X is a threefold; it does not work in any other dimension. So now we have these two generating functions.
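In symbols, roughly — my notation, with the precise normalizations (disconnected invariants, removal of the degree-zero contributions) as in the MNOP papers — the two reduced generating functions, and the change of variables that is about to be stated, look like this:

\[
Z'_{GW}(X;u)_\beta \;=\; \sum_{g} N^{\bullet}_{g,\beta}\,u^{2g-2},
\qquad
Z'_{DT}(X;q)_\beta \;=\; \frac{\sum_{n} D_{n,\beta}\,q^{n}}{\sum_{n} D_{n,0}\,q^{n}},
\qquad
Z'_{GW}(X;u)_\beta \;=\; Z'_{DT}(X;q)_\beta \ \ \text{under } q = -e^{iu}.
\]
% The degree-zero series in the denominator is itself explicit:
% M(-q)^{\chi(X)}, with the MacMahon function M(q) = \prod_{n\ge 1}(1-q^n)^{-n},
% the generating function of the three-dimensional partitions above.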
Maybe I will flash the functions again before we proceed. The Gromov-Witten generating function is defined by taking the count of curves of genus g and weighing them by u to the 2g minus 2; the same with Donaldson-Thomas, except that instead of u to the 2g minus 2 you put q to the chi. So now, the conjecture. The conjecture we proposed a couple of years ago with Davesh Maulik, Nikita Nekrasov and Rahul Pandharipande is that while these functions individually are not equal, there is a very simple change of variables, namely q equals minus the exponential of i u. (I remember being very impressed, reading a line in a Russian undergraduate complex analysis textbook, that the exponential of 2 pi i equals 1 is one of the most beautiful formulas of mathematics because it combines all the important mathematical constants such as 1, 2, e, pi, et cetera. Somehow pi is missing here, but we got some other ones.) And after this substitution the functions are actually equal. So then, first question: what does this mean? Because what was written was just a formal series; there was no discussion of convergence. So this would be a statement about the expansions of the same analytic function at two different points, and if you think a little bit about what this statement — the function being analytic at these two different points — implies, then by Carlson's theorem it in fact implies that the function is a rational function of q. I will focus on this one statement for the purposes of this talk, but there is a general set of conjectures along these lines that extends to a slightly more general setting. So in this talk I would like to report on the recent progress that we made together with Davesh Maulik, Alexei Oblomkov and Rahul Pandharipande, namely that we proved this conjecture for any toric threefold, in particular P3. What is a toric threefold? It is one on which an algebraic torus — three copies of C star — acts with an open orbit. For example, on ordinary three-space, if we scale each of the coordinates by a complex number, it is clear that any point with non-zero coordinates can be scaled to any other point with non-zero coordinates, and these thus form an open orbit; so projective three-space is an example of a toric threefold. In the rest of this lecture I would like to quickly go over, not the proof, but some of the essential ingredients of the proof. To discuss the proof, one needs to enrich the setup and consider what are called relative theories. What relative theory is about is this: before, we were interested in curves meeting points or other curves, and it would be pointless to ask for a curve to meet a divisor — that is, a codimension-one object, a surface — because they meet for purely topological reasons. So suppose we have a threefold; inside it we can fix a surface, and the curve will be incident or not incident to the surface just for topological reasons.
But we may ask ourselves whether the intersection occurs with a certain multiplicity. This picture is supposed to illustrate the situation where the curve intersects the surface transversally at two points and is tangent at one point: there is multiplicity one attached to two points, and one point of multiplicity two. We may also impose the condition that these points of prescribed multiplicity lie at fixed, or partially fixed, locations in the surface. This is an enrichment of the theory: instead of just asking for incidence, one also asks for particular multiplicities and for the locations of the points of incidence. The conjecture about the Gromov-Witten/Donaldson-Thomas correspondence extends naturally to this relative setup. So what does this relative theory allow us to do? There is a certain sense in which it allows us to decompose any toric variety into simple pieces. The essential geometry of a toric variety is captured by what is known as its toric polytope — you may know it as the image of the moment map, or under some other name. For P3, for example, this is a tetrahedron; the edges of this tetrahedron are the coordinate axes and the vertices are the torus fixed points. There is a set of techniques which go under the names of localization and degeneration, and I refer to the lectures by Michèle Vergne and Yasha Eliashberg for explanations — these are of course well-known techniques in the field, but among the plenary lectures localization played an important role, and I hope the degeneration formulas will be discussed in some form tomorrow. Anyway, there is a certain sense in which I can take this tetrahedron — imagine it were made out of pieces of a construction set — and just take it apart into pieces: there will be four of these vertices and six of these edges, and I can analyze these pieces separately. So what is this? This is just a pictorial representation — do not take it too seriously, especially not the colors. What these pieces really are is particular relative curve counts, and I will illustrate this in the case of a vertex, which is the harder of the two cases. So what is the vertex? This symbol is just shorthand for a particular curve count, which is the following. I take the threefold which is the product of three projective lines; its toric polytope is just a cube — this is it, this point is the origin, zero, zero, zero, and these are the three coordinate axes that go out of the origin. I am interested in curves that lie close to the union of the coordinate axes, relative to these three infinities, with prescribed tangency conditions. For example, here is a curve that lies close to, in some open neighborhood of, the coordinate axes, and it intersects this infinity at three ordinary points.
So I record the intersection multiplicities as one, one, one — combinatorialists call this a partition: the total intersection multiplicity here equals three, but it occurs at three simple points, so it is the partition one, one, one of three. Now here we have a tangency, which means a double point, so I put a two here, and here is my attempt at drawing something of multiplicity three — a triple point here. So, just to repeat the same thing again: this vertex represents a particular relative count of curves which lie in a neighborhood of the union of the coordinate axes, with specified tangencies to the three infinities, the tangencies being recorded by the three multiplicities. But then the invariant, the curve count, is taken in equivariant cohomology — this is what makes well defined what would otherwise not be. Since it is taken in localized equivariant cohomology, it takes values not in numbers but in the localized equivariant cohomology of a point, which is nothing but rational functions in three variables t1, t2, t3; and t1, t2, t3 is just an element of the Lie algebra of the torus. These are the three parameters of the theory. So what we really prove is not just the global correspondence between the invariants of a global threefold; in fact we prove the correspondence between the individual pieces — we actually match these enriched local pieces. And not only do we do this, we in fact give a procedure for how to actually determine them — not in principle, but a rather practical one. In the end you have to solve certain linear differential equations with regular singularities; this can be done effectively, and in some sense the statement is that the solution to these equations is a rational function, and from some manipulations with rational functions comes this vertex. So what this gives is an algorithmic computation in finite time: it is not that you have to compute the individual numbers, like 105 and so on, every number by itself — you get the whole generating function in a finite and rather effective amount of computation. Now, of course, at this point it would be natural for me to flash a large number of formulas, but I will skip that part. I should say that people who like to work with formulas — and I certainly count myself among them — will, I hope, discover many things to their liking in what we do, because in the end it is about concrete mathematical computation. So this part we will skip; I will just go to something beyond the formulas. Well, before I go beyond the formulas, I must say that there is a special case of this that has to do with the case when your toric variety happens to be also Calabi-Yau. This happens very seldom, but it does happen.
And in this Calabi-Yau case, it specializes to something called the topological vertex formula, so in particular this gives a proof of the topological vertex formula. This topological vertex formula was a striking insight first conjectured by Aganagic, Klemm, Mariño and Vafa. So now this is a theorem. But this seems to be a very, very special case, and a lot of the beautiful combinatorics that has to do with the topological vertex formula is not so easy to generalize to the general case. So I would like to close my talk with a conjecture, and this is just an example of the kind of question we can ask. What I am hoping for is a better representation-theoretic understanding of this particular triple vertex and of some other objects that occur in the theory. After all, the triple vertex is a polynomial in q with coefficients that are rational functions in t1, t2, t3, and it depends on three partitions, or, say, three conjugacy classes in the symmetric group, or any of the other combinatorial instances in which partitions occur in representation theory. And I am certain that very soon we will have a better understanding of the combinatorial, representation-theoretic nature of these objects; I am very much hoping to make progress along these lines, and I invite other people to join me. In particular, just to illustrate that something is still missing, that something deep is going on here: we in fact conjecture this vertex to be a polynomial in q. This does not follow at all from the theory that we developed, because we can only prove that it is a rational function with cyclotomic denominator, and the fact that it is in fact a polynomial seems to be an example of, I think, an interesting phenomenon which hints that more is going on than we understand. Well, and of course this is just a very small, very particular conjecture, and the bigger conjecture, which has to do with an arbitrary algebraic threefold in total generality, of course seems to be wide open at this moment in time. So this is where I would like to close. Thank you.
Lecture of Andrei Okounkov, Fields medallist 2006.
10.5446/15964 (DOI)
So I am pleased to announce the first morning talk by Professor Madsen on moduli spaces from a topological point of view. Good morning everyone. Yes, so I am going to discuss some relatively new results, from a topological viewpoint, on moduli spaces. And the work I am going to talk about is primarily due to the four people I have stated here; there are various papers by these people in various combinations. So let me begin by recalling for you something you all know. Namely, if you consider closed oriented surfaces, and you count two equal if they are diffeomorphic, then there are not very many of them. In fact, there is just one for each non-negative integer, namely one for each genus; a surface with g holes in it, as I have drawn here, is sort of the only one there is. So the set of all of them under this natural equivalence is just the non-negative integers; the moduli space is the non-negative integers. It is quite different if you consider complex structures on these surfaces. So a complex structure, one way to define it, is an atlas where the transition functions are holomorphic. Then, at least if the genus is bigger than or equal to two, the moduli space has dimension 6g minus 6; it is given by 6g minus 6 real parameters locally. There is an exception: if g is equal to zero, so for complex structures on the usual two-sphere, there is only one. If the genus is one, then the moduli space is just a copy of the plane. So from now on, in all that I am saying, g is bigger than or equal to two. This moduli space, Mg, is a really central space in mathematics, and it has been studied from many different viewpoints. It originates, as you all know, in complex function theory, in some sense culminating in Teichmüller theory. It has also been studied in algebraic geometry: there one considers a Riemann surface as a projective curve, and Deligne and Mumford gave us a compactification of the moduli space, and that compactification is then a projective algebraic variety. Mumford formulated a conjecture about part of the cohomology of this moduli space, which I will come back to. And algebraic geometry has also given a rather complete analysis of the moduli space if the genus is small; so for genus equal to two and equal to three they are better understood. But what I am going to talk about is the moduli space in large genus, and there, there is by no means any kind of complete understanding of the moduli space. In geometry and mathematical physics, one has studied pseudo-holomorphic curves in a background space, which is usually taken to be a symplectic manifold; so that just means that you study a Riemann surface together with a pseudo-holomorphic map into the symplectic manifold. And in group theory, one has studied the moduli space via the mapping class group, so there are many things to be said both from a combinatorial group theory point of view and from a geometric group theory point of view. And I have decided to take up some of these themes in the topological context. In topology, we examine spaces up to deformation; we count two spaces equal if one can be deformed into the other by a continuous deformation. The main tool in topology is the cohomology of the space. That is sort of the beginning.
If we can understand the cohomology of the space, then we have a chance to understand the space up to deformation. But now let me start giving the definition of the moduli space. Again, there are many possible definitions, but the one I choose is one which is particularly good for our purpose. So instead of taking complex structures on the space itself, I will take complex structures on the tangent bundle. It turns out that it is the same thing: from a complex structure on the tangent bundle you can integrate and also get a complex structure on the underlying surface. So let me explain what a complex structure on the tangent bundle is. It is simply a fibrewise map J such that J squared is equal to minus the identity. And since I am assuming everything is oriented here, I will also assume that if I take a non-zero vector in the tangent space, then that vector together with J of that vector is an oriented basis for the tangent space. I will write Diff(Fg) for the group of orientation-preserving diffeomorphisms. That group acts on the space of complex structures on the tangent bundle, because if you have a J and you have a diffeomorphism, then you can take the differential of the diffeomorphism and conjugate J with that differential. Inside the group of all diffeomorphisms you have the connected component of the identity. Maybe I should say here that the topology you put on the diffeomorphisms is the Whitney topology, so uniform convergence of all derivatives. The connected component of the identity turns out to be a contractible space; it can be deformed to a point. Anyhow, we can first let that group act on the space of complex structures and take the orbit space of that action. That orbit space is the so-called Teichmüller space, and the big theorem is that Teichmüller space is simply R^(6g-6). Then the moduli space, which is the orbit space of the full diffeomorphism group, can also be thought of as the orbit space of the mapping class group acting on Teichmüller space. The mapping class group is just the quotient group, the group of components of the diffeomorphisms of Fg; so diffeomorphisms up to isotopy. As I said, in topology we want to study this moduli space up to deformation, and I also said that the first thing to do is to calculate the cohomology of the moduli space. However, we are very, very far from achieving that goal. Let me say a little bit about what cohomology groups are, since I am not sure that all of you are familiar with that concept. We are celebrating at this conference in particular the solution of the Poincaré conjecture about the structure of the three-sphere, as we have heard several times before. Well, Poincaré, of the famous conjecture, also defined homology groups. I won't really go into the definition of the cohomology groups, but I'll say a few words about what they do. Cohomology groups associate to each topological space a sequence of abelian groups, H^k of X, where k is a non-negative integer. They have the property that if you have a continuous map from X to Y, then that induces a map of abelian groups in the other direction, and that association is functorial. They also have the property that if X can be continuously deformed to a space Y, then the cohomology groups are isomorphic. So they only depend on the space up to deformation.
And actually there's even a partial converse to that result, which is due to Henry Whitehead. The commonality groups also have a product. So you can multiply the commonality in dimension k with the commonality in dimension l and get the commonality class in dimension k plus l. So the totality of all these groups is a graded ring. They measure in some sense the, what could one say, the high dimensional holds in the space x. So I can illustrate that by the fact that if we just take the surface and we calculate the first commonality groups, then the first commonality group, then that's the 2G copies of the integers, where g is the genus. So the genus can be read out from the commonality of the first commonality. The first commonality group of the surface. Now one of the problems with this, with the marginalized space, in g is that it's a singular space. And we don't know almost, well, we almost know nothing about the singularities of the space. So there are various ways to get around that. You'll have to take some non-singular cover of the space, and the mapping class group, which was pined out of the diffeomorphism group of the surface, provides such a non-singular cover in the sense that this classifying space maps into mg in such a way that the map, the other way in commonality, is to use isomorphism on the commonality groups, at least if you allow denominators, if you tend to the commonality groups with rational numbers. In the mid, yeah, I should also say, so this classifying space is b-gamma-g, that has the following property. If you take a map from a space x into b-gamma-g, then that produces a covering space with group gamma-g over x. And if you have two maps with the homotopic, then they induce isomorphic covering spaces. So the gamma-g covering space is over a space x, modular isomorphism is the same as the map from x into this classifying space b-gamma-g, modular homotopic. I'll come back to another definition of b-gamma-g in a while. Mumpford's conjecture, now a theorem, is that a part of the commonality of b-gamma-g can be written down. So in order to explain what part that is, then let me say that in the mid-80s, Joe Herr showed that the case-cohomality group of b-gamma-g is independent of g as long as g is much bigger than k, well, bigger than 2k plus 2. So therefore, one can define what we call the stable-cohomality group of b-gamma by simply saying it's the case-stable-cohomality group is simply the case-cohomality group of b-gamma-g where g is large compared to k. For the purpose of this talk, I need another definition or space which is deformable in some sense to b-gamma-g, but it's a much bigger space and it has good properties for a lot of the viewpoint. So I call this space empty-top. So let me first define empty-top of n. It's simply that you consider all surfaces in some Euclidean space, all surfaces which are of genus g. We all know about surfaces in R3. We all learned that. But now we just simply take surfaces in a much bigger Euclidean space. Now, if you have a surface in Euclidean space which is diffeomorphic to fg, which has genus g, then you can pick a different morphism from fg to that surface. And then you mean that surface was embedded in Euclidean space so you can represent it by an embedding. And then, well, the same surface is... If you change that embedding by a diffeomorphism of fg, then you get the same surface. 
So this space of surfaces in big Euclidean space is simply equal to the space of the orbit space of the embeddings, martial, diffeomorphism group. So I've written down here that the... So by this embedding space, I mean the space of all smooth embeddings is the witness of party, again, a uniform convergence of all derivatives. Now, so that's another way of considering... I mean, this orbit space is another way of considering the space of surfaces in Euclidean space. Now, of course, if you have a surface in one Euclidean space, then you can think of it in Euclidean space of one dimension higher simply by adding another coordinate. So you have a whole sequence of spaces, and the union of all these is then this topological modular space, mg top, that I want to consider here. So if I have a surface in an Euclidean space, well, then that has an inner product from the Euclidean space, just the normal, the usual inner product in Euclidean space that restricts to an inner product on each tangent fiber. You also had an orientation that was assumed, we considered oriented surfaces, and if you have both an inner product and an orientation, then you know how to rotate with pi over 2, so you know how to multiply by i in another way. Another way of saying that is that you simply have a complex structure on that tangent fiber. So a surface in an Euclidean space, in the oriented surface in Euclidean space produces a complex structure on each tangent fiber, that is a complex structure on the tangent bundle, so this implies that there is a map from this space, mg top, so mg. And that actually is also a desingularization of the modular space. Now the space of embeddings, if you let n go to infinity, that's a contractable space by an old theorem of Whitney, and since we are talking about embeddings, then the different morphisms, they act freely on this space of embeddings, and mg top was the orbit space of that. You can also take this embedding space and you can cross it with a surface FG, and then you can divide out the morphisms and group it as both on the surface and on the embeddings, so you take that orbit space, and that by projecting to the first factor will give you a map into mg top. And that is what we call a universal smooth FG bundle. That means that the space, and so that's what I've written down here on the last line, so it's similar to the thing I said about the classifying space of the mapping class group, namely if I consider smooth bundles over x up to isomorphism, then that is equal to the maps from x into mg top, modular homotopy. So it's just a similar but a much bigger version of what I just said about mg top. Let me try to write down a summary here of what I've said. So because of this equation here, when we have such a thing, then we usually say that that's the same as maps into a space which we call BDIF FG. So that's the first part of the summary that mg top is opposite deformation, the same as the classifying space of this diffeomorphism group. However, I also said that the components of the diffeomorphism group are contractable, and therefore that the diffeomorphism group itself is the same as the space of components or the group of components which was a mapping class group. And that also means that the classifying spaces are equal opposite deformations. That's the theorem of early needs. And I defined this map theta from the mg top into mg, and if you evaluate that on commoner groups and introduce denominators by tensing with the rational, then you get an isomorphism. 
So mg top is another way to consider a desingularization of the martial arts space. Let me again put this one on, I mean with a Mumford's conjecture, which says that the stable commonality of the mapping class group is that was a ring. And that ring is simply a polynomial ring in some even dimensional classes, kappa i, one in each degree, two i. I should also say maybe that when Mumford made this conjecture in the mid-80s, then it was already known of, I mean at the same time, at Miller and Esmerida had already proved that this polynomial algebra in the classes kappa i, that was contained in this stable commonality. So let me say what the classes kappa i are. If you have a bundle, a smooth surface bundle where the fibers have genus G, then you can take the tangents along the fiber of such a bundle. There's also a map in Corma, which changes degree down by two from the commonality of the total space to the commonality of X. So you sort of integrate out in the fiber direction. And the kappa i classes come from this map in the following way. You have this oriented surface bundle, well that's, I mean, you're oriented two dimensional bundle, the tangent bundle of the fiber, so that's the same as having a complex bundle, so you can take the first germ class of it, you can raise it to the i plus first power, and then you can push it forward. So that's what the classes kappa i are. So they're actually classes in the integral commonality, but the MAMFOTS conjecture just says that it's, I mean, it's only a statement about the rational commonality. Now maybe I'll put that over here. The next thing I would like to explain is the, is the relation of these embedded surfaces, and hence the modular space, with a series of Pontjuagin and Tom, 50 years old or a little more about the Corbottism series. But before I can explain that, I would say a little bit about what we in topology call Tom spaces. So let's start with an n dimensional oriented vector bundle over a compact manifold M of dimension D. The Tom space is very simple to define. It's simply the one point compactification of the total space. So I've illustrated that in these figures. You have the bundle space here, and now you're one point compactified, so you get something like this. Now there are some, I mean, there's three fundamental important facts that I want to point out. The first one is sort of obvious from the picture, namely that the complement of the base space M in the Tom space in this one point compactification that deforms to the point at infinity. You simply just follow the lines out. I mean, if you cut out the base, the horizontal line, then you can just deform out and be in an end, continuous in an end and infinity. So in that sense, the Tom space is like a localization of the bundle along the base space or along the zero section. The second thing I want to point out is the Tom isomorphism theorem which says that the ice-core model of the base is the same as the I plus end-core model of the Tom space. And then finally, the most important thing for my purpose is the so-called Pontiwakson-Tom collapse map. So now we consider a d-dimensional manifold in your Clinton and trustee space, an oriented sub-manifold. Before we only looked at surfaces, but this is, I also needed this more general context. So now you can surround the manifold by a little tube in the neighborhood. So there's a little open normal tube around that manifold as I have drawn here. 
This little tube is actually isomorphic to the full normal bundle which of course you can just expand linearly, I mean, out. Even here you only have small vectors, but you can just expand that out, I mean, to the full normal bundle. So the normal tube is really the same as the normal bundle of the embedding of Mg in I n plus d. Now when you have, this is, so you have an open subset E of the Euclidean space, and now if you take one point compactifications you get a map the other way. You simply send everything outside the tube E into the point of infinity. So there's a map then, what do you mean, the one point compactification Euclidean space, that's the sphere in dimension n plus d. So there simply is a map. Once you have an embedding and you have a tube around it, then there is a map that's, Pontjuac and Tom collapsed map from the sphere into the Tom space of E. That map is a very convenient way of displaying all the tangential and normal structure of this embedded surface. There's also a map in the other direction. If you have a map from the sphere into the Tom space, then you can make that transversal to the zero section and you can get a manifold out. And it's that correspondence which 50 years ago was used by Tom and Pontjuac and many more to analyze manifolds up to corporatism. But I'm going to get, come back to that. Now I want, see, we are studying manifolds embedded in Euclidean spaces, in big Euclidean spaces. And now I want to make that, I mean, to connect that up with the space of grass manians. So the space of grass manians, grass D of I n plus D, that's simply the space of D dimensional subspaces of the Euclidean space I n plus D. Over that space of linear subspaces of dimension D in I n plus D, there's a bundle. Namely, you can consider pairs of an element U in I n plus D and D dimensional subspace V in the grass manian. And then you assume that U is a signal to V. That's an interdimensional bundle over the grass manian. Now, if we take a point, now we have this situation where we have a manifold embedded in I n plus D and we have picked a normal tube, but that's actually a unique option deformation so we can add that without changing anything. Now, if we have that situation and we take a point out in M, a point X out in M, then of course we have the tangent fiber at that point. That is a D dimensional subspace. So the tangent space Tx of M is an element in the grass manian. And if I take a U out inside the tube, well, that sits over a unique element in M. So now I take a point out U which lies over the point X in M. Then that element U of this fiber in the tube that simply belongs to this bundle U, D, N, perp that I've just defined. So all in all, we get a map using the collapsed map and then going further into the transpace of this universal interdimensional bundle. Then we get a map from the sphere into the tom space of U, D, and perp. So altogether we get an element in what's called the n plus D fold loop space of this tom space. By that I simply mean it's a space of continuous maps from Sn plus D into the tom space which sends the point infinity, I mean in the sphere to the point infinity in the tom space. So each time I have a submanifold, I get a map like that. Or I should say that if I have a submanifold plus a tube, again the tubes are not important here because they are unique up to deformations. Now I could take the collection of all these tom spaces as n-variance. That's what we call the spectrum in topology. 
And this loop space of these tom spaces which I got into here, then one can take a limit and they sit inside each other, one can take a union or a direct limit of those, and that's what we call the infinite loop space corresponding to that spectrum. Now that's a huge space, but that's the kind of spaces that we are very good to analyze in algebraic to particle. And maybe I should also say that these spaces which rise like that, they are like, if you think of the space or the analogy between a space and a group, then these spaces like loop infinity, empty D, that's like the appealing groups in the set of all spaces. Now the theorem that Michael Weiss and I proved about four years, five years ago, is that that map I have got now from MG top into this limit of all these loop-down tom spaces in the limit that induces isomorphism in coromality in a certain range of degrees. So, and since this empty, there are more components in this space here, so there's, I mean, the group of components in this space is a copy of the integers, and that's why I've written it, that this space here, that induces an isomorphism in integral coromality. Now, we only knew that MG top was equal to the marginalized space of transversional coromality, so the, so the, so the, or equal to the, to the stable coromality of the, yeah, so that's, now maybe I should say it in a different way. I've already seen that the, I already told you that the, that the rational coromality of the marginalized space we're really interested in is the same as this stable coromality of the mapping class group, which I, which I wrote as the, as the coromality of B-Kammer infinity. But what we actually prove in, in, in this theorem is an integral result, so therefore Z cross the, the, the stable coromality of the mapping class group is simply the coromality of, of this space, omega infinity m t 2. So that, if we could understand what the coromality of the, of the right-hand side is, well, then we would get information about the rational coromality of the marginalized space. And as I said, there's so much theory of, I mean, in algebraic topology, around calculating the coromality, and it's very easy to do it rationally. And I've written up here what the, what the result is, namely that the rational coromality of this right-hand side is simply the symmetric algebra in the coromality of the, of the grass manian in, of two planes in our infinity, of oriented two planes in our infinity. And that is the coromality in these classes, kappa i. So, so, so this theorem provides a proof of one foot's connection. Now, let me, let me, let me, let me move into co-portation theory. Well, first I should, so I fix an integer bigger than or equal to 0. We're mostly interested in, in d equal to 2 if we are talking about the marginalized space. But in general, about 50 years ago, when considered the following of Tom and Pontiakian and many more, considered the following equivalence relation on closed oriented d minus one dimensional manifolds. So we call two of them equivalent if there is a compact manifold of one dimension higher, which has two boundary components. One of them is one of the manifolds and the other one is the other manifold. But things are oriented here and this with minus m0, I simply mean m0 with opposite orientation. Now, the equivalence sign there that, that is that, is that there is an abstract diffeomorphism from m1 into one boundary component of wd and an abstract diffeomorphism from the other boundary component into md. 
Now, we of course, it's completely impossible to enumerate, I mean all diffeomorphism classes of manifolds, but with this rather coarse equivalence relation, it turns out that one can completely calculate the equivalence classes. That's what we call the oriented bottom groups and, and sign for it is that it's omega SO d minus one. So that has been completely calculated by, by Tom and Wal and building also on work of Pontiac in particular this collapse map that I talked about. I should say maybe that the, that these borders group that they vanish in small dimensions, it's, I mean, up to, I mean, in dimension one, two, three, zero dimension four, there should be a percentage that's a, that's a complex projection space of the real dimension four. Now, this here doesn't involve, this definition doesn't really involve the diffeomorphisms. But now I want to go back, see, we, we considered surfaces where the, which was embedded and, and that was the orbit space of the space and embeddings. More to the diffeomorphism of this classifying space of the diffeomorphism groups. So now I wanted to take the diffeomorphisms of the objects of the, of these manifolds into consideration. So I define a category. I replace the abstract manifolds by embedded ones. So I've written down what the, what the objects are here and maybe at the same time I should put on this picture. It's easier to, to see than the picture. So I have four circles embedded in one place and I have also here in this case four circles embedded in, in, in, in another infinite dimensional place. And a morphism between these is simply again an embedded, in this case an embedded surface which goes from the, I mean from, from, from this one slice defined by the real number A naught to another slice defined by the real number A1 which is bigger. So that's the sort of a funny category, this embedded corporatism category. But the picture of it is there. Composition is simply just taking union. Now there is a way, each time you have a, you have a category or a small category with the objects and morphisms, a total set of objects and morphisms are set. Then you can define a space out of that. So I'm just writing that down in abstract terms. If I have an arrow, or maybe first I should say it's a, yeah, if I have an arrow that corresponds to one simplex in this classifying space. So the points in this classifying space B, C or the vertices corresponds to the, to the, to the objects in the category. If I have two morphisms in the category that is composable, then I get a two simplex in, in this space, etc. So I build up a space simply by, by, by looking at, at the, at the composition of, of arrows like that. Here are some examples of that construction. If the objects consist of just a, a single point and the morphism of that point is a group G, then this construction simply gives a good version of the classifying space BG. So that's a space classifying principle fiber bundles or in case where G was, was a diffeomorphism group of FG. Then it also, then the same space classifies these surface bundles with fiber of, of genus G. If I do this for this co-partisan category, I've just defined as in the pictures here. Then, then I can take the components of that space and that's just the space considered by, by, by von Schubert's, the von E interesting and, and some, etc., namely these, these space or corporalism classes. 
Now it's the theorem I, I first want to state here is a theorem which involves this Gang of 4 that I, that I started with and it simply says we can, we can determine the full space, or as I've written it down here, we can determine, I mean just written down what the loop space, the point of loop space of this B of the category is, and that's the space I defined before, which was the loop space of these universal bundles over the Cresc-Mannion. So that's the first result I want to come to. The inspirations that consider this corporatism category, that comes from field theories. One can wonder why it was not considered before, but sometimes you need inspiration from other sources. So when you could say that a conformal field theory is simply a function from this category of embedded corporatism into some category of Hilbert spaces. And this there might be of some help in trying to understand various field theories, but that has still to be worked out. This theorem here potentially gives you information about different morphism groups and motorized spaces. Because morphism in this category, and maybe I should put the picture on again, morphism in this category, well that gives, that gives because of the definition of the space that gives you one simplex in B gamma g, or you can say it gives you a curve in B gamma g. So there's a map from the morphism space into the loop space of B gamma g, because if you have a morphism or path in this space, then the two ends of that morphism, they are the same element in the corporatism group and you can just connect them to some base point and that way you get a map from the space of all morphisms, all these embedded surfaces here in big Euclidean space into the loop space of the category. So one could ask, is there some big WD so that this space of morphism, that was the same as the offshore deformation as or I mean it was a union but it contained these classifying spaces of different morphism. So there's a map here and can we find a large D so that some large manifold so that this map is close to being an equivalence. And actually the Harris stability theorem provides such a map in the case where its surfaces we consider, where the borders here are exactly as in the picture. Nothing similar is known in higher degrees and that of course is one of the real problems in the subject. I would like to go back a little bit now to Harris stability theorem. So now we consider surfaces I mean with boundaries. We have B boundary surface let's say I mean oriented surface. So again we can take the different morphisms, the different morphism group of that surface which is the identity on the boundary. And we can take the group of components and that's called the mapping class group gamma Gb. And again you have a theorem like the early result for closed surfaces which says that the components are contractable so that the classifying spaces are homotopy equivalent they can be deformed into each other. Now what Harris considered was that if you have a surface of genus G here with B boundary components then you can add a surface here with one more genus just glue it on or you could also close off the surface. So if you have a different morphism of this surface then you get a different morphism because it's the identity on the boundary you get a different morphism of this glue on simply by adding the identity. 
So this gives maps like this here or we could take and then maps on components and then Harris theorem simply is that these maps are induced isomorphism on an integral chroma in a certain range. So now you can define the B of gamma infinity B to be the direct limit where you keep on gluing, you keep gluing on surface like this you keep these things glued on here. And that you can take the union of the limit, direct limit of all these spaces you call that B gamma infinity B. And the chromaity of that space that is what I've defined before as the stable chromaity. Now in order to make use of that theorem has to define a certain subcategory of this category of embedded surfaces. Namely the category has the same objects number of circles or D minus one dimensional manifolds but I've drawn in the case of surfaces and the subcategory but it has fewer morphisms namely the morphisms you consider only have the property that each connected component has some outgoing boundaries. So these things here that I crossed over are not allowed. It turns out that this subcategory has the same classifying space of the deformation. So this theorem A and B together says that the loop space of this restricted or the classifying space of this restricted category, let's now take D equal to two, is the same as this loop down Tom's space which I've listed the rational chromaity of it that was this polynomial algebra in the kappa classes. This was not the way that Michael Weiss and I have written and proved the theorem but it provides a new proof of the theorem that there is a homology isomorphism between this space B gamma infinity B whose homarity is a stable chromaity and rationally the stable chromaity of the marginalized space and then this loop space which was here. All right. I would like to... I'm losing some pages. Well, it doesn't really matter. There are some variations of these things I have talked about that I would like to end with. One of them is that, well, so far I have only considered manifolds embedded in Euclidean space and in particular surfaces embedded in Euclidean space, subsurface of Euclidean space. But you could also consider soft spaces embedded in Euclidean space together with a map from that soft space into a background space X. So, in all these variations you can consider cobaltisms together with some extra structure, I mean spin structure, you can consider unoriented morphisms and in all cases theorem A and theorem B which together determines the classifying space of the cobaltism works out fine. The problem is stability. The problem is to find a... is to find this big manifold which we can do by... by heres stability which approximates this space. But now there are two things I would like to end with. So, two variations of the problem. One is that you can consider a surface sigma embedded in Euclidean space of genus G and you can consider just a continuous map from that surface into X. So, that's sort of a topological version of the... of the... of the chromophilic and martyrized space where you have surfaces and a pseudo-hormophic map into the... into a background manifold which was then assumed to be... to be a simplectin manifold. But at least we could... we weaken that and simply consider the... just continuous map into a space X. Now the... in that situation... there's a theorem of... of Coen and myself, a very recent result which says that if X is simply connected, that is if every loop in X contracts can be deformed to a point, then in that case the chromophilic of this... 
of this empty top X is like in heres situation independent of the genus G if... if... if G is big compared to the dimension you consider. And the other theorems, A and B, they work well so... so therefore you also have this... the second thing that you can... that if you let the genus go to infinity, then you can consider the stable chromophilic of these spaces, then that can be computed. And the result is that it's this spectrum we... we take this... these term spaces we had before, we take the... the smash product with X and then we loop that down. That again is a space that one can... that one can handle very well in algebraic topology and in particular one can calculate its rational chromology. Now the... the components are a little different here. It involves... I mean the... the second chromology group of... of X, but the chromology of a single component, the centralЕТ rational, again is similar to... to Monkrotz conjecture, namely it's a polynomial of three commutative algebra in the chromology of Cp infinity cross X, we take that in dimensions bigger than 2 and then you shift it back two degrees in terms of that per cube. So... so... So that's the result. So this topological Gromov-Wittens space, or this weakening of the Gromov-Wittens space, can be analyzed. And I mean, one could say much more about it, but here at least are two results. The second thing, I mean, the thing I want to end with is the theorem of Saint Galatius about the automorphisms of a free group on n letters. So I call that odd n. Now I consider also all maps, all continuous maps from a sphere to a sphere, which sense the point of infinity to infinity. That's another space. You can let n go to infinity. You take a union of all these spaces in a suitable way. Then you get a space which we call loop infinity. S infinity, that's a space you have here. And the theorem of Galatius is that there is a map from the union or the limit of all these automorphism groups. You take the classifying space of that, and then there is a map like this, and that induces an isomorphism on cohomology. Now, Sarah proved many years ago that the homotopic groups of the space, loop infinity, S infinity, that's what we call the stable homotopic groups of spheres, that they are finite in dimension bigger than 0. The components of the space is z. But in particular, therefore, a corollary of this result of Galatius is that the coromar of b of the automorphism group, of the free group, that that tenses with the rational is 0. So that's completely similar to a standard conjecture. And this is sort of the standard conjecture about the coromar of the automorphism group of the free group. That has been a problem for many, many years in geometric group theory. So let me end by pointing out that the way Galatius proves this is very similar to the way that I've talked about the proof of the monfort conjecture. Maybe now you consider graphs embedded in r infinity. Where the source graphs, so you replace this corbotism category by a category of embedded graphs. So here's a typical morphism in that graph. There are stability theorems which correspond to Harris stability theorem proved by Hatser, Vulkman, and Vahl, which says that if you take the corresponding positive boundary category, so where you rule out, I mean this thing in the middle here, then that is homology equivalent to z cross b, r infinity. 
You might ask, so this is the first example where you can use, I mean this is very exciting because you could also ask for, in higher dimension now, for manifolds with singularities. So graphs are of one manifold with singularities. And theorem a and b, I mean Galatius proves that the analog of theorem a and theorem b works in this setting. You might well ask, we use the tangent bundles and the normal bundles in the case of actual manifolds. What is that replaced by here? Well, that's replaced by Gromers' theory of flexible sheets and there's also a hard calculation involved. The basic point is that instead of considering a tangent basis and the normal basis at given points, you take at a given point, then you take a small neighborhood of that point and you look at what part of the graph is in that manifest, I mean it's in that open set. That's called scanning, so it's like you take a point and then you put a microscope on and you see what part of the graph is in there. And that way, I mean that was, I mean then you get into the machinery where Gromers' theory of flexible sheets work. Well, I mean the time is up, so I'll end here.
This talk aims to explain what topology, at present, has to say about a few of the many moduli spaces that are currently under study in mathematics. The most prominent one is the moduli space Mg of all Riemann surfaces of genus g. Other examples include the Gromov–Witten moduli space of pseudo-holomorphic curves in a symplectic background, the moduli space of graphs and Waldhausen’s algebraic K-theory of spaces.
10.5446/15960 (DOI)
Dear colleagues, ladies and gentlemen. On behalf of the Organizing Committee, I would like to welcome you all to the first scientific session of this ICM. It is devoted to the laudationes for the Fields Medal and the Rolf Nevanlinna Prize recipients 2006. The laudatio for the Carl Friedrich Gauss Prize for Applications of Mathematics is organized as a special lecture, the Gauss Prize lecture, to be delivered by Professor Hans Föllmer, Humboldt-Universität zu Berlin, tomorrow in this auditorium at 2 p.m., with the title "The work of Kiyoshi Itô and its impact". We shall start with the laudatio for the Fields Medallist Andrei Okounkov, by Giovanni Felder, ETH Zürich, Switzerland. Thank you. Ladies and gentlemen, Andrei Okounkov's initial research topic is in representation theory, particularly in its combinatorial and asymptotic aspects. This theory deals with very basic and central objects in mathematics such as partitions. A partition of an integer n is just a non-increasing sequence of integers adding up to n. And here you see on the left a partition drawn as a Young diagram: the numbers into which n is decomposed are the lengths of the rows, so this is the first row, the second row, the third row; it is a partition of 28. And as you see from the caption, which I took from a paper of Nekrasov and Okounkov which is otherwise written in English, this Young diagram is drawn in the Russian way, which is flipped and tilted with respect to the standard way. So this is one of the points of view that are important in Okounkov's work: to see things in the right perspective. And it has several advantages; for instance, you can see the diagram as the graph of a piecewise linear function, and there are many other important aspects which I don't have time to discuss. So partitions are a basic combinatorial object and they are at the heart of representation theory, in particular of the symmetric group, but they show up in many other subjects. In the symmetric group they label both conjugacy classes and irreducible representations. Partitions also have three-dimensional analogues, which you can see here on the right; they are called three-dimensional partitions or sometimes plane partitions. One of the main ideas of the Russian school in which Okounkov grew up, and to which he made important contributions, is that one should consider partitions as probabilistic objects: if in representation theory you see a sum over partitions, which you see all the time, you should think of it as an expectation value for some natural probability measure on the set of partitions. So after developing important techniques and obtaining deep results in this subject, Andrei Okounkov turned to the rest of mathematics, and he saw partitions everywhere. And this is not so surprising, as they are such a basic object that you can meet them everywhere. What is astounding is that wherever there is a partition in the game, Andrei was able to use his insight and his profound knowledge of mathematics to obtain incredible results, no matter what the subject was, in many areas of mathematics. So we will explain this in some examples of his more recent work, and one will be about Gromov-Witten invariants. This is a slightly technical slide. Here you consider maps from a complex curve, a one-dimensional complex object, to a projective variety or manifold, and you consider the so-called moduli space of stable maps. So you have a curve C and a variety V.
And you have marked points P1 and Pn on the curve. And you consider all holomorphic maps from C to V and you fix the class of the image. And you consider these up to certain equivalents and stable means that you consider also singular objects in order to get a compact space. And Gromov-Witten theory is about intersection theory on this space. And so there are two classes of two natural classes of homology classes in this space. One comes from the target, the manifold V, you send your curve in and you can pull back this map at this evaluation point. And the other class has to do with the curve itself. And so it's the first-turn class of the cotangent bundle at each of the marked points. So you have these classes and the problem in the intersection theory is to compute integrals over this class. So you take some powers of these first-turn classes and these classes which comes from the target. So and the left part is coming from some notation which comes from quantum field theory. That's where these things were originally coming from. But they are a very classical object. For instance, if this Ki, the powers of the psi classes are zero, then it's something which was studied classically. So you consider, for instance, curves in the plane of certain degree which pass through marked points, given points. And you count them. So it's a numerative geometry. But so this theory is already deep and non-trivial if the target is just a point. And it is related to integrable systems as with and conjectured and conceived proved. So you can consider a certain generating function of this object. And this is actually a tau function of an integrable hierarchy, the KDV hierarchy. And it has many important properties which can be formulated in this way that there are some differential operators in this variable, which kill this exponential of this function. This can be translated into a recursion relation which can be used to compute them all. So this is one class of results. If the target is a point, there is another class of known results which deals with a case of genus zero curves, so rational curves, and is related to quantum comology. But the challenge is to understand this for general V and all general. So, and this is where Okunkov and Pandari Pandagi made a very important contribution. The first thing they did is they gave an exhaustive description of the Gromov-Witton invariance for curves V. And the second target is also curved. And there you can speak about degree D. And one of the basic formulas of Okunkov and Pandari Pandeg is this one. It gives a formula, a finite sum of this invariant if you take the Poincare-Duel of a point as a class on the target. And some of the partitions, so lambda is a partition, a partition of D, of the degree of the map. And so G of V is the genus of the target. And there are some functions on the space of partitions that I will describe later which are very interesting, which are this P of lambda. Well, so the result is called Gromov-Witton-Hurvitz correspondence because Okunkov and Pandari Pandeg were able to relate this to a classical formula due to Burnside for Hurvitz numbers. So there it is a combinatorial problem. You count all possible covering, ramified covering of your curve with prescribed monodromy adoramification points that you fix. And this can be solved with the theory of the symmetric group and you get a formula like this. So this K plus 1 factorial should not be there. Sorry about that. But it's a very similar formula. 
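As an aside (not from the lecture), the flavor of such Burnside-type counts is easy to illustrate in the simplest, unramified situation: summing (d!/dim lambda)^(2g-2) over the partitions lambda of the degree d counts the not-necessarily-connected degree-d covers of a genus-g surface, weighted by automorphisms, with dim lambda computed from the hook length formula. A small Python sketch along these lines, with invented function names; this is only a toy version and not the formula on the slides:

```python
from math import factorial

def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def dim_irrep(lam):
    """Dimension of the irreducible S_n representation labelled by the
    partition lam, via the hook length formula."""
    n = sum(lam)
    hook_product = 1
    for i, row in enumerate(lam):
        for j in range(row):
            arm = row - j - 1
            leg = sum(1 for r in lam[i + 1:] if r > j)
            hook_product *= arm + leg + 1
    return factorial(n) // hook_product

def unramified_cover_count(d, g):
    """Weighted count (by 1/|Aut|) of degree-d covers of a genus-g surface,
    possibly disconnected: sum over partitions of d of (d!/dim)^(2g-2)."""
    return sum((factorial(d) / dim_irrep(lam)) ** (2 * g - 2)
               for lam in partitions(d))

print(unramified_cover_count(2, 2))   # prints 8.0
```

For instance, for d = 2 and g = 2 the sum gives 8, which is the sixteen homomorphisms from the genus-two surface group into S_2, divided by 2!.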
So the formula of Okunkov and Pandari Pandeg can be understood as a substitution rule. So you take Hurvitz numbers and you replace a function F, which I will describe by another function P. So this is this, this displacement reflects a very deep fact in representation theory of the symmetric group of which this is a geometric realization as was shown by Okunkov and Pandari Pandeg. And it is in the theory of shifted symmetric functions. So this FK and PK, which appear in the two formulas, are basis of the algebra of shifted symmetric functions. So this was a theory which was created by Kirov and Olshansky and to whom Okunkov made a contribution. So it is given by the Fourier transform in this representation theory sense. So you have the set of all partitions of N and you consider all partitions of all integers and the center of the group algebra of the symmetric group. And one definition of the algebra of shifted symmetric function is the image of the Fourier transform. So you have the central elements and if you evaluate the central element in a reducible representation, you will get a multiple of the identity. So this multiple of the identity is a function on the set of representations. So P, the set of partitions is a set of representations of all symmetric groups or the infimsymetry group if you like. And so you get such a Fourier transform map and it's a non-trivial theorem that this image consists of shifted symmetric functions. So polynomial is infinitely many variables which are the length of the rows of the partition. In variant under permutation, not of lambda i, but lambda i minus i. This is shifted. So in the Hurwitz formula, you get just the image under the Fourier transform of the central elements associated to cycles. So if you have a cycle, it's a conjugacy class and its image is a function on partitions. And in Gromov-Witten theory, you get this regularized power sum. So if you want to do power sums in shifted symmetric theory, you would try to start with this first term in the sum. It's divergent, so you regularize, but you have to subtract and add infinity. This is what you do. And the fact that this term and this term cancel is a generalization of this Ramanujan formula if you like. Okay, so for genus 0 and 1, Okunkov and Pandaripande gave more explicit descriptions. And one consequence by result of Bloch and Okunkov is that if you take this generating function for genus 1 target, you will get a quasi-modular form, namely polynomial in this Eisenstein series. Okay, and maybe so there is also a version for the asymptotics as d goes to infinity, which is related to rational billiards, but I will not explain that. So there are other use of partitions, which I will just very rapidly sketch. It has to do with random matrices and longest increases sub-sequences of random permutations. Instant on sum, so anticellular connections, and the ring structure of the quantum homology of Hilbert's scheme of points on the plane. Okay, let me talk about something completely different now. So maybe I don't have time to talk about this. So this is gold. It's a gold crystal at 1000 degrees Celsius. It's a very tiny micrometer scale. And you see that a gold crystal at this temperature, I don't know if you see it very well, so begins to develop facets. So there are some parts of it which are flat, and they reflect the crystal structure of the gold. But it also has parts which are melted in some sense, which are not yet flat. 
So this is gold, and the challenge is to describe these boundaries of the facets and the correlation functions and so on. So this is a problem in statistical mechanics. And there is a nice model, which is again related to partitions, which is a dimer model in relation with random surfaces. So a dimer configuration. So to define that, you need a bipartite graph G, a graph with vertices of two colors, with edges connecting, vertices of different colors. And the dimer configuration, or perfect mapping, matching, is a coloring of the edges, so that it's a subset of the set of edges, meeting every vertex at exactly once. So for instance, if you take a dimer configuration, a planar graph, like a Honecom lattice, and you replace these edges by dominos with the shape of rhombite, you will get a picture like this, which looks like a random surface. So yes, but you can take other graphs. And in fact, Kenyon, Okunkov, and Sheffield consider dimers on doubly periodic bipartite graphs in the plane with positive weights, Boltzmann weights associated to the edges. So the question of statistical mechanics is then to consider the asymptotic behavior of the measure, given by these weights, product of weights of occupied edges, in the limit if you take a lattice to infinity. And so Okunkov and collaborators were able to get a beautiful picture, which is very important for the very reminiscent of the physical situation one considers, but also established in a surprising relation with very modern real algebraic geometry curves in the plane. So the basic object is a spectral curve that you can make out of the weights and the connectivity matrix of the graph. I will not describe the details. And this spectral curve is not any curve, but what they prove is that it belongs to a very important class of curves, which are called Harnak curves. They were studied classically by Harnak, but they appeared recently in modern real algebraic geometry. And so one definition of this class of curves involves the amoeba of a curve, which is the curve in the complex two-dimensional plane, is an image of C under this strange map log, log of absolute values of the coordinates. And one definition is that this map is 2 to 1 on the interior of the image. This is one definition due to Michalkin, but it's equivalent to other definitions, and in particular a complete topological classification of these curves is known. So in mathematics, the consequence of the work of Kenyon and Dokunkov is that they have a complete description of the modular space of these curves in terms of Boltzmann weights for these dimer models. This is the mathematical consequence, and there are also physical consequences. Here you see an amoeba, looks a little bit like an amoeba, except it has 10 tackles. And so what they give is a complete description of the phase diagram of these models, and one way to describe it is that these Boltzmann weights can come in two parameter families. Physically one says that one introduces a magnetic field which has two components, X and Y. And then depending on these X and Y, the statistical mechanical system has different behavior, like the height of this, which describes these surfaces, has different behavior of correlations as the points go to infinity. So this is the picture they get. It is given by this amoeba of the Harnak curve, the spectral curve. And so this is the X, Y parameter, the magnetic field, and you will have different behavior of the frozen phase in the unbounded complement of the amoeba. 
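As a small aside (not from the talk), the underlying measure is easy to write down on a toy example: on a finite bipartite graph with positive edge weights, each dimer configuration, that is each perfect matching, gets probability proportional to the product of the weights of its occupied edges. A brute-force Python sketch, with an arbitrary made-up weight matrix:

```python
from itertools import permutations

# Toy weighted bipartite graph: w[i][j] is the Boltzmann weight of the edge
# joining black vertex i to white vertex j (0.0 means "no edge").
# These particular weights are arbitrary and purely for illustration.
w = [[1.0, 2.0, 0.0],
     [1.0, 1.0, 1.0],
     [0.0, 3.0, 1.0]]

def matchings(weights):
    """Yield (matching, weight) for every perfect matching; the matching
    pairs black vertex i with white vertex sigma[i]."""
    n = len(weights)
    for sigma in permutations(range(n)):
        weight = 1.0
        for i in range(n):
            weight *= weights[i][sigma[i]]
        if weight > 0:
            yield sigma, weight

# The partition function Z is the sum of the weights of all dimer
# configurations; each configuration then has probability weight / Z.
Z = sum(weight for _, weight in matchings(w))
for sigma, weight in matchings(w):
    print(sigma, weight / Z)
```

For the doubly periodic planar graphs in the theorem one does not, of course, enumerate matchings by brute force; Kasteleyn's method computes such weighted matching sums for planar graphs via determinants, and it is the asymptotics of those determinants that produce the frozen, liquid and gaseous phases of the phase diagram described next.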
And in these islands you have Gazio's phase and his liquid phase. So they are, for instance, the liquid phase is characterized by having unbounded correlations. So another consequence is the calculation of equilibrium phases, and so the prediction of the shape of these crystals. And this is also given by a very important quantity in this three-layer algebraic geometry, which is a Ronking function of the curve, which is given by this integral. And this function has the property that it has locally flat pieces. This is a graph of the function where the flat pieces are omitted. So this would be flat. Then there are round pieces. And so these functions explains the occurrence of facets. So Kenyan and Okumkov consider also different boundary conditions for this dimer models, and they obtain beautiful pictures, always related to algebraic geometry, like the cardioid is a rational curve, which is obtained by taking this kind of boundary condition in this Ronbeidomino tile. Okay, so I will conclude with this picture. Thank you.
Laudatio on the occasion of the Fields medal award to Andrei Okounkov.
10.5446/15959 (DOI)
Laudatio for the Rolf Nevanlinna Prize, awarded to Jon Kleinberg, by John Hopcroft, Cornell University, USA. It's a great pleasure for me to be here today and present to you some of the work of Jon Kleinberg. Jon's research has helped lay the theoretical foundations for the information age. He has developed the theory that underlies search engines, collaborative filtering, organizing and extracting information from sources such as the World Wide Web, news streams, and large data collections. Jon's work is so broad that all I'm going to be able to do today is just to sample. And I've picked five areas that I believe represent some of his important contributions. And the five topics that I picked are his work on hubs and authorities, on small worlds, on bursts, on nearest neighbors, and on collaborative filtering. And let me start with his work on hubs and authorities. In the early 60s, people in library science developed the vector space model. And what they did is they took a document and they represented it by a vector where each component of that vector corresponded to one word. And they counted the number of occurrences of each word and used the vector that they got to represent the document. And so you can see in this particular document that the word aardvark does not occur, abacus does not occur. But antitrust appears 42 times, CEO 17, Microsoft 61, Windows 14. And you can probably guess what this document is about. Now early search engines used this vector space model to answer queries. And one of Jon's contributions was that he recognized, when the World Wide Web came along, that there was additional structure that could be used in answering queries. So the World Wide Web was represented as a directed graph where web pages corresponded to nodes. And if there was a link from one web page to another, then the graph contained a directed edge. Now Jon introduced the concepts of hubs and authorities. An authority is a web page on a particular area. So if it was on automobiles, an authority might be an article on a specific aspect of an automobile. And a hub is a web page that directs you to various authorities. So a hub page might belong to Car and Driver, which would have pointers to many articles dealing with various aspects of automobiles. Now in the mathematical definition, an authority is a node that has a large number of hubs pointing to it. And a hub is a node that points to a large number of authorities. And you notice that there is this circularity in the definition. And so to make this a useful concept, Jon had to develop the mathematics necessary to break the circularity. And what he did is for each web page he assigned two weights. One of them is a hub weight and the other is an authority weight. And so I've shown a simple example here where on the right I show the authority weights for a certain set of pages, and on the left, the hub weights. And then what he did is he started an iterative procedure. He took each node on the right and adjusted its authority weight by summing the hub weights associated to the hubs pointing to that authority. So if I do that, you notice that the first authority now has a weight of four, the next three, three, and so on. Then what he did is he adjusted the hub weights by summing up the authority weights of the pages they point to. And so he got a set of hub weights shown here. Now, to prevent these weights from just growing, at each stage he normalized them so that the sum of the squares adds to one.
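As an illustration of the iterative procedure just described, here is a minimal Python sketch of the hubs-and-authorities update on a small directed graph; the example graph and the variable names are invented for illustration and are not taken from the talk.

```python
import numpy as np

# Adjacency matrix of a tiny made-up web graph: A[i, j] = 1 if page i links to page j.
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
], dtype=float)

n = A.shape[0]
hubs = np.ones(n)
auths = np.ones(n)

for _ in range(100):
    # Authority weight of a page = sum of hub weights of the pages pointing to it.
    auths = A.T @ hubs
    # Hub weight of a page = sum of authority weights of the pages it points to.
    hubs = A @ auths
    # Normalize so that the sum of squares is one, as described in the talk.
    auths /= np.linalg.norm(auths)
    hubs /= np.linalg.norm(hubs)

print("authorities:", np.round(auths, 3))
print("hubs:       ", np.round(hubs, 3))
```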
And if you look at this method, you quickly recognize that the iterative procedure converges, so that the authority weights are the coordinates of the principal eigenvector of A transpose A, where A is the adjacency matrix of the directed graph representing the World Wide Web, and the hub weights are the coordinates of the principal eigenvector of A A transpose. Now, the way you could use this in doing a search is by the following method that he described in his paper. He used a search engine such as AltaVista, which at that time used the vector space model, to answer a query. And he would take, say, the 200 most relevant web pages returned by AltaVista. And then he recognized that you had to add to this set of web pages, because at that time the web page for a search engine did not have the words "search engine" on the page. So if you put in the query "search engine", you would not get the search engines of that day. So what he did is he expanded this set of 200 web pages by looking at the pages they point to and the pages that point to these. And this would maybe give him on the order of 5,000 pages from which he would try to return the answer. And so what he would do then is he would take this small graph, find the hubs and authorities, and use these numbers to rank the web pages. And this had a major impact on all modern search engines today. And one thing I should do, because you're probably wondering, is explain how this differs from what Brin and Page did when they created Google. Well, they did something which is very similar but has a major difference. What they did is they said, let's take the World Wide Web and let's do a random walk on the web. And what we will do is we will look at the stationary probabilities of this random walk. Now they had to solve a problem, because if there is a node, such as the one on the right there, that has no outgoing edges, when your walk reaches that vertex there's nowhere to go and so you lose probability. And in fact, they had to solve the more general problem: there might be a small connected component out there with no outgoing edges, and all of the stationary probability would end up in that component, and it would not give you any way of ranking the other pages. So what they did is they said, with small probability epsilon they would jump to just any random vertex, and with probability 1 minus epsilon they would take a random step along an edge of the graph. And that's sort of equivalent to adding a vertex, and with probability epsilon you go to that new vertex and then take a step back to an arbitrary vertex of the graph. Now the difference between these two methods is the following. To do Brin and Page's algorithm, you've got to calculate the principal eigenvector of a graph which has on the order of 25 billion vertices. Whereas in the method that Jon developed, you have a much smaller graph in response to your query, and you only have to calculate the principal eigenvector for a graph with about 5,000 vertices, something which is very easy to do. So that was one of his pieces of work. It has had a major impact on all search engines today. It has also created large numbers of industries, because people want to figure out how to increase the page rank of their particular web page and things like this. But it has also had a profound effect on all of us that teach theoretical computer science, because this work has led to a large number of theoretical papers that those of us that teach simply cannot ignore in our classes.
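For comparison, here is a minimal Python sketch of the random-walk ranking with the epsilon teleportation step described above; the tiny graph, the value of epsilon, and the iteration count are invented for illustration, and dangling pages (nodes without outgoing links) are deliberately left out of the example.

```python
import numpy as np

# Same made-up link structure: A[i, j] = 1 if page i links to page j.
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
], dtype=float)

n = A.shape[0]
eps = 0.15  # with probability eps, jump to a uniformly random page

# Row-stochastic transition matrix of the plain random walk;
# a page with no outgoing links would need special handling (none occur here).
out_deg = A.sum(axis=1, keepdims=True)
P = A / out_deg

# Power iteration for the stationary distribution of the epsilon-smoothed walk.
rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = eps / n + (1 - eps) * (rank @ P)

print("random-walk ranking scores:", np.round(rank, 3))
```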
So let me move on to another example of his work. And this time I'm going to pick a piece of work referred to as Small Worlds and it's had a major impact in the social sciences. Back in 1967 Stanley Milgram did some experiments where he calculated sort of the distance between any two people in the United States. And the way he carried out this experiment is he would pick a person who he would call the source, say in some state such as Nebraska, and he would pick a target person in another state, say possibly in Massachusetts, and he would tell the person in Nebraska the name of this person in Massachusetts, give him their address and occupation, and ask the person to start a letter moving towards this target. But the rule that he asked is that you could only send the letter to somebody that you know on a first name basis, and you're to pass the same set of instructions onto that person. And so the letter might move to a close neighbor, maybe then to the next door neighbor of that person, and then to a distant relative, and then to a close friend, and so on, until it reached the person in Massachusetts. And what Milgram observed is in those cases where the letter actually arrived, the number of edges along the path was between five and six. And from this he concluded that the distance between any two people in the United States was only five or six steps. And this created a large amount of research in the social sciences where they studied the interaction between people and various social groups. Now implicit in this experiment is not only the fact that there is a short path between two people, but we have a way using only local information of finding that path. Now this research became a lot more rigorous. First with some work of Watts and Strogatz where they made the model a little bit precise, they started with a graph where I've shown the vertices around the circumference of a circle, and they put an edge from each vertex to each of its adjacent vertices, and in fact they put edges to vertices which are a distance one or two or three around the circle. So they got a graph something like this. But in this graph you will notice that if you want to find a short path from a vertex on one side of the circle to a vertex on the other, and there are n vertices on the circle, the path is going to be about n over four hops. So what they did is they added to the model a few random edges, and then they were able to prove that with high probability any two vertices were connected by a short path. But what they didn't do is they didn't consider the question of how do you find the path, and that's where John's research came in, and he fundamentally changed the nature of this research. Now John used a slightly different model. He used a two-dimensional grid where each vertex has a short path to each of its four neighbors, and then in addition at each vertex he added one random long-range path. And we'll have to talk a little bit about how he put in that random edge. But first let me go back and just point out that early work following that of Stanley Milgram really focused on the existence of short paths. And the fundamental change that Kleinberg did in this field is he asked the question, how does one find these short paths between two neighbors using only local information? Now John, when he put in these long-range edges, had to decide with what probability he would put in a particular edge. 
And so what he did is for the long-range edges he put a random edge from a vertex, and the probability of that edge going from a vertex u to a vertex v was given by some constant divided by the distance from u to v raised to the arth power. And the constant is just there so that you can normalize some of the probabilities adds to one. And John looked at this probability distribution for different values of r, and the surprising results that he came up with was first of all that if r is equal to 2, then he was able to show that there's a polynomial time algorithm using only local information for finding short paths. However, if r was less than 2, there would be many long random edges, and there would be short paths, but there was no efficient algorithm using only local information to find these short paths. And similarly he showed that for r greater than 2, where there were fewer long-range edges, there was still no efficient algorithm to find short paths. So what he was able to prove is the only time that you can efficiently find these short paths is in the case where the value of this parameter r was exactly equal to 2. I was going to try and go through one of these proofs, but I think what I'm going to do is just mention the very outline of it because I'm going to run out of time otherwise. So what he did, and this is the proof that you can find a short path for r equal to, he looked at the distance from the current vertex to the destination, and he said the algorithm would be in phase j if the distance to the destination was in the interval from 2 to the j to 2 to the j plus 1. And so you can see that since there are order n or n squared vertices in that graph, depending on whether n is the length of a side or the area, that there's only log n phases. And if you can show that you are going to get out of a phase within log n steps, then the algorithm would take time at most log squared n. And I think what I'm going to do is I'm going to just skip over the proof, and so I can just suggest a little bit about some of his other research. But what I will point out is that this research influenced, well it certainly influenced research in the social sciences in a fundamental way, but it also has had application in many other areas, and I just mentioned one of them in peer-to-peer file sharing systems. The other thing that I will mention is that if you look at real data sets, many of them have this quadratic decrease in probability, which is essential for having efficient algorithms for finding these short paths. Let me quickly move on to John's work in detecting bursts in data streams. If you have a data stream, you can organize the information by many different ways by the topic, by the time something occurred, by its frequency or other parameters. And John observed that bursts of activity provide structure for identifying or organizing data. Now what John did is he developed a model for a data stream, and then using this model showed how to detect bursts. And John's model is, he said that the gaps between events satisfy a probability distribution of the form p of x equals some constant times e to the minus alpha x. And in this model, the expected value of a gap is 1 over alpha. Now John created a model which had a number of states, in fact an infinite number of states, and I've labeled them q0, q1, q2, and each qi has a distribution for arrival rates, and in q0 this rate is 1 over g, and as you go up to higher numbered states, the rate increases by a scale factor s. 
And in each state there are two transitions. You can either go to a higher numbered state, one higher numbered, or one lower numbered. And he charged a cost every time you went to a higher numbered state. And what he did then is he developed the mathematics to find the best fit, the model, the parameters of the model, the best fit for a particular data stream and what the state sequence would be. And then looking to see when you're in high states, this tells you when there is a burst. He applied this to a number of areas. One of them was two important conferences in computer science, and he looked at the words that appeared, and sure enough what you can do is you can see how various intellectual areas became important and then ceased to be important with time. And if you look at this list, it very much agrees with what I believe were important areas at times. Since I'm very short on time, let me just quickly mention his work on nearest neighbors. One fundamental problem is if in your high dimensional space, say d is equal to 10,000, and n is some large number, say a million points, and these points represent, say, documents or webpages, and you have a query, which is also a vector in this space, and what you would like to do is find the closest point to your query. And what you would like to do is you'd like to do this with an efficient algorithm. There was a large amount of work in this area, but what John was able to do is beat, get much more efficient algorithms by random projections, by projecting points onto a line. And one of the things you would hope is if two points, well, certainly if two points are close together, when you project them, their projections will be close together. However, you could have two points which are far apart, which end up to be coincident because you happen to project them along the line through the two points. And what John did is using some very sophisticated mathematics, showed that if you do enough projections that you can indeed, with high probability, find the nearest neighbor. What I'm going to have to do is apologize to him for not actually covering even five of his papers, which is just a small portion of his work. But what I should do is conclude, since I'm out of time, but let me just go to my conclusion slide. His research has impacted our understanding of social networks, of our understanding of information, our understanding of the worldwide web. It's influenced every modern search engine and thus the daily lives of researchers throughout the world. And it has laid the theoretical foundations for the future of computer science. And with this, I will conclude, and I just have to say I'm very, very proud to have this opportunity to give you a glimpse of some of his work. Thank you. APPLAUSE We close this Laudatione session with a warm applause to all our dears and the presenters. APPLAUSE
Laudatio on the occasion of the Nevanlinna Prize award to Jon Kleinberg.
10.5446/15955 (DOI)
It is a great honor for me to introduce the plenary lecture of Professor Henryk Iwaniec. Professor Iwaniec was born and raised in Elbląg, Poland. He attended the University of Warsaw, obtaining a bachelor's degree in 1970 and a doctoral degree one year later. Already as an undergraduate, he had introduced fundamental new ideas into sieve methods. There have been, over the years, and in recent years, many other exciting developments in sieve methods, but the work that Henryk did as an undergraduate, the ideas that came then, formed the backbone of modern sieve theory. Shortly thereafter, Professor Iwaniec turned his attention to the analytic theory of automorphic forms. Here, as in everything else he touched, he pioneered with new ideas and great technical innovations as well. And in particular, he pioneered the successful application of the analytic theory of automorphic forms to classical problems in number theory. As one example of this, one might mention that although the Poincaré conjecture seems to have gone the way of the Fermat conjecture, the Riemann hypothesis is still around. And kicking. As Riemann's paper tells us in its title, Über die Anzahl der Primzahlen, this is a subject about prime numbers. The most important applications of the Riemann hypothesis today, and of its generalizations, are to the distribution of prime numbers. In particular, it's very important to study their distribution in arithmetic progressions. An important theorem, the Bombieri–Vinogradov theorem, shows that for many of the applications of the Riemann hypothesis, it can be dispensed with. Henryk introduced ideas which show that in many of these applications, one can go further than one could go even knowing the Riemann hypothesis. In more recent years, Henryk has introduced ideas which have surmounted the parity problem in sieve theory, which prevented one from using sieve methods to produce prime numbers, which was, after all, their goal. This was thought to be a fundamental obstacle up until his work. He has introduced numerous innovations in the theory of L-functions, the theory and applications of which are ubiquitous, not just in analytic number theory, but in all parts of number theory. Henryk has won numerous honors and awards, the Cole Prize and the Ostrowski Prize, for example; he's a member of the National Academy of Poland and also of the US. He is a prolific and generous supervisor of graduate students. His annual lecture notes have become hot black-market items which circulate widely and introduce the subject; many people have learned from his notes, which he produces afresh from a different part of number theory every year or twice a year. Henryk Iwaniec has lectured before: this is his third invited lecture at the ICM, the first in Helsinki in 1978, the second in Berkeley in 1986. Already in 1978, Henryk Iwaniec was at the very top of the field of analytic number theory. Many things have changed in the 28 years since then; this one has not. The title of Professor Iwaniec's lecture is Prime Numbers and L-functions, and I'm very pleased to introduce him. Well, the title of the talk is Prime Numbers and L-functions, so let me go straight to business and change something here. Here they are, the prime numbers, 2, 3, 5, 7, etc. Not all of them, of course; we all know that there are infinitely many primes. These are the elementary particles of arithmetic, so we can play with them forever.
Before I go to the subject that interests me — and John mentioned the distribution of primes after Riemann — let me say a few things about what prime numbers are good for, in case you think it's just a game; I don't think so. They are really fundamental for the structures of arithmetic and of mathematics. For example, let's just take a Diophantine equation. You would like to know whether there are solutions. You test it; the easy test is just to look at the congruence modulo p for each prime p. If you are lucky, there are no contradictions there, so you can go farther. Of course, you also have to look at real numbers. If you are lucky there, there are solutions in real numbers, then you can still test farther. Take the real numbers as the completion of the rationals with respect to the Archimedean valuation. Now for every p, you have the p-adic valuations, and do the same in the completion: go to the p-adic fields, and you can test your equations in the p-adic fields, and maybe you are lucky. If you are lucky all the time, for the real numbers and for the p-adic numbers, you may ask yourself: is there a genuine solution, I mean the one that you were originally interested in, that is to say in rational numbers? Very often the answer is yes. This is the local-to-global principle, which sometimes is true — for quadratic forms, for example. So I think that is enough of these few words to convince you that primes are basic in the structures of mathematics, not only as objects to study in their own right. Also extensions of the rationals to number fields require knowledge of primes. This time, prime ideals replace prime numbers. And those, if you want to understand them, you have to understand the rational primes as well. Anyway, that is just a few words about what prime numbers are good for. And we often like prime numbers as an entertaining subject to play with. I just pick up two questions, like: what is the largest known prime number? Of course it is a perplexing question what "largest known" means, when we know there are infinitely many. So I leave it up to you to decide what it means to give somebody a prime number, in what terms, and how you convince them it is a real prime number. So here is the record, at least a few days ago when I googled it: the largest known prime number has over 9 million digits. It is impossible to print the digits here. And the other problem of my choice in this regard is the twin prime conjecture, which says that p and p plus 2 are simultaneously prime numbers infinitely often. It looks like a very innocent question, of no importance. But typically, in the development of mathematics, when we start from just playing with something, there is further development and it becomes basic — remember Fermat's Last Theorem, which triggered so much investigation in number theory, with Kummer and then of course the final proof. So even the twin prime conjecture, which is nothing but an interesting subject in its own right, actually does have some relevance to serious problems. Recently an analogue of the twin prime conjecture for Gaussian primes was found to appear in problems about curves. And so I recommend you go to the talk by Jorge Jiménez, which is on Saturday, where he will be presenting this subject. Now let's go to the theme of the talk. It's about the distribution of prime numbers. There are infinitely many prime numbers. Actually there are plenty of them. The prime number theorem says that asymptotically the number of primes up to x is x over log x. So it's not a positive density, but nearly so. The density is one over the log.
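As a quick numerical illustration of the prime number theorem just quoted — this is an editorial aside, not part of the lecture — here is a short Python check comparing the prime-counting function with x over log x:

```python
import math

def prime_count(limit):
    """Count primes up to limit with a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return sum(sieve)

for x in (10**4, 10**5, 10**6):
    pi_x = prime_count(x)
    approx = x / math.log(x)
    print(f"x = {x:>8}  pi(x) = {pi_x:>6}  x/log x = {approx:10.1f}  ratio = {pi_x/approx:.3f}")
```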
And for us, from that point of view, that's not a big deal to work with. The density is not really small. For Tao and for Green it was a big issue, and you will probably hear more about how this problem is overcome in the lecture by Ben Green. So the density of primes is one over the log. And so one may think that you can study the prime numbers heuristically very easily, because there are so many of them. But irregularities of distribution are something that we don't comprehend today. And so, in spite of the set of primes being so dense, there are lots of problems. I mean, the predictions about where the prime numbers are are really easy to make, but rigorous proofs are harder. So for example, it's easy to predict how many prime twins there should be, if you think that p and p plus 2 being prime are somewhat independent events: the density should be one over log x squared, and indeed there's a conjecture by Hardy and Littlewood of that asymptotic quality. The number of twin primes is C x over log x squared; C is a constant. Before I come to technicalities I must say that it is better to count primes with some weight, which is called the von Mangoldt function — just because it's easier. So it's log p if n is a power of the prime p, and zero otherwise. So the prime number theorem here is now equivalent to this statement, and it was proved by Hadamard and de la Vallée Poussin more than 100 years ago. And in the same way the twin prime conjecture is equivalent to this statement. Well, the twin prime conjecture is of course attractive on its own. I mean, it would not really revolutionize the foundations of mathematics if one solved it, but the question would be: why is it so hard? And I think the answer is this: because it really appeals to the multiplicative structure within the additive structure of the integers. Understanding the additive structure of the integers is easier these days by means of harmonic analysis. You can apply, for example, the Poisson summation formula. Here is one simple example: the summation on the left-hand side is over integer lattice vectors, and on the other side over vectors of the dual lattice of the same dimension. F is a test function, F hat is its Fourier transform. And more generally, the Poisson summation formula gives you relations between sums over the lattice on one side and sums over the dual lattice on the other side, and the test function is changed by the Fourier transform. If the test function is periodic, you can think of this as a trace formula on a torus. And in more general situations, on homogeneous spaces, when the group action is not commutative, the trace formula may look different. It is no longer the case that group elements on one side correspond to group elements on the other side; rather, they correspond to eigenvalues of some self-adjoint operator. And a typical game we play in analytic number theory with summation formulas is that first we use this involution-type relation to understand the spectrum on both sides. I mean, the formula serves as a tool to help us understand the spectrum on the other side in terms of the spectrum on the original side. And you can play until you understand the spectrum completely with this formula, and then many applications follow. So far the prime numbers have resisted, and still resist, this kind of treatment. And the dual objects to prime numbers, the companions to prime numbers, are the complex zeros of the zeta function. Here is the zeta function, defined as a Dirichlet series; it has an Euler product.
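The speaker is pointing to a slide here; the standard formulas presumably being shown — the Dirichlet series and Euler product of the zeta function (for Re s > 1), together with the von Mangoldt function — are:

```latex
\zeta(s) \;=\; \sum_{n \ge 1} \frac{1}{n^{s}} \;=\; \prod_{p\ \mathrm{prime}} \bigl(1 - p^{-s}\bigr)^{-1},
\qquad
\Lambda(n) \;=\;
\begin{cases}
  \log p & \text{if } n = p^{k} \text{ for a prime } p \text{ and } k \ge 1,\\
  0      & \text{otherwise.}
\end{cases}
```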
It was introduced by Euler, so we call it the Riemann zeta function today. And so it has two representations, very useful for prime number theory. One is the product over the primes. The other is a Weierstrass-type representation in terms of the complex zeros. And with a complex integration argument you can transform this relation into a so-called explicit formula, which is here: a sum over the prime powers, weighted by the von Mangoldt function, with test function f, equals a sum over the zeros, where the transform of f is a kind of Fourier transform of f on the multiplicative group — so it's really a Mellin transform. And so, fine, this is a formula which we would like to interpret as a trace formula, if only the zeros could be interpreted as eigenvalues of some self-adjoint operator. This is a dream which is not yet realized, in spite of very intelligent speculations, particularly from random matrix theory. And if true, the Riemann hypothesis, which says the complex zeros are on the critical line, would follow from here almost immediately. But we may wait a long time before we see these operators and prove the hypothesis along such lines. So I hope that in my talk I will be able to show you how much has been accomplished in number theory about prime numbers, even going beyond the limitations of the Riemann hypothesis. Among the important investigations in this area in the 70s and 80s were the density theorems, which essentially tell you that almost all zeros are near the critical line. We also know the wonderful result of Brian Conrey that at least 40% of the zeros are on the critical line. Just for your information, to understand some words, let me quickly show you the quantities we are talking about. The total number of zeros in a rectangle of height up to T is about T over 2 pi times log T. So there are plenty of zeros, even more than integers up to T. But the density estimates, which are accomplished unconditionally, say that the number of zeros off the critical line diminishes very rapidly the farther you go from the critical line. And this is one of the treasures of number theory, because it's a fantastic replacement for the Riemann hypothesis and serves us very well for applications, particularly to the distribution of gaps between primes, which I'll be talking about in a moment. So the goal would be the density conjecture, that is, this estimate holding true with the exponent equal to two. I'm not sure if I really have enough time to speak about it, even though I intended to. There are beautiful developments there, new ideas, in particular of Montgomery, on large values of Dirichlet polynomials. But I had better skip this because of the time. The consecutive zeros behave asymptotically like 2 pi n over log n, so we can have an idea of how many of them are there. And so you can deduce here that the average spacing between the zeros is about 2 pi over log n. Remember that; I will come back to it in a moment. If you would like to understand the mechanism which rules the distribution of prime numbers, we should probably study something more local, like the gaps between primes. So, prime numbers: if p_n is the n-th prime number, asymptotically it is n log n by the prime number theorem. So the average gap between consecutive primes is about log n. The Riemann hypothesis would give you the bound n to the one half times log n squared for the gaps, so it's really far from the average value. And that's the end of the story by the Riemann hypothesis. The one half exponent is there because the zeros of the zeta function do exist on the critical line.
You can't get better than this straightforwardly from the Riemann hypothesis. There has been beautiful progress made by a combination of analytic methods and sieve methods, and the world record here, the unconditional result, belongs to Baker, Harman and Pintz. The exponent is 0.525, just above one half. And again I intended to talk about this a bit more, but I don't think I can; there's just not enough time. In spite of the limitation of the Riemann hypothesis — that the exponent one half is best possible because of the zeros on the critical line — one may still expect better results, results that go beyond the Riemann hypothesis. Cramér had a model, a probabilistic model of the prime numbers, and based on this he conjectured that the gaps between consecutive primes should be at most log n squared — much better than what the Riemann hypothesis can do. However, this probabilistic model of Cramér has some flaw. I mean, it does not have enough input from arithmetic, and in fact his model predicts asymptotic results about primes in short intervals which were later shown to be false by Helmut Maier and others, followers of his ideas. And so when it comes to modeling, probabilistic modeling of the distribution of primes, I would rather recommend looking at random matrix theory, which gives you much more convincing predictions. It is amazing about random matrix theory that it is still analytic in nature; nevertheless, the arithmetic factors in the resulting formulas come out correctly, and so it is perplexing. How do the different worlds of real numbers and integers coexist here? This really needs to be explained. Brian Conrey wrote a beautiful article, and I recommend you look at it. Right, so the gaps between consecutive primes should be as small as two — that's the twin prime conjecture that I mentioned before. And until recently it seemed impossible even to come close. And here's a stunning result from one year ago by Goldston, Pintz and Yıldırım, who proved that the liminf of the gaps divided by log n, the average value, is zero. And actually they later proved a bit better: even divided by log n to the one half, essentially, infinitely often. And the gaps between prime numbers behave like random variables; they follow the Poisson distribution, like everything else in our life, essentially. Yeah. Primes in arithmetic progressions are really building blocks for many constructions in number theory. They are interesting in their own right, but first of all they are really basic objects for creating other things out of primes, or for finding primes in other sequences. And we know they are quite equidistributed, by Dirichlet, who proved this with the characters and the associated L-functions. Many things transfer automatically to these L-functions, including the Riemann memoir on the zeta function, the Riemann hypothesis, etc. So the primes in progressions satisfy this law: every residue class prime to the modulus is supposed to contain the same proportion of primes. And the error term is pretty good from the Riemann hypothesis, of course. And it is a meaningful formula when the modulus is not too large, when the modulus is up to the square root of x. So it's still quite good in uniformity, but there are some limitations here. Again, square root of x, because of the zeros on the critical line.
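For reference, the statement about primes in arithmetic progressions under discussion is usually written as follows under the generalized Riemann hypothesis (a standard formulation, not copied from the slides), for a coprime to q:

```latex
\psi(x;q,a) \;=\; \sum_{\substack{n \le x \\ n \equiv a \ (\mathrm{mod}\ q)}} \Lambda(n)
\;=\; \frac{x}{\varphi(q)} + O\!\bigl(x^{1/2}(\log x)^{2}\bigr),
```

which carries information only for moduli q up to about the square root of x.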
We expect more to be true, and Montgomery conjectured that asymptotically this should be true for q up to x to the one minus epsilon. And I will talk about this a little bit later in a different context. One should not hope for too much. I mean, there are some limitations in the distribution, and there is a very surprising result of Friedlander and Granville that if q is as large as x over log x to the 2006, for example, then the asymptotic fails. Yeah. The Riemann hypothesis, which seems to be beyond the reach of our present technology, was bypassed by results that are as good in practice as the Riemann hypothesis itself. The first spectacular result of this nature came in 1965 from Bombieri and Vinogradov, who indeed proved that the asymptotic formula for primes in progressions is true, but only on average with respect to the modulus, for moduli up to the size x to the one half divided by a power of log x — the same range as the Riemann hypothesis offers you. The bound is not impressive here; we only save a power of log, but in practice it's enough. The point being that the uniformity in q is very, very high. I would like to stress that uniformity in asymptotic formulas or inequalities in number theory is a key issue, because these are the important components that connect different sets of primes — the way we produce primes, or find primes in other sequences, by playing with these parameters. So uniformity is extremely important. And the Riemann hypothesis has this interesting feature that it gives you very neat and very strong results, uniform with respect to the parameters or invariants involved. I mean, think about L-functions over number fields, or automorphic L-functions, whatever. So this is a feature which is very nice. I would have no reservation or hesitation about using the Riemann hypothesis in industrial applications, because of this beautiful feature. Nobody, I guess, doubts the validity of the Riemann hypothesis. Nevertheless, this super extraordinary uniformity is a deterrence to attacking the problem, because if you have experience in number theory, you see how far we are from getting anything that good in terms of uniformity. So this is my prejudice about attacking it these days: we still have to comprehend something in between. But then I would like to say that uniformity is not just a sporting event where you want to be better than your neighbor; it's extremely important. It is misleading for some observers that things may look the same, because they ignore the ranges in which things hold. So be alert that sometimes a similar result, but with greater uniformity, is a completely different statement quantitatively. Anyway, the Riemann hypothesis is not the last word on the distribution of primes. There's a conjecture that the Bombieri–Vinogradov theorem holds much farther than the Riemann hypothesis could allow. And so the Elliott–Halberstam conjecture says that you can get the same statement with q as large as x to the theta, where theta is anything less than one. I mention this because there would be an incredible consequence of this conjecture: Goldston, Pintz and Yıldırım showed that if that conjecture is really true with any theta larger than one half, then the gaps between consecutive primes are absolutely bounded infinitely often. So there is great interest in extending the Bombieri–Vinogradov theorem. Really, before they showed this beautiful work to the world, there were already known some extensions of Bombieri–Vinogradov, and here's one that goes beyond the Riemann hypothesis. That is to say, we got a bound with modulus q up to x to the 4/7.
4/7 — it is bigger than one half, I hope you see it. But there is one little major problem here in using it for Goldston–Pintz–Yıldırım, which is that we do not count the error terms in the prime number theorem for progressions with absolute values, but with certain weights. We call them well-factorable weights. I have no time to define them here, but let me just assure you that these weights, up until recent times, were as good as anything else. So we didn't really worry about having absolute values when counting the error terms. But they noticed that this is not sufficient. And I prepared here on transparencies a bit of explanation of how this result is obtained, because it combines such different techniques. Again, I'm not quite sure I'll be able to explain it in the short time, so let me just simply wave the transparency to prove to you that I prepared something for it — I'm not saying I'm not aware of it. There are lots of interesting ingredients over there; maybe just a few things. Maybe I should say first that if you attempt to prove something beyond the Riemann hypothesis by using the explicit formula, by using the zeros of the L-functions, you would like to know the distribution of zeros on the critical line. And there are lots of conjectures that come into this — the pair correlation conjecture, for example — which give you some step forward beyond the Riemann hypothesis, but very little. And having the Riemann hypothesis alone, as a plain statement, without any intrinsic knowledge about the zeros, does not really reach far. You really have to know the sources: why the zeros are there, and what they represent. So the attack on primes in progressions beyond the square root of x here, for example, is done directly through the prime numbers. And one ingredient is a combinatorial decomposition of sums over primes that has its inspiration in sieve methods. And you may think it's not really very profound, it's very, very elementary. Yes, at the beginning it may look like that. However, my point is that when you do the decomposition, you see different structures in each component, and you are able to apply different kinds of harmonic analysis to the different structures. So it is probably too simplistic to think that you can find one self-adjoint operator to study prime numbers. Maybe a family of operators would be a better thing, but you then have to decompose your sum in an appropriate way. So here's a little bit of a taste of that point. You know, you write sums over primes, by the inclusion-exclusion principle, as bilinear forms of two natures. One has a large variable which is smooth and a small variable which is irregular, and then, because the large variable is smooth, you can just apply Fourier analysis in the large variable. So that's just GL(1) sort of analysis. And the other part of the decomposition is when m and n, in the decomposition, are of medium size and both irregular. And here you can interpret the — aha, a very good point here is that when q is larger than the square root of x — remember that we have in mind an explicit formula, or a Poisson-type formula, or a trace-type formula — there is a kind of uncertainty principle in harmonic analysis: you should apply the involution-type transformation when you benefit from it, when the dual side is better than the original one. So the breaking point here is really the square root of x. That's where the method stops giving you anything better. So this is manifested here in this argument: instead of looking at the congruence, we look at the equation.
We interpret the congruence as an equation. It's amazing that it's beneficial to consider the equation rather than the congruence, the reason being that q is larger than the square root of x. And the equation now — you can write this as a determinant equation. Then you can change a by applying a Hecke-type operator T_a and reduce to determinant one. Okay, here we are. This is a question of counting matrices in SL_2(Z) with pretty general coefficients. The spectral theory of automorphic forms is very helpful. Lots of problems here, of course, as you can imagine: because of the congruence groups here, the small eigenvalue problem, the eigenvalue conjecture for Maass forms, density estimates for small eigenvalues, et cetera. Anyway, in practice we don't go this way; it's much more practical to go through sums of Kloosterman sums. But you've got the idea of what I'm saying: the combinatorial decomposition offers you the possibility of applying very different harmonic analysis to different components. Even though the Riemann hypothesis is true and gives you very good regularity in the distribution of primes in progressions, again there is some irregularity. Chebyshev long ago noticed that there is a tendency to have more primes congruent to 3 modulo 4 than to 1 modulo 4. This is a bias which is not visible from the Riemann hypothesis itself. But there is a theory developed by Knapowski and Turán, comparative prime number theory, which explains these things. Again, I have run out of time, but there is a wonderful recent publication by Rubinstein and Sarnak, where they consider this bias in many residue classes and determine the measure of the various effects of the distribution. One great thing about this is that it illuminates how the prime numbers cooperate with the zeros. I recommend looking at this paper. When testing the distribution of primes, it is very dangerous: you may think that primes are really random and you can expect anything that is sensible, et cetera. But I tell you that it is better to look at it through the Möbius function. This function gives the coefficients of the inverse of the zeta function. It is in practice a much more random object than just the distribution of primes. I say this in loose words: this kind of randomness of the Möbius function means that whenever you have a general sum twisted by the Möbius function over a general sequence of integers, the Möbius function is not biased — unless you choose the coefficients c_m specifically, like in sieve theory, when such things would not hold, because the coefficients c_m from sieve theory are indeed biased, by their very construction, towards the Möbius function. It is very safe to assume that the Möbius function changes sign pretty randomly. That is very useful for testing sums over primes. The point is that the von Mangoldt function can be written as a convolution of the Möbius function against something smooth. Because of the randomness of the Möbius function, heuristically you may expect that only small divisors matter. If you believe that, then you can derive lots of very interesting — heuristic, of course — asymptotic formulas for sums over primes. Assuming, as in sieve theory, that divisibility by m is a pretty random event, so that this quantity should be proportional to some multiplicative function of m, the density function, times the cardinality of the set, and using this randomness of the Möbius function, you can predict an asymptotic formula like this: H is a constant and X is the cardinality of the set. Interestingly enough, this formula, which is heuristic, of course, has never failed.
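The convolution identity referred to above — writing the von Mangoldt function as the Möbius function convolved against the (smooth) logarithm — is the standard one:

```latex
\Lambda(n) \;=\; \sum_{d \mid n} \mu(d)\,\log\frac{n}{d} \;=\; -\sum_{d \mid n} \mu(d)\,\log d .
```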
The same formula predictions were earlier claimed on the basis of the circle method; Hardy and Littlewood did it for primes and other things. But actually the principle behind the circle method and this randomness of the Möbius function are very close. You can see that the major arcs, which contribute the main part of the sum, really give you the complete Ramanujan sums, which are nothing but the Möbius function itself. Even though it's very heuristic in nature, there are really unconditional and precise achievements along such lines, using sieve methods and bilinear forms methods. And again, I think I exaggerated a bit in preparing the talk: I prepared to explain how this works, and again I don't think I will; I just show you that there is something prepared for it. Sorry, just no time for it. But one thing maybe I should mention: this new axiom in the sieve, which is a bilinear form estimation, is the new ingredient in sieve theory that we have been working on with John for years, and with it we were able to establish results that were previously not available by sieve methods alone. There is a wonderful — and maybe troubling — principle in sieve theory, the parity problem. The sieve method alone, with the traditional axioms, is unable to produce prime numbers. Not because we are weak or not skillful enough; there is an intrinsic reason for the limitation. It's called the parity problem. Bombieri's asymptotic sieve explains this in the most beautiful fashion. I don't have the time, but just to illustrate what this means: with the traditional sieve method you cannot produce — not only prime numbers, but — numbers which have a given parity of the number of prime divisors. For example, you cannot produce integers having 2006 or 2008 prime factors. But you can produce numbers having 2006 or 2007 prime factors, or one or two prime factors. So this is called the parity problem in the sieve. It's intrinsic to linear sieve theory, and there are examples showing that you cannot break it within the axioms. This axiom here breaks that example and allows you to produce primes. There are many other results of this nature which I have no time to present. The point is that the Möbius function is random and is not biased against these coefficients. The axiom is not difficult to introduce, but its verification in any practical case is extremely, extremely difficult. So here's a list of results that you can obtain along such lines — like the result of Fouvry and myself about Gaussian primes: essentially, if you know what this means, you can count. You want l squared plus m squared to be a prime number, and at the same time one variable to also be a prime number. Here is the result. It reminds me of the twin prime conjecture, because if you look at the density of the primes that occur here, it is very, very close to the one in the twin prime conjecture. Other results of this nature are much tougher than this, because the sequence is very sparse: the primes represented by the polynomial a squared plus b to the fourth. And another one is, of course, Heath-Brown's, a cubed plus 2 b cubed — an even sparser sequence. And so these results are unconditional now — really not available from the Riemann hypothesis at all, but obtainable by a combination of various kinds of harmonic analysis together with combinatorics. Here is a nice by-product of our investigation with John of these a squared plus b to the fourth primes: the spin of Gaussian primes. Primes really seem to be very independent objects; they don't see each other. However, there are lots of beautiful results, like the reciprocity law of Gauss, where different primes interact with each other.
So one should really be cautious about thinking that primes are somewhat independent objects. So I have detoured a little bit from the subject. The reciprocity law gives you a beautiful relation of the Legendre symbols between two primes. But with Gaussian primes — those primes which are 1 modulo 4, represented as sums of two squares — you can associate a symbol to such a prime, and we call it the spin; it's a plus or minus sign. And we proved that the spin is plus one or minus one equally frequently. And the result is this: the exponent is less than one. So it is comparable, in a way, to a quasi-Riemann hypothesis, but it's not, because this spin is not multiplicative. But anyway, this is a by-product of our investigation. There's a general philosophy in exponential and character sums: the amount of cancellation in the sum should be the square root of the number of terms. There are very serious foundations for believing so, which support such beliefs very, very convincingly. So we apply that when testing things in number theory. But one has to be cautious, because primes are not completely independent in that regard. Here is something that Peter Sarnak, Wenzhi Luo and myself recently encountered with some surprise. If lambda p is an eigenvalue of a Hecke operator, it satisfies, of course, the Ramanujan conjecture proved by Deligne. And the Riemann hypothesis for the corresponding L-functions says that indeed there is square-root cancellation in such a sum. On the other hand, the Riemann hypothesis for the Riemann zeta function also gives you cancellation up to the square root of the number of terms in this sum. And you may think that lambda p does not see this exponential, so when you twist, you should also be getting square-root cancellation. And yes, we do get some cancellation, no question about it — but, surprisingly enough, not so much. The bound is x to the three quarters. There is some kind of bias between lambda p and the argument of this exponential function. We still can't understand it; c is a constant different from zero. It's the value at one of the symmetric square L-function associated with the lambda p. I have not much time left for something that is, in a way, the most controversial: the exceptional character issue. It was Landau who proved long ago that the L-functions do not vanish in this region here, which is good enough for most basic applications like the prime number theorem. But there is one exception, one possible exception: for a real character, a zero of the L-function of this character may exist closer to one, in this region. It's called the exceptional zero. The character is real, the zero is real and simple. Of course, if you believe in the Riemann hypothesis, such exceptions don't exist. One of the most basic problems we face today in analytic number theory is to eliminate the existence of that zero. It is, in a way, a pest in the area, because it disturbs the harmony of the distribution of prime numbers. If you look at the explicit formula here, and beta is very close to one, you see that depending on the value of chi of a being plus one or minus one, either you have two times more primes in such residue classes than normally expected, or you may have essentially no primes in the other cases. So there's a really disturbing distribution of primes in progressions in terms of uniformity, I mean with respect to q. Let me just tell you what is perplexing here, and how we could enjoy the actual existence of the exceptional character. For simplicity, let me assume that the character is odd.
So it is the Kronecker symbol associated with a quadratic field, an imaginary quadratic field. There is a deep relation of this story with the arithmetic of that field. The Dirichlet class number formula gives you this relation explicitly. The class number is the order of the ideal class group of the ring of integers in the field, and it measures the degree of failure of unique factorization. So there's no doubt it's a very important invariant that you would like to know about. Of course, you would like to understand the structure of the class group itself, but even the order of the group is important. And many of you probably know the story about the Gauss class number problem, et cetera, so I have no time to repeat that. And I take a different viewpoint for the purpose of this talk. So let me just state one very intriguing property of the exceptional character and the exceptional zero: the property of repelling. Roughly speaking, it says that if the exceptional zero is very close to the point one, as I indicated in this picture, then other zeros — not only of the same L-function, but of other L-functions, not only real zeros but also complex zeros of any other L-function whatsoever; it could be an automorphic L-function, it could be an L-function of a number field, et cetera — these other zeros are repelled, they stay farther away from the point one. So that's the property of repelling. Even the universality of such a property is not difficult to see if you just believe in the following thing, which can be explained: the exceptional character pretends to be the Möbius function. It takes at primes the value minus one more often than not. And the reason is that the class number is very small; there are very few prime ideals of degree one in the field, few primes split completely in the field — that's the reason. And if so, if the character pretends to be the Möbius function — remember the definition of the Möbius function, these are the coefficients of the inverse of the zeta function — then we have wonderful harmonics to study the inverse of an L-function, because the character has an additive nature and you can apply harmonic analysis to characters, while we don't yet know the harmonic analysis for prime numbers — that's the dream of Pólya and Hilbert. So this property opens a way to study prime numbers using the exceptional character. Of course one has to be careful, because we don't believe the exceptional character exists, but we can't rule it out. So we had better name the primes that are produced by means of exceptional characters illusory primes. Well, if you want something that is really effective, then you cannot work with this hypothetical assumption about the zero, because we don't believe it exists. But here is a wonderful observation of John, from many, many years ago: any real zero of an L-function acts as exceptional, if it is larger than one half, in the sense of having the repelling property. The power of repelling diminishes as the zero goes towards the central point, but it is still there. And you can see from the class number estimate how much repelling remains; this is the class number estimate, effective in terms of such an exceptional zero. But if your exceptional zero is closer to the central point, or even if it reaches it — aha. In order to get an unconditional result, you can't dream of finding an exceptional zero off the critical line, because we all believe in the Riemann hypothesis.
So the next question would be, what happens if your exceptional zero goes to the central point? Is the repelling property preserved — does an exceptional character with an exceptional zero at the central point still maintain the repelling property? Well, the repelling property diminishes, but it does not disappear completely. If the zero is multiple, then it still shows some power of repelling. And here's the chance — it was exploited two decades ago by Dorian Goldfeld: such examples of zeros, exceptional in that sense, do exist, for L-functions of elliptic curves. You know, the multiplicity of the central zero is supposed to equal the rank of the Mordell–Weil group. And indeed, this repelling helped to give an effective lower bound for the class number and to solve the Gauss problem. But now let's come back: there were vigorous attempts to eliminate the existence of the exceptional character and the exceptional zero. However, as we see from its repelling property, it is so useful that actually the exceptional character and its exceptional zero are welcome — we would love to have them. So let's see what can be done with it. Actually it is such a powerful tool that it even outperforms the Riemann hypothesis. I have no time to show results that are so convincing, but let me just mention two. Heath-Brown first used it to prove that there are infinitely many twin primes, assuming exceptional characters exist. And we proved, with Friedlander, that there are infinitely many primes of this type here, which, by the way, were chosen on purpose so that we can prove that there are elliptic curves with only one place of bad reduction, for example. But these are results that are temporary: if somebody proves there is no exceptional zero, they will become useless. Still, they tell you where to look and how one can hope to get something unconditional along such ideas. And indeed, one cannot really hope too much for the existence of the exceptional character. Zaharescu and Sarnak proved that the existence of exceptional characters would ruin the Riemann hypothesis very, very badly. That is to say, certain L-functions of cusp forms would have complex zeros off the critical line — not just real zeros, but complex zeros. So it's unbelievable. So I don't think anybody now believes that the exceptional character really exists, even though not so long ago some people would think that it might be the case. So as I said, eliminating the exceptional character is one of the most important problems in analytic number theory. And let's try to do something unconditionally. What about looking, instead of at high multiplicity of zeros, at clusters of zeros on the critical line, which may be more realistic to find? And here's a result Conrey and myself recently established, which concerns zeros of the Riemann zeta function — the Riemann zeta function, not the L-functions of Dirichlet. If a positive proportion of pairs of zeros of the Riemann zeta function have gaps smaller than half of the normal gaps, then there is no exceptional zero. So the moral of that is that zeros of distinct L-functions see each other. We used to think they are independent objects. On the surface of the matter, I think it is true, but when you penetrate deeper, you find that they conspire. So here's this example.
Well, you may ask: we are assuming these clusters, but do they really exist? Random matrix theory predicts they do. The pair correlation conjecture in particular — this part of random matrix theory, in the context of the zeta function, was originally done by Montgomery — asserts this asymptotic, which tells you that the gaps between zeros can be arbitrarily small quite often. So our assumption in the paper with Conrey is quite realistic. The more precise distribution of gaps between zeros is established by Rudnick and Sarnak in their paper on n-level correlations. So even though the idea of the exceptional character and zero is unrealistic, it sometimes leads to situations where we get inspiration for positive, constructive work along such lines. So the message is that one should try to use families of L-functions to study a problem that apparently is related to only one of them. I think one can speculate a lot more, but I will finish today with these words.
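For reference, here is my rendering of Montgomery's pair correlation conjecture alluded to above (a standard formulation, not the slide): for the ordinates 0 < γ, γ′ ≤ T of the zeros of ζ,

\[
\#\Bigl\{(\gamma,\gamma'):\,0<\frac{(\gamma'-\gamma)\log T}{2\pi}\le\beta\Bigr\}
\;\sim\;\frac{T\log T}{2\pi}\int_0^{\beta}\Bigl(1-\Bigl(\frac{\sin\pi u}{\pi u}\Bigr)^{2}\Bigr)\,du .
\]

Since the density 1 − (sin πu/πu)² is positive for every u > 0, gaps smaller than any fixed fraction of the average gap should occur a positive proportion of the time — exactly the kind of clustering assumed in the result with Conrey.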
The classical memoir by Riemann on the zeta function was motivated by questions about the distribution of prime numbers. But there are important problems concerning prime numbers which cannot be addressed along these lines, for example the representation of primes by polynomials. In this talk I will show a panorama of techniques, which modern analytic number theorists use in the study of prime numbers. Among these are sieve methods. I will explain how the primes are captured by adopting new axioms for sieve theory. I shall also discuss recent progress in traditional questions about primes, such as small gaps, and fundamental ones such as equidistribution in arithmetic progressions. However, my primary objective is to indicate the current directions in Prime Number Theory.
10.5446/15952 (DOI)
Okay, so we will now start the Gauss Prize lecture. This lecture will in fact be in two parts. In the second part, Professor Hans Föllmer will describe various aspects of Professor Itô's work. But first, Professor Junko Itô will tell us a few words about and from her father. Thank you very much for the introduction. Since my father cannot be present on this occasion, I have prepared together with my father a short thank-you note for this occasion. I am greatly honoured that the International Mathematical Union, in cooperation with the Deutsche Mathematiker-Vereinigung, has awarded me the first Carl Friedrich Gauss Prize in recognition of my work on stochastic analysis. It is difficult to express the unparalleled happiness I feel to be the recipient of this award, bearing the name of the great mathematician whose work continues to inspire us all. Because my own research on stochastic analysis is in pure mathematics, the fact that my work has been chosen for the Gauss Prize for applications of mathematics is truly unexpected and deeply gratifying. I hope therefore to share this great honour with my family, teachers, colleagues and students in mathematics, as well as with all those who took my work on stochastic analysis and extended it to areas far beyond my imagination. Professor Föllmer, I am very happy that you will be giving the inaugural Gauss Prize lecture. I have many fond memories of working with you in Zurich in the 80s, when I was a visiting professor at the ETH. In fact, this picture of me and my late wife taken at that time is one of my favourites and is always displayed on my desk. I am sorry that my health situation prevents me from attending your Gauss Prize lecture, but I look forward to reading and studying it so that I can learn about the new developments in stochastic analysis. Professor Hans Föllmer, Professor Martin Grötschel and the Gauss Prize Committee, DMV President Günter Ziegler, Professor Manuel de León and the 2006 Madrid ICM Organising Committee, IMU President Sir John Ball, and all the participants of the 2006 ICM: please allow me to express once again my heartfelt appreciation for the great honour that you have bestowed on me. Thank you very much. So it's a great honour and a great pleasure for me to introduce now Professor Hans Föllmer from the Humboldt University in Berlin. Professor Föllmer has made many fundamental contributions in probability theory, in domains such as stochastic calculus, random fields, probabilistic potential theory or large deviations. In the last 10 or 15 years, he has become the leading expert in the world in the area of applications of stochastic analysis to mathematical finance. So he will lecture today on the work of Itô and its impact. No sound? Okay. So I try again. It's clearly a great honour and also a great personal pleasure for me to address you on the occasion of the first Gauss Prize, which has been awarded to Kiyoshi Itô. About a week ago, by chance, I stumbled on the Internet onto some website where there was a discussion going on about potential candidates for the Fields Medals. And one statement was: unfortunately, it appears that the bias against applied mathematics will continue; I am hoping that the Gauss Prize will correct this obvious problem and they will pick someone really wonderful, like Kiyoshi Itô of Itô calculus fame. Now this has actually happened, and I definitely share the feeling that somebody really wonderful has been picked.
The Gauss Prize has been awarded to Kioshi Itou for laying the foundations of the theory of stochastic differential equations and stochastic analysis. Now you may wonder why somebody who obviously cares about applications as a sky, anonymous sky, who made the statement on the Internet is so enthusiastic. And in fact, why is the Gauss Prize? Because Junko Itou, yesterday in the press conference, pointed out that her father considers himself a pure mathematician. And in the words we just heard, he expresses even surprise. Now the statutes of the Gauss Prize say that it is to be awarded for outstanding mathematical contributions that have found significant applications outside of mathematics or achievements that made the application of mathematical methods to area outside of mathematics possible in an innovative way. Now one aim of this lecture is to make the point that on both accounts, it's a marvelous idea to pick Kioshi Itou as the first winner of the Gauss Prize. I mean to those who know clearly point one, stochastic analysis and the tools and the concepts of stochastic analysis have found important implications, significant applications in various areas outside of mathematics. But also stochastic analysis has made conceptual advances in clarifying the structure of certain situations in applications which then made it possible to bring in other methods like PDE, numerics of PDE. But first the conceptual insight provided by stochastic analysis had to open the door for that. So also point two is highly relevant in this specific case. Now I try to speak on the work of Kioshi Itou, its conceptual power, its beauty and its impact. My goal is much more modest than was suggested in the words of Kioshi Itou. My goal is not to talk about new advances in stochastic analysis, this is strictly about Kioshi Itou. And I will try to explain to the non-specialists what some of his concerns were. So here is Kioshi Itou in 1942. At that time he had already made an important breakthrough, a fundamental breakthrough in clarifying the structure of Markov processes. And he had done it completely on its own. Here his photo was taken in the Government Statistical Bureau of Japan and we were just told by Jinko Itou that they simply left it in peace there. He was not formally involved in any graduate work, he could do completely his own thing and something really great came out of this. So he wrote at that time a sequence of papers, differential equations determining a Markov process. This was in fact a Mimeo written in Japanese. It then came out much later in an extended version in the Nagoya Mathematical Journal and also thanks to Joe Dup who was one of the first to immediately saw the importance of Itou's work as a memoir of the American Mathematical Society. And then at the same year a separate paper appears on a formula concerning Stochastic Differentials, which is now known as Itou's formula, which came in almost in passing as one step in his program here. Now what was the program? The program was about Markov processes. So let me recall, so this is not for the specialists, this is for those who want to learn, they have not seen before what Itou's work is about. So let's consider a Markov process on RD. So this is usually described by transition probabilities. So if you start in the position X after time T, you find yourself in some area of the state space, in this case RD, with a certain probability, which is specified here. 
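In symbols (my rendering of the notation being introduced, anticipating the patching property described next): the transition probabilities are

\[
P_t(x,A)=\operatorname{Prob}\bigl(X_t\in A \mid X_0=x\bigr),
\qquad
P_{t+s}(x,A)=\int_{\mathbb{R}^d}P_s(y,A)\,P_t(x,dy),
\]

the second identity being the Chapman–Kolmogorov equation.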
Now the Markovian traveler does not remember what has happened before; what happens next depends only on his position right now, and this is captured by the Chapman–Kolmogorov equations, which show how these different probabilities are patched together. This is enough to construct a probability measure on path space. Here I am getting more special than Itô did by focusing from now on on the continuous case — Itô's papers are already much more general. The path space would be the space of continuous paths in the state space R^d. We denote by X_t the position at time t for a given trajectory, and the measure is related to the semigroup in this way: the probability, starting in x, given the past up to time t, to find the particle at time t plus s, say, in the area A, is just given by this transition probability, depending on the present position X_t. So this clarifies the point that the past behavior does not intervene in these predictions of the future behavior; it is only the present state that matters. And so we have a precise mathematical model which allows us to talk about the path behavior of the process. But what is the infinitesimal structure here, in such pictures? These paths typically are quite crazy, so what can we say? In analytic terms we can say something, starting with these transition probabilities viewed as transition operators. We can differentiate, and in the continuous or diffusion case this will turn out to have the form of a partial differential operator. And there is Kolmogorov's backward equation, which says that for a function defined in this way, for some, say, continuous function f, we have this partial differential equation in terms of this operator. So this is an analytic statement about infinitesimal behavior; but in probabilistic terms, on the level of paths, how can we understand the structure of these paths? Itô's insight, Itô's idea and his program, was first to identify tangents of the Markov process — this clearly cannot be done in a naive way here — and then to reconstruct, or simply to construct, circumventing the Kolmogorov construction, the process directly pathwise from its tangents. Now what are the tangents on the level of processes? Insight number one: these should be Lévy processes. Lévy processes are, so to speak, the straight lines: their increments are independent and identically distributed, so they move, so to speak, in the same direction, and the laws are infinitely divisible. Already before the papers I am now describing, in his thesis, he had worked on Lévy processes and proved, to some extent in parallel to Lévy, what is now known as the Lévy–Itô decomposition on the level of paths of such Lévy processes. Now for our purposes, for this lecture, we only need one prototype of a Lévy process, since we stick to the continuous case. For the prototype the laws are Gaussian, and it is given by Brownian motion. Brownian motion at that time already had a history. In 1900 Bachelier had written the thesis Théorie de la Spéculation, where he introduced Brownian motion as a model for price fluctuations on the Paris stock market. Five years later Albert Einstein did so with a very different motivation and intuition, connected with physical Brownian motion; and then the rigorous mathematical construction, as a measure on path space, was given by Norbert Wiener in 1923. Now here is how the construction works. You take a sequence of independent identically distributed normal random variables.
You take an orthonormal basis in L2 with respect to the Lebesgue interval on the unit interval, and then you sum up with these random factors the primitives of these functions in the orthonormal base. And then you can show this is uniformly convergent in T, P almost truly. Therefore, the limit exists that defines a continuous function. And this was started by Wiener who choose a trigonometric basis. Levy simplifies the arguments by using the Haar functions as a basis, but it's only much later that Eto in the joint paper with Nizio really gave this very elegant and general formulation with a proof in the general case, and also going already to Barnach value with pass. And so here we have Wiener process. Now, typical paths are, as I just said, by construction continuous, but they are nowhere differentiable. So there are no classical integrators. They are not of bounded variation. Now, Weierstrass had constructed one example of a continuous function which is nowhere differentiable, but this was seen as a pathology. There is this famous quote in a letter of Helmut to Stiltius where he says, I turn around with cold and horror of this lamentable defunctions without derivative. So I turn away with shock and disgust from this lamentable plague of functions with no derivatives. But here in this context of diffusion phenomena, both mathematics and in the real world, that's how it is. The typical paths are nowhere differentiable. So the question is how do we deal with this? Now, the idea of a tangent of a given Markov process. Here the tangent in X in the diffusion case will simply be such a Wiener process with constant coefficients taking off in this sense. This suggests to write down the infinitesimal behavior of the Markov process path in this form. It should take off like its tangent from any given position at any given time, which means it should have this form locally. So that's the idea. And then next step of the program constructs the path from this stochastic differential equation. In other words, solve this in integrated form, this equation, where the position at time t comes about from the starting point X by integrating up here and integrating up here. Now, here it's classical, no problem, but here it's completely unclear because this depends on the path. And even though sigma may be smooth, since XS itself is non-smooth, this will be very non-smooth. And here we don't have a classical integrator. The path of a Wiener process is nowhere differentiable. So this needs a new approach to integration, what is now known as stochastic integration. Now, there had already been one step by Wiener and Paley, which defined the integration by parts. And this works whenever here you have a classical integrator. So if the path here of the integrant is of bounded variation, and they did it in the case where it's of bounded variation and deterministic. But as we have just seen, the integrants which are needed here in this construction program are by no means of this type. And so, Ito went about constructing the integral in this form for general integrants, adapted in the sense that they depend only on the information up to time s, what's the path behavior up to that point. They satisfy some integrability condition, but no regularity. And the idea is to write down Riemann sums, but non-anticipating Riemann sums. So here the integrant is evaluated at the left-hand side and then passed to the limit using a basic isometry. 
That for simple integrants like this, piecewise constant, the L2 norm here will be equal to the expectation of this integral, which is the L2 norm in this product space. And via this isometry something reasonable came about, and that was the birth of stochastic integration in the sense of Ito. In the introduction to Ito's selected papers, Densstruck and Warradán write everyone, if you could say in this room, as at least heard, they say who might pick up the book and start to read, as at least heard that there is a subject called the theory of stochastic integration, and that K. Ito is the lebeck of this branch of integration theory in brackets Paley and Wiener where it's Riemann. Now, this is not all. To complete the program, the stochastic differential equation has actually been solved. We have so far made sense out of the integrated form, and we need a verification that this construction does what it's asked to be, namely to provide a solution of Kolmogorov's equation, and this verification goes via Ito's formula to which we are turning now. Now, outside of this room, and in fact outside of mathematics, there are nowadays many, many thousands of people who probably have not heard, or if so do not care about lebeck, Paley, Wiener, but they have heard and they do care about Ito's formula. Now, why is that? On the one hand, it just has turned out to be an extremely useful tool outside of mathematics in many areas, and also it has a certain fundamental quality. I want to underline the fundamental quality of this formula. The Gauss prize, of course, is not the first distinction K. Ito has received. He obtained the Wolf Prize, I think, in 1987, and in the Laudatio of the Wolf Prize, it said, he has given us a full understanding of the infinitesimal development of Markov's sample paths. And I've tried to show you, as a short summary, how this goes. This may be viewed as Newton's law in the stochastic realm, providing a direct translation between the governing partial differential equation and the underlying probabilistic mechanism. Its main ingredient is the differential and integral calculus of functions of Brownian motion, the resulting theory is the cornerstone of modern probability, both pure and applied. Now, coming from Germany, having been socialized in Germany, and especially from Berlin, if I see Newton, I also think of Leibniz. So here is Leibniz. Together with Newton, this is taken from an article in The Economist, where they described a few they had about priorities. Now, the bottom line probably is that to a large extent, to paraphrase Pascal, the truth was the same in London and Berlin at that time. Now, here is something by Leibniz written in Berlin, a Merkwilder symbolismus des Algebrasch und des infinitesimalkirkuts, but at no point in reading it in German, because it was written in Latin, but I only got hold of this German text. Now I show you. He writes here what everybody knows, integration, here's a product tool for differentiation. Then he says, out of this formula, the whole remaining calculus can be developed. But this formula is demonstrated as follows. So he multiplies out and he gets this term and then he said, okay, this dx is much smaller than x and dy is much smaller than y, so the whole thing is much smaller than the remaining terms, so we forget them and out comes the usual product tool, which in particular implies this and for a reasonable function f, this standard behavior along the trajectory x. 
And then he writes, quattirima zanimemorabile omnibus corvus comuniest. So this very remarkable theorem is common to all curves. Now the insight of ITO was, no, it's not. In general, this formula should be written in this way. In other words, these extra terms cannot be forgotten in general for general curves as they turn out to be typical in the diffusion picture. Because here in this form, something comes up, which does not come up in classical calculus, that's the quadratic variation of the function. The function is nowhere differentiable, but it has a quadratic variation. And Poly-V showed it's equal to t for a typical path of the Wiener process, and ITO showed that for a solution of a stochastic differential equation, it will have this form. And so more generally, ITO's product rule takes this form with a correspondingly defined covariation, and then ITO's formula for a function f in class C2 has this extra term. And if it's spelled out in the context of a multidimensional diffusion solving the stochastic differential equation, we have seen it takes this form. And here this operator L, of course, comes in. By the way, here we have a choice, we can either stick here to the underlying Brownian path, or we can translate by taking part of this over to the other side to de-ix the increment of the diffusion path, and then the operator changes in the sense that the drift is taken out of the operator which we had before. Now I want to describe some consequences which we are going to need a little bit later. The first consequence is, and that was the reason, the reason he proved this. He used this then to check to verify Kolmogorov's equation for the solutions of his stochastic differential equation. So it looked like a tool you use in passing. But it has far-reaching consequences, and I'll show you one which will be interesting for us a little later. It implies a representation theorem. So let's take a functional of the diffusion process which is given by the function small h applied to the final position over some time window from 0 to big t. Then Eto's formula reduces to this representation of the functional as a constant plus an Eto integral of this process if you choose f as a solution of the following boundary value problem, it should solve this partial differential equation, and the terminal value should be equal to h. Why? Because in general we have a remaining term where exactly this integral over these terms appears. Now if this vanishes, this drops out, and you have this representation. The choice at such simple functionals of the diffusion can be written as stochastic integrals of the underlying diffusion pairs. Now you can take products of those and patch the results here together in an obvious way. Then you get it for functionals of this form that they can be represented with some integrants, and then you add an approximation procedure for general functionals, and then each reasonable function of a nice diffusion x can be shown to be a stochastic integral of that diffusion with some integrant. So what I have sketched until now is really in the spirit of pure mathematics, I mean some people perceive the field of probability as applied per se, but it's a wide field. It has very theoretical areas, it has very applied areas, it has a whole spectrum. As we have heard, Kiyoshi Ito considers himself to be rather on the very pure side of it, and so in this spirit I have tried to describe this program. 
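In formulas (my rendering of what is described above), for f of class C² and a one-dimensional diffusion dX_t = σ(X_t) dW_t + b(X_t) dt, Itô's formula with the extra quadratic-variation term reads

\[
f(X_t)-f(X_0)=\int_0^t f'(X_s)\,dX_s+\frac12\int_0^t f''(X_s)\,d\langle X\rangle_s ,
\qquad d\langle X\rangle_s=\sigma^2(X_s)\,ds ,
\]

so that f(X_t) − f(X_0) = ∫₀ᵗ f'(X_s)σ(X_s) dW_s + ∫₀ᵗ (Lf)(X_s) ds with L = ½σ² ∂²/∂x² + b ∂/∂x; choosing a time-dependent f with ∂f/∂t + Lf = 0 and f(T,·) = h kills the ds-term and gives exactly the representation of h(X_T) as a constant plus a stochastic integral described above. A quick numerical check of the extra term, for Brownian motion itself and f(x) = x²/2 (my own sketch, not part of the lecture): the non-anticipating left-endpoint Riemann sums for the integral of W dW converge to the Itô answer (W_T² − T)/2 rather than the classical W_T²/2, and their sample variance matches the Itô isometry value T²/2.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, paths = 1.0, 2000, 5000
dt = T / n

dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))      # Brownian increments
W = np.cumsum(dW, axis=1)                               # W(t_1), ..., W(t_n)
W_left = np.hstack([np.zeros((paths, 1)), W[:, :-1]])   # non-anticipating evaluation points

ito_integral = np.sum(W_left * dW, axis=1)              # left-endpoint Riemann sums for int W dW
WT = W[:, -1]

rms = np.sqrt(np.mean((ito_integral - (WT**2 - T) / 2) ** 2))
print(rms)                            # O(sqrt(dt)): the sums approach (W_T^2 - T)/2, not W_T^2/2
print(ito_integral.var(), T**2 / 2)   # Ito isometry: both approximately 0.5
```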
And in fact what I've given you is a short summary of the whole book which does that in detail, so if you really want to know you should look at this book, Markov Processes from K. Ito's perspective written some years ago by Dan Strube. Okay, now we come to impact. The reason for avoiding the Gauss Prize. Let's first look at impact within mathematics, and there it should be said that it took a long time, at least in the West. In the fifth, so we are talking about work which had been done already in the early 40s, then worked out, published in an accessible way in the early 50s, but in the 50s and 60s the reception was not very widespread. There's a notable exception of Dup, Joe Dup immediately understood the significance, made sure it was published in the memoirs of the American mass society, and he himself wrote a chapter on stochastic integration in his book appearing in 53 on stochastic processes. In the East there was more interest in the Russian school. Dinkins, Koroghot and others took up very fast the idea and the techniques of stochastic integration and used them in important ways, for example, in defining the Gauss-Sanow transformation and shift of the measure, transformation of the measure which corresponds to a change of the drift in the stochastic differential equation. Now Eto came to Princeton to the US in the 50s, so he was there from 54 to 56. And he described in the statement he gave a year ago at the Abel's Impulsion in his honor that there he met Mackeen, a student at the time of Feller, and explained to him his ideas about stochastic differential equation. And then he says, there was once an occasion when Mackeen tried to explain to Feller my work on the stochastic differential equations and the idea of tangents. It seemed to me that Feller did not fully understand its significance. At any rate there was no encouraging reaction. On the other hand, when I explained to Feller Lévy's local time, he immediately became enthusiastic, made some conjectures, how to describe the structure of one-dimensional diffusions in terms of local time and posed that as a problem. And then Eto and Mackeen worked on this Solvci problem in a way completed Feller's program and wrote a whole book, Diffusion Processes and their Semper Paths, which appeared in 65 and ironically no stochastic differential equations, no stochastic integrals in that whole book. Now for me as a graduate student at the time, this was my first exposure to the work of Eto. And I found it, so we formed a group of graduate students and did our own informal seminar, the reading sections in this book, and we found it hard going. And I thought, okay, maybe this is the Japanese style to be tough on your students. But no, no, complete misunderstanding. By chance then I discovered lecture notes written only by Eto on stochastic processes at the Tata Institute and at Arhus University and they were marvelous. Suddenly everything looked very clear and very easy. It was written in a friendly way and so I was really looking forward to meet that person and that happened in the summer of 68 when Eto came from Arhus where he was at the time to give lectures at the University of Erlangen. This is taken 10 years later. 
But I remember somehow this posture and especially his way of transmitting his own joy in dealing with mathematics he was explaining and this was a very, very strong experience and there was one memorable situation where a distinguished colleague from the United States who was visiting for a whole term and was giving lectures on probabilistic potential theory gave his last lecture and Eto was present. Now this was 68 so students at that time were somewhat unruly and they were ready to be unpolite. So we asked this colleague who was just planning to finish a long technical proof about additive functionals of Markov processes, couldn't we just forget about the technicalities and could he not just give us some perspectives, some outlook, what the important directions are. So this took him by surprise, he was not prepared for that so he hesitated and was not eager to go in that direction. Then suddenly Eto went up, went to the blackboard and that was one highlight of our time as graduate students how Eto in one hour gave his views on topics he thought were important and had potential for further research. I think it was mainly excursions theory of Markov process which he described, excursions of a Markov process viewed as a past value point process so it was absolutely exciting stuff. But so far no stochastic integrals, also in that lecture, this improvised lecture no stochastic integrals. Now 69 the situation started to change. Henry Mackeen wrote a small book, Stochastic integrals, clearly dedicated to Kioshi Eto and since the 70s suddenly there was an explosion of stochastic analysis in a general martingale setting via an important paper inspired I'm sure by Eto of Kunita and Watanabe and then the French school took over, Meier de la Chéry in Jacques-Code d'Iore and expanded the whole framework in a tremendous way. Then since the 80s infinite dimensional extensions came, measure value diffusions are one class of examples, Dawson, Watanabe and others motivated by biological population dynamics and then in a very new and systematic way in Maliaven, calculus initiated by Paul Maliaven which by the way is the head of the strong school in Spain represented at least until recently by David Nuala and still by Marta Sanz and Kioshi Eto himself contributed to this development the lectures he mentioned in his greeting at the ETH Zurich were on foundation of stochastic differential equations in infinite dimensional spaces where in a way he viewed Maliaven calculus as an infinite dimensional Eto calculus based on an infinite dimensional Einstein-Umbach process and embedded that in a general framework. At any rate stochastic analysis is now clearly a core chapter of probability there is no doubt about it. But the Gauss prize requires applications beyond mathematics. So in an anecdotal form I will describe how I experience that. The first thing to say is that this went faster when I was as a young PhD and instructor at MIT courses involving Eto stochastic differential equation I still have my course directory so I checked in mathematics zero. Electrical engineering four aeronautics and astronautics two and I actually attended one of these two which was about space flight and the effect of random disturbances by changing fluctuating densities in the atmosphere and this was modeled by Eto stochastic differential equations and analyzed special topics stability of such dynamical systems perturbed by noise. 
Stochastic vietnolo functions introduced and handled via Eto calculus problems of filtering and control going beyond the Kalman-Busey filter. And the first textbook for example in Germany on stochastic differential equation by Ludwig Arnold appeared in 73 at the Technical University Stuttgart was written primarily for engineers. For example the motion of the satellite what I just described randomly fluctuating density of the atmosphere was one prime example here. Now my next experience was in 1977 when I moved to ETH Zurich my next neighbor was Konrad Osterwalder. Konrad Osterwalder had just returned from Harvard where he had worked with Schrader, Jaffy and inspired by ideas of Schimansik on PATH integrals in quantum field theory. And Barry Simon in this year was in Switzerland and gave a whole lecture course on PATH space techniques which appeared as a book in 79 and here are the sections 14 to 16. ETH's integral Schrodinger operators with magnetic fields introduction to stochastic calculus where he gave a proof of ETH's formula. So clearly the importance was seen by colleagues in mathematical physics and then the idea came up in 87 to offer an honorary degree to Kioshe Ito. It was very easy because it had strong support not only by the probabilists it was a joint venture with mathematical physicists working in quantum field theory. Now finally I want to describe one case study where I am being a little bit more precise namely the applications to finance. My own I came into contact with this in something through a student David Krebs who later became the got the Clark Medal for the best economist under 40 so some analog economics of the field medal after 84 and then at 84 he visited ETH and we discussed martingale aspects of the new development and finance related to pricing and hedging of derivatives. So we have a price fluctuation on the liquid financial market by the way this is not a recent fact in some sense we are closing the circle we are going back to one of the sources remember Bachelier wrote his first as the introduced Brownian motion with that purpose and so we have such fluctuations we have a probability measure on PATH space let's stick to the continuous case if we have D financial assets this would be the PATH space but very quickly it becomes fancy if you think of yield curves fluctuation of yield curves the state space would be an infinite dimensional space and you could make a choice here via stochastic differential equations. Now what is P? Yeah that's a big problem it is business and this has statistical econometric but also theoretical aspects related to the notion of market efficiency. The strong form of market efficiency says that information and expectations are immediately priced in so after discounting you get this property that the present price is equal to the best guess of the future price and that's the martingale property under P and in that case P would be called the martingale measure. Now on the financial market you can trade so you can vary the number of stocks you hold at any time and the cumulative net gain of an idealized continuous trading strategy would be exactly an ETO integral. Why would it be an ETO integral? Because this is based on non-anticipating Riemann sums and that's exactly the financial meaning you must make your investment at the beginning without foresight and before the actually price increment happens. 
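In symbols (my rendering): if X_t denotes the (discounted) price process and ξ_t the number of shares held at time t — adapted, i.e. chosen without foresight — the cumulative net gain of the trading strategy is the Itô integral

\[
G_t=\int_0^t \xi_s\,dX_s=\lim\sum_i \xi_{t_{i-1}}\bigl(X_{t_i}-X_{t_{i-1}}\bigr),
\]

and it is precisely the non-anticipating, left-endpoint Riemann sums that encode the requirement of investing before the price increment is revealed.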
Now for such a martingale measure, Doob's systems theorem says that there are no winning strategies. That is somehow too strong, and a weaker form of market efficiency would be the absence of free lunches: there may be winning strategies, but in that case you have a downside risk, and this is then equivalent to the fact that the model does not have to be a martingale measure, but it must admit an equivalent martingale measure. Okay, now we come to a core problem in this business: pricing and hedging of financial derivatives, defined as functionals of the price process — some contract which says you get paid so much at the end if the scenario small omega happens. And now we can see how Itô comes in in a strong and crucial way. In a complete model this measure is unique, and that is okay for nice diffusions such as the so-called Black–Scholes model. Then I showed you a representation property which was proved through Itô's formula — and by the way, where did I first see this proof? In a paper appearing in electrical engineering in the early 70s, because the electrical engineers were interested in this property. But the fact itself was already known, and in the financial interpretation it now means that you have a perfect replication of your financial derivative by some dynamic strategy; and that means that this initial constant here, the initial cost of this perfect replication, must be the natural price. This can all be computed as an expectation under the equivalent martingale measure, because under the martingale measure this will be a martingale term and the expectation will drop out. In particular, in the special situation of the Black–Scholes framework — special process, special derivative — this is the Black–Scholes formula. But it is really a corollary of Itô's formula: conceptually, Itô's formula plus the representation theorem. The representation theorem gives the right conceptual framework and at one stroke settles the issue for any exotic derivative. Of course the computation of the strategies then becomes very involved; it is a mixture of Itô calculus and PDEs. For exotic derivatives there are ways of reducing it to the simple case, or one uses really advanced techniques like Malliavin calculus. All this is nowadays financial engineering, which is a form of applied stochastic analysis. Incomplete models are much more complicated, but all the more interesting, and also here Itô's stochastic analysis provides the crucial concepts and tools. My time is running out, so I am skipping one example with which I wanted to illustrate an important issue: heterogeneous information on financial markets. Some people know more than others, and the question is what the financial role of that is — what is the financial value of obtaining certain additional information? There is a small paper by Itô from the 70s on stochastic differentials where he discusses the change of filtration: what happens if you increase your information? And that provides immediately the key to this very hot topic of heterogeneous information, and some of the recent work does use the techniques initiated by Itô — for example Karatzas, Imkeller, and others. To summarize: Kiyoshi Itô has molded the way in which we all think about stochastic processes. This is a quote, again taken from the introduction of Stroock and Varadhan to his collected papers. Now this was written already in 78; "we all" — in 78 they probably had in mind a rather small group of specialists in the area of stochastic analysis.
But "we all" — that has increased dramatically over the last 30 years, and beyond the boundaries of mathematics, in areas such as engineering, where it started early, and finance, where it started with a vengeance and really developed an incredible momentum. And the amazing thing to me is how much the concepts, not so much the computational tools, have framed the discourse in departments of finance and economics nowadays; it has completely taken over, so to speak. And so I do agree, and I am sure many of you will agree, with that initial quote from the Internet: the Committee for the Gauss Prize has really picked someone really wonderful. Okay, thank you.
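For readers who want to see the Black–Scholes corollary mentioned in the lecture in action, here is a small illustrative sketch (my own code, not part of the lecture): the closed-form call price and a Monte Carlo evaluation of the discounted payoff under the equivalent martingale measure agree.

```python
import numpy as np
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S0, K, r, sigma, T):
    """Closed-form price, a corollary of Ito's formula plus the representation theorem."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Monte Carlo check: expectation of the discounted payoff under the martingale measure,
# where S_T = S0 * exp((r - sigma^2/2) T + sigma W_T).
rng = np.random.default_rng(1)
S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.2, 1.0
WT = rng.normal(0.0, sqrt(T), size=1_000_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * WT)
mc = exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))

print(black_scholes_call(S0, K, r, sigma, T), mc)   # the two numbers agree closely
```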
Laudatio on the occasion of the Gauss Prize award for Applications of Mathematics to Kiyosi Itô “for laying the foundations of the theory of stochastic differential equations and stochastic analysis”.
10.5446/15951 (DOI)
Ladies and gentlemen, it is my honor and my great pleasure to introduce Yakov Eliashberg, a very generous mathematician. Yasha Eliashberg got his PhD in 1972 in Leningrad, now Saint Petersburg again, under the supervision of Vladimir Rokhlin. The title of his dissertation was Singularities of Mappings. Eliashberg then took a professor position at Syktyvkar in the Komi Republic. After a difficult period from 1979 to 1987, he was allowed to emigrate from the USSR to the United States of America in 1987. He visited the Mathematical Sciences Research Institute, MSRI, in 1988. He then joined the mathematics department of Stanford University in 1989, where he has been ever since; he now chairs this department. The present area of research of Yasha Eliashberg is symplectic topology, a discipline that grew from the fantastic developments symplectic geometry underwent in the last 30 years under the influence of many people, in particular Mikhail Gromov, but also because of the push from theoretical physics. Symplectic geometry really appeared as a geometry of a new kind in the memoir of Joseph-Louis Lagrange of 1808 on the variation of the elements of the planets, and was then fully established by Sir William Hamilton, with fundamental contributions by Darboux and Jacobi; in the second part of the 19th century the local content of the theory was completely uncovered. The beauty of the theory is praised by Felix Klein in his introduction to the history of mathematics of the 19th century, but he doubts its usefulness. The emergence of quantum mechanics in the 1920s gave a major impetus for further developments of the theory and showed its relevance for a wide area of applications. The shift towards a global symplectic geometry took some more time. A first major step was taken by Eugenio Calabi in the 1960s, but it is Mikhail Gromov who turned things around somewhat later, by exhibiting completely new rigidity properties of automorphisms of symplectic manifolds. Conjectures by Vladimir Arnold on the number of fixed points of symplectic automorphisms also played an important role. Yasha Eliashberg started to be interested in these problems in 1979. At the time, Conley and Zehnder proved Arnold's conjecture for surfaces. Further important developments are due to the late Andreas Floer. Yasha Eliashberg started a very fruitful collaboration with Helmut Hofer in 1995. He presented the preliminary version of the now monumental theory he has developed fully since 1999–2000 in collaboration with Hofer and Givental. In his lecture, Yasha Eliashberg will tell us where symplectic field theory and its applications stand today. Yasha Eliashberg. So of course it doesn't work. Okay. It doesn't work. Oh, okay. So now. So symplectic field theory, as Jean-Pierre mentioned, is a project which we initiated a few years ago with Alexander Givental and Helmut Hofer, and it is still far from being completed. A lot of people contributed to building the foundations and the applications of the theory — this is of course not a complete list, and there are a lot of people who are not formally working in symplectic field theory but whose work is very relevant; I will not give their names because I would definitely forget someone. So what I will start with is a kind of three-minute crash course in symplectic geometry. Symplectic geometry is just the geometry of a non-degenerate skew-symmetric bilinear form, just as Euclidean geometry is the geometry of a symmetric form.
A symmetric form has at least one invariant, the signature, but a non-degenerate skew-symmetric form is unique, and viewed as a differential form it can always be written in this standard shape. It is also important to think about the symplectic form in the context of a complex structure: if you think about an even-dimensional space — and symplectic geometry only exists in even dimensions — as a space with complex coordinates, then the symplectic form is just minus the imaginary part of the standard Hermitian product. So a symplectic manifold is a manifold which is locally modeled on this R^2n with this special form; equivalently, according to Darboux's theorem, it is a manifold with a non-degenerate closed differential two-form. And it is very important, again as in the Euclidean case, to consider the symplectic form together with a complex structure — in this case an almost complex structure, which means a complex structure on the tangent bundle of the manifold. We say that it is compatible with the symplectic form if they relate to each other the same way as the standard symplectic form and the standard complex structure relate to each other on Euclidean space. It is important, and this will be really crucial for the whole development, that given omega you can always find a J compatible in this sense, and more or less this is a convex choice of such J, so it is a contractible choice. On the other hand, if you have a J, it is not always possible to find a compatible omega — so omega is much more important. Now a symplectic manifold, although at first glance it looks parallel to a Riemannian manifold — we have a manifold with a symmetric form or a manifold with a skew-symmetric form — unlike the Riemannian case has a huge group of diffeomorphisms preserving it, called symplectomorphisms. This is probably due to the fact that the symmetric form has n(n+1)/2 entries and the skew-symmetric one only n(n−1)/2, so there are fewer equations, and this gives such a huge difference. In the two-dimensional case, n equal to 1, for surfaces, the symplectic form is just an area form and symplectic geometry is just the geometry of area-preserving transformations. But in higher dimensions this is drastically different. The group of symplectomorphisms forms a proper subgroup which is, amazingly, closed in the C0 topology: the C0 closure of this group is a proper subgroup of the group of volume-preserving transformations. This is a really fundamental theorem which opens the whole area of symplectic geometry. The Lie algebra of this group consists of symplectic vector fields — equivalently, vector fields with the property that when you contract them with the symplectic form you get a closed form — and we especially want to consider the case when this dual form is exact; then the function is called a Hamiltonian, the corresponding vector field is called a Hamiltonian vector field, and I will denote it as the symplectic gradient of H, where H is the Hamiltonian function. In Darboux coordinates, of course, everybody knows the canonical form of this Hamiltonian vector field: the Hamiltonian differential equations for the flow have the form p dot equals minus dH/dq, q dot equals dH/dp. And for any compatible pair omega, J, the symplectic gradient is always just the gradient rotated by J.
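In formulas (my rendering; sign and ordering conventions differ between authors): on R^{2n} ≅ C^n the standard symplectic form and the Hamiltonian vector field just described are

\[
\omega_0=\sum_j dx_j\wedge dy_j=-\operatorname{Im}\langle\,\cdot\,,\cdot\,\rangle,
\qquad
g(v,w)=\omega(v,Jw),
\qquad
\iota_{X_H}\omega=-dH,
\]

so that in Darboux coordinates \(\dot q_i=\partial H/\partial p_i\), \(\dot p_i=-\partial H/\partial q_i\), and for a compatible pair (ω, J) one gets X_H = J∇H: the symplectic gradient is the ordinary gradient rotated by J.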
Very important objects in symplectic geometry are Lagrangian submanifolds: submanifolds on which omega vanishes, and the maximal such dimension is just half the dimension of the manifold. In terms of the complex structure they are characterized by the property that when you multiply by J, the tangent plane becomes its own orthogonal complement. There are two most important examples of Lagrangian manifolds. One: you take a cotangent bundle — the cotangent bundle of a manifold has a canonical symplectic structure, which is the differential of the famous form p dq — and then a section is Lagrangian if and only if it is a closed form. In particular, if you take, say, the differential of a function, an exact form, then the graph of the derivative of the function is a Lagrangian manifold. On the other hand, if you consider a symplectic manifold and a symplectomorphism, then its graph in the product — if you take the symplectic form omega on one factor and minus omega on the other — is Lagrangian. And there is always this important interplay in symplectic geometry: in one choice of coordinates a Lagrangian manifold looks like the graph of a symplectic transformation, and in another system of coordinates it looks like the graph of the derivative of a function. Now, because of this huge group of symplectomorphisms, it is very difficult to find symplectic invariants beyond some obvious ones. Of course we have the total volume, which is an invariant; then there is the homotopy class of a compatible almost complex structure; and in the closed case the cohomology class of the symplectic form is also an invariant. But until the beginning of the 80s essentially nothing else was known, more or less some minor things. It was Gromov's great insight to introduce holomorphic curves as a tool for finding more subtle, specifically symplectic invariants, and this is what we will discuss today. So let me talk a little bit about holomorphic maps between almost complex manifolds. At first glance you could easily define a holomorphic map between almost complex manifolds just as in the integrable situation: say that the differential is a complex linear map. But unfortunately, if the real dimension of the source manifold is greater than 2, this is a highly over-determined system and generically you never have any solutions, even locally. It is really a kind of miracle that the Cauchy–Riemann equations in the integrable case have solutions in higher dimensions. But in dimension 2, when S is a Riemann surface — and by the way, in this case an almost complex structure and a complex structure on the Riemann surface are exactly the same thing, just a conformal structure — the system is locally determined: you have exactly the same degrees of freedom for constructing holomorphic maps as in the integrable case. Such maps are called holomorphic curves, and in this case also J-holomorphic or pseudo-holomorphic. I don't like the word pseudo-holomorphic because it doesn't carry any information; J-holomorphic is sometimes useful because you want to say for which J. So I will fight with this terminology. If, moreover, S is closed, then with appropriate boundary conditions this is an elliptic system, and its principal symbol is the same as for the standard d-bar operator.
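In formulas (my rendering): a map u from a Riemann surface (S, j) to an almost complex manifold (M, J) is J-holomorphic when its differential is complex linear,

\[
du\circ j=J(u)\circ du,
\qquad\text{equivalently}\qquad
\bar\partial_J u:=\tfrac12\bigl(du+J(u)\circ du\circ j\bigr)=0,
\]

a first-order elliptic system whose principal symbol agrees with that of the usual \(\bar\partial\)-operator — which is what makes the Fredholm theory invoked next applicable.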
So you can apply standard Fredholm theory, and under certain transversality assumptions you can define moduli spaces of holomorphic curves which are finite-dimensional manifolds, or at least orbifolds — in algebraic geometry they say stacks, but okay, here they are just orbifolds. Of course I am just saying this, but it is an extremely serious statement. You see, the whole point — well, one of the points — of introducing almost complex structures rather than integrable ones is that you have much more freedom for constructing such objects: in algebraic geometry you are really restricted in the tools with which you can achieve transversality, while here you hope to have almost as much freedom as in topology for achieving transversality. Unfortunately it is not quite the case — it is almost the case, but not quite. In some cases you can indeed achieve transversality and just forget about the issue, but in other cases you in fact have to enrich the class of your objects and talk about somewhat more general equations than the holomorphic curve equation; I will ignore this whole issue. It is a huge subject, and for instance one of my collaborators, Helmut Hofer, and his co-authors have devoted a lot of effort to building appropriate foundations for this situation. Okay, so the local theory, as I said, is fine for holomorphic curves in the non-degenerate case; but if you want global results you need compactness, and here is one of the major results, proven by Gromov: he proved that for holomorphic curves one has a result similar to the famous Deligne–Mumford–Knudsen compactification of the space of Riemann surfaces — but with a big caveat: only if the area of the holomorphic curves is uniformly bounded. It is extremely difficult to control this area, and the role of the symplectic form, as we will see, comes in precisely at this point. Let me recall a few things about the Deligne–Mumford compactification. According to the uniformization theorem, considering Riemann surfaces with marked points satisfying this condition is the same as considering hyperbolic metrics on the punctured surfaces; this condition, called the stability condition, ensures precisely that you have hyperbolic geometry on this thing. The Deligne–Mumford compactification is obtained by adding nodal surfaces; in the hyperbolic interpretation the nodes look like a union of two cusps, and the degeneration means shrinking a closed geodesic to a point. The stability condition, as I said, has to be satisfied for every component. Gromov's compactness theorem is exactly the same, but the stability condition is different: it needs to be satisfied only for the constant components of the nodal curves, which are sometimes called ghost components. So it is a kind of slight modification of the Deligne–Mumford compactification: you may have the phenomenon of bubbling off of holomorphic spheres, which just does not exist when we talk only about Riemann surfaces.
And as I said already, the role of the symplectic form is to secure the area bounds. The point is that for a compatible pair J, omega the area is the same as the symplectic area, meaning just the integral of the symplectic form; hence, because the form is closed, it depends only on the homology class of the map of the surface. So when we vary the holomorphic curves under consideration within a given homology class, we do not have to worry about controlling the area. The same is true in the boundary value problem with boundary on a Lagrangian manifold. So Gromov's scheme for defining symplectic invariants via holomorphic curve theory was the following. Pick a generic almost complex structure compatible with omega. Then the compactified moduli space of holomorphic curves of genus g in a fixed homology class can be viewed as a cycle in a kind of similarly enriched moduli space of just smooth surfaces, which is a much simpler object whose topology can be understood. The compactness theorem ensures that the homology class of this cycle remains unchanged when one varies J while keeping it compatible with omega. It is therefore an invariant, because all such J are homotopic, and hence we get an invariant — that is what I call the Gromov invariant — which is an invariant of the symplectic structure. Let me give you an example of an application. This is a theorem of Gromov; in somewhat this form it appeared in our joint paper with Gromov. Let us take the so-called symplectic camel problem — it was probably called this way by Arnold. Consider a half-space in R4; of course you can go to higher dimensions, but the phenomenon first appears in dimension 4. There is a ball, and a hyperplane with a small hole, and the radius of this hole is smaller than the radius of the ball. What we are trying to do is to push this ball through this hole by a symplectic isotopy, and the claim is that this is not possible. Why is it not possible? Suppose there exists such an isotopy. Then at every moment we consider an almost complex structure with the following properties: on the ball it is just the push-forward of the standard complex structure, at infinity it is standard, and also near the remaining part of the hyperplane it is standard. This is the power of almost complex structures: because they are homotopically simple objects, it is easy to prescribe the almost complex structure as you wish in some places and extend it elsewhere, subject to the condition that it is compatible with the symplectic form. Now we take a Lagrangian cylinder in this hyperplane — just a radius 1 Lagrangian cylinder sitting in this hyperplane — and consider the moduli space of holomorphic disks with one marked point and with boundary on this Lagrangian L. You see, before we deform, this flat Lagrangian cylinder was filled by flat holomorphic disks, and we had our Gromov invariant: this cycle, this one-dimensional family of such disks. This cycle survives when we deform the almost complex structure: for any J we have such a family of holomorphic disks, and together they form a kind of hypersurface spanning this Lagrangian manifold. When the ball passes through the hole, at some moment it has to intersect this moving wall; therefore at some moment the center of the ball lies precisely on one of these holomorphic disks — this green one. And at this moment we can look at the area.
On the one hand, we can estimate the area of the intersection, the piece of the holomorphic curve inside the ball. The holomorphic disk has its boundary on the Lagrangian manifold, and therefore its total area is fixed: as I said, it is a symplectic invariant, it depends only on the radius of our Lagrangian cylinder, and it is just equal to pi. On the other hand, I can look at this area from the ball itself. The ball carries the standard complex structure, and inside you have a curve passing through the center; by the standard isoperimetric inequality for minimal surfaces, its area should be bigger than pi r squared, and we get a contradiction. So the Gromov–Witten potential is a certain physically motivated way to package the algebraic information contained in Gromov invariants; let me talk a little bit about this. Consider a compact symplectic manifold with a compatible almost complex structure and take cohomology classes; let us think about these cohomology classes as differential forms on our manifold. Then — and this is a physically motivated notion — one defines the correlator by pulling these forms back to the moduli space of holomorphic curves via the evaluation maps at the first marked point, the second, et cetera, and integrating over this moduli space. So, okay, I just said what is written here: this is the correlator, and then we would like to write some kind of generating function for all the correlators. We fix a basis in the cohomology, and also, in order to talk about homology classes, we fix a basis in the two-dimensional homology; any homology class can be expanded in this basis with coefficients d1, ..., dk — think of these d's as degrees — and we will just not distinguish between this d and the homology class. We can also introduce a variable for each element of this basis in cohomology and use this notation. The Gromov–Witten potential is then the generating function for all correlators. You just take this formal linear combination; it is important to take the t's as graded variables of the same grading, or parity of grading, as the theta's — then each component of this thing is even graded and it is a symmetric expression in the t's. If you write the expansion in the t variables, you get all the correlators. The meaning of a correlator is very simple: if you consider cycles dual to these cohomology classes, then, as was explained in the talk of Andrei Okounkov yesterday, what we are counting are holomorphic curves subject to incidence conditions, restricting the marked points to certain cycles. You can also upgrade this further by including so-called descendant variables — if you do not know this, you will probably not be able to learn it from my talk; what I just want to say is that one can further elaborate the incidence conditions for the holomorphic curves by imposing restrictions on jets of the holomorphic curves, on the derivatives. This is traditionally done in a certain way, but the important thing is that the Gromov–Witten potential then depends on infinitely many variables instead of finitely many, the dimension of our cohomology.
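In symbols (my rendering; conventions for gradings and for the degree-recording variable vary): writing ev_i for the evaluation map at the i-th marked point on the compactified moduli space of genus-g curves in the class d with k marked points,

\[
\langle\theta_{i_1},\dots,\theta_{i_k}\rangle_{g,d}
=\int_{\overline{\mathcal M}_{g,k}(M,d)}\operatorname{ev}_1^*\theta_{i_1}\wedge\cdots\wedge\operatorname{ev}_k^*\theta_{i_k},
\qquad
F_g(t)=\sum_{d}\sum_{k\ge 0}\frac{z^{d}}{k!}\,\langle t,\dots,t\rangle_{g,d},
\quad t=\sum_i t_i\theta_i ,
\]

and the symplectic area of a curve in the class d is \(\langle[\omega],d\rangle\), which is the uniform bound that makes Gromov compactness applicable.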
So the genus zero part of this expansion is called the rational Gromov-Witten potential. For instance, in the case of CP^2, take the standard basis of its cohomology: the unit, the two-dimensional class and the four-dimensional class; choose CP^1, the projective line, as the standard generator of the two-dimensional homology, and consider this function. We set the first two variables, corresponding to theta_0 and theta_1, equal to zero and keep only the one corresponding to the point constraint. The coefficients then have a very simple enumerative meaning: as in Okounkov's talk yesterday, the coefficient is the number of rational curves of a given degree d in the projective plane which pass through k points in general position. These are the standard numbers: through two points you can pass one line, and so on; and of course in general the number of points and the degree must satisfy a relation (k = 3d - 1) in order to have finitely many curves. What I want to do is describe a geometric algorithm, generated by symplectic field theory, which I have not explained yet, for computing this function. Today there are many recursive procedures for computing these numbers, the first going back to Kontsevich; I will explain another one, which is actually close to an approach of Okounkov and Pandharipande, though arrived at for a different reason. Okay, so let us do the following. Consider the infinite-dimensional space of formal Fourier series with values in C^2; think of its elements as formal loops in C^2. As coordinates on this space I take the coefficients of the expansion; notice there is no constant term, so the loops are centered at zero. On this space we consider a symplectic form. It corresponds to the Poisson bracket in which the bracket of the coefficients p_m and q_m is proportional to m and all other brackets vanish, so it can also be written in this form; if you think of the Poisson bracket as a tensor defined on covectors, and covectors can themselves be viewed as loops of the same type since this is a vector space, then it is given by this formula. (You do not want to see my hand everywhere; okay, anyway.) This is a rather famous symplectic structure which appears in the theory of integrable systems. Consider a Hamiltonian function on this space, that is, a functional of u. Take this one; it defines a Hamiltonian flow, and if you write out the Hamiltonian equations you get this equation, which in fact turns out to be the dispersionless Toda equation, the continuum limit of the Toda lattice; written as a single equation it is a second-order differential equation. I was told this by Dubrovin; I knew nothing about Toda before he told me. Now do the following construction. Take the zero section in this space, the subspace where the q-coordinates are equal to zero; in terms of the Fourier expansion this means keeping only positive-frequency coefficients and no negative ones. Then move this zero section by the Hamiltonian flow I just described. This gives a moving Lagrangian manifold, which I denote by L_t. And, as I told you already, Lagrangian manifolds are graphs of differentials of functions, so they are generated by some function as long as they are graphical.
But we are in the world of formal power series, and there everything is graphical. So we can find this generating function G_t and finally evaluate it at this particular point, and this is precisely our function f(t, z): the expansion of this function gives the enumerative numbers of algebraic curves. Okay. It is important that the Gromov-Witten potential is a symplectic invariant; it is made of Gromov-Witten invariants, so it depends only on omega. While omega does not explicitly enter the definition, it nevertheless keeps its background, controlling role. Now, how does one compute Gromov-Witten invariants? It is easy to say: take J, take this moduli space; but nobody has ever solved any nontrivial d-bar equation explicitly, it is simply not possible. The only way is either to deform J, or to degenerate J, to a situation which you can understand, and then say that, because it is an invariant, the answer is the same as before. One way is to deform to an algebraic situation; the other is to degenerate. Symplectic field theory is one way to degenerate, and tropical geometry is another. So what we do is split the manifold into pieces. If you have a real hypersurface in an almost complex manifold, you can start to stretch the neck, as they say in gauge theory: insert bigger and bigger pieces on which the almost complex structure is independent of the vertical coordinate. In the limit you get manifolds with cylindrical ends. This is in general impossible to do in the integrable case, where it is extremely restrictive, but in the non-integrable case this is the advantage: you can always do it. In fact in the integrable case it is sometimes also possible, but it is much more restrictive; I will say more about this later. What happens to holomorphic curves when we do this? They also split. What may happen is that a holomorphic curve in the moduli space splits, so that you observe a curve which looks like this: in the upper part you have some piece of curve, in the lower part another piece, and in between a bunch of cylindrical parts with pieces of curve stuck there; all together they satisfy matching conditions and fit together into the original curve in the original manifold. This brings us to studying two kinds of objects: cylindrical almost complex manifolds, which are cylinders whose almost complex structure is invariant under translation (this is the definition, and I will talk about it in more detail), and manifolds with cylindrical ends, non-compact almost complex manifolds which are cylindrical outside a compact set. These are precisely the objects which appear as the result of such splitting. It is important that with every cylindrical structure we can associate a vector field tangent to our hypersurface Y, namely R = J(d/dt), where t is the vertical coordinate. This is again the same picture: the splitting procedure produces, out of the manifold, these manifolds with cylindrical ends. Any cylindrical structure is determined by what in complex analysis is called a CR structure: the tangent spaces of the odd-dimensional manifold Y contain a maximal complex subspace of real codimension 1, namely xi = TY intersected with J(TY).
So you have the distribution xi of complex tangencies to Y, and you have the vector field R, which is transverse to it. Given R and the plane field xi, you can of course always define a 1-form lambda, normalized by the conditions that it equals 1 on the vector field R and equals 0 on the plane field xi. Generically, all periodic orbits of the vector field R, and periodic orbits will play an important role in this story, are non-degenerate, and thus there are finitely many of them of bounded period. We call this case generic, and we use this notation for the space of periodic orbits. However, it is also useful to consider the so-called Morse-Bott case, when the periodic orbits are organized into submanifolds, for instance when all orbits are periodic. I told you that we cannot do anything in an arbitrary almost complex manifold unless we have a symplectic structure controlling it, so we also need something like that here. We will assume that there exists a closed 2-form of maximal rank on Y. Notice that Y is odd-dimensional, so if its dimension is 2n-1 the maximal rank of a closed 2-form is 2n-2, one less. We want this form to be such that R is in its kernel, so that R plays the role of a Hamiltonian vector field for this form, and we want the 1-form lambda to be preserved by the flow of R, which is the same as saying that R is also in the kernel of d lambda. This pair is what we call a stable Hamiltonian structure H, and a cylindrical almost complex structure is called compatible with H. There are three important cases of Hamiltonian structures; in fact, I do not know any other kind which is not a small modification of one of these. The first is the Floer case. Here our manifold is a mapping torus: we start with a symplectic manifold M and consider the mapping torus of a symplectomorphism f. Suppose for instance that f is isotopic to the identity, so that it is generated by a time-dependent Hamiltonian H_t. In this case R equals d/dt plus the symplectic gradient of H_t, and periodic orbits are in one-to-one correspondence with the periodic points of f. You have already noticed that my own pictures are pretty bad, but I received one of these pictures as a present, so this is a picture of the mapping torus. There is also the contact case: Y is a contact manifold, which means that xi is a maximally non-integrable plane field, lambda is a contact form, and omega = d lambda. In this case R is called the Reeb vector field of lambda. And there is the circle bundle case, when our manifold is a circle bundle over a symplectic manifold and R generates the circle action; the space of periodic orbits is then just the original symplectic manifold. Now we want to discuss holomorphic curves in these manifolds; the curves themselves have to have cylindrical ends. Let us first talk about the cylindrical manifold itself. Notice that if you take the straight cylinder over a periodic orbit, you get a holomorphic curve, because J applied to the d/dt direction is precisely R. So we have these holomorphic cylinders, and therefore we want to consider holomorphic curves which asymptotically look like such cylinders.
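Written out in formulas, in my own notation and summarizing the structures just described rather than reproducing the slides, the data on the odd-dimensional manifold Y and the three basic examples are:

```latex
% Cylindrical structure on R x Y and the induced data on Y:
\xi \;=\; TY \cap J(TY), \qquad R \;=\; J\!\big(\tfrac{\partial}{\partial t}\big),
\qquad \lambda(R)=1,\quad \lambda|_{\xi}=0 .

% Stable Hamiltonian structure (\omega,\lambda): \omega closed of maximal rank 2n-2 on Y^{2n-1},
\iota_R\,\omega \;=\; 0, \qquad \iota_R\,d\lambda \;=\; 0
\quad(\text{equivalently, the flow of } R \text{ preserves } \lambda).

% Three basic cases:
\text{Floer: } R=\partial_t + X_{H_t} \text{ on a mapping torus};\qquad
\text{contact: } \omega=d\lambda,\ R=\text{Reeb field of }\lambda;\qquad
\text{circle bundle: } R \text{ generates the } S^1\text{-action}.
```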
So now I can say what SFT is, to a first approximation: SFT is a kind of functor from a category which I call the geometric SFT category, whose objects are Hamiltonian structures with a compatible J and whose morphisms are symplectic cobordisms, to an algebraic SFT category, whose objects are differential Weyl algebras, which I will discuss, and whose morphisms are certain Fourier integral operators intertwining them. In fact this is only a zeroth approximation, because the structure of SFT is much richer: first of all it is not one category but at least a 2-category, and second there is much more structure in these objects. There is also a rational version of SFT, where we talk only about genus zero curves; there, instead of the Weyl algebra we get a Poisson algebra, or a symplectic manifold as in the example above, and instead of Fourier operators we get Lagrangian correspondences. By the way, a symplectic cobordism is just a symplectic manifold bounded by two Hamiltonian structures, with the symplectic form restricting to the boundary as omega-plus and omega-minus. You know that there is an issue about how to compose symplectic cobordisms; there is no canonical operation, and one has to smooth corners. But in this world, when we take a cobordism we immediately attach cylindrical ends to it, and when we need to compose cobordisms we simply stack them one on top of the other, without any gluing. Let me now describe the algebraic objects. First I need to say something about periodic orbits. In the non-degenerate case, and I restrict to the non-degenerate case, a periodic orbit can be odd or even, depending on its Poincaré return map: roughly speaking, hyperbolic or elliptic. Let us associate with every simple orbit of R two infinite sequences of graded variables. Their Z/2-grading is exactly the even/odd parity I just described; in principle there is an integer grading by Maslov-type Conley-Zehnder indices, but I am not discussing this. We then organize these variables into Fourier series. In fact, you can imagine a big space whose dimension equals the number of such orbits gamma, think of the collection of these variables u_gamma as coordinates, and regard the whole collection as a formal loop in this big space. So, to a first, or probably even a minus-first, approximation, SFT associates with H and a compatible J the following algebraic objects; I will describe the rational SFT and the full SFT separately. The object in rational SFT is a graded Poisson algebra generated by these variables, with the relations that this Poisson bracket is equal to k and all the other brackets are zero; equivalently, one can consider the corresponding symplectic form, which is just 1/k times this. And there is a special element, which I call the Hamiltonian h, satisfying the property that its Poisson bracket with itself is zero. Let me note that h is in fact always odd, so this bracket is not automatically zero, as it would be in the even case.
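In symbols, and in my own notation meant only to pin down the relations just stated, the rational package reads:

```latex
% For each simple orbit \gamma and each multiplicity k: variables p_{k,\gamma}, q_{k,\gamma} with
\{\,p_{k,\gamma}\,,\,q_{k,\gamma}\,\} \;=\; k, \qquad \text{all other brackets } = 0,
\qquad \omega \;=\; \sum_{k,\gamma}\tfrac{1}{k}\; dp_{k,\gamma}\wedge dq_{k,\gamma},

% together with an odd element h (counting rational holomorphic curves) satisfying
\{\,h\,,\,h\,\} \;=\; 0 .
```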
In the full SFT you have a Weyl algebra instead of the Poisson algebra: an associative algebra with these commutation relations in place of the Poisson bracket, with one extra variable h-bar, and another element H, lying in 1/h-bar times the algebra, which satisfies the relation that the commutator of H with itself is zero; since H is an odd object, this is exactly the same as saying that the composition of H with itself is zero. It is important, and everyone working in this area knows it, that this Weyl algebra can be represented as an algebra of differential operators, the Weyl representation, acting on the so-called Fock space: the space of functions of the q-variables, with formal power series in the variable h-bar. Oh, I am sorry, I forgot the most important thing here: the coefficient should be k times h-bar, so p_k acts not simply as d/dq_k but as k h-bar d/dq_k. I will not describe the morphisms in general, I just do not have time, so let me only consider the rational case. There, a morphism between two such Poisson algebras is a function with the property that the Hamiltonian vanishes on the Lagrangian manifold defined by this function. The geometric meaning is as follows. Consider the symplectic space corresponding to this Poisson algebra, with coordinates p, q; as I already said, this is the symplectic form corresponding to the Poisson bracket. In this symplectic space consider the Lagrangian manifold given by this generating function, and the Hamiltonian vector field defined everywhere by the function h-plus plus h-minus. The condition that h-plus plus h-minus is constant on the Lagrangian manifold just means that this Hamiltonian vector field is tangent to the Lagrangian manifold. In the case when one of the ends is empty, when you have a cobordism bounded by a single odd-dimensional manifold, we associate with it simply a Lagrangian subspace of this symplectic space. Composition of morphisms is composition of Lagrangian correspondences; I do not have time to explain what that is, but for Lagrangian manifolds which are graphs of symplectic maps it just corresponds to composition of the maps. So where does all this structure come from? The algebra above describes the structure of the boundary of the moduli space of holomorphic curves with cylindrical ends. I need to describe this compactification: the appropriate compactification consists of so-called holomorphic buildings. A typical holomorphic building has several stories: one holomorphic curve, another holomorphic curve, and matching data between the floors of the building. The important point is that the codimension of a boundary stratum, in the cylindrical case, equals the number of components which are non-trivial, that is, different from the trivial cylinders; here, for instance, 1, 2, 3, 4, 5, 6, these six components give the codimension of the stratum. Hence, when you look at a codimension-one stratum of the boundary, you always have a two-story building. By the way, in the case of a manifold with cylindrical ends, in this count of the codimension you only count the non-trivial components in the cylindrical parts. So why the Weyl algebra? Because the Weyl algebra is precisely the algebra which describes the gluing of Riemann surfaces.
Look at these two surfaces and suppose I want to consider all possible ways of gluing them: for instance I glue these two ends and extend trivially here, or I glue nothing at all, or I glue this end to this end and that end to that end and glue nothing to this one, and so on. This is exactly the same as the following algebra. The ends of each curve are marked, sorry, it is not possible to see this in the picture, by p's and q's: the upper ends by p's, the lower ends by q's. With every such curve I associate a monomial which simply records its ends, times h-bar to the power of minus the Euler characteristic; sorry, not the Euler characteristic: g minus 1, that is, minus one half of the Euler characteristic. So for this one you get p_1 p_2 p_3, and for that one q_1 q_2 p_1. Then you multiply them in our Weyl algebra, which just means moving all the p's to the right and all the q's to the left, or thinking of p as h-bar d/dq and taking the composition of the operators, and you get precisely the right number of terms. The coefficient k appears, this k h-bar, because there are k ways of gluing along a multiple orbit: you can glue this way, or this way, or this way. Then you can upgrade the whole theory in the same way it was done in Gromov-Witten theory. You define correlators, the same kind of correlators, counting holomorphic curves with cylindrical ends subject to constraints; the only difference is that we have to remember the cylindrical ends, which I mark here by gamma-minus and gamma-plus. You organize them into correlators, which now become elements of the Weyl algebra, and finally consider the generating function, which I call H (it depends on h-bar, but I will just call it H). Then you have the following master equation. Notice that I did not require the forms theta to be closed; I can take any forms, and the equation turns out to be the following: if we consider the H associated with this set of forms theta together with d theta, then the composition of H with itself equals this operator applied to H, where the s-variables correspond to the forms d theta and the t-variables to the forms theta. In particular, if d theta = 0, you get that H composed with H is zero. This is precisely Stokes' formula: as I said, the composition of differential operators describes the boundary of the moduli space, so on one side of Stokes' formula I integrate a form over the boundary and get this term, and on the other side I integrate the form over the moduli space and get that term.
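Again in symbols, my notation, as a compact restatement of what was just said rather than the speaker's slides: the full package is the graded Weyl algebra, its Fock-space representation, and the master equation.

```latex
% Weyl relations and the representation by differential operators on the Fock space
% of functions of the q-variables (formal power series in \hbar):
[\,p_{k,\gamma}\,,\,q_{k,\gamma}\,] \;=\; k\hbar, \qquad \text{all other commutators } = 0,
\qquad p_{k,\gamma} \;\longmapsto\; k\hbar\,\frac{\partial}{\partial q_{k,\gamma}} .

% The SFT Hamiltonian \mathbf{H}\in\hbar^{-1}\mathcal{W} is odd, and the master equation
[\,\mathbf{H}\,,\,\mathbf{H}\,] \;=\; 0 \quad\Longleftrightarrow\quad \mathbf{H}\circ\mathbf{H} \;=\; 0
% makes D := [\mathbf{H},\,\cdot\,] a differential, D^2 = 0, used below to define homology.
```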
In many, many interesting cases you have, in fact, H(0) = 0, and in this case, differentiating this identity with respect to the t-variables, you get that the commutators of H_i with H_j are zero, where the H_i are the derivatives. So you get a sequence of commuting differential operators. Moreover, you can add the descendant variables, so that H in fact depends on infinitely many variables, and by this procedure you get infinitely many commuting differential operators, the sign of some kind of quantum integrable system. And indeed: if you do this for the case of S^1, you get an infinite sequence of commuting integrals for the so-called quantized Burgers hierarchy (what is written here is the non-quantized thing; you need to take a quantum version of it, replacing the ODE by a PDE, and then these are commuting integrals for that hierarchy). If you do the same thing for the three-dimensional sphere, you get integrals for the quantized version of the dispersionless Toda hierarchy, which already appeared before in our count of holomorphic curves in CP^2. Let us talk a little about invariants. Thanks to the master equation, H commuting with H equal to zero, we can define on this Weyl algebra the differential given by commutation with H; it satisfies D squared equal to zero, and therefore we get a well-defined homology, which in many cases provides us with powerful geometric invariants. For instance, in the Floer case you get a far-reaching generalization of Floer homology theory which is not really explored at all: there is a wealth of new algebraic structures and operations on Floer homology which have not been studied. In the contact case it leads to the so-called contact homology theory, which has been an extremely successful story in terms of finding invariants of manifolds; in low dimensions there are competing constructions of invariants via Ozsváth-Szabó theory or Seiberg-Witten theory, but in higher dimensions nothing else is known. Alternatively, one can use H to define a differential on the space of functions of q, on the so-called Fock space, and this leads to a BV-infinity algebra structure on that space, which was recently studied by Cieliebak and Latschev. Now let us consider the case when you have an X which is a symplectic cobordism bounded by Y, a cobordism with one end. I already explained that with this you can associate, by counting holomorphic curves in the cobordism with cylindrical end, a certain Lagrangian manifold. But you can also do the same thing with the parameter t, including the forms, exactly as was done in the case of H. If d theta = 0, then, as in the naive case, H vanishes on the corresponding Lagrangian manifold; but if d theta is not zero, only zero on the boundary, then you get the following equation, with a similar D. If you apply this to the case of the four-ball bounded by the three-sphere, then take this equation and notice the operator here involving t and d/ds: differentiating with respect to t you get that d/ds of something equals something like this, and this is exactly a Hamilton-Jacobi equation for the evolution of this f. If you do this for the case of S^3, you can easily compute this Hamiltonian explicitly, and then you get
this dispersionless Toda flow, which was used above for the enumeration of rational curves in CP^2. Okay, so let me make some final remarks, a to-do list. There are plenty of things one needs to do with this symplectic field theory; some of them people are currently working on, some not. For instance, there are huge possibilities for applications to Hamiltonian dynamics: holomorphic curves have been applied to Hamiltonian dynamics very successfully, but nobody has ever seen any serious application of holomorphic curves of genus greater than zero to dynamics, and using SFT I am sure one can do this; for instance, one can generalize the Entov-Polterovich quasimorphisms, and so on. There is a mysterious relation to the theory of quantum integrable systems: as I already hinted, from an odd-dimensional manifold with a Hamiltonian structure you always get an infinite sequence of commuting differential operators, and in the rational case an infinite sequence of commuting Hamiltonians; moreover this evolution property allows us to write down equations and solve them, and the solutions of these equations become, in turn, the left-hand sides of new integrable hierarchies, so it is a kind of self-generating system of integrable hierarchies, if one can really solve them explicitly. One needs to study the algebra of SFT: what are the actual invariants? There is this homology, but of course there is much more, the homotopy type of this algebra and much more structure, and how to extract actual computable invariants from it is a problem. There is a lot of work now on obtaining topological invariants via the ideas of SFT. The idea is the following: you take a topological manifold, perform some canonical symplectic construction, for instance take its cotangent bundle or its unit cotangent bundle, and then study symplectic invariants of the result; in many cases these turn out to be very powerful topological invariants. For instance, I just mention Lenny Ng's work on invariants of knots, but there is a lot of work to do here. There is current work, an attempt so far only half constructed, on a relative SFT, which works with holomorphic curves having not only punctures but also boundary; it is related to string topology and to cluster homology, and there is work here of Fukaya, of Cieliebak and Latschev, and many others, and the Cornea-Lalonde cluster homology, about which you will hear today. There is a certain procedure called symplectic surgery, which is tightly related to the theory of symplectic Lefschetz pencils; to be able to compute the invariants one needs to develop this, and it will also shed light on mirror symmetry. There is a connection between topological field theory and SFT, with some work here of Voronov and others. And there is the relation to tropical geometry: the tropical degeneration is another degeneration very much related to the SFT degeneration. And there are a lot of other things to do. Thank you.
Symplectic field theory (SFT) attempts to approach the theory of holomorphic curves in symplectic manifolds (also called Gromov-Witten theory) in the spirit of a topological field theory. This naturally leads to new algebraic structures which seem to have interesting applications and connections not only in symplectic geometry but also in other areas of mathematics, e.g. topology and integrable PDEs. In this talk we sketch out the formal algebraic structure of SFT and discuss some current work towards its applications.
10.5446/15950 (DOI)
It gives me great pleasure, and it is an honor, to introduce Ron DeVore, who over the last few decades has been pioneering and introducing a variety of methods of functional approximation and approximation theory in all of its ramifications. Ron's contributions cover a broad range of issues and problems, including deep analytical issues related to the methodology of functional approximation. You will hear the outcome and import of this kind of work in his talk, but let me just say that in our day functional approximation is, in some sense, the key to the digital age: machine learning and functional regression pose the straightforward, elementary question that everybody asks but nobody knows how to answer, and hopefully we will learn something in the next few minutes. As a lighter aside, I just learned a few minutes ago that we are actually mathematically related: he is my nephew, mathematical nephew. He graduated under Bojanic, who is my mathematical brother; we were both students of Karamata. So the world of mathematics is small, and beautiful. So thank you, Uncle Ralfi. I think we are all aware that most scientific problems cannot be solved exactly, so we have to resort to some sort of approximation or numerical computation. We create algorithms to try to resolve these problems, and my talk is concerned with the question: is there a way to tell whether an algorithm is optimal or near optimal? Answering this would not only be a wonderful result mathematically, but would point us in the right directions computationally. To answer the question we will, of course, have to describe exactly what we mean by optimal. But I want to add another caveat. You may think of an algorithm that sounds very good in theory, but remember that whenever we implement such an algorithm we do it on a computer, and we are subject to machine errors and even to errors in entering the data. So it is not just a question of designing an algorithm: the algorithm should be numerically stable, and we will see that ingredient coming up in my talk. There are many settings in which I could describe this, and in fact with my colleagues, especially Albert Cohen and Wolfgang Dahmen, we have studied this problem of optimal algorithms in many different settings. But after some thought about what to do in this talk, I decided (this clicker is not working very well, so I think I will stand here) to concentrate on an area called compressed sensing. This is a relatively new area of mathematics which is getting a lot of interest these days. The motivation for it is the problem of sensing real-world signals. The usual paradigm for signals is that they are band-limited functions, and the way to sample them is through what is called the Shannon sampling theorem, or Nyquist sampling. The unfortunate state of affairs is that many of the real-world signals we are interested in are broadband: their Fourier transform has large support, and this type of sampling cannot be implemented in practice. Compressed sensing looks for a way around this, an innovative way of sampling that defeats the roadblock of Shannon sampling and samples signals at a rate determined by the information content of the signal rather than by its bandwidth. My reasons for choosing this subject are three.
One is that it is very audience friendly, as you will see: it will be easy to grasp the main ideas, because it is basically an exercise in finite-dimensional geometry and linear algebra. The second reason is that the subject itself interfaces with a lot of areas of mathematics, most notably probability theory, theoretical computer science, and functional analysis. But the real reason is that by discussing this subject I will be able to introduce the types of optimality that are important in numerical computation, with all their nuances. As we will see, I will introduce three types of optimality when we discuss this problem, and that is more or less the full range I know of in numerical problems. Okay, so let us begin. Really, in compressed sensing we are interested in analog signals, but the subject is fully developed only for discrete signals, so I am going to stick to discrete signals until the very end, when I will make a few comments. Our problem is the following. We have a vector x in R^N, where capital N is large. For example, if we were dealing with images, N would be the number of pixels, of size one million or even more. The game we are going to play is that we are allowed to ask small n questions about this signal, and they should be non-adaptive, by which I mean that the one hundredth question must not depend on the answers to the previous 99: you must set the questions down in advance. I am even more restrictive, because what I mean by a question is very specific. A question, to me, is the inner product of the vector x with some other vector: you choose a vector v, and taking the inner product of v with x gives you the answer to your question. So a question is very specific here; it is linear information. Now, we can always represent this kind of system by an n by N matrix phi, whose rows are the vectors v we take inner products with, and where capital N is the length of the vector x. Typically capital N is very large and small n is very small, so we have short and fat matrices phi. You see, in this world the Brazilian diet does not apply: we are interested in short, fat things. Okay. My question is: can we say what are the best matrices to use in such a system? Or, if not best, near best, or good matrices? Here, good means the following. Once we have extracted this information y = phi(x), so: x appeared before us, we asked our questions, we got this vector y, and now x disappears from our sight and all we can use is y. What we are interested in is how well we can recover x from this information y, how well we can approximate x. That is roughly what we mean by good. There are going to be two aspects to this problem. One is: does y contain enough information about x? And the second is: how do we extract this information from y? That part is called the decoding, and both are serious issues. Okay, so let us think about what the problem really is here. We have a matrix phi mapping a large-dimensional space, R^N, into a small-dimensional space. So naturally a lot of vectors are mapped to the same image. In particular, if you look at all the vectors mapped to zero, the null space, this is a large space: it has dimension capital N minus little n.
Since capital N is big and little n is small, this is a large-dimensional space: we have a lot of collapsing going on. If you take any vector y and ask which vectors are mapped to y, they form a hyperplane: take any one vector mapped to y and add the null space to it. This entire hyperplane carries the same information y. So these hyperplanes, drawn here in yellow (this is the hyperplane for a given y), consist of points that are all mapped to the same information vector y. Right, so there is tremendous collapsing going on. Now, what is a decoder going to do? It takes y and maps us back into the big space R^N; after it does that, we get our approximation x-bar to the original x. At this point you should be very pessimistic that we can do anything here, because of this collapsing: the vector x-bar has to serve as an approximation for all the vectors in the big hyperplane that was mapped to y, since none of them is distinguished in any way by this encoding and decoding; they are all treated the same way. Indeed, if we had to work in complete generality, we would not be able to say much. However, our understanding of the world of real signals is that they have some structure, and we want to take advantage of this structure in going forward. Our paradigm, the way we think of real-world signals, is that they are sparse in some sense: there is a set of building blocks, and if you represent the signal with respect to this set of building blocks, it either has a very small representation or it can be approximated well by a few terms. As for the building blocks, in my case I am going to restrict to the case where the building blocks are a basis for R^N; you could think of this in other settings as well. In some problems we may know the basis in which the vector is sparse; in other problems we might not know the basis. This is an important consideration. I am going to proceed for the moment assuming that the basis is known to us, and later I will say something about what we do when the basis is not known. So here is my first way of measuring optimality. When we tackle a problem like this, let us try to formulate a precise mathematical problem to see what we mean by a best or near-best algorithm. Here is my first notion of optimality. I am going to assume that my signal x is sparse, and for now I can assume that the basis in which it is sparse is the canonical basis; I am not pushing anything under the rug, because you can always make a basis transformation to get to this case. What do I mean by a sparse vector? I mean that it has few non-zero components. Given a vector x, its support is the set of indices i for which the entry is non-zero, and I form the class sigma_k of all vectors whose support has size at most k. This is a non-linear set, not a linear space; it is a union of hyperplanes: for example, take any index set T of size k and look at the set of vectors supported on T, and then take the union over all such T. You can visualize sigma_k as a union of many hyperplanes. So I am going to assume my signal is in sigma_k.
And I am going to ask: given that information, how can you best design an encoding-decoding scheme? Here is the problem. Capital N is fixed, little k is fixed, and I ask: what is the smallest n such that I can build a matrix phi and a decoder that will capture every one of those vectors exactly? Let me tell you right away what the answer is: the answer is n = 2k. This is morally justified if you think about it, because a vector in sigma_k is determined by 2k pieces of information: you need to know the k positions where it is non-zero, and then the values of x at those k positions. That is 2k pieces of information, so one would think it can be done with 2k samples. Of course, my definition of a question was very specific, I have to take inner products, so it is not absolutely clear that this can be done; but, as we will see, it can. My goal now is to describe which matrices are good for this problem, which matrices will work. I want to think of my matrix as built from its columns: the matrix phi has columns v_1 through v_N. I am going to introduce a property of the matrix such that, if the matrix has this property, it allows a solution to this problem; and to describe it I look at submatrices of phi. Take any set T of column indices and keep only those columns of phi, throwing everything else away; I denote the result by phi_T, and I will use this matrix frequently. Associated to it is the Gramian matrix, phi_T transpose times phi_T, the matrix of inner products, which is square, symmetric and non-negative definite. And now here is the theorem. Suppose you have an n and you want to check whether a matrix of this size will capture every k-sparse vector exactly. I claim this is the case if and only if one of the following conditions holds, and they are all equivalent; the first is simply the requirement that we reproduce all such signals exactly. The next condition is that sigma_2k intersects the null space only in zero. Notice that I am trying to recover k-sparse signals, but the condition involves 2k. Why? It is very intuitive: we need to avoid the situation where two signals of sparsity k are mapped to the same y, because then we could not tell which one it was. If two such signals are mapped to the same y, their difference is mapped to zero, so it lies in the null space, and the difference is typically a signal of sparsity 2k. That is how we get this condition. The remaining equivalent conditions are simple linear algebra: another way to state it is that the submatrices phi_T have rank 2k no matter which 2k columns T we pick, and yet another way is that the Gramian matrix is invertible for every such T. So if I can find a matrix phi with these properties, then I know it will recover k-sparse signals exactly.
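In symbols (my notation; the slide listed these as four equivalent conditions, and the ones stated aloud are the following), with N(Phi) the null space of Phi and Phi_T the submatrix of the columns indexed by T:

```latex
\exists\,\Delta:\ \Delta(\Phi x)=x\ \ \forall x\in\Sigma_k
\;\Longleftrightarrow\;
\Sigma_{2k}\cap\mathcal{N}(\Phi)=\{0\}
\;\Longleftrightarrow\;
\operatorname{rank}\Phi_T = 2k \ \ \text{for all } |T|=2k
\;\Longleftrightarrow\;
\Phi_T^{\top}\Phi_T \ \text{invertible for all } |T|=2k .
```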
So the question becomes: can I construct such matrices, can I write some down? Maybe I am stating something for which you cannot find such matrices. What do we need to do? We need to create vectors in R^{2k}; I am claiming I can do this with n = 2k, right? But I need to be able to create a lot of such vectors, because capital N is arbitrary and can be arbitrarily large, and I need that whenever I take 2k of them they are linearly independent. Here is a quick and easy solution: take the Vandermonde matrix. Take distinct points x_1 through x_N and form the matrix whose columns contain the successive powers of the given x_j's. Then we know that if we pick a submatrix consisting of any 2k of these columns, we get a square matrix which corresponds to polynomial interpolation, and this matrix is invertible. So this matrix satisfies the properties I wanted, and it solves the problem. But I have not discussed how to do the decoding. You can think of many ways to decode; let me just put down a naive decoder. I would like my decoder to be defined for every y, not just for the y's that are images of k-sparse vectors, because in practice I will not have that information. So here is something you could do: given y, look over the set sigma_k and try to find a z that best fits y, that is, such that y minus phi(z) is as small as possible. This is a least squares problem: find the least squares fit to y from the image of sigma_k under the map phi. You can decouple this problem: look at the spaces we introduced before, where T is a set of column indices of size k, and solve the least squares problem on the set T. Of course we can write down the solution: it is the Moore-Penrose inverse, where we apply the inverse of the Gramian matrix to phi_T transpose y. This already shows you where we need the Gramian matrix to be invertible; we need non-singularity. Then we just search over all these T and find the T-star that gives the smallest residual. In the case where our vector y was the image of a k-sparse vector, there will be a T for which the residual is zero, it will be unique, and we will pick it up by this decoding process. Okay, sounds great. We have solved our first problem. Can we now go home and have a glass of cognac and feel happy that we have done something positive? Well, I am sorry to tell you that there is bad news: none of us will be alive when this decoding is finished. This naive decoding is horrible. If you have a problem where capital N is like a million, as in image processing, and little n is like a thousand, you can imagine you will have to check all subsets of capital N things taken little n at a time. But a more serious problem, it turns out, is that the decoding is not stable. You can actually fix the first problem: you could find another matrix, for example the first n rows of the discrete Fourier matrix, and then by some clever decoding you can do the whole problem in this number of operations, which is reasonable for us. So the real problem is not that the decoding takes forever; the fact that the decoding is unstable is the real problem. Let me explain why we are really stuck here in some sense. Suppose I am a generous man and I tell you the support of the vector x, suppose I tell you it is supported on T. Now you do not have to find the indices of the support; you only have to find the vector x.
How would you do this, knowing the support T? Well, you have to apply this inverse operator, and in the examples I created this operator is, as we know, very poorly behaved. In fact, if I insist on having only 2k samples, I am stuck: I cannot do this in a stable way. So what seemed like an easy mathematical solution teaches us a lesson: what seems mathematically fine may not be numerically reasonable. This brings us to where compressed sensing actually began; I am not telling this in chronological order. Compressed sensing began about three or four years ago with the work of Emmanuel Candès and Terence Tao, sometimes with Justin Romberg, and in a parallel development by Dave Donoho. They showed how to create matrices that not only recover k-sparse vectors, but do it stably. To do this one has to pay a little: one has to increase the number of questions, and we will see how that plays out. I am going to talk about the Candès-Tao development, which I think is very elegant and very much to the point. They make two important contributions. First, they tell us what the good matrices are, or rather they give a sufficient condition for a matrix to be good. Second, they handle the problem of decoding: they show how we can do fast decoding. How does this play out? The property they introduce for the matrices is the following; remember the submatrices phi_T and the Gramian matrices. We say the matrix satisfies the restricted isometry property of order k if, whenever we pick k column indices, the eigenvalues of this Gramian matrix lie between 1 minus delta and 1 plus delta, where delta is between 0 and 1. The important thing is that the eigenvalues stay away from 0 and stay bounded, so the Gramian matrix is not only invertible but nicely conditioned. That is the new ingredient. You could restate this property as follows: whenever you apply phi to a vector in sigma_k, the resulting vector has norm roughly the same as the norm of x. That is where the name restricted isometry comes from: if we could take delta equal to 0 in this formulation it would mean we have an isometry, but we give up a little bit, and we have to give up a little bit. Now, what about the decoding? They suggest decoding by L1 minimization, in the following way: given our vector y, look at all x which map to y, and pick the one with minimum L1 norm. This has a very simple geometric description: y determines this hyperplane, and you take the ball in little l1, start it very small around the origin, and blow it up like a balloon until it hits the hyperplane; when it hits the hyperplane, that is the solution, and generically it hits at a unique point. An important feature of L1 minimization is that it can be solved by linear programming, so we have only a modest investment in the decoding; for example, we always know we can decode with on the order of capital N cubed operations. And here is the main result of Candès and Tao. Suppose you have a matrix phi that satisfies the restricted isometry property, now of order 3k, whereas before I had 2k; probably one could replace 3k by 2k if one studied this enough.
Then, once you have a matrix with this property and you use L1 decoding, you capture every x in sigma_k exactly: every sparse vector is captured exactly, and moreover the decoding is stable. So now we have a new matrix problem: find matrices phi that satisfy this restricted isometry property. How do we build such good matrices? Well, I claim that for given n and N we can construct matrices satisfying the restricted isometry property of order k for every k up to n divided by a logarithm, essentially log(N/n). All of a sudden you see a logarithm appear which was not there before; this is the price we pay for stability. You can actually show that you cannot discard this logarithm: if you want the restricted isometry property, you are stuck with it. So we give up a little: before, we could take k equal to n over 2; now we cannot take k quite that large, but we only sacrifice this logarithm. The question I want to discuss in the next few minutes is: how can we construct such a phi? We are back to our problem of creating a matrix with a lot of columns, vectors in R^n, many of them; and now we want not merely that any 2k (or k) of them are linearly independent, but that they are very far from being dependent, almost orthogonal in some sense. Let me mention three ways to do this; all of these constructions are probabilistic. The first way: look at the unit sphere in R^n and choose the columns by picking independently at random according to the uniform distribution. This would work. Another way is to create a stochastic matrix: take one of your favorite distributions, say the Gaussian distribution with mean zero and variance one over n, make a draw of this distribution and stick the number you get into the first entry of the matrix, and then repeat independently for every entry to fill out the matrix. You could do the same thing with a Bernoulli distribution, with plus or minus ones or with zeros and ones. Any one of these constructions will work and will give you, with high probability, a matrix phi that works: if you carry out such a realization, you can prove that with overwhelming probability the resulting matrix satisfies the RIP for the range of k that I advertised, and I tell you this is the best range you can possibly have. So we have a way to construct these matrices. I want to make one point very clear about where probability comes in, because sometimes people get confused on this point. We use probability to construct the matrix phi, to prove the existence of phi. What we know is that if we do this construction, almost every matrix we get will have the property we want. Once we have a matrix in our hands, the algorithm is constructed: there is nothing probabilistic in the algorithm; we apply phi, we apply L1 minimization, and so on. Probability is only used to construct the matrix.
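To make this pipeline concrete, here is a minimal numerical sketch (my own illustration, not code from the talk): a random Gaussian phi with variance-1/n entries as the encoder, and L1 minimization as the decoder, with the L1 problem recast as a linear program in the standard way by splitting x into positive and negative parts. The dimensions N, n, k and the use of scipy's linprog are my choices.

```python
# Hedged illustration of compressed sensing: Gaussian encoder + l1 decoder.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, n, k = 200, 60, 8                 # signal length, number of measurements, sparsity

# Stochastic encoder: i.i.d. Gaussian entries with mean 0 and variance 1/n.
Phi = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, N))

# A k-sparse signal x and its n linear measurements y = Phi x.
x = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x[support] = rng.normal(size=k)
y = Phi @ x

# l1 decoder: write x = u - v with u, v >= 0 and solve
#   minimize 1^T (u + v)   subject to   Phi (u - v) = y,  u, v >= 0.
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:N] - res.x[N:]

print("recovery error:", np.linalg.norm(x_hat - x))
```

With n comfortably above k log(N/k), the printed error sits at the level of the LP solver's tolerance, matching the exact-recovery statement above; shrinking n or growing k eventually breaks the recovery, which is one way to see the logarithmic threshold empirically.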
Because the RIP is so important, it is of interest to have nice methods for verifying it. Typically one would think of this as an eigenvalue problem and approach it through eigenvalues. What I want to point out here is that there is actually an elementary route to verify the RIP in the settings I just described, for example in the Gaussian or Bernoulli case. Suppose I generate a stochastic matrix phi_omega by taking draws of some random variable r and filling out the matrix in this way. The first thing to observe is that if you write out phi_omega of x, expand it and take norms, you see that the expectation of the squared norm of this vector is the norm of x squared. I am going to try to extract which probability distributions work for creating matrices that satisfy the RIP. This identity holds for any distribution, but we need more: we need that, for any given x, the quantity phi_omega of x is with high probability close to this mean. That is, we need a concentration of measure inequality, namely that for any given x we have this estimate on the probability that the two quantities differ by delta times the norm of x. This concentration of measure inequality is satisfied for the Gaussian, the Bernoulli and many other distributions, and it is usually an easy matter to show that the property holds. And I claim that whenever you have it, the matrix created by this random process will satisfy the RIP for the range of k that you want. Maybe the statement itself is not the main point; what I want to emphasize is the proof, because there are other ways, as I mentioned through eigenvalue analysis, to prove this result, but it has a very simple analytic proof that I want to quickly sketch. How would I prove it? I need to verify the restricted isometry property on sigma_k, which means that the norm of phi applied to x is comparable to the norm of x for all x in sigma_k. That is what I need. First I cover the unit sphere of sigma_k to some tolerance, say delta over 4. With this covering, this net of points, for each point in the net the concentration of measure inequality tells me the estimate holds with high probability, and since it is with high probability I can apply a union bound and get it uniformly over all points of the net; of course one has to be careful about how many points are thrown in, but one does this. Finally, one needs to extend the estimate to every vector in sigma_k, not just the points of the cover, and that is quite trivial. For the upper estimate, for example, approximate x by a point q from the net: then you have a trivial estimate for the norm of phi(x) in two pieces; the second piece is under control because of the net estimate, and for the first, any matrix on a finite-dimensional space is bounded, so there is some bound M. Of course I do not get the best result this way with M, but then I can bootstrap: once I have a bound for M, I can repeat the process, and doing so you arrive at this theorem. Okay. Now, at the beginning I talked about the basis being known to me. What can we do if the basis is not known? People in this subject frequently talk about the concept of universality: these random constructions work for any basis. But I caution you to be a little careful on this point; let us make it precise.
To make this precise: if I start with any collection of bases, and the number of bases in the collection is controlled by e to the cn, where n is the number of rows of the matrix, then the same probabilistic argument I gave can be used to prove the existence of one matrix phi that satisfies the RIP with respect to every one of these bases simultaneously, for the full range of k that you would like. This is a sense in which you can claim universality: if these bases are known to you in advance, then a randomly drawn matrix will, with high probability, uniformly encode well the signals that are sparse with respect to any one of them. The problem is that we would need to know the basis in order to decode. So this is good for encoding, but not for decoding. Some people say: I will encode now and decode twenty years later, I will put it in a time capsule; I do not know too many applications in which people want to wait twenty years. Okay, that was my first example of optimality, and it was the simplest. Now I want to go to something more general. We know that real-world signals are not typically going to be sparse in the strict sense I mentioned: they will not be supported on only a small number of entries. They will typically have a lot of entries, but in general most of these entries are small. Our idea, as we began, was that we view signals as being approximated well by sparse signals: only a few components are essential to capture them, and we only need to find those components. To make this a meaningful discussion, I want to talk about how a signal x can be approximated by sparse signals. So I introduce this measure of error, sigma_k: you choose whatever norm you want to measure error in, for this talk just take the least squares norm, L2, and then, given a vector x, we look at how well it can be approximated by vectors of support size k. A vector is nice if a vector of support 100, say, approximates it well, and we think real-world signals are nice. Mathematically, we are interested in classes of signals for which we know they can be approximated well, and a typical way to get such classes is the following: take the unit ball of l_p^N. If you measure the error in l_q^N, and I suggest you take q = 2 here, then any vector in the unit ball of l_p^N can be approximated by k terms to accuracy k to the power 1/q minus 1/p. Look at this for a minute: small p is better than large p; small p means that the components of x, ordered by size, die off very fast, and the decay k^{1/q - 1/p} can be very fast if p is very small. For example, if q is 2 and you take a vector in the unit ball of l_1, then it can be approximated by k terms to accuracy 1 over the square root of k. This is easy to see, and I will not stop to prove it for you.
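The quantity sigma_k just introduced is easy to compute: sort the entries by magnitude, keep the k largest, and measure what is left in the chosen norm. A small hedged sketch (my own numerics, not from the talk), checking the 1 over square-root-of-k rate claimed above for a vector of unit l1 norm:

```python
# Best k-term approximation error sigma_k(x) in the l_q norm.
import numpy as np

def sigma_k(x, k, q=2.0):
    """l_q norm of x after removing its k largest-magnitude entries."""
    tail = np.sort(np.abs(x))[: len(x) - k]    # sorted ascending; drop the k largest
    return np.sum(tail ** q) ** (1.0 / q)

N = 10_000
x = 1.0 / np.arange(1, N + 1)                  # slowly decaying entries
x /= np.sum(np.abs(x))                         # rescale to unit l1 norm
for k in (10, 100, 1000):
    # sigma_k in l2 stays below the 1/sqrt(k) bound for the unit l1 ball
    print(k, sigma_k(x, k), k ** -0.5)
```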
This is the best performance we can achieve by systems of size n. That is, the matrix has dimensions little n by capital N. We take such a matrix and a decoder, we apply this to a given X, and we achieve a certain error. And now we look at the supremum of the errors over all X in the class K. So this is the usual min-max way of doing things. We look at the worst error and then we look at the system, the phi and delta, that give us the best worst error. So we take the infimum over all of these. And now I would say a system is near optimal if I have this kind of an inequality with some absolute constant C0. If I had C0 equal to 1, I would have optimality. There would be nothing better, but usually you have to give up a little something. So it is interesting to understand, for a given class of signals, how does this En behave? What's the target? What's the best we could possibly do? That's the usefulness of En. It tells us the best we could do. And the story is that the behavior of En(K) for classical sets K in R^N is known. It was not only known, but it was done a long time ago, in the 1970s and early 80s, in the context of finite dimensional geometry or the geometry of Banach spaces. The main players were Kashin and Gluskin. And I just mention one result. For example, if I look at the unit ball in l1^N and ask what's the best encoding-decoding I can do with that when I measure the error in l2^N, then here are the estimates. Notice the estimates from above and below are of the same order. The only things that differ are these constants, C0 and C1. The estimate is basically 1 over square root of n. 1 over square root of n would be what we could do if we chose the n best terms of the vector. And we pay a little price here, this log of N over little n again. Remember we had that before. And this price we know is necessary to pay, because we have a lower estimate. Now I'd like to say why this was not a solution to the compressed sensing problem back in the 70s. The thing that these players did not look at was practical decoding, or how you can implement this encoder. In fact, they never touched the problem of decoding. And a contribution of Candès and Tao and of Donoho is that they show that if you have a matrix with the restricted isometry property and you add L1 minimization, then it will give near optimal performance. It'll achieve this type of performance. Now not for every Q and P, but for certain Q and P. For example, if you want to measure error in L2, then for every P less than or equal to 1 it'll be optimal. So they give a practical scheme, and once you have this scheme, now you're happy because you know that you're doing the best you possibly can. Let me say just a couple words about this because there's, you know, I wouldn't say a controversy, but I talk to people in functional analysis and they say, look, these guys in compressed sensing, they're just doing our work over again and so on. This is not a fair statement because, as I mentioned to you, they did not discuss the practicalities of the decoding. So their results are useful and interesting but didn't discuss practical decoding. What is most useful about their results is the lower bound, it turns out, because it tells us we can't do any better. But it doesn't give us real practical schemes to do the encoding and decoding. So I think both camps have made a great contribution. Now I want to go to my third and last way of measuring optimality.
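[Editor's note: a hedged Python sketch, not from the talk, of the L1-minimization decoder mentioned above, written as a linear program and solved with scipy's linprog. The problem sizes and the random test signal are arbitrary; this is a toy decoder, not an optimized solver.]

```python
import numpy as np
from scipy.optimize import linprog

def l1_decode(Phi, y):
    """Solve min ||x||_1 subject to Phi x = y as a linear program."""
    n, N = Phi.shape
    # Variables z = [x, t] with |x_i| <= t_i; minimize sum(t).
    c = np.concatenate([np.zeros(N), np.ones(N)])
    I = np.eye(N)
    A_ub = np.block([[I, -I], [-I, -I]])        #  x - t <= 0  and  -x - t <= 0
    b_ub = np.zeros(2 * N)
    A_eq = np.hstack([Phi, np.zeros((n, N))])   #  Phi x = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * N + [(0, None)] * N,
                  method="highs")
    return res.x[:N]

rng = np.random.default_rng(1)
n, N, k = 64, 256, 8
Phi = rng.normal(size=(n, N)) / np.sqrt(n)
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.normal(size=k)
x_hat = l1_decode(Phi, Phi @ x)
print("recovery error:", np.linalg.norm(x - x_hat))   # typically ~1e-10
```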
And it, to me, if you were in the States, you would call this the Cadillac of all ways of measuring optimality. In Germany it would be, I guess, the Mercedes-Benz. I don't know what it is in Spain. But anyway, I want to have optimality in a very strong sense, and here's the sense. I'll say that this encoding-decoding scheme is instance-optimal of order K if the following holds for every vector I choose. I don't need it to be in any class or anything like that. Take any vector. When I encode and decode, it performs like K-term approximation. Well, this would be terrific if you could do this for large values of K. For example, if you have this, you can return to your first problem, and if you take a vector which is K-sparse, this will be zero. So you'll encode, and you'll get it back exactly. It will also include all the results on optimality for classes. So this to me is like, you know, if you could get this, it's like a dream, but if you could get this, this would be the best possible result. And the problem is, let's suppose you fix the size of the problem, the size of the system you're going to use, the number of rows you're going to use in the matrix. Then the question is, how big can you take K and achieve this instance-optimality? And with Albert Cohen and Wolfgang Dahmen, we've solved this problem if you want to measure the error in any LQ space as long as Q is between one and two. And I'm just going to mention a couple of these results. So there are two things to say. One, the good news. The good news is that you have instance-optimality in L1, and you have it for K in the range that is the most you could expect, namely K up to the order of n over log of N over n. Moreover, you can build this system and use L1 minimization as a decoder, so you can actually build a stable system and have this instance-optimality. And this result, although we prove it in our paper, essentially follows from the work of Candès and Tao. Now that was the good news. Now here's the bad news. The bad news is that if you ask for instance-optimality in L2, you want to measure error in a least squares sense, in the L2 sense, then the bad news is that you can't even get instance-optimality of order one unless you pick a number of measurements that's roughly the same as the length of the vector. Well, we don't want to do that. We're trying to have little n small. We don't want little n big. So this says that instance-optimality in L2 doesn't seem to be a viable concept. Now what happens in LQ, for Q between 1 and 2, is that you sort of interpolate from one to the other. Near 1 you have almost the same as L1, and then as you move to Q equal 2, measuring the error in LQ norms, the situation deteriorates until, when you get to L2, it completely falls apart. Well, there's some recovery. Although I said that instance-optimality in L2 is not possible, there is a little bit of recovery. And I think this is an interesting fact, that you can recover some. I want to change the game a little bit. Well, the game I've played so far has been: you create once and for all a matrix phi, you create once and for all a decoder, and I use that system over and over and over again. Now I want to change the game. What I want to allow is a different game. I want to think that I have a bunch of encoding schemes. For example, I have stochastic matrices, and I can grab any one of them that I want. And I start with a vector x, and I roll the dice and grab one of these stochastic matrices, and I use it for my encoding.
And then I decode it in some way. And now I ask, okay, I know that I'm not going to be able to do for every x with certainty get instance-optimality, but maybe I can do it with high probability. And this turns out to be the case. And to describe this, I need to tell you what types of matrices, or what do you need about the stochastic matrices to make this go. So suppose I have a collection of random matrices. I want this collection to have two properties. The first is I want it to have the restricted isometry property of order k with high probability. Well we know that. We know how to create such matrices. That when we make a draw, the resulting matrix will have a restricted isometry property with high probability. So I want this. And the second thing I want is I want it to be, remember the restricted isometry property applied to vectors in sigma k. Here I want to have a bound in this property with high probability. Namely that for every vector x in Rn, I want with high probability that a random draw of this matrix when I look at phi omega applied to x, its norm is controlled by, let's say, two times the norm of x. Now we know in general, given an x, in general we can't, or let's say given the matrix phi of omega, when I start putting in different x's, for some x's this is going to be big. The norm is going to be big. But we also know that the probability that it's going to be big is small. Remember those concentration and measure inequalities? They said the probability of this was small. So it turns out you can have matrices that satisfy this property, the Gaussian, the Bernoulli. They all satisfy these two properties. And now here's the theorem. The theorem says suppose you have a collection of matrices and they satisfy those two properties, then with high probability you're going to have instance optimality. Namely take your x, draw your matrix at random, apply your encoding and decoding, then with high probability you're going to have this estimate. So should we be happy with results in high probability? I think so. I mean, if you know that you can do something with probability 10 to the minus 10, that's probably more certain than anything else you'll ever do in your life. So I think results in high probability are interesting. Okay. And the range of k, again, is the same range as the best that we could expect. An unfortunate part of this is that this presently stands, the way we prove this theorem is that we use this least squares minimization, this thing that we can't really do. I mean, it's this combinatorial search over all these spaces. But I believe that we probably can use the L1 minimization. We just haven't proven that. Okay. I'm done with the content of the talk, but I want to make now some comments about, okay, where are we? What are we trying to do? Where are we? So I'm going to talk about three subjects here, amplify a little bit, sort of to give you an idea of where we sit. So let's talk about optimum matrices. My whole thing was to what's optimality and to build optimal matrices. The optimal matrices that we've built, whenever we ask for optimality in terms of performance of sparse vectors and stability in addition, the only way we've known to do this is through probabilistic methods. So it's interesting to ask, could we do this through deterministic constructions? Now I want to be a little careful on this because this needs some thought of what do you mean by deterministic? Because let's think about our Bernoulli matrices. 
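[Editor's note: a small Monte Carlo sketch in Python, not from the talk, of the second property described above: for a fixed signal x, a fresh random draw of the matrix satisfies the norm bound ||Phi x|| <= 2 ||x|| with overwhelming probability. The sizes are arbitrary; for Gaussian matrices the estimated failure rate should come out as zero in practice.]

```python
import numpy as np

rng = np.random.default_rng(7)
n, N, trials = 128, 1024, 5000

# Fix an arbitrary unit-norm signal and estimate, over fresh random draws of
# the matrix, how often the boundedness property ||Phi x|| <= 2 ||x|| fails.
x = rng.normal(size=N)
x /= np.linalg.norm(x)

fails = 0
for _ in range(trials):
    Phi = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, N))
    if np.linalg.norm(Phi @ x) > 2.0:
        fails += 1

print("estimated failure probability:", fails / trials)   # essentially zero
```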
We know we can create matrices with ones and zeros that are optimal. We prove that by a probability technique. What we need is to have this restricted isometry property. On a matrix, I can check the restricted isometry property — not realistically check it, it would take forever to check it — but if you say deterministic, yeah, if we had enough people working all the time, we could check it. So I could start writing down all zero-one matrices, there are only a finite number of them, check for the restricted isometry property until I get one, and I've got a matrix that works, so I have a deterministic construction. But this isn't fair. To me, this isn't a deterministic construction. So I have to be careful about what I mean by a deterministic construction. And maybe what we need to do is make this a precise statement, what we mean by deterministic. Maybe it means that you have an algorithm that in polynomial time will assemble a matrix that has the properties you want. At present, we have no deterministic construction that gives the type of performance that we want, that gives the high performance. That is, we have to take very few samples. Remember, we could get by, if the sparsity was K, with the number of samples n just a little bit bigger than K. What you can do, and this is a very poor man's version, I mean it's very easy to do this, is you could prove that, using matrices from coding or finite fields, you could construct matrices, really construct them, that would give sparsity performance for K less than square root of n. Well, that's far from n. So we're a long way away from having good performance there. Okay, the second topic I want to say a few words about is computational issues. There were three ingredients that came up when we started discussing optimality. The first was that we wanted optimality in terms of the sparsity level; we wanted to be able to handle signals as sparse as possible. Okay, a second one that came up was the number of computations you needed once you implemented your algorithm. And the third one was stability. And these things play against one another. When you try to push the sparsity level up, then you have problems with computations and you have problems with stability. And it has not been sorted out yet exactly what the story is here. What are the best results? So if you fix the sparsity level, are you forced to spend so much time decoding? The best we know right now is L1 minimization decoding. This is linear programming, could be order N cubed. Can we do better? This is not understood. There are things that are being done, which unfortunately I didn't have time to mention, in theoretical computer science: Gilbert, Muthukrishnan, Strauss, many other people. I mean, this is a problem that really fits well in theoretical computer science. I'm sure if there are any people from that area in the audience, they are saying, well, we do this all the time. And it's sort of true. And they have results, but they don't come close to answering this triad of questions, how these play off. They introduce other aspects of computation into the picture. Finally, we're really interested in building practical schemes, right? I mean, we're not just doing this for the fun of it, although it's nice, interesting mathematics. We would like to handle analog signals.
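[Editor's note: one standard route to deterministic guarantees, not spelled out in the talk, goes through the mutual coherence of the matrix: with unit-norm columns and coherence mu, sparse recovery is guaranteed roughly for k up to about 1/(2 mu), and since mu cannot be much smaller than 1/sqrt(n) (the Welch bound), this line of reasoning caps out around sqrt(n) — one way to see the ceiling mentioned above. A minimal Python sketch; the random sign matrix is just a stand-in for a concretely constructed one.]

```python
import numpy as np

def coherence(Phi):
    # Mutual coherence: largest absolute inner product between distinct,
    # normalized columns.
    Q = Phi / np.linalg.norm(Phi, axis=0)
    G = np.abs(Q.T @ Q)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(2)
n, N = 256, 1024
Phi = rng.choice([-1.0, 1.0], size=(n, N))
mu = coherence(Phi)
print("coherence:", mu)
print("coherence-based sparsity guarantee ~", int(0.5 * (1 + 1 / mu)))
print("compare sqrt(n) =", int(np.sqrt(n)))   # far below the RIP-based n/log(N/n) level
```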
Our original problem was we have these cell phone conversations or our, our radar chirps out there and we want to be able to find them and grab them. That's the whole purpose of compressed sensing. So we really want analog, we want to handle analog signals. So how do we go from discrete to analog? This is not completely clear. The, the waters are very muddy here and it's not sure that we're going to really be able to do this in the way that we would like. And moreover, I want to say that even if you mathematically could conceive of a system that would work in analog, you have to keep in mind that we want to build this system. So we're going to put it into hardware. We're going to, when, when you start putting things into a circuit, you know, all of a sudden it's not just like even numerical computation. You have more serious problems to deal with. So to get the big bang that we're hoping for in signal processing is not there yet. But I do want to mention that there are already impressive applications of compressed sensing, in particular, Emmanuel Candez in tomography. And I'd like to recommend that you go to Emmanuel's talk that is taking place here, which takes place last Friday. Okay. One last thought, if you're at all interested in compressed sensing or these ideas, there is a website at Rice in the Electrical and Computing Engineering and it has fairly up to date all the papers dealing with compressed sensing. There's surely more than 100 of them now and it's an interesting area and if you want to, you can easily get into it from that perspective. Thank you very much for your attention.
A large portion of computation is concerned with approximating a function u. Typically, there are many ways to proceed with such an approximation leading to a variety of algorithms. We address the question of how we should evaluate such algorithms and compare them. In particular, when can we say that a particular algorithm is optimal or near optimal? We shall base our analysis on the approximation error that is achieved with a given (computational or information) budget n. We shall see that the formulation of optimal algorithms depends to a large extent on the context of the problem. For example, numerically approximating the solution to a PDE is different from approximating a signal or image (for the purposes of compression).
10.5446/15948 (DOI)
I think we should start. So the organizers of the Congress asked me to remind you that there is an electronic version of the program, which is updated continuously. In particular, there is a schedule for the Fields Medalist talks and the Nevanlinna Prize talk. They also mentioned that some of the short communications talks are also updated there. And the printed version of the program will appear probably after this lecture, somewhere around here. You can find it even in the printed version. But please look at the electronic version. Now, I'm very honored to announce the next talk, by a professor at the Courant Institute of Mathematical Sciences, Percy Deift. Percy is an amazing mathematician. He keeps getting surprising results, and very diverse results, in mathematics. Now, he received an MS as a chemical engineer, and then very soon he realized that it was not his cup of tea. So he decided to switch to mathematics and studied mathematics and mathematical physics. And his thesis was devoted to scattering theory. Then he switched to the study of asymptotic solutions of nonlinear equations. He studied inverse problems. He then made links with combinatorics and random matrices. And then lately he has studied problems of universality. Now, there is something which is behind many of his results, and in particular, there is a method, a method coming from the Riemann-Hilbert problem, which he applies in an amazing and very diverse way in his studies. Now, Percy has a very warm personality, and he is very excited about mathematics in general. And this excitement attracts a lot of young people, and he has many, many students who eventually became his colleagues. And what more can you desire from this kind of thing? Now, today he will talk — the title of his talk is Universality for Mathematical and Physical Systems. You are very much welcome. Okay, thank you. First of all, I would like to thank the program committee for the opportunity to speak here. I'm very appreciative. And also I'd like to compliment the Spanish Mathematical Society for the wonderful job they have done in organizing this Congress. So, the title of my talk is Universality for Mathematical and Physical Systems. Here is the outline of my talk. First of all, I'm going to be giving a very general description of universality, some ideas there from physics mainly. And then I want to propose, or speak about, a mathematical model, and the particular model which I want to focus on is random matrix theory. After that, I want to speak about some physical and mathematical systems which will illustrate the ideas behind this talk. Then I want to show how to relate the problems which occur in part C, particularly, to section B, which is random matrix theory. Then I want to say a little bit about what the mathematical methods are behind these results and how they relate to possible future developments. So, to start off, all physical systems in equilibrium are believed to obey, or do obey, the laws of thermodynamics. And the first law of thermodynamics, everybody knows, is the conservation of energy. The second law has many different formulations, and the one I want to mention here works in the following way.
Suppose that we have a heat reservoir at some temperature T1, and suppose that we have a heat sink at a lower temperature T2, and we have some heat engine here in the middle, and you take an amount of heat Q1 from the reservoir, you exhaust an amount of heat Q2 into the sink, and the amount of work which is done by the heat engine is Q1 minus Q2. Now what we are interested in is the efficiency of the conversion of heat into work. So the efficiency is given by W, Q1 minus Q2 divided by Q1. Now the second law makes a statement about the maximum possible value of this efficiency. So the maximum efficiency, which you could obtain presumably by doing the process very slowly, there is no friction issues like that, the maximum efficiency is given by T1 minus T2 upon T1. And nature is so set up that you just can't do any better than this. Now that's on the one hand. And on the other hand there is a very old idea going back to the Greeks that matter is made up out of constituent elements, we call them atoms, and each of these atoms has its own different set of laws of interaction. So it's the juxtaposition of these two points of view, the macroscopic world say of this table here, and the macroscopic world which we imagine that lies underneath it that presents this long ongoing challenge which involves so many people to try to understand the emergence of a macroscopic world out of this microscopic world. So how does one derive these macroscopic laws? Remembering that each of the different constituent elements may have different macroscopic laws of interaction. So the salient feature of this challenge to deduce the macroscopic world from the macroscopic world is that exactly the same laws of thermodynamics emerge independent of the detailed atomic interaction, the same laws emerge. Now in the world of physics this is known broadly speaking as universality, although there is some caveat here because physicists often mean by universality some statements about different critical phenomenon scaling laws, but nevertheless I think this is a good way to describe things. Now of course let me just say along the way that there are certain sub-universality classes which I'll mention again later. For example liquids like water and vinegar, you expect them to obey the Navier-Stokes equation, but if you're looking at some heavy oils you'd expect them not to do that. There'll be some other laws like various lubrication equations. So there are sub-classes which satisfy what we could call sub-universality laws. Now until recently this way of thinking that physicists have not been, these ways have not been common amongst mathematicians. Mathematicians tend to think of problems as being different until proved equal. So each mathematicians think of problems as sui generis, each on its own, unless you can prove some explicit or implicit isomorphism between the two kinds of problems. The idea that broad classes of problems on some scale should look the same without producing some explicit mechanism or isomorphism between them has not been a common idea within mathematics. Nevertheless what I want to speak about today and report today is that this type of universality, some sort of emergence of a macroscopic mathematics for one of a word, seems to be becoming more common. And I want to illustrate this with a variety of examples which I'll get to in a moment. There are mathematical precedents of course for what I am speaking about. We all know the central limit theorem going back to the 18th century. 
We take variables xi which are independent and identically distributed, mean zero, variance one. We add them up, x1 up to xn. We scale them by root n. We ask what's the probability is of the scaled sum to be less than t and that will converge to the normal distribution. So one sees here that each of these variables xi could be completely unrelated to each other. X1 could be the temperature in Madrid. X2 could be the temperature say in Barcelona. X3, the pressure in Milan and so on. But they have no physical relationship. There's no mechanistic relationship. Nevertheless this broad theorem makes an assertion of universality of these systems. Now of course within probability theory this is just the first amongst many such universality results. So that is the context in which I'm now going to present the rest of the talk. So the question is whether these kinds of phenomena which are well known within physics and if you think about it for a moment, if there were not these universalities, what are the laws within physics, there really couldn't be any physical laws at all. So let me begin now with this mathematical model of random matrix theory. Now at this point there are many, many different random matrix models which are of interest. Of course a random matrix is just a matrix, n by n matrix and the entries have some randomness attached to them. They're different models which you can place on them. And we will be interested primarily here in this talk just in two different ensems. So the first ensemble is the Gaussian unitary ensemble which is GUE. Now the elements here in the ensemble are the n by n Hermitian matrices, m equals m star with coordinates m k k j and the probability distribution you put on these matrices is just some kind of renormalized Lebesgue measure. So dm is Lebesgue measure on the diagonal entries, Lebesgue measure on the real part of the off diagonal elements and the upper part of the matrix and this is Lebesgue measure on the imaginary parts of the matrix. Each of them are in this trace m squared. There's just a way of normalizing now the Lebesgue distribution and one upon z in is just a normalization constant. Okay. Now as it were a little bit to get the ball rolling here is that if we replace trace of m squared by trace of v of m, for example the v of m could be m to the fourth. We could replace trace of m squared with trace of m to the fourth. Then you get a general example of a unitary ensemble and there is sitting in the whole structure a universality within this choice. In other words what is true irrespective of which v you choose, the statistical properties of the matrices are going to be independent of that choice. That is a theorem and it's sort of a sub universality result moving along. So the unitary part if people think about it refers to the fact that such a distribution is not on unitary matrices. It's a distribution on Hermitian matrices but the distributions are invariant under unitary conjugation. Now just as the matrices are random and have this distribution here, they're eigenvalues which we write lambda 1, bigger you go to lambda 2, bigger you go to lambda n will become random variables. In particular that's true under GUE. A second ensemble is the Gaussian orthogonal ensemble or GOE. Here that's an ensemble, the elements are n by n real symmetric matrices, m equals m transpose with entries m, r, j. Probability distribution is very similar to the GUE case except now the big measure. 
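[Editor's note: a minimal Python sketch, not from the talk, of drawing a GUE matrix with the exp(-n trace M^2) weight described above and looking at its (random) eigenvalues. The particular normalization below is one common convention; under it the eigenvalues fill out approximately the interval from minus root 2 to root 2 for large n.]

```python
import numpy as np

rng = np.random.default_rng(0)

def gue(n):
    # GUE draw matching the exp(-n tr M^2) weight: complex Gaussian entries,
    # Hermitian symmetrization, then an overall scaling.
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / (2.0 * np.sqrt(2.0 * n))

n = 500
evals = np.linalg.eigvalsh(gue(n))        # real, sorted eigenvalues
print(evals[:3], evals[-3:])              # edges near -sqrt(2) and +sqrt(2)
```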
Everything's real, so it's just dm_kj where k is less than or equal to j. Again you can replace trace of M squared by trace of, say, M to the fourth or M to the sixth or any such polynomial. Again there will be universality results along that way, which will tell you that the interesting statistical quantities are independent of the choice of V. Of course the eigenvalues lambda 1 to lambda n will also become random variables under GOE. So just summarizing a little bit of what I'm up to at this point: although I'm presenting GUE and GOE as models, I could have looked at a much wider class of ensembles and obtained exactly the same results. Now here comes an important point: what do we mean when we say that a system is modeled by random matrix theory? Well, we say it's modeled by random matrix theory if it behaves statistically like the eigenvalues of some large GUE or GOE random matrix. So I have to make this a little more precise. Along the way there's something which is known as the standard procedure. So what you should have in mind is a situation a little bit like the following. A scientist is trying to investigate some phenomenon, and the scientist puts this phenomenon on some slide which he or she then puts into a microscope, and then can do two things. The one thing that can be done is one can center the slide. The other thing that you can do is alter the focus. But once you've done that you're set and you have to look and see what you get. The analog of that is what one means by the standard procedure. So what you have is a set of quantities, little a_k, in the neighborhood of some point A. And you want to see if these quantities a_k look like the eigenvalues of a matrix. So you now imagine you have eigenvalues lambda_k of some matrix in the neighborhood of some energy E. Then what we always do is center. So you move the slide into the middle of the microscope. So you move a_k to a_k minus capital A. You move the eigenvalues to lambda_k minus E. You then scale both of them. And the agreement, what is meant by the standard procedure, is you ensure that the expected number of a_k tildes, the scaled a_k's, per unit interval is the same as the expected number of scaled eigenvalues per unit interval. And in the bulk that's usually taken to be one. So this is the way things operate. Whenever we want to compare one phenomenon, mathematical or physical, with the eigenvalues of a random matrix, we always understand that we've prepared the discussion by following the standard procedure. Now we are interested in two particular statistics for the GUE. And there are similar statistics and formulae for GOE, but I'm not going to write them down. I'm just going to ask you to imagine that they are there. So let theta be some positive number and define the gap probability P_n of theta, which is the probability that a GUE matrix has no eigenvalues in the gap minus theta to theta. So let gamma_n be the appropriate scaling for the standard procedure. Then it's a wonderful result from the sixties of Gaudin and Mehta which showed that, for any positive number y, if you ask what is the probability that there are no eigenvalues in the scaled interval, then it's given by an explicit formula which is the determinant of 1 minus K_y, where K_y is this trace class operator with the so-called sine kernel, acting on L2 from minus y to y.
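[Editor's note: a hedged Python sketch, not from the talk, of evaluating the Gaudin–Mehta gap probability numerically. It discretizes the Fredholm determinant of the sine-kernel operator on (-y, y) with Gauss–Legendre quadrature, in the spirit of Bornemann's Nyström-type method; the number of quadrature nodes is an arbitrary choice.]

```python
import numpy as np

def gap_probability(y, m=60):
    """det(I - K_y) for the sine kernel on (-y, y), via Gauss-Legendre
    quadrature applied to the Fredholm determinant."""
    nodes, weights = np.polynomial.legendre.leggauss(m)
    x = y * nodes                  # map [-1, 1] -> [-y, y]
    w = y * weights
    d = x[:, None] - x[None, :]
    K = np.sinc(d)                 # np.sinc(t) = sin(pi t) / (pi t), sinc(0) = 1
    A = np.sqrt(w)[:, None] * K * np.sqrt(w)[None, :]
    return np.linalg.det(np.eye(m) - A)

for y in (0.1, 0.5, 1.0, 2.0):
    print(y, gap_probability(y))   # decreases from ~1 - 2y toward 0
```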
And what I would ask you to do is perhaps not remember the details of this formula, but that there is such an explicit formula, and it's part of, as it were, the charm and the effectiveness of this whole subject that there are these beautiful formulae which can be evaluated and give you very precise information on the statistical quantities you're looking at. The second statistic that I want to bring to your attention is the statistics of the largest eigenvalue, lambda 1. And what we do again is a similar business: you look at lambda 1 and you center it. The centering must be done by taking away a square root of 2n, and you scale it in some appropriate way, on the scale of n to the minus one-sixth here, and it's a theorem of Tracy and Widom that this distribution, when the size n of the matrices gets large, is given by an explicit formula called the Tracy-Widom distribution, and it has this absolutely wonderful form, which is an exponential basically of a square of a solution, the unique global solution, called the Hastings-McLeod solution, of the Painlevé II equation, which, if you think of cancelling the nonlinear piece, you see looks like the Airy equation, and you choose your solution u to be the one which looks like the Airy function, the classical Airy function, as s goes to plus infinity. Again I don't ask you to remember the exact form, but just that there are explicit formulae for these two basic statistics, the first being the gap probability, the probability that there are no eigenvalues in the scaled gap, and also the probability distribution for the largest eigenvalue of a random matrix. Now one of the most important features, or characteristic features, of GUE or GOE or any of the orthogonal or unitary ensembles is the notion of repulsion, which I will come back to quite a bit later on. As we said, you have these random matrices, you have their eigenvalues, so the eigenvalues are themselves random variables, and you can compute exactly the distribution function for the eigenvalues, and the feature it has is this Vandermonde raised to the power beta. If we are dealing with GOE then beta is 1, if we are dealing with GUE then beta is 2, and in one of these other distributions I am just putting in beta is 4, something known as the Gaussian symplectic ensemble — it is just a remark. Now you see what that is telling you: it is telling you that if two eigenvalues are close, the probability of that event is very small. So what that means is that, naturally speaking, when you are looking at the eigenvalues of a matrix displayed out on a line, they have got a natural repulsion which is built in; the probability of them being close together is small. And this is a key feature of random matrix theory, this notion of repulsion. So now I am up to part C of my talk, where I want to speak about some examples. Now the first example comes from physics, and that is where random matrix theory was first introduced into the theoretical physics world and after that came into the mathematical world; it was introduced by Wigner, and so it is appropriate to begin at this point. So for my first example you should imagine that you are scattering neutrons at some energy E onto some very large nucleus, which could be uranium 238 or thorium 232. Now the picture you are looking at: the first one is for thorium, it is a scattering diagram for thorium, the second one is for uranium; along the x axis is the energy and on the y axis is, loosely speaking, the amount of scattering.
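[Editor's note: a Monte Carlo sketch in Python, not from the talk, of the largest-eigenvalue statement above. It uses one common normalization (joint eigenvalue weight exp of minus the sum of lambda squared), under which the top eigenvalue sits near sqrt(2n) and the combination sqrt(2) n^(1/6) (lambda_max - sqrt(2n)) is approximately Tracy-Widom distributed; the sample mean and standard deviation should land roughly near the known Tracy-Widom values of about -1.77 and 0.90, up to finite-n and sampling error.]

```python
import numpy as np

rng = np.random.default_rng(3)

def gue_lambda_max(n):
    # GUE draw with joint eigenvalue weight exp(-sum lambda_i^2);
    # with this convention the top eigenvalue is near sqrt(2 n).
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    M = (A + A.conj().T) / (2.0 * np.sqrt(2.0))
    return np.linalg.eigvalsh(M)[-1]

n, trials = 200, 300
samples = np.array([np.sqrt(2.0) * n ** (1 / 6) * (gue_lambda_max(n) - np.sqrt(2.0 * n))
                    for _ in range(trials)])
print(samples.mean(), samples.std())   # compare with roughly -1.77 and 0.90
```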
It is a scattering cross section. The feature which I want you to focus on is that there are many many many lines and if I was to expand my x axis you would see there would be hundreds of these so called scattering resonances the meaning of the scattering resonance if I pick an energy which is say at this peak then that neutron at that energy coming in and hitting the thorium nucleus would be mostly reflected but if I pick an energy which is between two peaks this neutron will as it will go through. The details of this are of course not important here the question is how do you proceed to model such a physical situation. The a priori possibility of writing down some Schrodinger type equation and then solving that numerically is clearly it was beyond the computers at this particular period in the 1970's it certainly was it is beyond us now and it is inconceivable that one would actually be able to really put that on a computer and actually find these scattering resonances. Some other way had to be found of making scientific sense of a diagram like this. So the first question is how does one model these resonance peaks and the form of my talk I'm just going to be posing for a while a variety of questions first of all this one from physics and then some questions from mathematics. So the first question how does one model these resonance peaks. The next question is subject which has caught the imagination of many people and it goes back to the work of Montgomery in the early 70's. He was interested in the zeros of the Riemann Zeta function Zeta of S and assuming the Riemann hypothesis Montgomery looked at the non-trivial zeros on the line of half looked and he wrote them in the usual way one half plus i gamma j then he rescaled again he had this standard procedure in what we would now call the standard procedure in the back of his mind he scaled into have mean spacing one in the sense that the number of zeros scaled zeros which are less easy to t upon t goes to one as t gets large then for any a less than b he computed the two point correlation function for the gamma j till this one hasn't got a look at the details of the correlation function but Lucy speaking it's telling you when gamma j one till then gamma j two till they're all close together he then showed module of certain technical restrictions if you took this correlation function for the zeros of the Riemann Zeta function rescaled on the line of half and you divided by n and you took this limit this limit would exist and was given by a certain explicit formula the question is my second question is what formula did Montgomery obtain for our a b now the third problem I want to speak about comes from combinatorics and it's a particular card game and you play the game in the following way you have a deck of n cards which for convenience your number from one up to n you shuffle the deck and then you take the top card and you put the card face up on the table to my left take the next card if that card is less than the card on the table I put it on top if it's bigger I make a second pile take the third card if it's less than either of these two cards I put it on top and I have the agreement that if it's less than both of them I put it as far to the left as I can if it's bigger than both I make a third pile and so on until I've dealt out the whole pack and the question which one asks is how many piles do you get so mathematically of course a shuffle is just a choice of a permutation pi Q in of pi is the number of piles you get after you have 
played this game perhaps a more interesting version as you're in a bar late at night you've got your deck of cards a question you're betting on is how big a table do you need to play this game so let me give an example of how it works suppose we have six cards we shuffle them we obtain a permutation pi and we get the permutation 341562 so three is my top card four is underneath it 1562 so I start the game my top card is three I put it down my next card is four it's bigger than three so I put it on the right on my right then I get a one and one is less than both three and four and my rule is to put it as far to the left as I can I then bring it down to five five is now bigger than the top card one and the top card four so put it up here similarly six goes down finally I have two two is less than four five and six bigger than one it goes over the four because of my rule of going as far to the left as I can so the number of piles I get Q six pi is equal to four one then equips SN with uniform measure and our third question is how does Q in of pi vary statistically as N gets large? So here is a problem from transportation theory so it's a problem about the buses in the city Cuerno Vaca in Mexico now city is about a half a million people they certainly have a bus system but they don't have a central transportation authority the end result is that there is no bus schedule so what happens is that you get this typical Poisson like phenomenon that you can be standing at a bus stop and there will be big weights between one bus and next or a lot of buses could come and they could be bunching now the buses are owned by typically individual operators so they were facing a situation where they would come to a bus stop and the bus was already there loading up and they had missed their chance of any customers and they would then have to go on to the next stop so they were losing a lot of money so they asked whether they could do anything about this and they came up with a very ingenious scheme which I've since learned is rather common in a lot of many places in Latin America so what they did is they hired observers so you imagine there are these bus routes going through Cuerno Vaca and they would post these observers at strategic points along the routes and what these observers would do is they would take note of when buses passed them by and then when the next guy came along they would sell this information to that bus driver and say look a bus just came by you should slow down a bit or a bus hasn't been in a while should speed up a bit and there's some marvelous pictures you can see on the web of these guys signalling with three fingers up or two fingers it's very nice to see. The end result of this is that they have a pretty steady and reliable bus service and I've spoken to people from Cuerno Vaca it's a well known thing and they're very happy with it. Our interest in it is that recently two Czech physicists Krabala and Sheba went down to Mexico and began to investigate this phenomenon. They took data on one of the bus routes route number four for about a period of a month they collected a large amount of data and our fourth question is what did they find? So the next question is a model a statistical mechanical model or a statistical model due to Michael Fisher and it's called it's one of many walker models. So suppose we have walkers located on the ladder Z initially at position 012 and they walk according to the following rule. At each integer time K precisely one walker makes a step to the left. 
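[Editor's note: a short Python implementation, not from the talk, of the patience sorting game just described. Keeping the pile tops sorted lets each card be placed with a binary search; the function reproduces the worked example (4 piles for 341562), and for a random shuffle of N cards the number of piles concentrates around 2 sqrt(N), which is the centering used later in the talk.]

```python
import bisect
import numpy as np

def piles(perm):
    """Number of piles under the greedy 'as far left as possible' rule.
    Pile tops are kept in increasing order; each card replaces the top of the
    leftmost pile whose top is larger, or starts a new pile on the right."""
    tops = []
    for card in perm:
        i = bisect.bisect_left(tops, card)
        if i == len(tops):
            tops.append(card)
        else:
            tops[i] = card
    return len(tops)

print(piles([3, 4, 1, 5, 6, 2]))          # 4, as in the example in the talk

rng = np.random.default_rng(4)
N = 10_000
q = [piles(rng.permutation(N) + 1) for _ in range(20)]
print(np.mean(q), 2 * np.sqrt(N))         # concentrates around 2 sqrt(N)
```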
I'll illustrate this with an example shortly. No two walkers can occupy the same site — this is what's known as, Michael Fisher called these, vicious walkers — and thirdly, the walker that moves at time K is chosen randomly. So how does that work out in an example? We imagine at time zero we have the walkers at 0, 1, 2, 3, 4 and so on. At time one the move is forced: the person at zero makes a step to my left. Then at time two there are two people who could possibly move, this one or that one, the one at one or the one at minus one. Let's just suppose that the one at one takes a step to the left. Then at time three there are again two people that could move, the one which is at two and the one which is at minus one. Let's suppose the one at minus one moves. Now there are three possible people who could move; let's suppose the one at two moves, and so on. The question we're interested in is: let Dn be the distance which is moved by the zero particle. Here, for this particular example, D4 would just be two. So our fifth question is how does Dn behave statistically as n becomes large? Now this next problem is a tiling problem with connections to statistical mechanics, and it's a domino tiling problem. So we imagine we're looking at a tilted square, turned 45 degrees, and we're tiling with dominoes which are of size one by two. Here is a particular choice of tiling in a tilted square of size n plus one equals four. The way one counts is: here is the origin, there is the origin, you count one, two, three, that would be n, and n plus one gives you four. So this is one particular tiling. The rule is that the tiles must stay completely within the square. It's a non-trivial theorem of Propp and his collaborators that the number of such tilings is two to the n times n plus one over two. Now what we assume is that all such tilings are equally likely, and our sixth question is: what does a typical tiling look like as n gets large? This problem is called the Aztec diamond because if you just focus on the upper part here and you look at the shape of the tiling, it looks like one of those Mexican pyramids. The final problem is a problem which is familiar to, I think, most of us here. It's the airline boarding problem, and the question here is how long does it take to board an airplane? This is a problem of great interest to the airlines because every extra minute they spend on the ground is lost money. Now I'm going to describe this model, which is due to Eitan Bachmat and his collaborators. He has now a much more sophisticated model which makes contact with Lorentzian geometry. It's a very interesting analysis, but I'm just going to give his very simplest model, and it contains the main features of his analysis. Let me say again this model can be made much more realistic; I'm not going to go into that. So the model is that you're looking at a very small plane and there is one seat per row. Passengers are very thin, for reasons that will become clear. And secondly, the passengers move very quickly. The main time, the unit of time that is blocking us as we board, is the time it takes for somebody to come in with their baggage, turn around, open up the bin, put their luggage in, close the bin and sit down. That is one unit of time; compared to that time all other actions are very fast. So how would such a boarding look? I'll give an example. Okay.
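[Editor's note: a Python sketch, not from the talk, simulating the vicious-walker dynamics just described, under the reading that at each step the moving walker is chosen uniformly among those whose left-hand site is empty. Only the first n+1 walkers are kept, since the others cannot move within n steps.]

```python
import numpy as np

rng = np.random.default_rng(5)

def D(n):
    """Distance moved by the leftmost ('zero') walker after n steps."""
    pos = list(range(n + 1))          # walkers 0..n, initially at 0, 1, ..., n
    occupied = set(pos)
    for _ in range(n):
        movable = [i for i, p in enumerate(pos) if (p - 1) not in occupied]
        i = movable[rng.integers(len(movable))]   # pick one movable walker
        occupied.remove(pos[i])
        pos[i] -= 1
        occupied.add(pos[i])
    return -pos[0]

samples = [D(200) for _ in range(100)]
print(np.mean(samples), np.std(samples))
```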
So imagine that there are six passengers and these passengers are in the waiting room and then the steward says okay we are ready for boarding and people line up at the gate. And suppose they line up in the order 341562. Now these numbers refer to the ticket that the person has. So person with ticket number four sits in seat number four and so on. So they line up at the gate in this order. Three is closest to the gate, four is right behind and so on. So they now file into the airplane. How do they file in? Well three can go to his seat but then four is blocked and cannot and must wait until three puts up the bags. The person in seat number one can go to that seat but then five, six and two must wait behind and they are blocked. So after one unit of time, one and three sit down and now four, five, six and two are free to move on. Four goes to the seat, five and six are blocked but two can go to his seat. Then four and two put their bags up after one unit of time. They sit down then five can go, five takes one unit of time, finally six can get to seat number six and we see that this process, this model process takes four units of time and the question which we are asking here is assuming that passengers line up randomly, how long does it take to board such an aircraft? So those are the seven questions and now I want to start off by, so let me keep this over here. Now the remarkable fact is that although all these problems come from extremely different areas of science, mathematics, physics, applied mathematics, all these systems are modeled statistically by random matrix theory. So I recall that to say something as modeled by random matrix theory we have to go through the standard procedure and compare the statistics of these different random quantities with random eigenvalues. So the first problem which I remind you is the scattering problem, the neutrons of these heavy nuclei. The scattering resonances after the standard procedure, the probability that there are no resonances in an interval minus y to y is given either by this formula which I ask you to recall, determined of one minus kky which is the asymptotic gap probability for GUE introduced above or its GOE analog and you get GUE or GOE depending on some underlying symmetry conditions. So in some very remarkable way the neutrons are behaving like the eigenvalues of a random matrix. The next result was about the zeros, the rescale zeros of the Riemann Zeder function so what did Montgomery found after some technicalities found that the limiting two point function RAB for the zeros has this formula, explicit formula integral A to B of one minus sine upon pi R squared. Now as noted by Dyson in the famous story which I will not repeat, this is precisely the limiting two point correlation function for the eigenvalues of a random GUE matrix. Now this as a basic idea has been taken up by many people working in number theory, Rudnik, Sarnik, Katz, Keating, many, many people but it sort of acts, these two examples now sort of set out the bookends of what we are talking about. On the one hand we are talking about this very explicit physical experiment, on the other hand we are speaking about this very pure mathematical object which are the zeros of the Riemann Zeder function and somehow there is a commonality of description between them. So now what lies between these two extremes? 
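[Editor's note: a Python sketch, not from the talk, of the toy boarding model. The blocking rule used below — a still-standing passenger reaches their row in a given round exactly when nobody ahead of them in the aisle is headed for an earlier row — is my reading of the worked example, and it does reproduce the 4 units of time for the queue 341562. For random queues the boarding time grows like 2 sqrt(N), consistent with the random-matrix scaling discussed next.]

```python
import numpy as np

rng = np.random.default_rng(6)

def boarding_time(queue):
    """Rounds needed to seat everyone in the one-seat-per-row toy model."""
    remaining = list(queue)
    rounds = 0
    while remaining:
        still = []
        best = float("inf")          # smallest seat number seen so far ahead in the aisle
        for seat in remaining:
            if seat >= best:
                still.append(seat)   # blocked by someone ahead with an earlier row
            best = min(best, seat)
        remaining = still
        rounds += 1
    return rounds

print(boarding_time([3, 4, 1, 5, 6, 2]))     # 4, matching the worked example

N = 200
times = [boarding_time(rng.permutation(N) + 1) for _ in range(50)]
print(np.mean(times), 2 * np.sqrt(N))        # grows like 2 sqrt(N) for random queues
```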
So the third example, let me put it over here, is the game of cards, this patience sorting: Q_N of pi, which is the number of piles that you obtain, turns out to behave like the largest eigenvalue of a GUE matrix. In other words, if Q_N of pi is the number of piles you get — again you have got to do some centering and scaling, but once you have done that and you compute this — then this goes to F of t, which is exactly the Tracy-Widom distribution for the largest eigenvalue of a GUE matrix. You may remember that that is something which involves the Painlevé II equation. So somehow, in this very strange way, just playing this game of cards is bringing in this esoteric function theory. This is a theorem of Jinho Baik, myself and Kurt Johansson, and has been developed further by many different people — I mention Okounkov, Borodin, many, many people, Tracy and Widom. The fourth problem is the buses in Cuernavaca, and the question was what did Krbálek and Šeba find? Well, they found, quite remarkably, that the spacings between the buses, after the intervention of these observers, behave exactly like the eigenvalues of a random GUE matrix. So the formula is: again you get this familiar determinant of 1 minus K, you take a second derivative with respect to the length of the interval, you integrate from zero to s, and that is what you get. Now the thing that I want to get across: it's not as if these are such very approximate models. The accuracy of the models is really quite astounding, as I'm going to show you now in a moment. It's quite remarkable to me, and I think to everybody who thought about it, just how good this random matrix model is. So let me show you now what they actually found. This is taken from a paper of Krbálek and Šeba in Journal of Physics A. Now what you're looking at here: the heavy line is exactly this formula, the integral of the second derivative of this determinant; the crosses are what they actually observed. Now if one is an applied mathematician and you get this kind of fit, you are quite astounded, right? But the situation is even better than it looks at first glance. There's an inset here which is a blow-up of this left hand corner, and then you'll see, if you look on the inset, there is the heavy line, there are the crosses and there are these dotted lines. Now what these dotted lines do is they take into account that the observers are not recording all the information. Some of the information is being thrown away. So there's what I believe is called a binning problem. So what they do to overcome that problem: they sample the statistics of the eigenvalues by leaving some of them out. So when they leave some of them out, to model the way that the actual observers operate, they get this dotted curve which goes through these crosses even better than the original curve. So the fit is really quite extraordinary. Now the fifth problem is the walker problem. It was analyzed by Peter Forrester, and he found, for the question — you had these random walkers, how far does this guy on the left get? — well, the statistics of that guy's motion, Dn, is exactly described, as n gets large, by the largest eigenvalue of a GOE matrix. The GOE are not the Hermitian matrices, they're the real symmetric matrices. And again it's given by some explicit Tracy-Widom distribution, which is very similar to the F of t which I wrote down before. The sixth problem is the Aztec diamond. You take this square and you tile it.
Now it was a wonderful result of Elkies, Propp and many other people that there is something called an arctic circle phenomenon. So when n gets large, you scale x by x over n and you get this circle emerging, and the circle is called the arctic circle. In the top region here, or the left here, or the bottom here, or the right here, which are called polar regions, you find that the tiling is completely regular. Here it goes east-west, here it will go north-south, and inside it's, as it were, intuitively random, and this inside region is called the temperate zone. So that's the result of a variety of people. So inside the polar regions things are frozen, inside the temperate region things are disordered. Now Kurt Johansson proved an absolutely wonderful result. You draw a line, as I've drawn this red line here. And here is the edge of the circle; it crosses this line in two places. And for any finite n it's approximately described by the circle; in general, for finite n, there will be fluctuations. Now the result of Johansson is that those fluctuations about the circle are exactly described by the Tracy-Widom distribution. So they behave like the largest eigenvalue of a random matrix. Now the airline boarding problem: again we find, under this model, that the time it takes to board, as people line up at the gate randomly, is again given by the Tracy-Widom distribution, the largest eigenvalue of a random matrix. So I just picked these examples to try and spread out within mathematics where these phenomena are occurring. There are many, many other problems there. Hexagonal tilings, condensation problems, percolation problems, there's a whole theory developed by Peter Sarnak and his collaborators and Keating's work connected to L-functions. There are many, many different things. I've just picked these to give you some sense of how things work. At the mathematical level, the status of the problems is as follows: of course the neutron scattering problem is experimental and numerical. The zeta function result is an actual mathematical theorem, that the two-point function of the zeros behaves like the two-point function of a random matrix; there are certain Fourier transform restrictions. The patience sorting problem is a theorem. For the buses in Cuernavaca, there is now a model for it, which was developed by Jinho Baik, Alexei Borodin, Toufic Suidan and myself, where we are able to show the origin of the random matrix theory statistics. The walkers problem is a theorem. The Aztec diamond problem is a theorem, and the airline boarding problem is a theorem. Okay, now what is the kind of mathematics which is involved here? Integrable systems are a key player; they are coming into the analysis. There are ideas from inverse scattering theory, Riemann-Hilbert methods as Ari mentioned, Painlevé theory, the theory of determinants, the classical and Riemann-Hilbert steepest descent methods, and also many, many different combinatorial ideas, people like Gessel's ideas and also Shor's ideas, going back to Shor. It's a kind of mathematical arena. It would of course take many lectures on its own to really bring that out, but that is the kind of mathematics which comes up here. So on my last slide I want to just raise a number of issues. The question is, maybe you're asking yourself, how do I recognize that the system I'm interested in behaves like random matrix theory? A more scientific statement of it would be: in intrinsic probabilistic terms, how do I state a theorem which would be the analog of the central limit theorem?
The central limit theorem says: I've got independent identically distributed variables, I do a specific thing to them — I add them, scale them — and then I get a normal distribution. The question one wants to ask, in purely probabilistic terms, is: I've got some independent identically distributed variables, I do some operation X on them; then when I do operation X on them, random matrix theory comes out. That's the kind of intrinsic question which is being raised, and work in this direction has been done by Baik and Suidan, and also independently by Bodineau and Martin. A question which is posed in, say, more analytical terms is the following: what people believe is that the natural arena for thinking about these things is the space of distributions. That's the space I have here. Initially it's something without any structure, without any topography, but we do know that there's something special here. There's a Gaussian point here. It's like a little valley. We know that as you get near to it, you can be sucked into it. Now we understand there isn't just this Gaussian point. There are also things like the Tracy-Widom distribution. One wants to somehow put some kind of metric down here to understand how you flow in the space of probability distributions. This is a different direction. Finally, the question is to what extent we are seeing an emergence of what one might want to call macroscopic mathematics. I mean, one has microscopic physics and macroscopic physics, which satisfies thermodynamics. To end off, I just want to present a picture, which I'd like to give to you. One should think, as it were, that one is in a valley. You walk around in this valley and you see this thing, and it's different from that thing, but it's like this thing, but it's different from that thing. Then you begin to step out from this valley and you begin to walk away to some distance. The remarkable thing is what happens: the situation, as you look back on it, does not just blur into some indistinguishable picture. What happens is that a very clear picture begins to emerge, a very clear structure begins to emerge, which is very robust and contains a great amount of detail. It is this distant picture that is so well described by random matrix theory. Thank you.
All physical systems in equilibrium obey the laws of thermodynamics. In other words, whatever the precise nature of the interaction between the atoms and molecules at the microscopic level, at the macroscopic level, physical systems exhibit universal behavior in the sense that they are all governed by the same laws and formulae of thermodynamics. In this talk we describe some recent history of universality ideas in physics starting with Wigner’s model for the scattering of neutrons off large nuclei and show how these ideas have led mathematicians to investigate universal behavior for a variety of mathematical systems. This is true not only for systems which have a physical origin, but also for systems which arise in a purely mathematical context such as the Riemann hypothesis, and a version of the card game solitaire called patience sorting.
10.5446/15595 (DOI)
Thank you, Yus, and good afternoon, everyone. Thank you for being here. So my name is Marco Minghini, and I work as a PhD student at Politecnico di Milano. And I will show you a work developed with Professor Brovelli and Dr. Zamboni, which, as you can guess from the title, deals with the issue of participation within web-based GIS systems. But let's start from the beginning. So the context of the study is what we call the geospatial web or geo-web. That can be defined as the set of tools, of services, of infrastructure related to the use of geospatial information over the web. But what do we have in this context? First of all, we have data. And you know, this data can come from different sources. We have users accessing data. And again, they can do it from a variety of possible instruments. We have data catalogs, and then we have data processing tools. Of course, the glue between all these elements is represented by the internet. But as you know, the rise of Web 2.0 has dramatically overturned the paradigm of user interaction over the web. And this new model, which we can say is based on collaboration and online sharing of contents, has affected so much also the geo-web that the term geo-web 2.0 was coined. And actually, it was coined together with a series of other concepts like neogeography. Neogeography denotes the possibility also for non-expert users to build up their own apps using these new web mapping technologies. But for instance, also the concept of volunteered geographic information, which highlights the fact that geospatial information is no longer coming only from the top, as it was in the past, but also more and more from the bottom, so from the single users. And then, thanks to the incredible spread of mobile devices, the concept of PGIS, or participatory GIS, which actually was born in the mid-1990s, so much before Web 2.0, this concept is acquiring more and more importance and is evolving towards web-based and shared platforms where users can dynamically add and edit contents. But what are the requirements that such systems have to meet? First of all, as they embrace the entire community, they must provide interoperability in terms of both data formats and services. They also must be able to manage different users with, in principle, different privileges. And they must also allow these users to act on data, for instance, add data, save data, delete data, edit data, and so on, and also to create customized mashups. So this is the architecture, of course an open source one, of the web-based participatory system that we developed. Now I would like to make just a very general overview, and then we go deeper into each part. But let's start from data collection on the field, which is achieved through ODK, the Open Data Kit suite, which is mainly composed of a server-side application called Aggregate, which is installed under Tomcat and is connected to a PostgreSQL database, with, of course, the PostGIS spatial extension, and an Android application called ODK Collect, which is useful to feed our database with the user field-registered contents. These contents are then published, in this case as WMS, by GeoServer, and then they are accessible. They can be accessible, first of all, in two dimensions. We developed different viewers, for instance, for traditional computers using OpenLayers, GeoExt and ExtJS, but also for mobile devices using, for instance, Leaflet, and again, OpenLayers with jQuery Mobile.
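To give an idea of how any of those viewers, or in fact any client, can consume the WMS layers that GeoServer publishes in this architecture, here is a minimal hedged sketch using the OWSLib Python library; the server URL, the layer name and the bounding box are placeholders rather than the project's real endpoints.

    from owslib.wms import WebMapService

    # Hypothetical GeoServer WMS endpoint and layer name, for illustration only.
    wms = WebMapService('http://example.org/geoserver/wms', version='1.1.1')

    img = wms.getmap(layers=['poi'],
                     srs='EPSG:4326',
                     bbox=(9.0, 45.7, 9.2, 45.9),   # roughly the Como area
                     size=(512, 512),
                     format='image/png',
                     transparent=True)

    with open('poi.png', 'wb') as out:
        out.write(img.read())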
But we also developed a fully participative 3D platform using the NASA WorldWind virtual globe. OK, let's start from data collection on the field, which, as I said, is accomplished by this Open Data Kit, the ODK suite. I decided to make this sort of introduction about ODK because I think it is the least known software among all those included in the architecture. So what to say, ODK is a toolkit, it is free and open source, and it's currently used in hundreds of projects all around the world. It's composed of three different but complementary modules, which actually correspond to the three steps, if we want, of the data collection on the field. First of all, we have two alternative tools for creating the forms. The forms are the questionnaires that users will then compile on the field. And they are ODK Build and XLSForm, as I will show you in the following. Then we have this Android application called ODK Collect that allows users to fill the form and to send it to the server, or actually to the server-side component, which is ODK Aggregate. It can run in the cloud, on a virtual machine, or, and this is our case, on a local server backed with a PostgreSQL database. This is ODK Build. It's an HTML5 web application, providing a very intuitive drag and drop user interface. And it supports a lot of fields. For instance, here you can see a date, the choice of one option within a list, a text field, but we can also register a position, using, for instance, the GPS of the device. And we can also upload multimedia contents. Here you see an image, but also audio and video are supported. As I said, the alternative tool is called XLSForm. It allows to build more complex forms using just a spreadsheet. Anyway, in both cases, at the end of the process the form is exported as an XML file and it is uploaded on the server. So once it is on the server, it is managed by the ODK Aggregate component. And this application plays also the very important role of managing the different users that can have different privileges. For instance, users can have the right just to compile forms and send them to the server, or the right to also download forms, or to even create new forms, delete forms, and so on, up to the administrator profile, which of course can also create and manage all the users. OK, once the form is on the server, using an Android device, we can download the ODK Collect application, and we can start performing our survey. So this is the main page of the application. Of course, the first step is to connect to the server, and after the login is to choose the form we are interested in from the list of all the available forms. Here, for instance, we select this form called point of interest, which, as we will see later, will allow users to report some touristic and cultural points of interest. And then we can download the form. Then we start performing our survey. So we choose to fill a blank form. We access the point of interest form, and we arrive to the real questionnaire, which guides the user in the compilation of the different fields. For instance, the date of the survey, the type of point of interest, in this case a historical monumental building, the, let's say, the subclassification of the point of interest. Here, a villa. The name of the point of interest, here it's Villa Olmo. For those of you who are familiar with Lake Como, it's a famous villa on Lake Como. And then using the GPS, for instance, of the device, we can register the position.
We provide an image of the point of interest by taking a picture in real time, or by selecting a picture that is already available in the device archive, and finally we save our form. So after filling the form, which of course does not require an internet connection, we can choose whether to modify the answers to the questionnaire, or, this time having an active internet connection, to send the forms to the server. So here I show you again the server-side part of the architecture, just to let you understand how ODK Aggregate interacts with the other components. Actually, it's very easy, because when the forms come from ODK Collect, which is the Android application, to the ODK Aggregate server, they automatically fill the synchronized PostgreSQL database. And then it's very easy, because using PostGIS, GeoServer can read the data and can publish them as WMS, for instance. As I said, we developed different viewers, according to the specifications and the requirements, the needs of the applications we were dealing with. For instance, this is a very simple, very traditional viewer, developed with OpenLayers, GeoExt and ExtJS, which simply represents our points of interest on top of a base map. If we click on a point of interest, a WMS GetFeatureInfo query is performed, and a pop-up shows the information that the user originally collected on the field. This is a very similar application that we developed for a project involving some students of a secondary school, who mapped the architectural barriers of the city center of Como. So you should see different symbols, probably it's not so clear, but you should see different symbols for stairs, for ramps, and for pathways. And even if here we see just an image, they also determined for each of them if it was or was not compliant with the current directive. This is just an example. Another example is this one, which is a client developed with the Leaflet JavaScript library, in the framework of a project, this time, that we are starting with the Basin Authority of the River Po. The River Po is the longest river in Italy. Actually, they are starting to make some experiments on this participatory data collection in order to report damages or problems or environmental emergencies related to the river. As you can guess, this viewer is particularly suitable for large screen devices, so typically tablets. While this one is specifically thought for mobile devices and for small screen devices, typically smartphones. Here the flags that you see in the left figure are user reports of road pavement damages, which unfortunately have been a very serious problem in our city during the last winter. And what we did in this case is just to customize the SLD symbolization in GeoServer so that the flags are green, yellow, or red according to the entity, the dangerousness, of the damage. And as you can see here, when the layer is clicked, the results of the query are represented in a separate page, which is much more suitable for these small screen devices. To summarize, in two dimensions we have simply developed viewers, while in three dimensions we developed a real participatory platform named Polycrowd. So Polycrowd, first of all, allows the three-dimensional visualization of our touristic points of interest using the NASA WorldWind virtual globe.
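Before moving on to the 3D platform, here is a small hedged sketch of the server-side flow just described: once the ODK submissions are in PostgreSQL/PostGIS, GeoServer, or any other client, can read them with a plain spatial query. The connection parameters, table and column names below are invented for illustration, since the real schema is generated by ODK Aggregate.

    import psycopg2

    # Placeholder connection parameters, not the project's real ones.
    conn = psycopg2.connect(dbname='odk', user='postgres',
                            password='secret', host='localhost')
    cur = conn.cursor()

    # Hypothetical table of points of interest with a PostGIS geometry column.
    cur.execute("""
        SELECT poi_name, poi_type, ST_AsGeoJSON(geom)
        FROM points_of_interest
        WHERE survey_date >= %s
    """, ('2013-01-01',))

    for name, poi_type, geojson in cur.fetchall():
        print(name, poi_type, geojson)

    cur.close()
    conn.close()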
But this is not all, because in Polycrowd the users can also share their additional knowledge about points of interest and also the customized projects that they can create on top of the globe. Before entering the Polycrowd system, I would like to briefly introduce WorldWind, which is the open source virtual globe developed by NASA. WorldWind is actually available as an SDK, and so it's really customizable and extensible. And then it's written in Java, so it's multi-platform. Here you have just some of the possible features. Of course, if you want to know more, you can have a look at the website. We can just say that WorldWind provides a pool of predefined, default layers, both satellite imagery and digital terrain models. But what really makes it suitable for scientific applications is the fact that you can place on the globe, actually, whatever layer you want. And you can also customize the terrain information. For instance, if you have your own very high-resolution DTM, and if you need it for some three-dimensional analysis, you can use it in WorldWind. OK, this is the Polycrowd architecture, again divided into the server side and the client side. So the platform, which is available as a Java Web Start application, first of all accesses our layer of points of interest from GeoServer, and it renders it on WorldWind. Then, as I will show you in some minutes, the users can view, can edit, and can upload contents related to the points of interest using some web pages that are dynamically generated, both as JSP pages and servlets executed inside GlassFish. What is important is also this MySQL database, which allows to store additional information, like all the user profiles and the related privileges, but also the metadata related to the WMS layers that users can use inside the platform and all the projects that are saved within Polycrowd. Let's go step by step also here. So let's start from data visualization on WorldWind. In this case, we actually created three different layers in GeoServer in order to represent our points of interest on the globe with three different symbolizations, or levels of detail, according to the altitude of the point of view over the globe. In particular, for high altitudes, all the points are represented in the same way using placemarks. Vice versa, for medium altitudes, we use different icons according to the type of point of interest. And at small altitudes, we use other icons according to the nature of the point of interest. These are the same icons that are also available in the Android ODK Collect application. But whatever the altitude of the point of view, if we click on a point of interest, a balloon again appears showing all the field-collected information and also the picture. If we look at the balloon, we see that the last word can be view or view slash edit according to the type of user. And in fact, one feature of Polycrowd is the possibility of registering to the platform, providing as usual a username and a password. And in fact, registered users can also add additional contents about points of interest, while non-registered users can just see what registered users have uploaded. This is an example of a web page of a point of interest for non-registered users, while this is the corresponding one for registered users. You see that there is this button, which is highlighted, which allows registered users to add, for instance, an image, an audio file, a video file, or to enter a comment about the point of interest.
Registered users can also create and save projects. And when I say save project, I mean save not just the list of all the available layers, but also all the contextual information, for instance, the position and the camera orientation of the point of view over the globe. All the saved projects are stored in a catalog. All the users can access this project, but of course, only the owner of a project can then modify it. And something else that only registered users can do is to connect to WMS servers and add layers in order to create customized mashups. Also in this case, all the layers that are added remains are stored in a catalog and remain available for the entire community. This is an example in which you see our point of interest, superimposed on a historical map and other WMS layers. I would like also to mention that this Polycrowd application was one of the winners of the first WorldWind Europe Challenge, which was a competition organized by NASA, and which was looking for, let's say, some WorldWind-based solutions for European community. I would like just to mention the other members of the team. So we were four students. Apart from me, Michele Bianchi, Roddy Jollac, and Andres Kignoneson, our mentor was Giorgio Zamboni. So thank you again to all of them. Conclusion. So we developed a free and open source architecture, allowing users to collect the georeferenced data on the field and to share them on the web. And in three dimensions, as I showed you, also really fully, let's say, a participative platform with also some other functionalities. We are thinking too many possible improvements, of course. But I would like just to mention one. That is an extension that we want to do to the Odyk collect Android application, because there can be a problem in the GPS positioning for mobile devices. Not just the problem related to the low accuracy in the positioning of the device itself, but also the problem related to the fact that the position where I perform the survey, which is, of course, my position, typically the position from which I take the picture, can, of course, be very different from the position of the point of interest that is being photographed. And for this reason, we are thinking to modify the Odyk collect in order to allow users to manually choose the position or to manually refine the GPS-estimated position using an interactive map. Of course, this would also allow positioning without using GPS or for devices without GPS. And then there is another point that is the synchronization of the user profiles on Odyk and Polycrowd so that the user can just register once and use both. Because up to now, it's not in this way. OK. I finished. So if you have questions, just ask. Thank you. Thank you very much, Marko. I think it was a crystal clear presentation. You led us to the requirements, the implementation, the results. And we're honest about everything here. But of course, there are still questions, I think. Are there questions? I know it was crystal clear. I'm curious about the 3D support. Any issues with navigation on a 2D device in a 3D world? Well, I forgot to mention, but this 3D navigation was just for computer. Because in WorldWind, they were developing also the Android version. But up to now, they stopped it. I mean, if you go to a website, probably you read that. The development is going on. But it's not true because they told us till the moment they stopped it. Of course, the performance are not the same as Google Earth. OK, of course, it's a quite new product. 
But I mean, it works. Let's say I have to say that our main goal was not to achieve great performance, but to provide an architecture that was working for our goal. Hi. I came across the open data kit for the first time last week, and it looked really nice. Do you know if there's plans to support anything other than Android, or do you have any idea how difficult that might be? Well, I know that it works only for Android. Then probably, you know, something more than me. No, I was just wondering if you knew. No, no, no, no, no, no. It works just for Android, up to now. Unfortunately, this is, let's say, the bad point. But Android devices should be more than all the other devices. I mean, from a statistics of last year. Hi. Have you had much participation as it been rolled out to the users? Are they using it successfully? Well, we did not publish this application, but we just proposed them to the people which could be interested. For instance, we published the application for reporting the road pavement damages to some administrations. And so we started some projects, possibly, for the future, because really, next year, we had these terrible problems about road damages in all the streets, and the municipalities had no money to intervene everywhere. But actually, all the applications you saw were restricted to a small group of applications, because we wanted to test, first of all, the architecture. OK, thanks. I still see questions. We have, well, like five minutes, I think. Yeah. So for clarification, did you say it sounds like you're running ODK on local database? This is on the mobile device, or you only did it on the web? No, no, no. It's not on the mobile device. OK, no, no, no. The database installed on a server, of course. Then you just connect. Of course, with your mobile device, you have to connect to the server just writing the URL and just getting from the server, and then sending forms to the server. Thanks. I want to see if it works in disconnected environment. Just curious. I can show you with my smartphone. We have one gentleman from here. Never mind. Thanks. Your architecture seems to have glassfish, as well as Tomcat. So haven't you got a bit of an overload of servlet? Well, it was just a choice related to the experience, also, of the students that had to work on that part. Because actually, we had to participate to that challenge. So we say, OK, we have to start also the use of my SQL instead of Postgres. Of course, it's something that could be done. I mean, it could be done, but there's no reason, actually, to use two different databases. So it was just a choice. OK, thanks. Well, probably, is there still questions? OK. Well, first of all, I want to thank the four speakers. Actually, we had four sessions.
Driven by the rise of Web 2.0 and the non-stop spread of mobile device sensors, the concept of PGIS (Participatory GIS) is knowing a new, revolutionary era. This research investigates the opportunity to build up a prototype of Participatory GIS, with completely FOSS architecture, in which data directly comes from field surveys carried out by users. As a result, the system should increase public active participation in data creation and sharing, besides enlarging the knowledge up to the local level. Open Data Kit suite allows users to collect geotagged multimedia information using mobile devices with on-board location sensors (e.g. a GPS receiver). Thanks to an authentication mechanism, on field-captured data is sent to a server and stored into a PostgreSQL database with PostGIS spatial extension. GeoServer is then responsible for data dissemination on the Web. On the client- side, different OpenLayers and Leaflet based solutions allow data visualization on both traditional computers and mobile platforms. The designed architecture provided support for FOSS usage in the process of gathering, uploading and WMS/WFS publishing information collected in situ. GIS user participation could thus be substantially increased, making this innovative bottom-up approach a key factor for fostering, speeding and improving decision-processes.
10.5446/15594 (DOI)
Okay, so my name is Hugo Martins. I come from Lutra Consulting. And the presentation is about using web processing services with Ordnance Survey Open Data. Well, this is the outline of the presentation. We'll start explaining how we came up with the idea of building something which we called Catchment Finder. And then we go on speaking about what it is in fact and how it is working. And in the end we'll draw some highlights and conclusions. We are specializing mainly in open source GIS, both desktop and web GIS. We do software development for QGIS and also web GIS bespoke applications. And within the GIS world we also have a niche specialization in numerical modeling concerning water engineering. And because we do a lot of water engineering, we came up with the idea of building Catchment Finder, because we commonly use a simple procedure that takes a bit of processing in the desktop GIS applications, which is to calculate river basins. And we do that a lot for our modeling. And it was something that we realized was happening lots of times. And we had to process all the data and then make some quality analysis of this data. And yeah, it was taking some time to tweak all the parameters in all the operations that we were making. So we came up with this idea of building Catchment Finder, which we use for this kind of simple processes and also as a proof of concept of web processing services. So Catchment Finder is mainly about combining all this. So we pre-process the data that we took into a database and then we make this process available through the web. As I was saying, it doesn't mean that it needs to be used through the web browser; although Catchment Finder itself is also a web GIS application, the web processing service itself can be used also through the desktop. So what it means is that it doesn't matter if you are a GIS guy or not, you can use it in a straight way, because you don't need to know how to do each step of the processing chain, let's say. So first thing, we cannot run a process without data. So this was the first thing that we needed to come up with, and yeah, great, Ordnance Survey is providing several open datasets. And from those we took Landform PANORAMA, which is a dataset having a vector layer with contours and also a digital terrain model in ASCII grid format, I suppose. I'm not sure about that, but I know it's a raster file. And also we took the Strategi dataset, which is just a vector dataset providing contextual information, like urban areas, streets and a gazetteer. So we took that also just for putting some more information to give context in the web GIS application that I'm going to show you later. This is on the Ordnance Survey website for open data. So you can download all this data, it's open, and not only can you freely get it, but you can also use it for making your kind of studies and analysis and mix some outputs of it. So it's open data. So now we have the data, we have to find a way of providing this service that we have in mind. So this chained model of GIS operations that allows one to calculate a river basin. So what to use? Well, we obviously wanted to use OGC standards, and for that there is one which is called WPS, which stands for Web Processing Service.
And mainly this standard is specifying a way of communication between client and server that allows one to not only find process that are available in the server, but execute this process in a way that is typically done in the GIS desktop application, but through the web. So the great advantage of it is that you can have really, really from simple things to really complex models in the back end and you just need to give the inputs and you'll receive the outputs and you don't need to make all the operations in between. The process will take care of it. So that's the big advantage of it. How is it working then? So the standard is specifying three types of requests. So each time the client sends a request to the server, the server understands this request which can be get capabilities, describe process or execute and then it sends back the answer to the client. All these requests, there are three. So get capabilities. The request looks like the URL that I'm showing you and it returns like an XML response which is basically describing service metadata. So it's describing, giving general information to the user who is providing the service and what kind of processes are available in the server. For example, you see here in the lower part it says process offering. So there it describes how many web processing servers are available in this server. So once you know which processing services you have, you want to know what they are doing exactly. And that's why there is describe process which is another request where then you use a simple identifier that you got from the previous offering list. So you say, oh, okay, I see that there is a processing services that is called simple grass. But what does it do? And that's why you use this request. And then you can see that it has a title, an abstract and also it's telling you which inputs it needs to take so that it can run on the server side. And also it's describing what are the outputs of this process. So you can see here in the lower part that you have at least three outputs from this process. Okay. Now you know that you have a process that you want to run. You know which inputs you need to use to give to the process. So now you can execute it. And for example, for that process that I was showing, we, sorry, I've just come back. So you can see here that he's taking an X coordinate and it would also take a Y coordinate. I just had it because it was too much. And that's what we are specifying in the execute process. We give back to the server an X coordinate and Y coordinate and then from that point, the process will calculate the catchment itself. So this is how you work with the standard. So, yeah, now we have the standard. Okay. We have the data. What are we going to use to provide this service? So we had to choose an architecture to implement this, our objective. And obviously open source was the way to go forward. And we had in mind to build the server side and also a client side. So for the server side, we went, we were using already grass. We have lots of experience with grass. We like grass. Grass is one of the first open source desktop GSS desktop applications. And yeah, we were doing lots of processes with it. So we decided, yeah, if we can find something that works with grass in the back end, it would be nice. Because then we have already all these procedures that already implemented within our framework of water engineering modeling. 
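As a concrete footnote to those three requests, the same service can also be driven from Python instead of a browser; a minimal hedged sketch with the OWSLib library might look like this, where the service URL, the process identifier and the input names are placeholders in the spirit of the example above.

    from owslib.wps import WebProcessingService, monitorExecution

    # Hypothetical WPS endpoint, not the real Catchment Finder service.
    wps = WebProcessingService('http://example.org/cgi-bin/pywps.cgi')

    # GetCapabilities: list the processes offered by the server.
    wps.getcapabilities()
    for process in wps.processes:
        print(process.identifier, '-', process.title)

    # DescribeProcess: inspect the inputs and outputs of one process.
    description = wps.describeprocess('catchment')
    for data_input in description.dataInputs:
        print('input:', data_input.identifier)

    # Execute: run the process, giving it an X and a Y coordinate.
    execution = wps.execute('catchment', inputs=[('x', '355000'), ('y', '455000')])
    monitorExecution(execution)
    print(execution.status)

Of course, behind requests like these there still has to be a server that actually implements the process.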
And so we went to search for some kind of server that would communicate with GRASS and provide the web processing service. And that was easy to find. It was PyWPS, which provides native support for GRASS and R. Obviously, PyWPS is implemented in Python, so you can use several other packages that are working in Python. So if you want to use Shapely or GDAL or whatever kind of package that is running in Python, you can call it through PyWPS. So it seemed a nice framework. And it was also giving a lot of confidence because it was also one of the first implementations of the WPS standard in the open source world. So we went for this PyWPS software. And PyWPS also depends on MapServer, because the standard defines that you can send back to the client numbers, letters, strings, but also maps, like WFS and WMS results of your process. So PyWPS is also connecting to MapServer to send back WMS or WFS data to the client. Then we wanted to build a customized client on the web. Although this server component is working, for example, you can run this process in QuantumGIS: you can install this plugin, which is called WPS-something, you can install this plugin, and then you can run this process from QuantumGIS. But we wanted to make a custom application on the web. So we decided to use OpenLayers. I think everyone knows about it. It's a JavaScript framework, an awesome JavaScript framework that allows you to build spatial apps on the web. Then we decided also to go with Ext JS. It's a big, big framework that allows you to develop rich internet applications that more or less mimic desktop applications. And then GeoExt, which is just a middleware framework between those two, providing some widgets which combine both Ext JS and OpenLayers. Well, so how did we start the development? First thing to do was obviously to include all the data inside the GRASS database. So we set up a location and a mapset, for the ones that know how to work with GRASS. So we started with the first procedures, then we imported the data. We pre-processed the data and made some quality analysis on it. And finally, because we had all the procedure, in terms of analytical procedure, already developed, we just converted this procedure into a Python script that would be called by PyWPS. And, well, we converted it to Python, then we used it in PyWPS, and everything was set up to test this web processing service. So I'm going to do a quick demonstration. Hopefully it works. I think it should work. So this is really simple. It's just a proof of concept. It's just a really simple web GIS. Well, you see that we used open data for contextualization and just putting some nice maps in the background. And to see how easy it is: for sure this process is taking like three or four intermediate steps to come to the end so that it can build the result. So normally it's used only by GIS people, you know, because they know the inner workings of this processing. But with these kinds of apps, the user doesn't need to know what is happening behind. It just needs, it just knows that, oh, I want to calculate the river basin. So, okay, the first thing that he needs to do is just define a point in the map and then calculate the catchment. Sorry. Okay, the catchment is in that area. Yeah, that will be derived from this point. So there's like a buffer around your XY. Now, in the process itself, what we do is that we try to find the closest point in the river to this point that was user defined.
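To give a flavour of the kind of GRASS chain running behind this demo, here is a hedged sketch using the GRASS Python scripting library. The module and parameter names are the GRASS 7 ones, and Catchment Finder may well have used older GRASS 6 equivalents; the map names and coordinates are placeholders, and the snapping of the input point to the nearest stream is left out for brevity.

    import grass.script as gscript

    def delineate_catchment(x, y):
        # Set the computational region to the pre-loaded DTM.
        gscript.run_command('g.region', raster='dtm')

        # Flow directions and accumulation from the terrain model.
        gscript.run_command('r.watershed', elevation='dtm',
                            drainage='drain_dir', accumulation='flow_acc')

        # Basin upstream of the (already snapped) outlet point.
        gscript.run_command('r.water.outlet', input='drain_dir',
                            output='basin', coordinates=(x, y))

        # Convert the raster basin to a vector area and export it as a shapefile.
        gscript.run_command('r.to.vect', input='basin', output='basin_v', type='area')
        gscript.run_command('v.out.ogr', input='basin_v',
                            output='catchment.shp', format='ESRI_Shapefile')

    delineate_catchment(355000, 455000)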
And so from the river, you can calculate the catchment, you know, so. It knows to go, okay. Because, well, you can put the point in several places, but then because we can snap it to the river and find the closest point in the river, then we can calculate the catchment for that line, okay? For that river. That's pretty cool. So that's what's happening in the back end. So the user doesn't know anything about this. So now the process was calculated and, yeah, we'll zoom into it. So it was really small. As you can see, well, what was the problem here? I can explain you. It's because it's really flat and I'll do another one so that you can see that it's working. I was unlucky with the point. Well, hopefully I have. Well, let's put some point over here. Hopefully. So calculate catchment. It's running him. Yeah. So you see that the, because the process is running asynchronously, the user is kept always in constant feedback. So he knows where it is at each stage. So you can see it's now calculating. And yeah, there you go. Then we have a catchment there. As you see, this output was coming from the server as a WFS. And now we can also using pi WPS, we can also retrieve a shapefile to the user. So we decided to give that possibility so the user can just download it. And there you go. And yeah, inside of the zip file, we will have the catchment shapefile like that. So it was quite easy for each kind of user. It doesn't need to be a GIS guy to make this process. And it doesn't need to care about the data or about which operations to do, about which validations to do. Everything is happening in the back end side. So in terms of conclusions, what we can say is that, well, WPS standard is really allowing one to develop really high complex models and turning it into really transparent tool into the user. I mean, if we are GIS guys, know that we can do really simple things to really complex stuff. And we can chain lots of operations together. And if we give those tools to a regular user, which is not a GIS guy, it doesn't understand anything about what it's doing. But using WPS, what we can do is that we can say, OK, this tool is specifically to do this. And it doesn't matter if it's just one operation or 30 operations. It's just built on the back end side. And the user doesn't need to care about the data and about how to make the workflow. So this WPS standard, in fact, was developed by GIS people to GIS people, but it can be used with anything, even just calculations, regular calculations and numerical modeling. It doesn't really need to work with geospatial data, but it's mainly directed to work with geospatial data. Another thing to point out is that all the implementation in open source software is quite stable and really, really robust. So you can work with it and it just works. If you have a little bit of knowledge and try a bit, it just works almost out of the box. Obviously not the Python script, but yeah. There's something that you need to do by yourself because the server doesn't know what you want to do. So you need to do it by itself. And another thing is that without open data, we wouldn't be able to make this proof of concept because, yeah, it's not only that the data is open, free and you can get it, it's that you can also provide services with it. So the only problem, and you saw that in the first catchment, is that this data in terms of special resolution is not that high special resolution. That's why sometimes you get really small catchments. 
So in areas that are more flat and everything is flat, it's difficult to have a good catchment calculation. So it would be nice to have a higher resolution data set so that we could provide a service that is a little bit more accurate. But this is just depending on the underlying data. Yeah, I think that is it. Any questions? I think it's 50 meters, I think so. Would it be possible in future, something that a user can upload his data and then make this analysis on it? Yeah, in fact, we are at this moment developing an application which is happening, which is working like that. We have four data sets and we allow the user to upload his own so we can get higher resolutions of it. And then the process, instead of working with our base data, it starts to work with the user data. So it's possible through the WPS process itself. So it's, yeah, in fact, we are doing it right now. Just quickly following on from that. Does that mean you point the WPS process to their source data or do you actually first copy it up and then point it to your loader? We have a, so for the standard one, in this case, we have pre-processed all the data and it's already there living in the grass database. So when we run the process, I don't need to report everything. So it's just faster because it's living there. But if the user uploads his own files, then I need to import it into the grass database and then I can work with it as I would work with any kind of data set. So the input to your WPS process is something specific to your grass database. You can't say, right, I have a, I don't know, a terrain server here, give you a DTM. I just want to point to this, use that data to calculate my calculation. At this point, no, but it really depends on how you implement the WPS itself because you are coding it. So if in your WPS process you say that one of the inputs is the URL of another service that is providing you data, then you need to code that logic in the WPS script to go and fetch this data and then you can work with it. So it's quite flexible. In fact, the WPS standard itself is not implementing the process, it's just managing, you know, the way of communication between client and server. But it's really flexible because it's Python, so it's really easy to develop things. It doesn't take ages. Yeah, some, some, it might be discussable about the performance until now from what I've done with Python, the performance is great. But yeah, Python is just nice to do quick prototyping and to implement this kind of service. So it's really up to you which inputs you are expecting. It could be an URL, it could be an integer string, it can even be a GML feature or WMS server, whatever. And then you make the logic in the backend script in the server. Yes. So I mean, it should be possible. I mean, you use as inputs WPS. Yeah, yeah. WPS for RASTA. Yeah. So I mean, so the PyWPS is 100, is it grass-based or 100, is it WPS or WPS? Yes. Not the grass itself, but PyWPS is the one, is the server-side component that is dealing automatically with setting up the grass, the communication between grass and the PyWPS itself. So you can call any other program that has Python bindings. So for example, this shapefile, to be honest, was immediately outputted from grass as a shapefile. But I could use it with OGR because OGR also has Python bindings. So we can call it in GDAL and spatial light. 
So even post-GIS, you can access things through the WPS to post-GIS, insert in the database, make analysis in the database, retrieve it to the PyWPS, and then you say, OK, I want this output as a WMS or as a WFS. And it's just really, really flexible. You can do whatever you want. It just depends on how you code it. OK. If I wanted to use this tool in a semi-urban area, I would probably have to have higher resolution data. Yeah. I'm going to be looking at vacant lots, slews, just anywhere I might be able to find a catchment. And you have to individually point on it to do the calculation to find it. Yeah. So you would have to microtap the process and say, I'm going to look in this quadrant, this quadrant, this quadrant. Is that how you would do it? No, no. This is a simple process. It's just a proof of concept. You can put it as much complexity as you want. But this one, you put a point there. And the process itself in the back end is, well, this point is not in a river line. So I'm going to find the closest point in the river. So if you are putting it in an urban area where there is no stream, it will calculate the closest point that is snapping into the river, the closest river. So if you want to do it in urban areas or, I don't know, it's just you need to put new data sets because it's just 50 meters of resolution. So it's not that great, to be honest. And, well, at the national scale, it's nice. At the national scale, you know, so it's just giving you some outputs. But if you want to really work in small areas, and especially where they are flat, you need higher resolution data. But it's up to you, you know, if you have this data, or even like they were asking before, if you want to provide the user a way of replacing the data with their own data, it's possible. It's really possible. And if I have my own model, and I have to put in grass to run it, or is it possible to link to the system? And which kind of model? It can be a nitrogen model or another way to store the basin or something. You don't need to put it in grass if you can call it from Python. Okay, so if you have Python bindings, if it's something like a C module, and then you have the Python bindings to it, then you can call it directly. Otherwise, for example, we had something like this, we had to build a new grass module, so we made a C module, new module, and we have put it into grass because it would be faster than in Python. So it was for performance issues. But we could have done it with Python, and there is this nice add-on to Python module, which is NumPy and SciPy, and this is like providing features really similar to something that is widely known, a proprietary software, and that is for numerical modeling. So you can work with this. It really depends on the things that you know how to do, and it depends if you are worried or not about performance issues. So in this case, for example, we had this complex module that we tried it in Python because it was fast, but it was really slow, so we decided to go and made a new C module for grass, and grass is working with that. Sorry? Sorry? It's called NetWat. We didn't release it yet because, to be honest, the module itself was not done by us, so it's from the university, but we are speaking with them, and probably it will be included in future releases of grass as a new module, you know, R.NetWat, and it goes. So this one is only attached for river? Yeah, yeah, yeah. It's just for that main purpose. It's just like this. 
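Since several of these questions come back to what that PyWPS-side Python actually looks like, here is a hedged skeleton of such a process. It is written against the current PyWPS 4 API, which is not necessarily the PyWPS version used for Catchment Finder, and all identifiers, titles and the helper function are placeholders.

    from pywps import Process, LiteralInput, ComplexOutput, Format

    def run_grass_chain(x, y):
        # Stub standing in for the GRASS chain sketched earlier; the real
        # implementation would run GRASS and return the path to a GML file.
        raise NotImplementedError

    class Catchment(Process):
        """Hypothetical catchment process: takes X/Y, returns a catchment boundary."""

        def __init__(self):
            inputs = [
                LiteralInput('x', 'X coordinate', data_type='float'),
                LiteralInput('y', 'Y coordinate', data_type='float'),
            ]
            outputs = [
                ComplexOutput('catchment', 'Catchment boundary',
                              supported_formats=[Format('application/gml+xml')]),
            ]
            super().__init__(
                self._handler,
                identifier='catchment',
                title='Catchment Finder',
                abstract='Delineates the catchment upstream of a point.',
                inputs=inputs,
                outputs=outputs,
            )

        def _handler(self, request, response):
            x = request.inputs['x'][0].data
            y = request.inputs['y'][0].data
            response.outputs['catchment'].file = run_grass_chain(x, y)
            return response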
We wanted just to make a proof-of concept because WPS is a really nice standard, but it's not being used as much as we were expecting, but you can do it, you know, even for simple things, it's just a nice way to communicate with the server, and it's really straightforward. Then you have all these standard defined, the server understands what you are asking, and the client knows what to ask and how to ask it, and how to read the response from the server. So it's just a nice standard to, then you can abstract yourself from these reading inputs, outputs. You just need to worry with the process itself. So it's just, that's why we did it, you know, just as a proof-of-concept, but it could be much more complex. Thank you. Thank you.
In April 2010, Ordnance Survey made a number of their national mapping products freely available under the OS OpenData initiative. Vector and raster datasets at varying scales were released under a very permissive license which allows users to freely create derivative works, even for commercial purposes. Lutra Consulting released a WebGIS application to demonstrate the value and potential of combining OS OpenData, OGC services & standards and open source GIS software. The WebGIS application, Catchment Finder, uses the OGC Web Processing Service (WPS) to provide a simple method for users to generate hydrological catchments (or watersheds) for any point in the UK. Catchment delineation is based on the OpenData Landform PANORAMA dataset, a 50 metre resolution digital terrain model (DTM). Catchment Finder was developed using the following FOSS components: OpenLayers and Ext JS for all user-facing functionality. MapServer and TileCache to serve background mapping and processed results. GRASS GIS for server-side catchment delineation process. PyWPS to provide a mechanism for interaction between the browser and GIS processing taking place on the server. GRASS GIS sits at the core of Catchment Finder. National slope and aspect raster datasets were pre-calculated as inputs for the watershed analysis module in order to optimise calculation times. A WPS process was developed in python (using PyWPS and GRASS’ python bindings). The process chains together a number of GRASS commands in order to generate a vector layer representing the catchment outline which is then displayed in the web client via GML or optionally downloaded as a Shapefile. PyWPS (based on python) was chosen in preference to alternative WPS server implementations due to the typical flexibility and efficiency offered by python (a high-level programming language). Implementing specific GIS processing tasks as WebGIS applications simplifies the end-user’s tasks and therefore opens up GIS processes to non-technical people. Storing datasets and carrying out processing centrally helps remove the burden of managing large/national datasets. Any updates to underlying datasets can be carried out centrally with minimal impact. As Catchment Finder implements the OGC WPS standard, it is also possible for the service to be utilised by desktop GIS applications. At present, due to the low resolution of the underlying DTM, it is only possible to generate watersheds for larger watercourses.
10.5446/15593 (DOI)
My name is Vasile Craciunescu, I'm from Romania, from Bucharest, from the National Meteorological Administration, Remote Sensing and GIS department. I'm also leading the OSGeo Romanian group, and I'm always scared of presenting at FOSS4G, because this is the meeting of the tribes and you have a lot of developers, and I'm not a developer, and I'm always afraid that people will expect to see things like coding and so on. I had this experience in 2009 in Sydney, presenting after Schuyler Erle, and he's a very popular guy, and people were probably expecting different things. So today I would like to talk about this project we are doing at the Met Office, a European project on an important topic in Romania, in Europe and worldwide: water quality. Because, as you may all know, we do have a lot of issues with water quality and the future does not look good. So, but first, what CLEANWATER stands for. So it's the name, this is the acronym of the project, but the real name means Integrated System to Protect and Analyze the Status and Trends of Water Threatened by Nitrogen Pollution. The project doesn't focus only on nitrogen but on more other issues of water quality. And this is financed by the European Commission through a framework called LIFE, the LIFE Programme, a financial instrument supporting environmental and nature conservation projects. And it aims to develop at the basin scale, so it's quite like a, not a pilot, but it's for just one major basin in Romania. So it would try to create an integrated management system to see when the water quality is threatened and to enable the decision makers to try to simulate what will happen in the future and to take the right decision. So modeling, and then assessing and analyzing the results. And of course, because the subject of the project has a very, very big geospatial component, and we did all this component with free and open source software, that's the reason I'm here to present it at FOSS4G. Beside the Met Office we also have some other partners in the project, mostly related with water, so the National Institute for Hydrology and Water Management, Romanian Waters and so on. I have to speak about our end users, because they are national and local authorities and their very job is to make sure the water quality is good. They're in charge of and responsible for the good quality status, and for being able to assure the compliance of this water with the national and European laws, and there are quite a lot of directives in Europe that deal with water quality, so it is quite an important point. And these decision makers need instruments to understand the impact of new industrial or agricultural investments in their water basins. And they do not possess any knowledge whatsoever of how to run a numerical model, a mathematical model, to simulate the water quality, and they don't know how to use GIS software. They are from a different business, they simply don't know how to deal with these kinds of technologies. They do know how to use simple tools like Google Earth, Google Maps and this kind of easy mapping frameworks. And of course we are looking for nice tools to be able to create scenarios that involve future water quality improvement. So they need to take decisions to improve the water quality, but they need a tool to simulate what will be the impact of those measures, and not only the impact but also what is the cost of those measures. So that is why we have to somehow bridge the numerical modeling and in situ data collection part with some nice, easy to use interfaces.
Of course, geospatial-enabled interfaces. And we did this with an online GIS system, a distributed architecture that could be run by anyone in the browser. So we could do data visualization, querying and spatial analysis, simple ones, not very complicated ones. They could create new scenarios that involve water quality and send those scenarios to a numerical model. So we basically are able to provide an easy to use interface to modify the inputs of the model, send it to the models and then receive the results, and have some tools to analyze and see what is the outcome of the measure after the modeling. And of course, because, you see, everything rotates around the money, because if you had plenty of money you could, I don't know, take all the measures necessary to have a good quality state of water, you have to somehow prioritize. Of course it would be good to use the money in the best way. So you need to have a cost-effectiveness analysis of the measures you are proposing, and to integrate with some popular mapping platforms. So, not to talk too much about numerical modeling, it's not my area of competence, but for this project we are using three types of numerical models. The first is for the surface water quality, and this is a model developed by the Pierre and Marie Curie University in Paris, France. It's called Seneque/Riverstrahler. We use MODFLOW for the groundwater quality modeling, and there are some in-house developed models to model some soil parameters needed for this. And I'm also here in this conference because this project was built on a previous experience, a project called Diminish. And there's a good thing, that this project is also an example of migrating from proprietary solutions to free and open source ones, because the previous system, more rudimentary, was built entirely on things like ESRI ArcIMS, and we used the Microsoft stack for the databases and for the programming language and so on. And this happened in 2005, 2006, and in 2006 I attended FOSS4G in Lausanne and I was amazed by what the free and open source technology can do, and for a new project I simply removed everything we had (and this was some example of the previous project) and went for the free and open source solutions. So the project starts with some system requirements. Being in this quite complex context, we had to take into account a few initiatives and standards, like the INSPIRE directive, which is very, very important in Europe, and also, as we have some satellite data, we somehow have to take into consideration GMES, or the actual Copernicus, the European programme for Earth observation. SEIS, that is the European Shared Environmental Information System; we'd like to think that what we are building now is an example of SEIS. And some directives, and the most important is the Water Framework Directive in Europe. And then some technical stuff. We got these requirements from the end users and, taking into account the mentioned initiatives, we ended up with things like general requirements, data requirements, functionality, security, hardware, software. I'll not bother you with this, because it's just what the system should do, like we need interfaces for WMS, for WFS, for all kinds of things, performance things, and we got like a big document with what the user needs. And then we had to assemble a shared database, and of course we got what was already available in different institutions, but we had to work a lot, almost one and a half years.
We did a number of field campaigns, and it took, I don't know, not a thousand but many hundreds of field measurements regarding water quality and other issues, and it was quite time consuming, and we spent quite a lot of time trying to make this GIS database compliant with what the Water Framework Directive is saying, and the INSPIRE directive. It may sound easy, but it's not, trust me. And this is the content; I will not get into details. We also have a data fusion application, because we are still receiving data from a number of institutions across the country, and they all use different file formats, different platforms, and we had to somehow create a layer that could translate the data we're receiving into some standard formats so that our system could use it. Of course, besides open software, we do make use of open standards, and for this project we are using standards issued of course by OGC and ISO, and also the standards from INSPIRE, which are similar ones. So one of the most important are the data portrayal standards, like WMS and WMTS. This is the main way of transporting the data from the GIS server to the clients, the web and desktop clients. And we also like our users to be able to download the data and use it in their own environments, so we also set up some download services, for the vector data based on WFS and for the raster data based on WCS. We have a catalog. We don't have as much data as you may think, but at some point it's very good to have a catalog that also uses some standards for the search, to be able to be harvested by third-party catalogs, and we created metadata using some well-known standards. And we also use some light formats like GeoJSON and KML to stream the data. So we use GeoJSON inside our web application, and we give the users the opportunity to take the data out as KML, because everyone knows how to use Google Earth. So now I'm talking only about the geospatial system architecture, which is, as mentioned, a distributed one. We have a database management service, a catalog service, a data delivery and portrayal service, and the software clients. And the general thing looks like this. So we have the user, somewhere, who could access the system through a web client, which is how we think most of them will use the project, but if they have greater knowledge and they prefer to use their own tools, they could use a desktop client. So they could connect to our system and make all kinds of requests. Behind, we have the database, the catalog and the services, and we stream back the responses to them in different types of formats, like maps, flow data, services, charts, animations and so on. And we also have the project partners, who are also feeding the system with numerical model simulations and the economic analysis, which is quite complicated and I could not really understand it, but it has to go there somehow. And if you zoom a little bit on the technology, because you are at the FOSS4G conference, these are the tools we are using. So for the client we are using JavaScript libraries like ExtJS, GeoExt, OpenLayers and a number of others, like Flot for charts and so on. And we are testing the system with things like QGIS, uDig, but also ArcGIS, because some of our end users have ArcGIS or Google Earth. And then on the server side we have like a normal stack with Apache, Tomcat, Java, PHP and Python for server-side processing, and then we store all our vector data in a PostgreSQL plus PostGIS database; for the raster we are still using normal plain files, like GeoTIFF, compressed GeoTIFF files.
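As a small illustration of the download services mentioned above, this is a hedged sketch of how a user could pull vector data out of such a system with the OWSLib Python library; the endpoint and the layer name are placeholders, not the real CLEANWATER services.

    from owslib.wfs import WebFeatureService

    # Hypothetical WFS endpoint and type name, for illustration only.
    wfs = WebFeatureService('http://example.org/geoserver/wfs', version='1.1.0')

    print(list(wfs.contents))   # the layers offered for download

    # Download the features of one layer as GML and save them locally.
    data = wfs.getfeature(typename=['cleanwater:monitoring_points']).read()
    with open('monitoring_points.gml', 'wb') as out:
        out.write(data if isinstance(data, bytes) else data.encode('utf-8'))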
We use GeoNetwork opensource for our catalog, and it's a nice product that integrates quite well with other INSPIRE-related catalogs across Europe. And for our GIS server, to be able to deliver data as WMS, WFS and so on, we are using GeoServer and GeoWebCache, and as I said, the user can get back the results in a number of ways. Now, for the final part of my presentation, I will present a little bit, a few facts, about the web client, because that's the public face of our system. So it has just a very simple interface. I could say it's very rudimentary. It's not supposed to be very fancy, it has to be effective. So this has two parts: one is for scenario management, so a place where the user can create scenarios, choose what input data they want to use for the scenario and so on, and then we have the normal web mapping application, based on OpenLayers and GeoExt, with a main map, some layers they could switch on and off and adjust the transparency of, and a number of tools they could use to query the database there. They could get different information there. They could edit, and we spent quite a lot of time on, maybe, not such a useful thing: they could create their own layers or they could edit some of the existing layers. For example, creating a scenario involves editing, for example, the industry in the area. So factories, power plants and so on. So you could delete or add new objectives of this kind, which are very important because they all affect the water quality, and they could also change a lot of parameters. So it's not only geometries but attributes, because you'll need to know how much, I don't know, wastewater, how much bad water they're sending out. And you also have things here like the, I don't know the name, the stations where they are cleaning the water, the wastewater treatment plants. So again, a lot of parameters and stuff, and that's why you're required to do a lot of editing and so on. We also deployed some WPS services. So, especially for vector data, the most simple example is to create buffers. So you could select existing elements, or you could create your own, I don't know what, vulnerable areas or whatever, sources of pollution and whatever. And you could do things like buffering and others, and then you could create complex selections using also this kind of buffers and the attributes and so on. And of course you could view and analyze the outputs from the models. So this is an example from the groundwater model; you could do a lot of thematic maps with the outputs of the models just to understand what is there. And we have a chart module. So you could create charts for hundreds of cross sections, at the basin closing section, per basin and so on. You could animate the charts to go in time, for one year, for one month, for more than that, or to go along the river and so on. Most of these guys, it seems, look at a lot of charts. Yeah, to understand the impact of the measures they are taking. You could put charts also on the map and so on. You could see it in Google Earth. So, for the future work, because the project, I didn't mention, is just past the half time of the project. So we are not done. We have a lot of work to do still. So we have to finalize the integration of the models and the environmental cost, because there are some issues there. And we would like to improve the visual look of the system. We want to apply our own custom ExtJS theme.
We want to use Bootstrap for all the parts with forms, where the scenarios are created and so on, because they look pretty, they scale on different devices, you could use them on tablets and so on. Bootstrap is a great technology. We have quite a complicated document for system validation; a number of components have to be validated using some standard steps. I have to do user training. And we want to open the data. I'm a big fan of open data, and we collect quite a lot of data, and there's not so much open data at this point, in Romania at least, and water quality is an important issue. I think that everyone should be able to access this kind of data and understand how good or bad the water quality is in this area, and this should be important. With this geospatial stack we want to go live no later than January 2014, and of course have more fun doing things. So this is the end. Thank you. If you have questions, please ask me. We are a little bit behind time, but that's not my fault. We have enough time for questions. Thank you very much.
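As a supplement to the WPS buffering mentioned in the talk, here is a rough sketch of what a buffer request to GeoServer's built-in JTS process could look like. The endpoint is a placeholder, and the process and parameter identifiers (JTS:buffer, geom, distance, result) as well as the WKT media type are quoted from memory of GeoServer's WPS, so treat this as an illustration of the pattern rather than the project's actual configuration.

import requests

WPS_URL = "http://example.org/geoserver/wps"  # placeholder endpoint

# WPS 1.0.0 Execute request buffering a single WKT point by 0.01 degrees.
execute = """<?xml version="1.0" encoding="UTF-8"?>
<wps:Execute version="1.0.0" service="WPS"
    xmlns:wps="http://www.opengis.net/wps/1.0.0"
    xmlns:ows="http://www.opengis.net/ows/1.1">
  <ows:Identifier>JTS:buffer</ows:Identifier>
  <wps:DataInputs>
    <wps:Input>
      <ows:Identifier>geom</ows:Identifier>
      <wps:Data>
        <wps:ComplexData mimeType="application/wkt">POINT (27.67 46.23)</wps:ComplexData>
      </wps:Data>
    </wps:Input>
    <wps:Input>
      <ows:Identifier>distance</ows:Identifier>
      <wps:Data><wps:LiteralData>0.01</wps:LiteralData></wps:Data>
    </wps:Input>
  </wps:DataInputs>
  <wps:ResponseForm>
    <wps:RawDataOutput mimeType="application/wkt">
      <ows:Identifier>result</ows:Identifier>
    </wps:RawDataOutput>
  </wps:ResponseForm>
</wps:Execute>"""

resp = requests.post(WPS_URL, data=execute,
                     headers={"Content-Type": "application/xml"}, timeout=30)
print(resp.text)  # buffered polygon as WKT, ready to draw in the web client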
Water quality is a major problem nowadays around the world. The CLEANWATER system combines various information and complex data in order to evaluate the present level of nutrient pollution in vulnerable areas, as well as to assess the cost-efficiency of the measures that could be applied. Through a simple and intuitive web interface, CLEANWATER offers decision makers a spatially aware tool to (1) create scenarios related to human activities and climate changes, (2) send those scenarios to numerical models to model the future evolution of water quality and (3) view, query and perform spatial analysis of the simulation results. The system was implemented in a test river basin (the Barlad River Basin in the eastern part of Romania) and has started to contribute to the development of a modern water management system, according to EU legislation (e.g. the Water Framework Directive, the Nitrates Directive). The future plan is to replicate the system at national and international level. The system is built entirely with standards-compliant free and open source software applications like OpenLayers, ExtJS, PostGIS, GeoServer and GDAL.
10.5446/15592 (DOI)
Thank you. Show it a few seconds. A minute. Okay, let's go. Okay. Hi everyone. I'm Marco. Hi, Marco. Hi, I'm Marco. And, well, I'm going to talk about a different way to talk about smart cities because many, many cities are publishing lots of data. I'm part of Open Knowledge Foundation Italy and I'm working with lots of many Italian municipalities to work on open data. And we all love open data. It's great. It's a great way to get transparency from the community, get transparency from almost anything. And open data are really cool because we got lots of data collections. The census says 281 are available at the moment. Lots of data sources, an incredibly growing movement. And more and more cities, regions, entities all over the world are giving out data. So we get lots of geographic data sets, lots of geologic data sets, and public transportation and so on. So, ever more interesting. Is it jumping? Somewhere? Oh. Earlier that was happening because the connection was somewhere. Yeah, yeah. Can you just say the top title in Italian? The top open data is so cool. But there's a catch. In specific, this is a photo of Central Park. And there's a catch and I want to show you what the catch is about looking at parks. Because looking at the data catalogs from three cities, New York, Chicago, and Bologna, where I come from, we see that there is something really strange going on. New York City Parks have this level of detail, organizations, status, type, and the whatsoever. Chicago, yeah. We can't even read the level of detail. Bologna. I mean, it's part of the game. I mean, everyone publishes the information he or she has. So, New York has obviously a way to look at parks in a more management kind of way, because it has information about jurisdiction, waterfront, map, if it's mapped or not, the borough, the precinct, and whatsoever. In Chicago, we can't read it, but there's a whole level of detail on the specific services available, or areas, or a lot of detail. Yeah. So, if we want to try to connect the dots and see what kind of information matches in the various views of the map, of the park concept, we have to see that, for example, the precinct or the sign name are connected to the park name in Chicago and the norma in Bologna. And the idea, the specific idea of the single row is once it's in Gis Proc NAM, here it's another code, and here code underscore UG. It's terrible. So, it's all about semantics. If we look at a data set and we don't understand the columns, we need to develop something around it to understand how we can be able to manage it. We know how we could do that. Having an application for Chicago, we would take the data sets, work on that, know the column names, and write our code around those column names. It's easy. It's elaborate, but it's easy. But let's say we want to try to take a data set from New York. We would have to add a normalization process, for example, because the address, the written address is not exactly the same format. So, we would have to really create a complete re-elaboration of the data. And it's, again, pretty easy, but quite elaborate. And doing that once means that you have to do it for every data set you want to add to your system. Or we could start looking at the whole problem at a higher level. It's all about dimensions. We have time, we have a space, and we have a topic of the data. Time, it's easy. We know, time's a line, more or less. So, it's pretty easy to manage. Space, we love space because else we wouldn't be here. 
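Before the third dimension, the topic, is taken up below, here is a minimal sketch of the per-dataset column mapping the speaker described a moment ago, where each city's park columns are re-keyed onto one shared vocabulary. The column names are only indicative of what the three catalogs publish, and the "canonical" keys are invented for the example.

import csv

# Invented canonical vocabulary; each city maps its own column names onto it.
COLUMN_MAPS = {
    "new_york": {"SIGNNAME": "park_name", "BOROUGH": "district", "ACRES": "area"},
    "chicago":  {"PARK": "park_name", "PARK_NO": "park_id", "ACRES": "area"},
    "bologna":  {"NOME": "park_name", "QUARTIERE": "district", "AREA_MQ": "area"},
}

def normalise(city, path):
    """Read one city's CSV and re-key every row onto the shared vocabulary."""
    mapping = COLUMN_MAPS[city]
    with open(path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            yield {mapping[k]: v for k, v in row.items() if k in mapping}

# Usage sketch: every new dataset needs its own mapping entry, which is exactly
# the per-dataset work the talk argues an ontology-driven approach should replace.
# for record in normalise("bologna", "parchi.csv"):
#     print(record)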
What the real problem is, is the third part. It's the topic problem, because there are so many topics that are covered by open data. And there are so many data sets available all around the world that it's pretty, quite impossible to understand exactly what a specific data set covers. Again, if we talk about parks, everyone has a different view on what parks are. And that goes from parks to recycle bins to any kind of element. Probably nobody in this room would agree on what a door is. So, it's all about ontologies. We have lots of ontologies, explaining almost any kind of topic. Not every, but many. We have the DNTF for specific computer infrastructure ontologies. We have Inspire. We are Love Inspire, more or less. We have Dublin Core. We have Friend of Friend. Do we need more? We always need more ontologies, because we always need more ways to describe the world in a coherent fashion. What we see behind the whole text is the linked open data graph, the linked data graph specifically, which has lots of data providers and ontology definitions that are interconnected. And as such, the whole discussion of ontologies means basically it's like having foreign keys in our relational databases. So, if we can get the ontologies into the whole discussion on semantics, we can be basically ready to do something way more interesting with our data than we could before. But in the end, this is our discussion for developers and coders. What in the end is true is visualizing the data, and an end user doesn't like a table. It's terrible, because this is just data. This is not information. What a user wants is a map, and he's always wanted a map, because a map gives you the context for the information. And giving you the context for the information, these are all maps of Nottingham in various time periods, giving you the context for the information enables you to understand the situation, to understand where you are and what the services are around you. And it enables you to do one more thing, to elaborate on that. There are many ways to elaborate on information. There's MDX for business intelligence, there is Sparkle for the graph world, there is SQL for the relational world, and there's WFS to get the specific features. And this is great, because basically what you can get is infographics. You get the possibility to do aggregations, to do aggregations and get directly into something like a city dashboard, where you can get more information than you could ever get from just having one element in a table. And you can think about planning, and only knowing every part of the city enables you to do that. So here comes the whole Vivacity project. The Vivacity project starts taking this information, these very simple rows of CSV files, takes the concept of an ontology, this is a very simple ontology for a park, that I wanted just to show you the functionality of Vivacity. It says, yeah, it's unreadable. Okay, it says park, tree, species, because in Bologna we have specific details that the single trees are available present in every park. And then there is services, fields, ball field, basket field, then there is management of the park and phone number, because we have this information almost available here without these... The problem with these CSV files is that there is no connection between the parts. So what happens? This happens. Vivacity uses a graph database as a backend. So the information is connected in a way that enables the user to reconnect back using only the ontology as an entry point. 
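As a rough illustration of the kind of traversal the speaker turns to next with the telephone-number example, here is what an ontology-driven query could look like in Cypher against a Neo4j backend. The node labels, relationship names and the HTTP transactional endpoint are all assumptions made for the sketch (the endpoint path shown was used by Neo4j 2.x/3.x), not VivaCity's actual schema or API.

import requests

# Assumed graph shape: (:Park)-[:IN_BOROUGH]->(:Borough)-[:HAS_PHONE]->(:Phone)
# This mirrors the park -> borough -> telephone chain described in the talk,
# but the labels and relationships are invented for the example.
cypher = """
MATCH (p:Park)-[:IN_BOROUGH]->(:Borough)-[:HAS_PHONE]->(ph:Phone)
RETURN p.name AS park, p.geometry AS geometry, ph.number AS phone
"""

payload = {"statements": [{"statement": cypher}]}
resp = requests.post(
    "http://localhost:7474/db/data/transaction/commit",  # Neo4j 2.x/3.x REST endpoint
    json=payload,
    auth=("neo4j", "password"),
    timeout=30,
)
for item in resp.json()["results"][0]["data"]:
    park, geometry, phone = item["row"]
    print(park, phone)  # the geometry would go straight onto the web map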
For example, knowing this structure, we could just ask the database what the telephone number is for each of the parks and show it on the map, because we know the geometry, and from there we get to here, we get to the borough, the borough is this one, and we have the telephone number, which is connected to here. This way we just ask a question about our ontology and we get the answer for the specific city. This obviously generates a few problems, because it's not all fun and games. The raw data, the ETL, is complex, and we thought that having just an ETL taking only CSV files, Excel files, tables from anywhere, or shapefiles, basically structured data with meta information, is one part, but it needed something more, because many cities are starting to publish APIs to get access to their information. So the raw data can also be taken directly from APIs with this meta information and description. The raw data collected is kept and is versioned, so that we can see, at a given moment in time, what the situation was at that very specific moment. And then there is semantics. The semantic part is basically an interpretation of each column based on the ontologies that are given. And for every change in a single data set, we track the meta information about that specific data set: every change in a data set creates a new semantic model and the user has to intervene, which means doing the mappings. It requires user intervention just in this case. It's not just a front end, it's not just an only-forming tool; it's a way to understand and really help the understanding of the data. In the end, it's a way for the city to become not just a producer of data, not just someone who has the information and gives it away, but to become an integrated part of the city decision making, and most importantly, the integration of data sets and APIs enables the city to really understand what's going on. The stack: VivaCity 1.0 was supposed to be presented last year in Beijing. Didn't make it. I mean, Beijing didn't make it; VivaCity did. Anyway, it was based on OpenLayers 2, Django and Postgres. It was just a prototype, very slow, incredibly slow, because putting a graph inside Postgres, don't do that. I mean, now the new versions are quite good, but yeah, last year wasn't that great. Now with VivaCity 2 we changed to Leaflet, and maybe soon to OpenLayers 3, hopefully. Again, Django as a data manager, and it exposes the APIs. And as I said, I spoke about MDX, SPARQL, SQL and WFS: these are supported by the backend. So Django interprets everything and manages to transform the queries into the specific queries for the various backends. And now there are two backends: MongoDB for the document approach, let's say from the bottom up, and Neo4j for the relational part, Neo4j Spatial, to get the relationships between the resources on the map. And yes, it's open source. It will be soon, in November, by the end of November. I wanted to show you a demo, but the server farm where it is hosted in Germany just said, your motherboard has exploded. Okay, no problem. That's why you use server farms, right? It's somewhere else, someone else has to deal with it. I can't make the demo, sadly. But there is version one on GitHub, and it's a prototype; it doesn't work at 100%, as a prototype. Yeah, and that's it. That's it, yeah. (Applause) You want that? I was going to say you forgot the question. Okay. I'll handle them. Thank you. Hello. Hi. I'm Sebar Colby.
I would like to see if you could go back to the slide that had all the data that looked like newspaper graphics or something. Yeah, the one that one. Okay. So that was sort of like an example. Yeah. How you can be a data integrator. And could you just talk a little bit about what those tables represent and also how these are used in decision making for citizens and urban matters? I mean, good question. This is just an example of what can be done. Basically, these are all aggregations of the information available. The more information is put into the system, the better the ontologies represent, I mean, better the semantics represents the whole system. And as such, you're able to define the specific aggregations. I don't know. Has anyone used MDX and business intelligence tools? Okay. Basically, what you can do is that is you can, you have, imagine a cube of information. Only it has not only three dimensions. You need and you think you need. As soon as you start working with lots of dimensions, you have to really understand how to get to that element you really want to look at. And MDX does just this. It's like SQL, normal database query. Only it enables you to slice this cube and take only the parts that you really want to work on. And then aggregate at the end with only a selection of elements. For example, you could say one dimension is, let's say, it's unreadable here and it's unreadable here. Great. Let's say, let's make an example, usage of buses. You know where the transport, where the bus stations are, you know where the, how the buses, the lines work. You know, if you get to the level that you know every user, when a given user uses a bus, you can be able to start aggregating on that level of detail. For example, saying how many users use that specific bus stop. And as soon as you do that, you're able to create a model for the bus stops. And this model enables you to then understand how many bus stops you really need. And this can all be done through this MDX queries. Maybe they're complex, maybe sometimes they're slow, not always that fast. But it's part of the whole idea is that usually MDX uses specific databases to work. And applying this, the MDX model to a graph database is a completely new approach. Just a very small literature on that, yeah, because it's a very new line of experiments. Actually, my question is facing a similar challenge. Probably is that we ruled out the use of ontologies because of the risk was ending up with one ontology for every dataset. Because basically, in the different sciences, so vague, there is no defined set of ontologies. So you ended up with one ontology for every dataset, or for every organization you can create a problem. Is that something? Have you experienced it? I understand it's just the version one, but how many datasets you already have been? And I think that you've experienced in this. Thank you. Thank you. The problem of the dataset is that, in fact, there is no real consensus of what is a good, the problem with ontology, sorry, is that there is no real consensus that this ontology is good enough for solving this problem, and everyone uses that ontology. And what we did was using the most used ontologies and trying to work on them, to work just with them. As soon as someone gave us data that didn't respect that ontology, there was two-sided work. On one side, we evaluated the specific dataset, if it was sensed to elaborate it and try to bring it towards that ontology. In other cases, it didn't make any sense. 
So it was basically an extension. And the whole thing is that the system itself contains, in part, extensions to the basic ontologies given by the classic... I mean, the standard ontologies, and there is a small extension given, created by us, just to map the additional information. And sometimes, some of the information is just mapped between the ontologies. Just to give you an idea, how many datasets do you have on the big day? We have the... at the moment, it's a good question, we have around 30... we have the datasets of Bologna installed that are around 50, 60 datasets. What are the municipalities or from other... Municipality and a few entities around the municipality. Transports. All companies, all entities starting to push open data. And we're starting to confront with other datasets, and most importantly, the CCAN and SOCRATA data collectors that have a really nice API to get directly to the metadata. And possibly, starting to import their datasets soon. Meaning, New York, Baltimore, Ann Arbor, and anywhere. That would be an interesting experiment, because then we will have really to see what kind of problems... We think we solved. Yeah, exactly. We found that with the data we were working on, we had some problems, but we were able to solve them pretty fast. Yeah. Can I just grab that? Thank you. Sorry, just before the next question, we've got about five minutes left before the next presentation, which is a start. Due to the presentation after this one, I've been told that we'll have to sort of skip to the program, because the one after this is cancelled. We're going to have to get after me, because I'm back for yours, Chris, after. So I've just given everyone a bit of a heads up about that. But we can carry on with the questions in the meantime. So sorry, who got questions? So I guess my question is, so you have Bologna, so you have a starting... You have some ontology specifically for Bologna. When you add New York, there's going to be additional data elements. Some data elements are going to have to be transformed. You may have some data for capital, and you may have to transform it to a population, something like that. So that's part of the work you do. Every time you add a new data set, there's going to be some semantic mapping that you have to do. You're going to have to perhaps extend the data model. I mean, this is structured data. We're not talking about unstructured data. So there's no magic bullet there. This is work you have to do, right? But the idea is that you're going to come up with an ontology, and over time, you're going to come up with an ontology that will be able to include Bologna, Beijing, New York, whatever. Yeah, exactly. There is one additional aspect. Thank you. Yes, exactly. There is one additional aspect. The transformation part of the data from the specific format to the ontology-like format. The idea is to have that easily created by anyone. Meaning to have a small flow editor that enables you to just do the basic operations. In fact, that part is still under heavy development because we're evaluating even the possibility to work with Google Refine and Open Refine that have a great, great tool to elaborate the transformation of information and the datasets. Having that would help us a lot because Google Refine enables you to export the transformations you make. Who knows you Google Refine? Okay. It's an amazing tool. Now it's Open Refine. It enables you to basically elaborate CSV files and Excel files and anything and clean up your data. 
For example, you suppose you have data collected by someone in an enormous amount of time. A given road has one name but it's spelled wrong many times. Open Refine just takes the deck column and says, hey, maybe you meant the same road. You just can clean up everything just before you put it into a more complex system to enable evaluations. We're thinking about integrating that into the system, into the platform so that it's an easy experience to clean up the data, prepare the transformation, prepare the mapping and then have everything already running. Don't you have to do the same thing with spatial data? I mean, using a tool like FME, basically, is that a rule that transforms data into something else? Yes, we do. We do. That's part of the game. We have been looking at Open Refine because there is a tool already, an extension for Open Refine, that enables you to already do at least part of that. That is one of the issues. I didn't even talk about the problem that Bologna, living in Italy, Bologna and the city beside Bologna have different projections in data sets. That's great. That's sweet. Yes? We have a set story. That's okay. We came up to when the next session is due to start. We don't have one in here. If anyone wants to go to a different talk, there's a few different ones on. Alternatively, Mark, do you want to answer more questions or do you want to have a break? No problem. You're okay. Do people want to ask a few more questions or hear their story? It's entirely up to yourselves. Okay. Go back over to you then if you're on. I'm just on water. Basically, the problem with the projections in Italy is that we had, up until last year, when the European standards had with ED50, almost every region had a specific projection. Sometimes even a modified version of classic projection then that made everything worse for people who were working on the data. There are some regions that are overlapping into other, not time zones, meridians. Basically, the government chose to get part of Italy down to Africa as measures so that everything would be on the same side. Now you mentioned OGC standards on one of the slides. For spatial representation, does that mean that you're using GML as your standard way of representing spatial data? Standard weight, yes. Standard weight, yes, because that's what more or less WFS supports. The next question is, you're talking about a whole city. So, you know, there's something called City GML which is designed, it doesn't do very well inside buildings, but outside buildings and it's been extended to include utilities and so on. Is that sort of the longer term model that you'd like to fit into? Yes, that's the, yes. We have a project working, no, we have a discussion going on with people working already on City GML and that's, yes, it's a long term, mid to long term, but yes, that's part of the plan, yes. Do you want the microphone? Well, it's part of the, I mean, it's part of the blocks, yes, scale. No, it's, but if you have the same source of data and then you want the same target, schema, you can do this mapping in hail, it's probably open source and so on, then load it, then export the mapping and do the transformation for the data set aside. So, every time you have this kind of data, you can transform that. Yeah, the idea is exactly that. I mean, you do the mapping, as I said, once the semantic is given for a given data set, that is kept and you don't even have to tell the system he has to go and get it. 
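As a concrete illustration of the projection headache just mentioned, here is a transformation from one of the Italian Monte Mario zones to WGS84 with pyproj. EPSG:3003 is the standard code for Monte Mario / Italy zone 1; whether that is the exact system Bologna or its neighbouring cities publish in is not stated in the talk, so take the codes and coordinates as an example only.

from pyproj import Transformer

# Monte Mario / Italy zone 1 (EPSG:3003) -> WGS84 (EPSG:4326).
# The source CRS is illustrative; each regional dataset may use a different,
# sometimes locally modified, projection -- exactly the problem described above.
transformer = Transformer.from_crs("EPSG:3003", "EPSG:4326", always_xy=True)

easting, northing = 1686000.0, 4930000.0   # made-up coordinates near Bologna
lon, lat = transformer.transform(easting, northing)
print(f"{lon:.6f}, {lat:.6f}")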
The idea is to have it automatically at least once every, say we have a data set that gets updated every week, once every week you get the data set into the system and it's up and working possibly, hopefully. Would this be, let me say, only a container of the data or do you think that it will be also possible to integrate some management models or some other things? At the moment, it's just a container with APIs. But the idea is to be able to develop plugins to have the APIs, I mean, already in the system to be easier to access and faster, possibly. Yeah, but the idea is, again, plan, this is not long term, but mid term, ten, even short term, but yeah, it's part of the game. Any other questions at all? I was just wondering if you could do another use case scenario, maybe using something like water. For the aggregation. Water or just another urban metric that people will look at. In this moment, we have in Italy the, I mean, it's part of the 2020 thing to have broadband in cities. So we're developing a model to see how much a city is valuable for a telecommunication company. And to do that, we need to know lots of information, basically. Amount of people in a given area, amount of infrastructures already available under the streets, amount of the kind of people, meaning the income, the meaning of a given area. These are all factors to be considered in this evaluation. And at the end, we can give a specific value to different zones. And through this system, basically, it's a project that's starting because, yeah, Europe. Things are not always as fast as we would love them to be. The whole project is just, yeah, just starting, but we were able to define two or three areas that big telecommunication companies could be interested in investing in fiber optics and installing the infrastructures. And two of these areas will be starting in a few months. So, yeah, it's a model we are using. The whole aggregation on OpenData part is something we are using, and it's useful. Sometimes it's difficult to find, to really see the connection of information, and that's in part why we wanted to do this project. Because as soon as you see how information is connected between the, how the dots are connected in the city, then the aggregation is simply deciding where to go and cut to see how the cake is made. And as soon as you see that, then the aggregations are immediate. Because you see the structure, you see the stratification of the city, the services, the infrastructures, and you can see, suddenly you can even see where a city is tendentially going to grow. Because you see the infrastructures, the transports, the quality of living in that area, the services present to kindergartens and whatsoever, and schools. And you really understand how the urban fabric is created. And as soon as you get that, the aggregations are part of the game. And the good thing is that having a standard tool, a standard language for these kind of aggregations, you basically can use a graphic tool to just play with them. And it just creates new tables. Then the problem is really understanding those tables with those numbers. And it's something really interesting because it enables you to really have a playground to work with. It's like SimCity only with real data. I just add that it allows you to compare cities in the sense that you can compare the quality of the bus system in the quality of the bus system in Milan. Having all the information connected enables you to do metrics. It's an interesting book. 
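The kind of aggregation discussed in this exchange, such as boardings per bus stop, is expressed in the talk as an MDX slice over a multidimensional cube. As a down-to-earth, runnable stand-in, here is the same roll-up done with pandas on a flat table of boarding events; the column names and numbers are invented, and this only shows the shape of the computation, not VivaCity's actual MDX layer.

import pandas as pd

# Invented boarding events: one row per passenger boarding at a stop.
events = pd.DataFrame({
    "stop_id":  ["S1", "S1", "S2", "S3", "S3", "S3"],
    "line":     ["11", "27", "11", "27", "27", "32"],
    "hour":     [8, 8, 9, 8, 17, 18],
    "boarding": [1, 1, 1, 1, 1, 1],
})

# "Slice" the cube down to the morning peak, then aggregate boardings per stop --
# the kind of question the talk answers with an MDX query over the city graph.
morning = events[events["hour"].between(7, 9)]
usage = morning.groupby("stop_id")["boarding"].sum().sort_values(ascending=False)
print(usage)   # feeds a model of which stops are actually needed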
It's your example of your own parks. I work at Birmingham and the UK City Council. And I seem to have spent days just trying to discuss with our parks team what's a park, what's a recreation ground, what's an open space. This is just within one city municipality in effect. So in order to do that kind of international comparison, the ontology and the descriptions are going to be really, really important to allow that kind of comparison. The infrastructure I suspect is a little bit easier. I don't know because I don't work in that sector. Use of space, that's a really difficult thing to compare. Not that it shouldn't be tried down. So I don't know your parks example, as Mark took, comparing even Chicago to New York. I should imagine judging by not just the attributes that were available, which ones are minigolf. It's an interesting worldwide comparison, the best minigolf courses. That alone is getting into the fine detail. I think that's one of the factors we wanted to explore. Because it had a very practical reason that was the one I was talking about, the fact that we wanted to evaluate the value of a city for the telecommunication. But in fact it's just a metric. You did a market survey. You looked at demographics, were you doing market research or a whole composite assessment? You say, oh well, here's where we see stronger man. Here's where we see an underserved area. I mean, is that it? So you don't understand the driver's for your mouth? The answer is quite strange, so I'm just putting my hands in forward. I believe in Dr. House. He says the patient always lies. The patient lies, but data doesn't. Data is numbers, and numbers don't lie, at least, hopefully. You can manipulate them, but then we're talking about black magic. Data, they're numbers. And being able to create a model only on numbers without getting to evaluate the specifics of having people calling you, would you like to have broadband whatsoever? Has given us the opportunity to really thinking about changing that approach, because again, people lie. And on the phone, they don't even have to give the picture. Once everybody agrees on the basic data, then you can have a rational conversation about what to do. One thing you may be aware of is something called urban observatory, and one of the things that the first thing I ever saw from urban observatory, it's totally graphical from what I saw, totally graphical, was they compared parks in Paris to parks in New York or Chicago or London, and it was just incredible how much parkland there is in Paris compared to how much parkland there isn't in Chicago. And it's graphical. All they've really done is they made sure that the area cover is the same, so you're comparing apples to apples and not apples to orange. But it's exactly the kind of thing you're doing, except they're doing it totally graphically. The idea is to be able to do that. I mean, the whole platform has the scope to be able to do that numerically if you need it so, for example, having an endpoint for Excel to make MDX queries, and possibly graphically, because you could define a query beside the map, meaning in a side panel, you'd write your MDX query, run it, and the data shows up on the map, so that maybe not like that, because that requires normalization of areas, and maybe that is probably easier to do numerically, but the idea is exactly this, to be able to see what happens. 
I mean, this slide deck is slightly different from the one I had before, because the one I had before for version one was deeply SimCity-based, because what's open data if not the engine of SimCity? And we've been, I've been working with the municipality where I live in, it's a very small town, 15,000 people. So, and we've been, I mean, the council, the city council, really was amazed when they saw me playing at a, waiting for a meeting, playing SimCity on my laptop. They said, what's that? It's really cool, and we started thinking, that's the moment when I started thinking of, hey, SimCity, why didn't you ever play it? I mean, you're managing a city, if you can't manage a simulated city, how can you manage a real city? I mean, there are more problems there, for sure, there is more complexity, but the model below is basically the same, and the new version of SimCity is even more like that, is more a network, specifically a network of networks, and that's what cities are, and that's why, yeah, the whole thing started. I've said one thing, and once you've got the data, using a SimCity game-playing model is a really good approach, because it's something that a politician, non-technical people can do, you don't have to be an engineer to do that kind of thing. That's one thing we did with some of the members of the city council, and they played, and it was great, they had fun, and they saw that they couldn't always manage the simulated city, but it was really fun, it was really great. It's interesting, actually, from the SimCity point of view, the talk that we're supposed to be on now by E-Mart and Rob Hawkes, I don't know if you've come across the visiting cities, it's called Visit Cities, V-I-Z-I, cities all one word, they were both, well, certainly I know Rob Hawkes was a former developer at Mozilla, and they're taking the likes of OpenStreetMap data purely for London, I think, at the moment, and OSOpen data, they've made 3D visualisations that work in the browser, I think they were inspired by SimCity from what I understand, as well as making it effectively creating this SimCity game with London as the SimCity in fact. So, it's well worth going away and checking what they're doing now, I think it's not really beta, is it? Yeah, it's still in beta. Following some of their Twitter accounts at the moment, Rob Hawkes has put some images on there, just trying to deal with the Zed-hide data, like in London you've got some roundabouts that are underground and stuff like that, so they're trying to deal with underground roundabouts combined with flyovers, combined with various parts of the infrastructure, let alone the Thames running around. And on the website there is an amazing video of the underground map, and really great. It's amazing because it's a 3D visualisation of the whole underground network, and wow. With real-time data on where the underground is in a given moment, it's really great. Sorry, I'm just taking over, my mind's racing. 
The thing that I find interesting from the City Council point of view is, in Birmingham we broadcast on the internet the planning committee, so you've got these people, untrained, elected members who are looking at 2D plans on perhaps either on a Google map or on our own internet maps, and asking the kind of questions that the 3D visualisation and the now playing of the city would answer a lot of times, how does the life from the Zed affect things, stuff that you would normally model, but they actually just want something they can almost instantly just turn and play with, and then they can make a more informed choice about the actual planning decisions, and particularly start throwing demographic data on top of the actual physical infrastructure and the physical effect of something going on. I was hoping to meet the guys from Visit Cities, but yeah. The guys from Visit Cities are great, they're super star web developers. Amazing. Do you have any more comments, questions, or should we give Mark a break, because we've got 10 minutes before the next session, to start or we can have a... Well, I just want to say that one of the biggest problems with urban planning is, and let's just say, like you say, land use and planning committee decision making, is it's a long elaborate process because you have to integrate all this information, like the infrastructure and then the economics and the architecture and okay, and then you take every public comment that you're supposed to get, and then you respond to every public comment you get, and you know, after... It feels like home, yeah. Seven months, you know, maybe you're... Yeah, I know. You get your permit to build, let's say you were going to build a big, urban, nice new thing, it's maybe 10% affordable, right? It just takes so long. And so what you're saying is this will provide a more integrated, immediate, integrated data, and networks of data, you know, showing all the ways that something is integrated into space, and help people make better decisions. And it's a better way to argue pros and cons of a certain decision. Yeah, it's part of being informed. I mean, being informed is not just having that piece of paper, it's knowing all that's around, the decision you have to make, and all the implications. Call it like that, give me a rest, what a nice... Thank you.
Many big vendors are exploring the smart city concept, explaining that the smart city is a city aware of the things happening in its infrastructures. Thus the vendors are pushing for Smart Grids, Smart Metering, Smart Sensors and Smart Whatsoever. This makes the city look like a sick patient, being monitored in many ways with histograms, gauges and panels for the information to be read. In our opinion this is the most unnatural way to interact with city information. Historically the most used way to interact with citizen-oriented information is the map. Even today, with ever more precise GIS tools, the map can be an important part of a city information management tool. The VivaCity Project is a platform for the data-driven smart city. The core of the platform consists of a map-based view of the city itself, with all the possible cartographic open data made available by the governance. Beyond that, various apps can contribute in a smart manner through a set of plugins and entry points for various views of the city, enabling a deep and complex interaction with the city itself. This system is self-sustaining, considering that the city already contains its monitors, which are the citizens. They just need two sets of tools: a visualization tool enabling the citizens to understand what is being done at a given time, and a tool to express opinions, problems and proposals to the governance. Considering that an overly generic tool loses its meaning because it has no real target, the interaction with the governance is delegated to function-specific or target-specific apps sharing a common API. This way both governance and citizens gain benefits, with both sides creating new data all the time and interconnecting information from the city and its inhabitants: governance has the ability to make decisions based on real-time citizen-driven data, while citizens have the opportunity to create new services using the provided data. [Figure 1 – Part of the VivaCity Smart City Interface] For instance, the APIs offered to external apps are aimed at the following areas of interest: politics and political decisions; maintenance; security; city info, touristic and cultural information; management and urbanistic information; urban events, urban acupuncture and social analysis; emergency management, with emergency information aggregation from the many sources available; economic and managerial information; environmental and energy usage information. The data shown in the interface is the sum and interpretation of the data provided by the local governments through open data, or by applications created by third parties like OpenMunicipio in Italy, the OpenSpending platform by OKFN, or even simply mash-ups with complex data sources, like the USGS earthquake map, the various regional APIs for simple services, or any other app enabling citizens to participate actively in the activity of their government. Using the platform in different cities enables a normalization of the services offered by the cities, and the direct comparison and interconnection of cities through a distributed API, supporting the governance in empowering policies and improving citizens' lives.
10.5446/15590 (DOI)
information services and it fits in different ways in the university so we have an MSc program in GIS in geographical information science and you can see here the GIS services is one component of this so this is a second semester module that they do things like this fundamentals of GIS and special data handling in the first semester and those are pretty the classical GIS courses that they use GIS for the training that they do hands-on with the desktop systems and some programming and this and that and so in some ways the GIS services course which is second semester is a counter to that as a foil if you like for it that having done all of the proprietary software in the first term or first semester in the second semester in the GIS services partly it's about web and internet GIS and and getting away from just desktop stuff but also it's about exposure to and use of the open source sweet and OSG software so that's kind of how GIS services fits inside the MSc program but also it fits inside another program at the University our Horizon doctoral training center so one or two people here who've been presenting here a part of the the DTC so for example Mark Eiliff and Laura Kinley who presented here or been involved here are DTC students so this is a four-year PhD program and broadly the students on this are studying PhDs in the area of pervasive and ubiquitous technology so this clearly has the strong location elements and it's quite focused on some new media web and mobile stuff the intake is about 15 to 18 students a year on this program and it's multidisciplinary so the students come in some of them have strong computer science background others have been doing design or business studies very broad range of students and as part of the four-year program they do 120 credits of training which is roughly equivalent to this to doing a taught part of a master's degree and GI services is a core module but is an optional core module so all of the 15 students here a year will take GIS services but they sort of complement the the MSc students so as a result in the GI services module itself it's a very sort of mixed student group you know they've got so qualified whatever qualified means here I'm not sure what I meant there qualified computer scientists as students almost new to computing students who've done a semester of GIS training and have had all the stuff about the points lines and polygons vectors rasters you know object models field models and then students who are coming to GIS almost for the first first time so it's a it's a interesting balance of training this this module so what's the pattern of classes how is this this taught it's an alternating pattern one week they have a lecture and the following week they have practical classes through the through the semester and this is a broadly how these things pair one of the arrows have slipped already so there's an introduction and that's a pairs with just an introductory session to OSG live getting getting around the Zubuntu system and opening QGIS because they've done some deaths most of the also some of them have done the desktop GIS before a bit of introduction to desktop GIS with with QGIS second week we come on to spatial databases also sorry second pair of weeks and that's paired with a practical and loading post-GIS open geospatial web services goes with practical work in Geo server to set up layers and and exploring service interfaces to those layers so actually doing some manual ATH TTP requests and and looking at XML responses and 
that's on top of the post-GIS layers they've set up plus shapefile layers yes I use shapefiles and then there's the tile mapping and we talk about Geo server SLDs and in the practical also connecting open layers then to the services they've created in Geo server it's a week for them to catch up and then a separate piece of work so they at the end of this they have a piece of coursework to hand in that is a build-up of all of the work that they've done and actually in the piece of coursework they sort of marry two different pieces of the practical work together so up to here they've set up services and then they have a slightly separate piece of work which is introduces them to open layers and the final so bringing it all together is actually to connect open layers to the services they've created and to create an actually an appropriate web map interface to some data analysis they've done in through this practically in QGIS as well so kind of the coursework they've kept up with the individual practical elements the coursework kind of brings this all together so at the end they also do some API programming we which is separate from the coursework and we you know we one of the aims of the msc program at least is that they should see commercial and open solutions because that's they're going out to the world of work still that's what they need so we do rather so they've done some work in open layers as an API and we kind of compare and contrast that with using Google Maps so that's kind of the structure of this that this OSGO live thing fits into so it's a pretty sort of brisk pace through this to keep going through all of this so the first iteration of this was in the spring of 2012 last year it feels like feels like several years ago already and indeed this was quite a previous version of OSGO live this is 5.0 that used in this and my first sort of attempt or solution at this was to use Oracle Virtual Box for the system and I came up with a cunning plan for this so we're in if any of you've been in the B26e lab here but that's if you've been to run the workshops in in this building it's in that in that room so that's a networked Windows 7 lab and associated about we have a sort of sand storage as well so I wanted to be able to use that one of the aims here was that the students didn't have to be in any particular lab that they could move or even take something home to to work with so that they they had sort of portability of what they were doing so the VM is too big just to put on the sand and let them access over the network and lots of network contention that way so the the the original VM image this is source OSGO live image went on the local C drives of every one of the the PCs and set up as one of these multi attached virtual disks which means that Virtual Box doesn't touch that image it saves everything in the in diff files and those diff files went up to the the sand okay and so there was some testing of that and in for me doing it with us or with a couple of PCs that was working fine but it means that you could take your snapshots away and and those would those are relatively small so they're easy at the end of a class to save and you could go away and and work on this stuff at home for example okay however once you've got a class of sort of 20-odd students and you're relying on the network link to the sand how the good it is this started to get a bit sort of temperamental okay actually they're saving the diffs and the access and worse still a problem that became more obvious as the 
as from sort of class to class is that they although you're using the same exact same file on each of this PCs if you're not careful when you set up the the VM it gets a UUID a unique ID which means that you're kind of locked to working on the VM on that on that machine now there are ways around that but in the middle of a class and with this sort of some of the issues with using the network links and and the dis being temperamental this was starting to get to flaky after about sort of three weeks of also three of these practical sessions so that was turning into a not good solution quite quickly so actually halfway through this course we made an emergency right turn and switched to using the USB live system so it was a non-ideal should we say so we were so the first question is how how to just buy 20 USBs doesn't sound like very much but in the university there's always a question of who's going to pay and we're lucky that the Ordnance Survey set stepped in and sponsored the the USBs and so they got some branding on the on the USB drives and this was quite a positive experience there's a slight downside we bought 8 gig USB drives about so nine pounds each so that's not yeah not too huge a cost the problem is that the persistence file on a USB system is limited to 4 gig for usual sorts of reasons not least that this is a fat 32 system on the USB drive and and all sorts of addressing issues and so the virtual file system is limited in size and you don't get to use the whole 8 gig of the of the disk drive itself got sort of 4 gig plus the original sort of image of the Zubuntu system and of course the practicals had done had been tested with the virtual box system so we actually crashed into some problems with the students saving the zip files and the unzipped files and everything else all in this persistence file and it filling up so there was some smaller issues with them just filling the disk space and the Zubuntu system grinds into a halt because it had no no scratch space but it worked so it was a bit sort of bit difficult in places but 17 of the 20 students submitted successful coursework that year one one of the three dropped out for non-technical reasons so two of them had to resubmit the coursework which actually isn't a bad ratio on coursework anyway I think but it it's still because all the switching around it was a lesson ideal to learning experience for them because they learned as much as anything about trials and tribulations of using live systems as much as the GIS components itself but good things you know the OSGO live environment is a sandbox server in a box you know that the idea of setting up servers for all of them to work on that might be web accessible and so on was just a bit problematic in the in the university it's easier that they stick something up you you can use the physical host to connect to the live system possibly so you can do a bit of so testing that way but it it's much less exposed it doesn't matter so much what they screw up they each have their own system to work with so generally as happy with the general sort of possibility of the live system here there's quite a bit of time wasted and student confusion raised by the VM configuration at the start all this stuff about where you saving diffs and what diffs are and then the the problems later on but also there's a hell of a lot of technology here in this stack to have to get through so within the course to on the so golden thread of just getting them through that stack of things from working out how to 
do Zubuntu all the way through to having an open layer system you know there's all of this sort of stuff that they're having to get some sort of grip on in here and so there's quite a big learning curve just to just to get through to how do how do you produce a web map okay and understand some of the component technologies you know we you could probably go to matbox or something and just drop some data in and make a map but the aim here is to actually understand something of the stack of software to to make it as well so this year then this last spring we went for just straight to the so USB drive solution the deal with us for the students with the USB drives was that they could they gave me a 10 pound deposit for the drive they could either keep the drive at the end or they could swap it back for the 10 pounds in the previous year most of them kept the USB drive so I had money and could you know stack of cash basically I could buy new USB drives mostly those are one or two who'd return them so it's exactly the same model a Kensington data traveler stick in the previous year in 2012 every stick was exactly the same capacity and so you could write onto it an image for the whole just you know disk image essentially for the USB stick including any partitions and things in this set each each USB drive just varied a tiny bit in in capacity for for some reason and so that meant a switch around that you had to create partitions and then write partition images which is a bit more complex and meant having to work out work a different way to create the sticks and it's just slightly slower but this piece of software clone Zilla I found very useful for actually once you've got your master USB of taking images of the partitions on it and then being able to write it back onto onto duplicate USB drives had a new partitioning scheme because it didn't like the idea of this wasted space at the at the end of the the drive so it looks like this so you have to have a the yellow ring there is you have to leave a little bit of a gap in between the two partition so one is blocked at the front of the drive the other is blocked at the end of the drive and there's a gap in the middle that accommodates this variation in in actual physical space on the drive and so you take basically a partition image for the master partition and the fat 32 partition you set and there's a parted script that sets up the basic structure and clones illa writes into it okay so that sets up a USB drive in a way to clone it and so this this was a lot more successful that they're just not having any other stuff about VM configuration they basically got a drive and they could use it say you know saved a lot of if you like mental capacity and work time within the within the core so it made the course more relaxed however these USB drives this this year seem to like I said a bit different from last year's one and they failed much more so four of them just died completely it seemed to be actually the physical interface putting them into the machine no recognition of anything plugged in two or three of them the students managed to screw up the file system somehow and needed partition partitions being recovered the advantage of having the two partition scheme is that essentially where the aim was that they kept this partition as mostly system and they put data up here and that's actually worked quite quite well because it tended to be the system partition that got corrupted because of having separate partition images it's possible to just a splat 
back a new version of the system in the lower partition and they still got their data preserved in the in the top partition as well so it's also they stick to how they should be putting the data on it's also a bit more yeah yeah bit safer and they can always back up that partition as well to hard drives at home and things to help preserve their work because one way the reason one the reasons I went for VM in the first place is students lose USB drives you don't want them just losing all of their work a week before submission and things and this gave a bit of protection for this because this partition at the top is fact 32 it's easily read although Windows 7 doesn't mount it for all sorts of reasons just Windows is issues in seven this one is a squash FS is much more difficult to access from a separate system so another reason for getting to put data up here is it's a simpler partition to access as well so we've got to yes so actually in the end despite one or two sort of teething problems or issues with the USB drives all the students submitted this year one a day or some day or two late and so they all passed except the guy who submitted late because the and that was just because of the late penalty so it's more successful in terms of student outcome this year and partly because it's the second time round and I had some bugs and just what the practical work was but also just there was it was much simpler so this will be be repeating this course again this year so what am I going to do this year I've been thinking of just using the live USB again and probably using live seven so shame this not got queues just to on but there we are seven point five isn't out quite in time for this this course the point five releases come just after I've started the course which is slightly disappointing but just one of those things but maybe I'll look at a different live system I know some of the presenters here have built their own new bunto live systems which are a bit more slimmed down that's a possibility but I came in working on some of the works with people on some of the workshops here came across portable virtual box which packages virtual box into an executable you can run off a live disk live six is very slow to boot okay and it was much slower than life five to be the biggest disadvantage this year was having to wait for life six to boot and actually loading it into memory as a offer VM image does does have some advantages I might go for this this scheme all other things I might experiment with is setting up Amazon instances for each of the students and get away from having physical devices at all that they can access those instances anywhere but that needs needs in different form of payment and that might get in the way and then possibly over with some discussions around here maybe some sort of thing client thing but I think that's too difficult with our firewalls and proxies so I suspect it'll then so part the minimum action is to use live seven and just go for the same thing again maybe a different make of USB drive this year but it was reasonably settled by the end of last this last session so I think that's the that's the run through have I done for time too bad so if you want want any more if you want a bit more detail of set it of the sort trials and tribulations of setting up the USB drives and things if you go to my I don't often blog but I did write it up on my blog page if you want more about this or just contact me directly thank you
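For readers who want to reproduce the two-partition USB layout described above, the sketch below drives parted from Python to create a system partition at the front of the stick and a FAT32 data partition towards the back, leaving a gap in between to absorb the small capacity differences between individual drives. The device path, sizes and offsets are illustrative guesses, not the exact numbers used in the course, and writing the OSGeo-Live image itself (e.g. with Clonezilla) is a separate step.

import subprocess

DEV = "/dev/sdX"  # replace with the actual USB device -- double-check before running!

def run(*cmd):
    """Run a partitioning command, echoing it first so nothing happens silently."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# MSDOS partition table, then two primary partitions with a deliberate gap between
# them so the same partition images fit sticks whose capacities differ slightly.
run("parted", "--script", DEV, "mklabel", "msdos")
run("parted", "--script", DEV, "mkpart", "primary", "fat32", "1MiB", "4400MiB")     # system image
run("parted", "--script", DEV, "mkpart", "primary", "fat32", "4500MiB", "7400MiB")  # student data

# Format the data partition; the system partition is overwritten later by the cloned image.
run("mkfs.vfat", "-n", "GISDATA", DEV + "2")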
Big Data in the Earth sciences, the Tera- to Exabyte archives, mostly are made up from coverage data whereby the term "coverage", according to ISO and OGC, is defined as the digital representation of some space-time varying phenomenon. Common examples include 1-D sensor timeseries, 2-D remote sensing imagery, 3D x/y/t image timeseries and x/y/z geology data, and 4-D x/y/z/t atmosphere and ocean data. Analytics on such data requires on-demand processing of sometimes significant complexity, such as getting the Fourier transform of satellite images. As network bandwidth limits prohibit transfer of such Big Data it is indispensable to devise protocols allowing clients to task flexible and fast processing on the server. The EarthServer initiative, funded by EU FP7 eInfrastructures, unites 11 partners from computer and earth sciences to establish Big Earth Data Analytics. One key ingredient is flexibility for users to ask what they want, not impeded and complicated by system internals. The EarthServer answer to this is to use high-level query languages; these have proven tremendously successful on tabular and XML data, and we extend them with a central geo data structure, multi-dimensional arrays. A second key ingredient is scalability. Without any doubt, scalability ultimately can only be achieved through parallelization. In the past, parallelizing code has been done at compile time and usually with manual intervention. The EarthServer approach is to perform a semantic-based dynamic distribution of queries fragments based on networks optimization and further criteria. The EarthServer platform is comprised by rasdaman, an Array DBMS enabling efficient storage and retrieval of any-size, any-type multi-dimensional raster data. In the project, rasdaman is being extended with several functionality and scalability features, including: support for irregular grids and general meshes; in-situ retrieval (evaluation of database queries on existing archive structures, avoiding data import and, hence, duplication); the aforementioned distributed query processing. Additionally, Web clients for multi-dimensional data visualization are being established. Client/server interfaces are strictly based on OGC and W3C standards, in particular the Web Coverage Processing Service (WCPS) which defines a high-level raster query language. We present the EarthServer project with its vision and approaches, relate it to the current state of standardization, and demonstrate it by way of large-scale data centers and their services using rasdaman.
10.5446/15588 (DOI)
Okay, second session of this afternoon. We have a small change on the program. We have Alessandro Beccati instead of Peter, talking towards data analytics. Yeah, thank you. So I'm Alessandro Beccati from Jacobs University, Bremen, and we are coordinating the EarthServer project. So I'm here to talk to you about the approach of this project towards big Earth data analytics. What we will see in this presentation is a brief overview of the EarthServer project itself, and which open standards we are using in the project. Then something about the technical platform, oriented to the scalability issues of dealing with big data, and some demonstration of the services that are being built up on the EarthServer infrastructure for delivering access to the datasets themselves. So EarthServer is an EU-funded project. It involves 11 partners from both computer and earth sciences, putting together the software developments and the technologies to build an infrastructure for serving and accessing the science datasets efficiently, providing analytics over them in a flexible way, and building on top of that technology pre-operational services for data access and analysis. What's our approach? Well, we use distributed systems for server-side processing, and we move toward the integration of data and metadata for dataset analysis and location, which I will show you later, and we visualize that on the web using 3D clients and 2D clients. All of that is based on open-source software and open standards. Let's talk about standards to begin with, to ensure interoperability of the data served from the archives. We use an OGC standard. I'll focus this presentation on the data model, which is GMLCOV, the coverage model. What basically is a coverage? Well, it's a representation of a space-time varying phenomenon, and it is provided in a standardized way. We have the ISO definition for a coverage, which provides the abstract definition. The GML definition provides a concrete implementation over which you can serve and deliver your data and read data from other systems in an interoperable way. The basic type of coverage that we deal with is grids. So grid data. Actually, not just maps, but going up in dimensionality, you have multi-dimensional grids like data cubes in space and time, for example if you have a time series of satellite images. Well, you have to locate your data in space. So there is a standardized way of dealing with the coordinate reference system of your dataset. And beyond grid data, which is quite convenient for the technology behind it, the coverage model defines other kinds of dataset like multi-point, or topologically different coverages like curves, surfaces, and solids. Let's have a look into the standard itself. This is the conceptual top-level view of it. It's basically a feature from GML, so it is compatible with GML. The coverage is defined by three main elements. Well, I said that it's a definition of a dataset, so the core element is the range set, which is the container of the actual values that you have to access. So if it's a multi-band spectral image, all the pixel values, let's say the structured pixel values, are contained in and then delivered through this range set element. Well, I said it can be structured data, so you have to find a way to deliver information about the structure of the data itself. We have the range type element for that. It comes from SWE Common and tells you how each pixel, how each value, let's say, is structured.
So if it's a multi-special image, you get information about each band value and semantics of the value itself. This is the data part. Then you have to locate this data in space. How do you do that? With the domain set element, the domain set is again coming from GML, and it is holding the coverage types. It's also what define the type of coverage. So if it's a grid coverage or a multi-solid coverage, it all depends on the domain set element. So with this element, you deliver the coordinates of the data, and it can take on different topologies and can be compact or extended depending on the layout of the data itself. So what is the idea behind this having a coverage? Well, it will help integration of data because you have an unified model that takes on observations from very different variety of sensors, and put them within a generic schema that can serve out n-dimensional data in n-dimensional coordinate systems. So from one side, you can fill in the coverage with different data sources, and on the other side, you can access and process this data with the aim of the interoperability of systems. There are many standards based on the coverage model. There is core and extension model, but the key aspect of this project that I want to show you is flexibility. So which part of these standards we can use for that? Well, how do you get flexibility for analyzing and accessing a standard-life data sets? We use an high-level query language approach. So we use a standard that allows you to write direct queries over your data model. Well, having a query over a data set is a proven valid model. We do that on the coverage model. So basically, we have the Web Coverage Processing Service standard, which defines this query language over the n-dimensional data set that is stored into the coverage. What can you do with this kind of language? Several operations. You can do server-side computation. Well, the languages composed of main elements where you define which coverage, so which data set you want to operate, the query two in the four clouds, and you decide what to return out of this selection. For example, here there is a band-mass computation, define it into the query. So you're processing a coverage data set and extracting already a subset, sub-selection of the bands stored in this value with some computation done on them. Obviously, if your coverage is large, you want to subset it to an predefined area of interest. The query language allows for that. So you can specify sub-setting in the coordinate reference system in which the coverage is stored and defined. Here, the example is latitude, longitude, and time for a three-dimensional cube of data. Another interesting thing is that you can integrate different co-virages together by specifying them as different variables in the query and provide operators over these different co-virages into the single query to provide an integrated result. Okay. Right. Example of the semantics that you can get with the query. Once you learn how the query language is laid out and how it works, you have the semantics of what you want to get from the processing of the dataset directly encoded into the query, instead of having it in an extended human readable form like in WPS. So it's a compact way also for representing your function. What are we doing on top of that within the frame of the project is integration of not only the processing part and the access part, but also of accessing the metadata relative to the coverage stored into the server. 
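To make the query-language idea above more concrete, here is a rough sketch of how a client script might send a WCPS query over HTTP. The endpoint URL, coverage name and band names are invented for illustration, and the exact request encoding (here the WCS Processing extension's ProcessCoverages operation) can differ between server versions, so treat this as a sketch rather than a recipe.

```python
# Hypothetical example: server-side band math on a spatio-temporal subset via WCPS.
# Endpoint, coverage and band names are placeholders, not a real service.
import requests

ENDPOINT = "https://example.org/rasdaman/ows"

wcps_query = """
for c in (SatelliteTimeseries)
return encode(
  ((c.nir - c.red) / (c.nir + c.red))
    [Lat(43.0:44.0), Long(10.0:11.0), ansi("2012-06-01")],
  "tiff")
"""

resp = requests.get(ENDPOINT, params={
    "service": "WCS",
    "version": "2.0.1",
    "request": "ProcessCoverages",  # WCS Processing extension carries the WCPS query
    "query": wcps_query,
})
resp.raise_for_status()
with open("ndvi_subset.tif", "wb") as f:
    f.write(resp.content)
```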
What does it mean that you don't have to know all the coverage is by name and what they mean? You can do that by describing the coverage is. But the goal is to have predefined metadata and specify the query directly on that. So for example, you can tell, I want to process with this query content all coverage that deal with Barcelona as the geographic extent. Or you can get metadata from the result of the processing. Like I want to test some condition on the coverages, and I want to return on the ID and the extent of this coverage is matching the processing. This is the implementation is ongoing from the Jacobs University and the Athena Research Lab Partners of the project. So this is for the storage and processing level. Then you want to visualize some of the data after the processing, and you do that with the X3D standard, that is being employed in the project for visualizing multi-dimensional data. So once you do your extraction and you define how to visualize that into 3D, you can visualize your data. So what we do basically is, again, leverage on the query standard to build the web interfaces that builds the query for you and provides you the display of the results. An example of that is the 3D visualization that we have used, and you can see the result of an query extracting data from two different coverages. One is the red, green, and blue bands of the dataset, and the other is the Alpha channel that is built with a digital elevation model so that you can get it displayed as an image, light out on the 3D scene. This we are doing with the round-offer project path there. Okay. Let me talk about then the platform, technical platform that we are using for storing and accessing the data. We are basing that on Ruslan, which is an array database and it is providing the core storage of the system. Core focus of the project is dealing with the scalability issues of accessing data. So we aim at the scalability through the parallelization of the system and the approach is then to, well, we use queries to access the data, to extract the data. So we want to distribute the query based on their content and on the data location. So, Ruslan system offers several optimization for that. For dealing with the data itself on a server, we have a tiny architecture and we performed a type processing in a pipeline and well, most of the trading is employed there. But what we aim to do is the parallelization at the query level, so that you receive a query and you're able to split it according to the processing node that you have available and according to the content of the query. So where the coverages are located, you receive a single query, you split it over different processing servers, and you then join back the results, which hopefully are reduced in the dimensionality because it was upsetting on the single server, and you fuse back the resulting coverage. So as I told, the Ruslan system is providing the horsepower for storing and accessing the data. Again, it is an array database, so it's particularly well suited for the gridded coverages, but in the project we are extending the system for supporting the irregular grids. We have already a working example of multi-point coverage extraction, and we are moving toward the irregular gridded coverages, so that it can be used into the system itself. 
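When no server-side processing is needed and you only want a cut-out of a stored coverage, the plain WCS 2.0 GetCoverage request with subset parameters is the simpler route. A minimal sketch, assuming a hypothetical endpoint, coverage id and axis labels:

```python
import requests

ENDPOINT = "https://example.org/rasdaman/ows"   # placeholder service URL

params = {
    "service": "WCS",
    "version": "2.0.1",
    "request": "GetCoverage",
    "coverageId": "OceanTemperature",           # illustrative coverage id
    # one 'subset' entry per axis; axis labels depend on the coverage's CRS
    "subset": ["Lat(35,45)", "Long(5,20)", 'ansi("2012-06-01")'],
    "format": "image/tiff",
}
resp = requests.get(ENDPOINT, params=params)
resp.raise_for_status()
with open("subset.tif", "wb") as f:
    f.write(resp.content)
```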
One interesting feature, that we employ to avoid duplication of data, is in situ that I will show you later, and the distributed query processing system is being implemented also with the integration of the metadata. Okay. So graph note, as the manager array database system, and it works on n-dimensional data. So you can have not only maps, not only 3D data volumes, but 4D data cubes like in climate simulation. Well, it provides not only an array engine, but also an implementation, the reference implementation for the standards, both the WCS and the WCPS that you can use to access your datasets. Okay. One feature, one interesting feature is that it partition your data archive, like in custom tiling 2D or 3D tiling you can do. So that optimize the access of the data according to the layout of the data pattern. Well, the interesting feature I thought was the in situ feature. What does it mean? Well, to obtain the optimization of a pilot storage, you have to import your data into the database, and lay it out according to what you expect to be the access pattern for your clients. One extension is to reference data files themselves or existing archives themselves so you can not import but register your data. Of course, with this approach, the optimization of the data structure is lost, but you don't have to duplicate your entire archive. So what you can do is load, link all the archive and when you have hot spots where the data is accessed more frequently, you import directly and build the data structure for communication. Okay. Let's have a look at the services that are being built on top of these technology stuff for the storage and processing and visualization. We have different domains of data providing solution for accessing and processing and analyzing data. One of them is the CreoSphere Data Service, which provides you a web interface for accessing and analyzing the snow cover products and you can do combined analysis with digital elevation models and re-enbalancing district data. That can be done directly on the web interface, which builds the query for you and you get results displayed. Second domain is the Atmosphere Data Service, which is employing the same technology and providing the similar interfaces to modus derived atmosphere products. So with this service, you can access for example, two-dimensional extracts of the data coverage that is stored in the servers or temporal profiles of the values of the parameters. The service deals with the ocean data, so it provides web access to marine datasets that can be analyzed dynamically on the web interface. Again, this is leveraging this same stack of standards. So what it's flexible is that you can parameterize the query toward the interface quite conveniently. One example of the three-dimensional visualization comes from the geology domain where you can have solid earth information like geology parcels, and the access can be done by visualizing them in 3D and moving them into space to see how they are located and how they relate one to each other. So that's a good example of the 3D visualization. To give it a little fancy, not only earth is considered, but we also have planetary science implementing these services. This service is provided from Jacobs and it's analyzing multi-spectral, high-perspectral datasets coming from Mars. Again, tower with the web interface, and it provides for good examples of query that compute statistics, or in this case histogram of the dataset itself. 
So writing a single query, you get the data for your diagram directly on the web. So, for concluding, the aim of this project and technology stack is to provide what we call general analytics over the datasets. The concept then is to have a flexible way of querying your data without having to program, without having to deal with the internal dynamics of the access to the dataset, but just providing high-level access to a structured dataset. The integration of the data and metadata to provide search over catalogues is a good addition for that analytics part. Basically, we have the standards-based interface to the system, all implemented with open source solutions, and the visualization toolkits. So one thing I want also to stress before concluding is that the project is building user interest groups. So whatever colleague is working in this domain area, please have a look at the website and the service provider implementations, and whatever you find useful, use the data and the access to these datasets, so that we have feedback on the solution while the implementation is ongoing. Good. That's all. Thank you for your attention. My question to Alessandro: the query language you defined as part of WCPS, is that language sufficiently well defined that I could in theory implement it myself on a different backend? Indeed. It's provided and defined in a standard document from the OGC, so it's completely defined. You can provide different implementations, not only this one, of course. We have the reference implementation of rasdaman, but you can build your own, obviously. Thank you. Just roughly speaking, if I had a terabyte of data, how long would it take me to ingest it into rasdaman roughly, and how big would it be once it ended up in the database? Roughly. Well, it depends on which tiling scheme you are using for ingesting the data, and on which backend engine you are employing. So roughly, we can get some feedback from the service providers themselves on the data ingesting part. So I don't know exactly. But we have one service provider in the audience, so maybe he can answer. It depends. It increases the size by about 10%.
Big Data in the Earth sciences, the Tera- to Exabyte archives, mostly are made up from coverage data whereby the term "coverage", according to ISO and OGC, is defined as the digital representation of some space-time varying phenomenon. Common examples include 1-D sensor timeseries, 2-D remote sensing imagery, 3D x/y/t image timeseries and x/y/z geology data, and 4-D x/y/z/t atmosphere and ocean data. Analytics on such data requires on-demand processing of sometimes significant complexity, such as getting the Fourier transform of satellite images. As network bandwidth limits prohibit transfer of such Big Data it is indispensable to devise protocols allowing clients to task flexible and fast processing on the server. The EarthServer initiative, funded by EU FP7 eInfrastructures, unites 11 partners from computer and earth sciences to establish Big Earth Data Analytics. One key ingredient is flexibility for users to ask what they want, not impeded and complicated by system internals. The EarthServer answer to this is to use high-level query languages; these have proven tremendously successful on tabular and XML data, and we extend them with a central geo data structure, multi-dimensional arrays. A second key ingredient is scalability. Without any doubt, scalability ultimately can only be achieved through parallelization. In the past, parallelizing code has been done at compile time and usually with manual intervention. The EarthServer approach is to perform a semantic-based dynamic distribution of queries fragments based on networks optimization and further criteria. The EarthServer platform is comprised by rasdaman, an Array DBMS enabling efficient storage and retrieval of any-size, any-type multi-dimensional raster data. In the project, rasdaman is being extended with several functionality and scalability features, including: support for irregular grids and general meshes; in-situ retrieval (evaluation of database queries on existing archive structures, avoiding data import and, hence, duplication); the aforementioned distributed query processing. Additionally, Web clients for multi-dimensional data visualization are being established. Client/server interfaces are strictly based on OGC and W3C standards, in particular the Web Coverage Processing Service (WCPS) which defines a high-level raster query language. We present the EarthServer project with its vision and approaches, relate it to the current state of standardization, and demonstrate it by way of large-scale data centers and their services using rasdaman.
10.5446/15587 (DOI)
I'm from a small Swiss company. We work on contract-based open source development. We have a couple of products. And what I'm going to show here is a cooperation with NOAA from the USA on publishing of pre-rendered maps. So it's a new open source tile server project. And let's start. As you probably know, whenever you look at a pannable map, in fact you are seeing tiles, small PNG/JPEG files with a predefined geographic location. And these tiles are normally rendered dynamically from the data on a server. So you install special software, MapServer, GeoServer, and this server is accessing GeoTIFFs or the original raw data and providing them to the web, to the client, through this WMS server. The more modern, faster approach is pre-rendering or seeding of these data. So the tiles, the small images, are in fact available faster because you already store them somewhere. With the seeding approach, in fact, you run a piece of software which is grabbing these tiles in advance and storing them somewhere on the way between the client and the server. So it's faster than just rendering them dynamically. With the pre-rendered approach, which is now standardized as WMTS, you can also just get the raw data on your desktop computer or on your server at the time of the update, process all these data and create the tileset in advance. And this is the approach which we are taking and which we are speaking about during this presentation. For this pre-rendering approach, traditionally, already for four years, there is the GDAL2Tiles utility, which was my student project for Google. And you can also pre-render vector data with TileMill, as you probably know. And there are other approaches. We will show MapTiler, which is kind of like an advanced version of GDAL2Tiles. There are several advantages and disadvantages of the pre-rendering approach. You can read them yourself on the slide. The biggest advantage is usually: if you have high quality hosting with agreements, or if you host your website in the cloud, you can simply put the map data in the same location as your web content, like HTML files and PDFs or whatever else appears on the website, and still have the zoomable map interaction through JavaScript libraries like OpenLayers without a need to install any other software. And this can be quite a good thing, because if you install a software, you have to maintain it. It can stop running. You have to upgrade the versions. If you simply have the data, it's much more reliable. Also, without the dynamic aspect, it's then even faster than if there is a software in between for the serving, because web servers are normally optimized for a really, really fast delivery of the content which is on the disk. And this is the advantage of using this pre-rendering approach for sharing the data. Of course, it does not fit all the data which you have. If you have data which are regularly updated and you need to change what is visible to the people, then this approach is probably not the best one, because you would have to do the processing and handle a lot of data transfers on the server. On the other side, if you have something like a base map which you create once, and then it's on the server for years without any change, and you have a large number of visitors accessing the maps, then this pre-rendering approach is really the way to go, especially if you don't have a huge amount of data. Then you can even use really cheap hostings and you save yourself a lot of headaches with the hosting.
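A quick back-of-the-envelope calculation helps decide whether pre-rendering is practical for a given dataset: a full global pyramid grows by a factor of four per zoom level, so storage is dominated by the deepest level you render. The average tile size below is just an illustrative guess.

```python
def tile_count(max_zoom: int) -> int:
    # A complete global pyramid has 4**z tiles at zoom level z.
    return sum(4 ** z for z in range(max_zoom + 1))

AVG_TILE_KB = 15  # assumed average PNG/JPEG tile size, purely illustrative

for z in (8, 12, 16):
    n = tile_count(z)
    print(f"zoom 0-{z}: {n:,} tiles, roughly {n * AVG_TILE_KB / 1024**2:.1f} GB")
```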
It's also recently launched quite a lot, the pre-rendered tiles with the mobile applications where the mobile developers are using it also for offline, caching, offline storing on the mobile devices. So you have it with you somewhere on the way as well as hosting the tiles on the server. The most straightforward way how to see the tiles is, in fact, the directory structure where you have the pyramid with the zoom levels and x and y coordinate for the storing of the tiles. So if you run GDAL2 tiles or MapTailer, you typically get this directory structure with z, x and y, and the tile. And this is directly mapping what you need to share for the open layers or other JavaScript viewers to the world. So if you simply open Google Maps API, it is able to load tiles and display these. The alternative to the folder storage is some kind of package wrapper where the tiles are in fact the same tiles more or less are just put into SQLite or people are using GOTive as well as the storage for the tiles because it's also just a container, something where you can put those pre-rendered images internally. There are, again, advantages, disadvantages. With the package, you already need some kind of software which is serving it. On the other side, it's really easier to transfer the data from one server to another because you don't have to handle potentially millions of small files. And that's good. So traditionally, the access on the server is done just directly through the ZX and Y. And there are also, if you need more, if you write a viewer for Z and X, Y, you are kind of missing the metadata. What is the bounding box? Where should I zoom? What are the available zoom levels? And this is being solved by ThileJSON recently quite well in the web world. Of course, there are online or there are standards which is solving this problem pretty well. But ThileJSON is more designed for the JavaScript viewers and other viewers are using it as well. These pre-rendered tiles can be hosted practically anywhere. So you just can copy them on any cloud service. On Amazon S3, you can just put them on whatever server you have in the company. It can be Windows Server. Just copy the files there. And if the directory is exposed to the web, suddenly your maps are available as a resource on the website. And it's really fast. You can even host in free hostings, free PHP hostings, or Google Drive dropbox. It's such a crazy thing. If you are a student and you don't have any budget at all, it's even possible to have the maps online with these three services. If you are serious, of course, you will set up something like NGNIC server with optimized web server for serving the tiles really fast or use some kind of cloud service or CDN service for distribution of the maps. And here we come to the WMTS. So the XYZ is nice, but we have a standard now. And the question is how to expose the maps which are pre-rendered into the tiles through this OpenGIS consortium standard. And we work with other people on the open source project which is doing exactly this. We were kind of pushed to use PHP for not that it would be the best language on the world, not at all. But it's the easiest one to install. And the idea of the project is that it's really easy to use. You simply put the tiles into the folder on a web hosting server. You put next to it the PHP files which are in the project and simple HDXS file on Apache. 
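The z/x/y folder layout described above follows the usual spherical-Mercator "slippy map" addressing, so a tile path can be computed directly from a coordinate and zoom level. A small sketch (note that TMS-style output, e.g. from GDAL2Tiles, flips the y index):

```python
import math

def lonlat_to_tile(lon_deg: float, lat_deg: float, zoom: int):
    """XYZ (top-origin) tile indices in the spherical-Mercator scheme."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

z = 12
x, y = lonlat_to_tile(-1.15, 52.95, z)      # roughly Nottingham
y_tms = 2 ** z - 1 - y                      # TMS numbering counts rows from the south
print(f"XYZ path: tiles/{z}/{x}/{y}.png")
print(f"TMS path: tiles/{z}/{x}/{y_tms}.png")
```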
And once these tiles are in the same folder as these PHP files from the project, you get an online service which is officially following the WMTS specification. This means you can open the maps in traditional GIS desktop clients like QGIS, ArcGIS Desktop and others, but also reuse the tiles in modern viewers and mobile devices. So the first reason to work on the tile server was the easy usage for normal people who are beginners, or who don't want to install, host and maintain special server software, or who do not have a server where they can install it. The other cool stuff which we did is reverse engineering of the ESRI WMTS implementation. According to my knowledge, this is the first open source project which is really exposing WMTS in a way that ESRI products can open it. So we had to study the way ESRI implemented the WMTS standard. They are always finding a bit different ways than an exactly straightforward implementation of the standard. So we were following their way in providing the WMTS so that it's compatible with their clients. So that's some interesting and cool stuff in this project as well. And then we came up with one more idea on how to implement a tile server. And that's the idea that, in fact, very often if you are implementing a tile server, you need some kind of user interface. And you have to implement it in this language, that language, if you have maps hosted on various technologies. And we, in fact, came up with the idea of having just an exposure of a list of metadata about the tile grids, and doing all the user interface in JavaScript. So the JavaScript is loading this small list of information about the maps, about the tile grids, and building the whole interface, which looks like this in the end, for previewing the tiles and giving people a way to use the tiles and the map layers. There are now alternative implementations being developed for the tile server. So the same ideas are now being ported from the PHP language to other languages. There are students in Switzerland working on a master thesis on rewriting the tile server in Python. We are shaping a bit the MapCache project to have the C++ implementation, which gives you very, very fast hosting for the MBTiles as well. And we have an Amazon S3 implementation, so no server software at all, just pushing the tiles to the cloud, but you still get the user interface and the access as it is presented here. In fact, the TileServer-PHP has been used in production by NOAA for the Hurricane Sandy response imagery. So they replaced an ArcIMS server with these pre-rendered tiles and simple PHP. And the reason for doing this was that during these events, there were too many people accessing the server, and the server crashed simply because there were too many clients accessing the same data sources. So they replaced the service for providing the maps, and simply told all the people who were keen to use the imagery: just after they were flying with the airplanes, they put the processed tiles on the server and provided all the others access through WMTS. Most of the people never recognized that it's not ArcIMS anymore, because it was just a different URL. They were loading the data into their GIS clients and doing further analysis, overlaying data, visualizations and so on. So the project has in fact been used in production already last year. If we go through the process of how the tile server can be used, how easy it is, in fact, to put the maps in this way on a server, the first step is to render the tiles.
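For clients that speak the standard rather than plain z/x/y URLs, a WMTS KVP GetTile request looks roughly like the sketch below. The endpoint, layer name and tile indices are placeholders; many servers, TileServer-PHP included, also offer RESTful tile URLs.

```python
import requests

BASE = "https://example.org/tileserver/wmts"   # hypothetical endpoint

params = {
    "SERVICE": "WMTS",
    "REQUEST": "GetTile",
    "VERSION": "1.0.0",
    "LAYER": "osopendata",                     # illustrative layer name
    "STYLE": "default",
    "TILEMATRIXSET": "GoogleMapsCompatible",
    "TILEMATRIX": "12",
    "TILEROW": "1345",
    "TILECOL": "2036",
    "FORMAT": "image/png",
}
tile = requests.get(BASE, params=params)
tile.raise_for_status()
with open("tile.png", "wb") as f:
    f.write(tile.content)
```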
The easiest is probably to use the map tile, which is a desk to the publication. You just choose in the first step what kind of tiles you want to render. You drop in the geodata. The geotif is supported. Everything what GDAL can read is supported. So Mr.Cid, ECW, Geotifs, a lot of data formats. And you can just specify the coordinate system and georeference is loaded from the file if it's available inside of the file. If not, you can specify it manually through the user interface, as it's shown here. You get a preview where the image is located. And then you can choose if you want to render into folder or into a package. There are advanced options as well for setting zoom levels and so on. Then you wait a bit. And it's processing the data. It's able to combine also multiple files into seamless layers and has another advantage is, and once you are done, you end up with these crown tiles already prepared on your disk. And you can preview these in the application. So if you render into a folder, you end up with something like this, already prepared viewer. If you just open the Google Maps viewer, leaflet viewer, you are getting the zoomable application. You can start to enrich it with other functionality, put markers on top of it, or use it as a single layer for multiple layers and so on. But the raw data are prepared. The second step is the upload of these data into a hosting anywhere, together with the Tiles server PHP. So you go to the project website, just download the zip file, unpack it, and into the same folder where you unpack the file. You just put the folder with the tiles. And once you visit the website, it gives you this user interface already, which is completely JavaScript based with the list of the tiles. And by clicking on one of the maps, you get the preview in Google Maps and ready to use viewer. And for, in fact, multiple viewers, by clicking on the show source, you can just copy and paste it and host it somewhere else as HTML, and directly loading the layers into the viewer. You can use the maps on the desktop as well. So we have step-by-step guides for various desktop clients, which are directly supported. So there is even a way, I mean, showing step-by-step where you should click and what you should type so that the maps appear inside of the GIS software. All of this without any installation of the software. I mean, it's three PHP files on a hosting and the tiles uploaded. And the performance is really, really good. I think it's comparable to map cache, and it's significantly faster than the dynamic approaches for doing this. Because in fact, the tiles are served as pre-rendered files by the web server. They are not served through PHP. So all the headers are correct. HTTP tells you this tile has modification dates back when you were rendering it. And in case, simply, it runs fast because all the HTTP protocol handling is done by the Apache or other web server. And this is highly optimized already. And all what the PHP stuff is doing is providing the metadata and XML files, which are loaded once in the beginning by the clients. And then the raw image data are accessed through the web server directly. If you are interested in the project, this is the main URL. If you are interested in the MapTiler, you can get it at maptiler.com. There is also, if you compare the GDAL2 tiles and MapTiler, we worked quite hard on improving the performance. And the whole utility has been rewritten into CC++. And it's also available for using in processing through batches. 
So it is a command line application next to the user interface, which you saw. So it's both easy to use, but also usable in automated processes. It's able to render very large data sets. Let's have a look at some examples. We rendered, for example, the Ordnance Survey OpenData complete data set. So we have the spherical Mercator tiles pre-rendered. It's about 15 gigabytes of data, which you simply put on a server, and you have a full replacement of the OpenSpace API, which you control, and you are not restricted by anything. All you need is simple hosting with 15 gigabytes of space on the server. And you can zoom in and use the Google Maps API, Leaflet, the mobile Android SDK for the maps, either the Google one or the open source one, or iPhone access to the data. And this can be used as a background map for additional resources. It's in fact used by police in the UK in some of their internal applications. Another application is on the old maps. Together with the National Library of Scotland in the UK, we made a base layer based on seamless historical maps for the whole United Kingdom. And there are several clients, several iPhone native mobile applications and web applications, research projects, who are using this instead of the Google Maps base map and putting their own historical data on top of this base map. You see how fast the maps are loading. Other commercial applications in the UK, just to show a few: Google is using the same approach in our software, or was using it on the aerial photos of the fires in Australia. Researchers are using it, for example, for the magnetic visualizations of the Earth. Commercial companies are showing various visualizations; these are the thermal imagery for the roofs, how well they are insulated or not, and the old maps. So these are the base links. If you want to use the projects, you are welcome. We also seek contributors, so it's on GitHub. You can fork the tile server and provide patches, report bugs, and hopefully you find it useful. Thank you for your attention.
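For a fully scripted workflow, the open-source GDAL2Tiles utility mentioned in the talk can be driven from a batch job in much the same spirit. The paths, zoom range and options below are placeholders, so check `gdal2tiles.py --help` on your GDAL version before relying on them.

```python
import subprocess

# Render a spherical-Mercator tile pyramid plus a simple web viewer from a GeoTIFF.
subprocess.run(
    [
        "gdal2tiles.py",
        "-p", "mercator",   # spherical-Mercator tiling profile
        "-z", "0-12",       # zoom levels to pre-render
        "-w", "leaflet",    # also write a ready-made HTML viewer
        "input.tif",        # source raster (GeoTIFF, ECW, MrSID, ...)
        "tiles/",           # output folder with the z/x/y pyramid
    ],
    check=True,
)
```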
OpenGIS Web Map Tiling Service (WMTS) is becoming the standard used for distributing raster maps to the web and mobile applications, cell-phones, tablets as well as desktop software. Practically all popular desktop GIS products now support this standard as well, including ESRI ArcGIS for Desktop, open-source Quantum GIS (qgis) and uDig, etc. The TileServer, a new open-source software project, is going to be demonstrated. It is able to serve maps from an ordinary web-hosting and provide an efficient OGC WMTS compliant map tile service for maps pre-rendered with MapTiler, MapTiler Cluster, GDAL2Tiles, TileMill or available in MBTiles format. The presentation will demonstrate compatibility with ArcGIS client and other desktop GIS software, with popular web APIs (such as Google Maps, MapBox, OpenLayers, Leaflet) and with mobile SDKs. We will show a complete workflow from a GeoTIFF file (Ordnance Survey OpenData) with custom spatial reference coordinate system (OSGB / EPSG:27700) to the online service (OGC WMTS) provided from an ordinary web-hosting. The software has been originally developed by Klokan Technologies GmbH (Switzerland) in cooperation with NOAA (The National Oceanic and Atmospheric Administration, USA) and it has been successfully used to expose detailed aerial photos during disaster relief actions, for example on the crisis response for Hurricane Sandy and Hurricane Isaac in 2012. The software was able to handle large demand from an ordinary in-house web server without any issues. The geodata were displayed in a web application for general public and provided to GIS clients for professional use - thanks to compatibility with ArcIMS. It can be easily used for serving base maps, aerial photos or any other raster geodata. It very easy to apply - just copy the project files to a PHP-enabled directory along with your map data containing metadata.json file. The online service can be easily protected with password or burned-in watermarks made during the geodata rendering. Tiles are served directly by Apache web server with mod_rewrite rules as static files and therefore are very fast and with correct HTTP caching headers. The web interface and XML metadata are delivered via PHP, because it allows deployment on large number of existing web servers including variety of free web hosting providers. There is no need to install any additional software on the webserver. The mapping data can be easily served in the standardized form from in-house web servers, or from practically any standard web-hosting provider (the cheap unlimited tariffs are applicable too), and from a private cloud. The same principle can be applied on an external content distribution network (Amazon S3 / CloudFront) to serve the geodata with higher speed and reliability by automatically caching it geographically closer to your online visitors, while still paying only a few cents per transferred gigabyte.
10.5446/15586 (DOI)
Good morning all together and welcome to my presentation, Ties and more, Degree Freshly Implements, OTC WMTS. My name is Sebastian, and I'm working for Latlon, as said before. Yeah, and let's start. So first of all, some words on Latlon, who we are. Yeah, we are a software development and consulting company based in Germany in Bonn since the year 2000. Yeah, we are linking standard based geospatial applications with a professional open source technology and we are from startup, an active member of the Open Geospatial Consortium. And since 2010, we are one of the principal members of the Open Geospatial Consortium. Yeah, and we are actively participating the first for G community. So degrees, for example, one of the OSGU projects. Yeah, a few words on me. I'm working at Latlon as a consultant for spatial data infrastructures and OTC services. And I'm the technical committee representative at the OTC for Latlon. So therefore, I will be leaving early today to the OTC meeting in Frascati, which starts on Monday. So the degree project, now known as degree initiative since the last community space in 2012. Who of you would know degree? Please raise your hand. Okay, yeah, degree provides state of the art geospatial software, implements standard based software for sustainable and intrapable solutions. So different or several SDI implementations in Germany. The degree has an LGPL license and since 2010 degree is an incubated OSGO project. So I have a timeline here in the year 2000. Degree was founded somehow at the University of Bonn as the project XSE. And there was a follow up in renaming it into degree. Then we had degree two, which was the biggest implementation of OTC standards in the open source world. I think it was 2009 when we started the OSGO incubation, which we graduated in 2010. And also in 2010, we had degree three and now we are at version 3.3 with degree. Degree and the OTC as said before, degree is one of the most comprehensive implementation of OTC standards in the open source world. So degree is the reference implementation for all of these OTC standards. So web coverage service 1.0, web map service 1.1, web map service 1.3, web feature service 1.1, web feature service 2.0, web map tile service on which my presentation is, and geography markup language 3.2.1. Currently, there is a resertification process of all these reference implementations and compliance certificates at the OTC with the new degree version 3.3. And I think at the end of the year, we will have a resertificated setup of degree with all of these standards. Yeah, as you have seen, maybe the template of my presentation switched to the OSGO and degree one from the degree and latron one, because now I'm switching more to the degree project side. And this means, no, we have more standards first. We're also implementing web processing service, the catalog service, several versions of GML filter encoding SLE and SE. SLE is style data descriptor and SE is symbology encoding. So now I'm switching. Web map tile service. I think you know what a web map tile service is in general, who knows it? Okay, this is some. Yeah, so I will first introduce the web map tile service in general, this is the standard. And that's the definition of the OTC specification, a WMTS enabled server application can serve map tiles of spatially referenced data using tile images with predefined content, extent and resolution. So as you may know, or as you know, maybe no, there are several implementations of tiling services. 
And so why do we use, why do we need an OGC implementation? I think that's easy, because the OGC is the standardization organization within the geospatial world and we need a standard for tiling. So, some details. What else is a WMTS? It provides the operation GetCapabilities, with which you can get the service's metadata. There's the most important operation, GetTile, with which you can get your tile data. This operation is based on tile matrix sets, which I will explain a bit more later. And there's the optional operation GetFeatureInfo defined in this standard. GetFeatureInfo is a very difficult operation because there's no real standardization for the output format. And so it's very difficult, even in WMS implementations, to solve interoperability issues. Yeah, I said GetTile is based on tile matrix sets. Here we have an example of a GetTile request. So I don't know if you can see that here's something in bold. This is the definition for the request of a tile with a specific tile matrix set. In that case, it's the tile matrix set InspireCRS84Quad, the tile matrix is the '0' one, and then the tile row and tile column. And of course you have a layer. So with those parameters, you are able to provide maps through this standard. But additionally, you need an understanding of tile matrix sets. This is a tile pyramid taken from the WMTS standard. So on the first level, you have just one tile for the whole bounding box, for example. Then you have four, 16 and so on. And every tile layer is its own tile matrix. So you have a set of tile matrices and that's a tile matrix set. And of course you have rows and columns. And that's all you need to get your map from a WMTS, because a tile matrix set every time has a spatial reference given by a specific coordinate reference system. Yeah, but what about deegree WMTS? Which is, yeah, what this presentation is mainly about. deegree WMTS has some special highlights, I would say. First of all, I'll start with the spatial reference support of deegree WMTS. There's native support for several tile matrix sets such as InspireCRS84Quad, GoogleCRS84Quad, GlobalCRS84Scale and several more. And you are able to add your own tile matrix set to your deegree server or service. And as with the whole deegree web services package, the configuration is XML based. So to define a new tile matrix set, you just have to define a tile matrix set like this. This is taken from the capabilities, and the configuration is oriented on the capabilities document, I don't know if you can see this or read this. So we have the identifier InspireCRS84Quad. The supported CRS is CRS:84. So the matrix set knows about its spatial reference. And to locate the tiles within this tile matrix set, we have the definition of a tile matrix with a scale denominator for the whole tile matrix, the top left corner, which gives the spatial reference to the tile matrix set based on the supported CRS, tile width, tile height, and matrix width and matrix height. Yeah, and these are the parameters which you need to use in a WMTS GetTile request. So, some words on data sources. deegree WMTS has support for GeoTIFF, so-called tile stores. We call them tile stores in deegree. And with GeoTIFF, you have the capability to use the tile matrix set of the GeoTIFF itself. So you could serve a WMTS which is only based on this one GeoTIFF. But GeoTIFF support brings BigTIFF support. So you are able to serve GeoTIFFs with a size of more than 4 GB. A further data source is tile cache.
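The scale denominators in such a tile matrix definition map to ground resolutions through the standardized 0.28 mm rendering pixel of WMTS. For a tile matrix set in a projected CRS measured in metres, the relation is simple enough to sketch:

```python
PIXEL_SIZE_M = 0.00028   # WMTS standardized rendering pixel size (0.28 mm)

def tile_matrix_geometry(scale_denominator: float, tile_width_px: int = 256):
    resolution = scale_denominator * PIXEL_SIZE_M   # ground units per pixel
    tile_span = resolution * tile_width_px          # ground units per tile
    return resolution, tile_span

# Example value: the top-level scale denominator of the common
# GoogleMapsCompatible / web-Mercator tile matrix set.
res, span = tile_matrix_geometry(559082264.028717)
print(f"{res:.2f} m per pixel, {span / 1000:.0f} km per 256-pixel tile")
```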
You all know TileCache, with which you are able to provide the TileCache file-based tile sets, tiling sets. Yeah, and my personal favourite data sources are remote WMS in the versions 1.1.1 and 1.3, and remote WMTS 1.0. This means you can cascade remote WMS services through deegree and provide them as WMTS. There is also a caching mechanism for this, so that tiles are cached within an Ehcache-based cache. Yeah, here's an example picture of the output tile, just to give you... And the third of the sections of highlights is the GetFeatureInfo operation. As mentioned before, GetFeatureInfo is not well specified by the OGC standards, and this makes it very difficult to handle, even in WMS implementations. And the deegree WMTS is able to cascade GetFeatureInfo output from remote WMS and from remote WMTS. So for example, this looks like... this is an HTML output of a GetFeatureInfo request. And using deegree WMTS for this has the advantage that deegree WMS has wide-ranging support for several GetFeatureInfo implementations. For example, deegree can understand ESRI GetFeatureInfo output, UMN MapServer GetFeatureInfo output, myWMS GetFeatureInfo output, and so on. And so you can cascade all these and put it all together in one WMTS and serve it as a tile service. So yeah, now I'm at the outlook. My presentation was not that long, but I hope it was contentful. So what's hot? There's one thing missing for the WMTS and that's support for GeoWebCache based data sources. Then, currently we are implementing a GDAL tile store, and with that you will be able to serve tile sources based on every GDAL output format which supports tiling. Then, for the deegree project, there's one point which is very hot currently and that's security. So we are working hard on the deegree security component within deegree 3. There was one in deegree 2, but now we are implementing a fresh new one within deegree 3. And what's also hot is the upcoming deegree release 3.4, which will be out within the next months and which brings, for example, a resource-based workspace concept, so that you can reload every configured component or data source. For example, you can just say, okay, reload this data source, which is not possible currently. And so on. Yeah, and what's also hot and nice to have on the outlook slide is the FOSSGIS hacking event at the end of November which will take place at the Linuxhotel in Essen in Germany. And we will have many participants from the deegree initiative, and so we will hopefully reach many goals we have for future deegree development. So here are some links so you can follow us on Twitter, go to the wiki and the general deegree website. Thank you for your attention. Yeah. That's it. Thank you, Sebastian, for your presentation, for the introduction of this, I think, important standard and for the implementation. Are there more questions from the audience? I have one. This is about tiling. The numbering schemes of tiles are different, as far as I know, between Google, Bing and OpenLayers, perhaps. Is WMTS yet another concept of numbering scheme, or is this compatible? It's maybe its own scheme. I'm not quite sure. What is needed on the client side? You have to support the WMTS protocol. Does deegree comprise its own client?
Not for WMTS, but it works with OpenLayers, for example. OpenLayers supports it? I forgot one important point: this WMTS is used within the European INSPIRE geoportal from the Commission. I'm not sure. Thank you.
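As a footnote to the GetFeatureInfo discussion above, the WMTS KVP form of that request adds pixel coordinates inside a tile to the usual GetTile parameters. A sketch with invented endpoint and layer values:

```python
import requests

BASE = "https://example.org/deegree/services/wmts"   # placeholder endpoint

params = {
    "SERVICE": "WMTS",
    "REQUEST": "GetFeatureInfo",
    "VERSION": "1.0.0",
    "LAYER": "cascaded_layer",           # illustrative layer name
    "STYLE": "default",
    "FORMAT": "image/png",
    "TILEMATRIXSET": "InspireCRS84Quad",
    "TILEMATRIX": "8",
    "TILEROW": "55",
    "TILECOL": "132",
    "I": "64",                           # pixel column within the tile
    "J": "101",                          # pixel row within the tile
    "INFOFORMAT": "text/html",           # whatever the cascaded WMS can deliver
}
info = requests.get(BASE, params=params)
info.raise_for_status()
print(info.text)
```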
In 2013, a new service type joined the deegree family - the deegree Web Map Tile Service. This deegree service implements the OGC WMTS 1.0.0 specification and is going to be the OGC reference implementation for this specification. Both, the OGC WMTS test suite and deegree's candidate reference implementation have been developed within the OGC OWS-9 initiative. The intention for implementing WMTS was that deegree had no clear strategy to handle big raster data. As a result, one of the advantages of deegree WMTS is the performant handling of big raster data - such as aerial images - and providing it through a standard-compliant interface. Additionally there is advanced support for using other web services based on OGC WMS and WMTS such as GeoServer, GeoWebCache and Mapserver as datasource for deegree's tiling API, which is the underlying data access layer of the WMTS. As a key feature deegree is capable of proxying FeatureInfo output from those remote services. The presentation will give an overview about deegree WMTS and all its capabilities, especially regarding the interfaces with other OSGeo components.
10.5446/15585 (DOI)
Well actually we had a poll throughout Europe and the people was asked which is the best place you want to be beside your hometown. It was that's going to be set. Okay so we were the first to issue a low on open source, promoting open source. I'm not particularly in favor of those promoting open source because usually there is something reasonable and nobody really applies but it means that we have kind of a feeling towards open source. More recently we have Italian low, state low where every manager has to consider the open source alternative before others and has to just justify his expenses on proprietary software. Of course it's paper but still it could theoretically have financial consequences for the top manager of the administration so it's something that pushes towards open source. In Italy it's the regional administration who holds most of the data. The state is more of a collector and the lower level don't do much but most of the things are collected at the regional level. So it's a fairly big region for Italian standard at least. They got lots of data, well plus 25 plus terabytes of data especially major but also vector stuff. They migrated all their topographical map to proper database so they have several databases of several hundreds of gigabytes so it's a serious stuff somehow. They act as an inspired node. In Italy the decision was to keep an inspired central level in the ministry of environments and to have the region as local nodes and then they decided at the lower level how they want to organize. So they are obliged to publish WMS, WFS, WCSW, etc. You can have a look to what is published at this address if you want. There are a number of people who's working on the JS in the region. It's more than 400 people. They have central offices with a core team of the well, JS specialists who keeps the infrastructure alive but they also have a number of local offices that deal with agriculture, forestry, engineering, bridges, roads and all sort of stuff. So it's a pretty diverse environment. And the mission of the Redone Tuscana, as all the region, is to produce first and maintain all the geographic information over its territory and it's a mandate of facilitating the access both to information and to the use of this information. So for them having software is usable for everybody it's a plus. It's one of the important things for them. And in Europe we have this ARUS convention that requires states region to publish their environmental data for public screening just like in Japan. Okay, they came from the usual stack. So Oracle, Oracle Special, ArcView. They still had ArcView around since very recently. ArcJS, ArcMES for publishing on the web. So more or less what we are used to see, what we were used to see in the old days. In 2009 they decided formally to get rid of this and to go for free software. And so the idea was to migrate the whole structure, both the server and the client to geographical free software. In the meantime they also are migrating from another department. They are migrating all their desktop to free software. That's much more complex. It's a few thousand people working. So you can't just go and change everything and put an Ubuntu machine on it. But they already migrated first to Firefox and then LibreOffice and slowly, slowly they will eventually arrive at. But the fortune is that geographical entities, it's a bit separated so they can go at their own pace. 
It was decided to work with the communities, which is that's one of the reasons why I like to do this talk because this was a non-obvious decision. So they understood early that that was a key factor for the success of the operation. And they want to contribute back. I see very often people that take and less often people that gaze. Of course it's not finished. We're still in the process of doing it but we are quite advanced. So I mean we passed the point of no return so I think I'm entitled to talk about that. So it was very, very clear from the beginning that all people that will work on this project must be committers of the project they are used. That was I think a clever thing. Well it was very good for me being a committed. But I think it was good because I have seen other region in Italy or other places that employ the usual IT firms or societies that employ them to just let them know that it's okay you stop Oracle and you do also positive yes. And they can do it of course they hire people and but it's not the same thing. Okay so they understood clearly that if you are inside a project you were better with open source than if you just try to. Okay they say it from the beginning in the formal bit they put that all the codes written must go in the central repo. So the one who were not entitled to do so or couldn't guarantee this they couldn't access to the bit. Okay so that was an important choice. Of course there are a few exceptions for tools that are really useful only for them but it's really really very exceptional a few scripts things like that. So they didn't decide to go free to spend less. That was very clear that they didn't want to save money. Of course it's nice if it's a side effect but it was not an aim of the project. It not even performances were important. Okay so they have to have their minimum level of service but they didn't go open source because they want faster map server or whatever. Okay the idea was to spend better than money. Okay we all know that most of the well maybe all the big GIS company are American so we want to keep as much as money as possible in Italy. We want to have tools that are usable for everybody and not just those who can buy it. So we want to make our university students have their tools for working with the proper infrastructure and so on. So make both the data and the software that allows the use of this data available for everybody. That was the basic basic idea. So a choice which is not technical which is a political one culture and political. So it's deeply rooted while in a way in our yeah political culture in in Tuscany especially. Okay of course the other pillar of this this building is that they have strong skills inside the region that is not always the case. Very often there's our empty shells you know with people that is able to buy things from outside but not really to use it. They had it so they could do a much more confident choice. Okay so phase one was training. They understood early that they had to train people to understand these new tools. Then developing what is missing. Of course we have all this nice software but we all know that it's not complete. We need here and there things to improve. And then support. Having a nice software deployed but not supported by anybody. They don't have not having a phone number to call or an email to write. It's well a recipe for disaster of course. So that was the idea. We start with training. We did a lot of training. We as a finale I must say. 
Yeah, almost everybody who uses GIS more or less went to the courses. Of course most of them are general courses, cartography mainly. A few were on analysis, because of course fewer people are doing analysis, probably. They had these acceptance sheets to be filled in, compulsorily, and the scores were very high. And they were very high because they were acquiring new skills, even those who say, no, but I still want to use my Arc-something. But they say, but I learned something about WMS, about WFS — things that they were not used to. Okay, and I think this is not just because I'm doing training, but I think it's because open source software is better for training. You learn more of the function and less of the buttons. Okay, so I think this has a relation, a strong relation, with the type of software we're using. Okay, I don't want to go into detail about what we developed in these years, but GML support in good old GDAL/OGR was really poor. We improved it. We had these types. We had the SpatiaLite driver. Who of you knows SpatiaLite? Wow. That developer would be happy. So the SpatiaLite driver in good old OGR was improved. It was quite poor at that time. We added functions to GEOS, GDAL or whatever, like these few — you see in the commits all the references to Regione Toscana. We did bug fixing for this. Working with this size of data, I mean, you find bugs a bit more easily. So we developed this nice, I would say, library which is, in a way, a split off GEOS. It has special functions that don't fit into GEOS and this can be used on its own now. There were quite a few functions in PostGIS; they are now a proper library that can be used by SpatiaLite, by QGIS, etc. etc. And lots of things on the PostGIS side. Sandro Santilli worked for us for a while. Lots of topology. What you find about topology in PostGIS now is largely due to Regione Toscana. We made this make_valid, which is quite useful, if any of you have already used it. Okay, thank you. And now this is in liblwgeom, so it can be used also from QGIS. We have a nice plugin for that. Improvements here in the GML. They use heavily, of course, GML and topological GML, so they improved it a lot. We also worked on SpatiaLite. We made a system of reusing lots of functions that were used by PostGIS, so basically you have more or less the same functions in PostGIS and SpatiaLite, which is very good. So you can use them more or less transparently, one or the other. Support for metadata inside SpatiaLite, portable SLD, etc. etc. These last functions were just added. This was not made by us at Faunalia, but by other people. This is a not well-known, I would say, but interesting web mapping framework; the nice thing is, sorry, that it is data-less. So you can set up your thing and you take your data from whatever WMS, WFS is available on the net. We did a lot of work, not surprisingly I think, on QGIS. So lots of things on symbology. They need to publish real maps with the cartography options. So there were a lot of architects and planners that were quite picky about transparency on the borders and the rotation of the symbols and all sorts of things. So we did a lot of things with that. Some of this you probably already used. SLD is very useful. I discovered that people that were not using QGIS started to, because they were web people, you know, they started using it because of this SLD import/export. So you can design your SLD with QGIS; it is possibly the easiest way of doing that now, etc. Then we improved the field calculator, adding a few functions.
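The make_valid work mentioned above (now ST_MakeValid in PostGIS) is easy to exercise from a small maintenance script; the connection settings and table/column names below are placeholders, not the region's actual schema.

```python
import psycopg2

# Repair invalid polygons in place, leaving already-valid geometries untouched.
conn = psycopg2.connect("dbname=toscana user=gis")   # hypothetical connection
with conn, conn.cursor() as cur:
    cur.execute("""
        UPDATE land_parcels
           SET geom = ST_MakeValid(geom)
         WHERE NOT ST_IsValid(geom);
    """)
    print(cur.rowcount, "geometries repaired")
```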
We improved, well, now we developed the recovery of geometries of features from WMS, which is something that not too many people know, but you can get a vector from WMS, and we can do it now from QGIS. Lots of bug fixing. Now you can run your plugins from a custom path, so you can put your QGIS on a disk or on a pen drive, which was not possible before. We developed quite a few plugins. RT SQL Layer: you can run arbitrary queries from QGIS without having any permission on the database, apart from select of course. So you can have your queries stored on your client rather than on the database. You can extract data from, cut your data from, a database. You can have all your data there and cut it at a municipal level, okay, and extract one by one all the vectors that belong to each of the municipalities. We have a MapServer export plugin. It was present in the earlier versions of QGIS, but it has been completely rewritten based on MapScript. So it's much more powerful. Actually, I invite you to test it because we are still in an experimental phase, so any feedback will be good. You can create a vector from DBF tables, to do points or lines, etc. You have the mirror map. I don't know if you have seen it, but it's nice: you can split the canvas of QGIS in two or three or four, etc., and you can keep this in your project, so you reopen it and you can compare WMS or photos, etc. etc. Then we have a very specialized plugin which is used for surveying of pre- and post-earthquake buildings, which is very tuned to the Italian situation, but I think there's a lot of software in it, so it can be useful for you to extract features and reuse them in your plugins. Okay, the lwgeom plugin, as I mentioned already. Then we started with support. We are in the first year. It's mainly of course on desktop; support for servers is more of a consultancy, it's a bit of a different story. Of course email, telephone, on site, etc. We also included a certain amount of hours for bug fixing, so that they can fix things easily without restarting with all the bidding process, etc. The funny thing I discovered is that people were a bit shy of asking, because they were surprised that people could actually answer and reply and find solutions for their problems, because apparently they are not used to having this kind of service for software, which is funny. And of course our main advantage was that we could deliver a fix just like this. The next day you can have your nightly build of QGIS with your bug fix, and nobody can really compete with that in the proprietary world. So our problem is really to go to people and say, well, please ask me questions, because I can reply. Okay, so here you have a list of what's available, more or less things that you probably know. We could put the missing things in an empty slide, because the only one is that we couldn't find a proper replacement for AutoCAD. That's something that I think we should start working seriously on. I'm not fond of CAD, I find it an ugly environment, but still the administration uses it, so it's pretty strange that in all the free world I think CAD is the only area where we really didn't make any big impact. Okay, so what has been concluded: of course the cost was much less, so they had much more resources to improve things, to have more services around what they need. A lot of things they did on their own before. Now they are free to call people to help them, so they basically kept the same budget but could do much more.
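The "recovery of geometries of features from WMS" mentioned at the start of this passage presumably builds on the standard WMS GetFeatureInfo request, which can return GML (and therefore attributes and geometry) when the server supports a GML info format. A hedged sketch with a placeholder endpoint and layer name:

```python
# Illustrative only: the service URL and layer name are placeholders, and not
# every WMS server supports a GML INFO_FORMAT.
import requests

params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetFeatureInfo",
    "LAYERS": "buildings",
    "QUERY_LAYERS": "buildings",
    "STYLES": "",
    "SRS": "EPSG:4326",
    "BBOX": "10.0,43.0,11.0,44.0",
    "WIDTH": 256,
    "HEIGHT": 256,
    "X": 128,  # pixel column that was "clicked"
    "Y": 128,  # pixel row that was "clicked"
    "INFO_FORMAT": "application/vnd.ogc.gml",
}

resp = requests.get("https://example.org/wms", params=params, timeout=30)
resp.raise_for_status()
print(resp.text)  # GML describing the feature(s) under the queried pixel
```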
And the nice thing that they learned is that they could influence development so if they need something in PogJS and QJS of course if something makes sense but they could have it, they could have a different direction of development and also for us as developers it was very good because they provided real and heavy use cases. When you develop your software you think that something is working but when you put it on the on the roads then it's difficult. And so I think it was both sides profited by that. They discover early that some of the things they improved were more they improved by other subjects, other public administration, some Italians, some French, some other places. So that was a pleasure for them. They discovered that with their money they got more than they paid for so that was good. And so they are starting a networking with other administration because of course the needs are more the same and they are sharing it. And what they develop now they know that they have to put some money to maintain it but they are not, everything is not on their shoulder so they can rely on others to keep things alive which was not the case for the custom development they had before. What is problematic is that not everybody is understanding this. Initially we don't have a national strategy about that. In central government they spent a lot of money in strength thing. I mean they don't have the same efficiency. The tools of collaboration top-down are not really working. They have this you know round table and everything but it's not really working. What is working is people to people collaboration. Technical collaboration is working very well and the free GS word in Italy is working very well. We have a mailing list with almost 700 people and very often the same people that are on the top level that discuss serious things on our homemade mailing list which is interesting. Okay there are reasons I think I'm a bit long now but there are reasons why the others are not doing the same. There are good reasons and bad reasons. Of course you are faster if you decide everything on your own. You don't have to discuss with the community. Sometimes the community has strange ideas. Sometimes they don't understand you etc etc. So it looks nicer to go ahead with your fork or whatever but of course that's not gonna work on the long term. Okay you have to think only on your to your platform so if you compile on your windows everything works and you don't have to bother if Saga does a work on Ubuntu or whatever. The other thing is that to do this you need some internal skills. You cannot really rely on somebody else because if you if you buy a package more or less maybe it works but if you if you want to go into this process you really have to understand what's going on. Many many times we were discussing okay should we implement this this way or the other okay do it do it like this and if you don't have the internal skill that's that's gonna be a big bit of a problem or either you trust very much your software provider are are you gonna be a problem. So the more you understand what's really going on the better it's your development. So it needs a lot of work. They did a lot of work internally to support this this choice. 
So if some of you want to do the same thing, or want to go home and convince their administration to do the same, let them know that they should never work in isolation, that's bad; they should share with the community from the beginning. You don't have to have your tool ready and then publish it, okay? So work together with the community, avoid as much as possible solutions that can seem nicer if you develop on your own but then break sooner or later, and make sure that when you hire a developer, he understands all this very well; it's not someone that goes, you know, into his office and types that stuff. Okay, and of course release everything. Okay, thank you.
The Tuscany Regional Administration had a rather usual proprietary GIS infrastructure (ArcIMS, Oracle, ArcGIS). They started migrating to Open Source GIS with an integrated approach, both on the server side (PostGIS, MapServer, GeoNetwork) and on the client side (Quantum GIS, GRASS), also providing training to hundreds of their technicians. What makes this experience particularly interesting is the fact that they worked from the onset in very close contact with the community, requiring that the code developed for them was generalized and pushed to the main source code. This seemed more cumbersome at first, having to coordinate with several other developers, and not having functions closely fit to their specific needs, but the superiority of this approach became quickly evident, as several functions were further improved and maintained by third parties. Among the most notable achievements were much improved topology support in PostGIS, SLD support in QGIS, and much more. We advise other administrations and enterprises to avoid the temptation of working in isolation, and simply using FOSS4G software, maybe tailoring it locally, without contributing back, as this approach is short-lived and less successful in the long term.
10.5446/15583 (DOI)
[The opening minutes of this talk are unintelligible in the transcript.] …we start talking about open data and giving stuff away. So we actually have a commercial team and their job is to create products using the weather data, not just for creating the public task and your public forecasts and the BBC forecasts, but also for the utilities companies, for the road users and highways agency, do I grit the road when it's snowing, those sorts of things, and lots of other services, defence, et cetera. And then we have reusable data. Resolution 40, you may not be aware of, but that's a World Meteorological Organization resolution, which says that we will share a certain amount of our data for free. Because we've been exchanging weather data with other people for quite a long time. So we've got the World Meteorological Organization and it's kind of neutral, so wars can happen and we'll still exchange data. I think it's the scientists at heart. They want to exchange the data no matter what's going on. And more lately, we've had the UK Location Programme. We're working on things like INSPIRE. So we're always working, and obviously there needs to be standards. So we're working on how do we share our data easily with other people and keeping up with the latest standards for sharing that data and using the latest technology. But with all of that, we've ended up, we've needed to go to open data. But we thought, well actually, we don't just want to go to open data and go, here's a CSV, there you go, off you go. We said, well actually, we need to do something a bit better than that. Because we need to do open data, but we don't just want to make it downloadable, we want to make it reusable. We want people that can reuse our data with their open source software and do stuff with it. If we're going to do it, let's do it properly. We also wanted to maintain our position as the source of the data. We didn't want to become an intermediary, so somebody else taking these CSV files and reformatting them and then being the one that everyone went to for the data and us losing our position as being the national Met service.
We also wanted to give ourselves a platform for adding value, because we do have a remit to make money and offset the costs of what we do. So we created DataPoint. To work out what we were going to put on DataPoint, though, we needed to think about what did open data mean for us. This took an awful lot of thinking about from quite a few senior managers within the office to come up with a set of criteria that we would use in order to work out what was open data. So we said, well, it's basically got to be funded by our public weather service customer group. We have a group of people that represent various parts of government and also the public to say what we should, as a Met office, provide to the public, and that's our public task. So it's got to be funded by them. It's got to be in the public interest. So there's no point making data available that actually nobody wants, because that would be a waste of money. It needs to be in an agreed format, so the majority of our data is actually in XML, JSON, those sorts of things, things that we know that the developer community wants to be able to use it in. But the data volumes have got to be reasonable. There's no good putting out huge files of data that take you six hours to download when it's an hourly updated forecast. It really doesn't help anybody. We also have to own the IPR. So we talked about all this data that we're taking in and sharing. Well, a lot of that we don't own the IPR for. So we need to make sure that if we're going to give it to somebody else under an OGL licence, that we actually own that IPR to make it available to others. It's got to be operational, because actually the driver for data point was something that people could use to create commercial services on, operational services make money out of using our data. That's what we really want to see. But if it's not an operational service, then actually you're not going to be able to create your commercial services off the back of it. And it's got to be consistent. We had a lot of complaints probably a couple of years ago when our forecast and the BBC forecast weren't the same, but they were the same data. So we need to make sure that whatever we're putting out there is consistent. So you don't go, well, I've got two data sets that you've provided and actually they're not saying the same thing. And we also needed to think about the cost and the opportunity cost. So we do have to think about what is the impact going to be on the Met Office as a whole if we make this data available for free. And what is the cost of doing it? So we do need to think about how much development effort is required in order to do some of the work that we need to do. But we did eventually, after we locked them all in a room and said you need to make a decision on this, we did actually get them to make a decision. And to a certain extent what we said was, well, all the stuff on our website that you can go and look at and use weather information, that's pretty much our public task, that's the public data. So we'll start with that. So all the stuff in blue is all the data, all the web pages that the public tend to look at. And you can recreate those forecasts using the data from data point. The only bits you can't have are some satellite products in terms of the scope. We do provide satellite images, but not to a full European level, because we said we would keep it over the UK. We have products from the Maritime Coast Guard Agency. 
We consider that their public task not ours, but we are talking to them about actually making their products available through data point. And then finally our authorised voice. So we have our national severe weather warnings. So when you get the really heavy rain, etc. And we need to make sure that we got consistent messaging. So we said we really can't have that as open data. We need to know who's reusing it and what they're doing with it and check that they're doing it properly and that they're updated in a timely fashion. So it's not that you can't have it, you just can't have it as open data because we won't necessarily know who you are. So that's where we started. So we knew then what the data was, and then we went, well, how do you want it? So we've really had user engagement at the core of everything that we've done with data point. So we started off with some requirements capture. We went out and we asked people. We went and saw the Ordnance Survey, because they'd done something similar with their open space. What did you want? We know that they use our data. How do you want to use our data? App development companies, what would you like from something like this? And we put some surveys out. And we used that to define our requirements. We now have a user forum. I use a Google group, and it's self-supporting. So they can ask each other questions. They'll say, how do I use this map layer? And somebody else will answer it. I go in there, I help, but actually we do actually have user-to-user support, so it is working really well. We also have analytics, and I'll show you some of the numbers that we've had from monitoring who's using our data. So we use that to inform where we're going forward. We obviously have the hackathons and those sorts of things, and we're talking to the users. So a bit of a pitch. We're over in the corner of the coffee shop. It'd be lovely to talk to some of you people, because we're going to be doing some developments in the near future. And it'd be great to hear what you'd like to see happening. So please come and have a chat. So where are we now? Having done all of that, this is data point. This is actually the catalogue for data point. It's not actually the end, back end bit where you get the data. But what I've done recently is we've started to make it the place you go if you want Met Office weather data. So this actually sits on our wrestle web service. So there's data that sits behind us on our wrestle web service through an API. But then we have our historic regional climate data. That's on another page within the website. Most of you probably wouldn't be able to find it even if you were looking for it and are highly unlikely to probably go and look for it because you just didn't know it existed. So what we're trying to do is have data point become the focal point for those people that want to access our data. In time, we're looking to put paid for data on there because there's some stuff that isn't part of the public task and that we've created separately. So this will be the place you come to go and get that data. So this is a... I'm going to stand over this side now. So these are some of our products. We have the map players. And you'll see there's no map. That's because you get to put the map that you want. So... and then you can use whichever layers you want to overlay them to see what the weather is. So these are forecast map layers. We then have the XML for daily site-specific forecast, regional extremes and those sorts of things. 
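For a flavour of how the RESTful DataPoint products described above are consumed, here is a minimal sketch. The endpoint path, resource names and the site ID are written from memory and should be checked against the official DataPoint documentation; the API key is a placeholder you obtain when registering:

```python
# Sketch only: verify the exact DataPoint paths and parameters against the docs.
import requests

API_KEY = "your-datapoint-api-key"          # obtained when registering
BASE = "http://datapoint.metoffice.gov.uk/public/data"

# 3-hourly site-specific forecast as JSON; "310069" is a placeholder site ID.
url = f"{BASE}/val/wxfcs/all/json/310069"
resp = requests.get(url, params={"res": "3hourly", "key": API_KEY}, timeout=30)
resp.raise_for_status()

data = resp.json()
print(list(data.keys()))   # inspect the top-level structure that comes back
```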
So we've tried to make it flexible and obviously it's the content that's on our website. We've done no advertising of data point, apart from things like the hackathons and those sorts of things. Really haven't told anybody about it. So when we went live in November 2011, there was a few people who knew about what we were doing. So we had 96 people who were registered for data point. And to be honest, I think half of those actually worked for the Met Office because they'd heard all about it and so they wanted to find out what it was. So they all registered. However many months later, this month, I've just done the stats for this month, we now have 3,500 people registered as users of data point data from all over the world. You'll see that from the next slide. Of which, on average, there are 744 people actively downloading data within an hour. So each hour there's on average about 744 people actively downloading the data using the rest of web services. Which, for me, is absolutely fantastic because you can see even from October last year, there were 42 people. So for me this is fantastic and there's been really no advertising on any sort of great scale. So each month I have a look at what's going on and what the statistics are. So I can see the majority of what people use is our site-specific data. The last couple of months, there's been one individual who I think just counts code because he's completely skewed my results on who's using the observations layers and this is rainfall radar layers. I've looked at the logs and it's one person calling the data every minute and they're only uploaded every 15. So I think that's skewed my results somewhat so you can ignore most of that. The other one that's quite interesting is, is actually by country. I actually have, I think there's around 42 different countries actively downloading data each month from all over the place, from Belarus, the Ukraine, China, Japan and 450,000 downloads from America last month. So, you know, it's a worldwide thing. So that's one of the things that we've needed to be quite careful about with our open data. One of the key drivers was from government to generate UK economic growth but actually we're driving USA economic growth as well. So we do need to be careful about what data we put on there and whose economic growth we're actually encouraging. These are a few examples of who's using our data. I tend to do a bit of Google searching. It's usually a Friday afternoon thing that I do, bitboard, what shall I do? I'll see if I can find out who's using our data because although you have to register to use Datapoint, I need a username, which could be Mickey Mouse, an email address and that's it. So I can contact you but I don't really know who you are and I don't really know why you're using it. So some of these things I've just kind of found out about. There's the Essex weather. They do county level weather forecasts. Forecast IO, which some of you might know about. That's quite a nice interface. Our IT director quite likes forecast IO. This one is one of my favourites because they're actually using Transport for London data as well. He's created himself a little barometer. He's set himself some thresholds of when he'll walk and when he'll take the tube and what the tube lines are like. So he's taken their data as well. The little barometer will tell him whether he should take his bike or whether he should go on the tube. So I quite like that one. That one actually is open source code. 
He tells you how to make your own barometer with your Raspberry Pi and our data in TFL's data. So lots of different uses. The old one or two in here are actually charging for it, which is absolutely brilliant. OK. We did some surveying recently because we wanted to show to our public weather service customer group that what we were doing was actually generating economic growth. So we did a bit of a customer satisfaction survey. For me, unfortunately, only 20% of the users of data point at the survey said that they actually generated revenue, which considering why we're doing it was a little disappointing, but I'm sure that will improve. But actually, what's really interesting is that, well, they actually don't care or they're very satisfied, satisfied. So actually those that are using it to create revenue are happy with what we're doing, which is absolutely brilliant because it means I must have got something right. We also use something called a MEP promoter score. So you have a look at those people who would say I would recommend you to a friend. And of those, the majority of those that are using it to generate revenue would recommend us as a data provider to others of where they should get their data from, which is absolutely brilliant. We also said why have you started using it, what industry you work in, and you can see that actually lots of different industries that they're working in. So all of personal data, but if you look at those that are generating revenue, the app website development goes up to about 60%. So, you know, there's a lot of different industries. We have Drax Power Station using our data. I still haven't worked out what they're doing with it, but they are. Okay, one of the other things that we've been looking at is how do we brand our data, or should we brand our data? At the moment, if you use data point, you're using government published information. So we thought, well, actually, should we be saying that it's copyright or data provider, a met office, because Ordnance Survey, you have to say it's data provided by Ordnance Survey. What was quite interesting is actually those who are trying to generate revenue would actually like us to add the brand on, but the ones that are doing it for personal, less so. So we're still in the balance of whether or not we'll do it on a requestful basis. Again, if you have an opinion of whether you'd like to be able to use our brand and you think if you were going to use the data, then please come and see me and let me know. Okay, so where are we going next? Well, as I said, we've been user engagement. These sorts of events are surveyed that we did. Industry standards. You know, we're trying to work with industry standards and do what we should be doing with our data. We have to meet Inspire as a government organisation. We have to be Inspire compliant, and we actually have to be Inspire compliant by December. So our developer teams are working very hard to make sure that they meet those deadlines. And we've also been keeping records of incidents and known errors and users reporting issues with it. So we've collected all of that information, and from that, so Inspire is about obviously the discovery, the view, and the reuse and download. So we're doing stuff with our map players to turn them into web map tile services rather than just being the PNGs at the moment. So this is our data point release plan. So what we've done is we've said, right, okay, this is what we're going to be doing before Christmas, fingers crossed. 
We're looking at being Inspire compliant, so we need to get our web map tile services. We need to use UTF-8. I actually don't know what that is, but I'm told that's what we have to do. Additional map projections, so you can use different map players because you can only use one at the moment. And then things like self-registration for service notices. Because I've got 3,000 registered users, but only 700 of them are actually using the data, they get a bit fed up when I send them lots of emails about the fact that there's going to be an outage or something's going to happen. So what we said was actually, it would be much better if you could just sign up to the notices yourselves. So that's going in, getting some decent error messages, that sort of thing. We're also going to tell you we do our mountain area forecasts on our website. We're actually going to tell you what the area of the mountain is that we're forecasting for so that if you wanted to, you can replicate it. So there's lots of stuff that we're doing here to make the data more useful. Okay. And then over the next few months, we're going to be developing our requirements, the detailed requirements for doing some more work. And what we're now trying to do is to get ourselves into a rolling plan. So in the next few months, we'll start thinking about what we're going to do before Christmas next year. So you can see we're looking at adding more data, increasing what's going on. So all the stuff that I said, well, we point stuff on other places in the website, that will again be moved over to Datapoint, WrestleWeb services, WebMaptile services, those sorts of things, to make the data more useful and making it more interactive with other data that other people provide. So this is where we're going. The next big project, though, is our National Archive project. So we are a place of paper deposit for the National Archive, and we need to become a place of electronic deposit, really, because we want to maintain being the source of the data. And that means we need to provide people a way of getting to that electronic data rather than just archiving it. So we are actually in the process of collecting requirements. How do people want to be able to get at this data? What data do they want? What resolution? How quickly? And all those sorts of things. So again, if you wanted to come and talk to us, because you want some historical weather data, then it would be really, really brilliant to hear from you. Any questions? Thank you. Right, anyone have any questions? I'm going to have to pass them like to you, and then to our battery. You talked about your data. Yep. You want to see software staff that are using to provide it to those on the source of that? I'll just try to see if any of my technical colleagues are in here. I don't know. However, if you see anybody with a Met Office badge around here, they're likely to know. But I'm afraid I don't. OK, another question? I just wanted to clarify. Daily rainfall, though, since station data, is going to be available through the firm? Is that not station data? We don't have daily rainfall amounts on there at the moment. There's quite a lot of discussion about that. One of the things is because we've been kind of maintaining what's on the website, we don't have any daily rainfall amounts on the website, and therefore we don't have it on Datapoint. So it's a bone of contention between we should be making data open for people to use, and actually this is quite a highly valued amount. 
The other thing is, is because we're not the only ones that have daily rainfall amounts. It's obviously the environment agency have a huge amount of it as well. We need to be quite careful about, well, here's all of our data available for free. And then we effectively force their hand of what they need to do as well. So it's going to be a bit more work and sort of collaboration about how we do it. I mean, actually, ideally, you'd have a single data set that was made up of us at what we had and the EA and Northern Ireland Rivers Agency and SEPA and one single data set. So there's probably a little bit more work in there for that. I have a question with regards to opening up your data. It's, I guess, an extended exercise. And with regards to sitting up your infrastructure and putting people down well. So you have a commercial performance. How can you justify opening up your data? Is it push from above? It's push from government. You must make your data openly available where it is public data. We've probably spent more than we really needed to because we said if we're going to do it, we want to do it properly and give ourselves a platform that we can then make that data, make some money out of it like the open source software is then about providing support. And that's kind of what we've done. But we do have a usage limit. So if you want to have more than 5,000 requests a day, then you pay a contribution. It's about £1,700 a year. And you pay a contribution to the infrastructure and the networking that we need to put in place to stand up all that extra bandwidth that we would need. OK. I think we better wrap that up now and start switching up the last two slides. Thank you.
In November 2011, the UK Met Office launched DataPoint: an Application Programming Interface (API) for the release of its Open Data, in support of the Government’s desire for increased transparency and economic growth. Starting with just a handful of users, the service has grown in data, functionality and usage. This year the we are making further developments, responding to user feedback and ensuring INSPIRE compliance. This presentation will describe the journey so far and a forecast for the future.
10.5446/15581 (DOI)
And he is very busy being back home working on a big tender that we're doing to go with the Ministry of Finance. So instead you have to do it with me. I'm head of the data distribution at the geodata agency. I do have some of my skill technical people here. So if you have technical in-depth questions to what I'm about to show you, please come up here in the break and we'll take them afterwards, alright? Okay. What I'm going to talk about, I'll give you a short presentation about the geodata agency, who we are. And then I have two primary messages. I could speak for at least one hour of each of them. But one is the OpenData program that we have in Denmark. Secondly, the data distribution platform. And then at the end I'll give you a glimpse of how I see the future. Alright. I have 36 slides. We're already a bit behind, so I'll rush through some of this. Basically, the geodata agency, we are the national mapping agency. But we decided to get a new name. We realized we don't really produce maps anymore, we produce geodata. So we got a new name from January 1st. We got a new strategy and we got a completely new organization. At the same time, I came into the picture, I joined January 1st as well. I'm a land surveyor, background, specialized in GIS and land management. Before that, I worked nine years with GIS software, both commercial and a lot with open source software as well. Alright, so this is our new organization, pretty traditional. But what you can really see from this is that we are a value chain or data driven organization. From the data acquisition, data collection, to data processing, data visualization, and in the end, using data application and data distribution. So that's our new organization in short. Alright. So the open data program, from January 1st, the whole government released this open data program. It's called Good Basic Data for Everyone. It's the Ministry of Finance who are in charge of this, which obviously gives a lot of muscle behind this. Having said that, they don't do this because they find spatial data and other base data very interesting. Obviously, if you look at the first page, it's a driver for both growth and efficiency. If you look at it for business, for private sector, it's growth. For the public sector, it's efficiency. So these are the two main drivers why we have released all our data. To look into what kind of data it is, well, this is overview of what we call the basic public data. It's information about people. It's information about businesses. It's real property information. It's addresses. It's road networks. It's all the base data. And obviously, it's a lot more. It's auto photo elevation model, et cetera. And if you look at it, well, these data are actually ours. So we play a key role in this whole open data initiative. So what kind of data is free? Well, if you look at our data, topographical data, base map, even our cadastral map is completely free. High resolution auto photo. This is our national 10 centimeter pixel resolution auto photo. It's digital elevation models, even just released a blue spot information about where rain will collect. Get back to that a bit later. And I went to this quite interesting presentation yesterday with Arnold. Arnold, are you here? Oh, yeah, you were right there. So I can say our data are completely free to copy, change, and distribute. They're free to use together with other datasets. And it can be used commercially. That's actually really no string attached. There's one, however. 
We do like people to acknowledge that the data comes from us. When did they get the data and how did they get it? Do they use the WFS service? Do they bulk load some data? What do they do? So that's all we ask. It's not really much. And obviously that was a huge change of how we used to work before January the first. Okay. If you look at the data distribution platform, which I'm in charge of, I could show you something like that, but that is a bit too complicated for this early morning. Instead, I'd like to say, well, we look at ourselves a bit like IKEA. All right. We don't distribute flat packs of furniture, but we do distribute massive amounts of data. So if you look at IKEA, well, this is basically IKEA and the business model. Oh, IKEA, they have a very nice showroom. If you, like me, don't really know how to put a rug together with a couch and a coffee table, they have a showroom. We have that as well. We have a very nice refreshed web page where you can get all the information you really need. I must admit, it's primarily in Danish, but we are working on that. You can subscribe to different feeds about information, about status of the platform, et cetera. IKEA also have a very nice web shop. We have that as well. The web shop you can go in, log in, put all kinds of information into your basket. You can have pre-generated datasets, nationwide datasets. You can define your own dataset in your own format, your own projection, and collect it. Before I worked for the Geodata Agency, I was a sales manager for Intergraph. I can say it's a bit easier to sell your products when they're free. I mean, you put all your data in the basket, you click check out, and it doesn't cost you anything. That's a huge advantage. We also have support for professional users. We have for almost 10 years distributed our datasets using OGC services. We have a web page where you can see all our different services, how to connect, you can get previews, you can get all the information you really need in order to use your data in different OGC compliant systems. We can also help you bulk load or bulk import your data. We do have traditional FTP features, primarily for professional users, where you can collect the whole national dataset directly from our FTP. We are currently doing tests with Atom feeds, so you can actually subscribe to our feeds. So we will tell you when part of the data is updated so you don't have to bulk load everything. We do have information about how to use our data. And finally, I have a great team, some of them are here today, really struggling and stretching themselves a bit like this photo in order to help our customers, help our public sector customers, help our partners, etc. Obviously, this requires a bit of infrastructure. Before January 1st, we had around 100 web servers serving the data. Because the data was released, we tried to scale up in time, so today we have 141 servers. We added some last week, I'm not really sure if they are included in the 141. But if you look at that, that's quite a lot of servers. On the one hand, I'm not really fond of having so many servers, but our infrastructure provider is another agency, that's who we work with, that's a strategic partner we work with. This is the kind of infrastructure they could actually provide to us. Don't take the number, don't count the servers, it's just to give you an idea of how the infrastructure is divided. We use open source for, I would say, the majority of our distribution platform.
But we don't solve the use open source. So we have a policy saying we use the components that are best for what they do. So for example, we use a map server for WMS. We use, at the moment, web cache for WMTS. We are currently looking into map cache to see whether that would perform even better or not. We use map server as well for web coverage services. We use Snowflake, that's a commercial product. Anybody here from Snowflake? Yeah, it's a commercial product, they are very good at supporting inspire schema models. So that's why we chose them. Having said that, we still use GeoServer for some other things. The West Sokeys, something we developed from scratch, they are not OGC compliant, but they're actually taking a massive load every day. And we just released a new version, I think it was yesterday. Then we have quite a few servers of a product called Splunk, it's a commercial product as well. Something called Business Intelligence, I hate that work, it's basically, we use it to digest our massive log files. I think we produce 16 gigabytes of log files every month. Yeah, Misha is not around that. So every time we have a request, we log it in order for us to complete that yes, to say what kind of information is used, which services, which feature class, and which services from which users are done that. So we use Splunk to analyze that information. GeoNetwork for catalog services, we also have quite a few servers for our file-based data for our web shop and FTP. We have some switchboards that takes care of, let's say, whenever a request enters the system, we have a switchboard who logs information, makes sure that they have the username and password needed, etc. Obviously we have a few web servers as well, have a number of development test servers, and then finally we have different kind of application servers. We still use a little Ionix, a little RGOS server that are currently migrating, and we also use or about to implement a puppet for automatic deployment of our servers. Unfortunately, we haven't put puppet into production yet, and I really regret that last week. Last Thursday, we more or less moved the front page of one of the nationwide newspapers. It says, like, look if your house is going to be flooded the next time there's extreme rain. See, that's something people could really relate to. I was at a conference and I could just, like, got the tweets coming in, so we were getting more and more flooded with requests. We worked very, very hard, I mean, to keep up. In the peak hours we got, I think, 317 requests coming in per second. That's double to triple the amount that we typically get. So it's basically your city here was very busy that Thursday. Okay, a bit back to the data-driven or the value-turing organization. This is more or less how we look at the way we do things within the due date agency. If you look from the left to the right, obviously we have the data collection, we have Q&A on data, we have a number of different production systems. Each, let's say, tailor-suited for a different production environment. So, cadastral data is produced in one system. I think it's geomedia-based. Tobacco for mapping, produced in another system, et cetera. But what we have worked with for quite a number of years is that all the data coming from a different production system will all go into one, we call it the due data bank. It's called LDS up here. 
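The WMTS tier mentioned here serves map tiles through standard OGC key-value-pair requests. A minimal sketch of such a request; the endpoint, credentials, layer and tile matrix set names below are placeholders, since the real service names are not given in the talk:

```python
# Placeholder endpoint, credentials, layer and tile matrix set.
import requests

params = {
    "SERVICE": "WMTS",
    "REQUEST": "GetTile",
    "VERSION": "1.0.0",
    "LAYER": "topo_map",
    "STYLE": "default",
    "FORMAT": "image/png",
    "TILEMATRIXSET": "EPSG:25832",
    "TILEMATRIX": "8",
    "TILEROW": 41,
    "TILECOL": 220,
}

resp = requests.get("https://example.dk/wmts",
                    params=params,
                    auth=("username", "password"),  # the platform requires a login
                    timeout=30)
resp.raise_for_status()

with open("tile.png", "wb") as fh:
    fh.write(resp.content)  # one 256x256 PNG tile of the requested layer
```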
We have heavy uses of FME, as I talked to you about that earlier, for, I mean, analyzing data, making sure that we have the right distribution data models whenever data leaves the house. And that is the due data bank. That's where we keep the master data. We also have derived data in there before it's going into the distribution environment. It's called KF Chainstall. That's an old screenshot and I didn't have the original, so I couldn't change the Danish text. I apologize. So that's how data is moved through the system. Recently, because the whole due data bank is based around an Oracle application cluster, and that's not about to change. But we realized half a year ago or so that we needed to be able to scale our database platform as well. So what we did is that we introduced PostGIS as replicated distribution databases. I'm not quite sure how many PostGIS we have. I believe we have a master and a number of slaves under that. That's what you see here in the bottom. I'll come back to that later. If you're more interested in knowing how did we change from Solvely using Oracle to also using PostGIS, I could say you could spend some time tomorrow afternoon in the lunch break. I know it's a bad time. But to my colleague, Oli Nielsen and Jonas Nielsen will give a presentation telling you about our experience, what went good, what went not that good, and how would we recommend you to do it. All right. So obviously the open-data program changed our world quite dramatically. These are the recent statistics I have. And if you look at the way as spikes here, that's the number of users that we have. And you can see it's constantly growing. I mean, today we have almost 7,500 users. Before January 1st, we had a few hundreds. I'm not really sure why it keeps growing. I mean, we said, well, it's going to grow the first quarter and then it's going to be stabilized, but it's still growing quite significantly. And the dots on top of it is actually the number of orders we get in our web shop. And order could be a range of data sets. So it's basically one basket, this one. So we have more than 30,000 data orders coming in here until now. So all these new users, obviously we have very happy users. This is Peter Borsen. He is very active in the OpenStreetMap community in Denmark. He actually came knocking on the door January the 2nd. The first day we opened the doors and he collected two times two terabyte disks. He thought it was way easier than start to get all the data from the web shop. So all our data is entering OpenStreetMap as well. Obviously that can give a lot of new information into the OpenStreetMap products. Obviously Google and Bing, even Apple, they have our data as well. I'm not going to say much about that. Demand is still increasing. July, which is in the middle of the summer vacation, we hit a new record. We had almost 120 million requests in that month. That is more or less the same amount of requests that we had for a whole year back in 2008. People who know me know we like cake. We always celebrate whenever we reach a new limit. If you put that into perspective, this is a research institute showing all kinds of Danish websites, media. How many page views they have per month? The first one is the Danish version of eBay. The two next XTOPLA, Deco and BTDK are the two main sleazy tabloid newspapers in Denmark. These kind of newspapers, right? If you look at it, we would actually come in the third. Obviously, other websites requests are not page views. We are actually in the top league. 
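The master-and-slaves PostGIS setup described here is presumably built on PostgreSQL streaming replication. A small monitoring sketch, assuming psycopg2 and a monitoring account on the primary; the connection details are placeholders:

```python
# Assumes the PostGIS "slaves" are PostgreSQL streaming replicas.
import psycopg2

conn = psycopg2.connect(
    "dbname=postgres user=monitor password=secret host=primary-db"
)
with conn, conn.cursor() as cur:
    # pg_stat_replication lists the replicas currently streaming from this primary.
    cur.execute("""
        SELECT application_name, client_addr, state
          FROM pg_stat_replication;
    """)
    for name, addr, state in cur.fetchall():
        print(f"replica {name} at {addr}: {state}")
conn.close()
```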
Since this is an international conference, you could even say, well, I'll keep it on mute. This is the most viewed YouTube video ever. It has roughly 1.5 billion views. Later this month, we will reach the 1 billion request within 2013. We actually aim to reach the same amount of requests on our web services within this year. Please remember, Denmark is a small country. We only have around 5 million people. I believe it is quite massive interest and data amounts we are actually pushing out through the infrastructure. The future. If I try to give you a glimpse into my future and the future of our data distribution platform, I need to go a little back to the Open Data program. The good basic data for everyone. If you look into that, you come into chapter 9. That is actually in English, so you can go ahead and have a look at it. It says that one of the initiatives in this program is to build a common distribution infrastructure on top of all the basic data within the country. The common distribution solution, I call it a distribution platform. I regard it as a lighthouse. It is a completely new infrastructure. Every governmental agency who has basic data will be pushing their data into that. Everybody using basic data in Denmark only has one place to go with one common API, with one common way of collecting data. The process, and that is why Morton is not here. We are currently in a competitive dialogue with five selected companies. As I said in the beginning, it is the Ministry of Finance who are in charge of this. We are just one out of five agencies in this tender. Some of them are international players and some of them are large Danish system integrators. I do know we have some top contractors sitting in the back row up there. That is why I am not going into very deep details. The process, the material was available in June. That is why I really appreciated not being a commercial provider, but now I am sitting on the other side of the table. I could go on my summer vacation knowing that people were sweating and working very hard out there. Since this is a competitive dialogue, we do have a number of dialogue phases. It is quite time consuming, but we believe it is giving us a lot of value because building this kind of lighthouse has never been done before, ever in Denmark. We need to have this dialogue in order to make sure that we are actually putting the right demands and requests in there. The one date that is worth mentioning is May 9th next year. That is actually when we expect to sign a contract with one of these five system integrators. We have a big review team. Modern Linnocore is part of that. They are sitting in almost an air sealed room within the Ministry of Finance. It is a competitive dialogue. There is obviously a lot of things, precautions to be made to make sure that everything is completely confidential, completely documented and so on. The reason I have that folder is there are two reasons behind that. First of all, the material we are getting is massive. In my head, I visualise that there are people going, it is definitely not garbage that is coming in, but it is huge amount of pages with solutions descriptions, with prizes, and they are really digesting it bit by bit. The other reason is that it says beast up there. Beast is actually the big review tool that the Ministry of Finance uses. It is a big Excel spreadsheet calculating everything. That is why we have the beast here as well. 
When we have this common distribution or current data platform, what will that do to an agency like ours? Basically, we will keep all the production systems, complete SAR, the geodata bank will be completely untouched. What we will do is we will make a slight cut, we will move our current map distribution platform, we will move our PoChess databases into the lighthouse, and then we will make sure that the data is always updated. Alright, I will have to speak a bit louder. Obviously, before you move, that is something that we need to do next year, and I guess everybody who has tried to move at home knows, that if you take all your garbage, all the things you have in the ceiling, it is going to be a mess. So, we, earlier this year, started to clean up to look what kind of systems don't we need to move, what are the system dependencies, do all our system use web services, do some of them go, let's say, sidewards into the database. So, we made a complete mapping of all our system and how they are integrated to give us an idea of how our current system landscape looks like. I mean, remember, the current distribution platform has evolved over 10 years, and I guess everybody who has been living in their house for 10 years know, there is a lot of stuff going somewhere else in the basement or in the ceiling. Okay, and what's our role going to be in this current distribution platform? Obviously, we are going to be one of the five basic data authorities being responsible for our data for our web services. But we are also going to get a new role, I'm not really sure how to translate it, but the Danish word is operator, which made me think of that photo. Definitely, that's not how we can do it, but the operator is going to, on the way, on the behalf of the Ministry of Finance and all the other basic data authorities, going to make sure that we are moving in the right direction, that the company we select to deliver is this infrastructure is going to live up to what we ask them to do, to make sure that the data will be used, to make sure that the platform will evolve as technology evolves, just as demands evolve, et cetera. So that's giving us a quite central role in the whole open data program, not only for spatial, due spatial data, but for all basic data. I'm getting away from here, it's five minutes. So why do we have this quite interesting and important role to play? Well, there are several reasons, and a few of them is what I'm going to end up with here. If you look at the expected data volumes, I mean the top one is the total, the one just below that, that's our data volume. So 95% of all the data in this common distribution platform, that's our data. So that's at least, that's one good reason. The second is that we have a very well-established partner program. Yeah, I put September up there, that's the newest partner, they're sitting in the back row with hangovers, or? No, okay, good. We have almost 25 commercial partners that we've been working with for quite a number of years. They are typical commercial GIS providers, engineering companies working with open source, smaller land surveying companies also doing a bit of GIS. And we've been serving data since 2002, and these are actually some of the slides from the very first partner seminars, or partner seminars we had back in 2002. We've had that like an annual conference, we had that this year as well. I think we're 120 people this year, in the beginning there was around 20. 
I was host for the first time, not really, it's quite hard to see how I'm hiding in the back. I guess I should be standing in the front, but anyway. The last thing, I believe, is one of the reasons why we actually were chosen to play this operator role in the future, is that we actually been able to keep a very high availability and keep a very stable platform. Yes, March was not one of the best months, I agree, but if you look ever since, we are hitting 99.99 and even 100% uptime, which is quite amazing and it's only possible because I have a very amazing team being able to do whatever is needed. I think Misha took her bike in a snowstorm into our office to restart some servers when we had a huge winter storm. Alright, any questions? I'm just going to leave, let me see, this one here, bad resolution. So this is our webpage, we have a counter showing how many hits we get per second. It says the first one hits today, hits per second and hits per year. And we are already hitting 1 billion in around 12-13 days from now. Obviously we're going to have a huge cake within the agency. I already have made sure that we get another digit in the counter so it doesn't go back to zero. Alright, thank you. Thank you very much. We have time for a couple of quick questions and really not surprisingly this guy has won. How many requests per month is said your top month? We get a July, we got 120 million requests on our web services. And how many of your 143 servers, how many of those are web servers? I mean, I think, well, the 120 million requests, I can't really remember how the disabuse but it's always WMTS requests is quite a bit and WMS is also quite a bit. And then our due cases is due coding, reverse due coding and stuff like that. So do you have some kind of a queuing system where the order comes in, you go do data processing and then they get some kind of key and they come back and they can download it? To be honest, I'm not quite sure how it works. I do know we have something in our switchboard but Misha is shaking with the herd. So a specific queuing system, no, we don't have that. That was another question there. Okay, maybe you can do that in a break. Very interesting presentation and we are all enjoying ourselves here at the phospho G. So a question to you. It seems to me that you are in a wonderful position to help the community, all of us, all the interested open source, GS, enthusiasts. Can you say, are you going to, it looks like you would be in a wonderful position to kind of keep it going and take your position and your strengths and make us all work with the data and find new ways to use it. So any thoughts on that? I am in a wonderful position. I did two comments to that. First of all, as I said, I mean, I think 90% of the software and applications we use are open source. We don't have, like Australia is saying, we should only use open source or not. The other thing is that as we are currently in this tender process where we are getting five different companies to give their bid on a new solution, obviously some of them is not open source providers. So if they are the ones we are going to select together with the Ministry of Finance and the other authorities, we are probably going to use less open source software in that platform. But obviously, I mean, we have Nordic, we are working together with the other Nordic mapping authorities on some of the open source applications. 
The Geodays Info, our national, our usual network, our catalog services, we have been developing with the other Nordic countries. Hi. You said you have chosen Splunk as a logging framework. It looks like an awesome framework, but it is proprietary and probably quite expensive. Why did you choose that over other options? Yes, it does cost some money. It is definitely not free. We chose it because we have statistics for many, many years. I said we generate 16 GB of log files every month and we have done that for many years. And in the benchmark, we saw that Splunk was actually able to do that for us. So that is why we are using that simple benchmark. And the amount of money that we spend for Splunk software is actually not a lot compared to all the beautiful information we are getting out of it. Yes, hello. Just a quick question. Has there been any research into the benefits of this in respect to the growth and efficiency in the economy? There are a huge business case behind this in very great details. We are currently, I think we are going to, of course, on Monday, we have currently tendered out for making a benchmark study of how did our data do before, January 1st? How are they doing now and how will they do in a number of years? So we have a huge responsibility to really show all the value coming out of this. So we have this huge benchmark study coming quite soon. I can send you the numbers if you are interested into going into the details. Okay, I will take a last question because he was mentioned. I guess he earns a question. It is more like a suggestion. Many national mapping agencies support OSGEO, so have you considered becoming a sponsor? If you haven't considered, I suggest that would be a great idea because you are using a lot of open source. So maybe that would also help you get more information back into your organization. We are, I think this is tomorrow. We are looking or meeting about making a Danish OSGEO chapter. I signed up, showed my interest, I think, two months ago. So yeah, we are definitely looking into that. That is a very good bridge into my finalizing this because I will have to cut it off here because some birds of feathers are going to start, or I actually have started already, and there is a coffee break. So I want to join me in another applause for our two speakers.
Digital distribution of geodata makes it possible to improve the efficiency and accuracy of our professional users' data collections on an ongoing basis. The Agency's Digital Map Supply is a national infrastructure to distribute geospatial data to all kinds of users. Subscribers to the Digital Map Supply receive their geodata via web services, eliminating shipping time and resources. All services are based on OGC standards, e.g. WFS, WMTS, WMS and WCS. Furthermore, the Digital Map Supply exposes a range of REST and SOAP services for geocoding, address searches etc. As part of the common public-sector eGovernment strategy 2011-2015, the government and Local Government Denmark have agreed on a basic data programme. The programme contains a number of specific improvements and initiatives in public-sector basic data, which will underpin greater efficiency and growth. The Digital Map Supply is the infrastructure that is used to supply the geospatial data to public agencies, end users, private companies etc. Furthermore, the Digital Map Supply also supports a number of INSPIRE-compliant services that the Geodata Agency is responsible for, such as a cadastral WFS. The presentation will show the architecture behind the Digital Map Supply, including a number of open source components such as PostGIS, MapServer, GeoWebCache and GeoServer. The Digital Map Supply has been in service for more than ten years, and the architecture has evolved during that time, moving from commercial software to open source software. Moreover, the presentation will outline the future of the Digital Map Supply, including the migration to a new, common national distribution platform for all common public-sector data.
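Since the map services above are exposed through plain OGC interfaces, a client only has to build a standard key-value-pair request. Below is a hedged illustration of a WMS 1.1.1 GetMap request assembled in Python; the endpoint and layer name are placeholders rather than real Digital Map Supply parameters, and EPSG:25832 is simply a projection commonly used in Denmark.

```python
# Minimal sketch of a standard OGC WMS 1.1.1 GetMap request (KVP encoding).
# The service URL and LAYERS value are invented placeholders.
from urllib.parse import urlencode

params = {
    'SERVICE': 'WMS', 'VERSION': '1.1.1', 'REQUEST': 'GetMap',
    'LAYERS': 'topo_map',                      # placeholder layer name
    'STYLES': '',
    'SRS': 'EPSG:25832',                       # ETRS89 / UTM zone 32N
    'BBOX': '690000,6170000,700000,6180000',   # minx,miny,maxx,maxy in map units
    'WIDTH': '512', 'HEIGHT': '512',
    'FORMAT': 'image/png',
}
url = 'https://example.org/wms?' + urlencode(params)
print(url)  # fetch this URL with a browser or urllib to get a map image back
```

The same pattern, with REQUEST=GetFeature and a TYPENAME parameter, applies to the WFS services mentioned above.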
10.5446/15580 (DOI)
for open standards. And just as a heads up, this is probably the least technical talk; I wanted to do something on Hadoop, and they said no, so I'm doing open standards. But first some terminology. There's an old joke, and it's not very funny: the nice thing about open standards is that there are many to choose from. In the geospatial domain, as you know, we have standards such as KML, GML, CityGML. There are some formats such as shapefile that are so popular that they're almost standards in themselves. We exchange data in a variety of formats, CSV, XML, JSON, even PDF. Other domains such as statistics have their own standards such as SDMX-ML, which I've never actually used. So what is a standard? In my mind, a standard is a blueprint. It is something that allows people to build something. When an architect designs a building, he provides a blueprint so that a builder, engineer or inspector can look at it and say, well, if you build this according to plan, the building won't fall down, and it'll stand. But a standard is more than just a blueprint, because it has to be something that other people agree upon. If there's not the consensus or blessing, then we typically call that standard a specification. And by abuse of language, we sometimes confuse standard and specification. So in actuality, a standard is a specification, but not all specifications are standards, if that makes any sense. Then we have de facto standards. A de facto standard is a specification that has become so popular because people just decided to use it, mainly because maybe it was implemented in some sort of software that got considerable market acceptance. But these specifications may or may not be open. They may not be publicly available without some sort of licensing agreement. The main problem with de facto standards is that they're owned by a single vendor who often changes them when they want to. The prime example is Amazon Web Services, or AWS. AWS is the de facto standard for cloud computing. Then we have standards that are created through consortiums of companies, government agencies and educational institutions that use an open consensus process. The prime example of this is the OGC, the Open Geospatial Consortium. And then we have bodies such as ISO, the International Standards Organization, that create de jure standards. Examples of de jure standards include ASCII, TCP/IP, 802.11 and the Wi-Fi protocol. I often see confusion between open standards and open source, and I typically see that among people who haven't done software development before. And I'm probably preaching to the choir, but open source is code. It's concrete software, and it may or may not implement open standards. Open source is created in a very open environment with community involvement. It's publicly available. And one of the great things about open source that I like is that it can often speed up the adoption of open standards. For example, if you have a beta release of software that implements some new standard, the two can help work out bugs in each other. And one thing I find interesting is that you typically don't see open standards groups creating open source and open source groups creating open standards. So if you have any insights on that, I'd like to hear them at the end of this presentation. We see open standards spanning three areas: technology, data and services.
There are technology standards to efficiently manage data: standards for databases, storage, communications and servers, standards such as SQL, which is both an ISO and an ANSI standard. We have data standards to work with and interoperate with geospatial data. In Europe, we have the European INSPIRE Directive, and in the United States there is the Federal Geographic Data Committee, two examples of bodies that create geospatial data standards. Then we have services standards to consume geospatial services, and great examples of these are WMS and WFS, the Web Map Service and Web Feature Service from the OGC. And we see open standards benefiting a host of people within an organization, from software developers all the way up to C-level people like CTOs. For software developers, open standards promote code reuse and empower them to exploit reference implementations and open source. For enterprise architects, open standards promote resilient design patterns. They give them flexibility when they're designing systems and allow them to react easily and quickly in a really ever-changing IT environment. For project managers, open standards promote, I'd say, effective budgeting through the reuse of software. They allow them to swap out software if they need to. It's also good for cost reduction in a project. For IT directors, open standards help them to align their IT strategy with policy and technology shifts, and it also fosters innovation and research incentives. And now at the C-level, open standards help them to align corporate strategy with long-term technology developments, and it also gives them a certain amount of market insight, which hopefully can drive new business. Now, those are just some of the benefits to internal stakeholders within an organization. But we also see open standards as adhering to good, sound public policy, and I'll use the UK as an example. So in the UK, the Cabinet Office has mandated the use of open standards in all government IT systems to ensure that they are interoperable and can talk to each other. In April, I think it was April of this year, the Cabinet Office released a report called the Open Standards Principles. It contains seven principles aimed at placing the user at the heart of the decision-making process around standards, enabling suppliers to play on a level playing field, supporting sustainable costs, supporting flexibility and change, making sound decisions quite simply, using a fair and transparent process when choosing standards, and also trying to be fair and transparent when actually implementing those standards. And I think all of these have tremendous business value. From this diagram, which I stole from the report, the ultimate vision is to have systems with open interfaces, open protocols and open data formats, so that these systems can actually talk to each other across government agencies. I'm really into open standards, open source and open data, and I think at the heart of all of this is the user. Why are we doing this? It's for the user. Open standards promote improved data access, especially access to data during emergencies, crises and conflicts, improved data sharing among stakeholder organizations, improving the understanding of the benefits of and need for sharing that information, and improving communication, which in turn does community building, improves data quality and documentation, and increases data access. Another aspect of open standards is thought leadership, and a little bit of self-promotion: I'll use Ordnance Survey as an example.
Since the time we first released our products in digital format, the use of open standards has been at the heart of our strategy, and we've continually tried to develop open standards, both in the UK and globally. Our distinguished director general, Vanessa Lawrence, is the co-chair of the UN-GGIM Committee of Experts, setting the agenda for global geospatial information. I can never remember what it stands for, but it's the United Nations initiative on Global Geospatial Information Management, UN-GGIM. We've provided technical expertise to INSPIRE to create European-wide standards. We continue to support INSPIRE programmes such as ELF, the European Location Framework, to create European data and protocol standards. We were a major supporter of the UK Location Programme, and I think we led by example in the use of WMS. We also provided expertise to create the UK location discovery metadata standard called GEMINI. And there's something about eating one's own dog food: our flagship product, OS MasterMap, is released in GML and also as a commercial WMS, and we have a free and a commercial web map tile service, OpenSpace and OpenSpace Pro. We've been a long-term technical member of the OGC, I think since 1998, but I could be mistaken; it's roughly about 15 years. Our DG is on the board of directors, we have someone on the Global Advisory Council and also the Business Value Committee. We played a key role in developing CityGML and GeoSPARQL, and hopefully now the emerging points of interest standard. We're heavily involved in ISO TC 211; ISO TC 211 is the geospatial arm of ISO, so they create all the geospatial standards. And we're very involved in linked data, you may have seen some of the presentations this week. We provided guidance on the development and maintenance of linked data URI sets for location data, and we're involved in a number of bodies, such as the W3C Government Linked Data group. So open standards look really good on paper, but do they really work? And I think one way of ascertaining that is through a PlugFest. A PlugFest is an interoperability experiment. So the OGC and Ordnance Survey, with the support of the AGI, will be hosting a PlugFest over the next few months to really test four OGC standards: GML, WMS, WMTS and WFS. What we want to do is get all the vendors, open source and commercial vendors, in the room, so Snowflake Software, Esri, Pitney Bowes, QGIS, you name it, and techies from organizations across the UK, public sector and non-public sector, get them in a room and just really, really test these standards. The outcome of this will be two things. The OGC will be creating an engineering report, which will be quite technical, and it will be creating a best practice paper. Now the best practice paper is meant to be sort of the good, the bad and the ugly. So it'll be everything that we find. It could contain information such as, well, we tried the standard and it worked in MapInfo Professional, but it didn't work in ArcGIS, or we tried this in QGIS and it threw an exception, so the workaround is to do this. It's a way of just giving people guidance in the UK on how to use these standards within the software that's available. So if you're interested, there's a teleconference next week on the 25th at 10.45 British Summer Time. The cut-off date is the 2nd of October. The first sprint will be at Ordnance Survey in Southampton on the 17th of October. We'll have a second sprint on the 9th of December. Results will be presented on the 10th of December.
And if you're interested, come grab me, because I'd really, really like people to participate in this. That's actually my last slide. Before I close, does anyone have any questions? Are you aware of the PlugFests in New Zealand, is that the same sort of thing? I am aware of it, yes, through Denise McKenzie. I don't know the details of that; it would be good to touch base just to pick your brain. Do you have any other questions? I have a question for you guys, since I just went through some of the benefits of open standards: can anyone think of some negative aspects of open standards? That was a quick hand. We've actually had to do the same thing with our WMS. Certain aspects of it we thought were lacking, so we implemented some workarounds, and we intend to feed them back into the OGC. And this is not a criticism of the OGC, but in general, standards bodies take a long time to adopt and revise standards. So you may find something that falls short, but then it'll take 6, 9, 12, 18 months and by that time you've moved on. So yeah, it's definitely a limitation that I see. Anyone else? I think that my question might pertain to the previous one. Could be wrong? Yes. Can you rephrase that? I thought your last question was whether there are any downsides to the OGC standards? Yes, yes. Someone has to maintain them; I wonder if there are always enough people. Well, one thing I like about the OGC, and I'm just using that as an example, is that it's consensus based and it involves a large community of people. So the OGC, the actual staff, is rather small, probably under 20 people. But the people actually working on the standards are thousands, myself included, just people who are interested in geospatial and want to contribute somehow. So that's the way of maintaining the standards: having that community striving to work on them, continually revise them and adapt them according to changes in technology. Does it take a long time to do things? It does. And that is one of the negative aspects of standards work, that it just takes a long time, and for many reasons: it takes a long time to build that consensus and make sure that the standard is solid. Hi. I also really like open standards in general, I think they're a great thing. But one shortcoming that I have found working with some of the OGC standards is that a few elements I think are missing. And if you want to stick to the standard, you can't implement a feature that is not part of the standard, because then you're not part of the standard anymore. So that can be constraining in some way. The standard has a path; you have to go with that path, otherwise you're not following it. I agree, point well taken. Oh, sorry. I want to state that with an example. I find it great that there are open standards, but one of the possible flaws, and I will explain that with a de facto standard: once there is a standard, you are stuck with it for a long time. For instance, we all know what shapefiles are, which were a de facto standard, which are very great, but they had shortcomings, and today many people are still using them.
So while it's good that we now have open standards, I think there's a threat that we have so many standards, that there are things we don't think about right now that we are going to be stuck with for a long time. And that's why we have companies like Safe Software doing quite well, just converting from one standard or format to the next. I agree with what you're saying. Thanks, guys.
The use of open standards has brought considerable business value to Ordnance Survey, Great Britain’s national mapping authority. Ordnance Survey participates in the development process for open standards in international standards bodies and is an early adopter of many open standards. The use of open standards has enabled Ordnance Survey to future proof internal information systems, foster innovation within new product development and better serve data to its customers.
10.5446/15579 (DOI)
It is a kind of mobile app, and the mobile traffic map service is one of these apps. The background is here. For your information, NTIC is the National Transport Information Center. It's a national organization; they are collecting all the traffic information in South Korea, synthesizing the contents and providing the information to people. The background of this project is this. NTIC's requirements are very simple: they want to deliver real-time traffic information to users to disperse traffic on major national holidays, for example Lunar New Year's Day, New Year's Day and Chuseok. Chuseok is the equivalent of Thanksgiving Day. In those holiday seasons in Korea, about 30 million people will visit their hometowns and families. That amount is nearly 60% of the population of South Korea. This vast migration causes people to be interested in the traffic conditions. And these days most people have a smartphone, so they want to get more information through their smartphone. Briefly, the main features of the mobile traffic map service are like this: it supports interactive zoom in and out and three-step colorized traffic data. The green color means free traffic and the red color means congested. The reference speed is applied by road type. In addition, it supports traffic accident information and CCTVs on the roads. This is the architecture of 2011. Actually, my company wasn't the one who designed this system; there was a company who originally designed it, but there were a lot of problems. As you can see, it is a Windows-based system. There are six GeoServers as map servers and one PostGIS/PostgreSQL as a database. There is no cache server, and the client requests one full-size image and no tiles. So there are a lot of problems. So in 2011 they launched this service. There were more users than they expected. Because of the absence of a cache server, the same location information was repeatedly requested from the server, and the GeoServers frequently went down at peak times. Because of these problems, our customer, NTIC, suspected the performance of the open source GIS, GeoServer or PostGIS. So NTIC required us to prepare for Chuseok in September 2012. They requested a system improvement. The requirements were that they wanted to support 200,000 users per day, and they wanted to change the database to the SQL Server they already had, because at that time there were no proper benchmark sites using open source GIS in South Korea, so we failed to persuade the customer. So we suggested solutions like this. First of all, we insisted the system architecture should be redesigned and the map-related system should be redeveloped. Second, we proposed that all map requests on the mobile side should be tiled. Third, a Squid proxy server should be used as a cache server for improving reusability and performance. Fourth, in order to serve valid traffic information, the valid time for cached tiles should be determined; we used the WMTS interface with a content expire time and a custom time tag. And last, we suggested making all the tile map data in advance every five minutes. Here is the new architecture in 2012. It consists of three SQL Servers as databases and six GeoServers; GeoServer makes 256 by 256 tiled maps in this architecture. And there are two cache servers in front of them. And we put a cache maker at the cache server side to produce the traffic map data tiles in advance.
And the cache maker uses the round-robin dispersion principle of the L4 switch to produce tiles. And as we redeveloped the client mobile apps and the mobile web page, we also redesigned them to automatically renew the traffic map every five minutes and applied a layered cache structure in the mobile apps. During the Chuseok holiday season in 2012, my colleague, Visee Zhang, was standing by to monitor this system at NTIC. Reportedly about one million people downloaded the mobile app during that season alone. As a result, the service was somewhat stable, which was much better than the previous system. But frankly speaking, as a developer, I was not satisfied. The mobile app requested the tile map every five minutes, so unnecessary requests increased and the response time consequently got longer. As the traffic tiles comprised 10 zoom levels, they exceeded one million tiles, and not all the tile maps at every level could be updated in advance within five minutes. That was a problem. So to solve this problem temporarily, for levels nine and ten we let GeoServer make the traffic map data dynamically when clients requested it. Therefore, the connection time got longer and the number of connections increased. And this architecture also has a scalability issue: the cache servers cannot share the data. So for processing increased requests, cache servers should be added here. But adding more cache servers under this architecture needs more map servers and accordingly more SQL Servers, so it causes a cost issue. And then more cache servers means more requests from the cache makers at the cache server side, so it increased the load on the SQL Servers many times over; it was a burden on this system. So we suggested a new strategy to our customer. We persuaded our customer to use PostGIS again, because it provides faster spatial query functions, we think. So we designed GeoServer to connect to PostGIS one to one instead of SQL Server. And actually the total size of the tiled traffic map data over all ten levels was less than four gigabytes, meaning the data size was not that big. To reduce cost and to speed up the data access process, when adding cache servers we chose memory disk instead of SSD. Furthermore, we changed the system to push the tiled traffic map data into the cache servers. This slide shows the new architecture this year. As you can see, in the middle we physically combined the map server and the database server. The system also generates a CSV file there and imports it into PostGIS, and to sync data, we replicate the data from one PostgreSQL instance to another. And instead of the cache maker at the cache server side, we put the tile generation manager at the map and DB server side. The tile generation manager is like this: it divides the jobs clearly for each server to produce map tile data in parallel, and as soon as the jobs are finished, they push the tiled traffic map data into the cache servers. So when adding cache servers, the map and DB servers only have the role of producing tiled traffic map data, and the requests of the mobile apps can be handled by the cache server side alone, without influence on the map and DB servers. So in case of increasing connections, the system can be serviced by just adding cache servers here, with some configuration of the tile generation manager. So this architecture is more scalable. And we had other ideas to reduce the total amount of data to be updated periodically.
The idea is very simple: since only the traffic information changes, it would be fine if only the tiles covering roads were updated, and it would also be fine if only the tiles whose traffic information has changed were updated. The higher the zoom level, the more effective this idea is for updating the data. This chart shows the result of the improved system compared between 2012 and this year. As you can see, the interval of tile generation has been shortened to one minute for the whole 10 levels. Actually, the developers told me it takes only 10 seconds to update the traffic information. This week, during the FOSS4G conference, is the Chuseok holiday in South Korea, and I was told that the mobile traffic information system built on the new architecture is running smoothly now. So through this project, we became sure that open source GIS is definitely equivalent to commercial products in terms of performance now. And we got the confidence to persuade customers into adopting open source GIS with no worries. I think this project is a good case of adopting open source GIS, especially by the public institution sector in South Korea, and there will be a lot of cases in South Korea adopting open source GIS. With this experience, Yoo Jae-jung, who is the original developer, is constructing a mobile weather chart service using GeoServer and PostgreSQL at KMA. These are screenshots. KMA is like the Met Office in England. I was not a member of this project and I don't know the technical details; if you have any questions, please ask Yoo Jae-jung via email. Thank you.
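As a rough sketch of the update-only-changed-tiles idea described at the start of this part of the talk, the snippet below computes which tile indices in a regular grid cover the bounding box of a road segment whose traffic state changed, so that only those tiles are regenerated. The grid origin and tile span are invented values, not NTIC's actual tile matrix.

```python
# Hypothetical tile-selection helper: grid origin at the top-left corner,
# tile_span in map units, rows counted downward (WMTS-style row/col indexing).
def tiles_for_bbox(minx, miny, maxx, maxy, origin_x, origin_y, tile_span):
    """Return the set of (col, row) tiles of size tile_span covering the bbox."""
    col0 = int((minx - origin_x) // tile_span)
    col1 = int((maxx - origin_x) // tile_span)
    row0 = int((origin_y - maxy) // tile_span)
    row1 = int((origin_y - miny) // tile_span)
    return {(c, r) for c in range(col0, col1 + 1) for r in range(row0, row1 + 1)}

# e.g. a changed road segment in a made-up national grid
dirty = tiles_for_bbox(953_000, 1_952_000, 954_500, 1_953_200,
                       origin_x=-200_000, origin_y=4_000_000, tile_span=2_000)
print(len(dirty), 'tiles to regenerate for this segment')
```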
MOLIT (Ministry of Land, Infrastructure, and Transport) has established NTIC (National Transport Information Center) for effective management of various kinds of transportation in South Korea and released several services that people can use. Gaia3D Inc. has been involved in one part of the mobile service, which displays traffic status on roads, streets, and highways on top of a geographical map, letting people easily check the status of traffic wherever they are heading. Gaia3D Inc. will introduce not only the experience of implementing the mobile traffic map service (iPhone app, Android app, and mobile web client) showing traffic on roads, streets, and highways at NTIC using Squid Proxy Server, GeoServer, and SQL Server, but also the advanced architecture coming up in 2014. The NTIC system collects all kinds of real-time traffic data for all highways, routes, streets, and roads in South Korea and divides the collected traffic data into three colors, green, yellow, and red, by speed. These colorized traffic data are mashed up with map data for serving on mobile devices. Servers carry out tiling of the traffic map every 5 minutes and clients receive and display those tiled data. This system is aimed at tolerating the peak times of the two major holiday seasons in South Korea, Chuseok (Korean Thanksgiving Day) and Seolnal (Lunar New Year's Day), when almost 15 million people travel per day at the peak and about 8 million vehicles pour out onto roads, streets, and highways, so the system should be designed to safely handle over 100,000 concurrent connections. The whole system consists of two cache servers with Squid Proxy, six map servers with GeoServer, and three database servers with SQL Server. Real-time traffic information and road lines are managed in SQL Server and provided to GeoServer. Traffic map tiles are produced in GeoServer and passed to the cache servers. The client is designed to request tiles via the WMTS (Web Map Tile Service) protocol with a time tag. The initial architecture designed in 2012 somehow managed to endure traffic loads at peak times, but had some problems, which was quite disappointing and unexpected. In order to improve the system, we mainly focused on the enhancement of scalability. Also, we redesigned the system, separating the tile-producing servers and managing static contents using the NGINX web server.
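To make the WMTS-with-time-tag mechanism concrete, here is a hedged sketch of the kind of GetTile request a client could issue: the current time is floored to the five-minute update interval and appended as an extra parameter, so the Squid cache can serve the same tile for the whole interval and fetch a fresh one afterwards. The endpoint, layer and tile matrix names, and the TIME parameter itself, are assumptions, not the real NTIC values.

```python
# Sketch of a WMTS GetTile request (KVP encoding) with a custom time tag.
from datetime import datetime, timezone
from urllib.parse import urlencode

now = datetime.now(timezone.utc)
slot = now.replace(minute=now.minute - now.minute % 5, second=0, microsecond=0)

params = {
    'SERVICE': 'WMTS', 'VERSION': '1.0.0', 'REQUEST': 'GetTile',
    'LAYER': 'traffic', 'STYLE': 'default', 'FORMAT': 'image/png',
    'TILEMATRIXSET': 'national_grid', 'TILEMATRIX': '9',
    'TILEROW': '221', 'TILECOL': '438',
    'TIME': slot.strftime('%Y%m%d%H%M'),   # custom tag, one value per 5-minute slot
}
print('http://cache.example.org/wmts?' + urlencode(params))
```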
10.5446/15578 (DOI)
Oh yeah, taming rich GML. I thought I would be stealing from the rich GML and giving it to the poor, but that could be one interpretation. Okay, but I'll be telling you something about Stetl, which is a lightweight Python framework for geospatial ETL. A little bit about myself: I'm an independent open source geospatial professional, and in daily life I'm also the secretary of the OSGeo local chapter in the Netherlands and a member of the Dutch OpenGeoGroep, a cooperation of independent professionals providing open source geospatial support. And these are some of the things I like to do in my spare time, playing with mobile and GPS. But of course you always start any project because you want to solve a problem; in this case, I guess we have a problem, and the problem is the rich GML problem. We say rich GML, and it's a term I coined together with Markus Schneider, the lead developer of deegree, because people always talked about complex GML and that sounded a bit negative; rich GML. So probably you have guessed what rich GML is about: a complex mess, you could say. So think of application schemas. Of course they are designed very neatly in tools like Enterprise Architect, and then with a push of a button some schema is generated. Probably you are aware of several of these schemas; mostly they deal, for instance, with INSPIRE, you probably know the Annex schemas, but also many of the national datasets in several countries use application schemas, if you're lucky, or some other form of complex XML. So in the Netherlands we have national datasets, in Germany there are national datasets, and I just learned that the UK OS MasterMap is also a form of XML GML, and apparently more complex than I thought. So, to give an impression of what I'm talking about: this is Dutch addresses, and this isn't even an application schema. This is what I would call semi GML; it's XML with some GML namespaces and, well, lots of overhead. What you also see is a sort of arbitrary XML: you could have multiple elements of the same element, nested elements, implicit XLinks. And if you look at INSPIRE, for instance, this is part of INSPIRE. I won't explain every element here, but this is the street name only of an INSPIRE address. The INSPIRE address model is one of the more complex models. I mean, an address would be like a street, the number and a place, but somewhere over here is actually the street name and the rest is all overhead, I could say; now, it's part of the model, but we want to do something useful with this, let's say make a map or make a geocoder of the addresses. So we have to deal with complex model transformations, and not only are the models complex, but there are also huge files; we talk about gigabytes of GML files. So this is part of the Dutch addresses: when you download it you get all these XML files, and that means there are millions of objects and maybe tens of millions of elements. So to transform this into something useful, like putting it in a database and making a map, we need spatial ETL. So what are the options? How can we do this?
One approach is of course to write a program for each data set and then try to do that, and in some cases I've seen that working; maybe it works. But of course, if we look at the open source geospatial world, we have several high-level tools, and at high level I mean tools with a GUI where you can sort of set out the transformation. So GeoKettle may be known to some of you, Talend spatial, and this week I learned also about HALE; I knew the project from a couple of years ago and it was a little bit shaky, but it seems to be very much improved. So if you're sort of on the search, also try these; I mean, I've also tried these, but I'm a sort of old Unix command line hacker and I like to stay close to the iron, and these are some of my favorite tools. So let's say if you have to transform a shapefile to PostGIS, I would use, let's say, ogr2ogr, maybe even shp2pgsql. And XSLT; some are not familiar with XSLT, so I'll explain what it is: it's to transform XML to another XML schema or anything else. And PostGIS. But the problem is, each of these tools is very powerful but cannot do the whole thing. If you have random XML you cannot just use ogr2ogr; that's why I said you need multiple transformation steps. Actually, one of its developers is in the next room just now, talking. And this came out of earlier research in some of the INSPIRE projects I did in a European geographic context, and several of the people involved are even here in the room, like Frank. So what we did there was this multi-step approach. Let's say we took cadastral data that was exported into a shapefile or MapInfo; you just use ogr2ogr to produce a simple feature GML file, the simple feature GML file could be translated via XSLT, and then we could generate INSPIRE Annex I GML, and then we used FSLoader, which is part of the deegree tool set, to load it into, let's say, an INSPIRE database. But that was sort of ad hoc and a little bit of scripting; it was a bit hacky and it didn't scale up. So from that I thought about how to combine these tools, and the answer is basically: add Python to the equation. I first tried with shell scripts, but then I said, well, Python is ideal, and Python makes a lot of sense in the geospatial world because it integrates with all of the existing libraries that are there. So this is really what it is about, in the sense that, okay, it combines the basic tools. And the abbreviation, so now it's written like this, I think in the abstract it's still with capitals, but it's about simple, streaming, spatial and speedy ETL; that's what it tries to stand for. So it's basically from barrels and buckets of GML, for instance, loading into PostGIS and then using QGIS to make beautiful maps, because I should show a map, it's a geospatial conference, or a geocoder. But Stetl is not just about loading GML into PostGIS; that's one of the scenarios. So I should show a map here: this guy is amazing, he's also here in the map contest, William van Aals. He uses some of that tooling to produce topographic maps of cities in the Netherlands, combining topographic data, address data and building data, and with QGIS of course.
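To make the multi-step approach above concrete, here is a minimal sketch, not the actual project scripts, of chaining ogr2ogr and an XSLT step from Python; the file names are placeholders, and the stylesheet (simple-feature GML to the target application schema) is assumed to exist.

```python
# Step 1: flatten source data to simple-feature GML with ogr2ogr.
# Step 2: lift it to the target (e.g. INSPIRE) schema with an XSLT transform.
import subprocess
from lxml import etree

subprocess.run(['ogr2ogr', '-f', 'GML', 'parcels_sf.gml', 'parcels.shp'], check=True)

transform = etree.XSLT(etree.parse('sf_to_inspire.xsl'))   # placeholder stylesheet
result = transform(etree.parse('parcels_sf.gml'))

with open('parcels_inspire.gml', 'wb') as out:
    out.write(etree.tostring(result, xml_declaration=True,
                             encoding='UTF-8', pretty_print=True))

# Step 3 (not shown): load the result into the target store, for example with
# deegree's FSLoader or another ogr2ogr run.
```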
So what are the Stetl concepts? Actually, it's quite simple: if you have multiple transformation steps you go from one source to a destination, a target, so it's set up like, well, you need some input and then several filters to process the data. But this is still quite abstract. So for instance an input could be a GML file and the output PostGIS, and then it would go through one or more filters and something to produce output. So this is one sort of trivial example: you could have some kind of GML reader module, it would send its output to an ogr2ogr output module, and that would output to PostGIS. So let's take this INSPIRE model transform I showed earlier. For instance, data could also already be in a PostGIS database. The ogr2ogr command line tool is just integrated in a module which streams out the data; it produces simple features as a continuous XML stream, and then XSLT would be used to create complex features. So that would produce, for instance, a GML file. So it's not just about reading stuff into PostGIS. And I will get more specific; it's still a little bit abstract. It's a little bit like Lego: you can just connect anything to anything, as long as the inputs and outputs are compatible. So this writer to a GML file could then, for instance, be replaced by a deegree writer, which is a sort of specific module which writes, in this case, into a deegree blob store, or there's also an output writer for WFS-T to publish directly to deegree or GeoServer, I just learned. So how does this work, inputs, filters, outputs? Let's take an example step by step. So we have some random XML file here, we apply an XSLT filter, and we use an ogr2ogr output to produce a shapefile. Sometimes you get this kind of random XML; it's not, let's say, a feature type, it's just XML, it has some names and some coordinates, so you couldn't run ogr2ogr on it, maybe with some very clever command line filling, but you need some way to convert that. So we have an XML input module, or I should say component instead, and then we apply an XSLT filter to transform this to simple feature GML. And XSLT, yeah, I know there's some criticism of XSLT, but it's very powerful when you have to transform one XML schema, basically a tree, to another schema. So this simple XSLT script will just take the points out of it and produce an OGR feature collection, basically a simple feature collection; so we find the same places, Amsterdam, but then it's true GML, so this into this, and that basically comes out of the XSLT filter. So once we have simple feature GML we can apply ogr2ogr, and then you could produce basically any output; in our case it's a shapefile. So how is this all glued together? Stetl is based on configuration files, so you basically don't have to program anything, you only have to configure the transformation, and the whole chain is specified in the configuration file. It's a simple file format, the INI file; you can find it in Windows but also in Python, it's used a lot. So basically, for the whole chain there's a special section called etl, and you can have multiple chains, in this case one chain.
It's an input XML file going to a transformer XSLT and then an output OGR shape, and these are sort of identifiers which point to sections further up in the file. So input_xml_file points to this section, and as you can see, the specific component processing the data is identified by a class name, and then there are specific parameters for that class; the class is really your component. So here I have a block like the XML input, which is a component, and it's specified here as input_xml_file, and you see the file path pointing to the specific input file; later on you see how we can parameterize that as well. And the transformer XSLT is another component here, and it needs a script, and that script is an XSL file. And for ogr2ogr you can just apply your regular ogr2ogr command, which is nice because the syntax of this is all known. So basically you glue together these different tools in these simple configuration files. Well, configuration files could be more extensive of course, this is a very simple example. And to run this thing there's a command line tool called stetl, and then you specify the configuration with the -c option, and then it will produce the result as a shapefile, and of course we use QGIS these days. So Stetl is in Python, so it can be installed via the standard Python Package Index, so that's sudo pip install stetl. It's not yet available for Debian or as other packages, and there are of course some dependencies. On Linux it is very trivial to install these dependencies; I know it's somewhat harder on Windows. So, I talked about speed as well. What is also part of Stetl is this whole streaming thing, because, let's say, if you have a few-hundred-megabyte XML file, you just cannot parse that in memory and then pass something like a document around. So Stetl is based on streaming without intermediate storage. Also, Stetl calls upon the native libraries, the C libraries, libxslt and libxml2; these are standard libraries in Linux, so it's speed optimized, going native. And for each of these inputs, filters and output components there are several options now, but you're also able to write your own filter. So from now on this is a little bit of Python, maybe I should show some code: if you want to add your own component, let's say a filter, you can specify your class name in the configuration file, and there's a trivial example that just prints to standard output. And there are of course standard APIs, so your filter always needs to implement an invoke method, and then you get a packet, and the packet contains the data and the status.
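Here is a minimal sketch of such a user-defined filter, based only on the description above (a class named in the .ini file with an invoke method receiving a packet carrying data plus status); the base class, format constants and packet attributes are assumptions, so check the Stetl source or documentation for the real API.

```python
# Hypothetical pass-through Stetl filter that just reports what flows by.
from stetl.filter import Filter          # assumed module path
from stetl.packet import FORMAT          # assumed format constants

class PrintFilter(Filter):
    def __init__(self, configdict, section):
        # consumes/produces declare the packet format this component handles
        Filter.__init__(self, configdict, section,
                        consumes=FORMAT.etree_doc, produces=FORMAT.etree_doc)

    def invoke(self, packet):
        # packet.data carries the (etree) document; the packet also carries status
        if packet.data is not None:
            print('passing along a document with %d top-level elements'
                  % len(packet.data.getroot()))
        return packet
```

In the configuration you would then refer to it by its dotted class path, just like the built-in components.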
Okay, so what is exchanged between these components? What Stetl doesn't do is make its own internal feature model; it stays very close to the feature information that comes out of these different tools, and where necessary one format is translated to another format. So for instance, an etree doc is a Python version of an XML document, but the document can be split off from a very large XML stream, and you can specify at which point and with how many features you make a document, so you can split a huge GML file into multiple smaller documents. Or you could use an etree element array, so you get individual features. And there are several components to deal with deegree integration, especially to write to deegree, for instance the blob store output or FSLoader, that's a tool from the deegree tool set, or, very standard, to just use WFS-T. So there are two sorts of main cases where Stetl is applied: one is INSPIRE transformation, so generating harmonized data, and the other is reading national GML data sets, mostly into PostGIS. And this is an extensive example, for instance TOP10NL, which is the national Dutch topo data set, and this is a more extensive file, and you also see multiple chains, for instance for initialization, setting up a database, you can also use Stetl for that, and all these parameters can be substituted on the command line with the stetl command, so it's not hard coded. And again, of course, we should show maps. Also, recently we did the BGT, it maybe took a couple of hours and then we could read the BGT into PostGIS, and this is more extensive; probably I won't go into the details here, but this has actually been used in PDOK, I should say this. But recently they tried to switch over to FME, that was one and a half years ago, and they're still struggling. So, the status: it's not yet a full-fledged product, it's still in development, of course, and that's why I'm also presenting here, to show some of the results and get some feedback, but you can install it already via PyPI, there's documentation with Read the Docs, and yeah, several real-world transformations have been done. So is it solved? I can't give a definite answer, but I hope to have helped a little bit in solving this problem. So, thank you very much. You said it's installable but tricky to do in a Windows environment, what would you try? Now, the point is not so much Stetl but the supporting libraries, and we always find this problem when installing something with Python on Windows, but there are several options there. I think in the documentation I've pointed to Portable GIS, and that's actually made by Jo Cook, who's around here, she's in the organization, and it's a USB stick with the Windows versions of basically the whole open geo stack. You can run that without installing it, and that's very powerful, so you can copy that to a directory, for instance, and then initialize it once. So that's the first step, and then you already have GDAL and the GDAL Python bindings, and even Apache, I think even PostgreSQL and PostGIS, so you don't have to install them; it's called Portable GIS. Of course, I said several options: you could use OSGeo4W on Windows, and there are other packagings as well, but with Portable GIS we have some good experiences, so please talk to Jo. Okay. Okay, anyone spotted more than two animals?
Spotted more than two animals? Yeah. And now, here, I'm just wondering if you have any XSLT from GML to WFS-T, from GML to... XSLT you mean? Yeah, but WFS-T is basically a container, you know. Okay, now actually the WFS-T output module is two or three lines of code, it's of course Python, but it's basically a template, and in the other transformation steps you produce regular GML, just like you would send to a file, but then in the last step the WFS-T writer will take those GML features and put them in a template, and that's basically the container for the WFS-T. What is it, insert feature? One insert. Now that's one of the things with Stetl: okay, you have a stream, but you can chop the stream, partition the stream into manageable elements. So you could say, I do a WFS-T insert for one or for ten features, or you could also set up a stream to deegree of, let's say, a gigabyte of GML; that's very powerful. So there are several options here, but it's maybe another approach than HALE, and it's totally dedicated to streaming. And the output is basically independent of the input: the WFS-T module is unaware of how the other modules have produced the GML. It gets a document, but remember, a document could be arbitrarily large or small. But maybe you can talk offline if I didn't understand the entire question. Usually, there's actually an extra... Oh, sorry, the question is, if you would like to do validation, where would you put that step into this chain of processing? Actually, there's an XML validator component, and it's usually placed after the XSLT step, so that it validates full documents in the target schema, for instance. And a nice thing about the validator: it initializes once to get all the XSDs from the internet and then it validates each document. But that's usually only done in testing, because it costs some performance to validate each and every one, but yeah, that's actually how I tested a lot to see if it all worked. It's very important. Okay. Thank you. Okay. Thank you.
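Pulling the pieces of the talk together, here is a hedged sketch of writing out and running a small chain configuration of the kind described above (an [etl] section naming a chain of components, each section giving a class and its parameters). The component class paths and parameter names are indicative only, loosely modelled on the talk's example, so consult the Stetl documentation for the exact names.

```python
# Write a minimal Stetl chain config and run it; equivalent to "stetl -c etl.cfg".
import subprocess

ETL_CFG = """
[etl]
chains = input_xml_file|transformer_xslt|output_ogr_shape

[input_xml_file]
class = inputs.fileinput.XmlFileInput
file_path = input/places.xml

[transformer_xslt]
class = filters.xsltfilter.XsltFilter
script = xsl/places-to-sf-gml.xsl

[output_ogr_shape]
class = outputs.ogroutput.Ogr2OgrOutput
temp_file = temp/places_sf.gml
ogr2ogr_cmd = ogr2ogr -f "ESRI Shapefile" output/places.shp temp/places_sf.gml
"""

with open('etl.cfg', 'w') as f:
    f.write(ETL_CFG)

subprocess.run(['stetl', '-c', 'etl.cfg'], check=True)
```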
Data conversion combined with model and coordinate transformation from a source to a target datastore (files, databases) is a recurring task in almost every geospatial project. This process is often referred to as ETL (Extract, Transform, Load). Source and/or target geo-data formats are increasingly encoded as GML (Geography Markup Language), either as flat records, so-called Simple Features, but more and more using domain-specific, object-oriented OGC/ISO GML Application Schemas. GML Application Schemas are for example heavily used within the INSPIRE Data Harmonization effort in Europe. Many National Mapping and Cadastral Agencies (NMCAs) use GML-encoded datasets as their bulk format for download and exchange and via Web Feature Services (WFSs).
10.5446/15573 (DOI)
I was planning to introduce some other features, but this summer NASA decided to shut down the FTP server and move from FTP to HTTP protocol. So several people asked me to update my library and I had no time to do that. One night I decided that we should start downloading again, because my company is using that. For one month, more or less, we didn't download any tiles from MODIS. Markus maybe didn't realize this, because we are working on the data until 2012 and not yet on 2013, but it was a mess. So one night I started to work on it, and in two or three days I was able to download data from the new server. So now it supports HTTP and FTP repositories, because NASA shut down not all the FTP servers, but only some, and some of them remain on FTP. For example, the snow product is on the FTP server, and if you are looking for LST, the land surface temperature, it is on HTTP. We have documentation for each class and for each script. This is very useful for the common user. There are examples and you can see how to use it. There are five scripts to work with MODIS data. There is modis_download, which is useful to download the data. modis_parse is able to parse the MODIS XML file. modis_multiparse is able to read more XML files, and it's very useful when you are mosaicking the data, because it's able to write the new XML file for the new HDF file; the MODIS Reprojection Tool does not write the XML file, so I introduced this script to do that. The new XML is not complete, there are some features missing, but they are not very important right now. modis_mosaic is able to create the mosaic, and modis_convert is able to convert the projection system or the format. I'll show you a little bit of the workflow. When you use modis_download, you obtain something like this. For example, this example is Japan; these are the three tiles of Japan. It's not very useful, you cannot understand anything. After that you have to mosaic them, and now it's a little bit better, you can see more or less the shape of Japan, but it's not so simple. If I ask you where we are, probably nobody is able to answer; also the Japanese guys were not able to answer me. I was in Japan in December, I made this screenshot for them and I asked where we are, and the people said, I don't know. Yes, it's true, it's quite difficult to understand, because the HDF file has a sinusoidal projection that is not so common, and in the end, with conversion, we can understand we are in Japan. This is Korea, and it's more comfortable for us. These are a little bit of stats about pyModis. There are four contributors; two of them made very few contributions. I'm the main developer, and since two months there is another guy from Ireland who is working a lot and he is helping me. That's all about the present. About the future, we have some really interesting new features, like the quality class and the quality script. The Irish guy, well, it's a Dutch guy who is working in Dublin, created a class to check the quality of the MODIS data: in each HDF file there is a layer for the quality data, and he created this class that is able to read this layer, and the output is this map where you are able to understand if each pixel is good or not. I didn't tell you before that the data do not cover all the landscape.
You can see that red and green are very few. Probably for that day there were a lot of clouds, so the satellite was not able to capture the data under the clouds. So all the white is cloud, and red and green are real values with better or worse quality; the red one is the worst and the green one is the better one. The second really important feature is the GUI. I know that a lot of people don't like to use the command line; instead, I love it. I introduced it not for me but for the other people. I am using wxPython because it is the simple one and it doesn't introduce too many dependencies; it is not as powerful as Qt or something like that, it is the third library for GUIs, but it is enough from my point of view. The other stuff are new tools. There are some new scripts, because when using a lot of the data sometimes you find something that is interesting. For example, there is a new tool that is able to download the data from a file: you can write the product and the dates that you want, and it is able to automatically download the data. Obviously there is also a lot of bug fixing and clean-up; there are not so many bugs, but sometimes you can find some. Briefly, I want to show you who is using pyModis now, to my knowledge; maybe there are other people that I don't know about. Obviously at Fondazione Edmund Mach we have, I think, all the MODIS data sets and the land surface temperature from 2000 until now for all of Europe, because one of the best results is the reconstruction of the missing data. We have a data set for all of Europe with data for each pixel; usually MODIS is missing a lot of data, you can see some maps that are completely white because there are a lot of clouds on one day, and we are able to reconstruct that data. University College Cork: the guy who developed the quality class is using pyModis there. In Argentina, CONAE, which is like ESA, is using pyModis, because two guys came this year for a six-month visit at Fondazione Edmund Mach; I introduced pyModis to them, they were very happy, and they brought pyModis to Argentina. Also in Japan, the National Institute for Agro-Environmental Sciences. Last year I was in Japan, and I didn't know that some people were using pyModis outside my company; it was the first time that I realized that someone else was using pyModis. I went to Osaka to visit a professor and he told me, yeah, you should go to Tokyo to meet a guy because he wants to speak with you. When I met him, he showed me that he was presenting a project using pyModis, and I was very happy. And probably there is someone else around the world using it, but I don't know. Well, I know that someone in the US is using it, but they are not so friendly, or they are not... I asked them if I could use their institution's name, but they didn't answer, so I can't put their name. And probably some of you will be the next ones, I hope. I want to say thanks to Markus Neteler, who gave me the opportunity to work on pyModis during my work time; Ingemar, the developer of the quality class; Markus Neteler for the testing; and Stefano Cavallari and Giza for some improvements of the Python code, they are very clever Python developers and so they helped me. That's all, if you have any questions. Thank you. Sorry, can you repeat that? So the problem is converting from HDF to TIFF? No, I take the HDF files and run a model; I get a TIFF file in the same projection. Ah, OK. I would like to use pyModis because you have all the stuff there.
OK, I think that you should be able, if you obtain a TIFF file with the sinusoidal projection, you should be able to convert it with GDAL. Because right now the convert function is using MRT, which accepts only HDF as input and outputs a TIFF file. So you could also probably use pyModis to convert the HDF to TIFF before, and run your model after that. Yeah. What about the pixel resolution if you convert to a different projection? No, you can keep it. There is a parameter to keep the same resolution. MRT gives this possibility and also pyModis. There is a -r option where you can set what resolution you want. For example, in my foundation we are using 1,000 meter resolution and we rescale to 250 meters. Another question? Me. Just one question. Which kind of dependencies does the library have? Right now the only dependency is... no, right now there is no dependency. In the future there will be GDAL for the quality map. What about the conversion? That uses the MRT software, so you have to set the path to MRT. Right now it's... yeah, MRT is a piece of software. You have to compile it, but it's very simple: you just put the path to the main directory of MRT, it sets some variables and it works very well. In the future there will be GDAL, wxPython for the GUI and NumPy, so from version 0.8, which should be the next one. One more question? Yeah. About the datasets available through pyModis: is it raw data or has it been processed with radiometric correction? No, it's the original data from NASA. So you have the raw data and afterwards you can make your analysis with the raw data. Thank you.
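The download step of the workflow described in this talk can also be scripted directly against the pyModis classes instead of the command-line tools. Below is a minimal, hedged sketch: the downModis class name comes from the library's own documentation, but the exact constructor arguments and method names are assumptions that may differ between pyModis versions, so treat them as placeholders.

```python
# Minimal pyModis download sketch (argument and method names are assumptions;
# check the pyModis documentation for the exact API of your version).
from pymodis import downmodis

dest = "/tmp/modis"                       # where the HDF/XML files will be stored
tiles = "h28v05,h29v05"                   # example tiles over Japan (illustrative)
product = "MOD11A1.005"                   # LST product, used as an example in the talk

# The "password" is traditionally your email address for the NASA FTP server;
# the newer HTTP repositories may use different authentication.
modis_down = downmodis.downModis(destinationFolder=dest,
                                 password="you@example.org",
                                 tiles=tiles,
                                 product=product,
                                 today="2013-01-01",
                                 delta=7)          # roughly one week of data
modis_down.connect()
modis_down.downloadsAllDay()              # fetch every missing HDF + XML pair

# After downloading, the workflow in the talk continues with modis_mosaic.py to
# merge the tiles and modis_convert.py (via MRT) to reproject the mosaic.
```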
pyModis library is a Python library to work with MODIS sensor satellite data. It was originally developed as an interface to download MODIS data from the NASA FTP server but it has grown into a powerful library which also offers further operations on the data. pyModis has several features: - it supports downloading of large numbers of original MODIS HDF/XML files. This is ideal for the automated continuous updating of a local archive through a cron job; - it can parse the XML file to obtain metadata information about the related HDF files; - it can convert a HDF MODIS file to GEOTIFF format; - it can create a mosaic of several MODIS tiles to obtain large coverages including the creation of the merged XML metadata file with information of all tiles used in this mosaic. For format conversion and mosaicing the MODIS Reprojection Tool (MRT) is required, because at time MRT is the best free and open source software to manage original MODIS data and convert them into a different projection system or format while taking care of the special features of the original Sinusoidal projection. pyModis is composed of three modules: - downmodis.py contains a class downModis used to download MODIS data, it requires a “password” for the FTP transfer (usually your email address) and a path where to store the downloaded data. Other parameters are optional, such as the date range or the MODIS product to be downloaded; - parsemodis.py contains two classes, parseModis that parses metadata of a HDF file returning all useful information. It has also the capability to create a configuration file for MRT; the other class is parseModisMulti, it reads metadata of several HDF files, hence it is used to create the XML file for a mosaic. This class is also able to return the bounding box of all the tiles; - convertmodis.py is the module to do some simple operations on the original HDF files such as reprojection. It contains three classes and all of them require the MRT software to be installed. convertModis converts HDF files to GeoTIFF format; createMosaic creates a mosaic from several MODIS HDF files into a single HDF file; and processMosaic converts the raw data of MODIS using swath2grid from MRT-Swath. In pyModis the user can also find five command line tools to easily work with pyModis library: - modis_download.py is the tool to download data, - modis_parse.py reads metadata of a HDF file, prints information or writes them to a file, - modis_multiparse.py reads metadata of several HDF files and prints bounding box or writes the MODIS XML metadata for a mosaic, - modis_mosaic.py creates a HDF mosaic from several HDF files, - modis_convert.py converts MODIS data to GeoTIFF or other formats and as well as different projection reference systems. During the presentation all these topics will be discussed and illustrated along with more information about the future of pyModis and the tools for the community (how to contribute or how to report a bug or an enhancement).
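To illustrate the metadata helpers just described, here is a hedged sketch: the parseModis and parseModisMulti class names are taken from the description above, while the method names used below are illustrative assumptions and should be checked against the pyModis documentation for your version.

```python
# Hedged sketch of the metadata helpers described above; the class names come
# from the project description, the method names are illustrative guesses.
from pymodis import parsemodis

hdf_files = ["MOD11A1.A2013001.h28v05.005.hdf",
             "MOD11A1.A2013001.h29v05.005.hdf"]

# Read the metadata of a single HDF file (each HDF ships with an XML sidecar).
single = parsemodis.parseModis(hdf_files[0])
print(single.retBoundary())             # assumed: returns the tile bounding box

# Combine the metadata of several tiles, e.g. to write the XML for a mosaic.
multi = parsemodis.parseModisMulti(hdf_files)
multi.writexml("mosaic_metadata.xml")   # assumed: writes the merged XML file
```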
10.5446/15572 (DOI)
Hello, good afternoon. My name is Andrea. I work for GeoSolutions. Today we're going to talk a bit about WPS and spatial processing in general with GeoServer. Some historical perspective on why I'm doing this presentation today. A few years ago, the things that people were looking for were publishing a map, maybe doing some styling, some filtering, maybe time and elevation based, maybe doing some WFS-T editing, some PDF printing, and that was all. It was all about getting the information out. That is still being done, but what I've noticed during the last two years is that we can hardly make a new web application without having some bits of processing in it. Now, basically every kind of application GeoSolutions is making today has some kind of spatial analysis, even if it may be simple. Sometimes it's very complicated, but it seems that it's kind of unavoidable to have some spatial processing in a modern application. I put together this presentation to talk about how you do it with WPS and GeoServer and SQL views, and to provide some use cases where we had some significant experience with WPS and SQL views and share it with you. So some quick reminders about WPS. The presentation is not an introduction to WPS. It's a presentation about how we use it. But anyway, just quick pointers. WPS stands for Web Processing Service. It is the OGC specification to do any kind of spatial analysis, any kind of calculation, actually any kind of action, because WPS is totally general. I could write a process to send an email or to order a pizza if I wanted to. It doesn't have to be spatial, but normally it is, right? So this is an example of a request. A request is normally an XML document that you send. This is kind of the simplest one you can think of. I'm sending a geometry, an L-shaped line. I'm saying, okay, buffer it by two pixels, by two meters, whatever, and I get the result, which is a polygon. This is kind of the simplest process you can think of, and from here it can get as complicated as you want. WPS has two execution modes, and it's important to choose the proper one when you are developing your application. There is the synchronous mode, where the client asks the server to do some calculation and sits there on an HTTP connection waiting for the result to come back, and it's pretty much like a GetMap in WMS: I want the map, I get it back now. This approach assumes that the execution will be fast, which is not always the case when I'm doing processing. Spatial analysis can take hours, it can take days. So the other approach is to do an asynchronous request, which is an optional part of the WPS protocol, but GeoServer supports it, where you ask the server to do some kind of processing, and the server gives you back a token that you can use to check the status of your processing. Is it running? Is it done? Can I get the results? So you do polling against the server to check what the status of the request you've made is. And of course, this is suitable for longer computations. Something specific about GeoServer: this is kind of the common WPS setup in the OGC. From the OGC point of view, a WPS is something that does processing, but it does not own data, not necessarily. And you get data from other OGC services. You need vector data, you get it from a WFS. If you need raster data, you get it from a WCS. Or you can get it from whatever other HTTP server that is around on the net, but you have to fetch it from remote, normally.
Then you do your processing and send back the results to the client. GeoServer has more functionality because it's an integrated WPS. That is, the WPS is part of a larger environment where I have data sources, local data sources, I have the ability to configure new layers and so on. So GeoServer retains the ability to talk to remote services, but very often you want to talk to layers that you are already exposing via GeoServer, so you go directly to the data source. So I don't do a WFS request, fetch the GML, parse it, and so on. I go straight to PostGIS if I need to. I go straight to the shapefile, to the GeoTIFF file that I have locally. We have integration with the WMS, which is called rendering transformations, that allows us to take a style which instructs GeoServer to transform the data on the fly while rendering it. So we don't actually store anywhere the transformed version, say, contour line extraction. We do it on the fly, we paint the contour lines, and that's done. And we have a nicer integration with the UI in that we have a sort of little client, which is not a proper WPS client, that allows you to build the WPS request, which comes in handy if you are, I don't know, a JavaScript developer that wants to build a request without having to type all the XML. This builder will build the request for you. So if your WPS usage comes down to setting up a maybe complicated request once, you can have this tool do it. This is the rendering transformations that I was talking about. Contour line extraction case, so from point data to raster data, from raster data to line data. And rendering transformations are very nice in that they are not just applying the contouring or the heat map on the data that you have. They are context sensitive. They are actually working just on the area that you are looking at with the WMS, so they don't process the whole dataset, but just the part that you're looking at. And they are resolution aware. So when I'm extracting these contour lines, I'm extracting at the resolution that the user is looking at, not at the native resolution of the digital elevation model, which speeds up the process tremendously and makes it possible to use interactively. Writing the processes can be done normally in Java, but it's also possible to use scripting languages such as Jython, which is the Python version for the Java virtual machine, JavaScript, Ruby, Scala, and a few more. So you're not bound to extend and write new processes in Java. If that's not your language of choice, you can write them with a scripting language, which is also nice because you can have your server running, you throw your script at it, and a new WPS process shows up. You want to change it, you do it, you don't have to restart your server, so the development goes faster. Now, we could say a lot more about the GeoServer WPS, but we don't have the time in these 20 minutes. A few years ago, at FOSS4G in Denver, I made a presentation about WPS, which goes into detail about which processes we have built in, what they do, and all the detailed capabilities of our WPS. So if you're interested, I invite you to have a look at that one. Now, WPS is not the end of all things. WPS is meant for processing, but it's not always the best choice.
You never have to forget that if you have a spatial database as the data backend for your infrastructure, well, the spatial database is an exceptional platform for doing spatial analysis as well. It has a number of abilities to intersect data, to compute buffers, and so on, within the database. And sometimes it makes a lot of sense to just do the computation right there, because it's faster. Besides that, spatial databases are very well suited for the classic select, aggregate, filter and join routines that every database does. They do it very quickly. It makes no sense to grab the data and process it in Java or JavaScript and so on, if the database can do the computation for you. It's already there. It's functionality that you don't have to pay for. So how do you interact with the database? Well, GeoServer has this SQL view concept, in which you type an SQL query in the GeoServer user interface, and GeoServer uses that SQL as the data source. So you can do joins, you can do computations in that SQL, and GeoServer just runs the SQL to extract the data. Plus, the SQL can have parameters that you can pass down from the client to alter the functionality of the SQL view. So this is more or less how it looks. From the client, you can pass down, say I have a low and a high parameter in my SQL view, the client gives values to those parameters, GeoServer expands them into the query, sends the SQL to the database, retrieves the results, and then displays them via the WMS, or returns them via the WFS. There are certain limitations that you have to be aware of, because we are basically replacing strings in an SQL statement, so we are open to SQL injection attacks. So there is a validation bit that you have to add to make sure that no strange things go into your SQL statement. Okay, and we are done with the theory. Now, let's get down to business with some examples, some real-world examples where we used these or those capabilities of WPS or SQL views. First application that we developed, the FAO Tuna Atlas. The Tuna Atlas is a web application that has this grid. Each cell of the grid is actually a historical dataset of all the catches of various kinds of tunas with various kinds of fishing techniques. And this application allows you to select a range of years. It allows you to select a range of quarters of the years, if you want to, the type of fishing that is being done, and the species of tuna, one or more, that is being caught. Basically it shows you an aggregate map of all the catches that happened during that time with that technique and that kind of tuna. Now, what do we have here, really? We have multiple types of filters, where clauses. We have an aggregation, a select sum of something. We have joining, because the quarterly stats have to be joined with the spatial grid that we have here. There's nothing here that a database cannot do, right? It's just a query at the end of the day. And that's what actually powers that map. We have an SQL statement with some parameters. Here, the operation can be sum, it can be average, and we send it down from the client, depending on what kind of aggregation the user wants. We have other parameters here to select the fishing technique, the years, the quarters, and so on. And all of these are coming down from the client, from the user interface in JavaScript that we showed. Oracle runs this query on the fly, in this case it's Oracle. This query, which has nothing special, returns the data and GeoServer just displays it. And that's it.
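To make the parameter passing concrete, here is a small, hedged sketch of the client side of such a parametric SQL view. The viewparams vendor parameter is GeoServer's standard way of feeding values into a SQL view; the layer name and parameter names below are invented for illustration and do not correspond to the real Tuna Atlas configuration.

```python
# Request a WMS map from a parametric SQL view by filling in viewparams.
# Layer and parameter names below are hypothetical; viewparams is GeoServer's
# vendor parameter for SQL view substitution (key:value pairs separated by
# semicolons, matching the %placeholders% used inside the SQL view).
import requests

params = {
    "service": "WMS",
    "version": "1.1.1",
    "request": "GetMap",
    "layers": "tuna:catches_grid",          # hypothetical SQL-view backed layer
    "styles": "",
    "bbox": "-180,-90,180,90",
    "width": 1024,
    "height": 512,
    "srs": "EPSG:4326",
    "format": "image/png",
    "viewparams": "operation:sum;year_start:2000;year_end:2010;species:YFT",
}
resp = requests.get("http://localhost:8080/geoserver/wms", params=params)
with open("catches.png", "wb") as out:
    out.write(resp.content)
```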
And if you wanted to select a different range of data and so on, we just have to change the parameters. So this is processing, yes, but it's so simple that it can be turned into an SQL query, and that's how that web application works. So no need for WPS for this one. The most efficient way is actually to have the database do it. On top of it, if we select a number of years, we can have the web application produce an animation showing the evolution of the catches over the years. Instead of aggregating them, summing them, we can show, by year or by quarter, a frame in an animation. So we have this way to control, this user interface to control how the animation is done. And then we have a tool in GeoServer which is called the animator, which basically does many GetMap requests and assembles them into an animated GIF, which is then returned. Again, some sort of processing without actually having to do anything with WPS. Now, this is a PDF, this map is not moving, but in the real application you would see the catches change year by year and quarter by quarter. So this is one first kind of application where we have done processing, but we haven't touched WPS at all. Now, let's go to another application that we have been developing, download services, which instead uses WPS. Now, the requirement of this application is pretty simple. The user wants to look at the map, select some layers, select an output format, and download the data, plus some extras about clipping the data. The user can draw a polygon. Okay, let me see. Yeah, this is the user interface. The user can draw a polygon. He can ask for a buffer around the polygon if necessary, to enlarge the polygon that was drawn. He can draw a polygon or select data from a polygon layer. So maybe I'm downloading by province or by county, and I can select the county instead of having to draw the exact contour of it. And then the idea is that I can download a DXF, a KML, a shapefile of this area for the selected layer. Now, you might say, okay, this is a job for WFS, it's a way to extract vector data. And if we can also choose a raster layer, that would be a job for WCS instead, extracting raster data. There's a catch though. The extraction can be massive. And WFS and WCS do not have an asynchronous mode of operation. What if the extraction takes 20 minutes? The web application will time out waiting for the download to be available. Besides, we don't want the user to have to sit there all the time. What we do is actually ask the user to input an email address, and when it's done, we send an email with the download link. Now, here we have some simple spatial processing, clipping, right? And we may need to take some time to do the execution, so it's a job for WPS in asynchronous mode. The only thing that we do in synchronous mode is the buffering. The user draws the polygon and maybe asks for a buffer. In order to display the buffered area, we call the buffer process in a synchronous way. So we send this polygon to GeoServer, and GeoServer sends us back the buffered version. Okay? But this is the only synchronous part of this application. Instead, the tracking of the download status is done via the asynchronous WPS. Here we have another twist which makes things a bit more complicated. We need to be able to use a cluster of GeoServers, so a cluster of WPSs, which means we need to know what the status of the execution is, regardless of the server that is actually executing the download. So we had to make a bit of customization to GeoServer.
We created, of course, a download process that can do all the clipping and selection and reprojection and all that. And we replaced a pluggable part of GeoServer, which is called the process manager, so that the status of the execution can be shared among nodes. The default one uses a local thread pool and does not inform anybody about the status of the execution. We created one that stores the status in a database, and we created a little extension so that when the process is done, it also sends an email. It actually sends multiple emails: it sends one email when the extraction of the data begins and one email when it's done, or when it failed, to inform the user, and it gives them the download link. And the client, which is called MapStore, which is, by the way, another open source project that we are developing, does a number of calls to the WPS. One is the buffer, we already talked about it, it's a synchronous call. It can do a download estimator call, which is another process that we developed, which allows us to estimate how much data will be extracted, because we have limits on it. We don't allow people to extract 10 gigabytes of data out of the data sources. So the download estimator is a quick way for the client to know whether we are within the limits or not. Then we run the download process, which also computes the limits itself for security reasons, in case somebody tries to dodge the user interface. And the get status call, which is used to know about the current status of the process execution. Oops. And display it here. Okay, so this is a case in which we didn't touch the database. No SQL views. It's pure WPS, but with a twist. It's asynchronous, and we actually had to tune GeoServer a bit to allow it to work in a cluster. Let's see another case in which we had to mix a bit of WPS and a bit of SQL views in a sort of ingenious way to solve a problem of large data processing: the Destination project. Now, the Destination project is a funded project from northern Italy. I won't go too much into detail, but the idea is that we have this road network, we have these trucks carrying dangerous goods, petrol, gases, stuff like that, that could harm the population, or the environment, in case of accident. And we wanted to compute in some way the risk for the environment and the population associated to each arc of the road network. The road network of northern Italy is actually split into 50 meter arcs, and we have statistics about the average number of car accidents happening on those arcs. Then we mix it with the environment that's around, so hospitals, buildings, that is, generally speaking, places where the population can gather or live, indications about the environment, for example water and stuff like that. And we have a notion of the area that could be affected by a certain kind of accident, depending on the kind of substance which is being carried around, which can spread out quickly or not. So we have several buffer areas that can catch the portion of land that would be affected by an accident with a truck transporting dangerous goods. So the data volume is relatively large. We have, sorry, it was 100 meter portions, but the thing is we have 500,000 of them to cover the northern Italy road network. And then we have a second level of aggregation, just to have a bit of multi-resolution so that we don't have to paint the 100 meter segments always.
When we are zoomed out, we can go for the 500 meter aggregation, and when we are even more zoomed out, we switch to polygons, one kilometer cells. We have 51 different buffer distances depending on the scenario, what kind of substance, and whether the scenario is, I don't know, a weekend scenario versus a weekday scenario, because certain kinds of targets are more vulnerable, that is, they have more people during the weekend versus during the working days. And we have several types of targets, both human and environmental. This makes for a large amount of computation, plus the expression that we have to apply to compute the total risk is this huge set of aggregations. Now, what we are adding together is more or less the things that I already explained, so the accident statistics, the types of goods, the human and environmental targets, and so on. And the scientists that want to look at this data and the decision makers that want to look at this data also might want to have a look at partial results, so not the total risk but some parts of this aggregation. So we have all that, plus a number of coefficients in there that can be tuned by the scientists, so basically it's kind of impossible to pre-compute the risk in all these possible scenarios, it's too much. This is the result that we have. Once we set out what part of the expression we want and what kind of targets we want to consider in the computation and so on, we end up with a color-coded map with a risk and a number associated to the risk. Now, how do we compute this efficiently, given that we are basically bound to do an on-the-fly computation, but we have a lot of data and too many variants? Can we do it only with SQL views? No, because we would have to generate a million of these SQL views for all the possible combinations of parameters that we have around, so that's not possible. Are we going to use a pure Java process instead? It's going to be very flexible, but there is really too much data to transfer from the database to the Java process to do the computation. We are going to die waiting for it to get out of the database. Are we going to do it fully on the fly? Again, no, there's too much data involved. So what we did is a compromise. We pre-compute all the buffers and see what kind of resources they catch. So we have a large dataset already pre-computed for the aggregation, and we have a process that builds SQL views, but on the fly, taking into account all the variants and parameters that the user specified. I'm going to go a bit quicker. The thing is, we are actually doing parametric views in which some parameters are entire SQL views themselves, so it gets really complicated. But the thing is, we managed to get very good performance with this setup. All this stuff can run either as a WPS process or as a rendering transformation. So it displays on the fly what it computes. We also had to do efficient cross-layer filtering: find me all the cultivated areas that fall within this buffer. And since we are comparing two layers that are in the database, guess what? We used an SQL view in this case too. So we pass to the SQL view the buffer area that we are taking into account and the bounds of the view that we are looking at, so that the database quickly finds the areas that are intersecting the buffer. We also had to develop a process to do these 51 buffers on top of 500,000 arcs, and we found out that we needed a customized version. Quickly, another project, I am going to fly by on this one. This project is completely different. We made it for NATO.
They are doing naval exercises in the Mediterranean Sea, and these naval exercises produce a lot of noise which can confuse marine mammals, dolphins and whales and so on. So they have to pay attention to that. So they have this sound propagation model, which is an Octave model that computes the sound propagation in the water, and GeoServer is actually calling it via WPS. But it is interesting because the process does not only call Octave, it also logs into a WFS feature type all the details about that request, so that we can use WFS to list which computations were made, when, by whom, with which parameters, and extract the result, display it on a map, and eventually decide to delete it. So it is, again, a creative approach to how do I get more status out of an asynchronous request. Plus, we have to make comparisons between the raster layers, so we needed a fast raster algebra, and we made a simple version that uses an OGC filter, and then we have a fully freeform version that uses the Jiffle library, which you can find online, which is a very, very fast implementation of a raster algebra in Java. I say very fast because the thing is, you write your own math, your own conditions, your own multiplications, trigonometric function calls and so on, and it first turns that thing into bytecode and then the virtual machine turns it into native code. That is, if you are computing a large area, it really runs like it was written in C after a few seconds. So it's really quick. And this is an example of how the syntax looks. Let me show just a final example quickly about scripting a process instead of writing it in Java. So in this case, I'm starting with a, this is a simple application, it's not real world, but it gives you an idea. I have a land use map. I want to get the percentages, the distribution of percentages of land use within a polygon that I draw on the map, and the result should be a chart. Simple one, right? But we don't have a built-in process that does this. So the idea is that we have this shapefile, and someone wrote some Python code to do the aggregation, to compute the result that is being displayed on the map, and this is all the code that needed to be written to have a new process in GeoServer show up from Python code, which I personally find quite good, because it's relatively compact and, as I said, you can iterate on top of it, because GeoServer reloads the Python code if it notices it has been modified. And this is an example in Jython, but as I said, you can use a lot of other scripting languages. And yeah, this is it. APPLAUSE Q&A. Quick question about the custom process manager that you wrote for the WPS thing. Yes. Is it able to cancel a process? No, not at the moment. So we have some sort of support at the API level to cancel a process, but for now none of the processes that we have written so far is cancellable. Q&A. You say you can get the status of the process, but is that a percentage, like 20%, 30% of the way through? That depends again on who coded the process. Some processes can give back some estimate of where they're at in terms of a percentage, others don't. Q&A. How easy would it be to integrate with GRASS and essentially use GRASS modules for the processing? Q&A. Getting it to link with GRASS is not a five-minute thing, because you have to set up the mapset, and then you have to export the data, run the processes, and get the data back from the mapset. So it can be done, but it would take at the very least a few days of development to do it. Q&A.
Is it possible to call an external service or function from within a process? Q&A. You are writing code, so you're free to do whatever you want. Q&A. Can you call an external WPS? Q&A. It's all in your implementation. You get the data and once you have it, you can do whatever you want with it. So if you want to send it over to a remote service, you can. Q&A. So there is no WPS cascading? Q&A. No, no, no. We don't have a WPS cascading ability at the moment. It could be written, and that also would take a few days of development to be done. Q&A. Thank you very much.
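As a concrete companion to the buffer example shown at the beginning of the talk, here is a hedged sketch of a synchronous WPS Execute request sent from a script. The WPS 1.0.0 request structure is standard; the JTS:buffer process name and its geom, distance and result identifiers are what GeoServer typically exposes, but treat them as assumptions and verify them with a DescribeProcess call against your own server.

```python
# Synchronous WPS Execute: buffer a WKT line by 2 units and get WKT back.
# Process and parameter identifiers are assumptions; verify them with
# DescribeProcess against your own GeoServer instance.
import requests

execute_request = """<?xml version="1.0" encoding="UTF-8"?>
<wps:Execute service="WPS" version="1.0.0"
    xmlns:wps="http://www.opengis.net/wps/1.0.0"
    xmlns:ows="http://www.opengis.net/ows/1.1">
  <ows:Identifier>JTS:buffer</ows:Identifier>
  <wps:DataInputs>
    <wps:Input>
      <ows:Identifier>geom</ows:Identifier>
      <wps:Data>
        <wps:ComplexData mimeType="application/wkt">LINESTRING(0 0, 0 10, 10 10)</wps:ComplexData>
      </wps:Data>
    </wps:Input>
    <wps:Input>
      <ows:Identifier>distance</ows:Identifier>
      <wps:Data><wps:LiteralData>2</wps:LiteralData></wps:Data>
    </wps:Input>
  </wps:DataInputs>
  <wps:ResponseForm>
    <wps:RawDataOutput mimeType="application/wkt">
      <ows:Identifier>result</ows:Identifier>
    </wps:RawDataOutput>
  </wps:ResponseForm>
</wps:Execute>"""

resp = requests.post("http://localhost:8080/geoserver/ows",
                     data=execute_request,
                     headers={"Content-Type": "text/xml"})
print(resp.text)   # the buffered polygon as WKT (or an exception report)
```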
This presentation will provide the attendee with an introduction to data processing in GeoServer by means of WPS, rendering transformations and SQL views. We will start with a brief introduction to GeoServer WPS capabilities, showing how to build processing requests based on existing processes and how to build new processes leveraging scripting languages, and introducing unique GeoServer integration features, showing how processing can seamlessly integrate directly with the GeoServer data sources and complement existing services. The presentation will move on to show how to integrate on-the-fly processing in WMS requests, achieving high performance data displays of heatmaps, point interpolation and contour line extraction without having to pre-process the data in advance, and allowing the caller to interactively choose processing parameters. While the above shows how to make GeoServer perform the processing, the analytics abilities of spatial databases are not to be forgotten; the presentation will move on to show how certain classes of processing can be achieved directly in the database. Eventually, the presentation will close with some guidance on how to choose the best processing approach depending on the application needs, data volumes and frequency of update, mentioning also the possibility to leverage GeoServer's own processes from batch tools such as GeoBatch. At the end the attendee will be able to easily issue WPS requests both for vectors and rasters to GeoServer through the WPS Demo Builder, enrich SLDs with awesome on-the-fly rendering transformations and play with virtual SQL views in order to create dynamic layers.
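The asynchronous mode discussed in the talk and in the abstract follows the usual WPS 1.0.0 pattern: the Execute request asks for a stored response document with status updates, and the client then polls the statusLocation URL returned by the server. The sketch below shows only that polling side, under stated assumptions: the endpoint URL and the execute_async.xml file are placeholders, and the element names come from the WPS 1.0.0 schema rather than from anything GeoServer-specific.

```python
# Polling an asynchronous WPS 1.0.0 execution until it succeeds or fails.
# `execute_async.xml` is assumed to hold an Execute document whose ResponseForm
# contains <wps:ResponseDocument storeExecuteResponse="true" status="true">.
import time
import xml.etree.ElementTree as ET
import requests

WPS_NS = "http://www.opengis.net/wps/1.0.0"
endpoint = "http://localhost:8080/geoserver/ows"          # placeholder URL

with open("execute_async.xml") as f:
    execute_request = f.read()

resp = requests.post(endpoint, data=execute_request,
                     headers={"Content-Type": "text/xml"})
# The ExecuteResponse carries a statusLocation attribute that we can poll.
status_url = ET.fromstring(resp.content).attrib["statusLocation"]

while True:
    status_doc = ET.fromstring(requests.get(status_url).content)
    if status_doc.find(f"{{{WPS_NS}}}Status/{{{WPS_NS}}}ProcessSucceeded") is not None:
        print("execution finished:", status_url)
        break
    if status_doc.find(f"{{{WPS_NS}}}Status/{{{WPS_NS}}}ProcessFailed") is not None:
        raise RuntimeError("WPS execution failed")
    time.sleep(10)    # download-style processes can run for many minutes
```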
10.5446/15570 (DOI)
I'm Chris from the UK Met Office, and I'm going to talk about OpenWIS and the WMO Information System. The WMO, the World Meteorological Organization, coordinates the national meteorological services around the world, and meteorologists have been exchanging observations and forecasts internationally for a very long time, because the whole business only works if everybody shares their data. The WMO Information System, the WIS, is the new infrastructure for doing that. It defines three kinds of centres: Global Information System Centres, the GISCs; Data Collection and Production Centres; and National Centres. The GISCs are spread around the world, places like Exeter, Toulouse, Moscow, Washington and Brasilia, and each one holds a catalogue of the meteorological and climatological data and a cache of the essential data, which is replicated between the centres. The idea is that you can search the catalogue a bit like you would search Google, find the dataset you want, and then request it ad hoc or subscribe to it. And because these systems have to keep running even if a centre fails, the GISCs back each other up, which I'll come back to at the end. So the catalogue is at the heart of it, and the catalogue is built on standard metadata.
Every dataset that a centre publishes gets a metadata record in the catalogue, and the standard we agreed on is ISO 19115, serialised as XML using ISO 19139. WMO has actually produced its own restrictive profile of ISO 19115. There's a lot of generality in there and we said we don't need that full generality, so we've pinned it down and it is a restrictive profile. There are no extensions, it's restrictive, so it's a strict subset, and we've all agreed around the world what goes in there. We're in the process of updating it to version 1.3 and we'll update it again, because we know that ISO 19115 is going to change next year, and it's going to change quite significantly actually, my understanding, because the revision process is working its way through, because it's five years old. The search is ISO 23950, which used to be known as ANSI Z39.50, and that's the search protocol that came from the library community. So it actually knows about narrower and broader search terms if you don't find anything, and it can federate searches to other nodes. SRU 1.3 is search and retrieval by URL. So it's all that stuff in the search box, and CSW is optional. In terms of dissemination, and how you copy the caches between centres and how you do publication and subscription, none of that's been standardised yet, because we felt the standards weren't mature enough. The reason we chose ISO 19115 and Z39.50 is: suppose I want to search on some climate change data to do with deforestation in Vietnam. Well, it may be that the definitive document is not in Vietnam but in Paris, on paper, which is quite likely. So therefore it would be nice if we could actually plug in the libraries and federate searches that far, and that's the long term plan. Nobody's done it yet. We started a procurement to get some software in Europe. We had a kind of a start in 2008. That fell apart for various reasons, mainly to do with money. It just cost too much. That was led by Deutscher Wetterdienst, the German service. So they decided to go and do their own thing. In 2009, the UK and Météo-France started a collaboration. Météo-France's commercial arm joined and then Korea and the Australians joined. We shared the money and split it five ways. We went to an open ITT, a European tender. It was won by a company called Acre, which is French. Surprise, surprise, they are about 300 metres from Météo-France. They started writing what we now call OpenWIS. WIS is the WMO Information System. It's about as simple as we could get it. We envisaged it would be made out of open source as much as possible. We suggested to the vendors some possibilities. The route Acre took was GeoNetwork plus lots and lots of middleware. OpenAM used to be known as OpenSSO, right? Before Oracle bought it. Now it's called OpenAM, independent of Oracle. That's the security layer. So out of the total body of code that's OpenWIS, probably about 20% is GeoNetwork. The other 80% does the caching, dissemination, monitoring, industrialisation. So that software was accepted last year.
It's still in warranty. We made the decision in 2009 that we would release it as open source. We had a big battle with Météo-France's international commercial arm. They didn't want to do that. They were outvoted. It's quite interesting watching Météo-France. This is not part of Jack's presentation, right? It's quite interesting watching Météo-France and their commercial arm argue. You don't normally see that in public, right? Now, Météo-France commissioned a small ITT from a company called Altaway just to analyse the software we've got from Acre, independently of Acre, to see what the intellectual property issues are, what the legal problems are about chopping the software up, or whether we can just stick an LGPL on it and away we go. So they did an analysis. They said it's a modular architecture, which may cause problems precisely because it's a modular architecture: the different modules could be legally considered different bits, and therefore we may need different IPR on each component. What I should say is that the modular architecture has been very effective. We've been running our node of the catalogue for two years now, and if certain parts go down for some reason, there's a problem in the computer room, catalogue access fails, the other bits don't fail, the dissemination works; or if the dissemination crashes, which is not very often, the catalogue just keeps working, or the user interface keeps working. So it is in fact a very robust architecture and very resilient. So those are the legal issues: what exactly is being offered as open source. We've got all those other middleware bits, Tomcat, Apache, etc. It's not quite clear where the boundaries are for the legal content. Some of it can't be redistributed because we built it on the enterprise version of JBoss. So there's a bit of work there to clean that interface up, and there are some bits where we're interfacing the dissemination system to our own internal systems or accessing our own internal databases. So we specified an interface, but everything on the other side of the interface is ours. So we're not going to release that. We'll release the generic interface and say, well, you're now going to plug it into your PostGIS, your Oracle or whatever you want, or your file system. So it's a non-trivial problem. So that's the sort of rough architecture. The metadata portal, at the bottom right, is what takes you into GeoNetwork. The indexing services: GeoNetwork has Lucene, and what Acre did was take the Lucene out and put in Solr, and that has had certain advantages, because we can now separate it out onto a separate physical machine or virtual machine. We actually gave them guidance as to how many users we wanted to support, and it's several thousand simultaneously. And so if we just throw hardware at the indexing, it's fine. It actually is quite fast. And the data service is delivering the data: once you've identified something in the catalogue, you get the record, you click on it and it takes you to the data, and if you're authenticated and authorised, you will get the data somehow, depending on what the service is. Now the security services, they're kind of orthogonal, right? They're all outside, but we think they're really important. So the intent of the catalogue is that it's totally open to anybody in the public.
And in theory, we will also make the catalogue for public data open to anybody in the public so they could request publicly, freely available data. In the short term, we haven't done that because we're a little bit worried about it. Certainly in the Met Office, we've got a public website, one of the busiest public websites in Britain. If we forecast snow, we know we're going to get 16 million hits in two hours. Right? And it takes quite a bit of infrastructure to look after that. And so if we start saying, oh look, there's all this freely available weather data from anywhere in the world, all you do is click here, you can get it, then people are a little bit twitchy about having 16 million hits. So for the time being, it's just open to anybody in other met services around the world. And then we just slowly open up the door when we're fairly sure that we can cope with the demand. But that doesn't really impact the architecture. It's SAML2 and all that kind of stuff, all that security stuff, which I don't understand. So then we've had a debate within the consortium about what licences we should use. The French got a bit confused and upset about GPL2. Can we use it or can't we use it? Is it compatible? Why can't we use GPL version 3? We don't think it's a real issue. So as GeoNetwork is published under GPL2, is it still under GPL2? Yeah, fine. Okay. We don't see that as a problem. And then we had the lawyers in France and the lawyers in the UK and the lawyers in Korea and the lawyers in Australia, who can't all agree on exactly how they're going to do it. I'm not going to say any more. We're still discussing it. What may be a conflict may not be a conflict. There are lots of licences, right? So we're all going to go and try and do it all under GPL3, unless there's some other stuff, because there's other intellectual property besides the software, such as all the documentation. That's all going to be Creative Commons, CC BY, whatever BY stands for. Anyway, it's pretty open. Version 3. And we're going to put it all on GitHub. Yeah, so the lawyers of all the institutes: we're government organisations, right? The lawyers have to be happy. Météo-France has actually trademarked openwis.org, but that's going to be shared by everybody. Somebody's just pursuing how they're going to get it registered internationally. There's some kind of organisational treaty you can sign up to with the World Trade Organisation, and then it becomes automatically worldwide. Right? So, to summarise all of this: when we release it, we're going to do it on GitHub. Fine? Okay. And we are going to use Jira. Yeah, we looked at all this kind of stuff, and there are all the things that the French use that the Brits don't use, and vice versa. So it's going to be GitHub and Jira. So let's skip that. As I just said, fine. Okay. I've said that as well, right? So the source code at present is sitting on Git inside the Met Office and inside Météo-France. So once we've sorted that, we'll go public and it will appear. And then we're going to have to sort of establish how we're going to do all of this. Who's going to have the intellectual property? Well, we've decided how we're going to do that.
And we'll need people to do the architecture: who's going to do the development, how we're going to do the assurance, how we're going to do the community development, how people are going to buy in if they want to spend money on it, and if you don't spend money on it, what do you get, and how is support managed. Right? So we envisage there will be self-support from the community and also commercial support. Right? Because basically we think this software will be used in quite a lot of countries around the world. Right. So at present, the software is about 70,000 function points, you can take any metric you want really, and we think it probably will require about two people full time looking after it. That's fine. We've got to talk to all the other open source people, all the middleware bits, to make sure we keep track of what's going on. And we're very keen on better integration with GeoNetwork, because when this started in 2009, we took a snapshot of 2.6, or Acre took a snapshot of 2.6 or whatever it was, maybe 2.6 with a bit of 2.8, and it diverged slightly. So GeoCat are actually just about to deliver a report on how we could merge back into the main trunk, or at least some bits of it. We can merge it into the main trunk. Which bits? There are some bits Acre have done that Jeroen quite likes, and some bits in 2.10 that we quite like and Acre didn't put in. So there's definitely an advantage for both of us to try and merge, even if we never completely converge the two development lines. There are quite a lot of chunks there that are useful to exchange and save on. Next steps before publication. Well, we've done most of the legal audit. We are still reorganising the code, because as we got it from Acre, it was a bit ad hoc. It was not really middleware agnostic, and it uses specific versions of JBoss and Apache etc. So we're trying to make that agnostic, so that it's easy to install and it's less of a security risk. Of course some of the middleware is a little bit old, and that's a security risk, and we've actually just finished our security audit. We did a contract to get somebody to do some documentation that Acre hadn't really done or we weren't very happy with. And in fact the people who did the documentation were the people who were subcontracted by Acre to write some of the code. So it's fine. We've already set up the private GitHub and we've already reserved a public GitHub. And the lawyers are now busy trying to set up a non-profit limited liability association in Belgium, and it will own the intellectual property rights. There will be five directors, one from each founding organisation of the original collaboration, and it will own the IPR. And that was supposed to have happened three months ago. It hasn't quite happened yet. The lawyers are talking about it. You're trying to get a Korean lawyer to agree with the Brits and the French and the Australians that they're going to set up an independent organisation in Brussels. It just takes time. It takes time. We should have chosen the Netherlands, shouldn't we? There are some estimates of the amount of work we have to do. We're about halfway through the lot. That's what it actually looks like at the Met Office. That's the public portal, if you just search on OpenWIS, Met Office, etc. GISC stands for Global Information System Centre. And you can recognise it's kind of a GeoNetwork thing. And of course if you do a search, you'll get... Did I go the right way? No. It's the wrong way. You get some results back. So there's the...
That screen is the first 10 results out of 715, looking for Japanese wind data on the French system. That's on the French system as opposed to our system. And the idea between us and France is that we've chosen to mirror each other and offer a mutual service. And we'll do it in such a way that... okay, you get the same results if you go to China or Japan or Korea or Australia or Washington. You'll find the same results. The Chinese and Japanese have implemented it completely differently. If you go to Germany, you'll see the same kind of results when you do a search, and Germany has implemented it completely differently, using proprietary software. But they're all federated through the OAI-PMH protocol. What I'm going to say is, what we'll do between the UK and France is we'll exchange subscription information. And then if one node goes down, if an airplane lands on Exeter or there's a small asteroid, France will just keep issuing the forecasts, or the products, or whatever people have subscribed to. And so that's how we think. Right? We really do. And that's how we have implemented the nuclear pollution forecasts. So for example, the UK and France have responsibility for any nuclear pollution forecast in Europe and Africa. So any met service in the world can ring up and say, I suspect there's been a leak, please can we have a forecast within 10 minutes. And they'll get it. Right? But we back each other up, because before the Met Office moved to Exeter it was in Bracknell, and 20 kilometres west of Bracknell was Britain's atom bomb factory. So there's a reasonable chance that if there's an accident in Britain, we wouldn't be functioning. So this out-of-country backup is essential. Right? That's where we are. Any questions? Thank you, Chris. We've got a couple of minutes for questions before the changeover to the next session. So, questions for anyone? If you're running a met service, fine, you're probably interested in the software. Right? If you're not running a met service, you may not be. I think the key issue is going to be that there'll be some industrial-strength stuff we've done that GeoCat can benefit from. OK. Well, thank you. If you can give a round of applause for Chris. Thank you.
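Where a WIS node such as OpenWIS exposes a CSW endpoint (the talk notes that CSW is optional alongside Z39.50 and SRU), the discovery catalogue can also be queried programmatically. The following is a hedged sketch using OWSLib; the endpoint URL is a placeholder, and the exact OWSLib call signatures should be double-checked against its documentation.

```python
# Hedged sketch: full-text discovery against an ISO 19115/19139 catalogue
# through its CSW interface using OWSLib. The URL is a placeholder.
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

csw = CatalogueServiceWeb("https://example-gisc.example.org/csw")  # placeholder

# Search metadata records whose text mentions surface temperature observations.
query = PropertyIsLike("csw:AnyText", "%surface temperature%")
csw.getrecords2(constraints=[query], maxrecords=10, esn="summary")

for identifier, record in csw.records.items():
    print(identifier, "-", record.title)
```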
OpenWIS Open Source Software The World Meteorological Organization (WMO) has been working for several years towards upgrading its global infrastructure to support all of its international programmes of work, both operational and research-based, to collect, share and disseminate information. The new infrastructure is called the WIS (WMO Information System). It identifies three top level functions, namely: • GISC: Global Information System Centre; • DCPC: Data Collection and Production Centre; • NC: National Centre. Météo-France, the UK Met Office, the Australian Bureau of Meteorology, the Korean Meteorological Administration and Météo-France International have developed the OpenWIS software, coupled with their existing systems, to perform the three functions required by the WMO Information System; that is, GISC, DCPC and NC. Based on open source bricks, with GeoNetwork, OpenAM, JBoss, Apache, Solr and PostgreSQL, OpenWIS is going to become open source. Beyond the WIS requirements, the OpenWIS consortium is building new functionalities for OpenWIS that will fit the OGC (Open Geospatial Consortium) and INSPIRE (European directive) aspects, with standard OGC interfaces, a portal providing the viewer function with discovery, search and request possibilities, and in the near future the billing and the transformation services. The current functional components of OpenWIS are: • Data Service and its cache of essential data • Metadata Service (ISO 19115 catalogue synchronised with the OAI-PMH protocol) • Security Service • Monitoring and Control • Portal (Discovery, Search, Browse, Request, Subscription) Météo-France operates various dissemination tools. OpenWIS provides a generic interface that Météo-France has adapted, covering requests for dissemination and their monitoring. OpenWIS interacts with data sources to respond to ad hoc or periodic subscription requests, either directly via harness connections or relying on an SOA OGC infrastructure. The new challenge of the consortium is to share the open source model and expand membership beyond the founding members. The reflection within the consortium allows us to give some trends: • A steering committee for the integration of new functionalities (spontaneous or not) • One or two licences (the portal and the metadata component inheriting the GeoNetwork licence) • A strong but reduced team for the initial development (Met Office and Météo-France) • Git for the management of versioning and integration • The will to put the software on the shelves of the World Meteorological Organization • Entering the open source arena by the end of 2013
10.5446/15568 (DOI)
Where we are going with OL3, what we want to do, the direction we're going. So first of all, we think that there is a convergence between 2D and 3D, and people expect to be able to display 3D objects like 3D buildings and 3D terrain on top of 2D projected tiles. So, as on this picture, we can see buildings on OSM tiles. We're not yet there, we don't do any 3D right now, but that's what we want to do. And it's also interesting to note that this is also where big players like Google are going. Another key thing for the project is vector rendering. We want to be able to display many vector features with complex styles on the map. And we think we can use technologies like WebGL to be able to achieve that. And vector tiling is also something we are looking at, and something we want to take into account from the very beginning of the development of the library. So 3D and vector rendering are the main things, I would say. And we think that to be able to display 2D as well as 3D objects, with complex styles and many objects, we need to treat the map as graphics. What this means, more specifically, is that we want to use a graphics API like Canvas and WebGL, which modern browsers support now. And we already do that a lot. The library, in its current state, uses WebGL and Canvas quite extensively. And we think that by using these technologies, WebGL and Canvas, we can also get very good performance. Now I'd like to take some time to position OL3 within the current suite of open source web mapping libraries. So on one side, we have OpenLayers 2 and Leaflet, 2D libraries, very popular. OpenLayers 2 has many features that people use and need. And Leaflet is a great library. It's very lightweight, it works great on mobile devices, and it provides an API which is very convenient for people to use. On the other side of the spectrum, we have 3D libraries, virtual globes, like Cesium and OpenWebGlobe. They are fantastic libraries, very powerful libraries, but they are also very complex to use. And with OL3, we want to be right in the middle of that. And we actually want to cover the entire space. We want to be able to do 3D, and we also want to provide a convenient API, an easy enough API for people to use. And obviously, we also want to support the features that OpenLayers 2 supports, because people use those features and need them. So that's it. I like to say that OpenLayers 3 is a very ambitious project, a very ambitious library for ambitious maps, maybe. So that was the vision. So now I'm going to talk about some of the design principles we apply. These are the things we care about while developing the library. So the first thing is separation of concerns, separation of concepts. So within the library, everything is very well separated. We have many modules, many files, each has a clear task, a clear responsibility. And this actually trickles down to the API. And I'm going to give a few examples here. So the main object that you use at the API level is the map. That's the main object. And the map has a renderer. The renderer is the object that takes care of the rendering operations. So all the rendering operations are done by the renderer object. And you won't find any WebGL or Canvas code outside the renderer. And the map also has a view. The view is what determines what the user sees of the map.
So for example, we have a 2D view object, which is determined by a center, the center of the map, a resolution, and a rotation. And the map has layers, which is very typical, but actually a layer has a source. And the source represents the data. It represents the remote service that provides and serves the data. So we have this distinction here. So, if we borrow from the MVC model, the source is the model, and the layer is the view. The layer determines what you see. For example, it has properties like opacity and visibility. Another place where things are separated is interactions and controls. We have these two concepts, and Tim, I think, will talk more about it. So basically, interactions respond to browser events on the map. So we have, for example, the double-click interaction, the double-click-to-zoom interaction. This is just an interaction, there is no presence in the DOM. On the contrary, controls have a presence in the DOM. So for example, the zoom slider or the scale line or the zoom buttons, those are controls. Okay. The other thing we care about is obviously performance. We want the library to be super performant, and we take special care of this. So we are very careful with the JavaScript code we write. We want to avoid boxing and unboxing operations that JavaScript engines need to do if you're not careful with the types you put in an array. This is just an example. We also want to be very nice to the garbage collector, and we try to reuse objects within the library as much as possible. We use this new browser API function, which is requestAnimationFrame. Our entire rendering engine is based on this API function. And we watch the frame rate. We use specific tools to assess the frame rate that we achieve. And we also try to redraw as few pixels as possible in the renderer. And one big thing is that we use the Closure Compiler to get a very compact library with optimized code. So this is just an illustration of the things we are looking at and are careful about for the performance of the library. Another thing: the library has no opinion on the UI, which means we use CSS a lot, and it's your responsibility as the developer or the designer to customize the controls and everything. And we also provide objects like ol.Overlay, which allows integration with other libraries, for example Bootstrap, if you want to use Bootstrap. So with ol.Overlay, you can easily create a Bootstrap popup, for example. So that was design principles. Now I'd like to give an overview of the current features we support. So we support various tile sources or providers: OSM, XYZ type providers, Stamen, TileJSON, Bing Maps, WMTS and WMS. And for WMS, we support both tiled and single tile. We have a number of controls: attribution, full screen, mouse position, scale line, zoom slider. I will show you some of those in the demos. We have a vector layer with rule-based styling and a very powerful expression system. If you want to know more about that, you can go to Tim's talk. Yes, and we have many parsers already, like GeoJSON, GPX, KML, Filter Encoding, so those are OGC parsers, GML, WMS capabilities and WMTS capabilities. And we have more: we have an animation framework that we use internally and that we also expose, so users can do all sorts of animations based on that framework. And we also support geolocation and device orientation. And actually there is more than that.
Just to say that the library has quite a lot of features already supported. Just a few, just to demo a few examples to show what an OL3 map looks like. So this is a big map, and you can see the controls I was talking about. So these are very common: plus and minus, and you can see that everything is animated. When you zoom, zoom in, zoom out, it's all animated. Even when you pan, there is this kinetic momentum effect while panning. We have this scale line control here, excuse me, and other controls as well. And something that we support is, whoops, yeah, rotation. So we support rotation, which can be very useful on mobile devices. And we have this, you can, when I move, we have this binding stuff. When I rotate the map, I can see the input range up there that moves as well. And we have this full screen control that allows switching the browser to full screen mode. Okay. So I mentioned animation in the previous slide, previously. This example here demonstrates what you can do with the animation framework. So you can do very, very, this is just an example. You won't use that in a real application, I think, but this kind of demonstrates the capabilities of the library. Elastic to Moscow, bouncing to Istanbul, and let's spiral to Madrid. Okay. So now I will let Cedric talk about the application they're building. Okay. So I'm working for swisstopo, it's the national mapping agency of Switzerland, thanks Eric, for letting me introduce geo.admin. My main goal today is simply to say that OpenLayers 3 is already very good, it's working. We can use it in productive applications. And that's what we are going to do with map.geo.admin. map.geo.admin is based on OpenLayers 3, AngularJS and Bootstrap, so we try to use really large, modern libraries. We created several components on top of OpenLayers, so kind of widgets, and we name them components in Angular jargon. They are directives, mainly. The mobile and desktop application is only one application. Previously we had two applications; now with OpenLayers 3 we are able to create a very nice application on mobile as well. So we made only one application. It's really lighter. Previously it was 600 KB, and now it's 250 KB, and we can probably reduce that. It will be in production on the 17th of October, and the code is available on GitHub if you need it. So one aspect is the responsiveness of the application. Here you have the application on the phone, the application on the tablet, and the application on the desktop. So it's exactly the same code base, it's exactly the same application. With Bootstrap we are really able to adapt the application depending on the width and the height of the screen, and also the touch components. I will maybe make two simple demos. First, the responsiveness: you can see typically on the left is this tool, that's a typical OpenLayers 3 control, and now when it's too small, when the width is smaller, then we have only two buttons, and we don't have the zoom slider control. You can see something equivalent also here in the search tool. We move it, and we also move it according to the size of the screen. Okay, and a nice function is the all-in-one search. Now you can search here for layers and for locations. In this case I put water, and with a nice preview function which works very well with OpenLayers. And third, this preview is also active in the catalog. You can select layers and simply see a preview in order to check what the layer is. In summary, really, I think you can already use OpenLayers 3, so feel free to do that. Thank you.
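For the animation demo mentioned above (elastic to Moscow, bounce to Istanbul, spiral to Madrid), the published OL3 animation example used pre-render animation functions queued on the map. The exact function names in the 2013 beta may have differed, so the snippet below is an assumption-flagged sketch in the spirit of that example rather than a guaranteed API reference; the coordinates are approximate.

```js
// Sketch of a "fly to" style view animation, loosely following the later
// published OL3 animation example. ol.animation.*, ol.easing.* and
// map.beforeRender() are assumed here; names may differ in the beta.
var view = map.getView();
var istanbul = [3226000, 5012000];   // approximate Web Mercator coordinates

// Queue a pre-render animation, then change the view; the map interpolates.
var pan = ol.animation.pan({
  duration: 2000,
  source: view.getCenter(),
  easing: ol.easing.bounce           // or ol.easing.elastic for "elastic to Moscow"
});
map.beforeRender(pan);
view.setCenter(istanbul);
```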
To conclude, I will give a quick status update on the project. So we're about, if not already done, we're about to release 3.0 beta 1. So we had a series of alpha releases this summer and this is going to be our first beta. The things we are currently working on: we work on a new website, a new build system. It's very important to be able to do custom builds; OpenLayers 3 is already a large library, so it's important to be able to do custom builds tailored for your applications. So that's what we're working on. WebGL vector is a key thing, something we're currently working on. We're making progress, but we're not yet there. Vector editing is something we've been working on. I think Tim also will show you the current status of that. That's the end of the talk. Thank you very much. Thanks, Eric and Cédric. Do we have any questions? Is there support for time? Not directly right now. You can change, for example, we support WMS and you can change WMS parameters on the fly, so you can easily add a time parameter, but it's not built into the library, I would say. But this is something we've discussed already. I noticed in the list of base layers, there was Bing and a bunch of other XYZ and tile layers, but it didn't mention anything about Google; does that mean Google is not supported? Google is not supported right now. It's been difficult to maintain in OL3 because of the way we interact with the Google Maps API, so it's not clear yet that we'll support Google Maps in OL3, to be honest. Any other questions? I think there's no sort of backwards compatibility with OpenLayers 2. No, there is no compatibility. It's completely, it's a rewrite and it's a new API. So maybe the compatibility is more on the features we support. We try to support what OL2 supports, which means, our objective is that if you have an OL2 application, you will be able to update to OL3. And also, are you trying to maintain OL2 still, and if so, how long? Yeah, we will maintain OL2, and how long? I don't know exactly. It depends on our progress with OL3, I would say, but I think OL2 is here to stay for some time. That's my opinion. You say it's here to stay for some time, but it depends on the progress with OL3. It sort of sounds like once OL3 is finished, you will drop support or drop the maintenance. No, I wouldn't say that. Thank you. It's looking really beautiful. Really nice, really fast. Are you doing tilt as well as rotation? This is in the plan, obviously, with the 3D stuff, and yeah, we'll do that for sure. Is that going to be in the 3.0 release or 3.1 or later? No, the 3.0 release will be 2D only, without tilt, I think, but we'll add this later. Sorry, I didn't... What's the minimum browser requirement? So we support... as I said, we use Canvas extensively. That's one of our main rendering engines. So this is IE9. So we support Chrome, Firefox, obviously, and IE9+. And IE11 is supposed to support WebGL, so this is a very good thing for us. That was my question. Okay. Are there any plans to support SQT symbols, for example, point symbols? I don't think this is something we've discussed yet, but it would make sense to me, at least. Any other questions? Okay, now I'd like to thank Eric and Cédric. Thank you.
OpenLayers 3 enables a huge range of new web mapping functionality. In this talk, we'll show off many of the cool features of OpenLayers 3, including: Rich interaction and animation Virtual globe integration Raster layer effects Wide-ranging data source support The talk will be light on technical details and heavy with cool demos to show you how OpenLayers 3 opens up new and exciting ways of presenting your geospatial data.
10.5446/15566 (DOI)
A research company based in Belgium. And as you can see, he's going to be talking about open source software for land cover mapping. So I'll hand over to Peter. Thank you very much. Thank you for being so patient. I hope I can fulfill the expectations, which must be very high by now, waiting so long. So I'm going to talk about open source software for land cover mapping based on remote sensing data. This is the outline of my talk. I'll give a brief background on software for remote sensing. Then we'll jump into a case study, which I use to show how this software was used. I'll give the methods and some results before I conclude. So first some background. Image processing for remote sensing applications has previously been reserved for the commercial packages, as we know, for example ENVI, ERDAS Imagine, eCognition, and some others. And luckily, there has been a counterpart on the open source side, starting, of course, with the well-known GDAL/OGR tools with their API. Not all researchers, because in the research environment where I'm from, many people are still using the commercial packages, because they're either not capable or not interested, or they don't have the time to use the API to really implement the algorithms that are needed. For example, the machine learning techniques. And for that, there are some other packages, like, we all know, GRASS, the GIS software suite. A bit more recently, there's a very nice package from the CNES space agency in France, the Orfeo Toolbox, and the next speaker will talk all about that. It's quite a large package with an API, a development environment, a command line interface, and a GUI. The package I'm talking about now in this speech is pktools. I started to work on that during my PhD quite some years ago, and I decided to release it as open source. It's much more humble than the Orfeo Toolbox, and it's only a command line interface. But I want to show what's possible with it. The case study is a data fusion contest of the IEEE. Every year, the IEEE Geoscience and Remote Sensing Society is issuing a contest. And the contest is always about data fusion. And mostly, you have to automatically classify an image. And this year, in 2013, there was a hyperspectral image acquired over Houston, or the Houston urban area. So it had to be data fused, let's say, with a digital surface model, acquired with a LiDAR sensor. And for those of you who are not familiar with hyperspectral images, those are a special kind of digital cameras, let's say, or sensors that acquire data in a large number of spectral bands. So instead of having an RGB camera with only three bands in the visual spectrum, it has, for example, the CASI sensor has 144 very narrow spectral bands, a lot of information to deal with. The spatial resolution was two and a half meters per pixel. And that was also the case for the digital surface model. The digital surface model gives you height information. So in combination with the hyperspectral data, you have a lot of information, not only on the spectral side, but also on the height. So that was acquired with LiDAR, a laser-based sensor. It was only one band of information, which is the height. And it has been acquired at the same spatial resolution, and the images were already co-registered. That was already helpful. The contest consisted of an automatic classification of 15 classes, which you see over here. I'll go into a bit more detail on the classes in a minute, when I will talk about the challenges.
Of course, there are some challenges, otherwise it wouldn't be a contest. First of all, there was a data fusion. You had to find the best way to combine the two data sets, the LiDAR information of height and the spectral information, in order to obtain an optimal classification, so the best classification results. One of the challenges was the similarity of the classes. Some of the classes were very near to each other. There was also a cloud shadow cast on one part of the image, which turned out to be quite a difficult thing. And I didn't solve it, actually. So I already say that now. It had to be automatic, so it had to be an automatic classification, no visual interpretation involved. And the extra challenge I put here is I was interested in how can we solve it only using only open source tools. So a bit more on the similarity of the classes. You see the classes legend over here. As you see, there are three classes for grass. The grass, there is healthy grass, stress grass, and synthetic grass. So the synthetic grass was for a sports field. In the sports field, you had the middle part, where people run quite often into this was a stressed grass, so the non-healthy grass, let's say, and the healthy grass. And from the visual eye, it's very difficult to distinguish those. Luckily, we had a high-respectful data, which is very capable of dealing with those subtle differences in classes. Then one of the challenges turned out to be much more difficult than the similar classes of the land cover, which was that some of those classes were mixed land cover and land use. Land cover is typically related to remote sensing, because they're two different types on the ground. The problem is, when you have land use, the classes can be very similar, can be the same actually from the texture, from the material itself. But it's just how we use them. As an example, residential and commercial areas for remote sensing applications, it's actually the same. The same with roads and highway. They're built out of the same material, but it's just how we use it, that the classes differ. And then finally, for the parking lots, there were two types of parking. One was the free parking, and the other was the taken parking lots. This is, it's very difficult to see here. I'm sorry for that. But there was a huge cloud shadow. So a cloud was hanging over here, and it cast a shadow all in this area, which is very difficult for an automatic classification. Because actually, you want to classify, or the computer to classify this building, if it's another building in commercial area here, it has to be classified in the same class, but from a spectrum, it's totally different because of the cloud shadow. OK, then about the methods, it's about a supervised classification, which is a machine learning method, which means that you have a number of labeled pixels, which were provided by the organizing committee, and it enables us, or the computer, to learn how to classify unseen data. So there's a subset, which is provided as a training set. And the approach I was using is open source tools only, heavily based on GDAL and OGR, and on this PK tools, which is built on top of that for remote sensing applications and the machine learning. And I would like to invite you to, if you're interested, you can run the entire methods I will present now. You can download the data. You can download the training set and also the tools. I generated everything in a script, and I tried to explain it in a wiki page. 
So if you're interested, you can just run through it and do it for yourselves. As I mentioned, the tools have been released in GPL3. You can download them. There's a Git in Savanna. It's based on GDAL OGR. For some, it might be a curse, but for me, it's a blessing. It's only command line driven. Just as the GDAL tools are, currently it's only on Linux. It's in the stack or on the to-do list to release them for Windows, but it's not yet there. So on the top row, you see the data that was provided by the organizing committee. There's the LiDAR data. There's the CASI hyperspectral data. And there was this training sample. And the training sample was just a plain CSV file for each row, an ID number, the longitude and latitude, where you can define the pixel, and then the class it belongs to. And based on that, that was the training set. So the first program I used was a conversion of that ASCII file to a vector file. And I used the OGR library for that. So I converted it in a vector file. And overlaid that vector on the hyperspectral image in order to extract the values that were below those training sample. And when I extracted them, I created a new vector file with as attributes the actual content, the values of the pixels. And so that I obtained a vector file containing all information I needed to do the actual training for the classifier. To give you an idea of how a hyperspectral signal looks like, I give you an example here of the grass spectra. So the three grass spectra, the healthy grass in green, the stressed grass in orange, and then the synthetic grass here. What you see here, this is the spectrum, the electromagnetic spectrum from the visual part all the way up to the near infrared. And what you see here is quite interesting. So for example, for the healthy grass, the middle line here is the average spectrum. And then I also show the standard deviation for the different pixels of that particular class. And what you see here, this is the visual part in blue, green, and red. If you only see with your eyes, it's very difficult to distinguish, because all the classes are overlapping. What you see with a hyperspectral sensor is that you have all these bands here, luckily, and you see that it's quite easy to distinguish those classes here. So again, maybe on the online data, it's much more easy to see there. But for example, the synthetic grass in a normal grass, it's very difficult for us to see the difference. But here you see immediately there's a huge difference in the near infrared. The same for the building there, which was even more difficult, because here we're talking about land use difference. The houses that were built or that were constructed for the housing for the residential area, they are much like the buildings that are used for commercial area. And you see, especially in this commercial area, there's a huge variation. And there's a lot of overlap with the residential areas. Even worse, for the roads, as I mentioned, you have roads and highway, they're exactly the same. And so there's a complete overlap there for the training pixels in those classes. Again, the parking lots, a total overlap. There's another step I won't go into too much detail. But in one of the problems in hyperspectral imagery is that you have a lot of high dimensionality. And in machine learning, this is known as a curse of dimensionality. If your dimension is too high with respect to the number of training pixels you have, it's an ill-post problem. And you will not be able to solve it. 
So there's a feature selection involved, where you only concentrate on the best features. And you see that beyond a number of 16 spectral features, the classification was not improving. Then once we got the selected features from the hyperspectral image, it was time to fuse the data with the LiDAR image, and in this case, we just did a feature fusion where we concatenated the 16 spectral bands with the height information, treated it as one image, and extracted that information as input for the classifier. Now, you see that it's already better. For example, in the residential and commercial area, there's almost a total overlap here. There's still some overlap. This is the height information for the same classes, residential and commercial. But you see that the majority of the houses in the residential area are up to 20, 25 meters in height, whereas the commercial area starts from 20 meters. So they're just a bit higher. And you can use that information for your classifier. So combining these two kinds of information, you hope that the classes get distinguished better. So then the actual classification. You use both images. You make a training sample by extracting, by overlaying the vectors on that imagery. You train the classifier, and then you feed the classifier also with the actual imagery, and you obtain the final classification output. There was one final step, which is a Markov random field filtering, which is also part of the filtering functionality in the tools. This is a filter that uses contextual spatial information in order to get rid of the salt-and-pepper effect you typically have in this raster-based, pixel-based classification. So there are a lot of small pixelation effects. And with a Markov random field filter, you are able to clump some of the classes together in order to get rid of those. And you see, when I switch back and forth, you see that some of this effect has been removed. So typically, we're more interested in a more homogeneous result. And that's what this filter was used for. OK, finally, some results. We obtained an overall accuracy of 83%, which is not a bad result if you see that there were 15 classes in the end. There was also quite some similarity, and also the land use. It's quite a difficult and challenging problem. However, if you look at the winning solution, that was much better classified. So it was 94% accurate. If you look in more detail at where the differences actually are: most of it, let's say not all, is quite similar; where the differences mostly are is in this area. Remember, we had this cloud shadow. So I did not solve the cloud shadow part. So all this is a bit rubbish. Let's say you see some water over here, some roads that shouldn't be roads. And this is because it's just all dark spectrum. And what the winning solution did here was some kind of cloud shadow correction. And it would take me too far to try to solve it with a generic open source tool. So there's no real generic tool that can do this, neither in any of the commercial software packages. So if you see how they solved it, it's very ad hoc, very dedicated to how they solved it. And it's just not possible with a plain, ready-for-use open source solution. If we go, for example, to the 10th place (there were only 10 places published), you see here, it's already much closer. Because if you see the 10th place, they did not solve this problem either, you see here. So they have a bit of the same problems I had in my classification there.
And you see there, it's very close. And also the 83 is also much more closer to the 86, which explains that most of the difference. If you wanted to win, it was there that you had to concentrate on. To conclude, so we have seen, this was a contest, which is there every year from the IEEE. So it's quite a challenging land cover. There are much more easy land cover problems. There was hyperspectral data involved, as I mentioned. There's a curse of dimensionality. It has 140 spectral bands. You have to deal with those. There was then also the data fusion. You had to deal with the LiDAR data combined with the hyperspectral data. There was a cloud shadow. I've shown that it can be solved with readily available open source tools. The result was not among the top 10, but close. For the moment, it's only command line driven. No clicks. For me, it's really nice, because you can put it in a single script. I can execute it in less than a minute. Everything is done. If I want to change or tweak some things, you can just adapt the script file without any clicking involved. And if you're interested, you can do the same and just download available files. That's all. I thank you for your attention. Thank you. I have two questions. Did you consider bringing in the spatial component of dealing with topology, shape, looking at say, if you see GLGMA that impregnates some sort of e-cognition? Did you look at that at all? Yes, that's a good question. First of all, for the classifiers, for the moment, I have implemented another one, an artificial neural network. I did not use it here. Actually, because the support vector machine, which I used here, it's well known for dealing well with the high dimensionality. It's often used in hyperspectral remote sensing. And this is the reason why I used it here. I'm not sure if it would change that much by choosing another classifier. My experience in classifiers is that it's much more how you deal with the training data, how you do the feature selection, try to put in some other features. This is mostly, if we're talking about good classifiers already and random forests and the others, they're equally good classifiers. My experience is that the difference is rather in you have to search in another area. Like, for example, trying to deal with the cloud shadow, that would be the first thing to concentrate on. The second part is also quite interesting. There are, apart from the spectral and the height information, there's a lot of other information, spatial context. I've used it with a marker for random field. There it was used in some sense. I've played a bit with some texture parameters, especially trying to deal with the roads and the highways, trying to deal with there's more homogeneity in the highways than in the roads, which are more narrow. The conclusion was that it didn't make that much of a difference, or I had to spend more time in it. It was also possible. So I was not able to do a one or two percentage increase. But then, for me, it was more for the case study, it was more interesting to have a quite simple solution, which was easy to follow, and just concentrate on one package. And then? Just to have a question with the group, the pixel, and the actual object, you start dealing with objects, and you start teaching them the hard linear line, perpendicular intersections. Is that what helped in that area? Yes. I'm sure that there are some clever solutions you can use for that, indeed. Yeah. So I mean, a simple question. Yes, please. 
The data fusion technique you use in this particular project, is it similar to the typical pan-sharpening technique used in remote sensing? Sorry, I didn't get that. The data fusion technique between the hyperspectral image and the LiDAR image, is it similar to the well-known pan-sharpening technique in remote sensing? No, no. It's a different thing. Pan-sharpening tries to create one image by, typically, you have a panchromatic image without spectral information at a high resolution, and you try to put some spectral information into that to have the best of both worlds. Here, it was a different technique. You had the spectral bands, and you have the height information, and I just created one image, concatenating the several bands, having one extra band, let's say, as a layer for the height, and used that for the classifier. So I did not create a new image as with the panchromatic case, which is, in my experience, mostly useful for visual interpretation and not so much for classification. Thank you very much, Peter, not only for a really interesting talk, but for being almost exactly on 20 minutes, which, as we're fast approaching lunch, is pretty vital. OK, thank you. Only two seconds off, the best so far. Thank you.
Open source software is well established for basic raster and vector data processing, with the Geospatial Data Abstraction Library (GDAL) as one of the most well known tools. Its utilities and application programming interface (API) have become a common standard for data format conversion, reprojection, spatial and spectral subsetting. With its command line interface utilities, GDAL is better suited for the automatic processing of very large amounts of data and for repetitive processing tasks than most of its commercial counterparts. Though GDAL provides an excellent API on which more advanced image processing tasks can be built, not all users have the time or programming skills to get involved such development. In particular within the remote sensing user community, there is a large interest in machine learning techniques applied to remote sensing data.
10.5446/15561 (DOI)
Thank you for coming. I'm a little bit surprised because normally in Germany not so many people are really interested in Brandenburg, but that's fine. You probably shouldn't miss the second talk, which is Emmanuel Bielow's. We thought a little bit about how to talk about this project; we actually run it together, terrestris and Camptocamp. But I'm in a good situation because he's giving his own talk later, and now I can just drop all the stuff which is too technical for me and pass it over to Emmanuel. So please stay here if you're more interested in specific technical stuff. All right. Yeah. What I'm going to talk about: first a little bit about us, which shows you where terrestris is coming from and why we got this project, with a strong focus on what we mean by 3D underground data. And then I'm going to talk a little bit about 2D WebGIS and 3D data. Explain a little bit about the architecture and the setup. Come over to 3D WebGIS and 3D data. Go back a little bit to architecture and setup to show you what we've changed in order to do the 3D client. Have a short preview of the system here on the live system, if the Wi-Fi doesn't crash. Yeah. And of course I'm going to talk a little bit about the problems we had, because when we took on this project, we knew we were doing something which nobody had done before in that form. Okay. A little bit about us first, the company terrestris. This is part of our team. We've been doing geoinformatics for more than 10 years now, especially in the web mapping context and especially in the open source context. We're doing some development in OpenLayers, GeoExt and several other projects. So we have a strong focus especially on the client side of the web mapping stack. Yeah. I'm talking a little bit about me, but the only really interesting sentence for you is: I'm a geographer. So I know really a lot, but nothing really deep. So please don't ask too deep on that. So going to my talk. This is a 3D geological map. I think everybody of you might have seen something like that before. So these funny colored polygons just show which rocks are cropping out at which part. So this is a classical view. You can buy them everywhere. If you're lucky, you got something like that, which is a coordinate, X, Y, and which might be just a drilling somebody did into the underground and found out something like that. So we have a drilling and we have a drilling profile. And this is quite interesting just thinking a little bit about that, because you got an X and a Y coordinate which represents the point, and you have a Z coordinate for every horizon you have here, and even some attributes. So this, at the same time, is kind of the classical idea of what we think of as 3D geoinformation data. If you're even more lucky, you get something like that, which is a profile, and somebody drew something like that so you get an idea about the situation in the underground. I think it's clear this must be a kind of model and nothing which is actually 100% the reality. Things like that. And yeah, drawing that onto the state of Brandenburg, which actually paid for that project. It's in northeastern Germany. Probably you know Berlin, which is right in the center here. And what you see are the yellow crosses. These are the drillings they have, and you see the blue lines, which are a kind of seismic reflection investigation. I'm not an expert on that, but we got data at these locations, and they even did something more, they interpolated surfaces from that.
So this is an example for the top of the Permian layers and it looks something like that. Like a rough sea or something like that, which is just located there. So thinking about that, coming to the point of 2D WebGIS, I can say okay, I have some points, I have some lines, I have some polygons. What do we normally do? WebGIS. Just put it in the 2D WebGIS stuff and yeah, it looks like that, and then you could do some clickable stuff on these funny images, funny drillings, show the images from the drilling profile, and then you're fine. So doing that, I think you get bored, because this is not why you are here and this is not why I am here. So we did some funny thing: we put the data in a new PostGIS 2.0 database, and that makes you happy, hopefully, and we can do some really interesting things with the data now. First thing I must talk a little bit about is how the data comes into the database. We have another part of the project, which are real geological scientists, and they do a 3D modeling of this data, which is really highly scientific and stuff like that. I said I'm a geographer, I have no clue about that. So they put the data in a kind of proprietary model which is called GST, and they can export the data as WKT, and from that point we get the data into PostGIS and do two main things with the data, and first of all I'm just going to talk about this first one. We import the data into the database and form it into a kind of point cloud. So we have tons of points in it, just x, y, z, exactly what I've shown with the drilling profile before. So it looks a little bit like that, we have the data like that in the database, and coming back to the idea with the geological drilling again. This is the situation you normally have: you have a geological map and you have a coordinate and you drill down there. What we can do now is you can click anywhere and we just create something like a virtual drilling at that point, because we take that point, put it into the database and look which layers are underneath it, and we can even do the same with these profiles, and I can show that here in the 2D web mapping client we have. So we have a tool here and I can just click here and get the virtual drilling, or do something like that with a profile line, wait for a second, but I think the performance is not too bad, because you must think these are tons of points which I just requested and we directly generate something like that live. So how are we doing that? Of course it is virtual and it is a kind of modeling, but it works, because you see that lying X over there, this is my click point, and then we just go down. We know the stratigraphy, we know there are a maximum of 12 surfaces and they are always in the same order. So we have a kind of predefined maximum distance in which we look, and then we just look for the nearest point to our axis when we let that point fall down into the underground, and from that we just create a JSON string, send it to the client, and the rest is just programming stuff happening here. Okay, so let's have a look at the architecture, which is quite easy. We have a PostgreSQL/PostGIS database, we have GeoServer for the geo stuff, and on the client side we have a classical architecture we use in our company, which consists of OpenLayers, ExtJS 4 and GeoExt 2. So quite easy, switching over to the same slide you've seen before and to that block.
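The virtual drilling described above is implemented server-side in PostGIS; the snippet below is only a hypothetical in-memory JavaScript illustration of the same idea (drop the clicked X/Y straight down, and per horizon take the nearest point within a maximum search radius). All names and numbers are made up for illustration.

```js
// Hypothetical illustration of the "virtual drilling" idea, not the real
// PostGIS implementation. Each horizon is an array of {x, y, z} points from
// the modelled surface, in the same order as the stratigraphy.
function virtualDrilling(clickX, clickY, horizons, maxDistance) {
  var profile = [];
  horizons.forEach(function (horizon) {
    var best = null;
    var bestDist = Infinity;
    horizon.points.forEach(function (p) {
      // Horizontal distance only: we "let the point fall down" along the axis.
      var d = Math.sqrt(Math.pow(p.x - clickX, 2) + Math.pow(p.y - clickY, 2));
      if (d < bestDist) {
        bestDist = d;
        best = p;
      }
    });
    if (best && bestDist <= maxDistance) {
      profile.push({ horizon: horizon.name, depth: best.z });
    }
  });
  return profile;   // e.g. serialised to JSON and drawn as a drilling profile
}
```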
Remember, we get the data, we import it to the database, and in the second part, which is called data2, which is just a kind of silly name, we create real polyhedral surfaces within the database. So we have all the data together; it's something in between a grid and a TIN, a polyhedral surface, something like that. After importing, that might look a little bit like this, and why do we do that? It's as simple as this: if you have only 2D and you have some polygon data, you just switch on the next layer and you won't see the layer below. So that's a classical problem, you all know about this base map issue in OpenLayers and stuff like that, this is really the same. So what we really need is something like that, so that you really can get into the data in the underground, and yeah, that's what we actually did, and how did we do that? Remember the architecture from the 2D client, and this is the architecture from the 3D client, and as you've seen, there hasn't changed so much at that point. We just added a plug-in to GeoServer, which is a W3DS plug-in to GeoServer. For the first point I can point to Emmanuel, he's talking a little bit more about that later, and we replaced the OpenLayers part with a JavaScript library called X3DOM, which is for representing X3D data, which is a format used on the internet, and again I think it's part of your talk later, Emmanuel, so just keep in mind, keep sitting here if you're interested in that and look later. And we have more or less the same architecture as for the 2D client. So what changed? We threw OpenLayers out as a mapping library, replaced it with X3DOM, and we added an extension for W3DS to GeoServer, and then we went on and created something which connects the ExtJS framework with X3DOM, and yeah, I think there are some GeoExt developers here, so don't be angry, it's just my idea to bring it closer to you, what the idea behind that framework or that library is. I called it GeoExt3D, which is of course not part of the GeoExt project at the moment, and we created a kind of API on top of the X3DOM library, and that enables us to use the same ExtJS elements such as viewports or panels or something. So the two clients look very, very similar and the usage is very, very similar at that point. So then we have some API methods such as setScale, setLayerVisibility to click on a layer and switch it off, setLayerColor, getState and setState, which is really interesting stuff, because it's not that easy to navigate in that client, and if you've got a really good view you want to show in a presentation like that, you can just save the JSON string and reload it later, and of course you have the possibility, as in OpenLayers, to navigate with mouse and keys and something like that. So just a quick look then at the architecture with a slightly different figure. On the left side we have our database, here we have our GeoServer, which really serves the data to the client via W3DS, and, which is very important at that point, the data is sent to the client. So it's not like in the 2D client where we mainly use WMS stuff, it's more like WFS, we really send the data to the client, and the rest is happening in the client, and especially the data is later rendered in the client.
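The GeoExt3D-like wrapper library mentioned above is not publicly documented here, so the following usage sketch is purely hypothetical: only the method names (setScale, setLayerVisibility, setLayerColor, getState, setState) come from the talk, while the object name, signatures and layer id are assumptions.

```js
// Purely hypothetical usage sketch of the GeoExt3D-like API. The `viewer`
// object, argument types and the 'top_permian' layer id are assumptions.
viewer.setScale(2.5);                              // vertical exaggeration of the model
viewer.setLayerVisibility('top_permian', false);   // switch a geological layer off
viewer.setLayerColor('top_permian', '#cc6600');    // restyle it client-side

// Save the current camera/layer state as JSON, e.g. to send to a colleague...
var state = JSON.stringify(viewer.getState());

// ...who can restore exactly the same view later.
viewer.setState(JSON.parse(state));
```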
Okay, so we have a look at that as well, and you see it's nearly the same, it is the same framework, it looks a little bit different, and here you have your geological layers, you can switch them off and on again, and yeah, a proof that we really render the data in the client is when you just change a color, which is done really fast within that client. You have some tools like, I mentioned that method setScale, so you can do a little bit of something like that, and you have different navigation methods which make it more or less easy, especially if you don't have a mouse, to navigate inside the model, and you see that, I was standing in front of it before, so you didn't see that funny kind of overview map we have on the top right here, which shows your view onto the model at that point. So if I move around here with the mouse, it will always move and show where I actually am. Yeah, there are some more fancy things like the kind of hovering, going over the data, having some background information about the stuff, and yeah, as I mentioned before, I always take the wrong one, I can always throw in a pre-saved view, I also can put that into a text file, something like that, send it to a colleague, and you can just open the same situation as I did at that point. Okay, problems, a lot of them. One problem of course is performance, because we send really huge tons of data to the client, and yeah, it was quite difficult to catch up with that. As you've seen in the preview of the system, we only have a small piece of data, we don't have the full state inside the client now, so that's a kind of compromise we made. So normally you do a kind of pre-selection in the 2D client and then you pass it over to the 3D client and open it there. Of course you could reduce the data density, but as far as I know, hopefully Emmanuel will come up with a new solution now. There's no real method within PostGIS where you really can filter out the data, and I think it's a little bit dangerous, because I'm not the scientist who created that model, and just throwing out some points might really cause completely different data. Of course you can enhance your hardware, that's what you always do: if something is slow, just put some megabytes of RAM in it or something like that, and of course wait, just wait. I think the first MapServer map ever created took about 30 seconds, and now it's like this, just a few years later, and I'm sure that hardware and every element within that world will get better and better. The second problem was that W3DS is not officially supported by GeoServer, so we had to use a GeoServer extension. I think it was created by a kind of student or... someone in Portugal, Emmanuel? Yeah, someone in Portugal created that, and Camptocamp took it over and made it usable for our project, and there was no API on top of the X3DOM library, so we had to create one as well. I think there were a lot more problems, but in the end I think it's a good showcase that you really can do something like that, and yeah, many thanks for listening. If you have questions, of course you can ask, or if you look for me later, I'm the crazy guy with two badges so you'll find me, and yeah, okay, thank you. Yes, are there any questions? Yes, what coordinate system do you pass to X3DOM? German coordinates. German coordinates, yeah. Oh, like this, the country plane. Yeah. Do you have to translate the German coordinates into a specific center so that you don't have some problems with numerical precision? No, no, nothing.
I think if you've seen the system, you can't really talk about precision in that context, it's all modeled. I mean, it's that when you've got some big numbers for coordinates, you can have some problems with 3D interactions and stuff like that. No, no problems till now. Is the application online, available in general? It is online, and if you're smart, you noted the URL, because it's just open. We are just at the moment in the process of finishing the stuff, and the code is already on GitHub, but it's not open yet, I guess. I think it's one of your GitHub repositories, but of course we are going to... Is your system capable of dealing with geological faults? Faults? I'm a geographer, as I said before. We have... I'm not quite sure what really happens in that context in the 3D viewer. For the profiles and the drillings, yes, definitely. I have a short question. For the W3DS request, do you implement the complete W3DS client for X3DOM, or is it... how can I think about that? No, we didn't have to implement the complete W3DS on the client side. We just use the requests we needed for the user. So you get the 3D scene graph, the X3D scene graph, from the W3DS to the client based on the bounding box; it sends a GetScene request with the bounding box. Then you have the data. You receive the data. You get an X3D scene graph and you draw this, right? Yeah, exactly. Any more questions? We still have time. Did you try three.js, for comparison? I didn't get it. Did you try three.js, the JavaScript 3D library, for comparison with X3DOM? No, not really. I don't know if you did it when we wrote that offer. No. Related to your performance problems, is it primarily the data transfer or is it the rendering time? Yeah. Yeah. Which? It's the data transfer, definitely. It's really huge amounts of data. The data was modeled for desktop tools, not for something like this. No more questions? Okay. Thank you, Phil. Thanks.
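To make the GetScene exchange just described a bit more concrete, here is a hedged sketch of how a client could build a W3DS GetScene request and hand the returned X3D scene to X3DOM via an inline node. W3DS is only a draft OGC specification, so the parameter names follow that draft and, like the service URL, layer name and version, are assumptions rather than the project's actual values; the bounding box uses ETRS89 / UTM 33N (EPSG:25833) as a plausible "German coordinates" example.

```js
// Sketch of a W3DS GetScene request rendered with X3DOM. All parameter values
// are assumptions; the page is assumed to contain an <x3d><scene> element and
// to include x3dom.js.
var params = {
  service: 'W3DS',
  version: '0.4.0',                        // W3DS draft version, assumed
  request: 'GetScene',
  crs: 'EPSG:25833',                       // ETRS89 / UTM 33N (Brandenburg)
  boundingBox: '300000,5800000,350000,5850000',
  layer: 'top_permian',                    // hypothetical layer name
  format: 'model/x3d+xml'
};
var query = Object.keys(params).map(function (k) {
  return k + '=' + encodeURIComponent(params[k]);
}).join('&');

// X3DOM picks up the <inline> element and renders the returned scene via WebGL.
var inline = document.createElement('inline');
inline.setAttribute('url', '/geoserver/w3ds?' + query);
document.querySelector('x3d scene').appendChild(inline);
```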
The geological borehole, depth profile and layer-data and some background-data such as topographical maps were setup as services, mainly in a PostGreSQL/PostGIS and GeoServer environment. Both webclients are fully client-side based applications, for the 3D-client WebGL for rendering is used and all data is delivered via standarized services. For the 3D-data the X3D format is used, which is not an official OGC standard yet but delivers phantastic possibilities for 3D-modelling of data in a webbased environment. The talk will focus on some of the high-end announced requirements, especially to the 3D-webclient such as gazetteers, FeatureInfo or dynamic load of services such as WMS or WFS. A special task is the delivering of borehole data as BoreholeML, for which the GeoServer app-schema extension was used. From a technical point of view especially the development of a GeoExt-like library which connects X3dom and ExtJs 4.x is an interesting part. With this solution, elements such as gazetteers and presentation-masks for requested attribute data could be used in both 2D- and 3D-client. At the end some live impressions of the application will be shown.
10.5446/15559 (DOI)
I think they've switched presenters. Jonas is from the Danish Geodata Agency, part of the Ministry of the Environment. You can talk about how they've migrated to other source systems and the use of map server. Indeed, yes. Well, mainly, we do use map server. We'll be seeing that as well. Mostly, this talk is about our migration of database back-end systems for our web services, from Oracle Spatial to PostGIS. The Danish Geodata Agency is the Danish National Mapping and Cadastral Agency. We have a range of web services, WMS, WFS, WCS, WMTS. This is mostly the WMS back-end that this will be about. We'll talk to you a little about, in the beginning, the situation we came from and why that was no longer viable for us. What we did and did not do, this is, as it turns out, quite a, well, I would say, a somewhat focused use case and reasonably simple as well, it turned out. And some experiences we gathered along the way. The situation a few years back was that we had what I would call a monolithic Oracle Spatial database doing pretty much everything because that was how the agency was structured. Everything went into this one database to end all databases. So all production systems, everything went through that database. It was also the authoritative data store, and it was delivering data for web services as well. So pretty much everything was somehow interacting with each other, some of it interacting with nothing, in this great database. It was, and still is, we still have it, residing on the same, there were several databases in it, as you can see, on the same system, physical system. So that was a hard physical limit, and there were some licensing issues as well that were somewhat limiting to us, more and more as time went by. And what we did, we had all this production flowing through it, and in the end, there were database views created for web services distribution, and they were, well, increasingly complex and not at all times performing as we would like. And what happened is basically this, I don't know how many of you went to Mass's presentation yesterday morning. He's my boss, he showed this one as well. This is how many requests we serve per month. And in August, we were just shy of 120 million, which this Oracle database system was no longer able to, and it's not supplying this. I think we made the change in 2011, is that somewhere around there, which is here. And at that point, the Oracle database was no longer scaling for us at all. So there was a big increase in data distribution through our web services. It kept increasing. We were doing, well, pretty much just whatever we could, nurturing this database, trying to push speed however we could. At the same time, a new database structure for the cadastral data was implemented, which was not as simple as it might have been, and that increased the complexity of the database views for the web services as well, and did not make it any quicker. We had WFS requests going into the database as well. They were technically legal, but very expensive to go through. And we had, at the time, no reasonable self-defense mechanism. So those WFS requests came in. The database was grinding through them. The production systems behind all that was not able to do any work in the database, either because the database was pretty much in a tar pit or deadlock sometimes. And as it was, we were not able to scale the bottleneck, which was most of the time the database, because of the licensing costs. 
We were bound to the same physical hardware, and the license was prohibitively expensive, especially because this was exposed to the web. We could possibly have done it if it wasn't, but the licensing is a completely different story when it's exposed externally to the web. So that was just, well, we would not have had the money. So something had to be done. We have set up a self-defense mechanism in our front load balancer that we call the switchboard, where we do not allow as many WFS requests into our system. It's a reasonably simple self-defense, but along with the increased capacity we have for delivering web services, it's working quite well in day-to-day use. We moved the rasters for WMS services out of the database; for some reason they were in there as well. They've been taken out, and we latched on a PostGIS replica that we put in the chain after the Oracle database. Putting that in place allowed us to trim the database structure as well, flatten the table structure, and not be dependent on the views that were not performing. So what that looks like is pretty much this, a reasonably simple model, which is basically latched onto this one. We still have this in place, and it's, to be fair, working fine. We were just not able to scale it as we would have liked. But putting these in place, this was our first implementation of it. We've gone through a couple. What we did here is, we have the Oracle database, and we used ogr2ogr to make a nightly copy out to a PostGIS master database and slaves. On the same machines, we had installed MapServer, based on the notion that the PostGIS database would probably be IO intensive, and MapServer mostly CPU intensive. So we thought that's a good idea. And this was effective, I would say, very much so. Not so efficient in the long run, though, because of the overhead of managing all these databases. Well, mostly they take care of their own stuff, but we still need to set it up and implement it and make it run. So we have switched to what you could call a more traditional setup. We have the database behind, and the MapServer machines, we call those load servers, as application servers out front. So that allows us as well to scale independently whatever doesn't perform in our system architecture. And it allows us to roll out what we usually have to do if there's some sort of a situation where we need extra performance, extra capacity. We need to put in more load servers out front, not necessarily more database machines. So having those load servers be as simple as possible allows us to roll them out quicker and easier, when they all just read directly from the same databases. It'll be a lot slower having to roll out a machine that has to take a new dump of the data and be put up as a slave of the master database and have all that in sync before the machine is live. So this rolls out a lot quicker. And this all worked out well. I did think of removing these numbers. They're somewhat embarrassing really. Those are milliseconds on there, for those who can see them. And we do have zero at the bottom, so it was bad. This is when we changed at the end of December, two, three years ago, when we switched one of our services to the new database back end. So that went from, what, seven to ten seconds response time down to a couple of seconds, which is still not lightning fast. It has gotten better since.
But, well, also, to be fair, as I put there, the old database system was in need of constant nurturing, which we didn't give it at the end because we were building this flashy new system that we just wanted to get finished. So what we did not do is to completely replace the Oracle spatial database, which is still working fine for the rest of the organization and for us as well, really. So what we did was just, yeah, just, we put in features that we could scale the things that we needed to scale the most, which was at that point the database for data distribution. So the old database being offloaded with the task of servicing us with data is working very well still. And there are still things we would like to do, some more advanced load balancing of the post-use databases. So we've been looking into PG pool. Has anyone tried it? You have. We were not immediately impressed with the performance of it. I'm not sure why. I'm sure we'll be looking more into it. But initial impressions were not that good, really. Is that? You agree? Yeah. Okay. We'll look into it more, I suppose. But we do need it to perform. For the time being, for the load balancing, it's really simple. We mainly just do it in the DNS system, so which is just basically a simple round robin load balancing, if you can even call it that. The system has been running so well and so stable that we're not in a dire need of introducing more advanced features. We still want them, but there's no gun to our head. And introducing some sort of self-healing system. Again, the system has been running fine, so we're not dropping databases right and left that we need to take care of. But to create as healthy a system as possible, that would be nice to have as well. Postgres does offer some features. So the experiences we had can pretty much be summed up, I suppose, along these lines. We freed ourselves of the Oracle spatial licensing issue, which was prohibitive. We installed fresh databases as we need them, which is very nice. And we were able to, in the process, to switch, well, in a controlled fashion at least, to the new databases. Some services even ran from multiple databases at once, both from Oracle and from both GIS for a short period. So we've been introducing it in reasonably small steps, which I think is a very good idea as well. And always being able to roll back if anything fails. And one of the other presenters I heard one of the other days who said he might be preaching to the converted, which is, I suppose, this one goes there as well. It is a very mature database system, so you shouldn't be afraid to use it. Just get out there. We were not at all experts when we started this. So the project has probably run high on man hours put into it and low on funds as in money put into it. But it's been a good project. It's fun, really. And as well, along the way, we've learned that PostGIS is fully functional, obviously. And we would really not be afraid to put it into production use. We're not in a position to push it back necessarily into the rest of the organization. But us who have worked with it would not be afraid of doing that as such. We do believe it's fully capable of doing that as well. But tearing out a fully integrated production database system is a big process. And we're at the point that we're considering dumping backwards compatibility with the Oracle spatial database because we feel this system is running really fine as it is. And I'll leave you with this one that we are, I hope not unhumbly so, but somewhat proud of. 
The system is running quite smoothly. At the time, we're serving about 5 to 10% of these requests daily from the PostGIS systems. That's 300 to 400,000 requests a day that comes through the system. Which it is? The rest is mostly WMTS and WMS from Rasta data sources. We have some of our in-house made, we call them geo keys as well. They take up some of it, but mostly WMS and WMTS is for the rest. And that was in competition with the lunch. Sure. Thank you. Tony, time for questions. What product is your switchboard? Well, it isn't. Yeah, the question is, what product is the switchboard? The switchboard is a Java component that we have built in-house. But you can talk to Mechio Turos behind you if you'd like. She's the master of the switchboard. Yeah. Did you run into any problems with the OGR and OGR? Yeah, the question is, did we run into problems with the OGR to OGR conversion? Yeah, smaller things I would say. There are, yeah, a message I want to get across as well. You will probably hit bumps on the road, but we have had no showstoppers at all. There are things, we have a field from the Oracle database with so many characters in it, when it goes through OGR to OGR, it somehow forces the same field length actively in the post-gis database, meaning it'll just fill out with zeros or blanks unless we do character conversion or data type conversion through the OGR to OGR. So, along those lines, but no showstoppers that we've seen. But yeah, you will spend time doing some nitty-gritty stuff, that's for sure. I can add something. I also think it's very robust OGR to OGR, but it's low, maybe it can take hours to transfer some gigabytes. We do only a plus-minus transfer, so it's not a problem. But if you transfer your whole data, it will take some time. Yeah, so maybe you don't want to do a complete nightly dump from your database. We do have one of our WMS services as well. I don't know, the data is odd somehow. We do a dump to a shape file and then use that for map server in one of our services. It's just one particular layer that was taking up more than half the time for that particular service. But the dump to a shape file from post-gis goes really quick, and map server services really quick. But taking it from post-gis is just somehow a bit slow. So we worked around that pragmatic fashion. Anybody else? I just feel I should have asked one question, because I haven't asked any questions in any session yet, since this is the second session I've been to. Did you have to get any kind of approval from your boss to take this step, or did you just show them what was happening in Oracle Spatial and say that this is all you want to do? Well, it was obvious the whole organization, the database, the Oracle Spatial database, was no longer up to the task. And everybody could see up front what it would cost to scale the license and the hardware. So something else had to be done. So there was support for it in the organization. The answer is from a barbie, we throw more money at it, and I don't understand open source is going to do it. No, but there was an understanding, and a common feeling that this could do the job. And if we were to scale on the Oracle database, it would have probably scaled pretty much linearly with spending, which is not going to work in the long run. So we had to do something else. How desperate was your Oracle sales rep when you said that you didn't need a spatial license? I didn't see him. I don't know. 
I think we still pay a reasonable amount of money, so I think he's doing all right. He's not driving his fast car. Probably, yeah. We've still got two of our speakers here; I think we've lost the other two. So let's just thank these two, and we'll clap loudly enough for the others to hear. Thank you.
Changing data distribution from one relational database system to another should be an easy task. SQL is a standardized database language, and concepts concerning spatial data are much the same thanks to OGC standards. Still, some tasks have to be done in a slightly different manner. The Danish Geodata Agency decided to explore moving a major part of its data distribution from a commercial Oracle Spatial database to an open source PostgreSQL/PostGIS database. A pilot project was set up to evaluate PostGIS as a production distribution database accessed by many users through open source services. Experiences were positive and the pilot system was upsized to a full-scale production system. The database setup is designed to provide sufficient performance and ensure a constantly running service. Databases and services are replicated, and a master-slave relation is established between the databases to ensure immediate copying when new data are transferred from the authoritative database. A special challenge was the change-over from the old system to the new one while services were still running. New data are copied on a daily basis. The old and new systems were run in parallel for a short while to make sure that the new system was stable. The change-over has mainly been done by in-house employees who were non-specialists in open source products. Documentation and expert service companies are available if help is needed. Experiences are positive.
10.5446/15553 (DOI)
I'm going to talk about map-bender 3. Hello all of you, welcome to my presentation. You will hear more about web mapping now, another presentation on this subject. And I want to show you how you can use map-bender 3 and you can use it to create your own G-Portal web application and build up a service repository with it. So who am I and what am I doing? I'm from Germany, I'm part of the map-bender team since a long time now. Map-bender is quite old, I'm more than 10 years old and I'm in the team since such a long time now. I work in Bonn at the WERC group, it's a company with 20 employees and we are working on map-bender with a big team and we do most of the development and investment in map-bender and we have a quite big user group and users in Germany who use map-bender but this project is huge worldwide and yes it's quite popular. I'm active in the German local chapter as well, FOSCIS EV and we do a lot of presentations of map-bender and other OSGU software on conferences and fairs and I'm involved in OSGU live and try to bring map-bender and all the other software on it. So let's have a look at map-bender and what makes it special and what is the focus of map-bender. So it helps you to create a web-js, it's a client suite that helps you to administrate a web interface. So you can build up a lot of, it has an administration web interface and you can build up new applications with map-bender. And one feature of map-bender is that you can create geo-portals without writing a single line of code. There are a lot of people who are not used into coding but still they can set up map-bender and create their own applications, very individual, with Yale and a lot of applications and administrate them. You can create, if you are handling a lot of O-W-F-S services, you can create and maintain a repository and handle all of your services with map-bender and get them structured. And you can distribute configured services to your applications so you don't have to use a service like it is published, like in a WMS with 100 layers, you can configure it and maybe only publish two layers in your application. So you can do that through the backend. And then you have users and groups and you create them in map-bender and backend and then give them access to applications and functionalities and services. This has a close look on the map-bender development. We now changed from developing map-bender 2 to map-bender 3. We had a long discussion about reinventing map-bender and building it up from scratch and doing a really new application, bringing up a new application. But we want to keep all the goals that we had in map-bender 2 and record them or recreate them with map-bender 3. So we still do map-bender 2 development and support, but parallel we create map-bender 3 and we have solutions already. So if I look in the audience, is there someone using map-bender from you or is it you for you who is using map-bender? Not so many? Okay, so it's brand new. So we just heard what are the goals of map-bender. So if you're like a city or a commune and you have a lot of services and a lot of different users with different focus and different theme topics that they have to handle, then you can bring these components together. You can build up several applications with a few mouse clicks, you can handle your services and repository, and then you can set up roles, groups and users and assign the users to the groups and give them access to the applications you build up. So let's have a closer look on these three components. 
We will start with applications. So as you heard, you can set up one or many applications as you like. And this depends on your need. Maybe you only need one city portal, then that may be enough. But maybe you want to create portals that have, they only have access to some people. So you can set up an application which only some users can access and they have maybe editing functionalities which should not be used from other users. Then you have services. At the moment, we focus on OWFS, OWS services and the main focus is on WMS services. But we will support WFS as well or WMTS and more data sources. And we focus on the data sources that are available from OpenLayers because we use OpenLayers and MapBender 3 and so we will more and more support data sources which are possible to use with OpenLayers. So let's have a look at the applications. This is a standard Map into 3 application that you get when you do the installation. So you see we integrated OpenLayers as a map component and we built a lot of elements that you can see here in this application. Like you have a navigation toolbar and you have an overview map that you can integrate and you can do it all in the back end with a few clicks. You can build up your own application like you want to have it. You can select the scale in the application. You can do it with a navigation toolbar as well but you can select the tier as well. You can change the projection. You have a layer tree. We have lots of buttons that you can use in your application and you can open a layer tree like this where you have all the services integrated and you can drag and drop the services or delete layers that you don't want in your application. In the layer tree you can disable layers or get information from the layers if they are provided. You also can integrate the layer tree in a sidebar like it's shown here. You could integrate the legend here as well and you can build up your own template and use your colors or the style you want to use to set it up. You can integrate legend elements and you have a WMS loader where you can enable the user who is using the application to load additional WMS to the application. They will show up in the layer tree and they can drag and drop it or disable it or handle the integrated WMS. You have a measure component. You can measure the distance or you also can measure an area and get the volume. You have a print client where you can provide different print templates. You can provide different quality. You can provide different scales and you have the possibility to rotate the map for the print and then export a PDF file. This is all configurable so you can provide your own print templates with open Libre and document which you export to PDF and all the templates that you configure here you can define how many you want and which size you want to support or which resolution you want to support. How do you set up the application? It's more interesting now. How does it work? First, you have to log in. When you install MapBender 3, you have to define an administration on a user. There's one user already there when you set this client suite up. You can log in with this user. As you can see here, you also have the possibility to provide a register link. If you want to set up a portal where other people can register and then create their own applications, you will activate this or you can provide the forgot password functionality as well. But normally, maybe you don't want other people just to register and do their own stuff. 
After log in, you have this view where you can see that you have a lot of applications here in the middle. You have the functionality to view an application or to copy an application. You can edit an application. With the I, you can publish an application and with a cross, you can delete it. At every module, you find this filter functionality. If you have lots of applications, you can filter them and then find easier the application or later the user or group that you want. At the left side, you see the tree where you can now choose maybe to create a new application. A new application is made of a title, an URL title and a description. After you created that, you can choose from different templates. We provide different templates. They are different because they set up map bender with different areas. In this application, you see in this template, you see a toolbar which will be at the top, a sidebar which we saw before where the layer tree can be in. Then we have the content where the map is in and the filter where maybe the scale selector is in. You have all these areas in map bender 3 where you now can add elements to. You have this plus button at every region and then you add elements. In this case, I want to have a map. Maybe that's the first most important thing I want to add. For every element, you have a basic configuration where you have to define which layer set maybe you want to use, which projection, which start extent you want to use and maybe which scales you want to support. After you defined in this case a map element, this is the first application you created. Like we saw before, your application can integrate more and more elements and you can set up your individual application like you want. This was about the application part. Now let's have a look at the services. Our concept is that you have a tool where you can have a repository for your services. You will add one service like a WMS one time to map bender and then you move it to your applications and still have the overview on the service and always can update it from the administration and get an overview on what services you handle in your geo portal. You can configure the service and you define control access for the service in the application. How does it work? You know the service URL and you publish it. You can access services with username and password and load them into map bender. After you edit the service to map bender, map bender knows everything about the service from a capabilities document and you get all the information about the service from your map bender application and you can have a look at the metadata, the contact info or the layer information. Then you will add the service to applications and here you can now make changes like you could say okay maybe topography or the border. These are layers that I am not interested in and you can disable them or you can change the position of the layers by drag and drop or you can change the format or the opacity. Now let's have a look at the roles. So a lot of our map bender solutions are customers who have a lot of users like in a city, they have a lot of users with different needs and you can provide a user for every user in your city administration and then you add a username and password and email or maybe a special profile that is the inspire profile that we use here and then you can add such a user to an application. 
We see here I have this demo application that I created earlier and this user, Astrid Emde now gets view access and edit access to this application. So it depends on how much you want to give to the user. You can click on the buttons and assign access to different rights and this example shows how a user gets access to an application but you can define groups as well then assign users to groups and give the right to the group. If you have to handle lots of users that's much more easy. Okay that was the information about the backend. So if you're not into programming and you're fine with clicking and building up roles and applications and service repositories, the backend of map bender 3 is very good for that and now I want to show you some solution that we build up with map bender 3. The one, the screenshots you saw before, where the standard map bender 3 screenshots that you get from a normal map bender 3 installation. And now here we can see the geoportal, the portal from Germany. We build it up with map bender. Usually you would go there to the geoportal and make a search. You will look for some information that you are interested in and then you can add them to the standard map and get them here at the side. You can drag and drop them and disable them or you can choose a different background. This is a blog there. Get legend information and the things that we saw in the other application as well. You can set transparency for a layer or you can zoom for a layer. Maybe you added a layer that you can't see at the beginning so you can zoom to the layer or get the legend or camel export from the layer tree. We have another solution which is a solution when you want to go cycling and you want to find out a route that you might take. You can make a search here or define the route that you want to take and you get a profile and can export it in different formats. One thing you can see here is that you have individual configuration that you can make and you set up your own style on the map and application and map and that can look very individual. This is another geo portal that is run in Lepper in Germany and here you see that you can get detailed information. If you have lots of topics in your application you get several tabs with information about the things you choose. Then here is Wintatler-Einland files. It's a very new application where you get information about... I didn't look up... Windwheels? I don't know. Windwheels. Okay, sorry, I wanted to look up the word. Let's have a look at the components that we use in MapBender 3. We use Symphony, a PHP framework as the basis or we build on that. That was a good solution for us because with MapBender 2 we did everything on our own. We did the map components, user handling and everything on our own. Now with Symphony 2 we can use lots of components which are integrated in this framework and use a lot of functionality which is already provided from Symphony, like Doctrine which is a database abstraction tool. We can provide lots of databases from this and don't have so much work to implement that. We use the bundle philosophy from Symphony 2 as well. As a map component we use open layers and we communicate with open layers. If it is possible with MapQuery which is a wrapper between application and open layers, we use jQuery and as I already said we can set up MapBender with different databases. What are the next steps for MapBender 3? We do regular releases. 
We try to do three releases a year and we have a roadmap where we define our milestones. The next steps will be to provide more data sources. We want to go for WFS as a data source. We want to provide service update. We want to provide, we are already working on an SQL WFS digitalization tool so you can edit data with MapBender 3. We saw already maybe in Geoportal Germany there was a search plugin that we added to the application. We do a Lucene search and we have solutions with SQL and WFS search but they are not integrated in the project yet and we want to bring this to the open source code as well. We nearly finished WMC editor and loader which will be in the next version and we will work on a mobile solution. If you want to get to know more about MapBender you can visit our MapBender 3.org website to get more information. You will find a documentation at doc.mapbender3.org where you find information about installation and about the elements that you saw and that you can use in MapBender. You have a MapBender 3.0 demo that you can use, you have to register and then you can try out to configure your own application and you have MapBender 3.0 on OSGulife so you could test it on OSGulife but it's not the up-to-date version, it's one version earlier. So installation is quite easy, you can download the package or you can install it from a Git repository that you download and you find information in the documentation about that as well. For more information you can visit these sites and that's all about MapBender. Thank you for joining the presentation and if you have questions you can ask them now. Okay, the question was whether I can compare with GeoServer so maybe you know GeoServer has an administration back end as well to provide authorized access to applications and it has a front-end where you can export an open layers application maybe. So we want with MapBender 3.0 we are independent from WMS service providers so you can integrate WMS from MapServer, QS, QGiServer or GeoServer, that doesn't matter. You could do it in GeoServer as well, you can import external WMS there as well but we focus on this Geoportal setting up Geoportal functionality which is not connected to service providers but maybe it's similar how authentication works in GeoServer. Okay. Another question? Can you do a simple use case scenario for us like for example I am a city planner and I want to find all the vacant lots in the neighborhood. Yes. The dashboard. Most of the time you can show a scenario how maybe you could make simple analysis. At the moment MapBender is only a viewer for WMS or maybe you could use editing or search functionality but we don't have this classification functionality integrated yet. We had it in MapBender 2 and maybe that's at the moment the process is that we have to reimplement functionality that we had before in MapBender 2 to get it provided in MapBender 3 as well because in the earlier MapBender version you could do a search in a special frame where you can define your query like give me the houses, blah blah. And then the result, you would only see the results in the map because the WMS server got the filter added to the get map request but we did not provide that yet. And maybe we will provide it when we integrate WFS. It's much easier to define the query then and add it to the service if you use WFS. 
But maybe we are at the moment at that point that we want to bring all the basic functionality to MapBender 3 and we focus on the things that our customers which were using MapBender 2 before that they are satisfied and get the basic functionalities. But that will be a topic that we will definitely go for as well. Yes? I have a question, so trying to compare a bit MapBender with MapBender, how easy would it be to extend an application, to create an application with MapBender and then extend it to some application to make it a more rich web application versus using a database with MapBender, the kind of stuff that we usually do with MapBender to go into the code and then extend it. Yes. Okay, so the question was how can you extend or if you compare it with MapBender, MapFish, how easy it is to extend MapBender? Not MapBender, but an application created by MapBender. Yes, to extend it. Okay, I think we concentrate on symphony and we have this bundle philosophy, so now when we extend a MapBender application with more functionality, we use the symphony bundle philosophy as well, so you would implement it as a symphony bundle and add it to your application. But I don't know how simple it is to do it somehow else. Sorry, maybe I'm not too much into development. I think so, yes. Yeah, yeah. Okay, I think we'll probably draw that so close. So once again, let's thank Hashtrin. Thank you.
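[Editor's note] To give an idea of how light the installation mentioned above is, a first-time setup of the downloaded package boils down to editing the database settings and running a few Symfony console commands. The Doctrine and assets commands are standard Symfony2; the fom:user:resetroot name is how I recall the Mapbender documentation of that era, so treat it as an assumption and check doc.mapbender3.org for your version.

```bash
# Unpack the Mapbender3 release, then point app/config/parameters.yml
# at your PostgreSQL (or another Doctrine-supported) database.
cd mapbender3

# Standard Symfony2/Doctrine commands: create the database and its schema.
php app/console doctrine:database:create
php app/console doctrine:schema:create

# Mapbender-specific (verify the command name for your version): creates the
# initial root/administrator account used to log in to the backend.
php app/console fom:user:resetroot

# Publish bundle assets (CSS, JS, images) into the web/ directory.
php app/console assets:install web
```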
Mapbender3 is a client framework for spatial data infrastructures. It provides web-based interfaces for displaying, navigating and interacting with OGC-compliant services. Mapbender3 has a modern and user-friendly administration web interface to do all the work without writing a single line of code. Mapbender3 helps you to set up a repository for your OWS services and to create individual applications for different user needs. The software is based on the PHP framework Symfony2 and integrates OpenLayers, MapQuery and jQuery. The Mapbender3 framework provides authentication and authorization services, OWS proxy functionality, and management interfaces for user, group and service administration. In the presentation we will have a look at some Mapbender3 solutions and find out how powerful Mapbender3 is! You will see how easy it is to publish your own application.
10.5446/15552 (DOI)
I'm trying to make it keep it brief. Good afternoon everybody. Just after 12. My name is Rob Stecklenburg. I'm working for Ideages in the Netherlands, software developer. Ideages is a small company in the Netherlands, a geo company that has clients among mainly the governmental organizations, provinces, councils, police. And we were once involved in a project, a national project where we had to use tiling, but where the map data was updated very regularly. And we had to meet and solve some challenges in that respect and develop special software to do what we wanted to keep up with the goals of the program. The Dutch government has a national facility for showing spatial plans at all levels, council, governmental, provincial, all kinds of spatial plans that are useful for public use and professional use. And at the moment we have around 15,000 plans in the system and it's growing daily, it's really taking off. It's publicly available through a website and besides the website there are facilities for professional use as well. You see, we have 15,000 plans, every dot is a plan and it's almost the view of the Netherlands is coming out through all the small dots. Users have to be able to zoom in to plans, to pan and to zoom out. So we need a good performance and that was one of the demands that the project had. I show you an example, a plan, more in detail from the hometown of our company, Risen in the east of the Netherlands. And the website gives wonderful means of searching. I just found this plan in our town by typing in the address of our company's headquarters. And it's a small plan that says something about how to develop the center of this town, all the documents that belong to it, all the official documents can be found as well. And this is a really common local plan, so to say. It's difficult to translate the specific items into English, but you can say this is a local plan and each local plan has about four to five map layers, including borders, including an overview layer. And these five layers are used for the majority of plans in this facility. And they all constitute the whole service area of the Netherlands. Maybe a few percent of the total number of plans is different. They have often have a larger service area. They are more in the national area or the provincial areas of spatial plans. And they have their own specific layers with them. And that makes one of the challenges in this case. The topography, the map of the town and surroundings that you see is pre-tiled because it changes maybe once a year or even less. So we do that on separate hardware. In this spatial plans project, plans are updated daily because councils and everybody who is using it from the government really wants to be able to present the new data as soon as it is ready. So we have from a few plans a day to a few hundred plans a day per day that are updated in the system and have to be shown on the map as soon as possible. We provide for a number of zoom levels from the overview of the Netherlands up to the utmost detail of a certain plan. And we need to do that for different map layers as well. Most plans contain several layers like the plan I just showed before contains five layers. And some of these special plans contain ten or more layers as well. And we have to provide for that in total we have one known, they said, seven hundred map layers in the system that we have to provide to users. 
To give you an example of how many tiles that could be for the topography layer, I have been counting the number of tiles for the whole Netherlands for all these zoom levels. And that's for one layer only five and a half million tiles. And we have seven hundred layers in total. And this means that if we would not use tiling it would result in high demand on the WMS. Because at the daytime the map server is also used by professionals as well. And it serves the map data for the website. So we had to come with a solution. And we had to use tiling to diminish the demand on the services. But what we only wanted to do is tile parts of the layer that were changed. We can't change the whole layers every time. And the customer also had said that they wanted to be ready with tiling before office hours. So there would not be any stress on the WMS during office hours. We come up with a solution that we call the tiling manager. Static data is pre-tiled like topography. And dynamic data which can change per day is tiled before office hours. We use the web cache which is a perfectly sound solution for tiling, for static tiling that is. So in this case we're not smiling yet. We do need something extra. And that is the tiling manager which sits between the planned database of the whole project and the web cache. And it queries the planned database on a daily basis and then makes tasks that it can issue to the web cache. And we had to change to adapt the web cache as well to be able to communicate it with it in a way that we wanted and to be able to monitor the progress of the web cache. We faced a number of challenges. As I said the monitoring of the web cache was wanted and we also wanted to be able to stop the web cache because the tiling manager can get a command to stop everything it is doing at that moment. So it has to stop the geocache instance as well with tasks at hand. And we had to extend the rest interface of geobab cache. The tiling manager itself has a simple information page which shows the plans at hand, all the layers per plan, the zoom level to which the layers have to be tiled. And during tiling it gives a percentage of the tiles ready and also an estimate of the time still to expect the tiling will need to go on. And because we have several plans that have their own layers, the geo web cache has a configuration file with all the map layers inside and we only have five of these common layers that are common for a lot of plans. But there are some plans that use specific layers which we can't predict beforehand. So we have to generate a configuration for geo web cache using a template and the tiling manager knows about the plans, it knows the template, it fills in all the details and it makes a proper configuration for geo web cache that it needs. We also did some work on optimizing the WMS performance mainly by using some extra caching so that the database requests were diminished, the number of them. And we also found it very handy to be able to shrink the PNGs that were produced, 24-bit PNGs to 8-bit using PNG quant which is a standalone program that we use for every tile and the transformation is also built into geo web cache. And it means that I think the size of these tiles is indeed going from 100% to 50% or 30% size. So it's really helpful to get less transport over the internet. And what was the process that we had up-to-date tiles? So when a council changes a plan today and they upload it to the site, they really want the map and the view in the next day to be in order. 
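[Editor's note] The PNG shrinking step is simply the stand-alone pngquant tool run over each generated tile, with the same quantization also wired into the modified GeoWebCache. A rough sketch with placeholder paths (option spellings differ a little between pngquant versions, so treat the flags as indicative):

```bash
# Quantize a 24-bit tile down to an 8-bit palette of at most 256 colours,
# overwriting it in place (older pngquant versions ignore --ext/--force and
# write a separate -fs8.png copy instead).
pngquant --force --ext .png 256 tiles/12/2134/1402.png

# The same for a whole directory of freshly seeded tiles:
find tiles/ -name '*.png' -exec pngquant --force --ext .png 256 {} \;
```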
We can't have old tiles running around or still being there. So up-to-date tiles was very important and this was ensured by the cycle that we made, we built into this tiling manager. We have the plan database and the tiling manager takes all the plans that were updated, deleted or added. It finds all the layers in all these plans. It makes a task for each layer, a geo web cache task and all these tasks are visible in the information page of the tiling manager. And the cycle starts with generating a new configuration. So for all the plans that we have in the morning, so to say, we generate a new configuration for geo web cache up-to-date. Then we're going to truncate to delete all the tiles of the changed plans of the day before. So we erase the old situation, delete all the tiles that were changed. Next we send the new configuration that we generated here. We send it to geo web cache and we reset geo web cache because we have to make sure that geo web cache can read up this new configuration file. This was the most safe option we had. Then after that, and geo web cache is back online, we give it seed tasks. So it makes the tiles based on the new configuration and it makes tiles which represent the new plans and in the map later on. And all these tasks in this info page are being done bit by bit until the list is empty and then we're ready for the day. And hopefully before office hours. A small idea about the architecture. There is one tiling manager and there can be multiple geo web cache instances and they can each run in, they run in a tomcat and they can run all in the same tomcat or in a separate tomcat or on separate servers. Communication is via HTTP. So there is a flexible way that you can deploy the tiling manager and the geo web cache instances. You can have many geo web cache instances if you want. But they all do get the same configuration. And the same configuration means that they all send their requests to the same WMS to get and produce the tiles. The tiling manager is trying to find when it goes through his list, through the cycle and through the list of tasks, it finds a geo web cache that is doing nothing at the moment and send a task to it so it can do the tiles for that plan, for that layer. And then finds the next geo web cache that is doing nothing and send another layer to that one and so forth. And if they're all busy, it just waits and monitors progress and waits until one of the geo web cache is free or the list of tasks is empty. And all that is configured in a properties file for the tiling manager. It's a bit bad. It's not readable really, but from a software perspective, there is a tiling manager class that accepts HTTP requests from users, translates that into calls to a manager service. And the manager service has a pool of that holds W, no, geo web cache instances. In the pool, we have several geo web caches representing geo web caches and the status of geo web caches is also monitored in a special class, in a special object, all is written in Java. If I show a very short overview of the database scheme, there is a table that contains the task, the cycle of the day, so to say, a row for the cycle of the day. There is a big table that contains all the status items that are important for the task at hand, the plan, the layer type, the zoom level we need to go to, etc. Then there is a table that combines the type of plan with the layers that the plan is normally containing. 
And we have for the pre-tiling, for example, when we want to pre-tile topography, it does not make really sense to take the bounding box of the Netherlands because we would also tile a lot of the North Sea, which is not really useful. So we made a table with seed boxes, bounding boxes around the Netherlands in a more efficient way so that most of the land of the Netherlands is covered that way. And the pre-tiling of topography in other layers is a lot faster. Well, this is not readable also, but there is a configuration file where you can really adapt the number of geowab caches, the connection to the database, passwords for Tomcat, etc., etc. And I'm really going fast. This was it already. Okay. Thank you. Any questions? I did. I did. And that was some time ago. And the comments were something like, for example, the PNG 24-bit to 8-bit conversion. They didn't like the way we did it. And for other items, they didn't take it up because they found that the classes we used, we adapted in geowab cache, would not be their choice of using our adaptations. So they didn't take our work less. No. I'm not sure that the talk will be distributed sometime. Yes, I will do so because you have 10 seconds to write down this link. Okay. I won't share it. I won't do it. I'll upload it and... Okay. Yes, but I don't. I really had to go. I really had to go. I have to catch your flight. Sorry. The main link is wwwidgs.nl.
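[Editor's note] For orientation, the requests the Tiling Manager sends to each GeoWebCache instance are essentially the stock GeoWebCache REST seeding calls sketched below, plus in-house extensions for monitoring and stopping tasks. Layer name, URL and credentials are placeholders, so read this as an illustration of the daily truncate-then-seed cycle rather than the exact production protocol.

```bash
GWC="http://localhost:8080/geowebcache/rest"
AUTH="geowebcache:secured"   # GeoWebCache default credentials; change in production
LAYER="plan_NL_IMRO_example"

# 1. Truncate (delete) yesterday's tiles for a changed plan layer ...
curl -u "$AUTH" -X POST -H "Content-Type: application/json" \
  -d "{\"seedRequest\":{\"name\":\"$LAYER\",\"type\":\"truncate\",\"zoomStart\":0,\"zoomStop\":12,\"format\":\"image/png\",\"threadCount\":1}}" \
  "$GWC/seed/$LAYER.json"

# 2. ... then seed them again from the (updated) WMS.
curl -u "$AUTH" -X POST -H "Content-Type: application/json" \
  -d "{\"seedRequest\":{\"name\":\"$LAYER\",\"type\":\"seed\",\"zoomStart\":0,\"zoomStop\":12,\"format\":\"image/png\",\"threadCount\":2}}" \
  "$GWC/seed/$LAYER.json"

# 3. Poll the same endpoint to monitor progress of the running tasks.
curl -u "$AUTH" "$GWC/seed/$LAYER.json"
```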
Tiling is currently the best solution to achieve high performance and throughput for serving map images. However, because tile images have to be prepared, tiling is often only used for relatively static data. The Dutch national facility for presenting governmental spatial plans, www.ruimtelijkeplannen.nl, is used intensively (approx. 15,000 plans), has high performance demands and therefore wanted to make use of tiling. Because plans change often and are added to and deleted from the central database, a special solution was developed to manage the daily update of tiles. The presentation will concentrate on that solution, the Tiling Manager. The Tiling Manager software queries the audit trail of plan updates, executes tiling tasks in collaboration with GeoWebCache and monitors progress. We had to deal with several challenges to realize the requirements, such as monitoring the progress of tiling tasks in GeoWebCache, run-time generation of the GeoWebCache configuration, optimizing WMS performance and ensuring that the services never present old tiles when new plans are available. In addition to the solutions to these challenges, the presentation will show the technical architecture of the Tiling Manager.
10.5446/15551 (DOI)
Today will be quite in relationship with what we saw just in the previous talk. It's about machine learning and remote sensing. And I will introduce you to the new capabilities of the Off-Road Toolbox brought by Bridge to OpenCV in this field. So this is the outline. Actually I will start by OTB facts and figure, but I decided to do it the other way. So I will start by talking about machine learning for remote sensing, what is required, and then I will introduce how we brought OpenCV into this. I will give two, one not so short example and another short example. And I'll end with facts and figure. So if you want to leave early for the lunch, you get the interesting message. But still, yeah, there are things to know before I start. So machine learning steps, especially supervised classification, we have two main steps. We have training. So this is something we saw in the previous talk. And we have classification. So the training is estimating some kind of model from training samples. And then the classification is applying this model to decide the class of specific entities. And in remote sensing, we usually classify pixels, so image from image or features like we saw from a digital elevation model in the previous talk. We can also classify segments or objects. And we may also classify patches or even images if we have several images and we want to roughly separate them. So that is to know about machine learning, about remote sensing, and about OTB. You just need to know for now that it is a C++ template library. And it also has application plugins, so something you can use without compiling anything from command line or from QT. So in my talk, everything I will show, I've done it with the command line interface. So as I note, a single specific line of C++ to do this. And so here, I will just introduce you to the world pixel-based classification chain in OTB. And then I'll show you how we brought open CV into this and many more things. OK, so requirements for machine learning for remote sensing. Obviously, we need a machine learning algorithm. So that's a start. In OTB, until 3.18.18, we used SVM, especially the libSVM implementation. It is widely used in remote sensing. We saw this in the previous talk. It has few parameters. As Previusi said, it is good in high-dimension feature space. We can solve non-linearity problems with the kernel trick. It has strong theoretical guarantees. And it has extension to multi-class problems. The main algorithm is a two-class solver, but you can extend it quite easily to multi-class problems. In fact, the algorithm tried to find the best separation between the set of samples. So it's the best margin between those two samples. Set of samples. But then we have to do other things, like normalizing features in MAHE. For instance, here, I show here you have an Instagram of an image, and here there is a tiny, tiny, tiny blue line. And in fact, this is another feature. And if I zoom in, I can see this is a 9DVI, and this is an image spectral band. And this lives between minus 0.6 and 0.6, and this lives between 200 and 600. So it depends on the classification algorithm. But if we use something, for instance, based on Euclidean distance, then this feature will absolutely dominate everything you can infer from this feature. So to avoid this, you can try to bring all the features into the same range. And it also enhances numerical stability if this range is something between 0 and 1 or something between minus 1 and 1. 
So this is the normalization, and there are several methods. You can do just linear min-max stretching, but it is not really resilient to outliers. And you can also do linear stretching by clipping the histogram, or you can also use centered reduced normalization. So and the important thing is, of course, you have to use the same normalization for training step and for classification. So in OTB, we have an application that can estimate the statistics from your input image you will classify and then apply consistently the normalization throughout the process. And then there is the training step. So it is under by the train SVM image classifier application until 3.18. And starting 3.18, it has been renamed to train images classifier. You'll see why later. So in our application training data, we use GIS vector files. So we require polygons delineating the training areas with class attributes that you can specify to the application. And what we do from this polygon is that we sample pixels within each polygon. But as we may require many less training samples than the world-available samples, if you delineate it over really large areas, there is a random sampling of these classes for each polygon. So this is something you only need the image and your shapefile with the training set and we take care of all this plumbing. And then comes performances measurements. We have an application that's called compute confusion metric application. You can enter validation data or validation set. You should use a different validation set, the training set, because it will help you get a more realistic evaluation of the performances. And you can use either different polygons for your validation set or different random sampling of the same polygons you used for the training. And this application accepts the classification map as an input. So you can actually estimate the performances of the world-chain. I mean, if you have post-processing or this kind of thing, it's a separate state. And we output some widely known measures such as confusion metrics, the Kappa coefficient, which takes into account the lacquer rates, and overall accuracy, precision and recall, and F score for all classes. So this is about performance measurements. And then once you trained your classifier and you estimated the performances, you may want to apply it to other images or to your images. So we have also an application to do this. You can read back the model you estimated from a file. And the classification filter is a streaming-enabled filter. So you can process rasters as large as you want. It will just process it piecewise. So if you have very large raster, this is interesting. And also, it will perform classification in a multistraded way. So it might help you gain some time. And it's under the store. That's not really interesting, but it's under the store data field. And then we have some post-processing. We have regularization. So in the previous talk, we heard about Markov-Rondon field regularization. We do not have this for the moment. But the goal is the same. It's to try to remove isolated pixels with classes different from their neighbors, because they are likely to be noisy pixels. So here we have a simple approach, which is majority voting on a given neighborhood. And it is handled by the classification map regularization application. And more regularization technique may come later. And we also have a fusion of classifiers application. 
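[Editor's note] Before fusion is discussed in more detail, here is what the basic pixel-based chain just described looks like on the command line. Parameter names follow the OTB 3.18-era applications as I recall them from the Cookbook, so double-check the exact spellings for your version; image, shapefile and output names are placeholders.

```bash
# 1. Image statistics, reused for feature normalization in training and classification.
otbcli_ComputeImagesStatistics -il image.tif -out stats.xml

# 2. Train a classifier from polygons carrying a "class" attribute; pixels are
#    randomly sampled inside the training polygons.
otbcli_TrainImagesClassifier -io.il image.tif -io.vd training.shp \
  -sample.vfn class -io.imstat stats.xml \
  -classifier libsvm -io.out model.txt

# 3. Apply the model to the (possibly very large) raster, streamed piecewise.
otbcli_ImageClassifier -in image.tif -imstat stats.xml \
  -model model.txt -out labels.tif

# 4. Evaluate against an independent validation set: confusion matrix,
#    Kappa, overall accuracy, precision/recall.
otbcli_ComputeConfusionMatrix -in labels.tif \
  -ref vector -ref.vector.in validation.shp -ref.vector.field class \
  -out confusion.csv
```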
Imagine you train different classifiers from different images or different feature set. You have several classification maps. And you want to take the best of each one of them. And this application will help you do this. Isobi, majority voting this time on the classifiers. Or you can also use something that is maybe more clever, is dumpster-shofe combination from confusion matrices. This will take into account that some classifiers are really inaccurate for this class and very, very accurate for this one. So it will take this kind of information into account. So now, meeting OpenCV. So this is really great. But we heard a lot of these two things. First, OTB is nice. But SVM algorithm is a shame. I would like to do random forest boosting by a neural network or anything else. And also, did you read this paper? Transient Boosted Learned in Fuzzy Infinite Spaces. It's super cool. Can we have it in OTB? Really, I would like to have it. So this is not a joke, but we really got some of these feedbacks. So what we tried to deal with it was first to define a minimal machine learning class model that will be able to embed any machine learning algorithm. And we came up with this. So our Fyotoolbox is a C++ template-based library. So obviously, we have inputs and output samples types. We can set GET training and target samples. And we have here some simple triggers to train and to predict a single sample, and also some function to save the trained model to a file and to load the model from file. So the methods in orange, they are pure virtual. These methods need to be implemented for each specific classifier. So once we define this generic, very simple model, what here is what we do. This is an extract from the doxigen of Fyotoolbox. So here is our very simple machine learning model. And here is Libes VM. And we added also subclasses representing all machine learning models that can be found in the OpenCV library. So you have a single API with high-level methods. And you can reach any of these algorithms. So I'll go into detail just here. So we have SVM, both from Libes VM and from OpenCV. The nice thing is that the one from OpenCV includes meta-parameters optimization. We have cannerist neighbors from OpenCV. So this is a very simple classifier. We have decision trees training. We have AdaBoost, which is also some kinds of committee of decision trees training algorithm. We have grade and boosted trees, more boosting techniques. We have artificial neural networks, which we add to wrap around additional code to define the layer scheme. And we have normal biasing classifier. And we have rodent forest. So what is nice when we have all these classifiers under a single API is that we can easily train a whole set of them and compare them. So this is the result we have actually in OTB testing. So we test all these algorithms against later.scale, which is a dataset available from Libes VM website. So here are the confusion matrices. So it's with the default parameters. So don't try to say, OK, why is rodent forest performing not very well? I'm sure you choose this one. You choose the dataset. So rodent forest will be not accurate. No, it's just the default parameters. It's just to show you that you can very simply run all the algorithms and get the performances of each one of them. So this is great. But now we can answer to the first question. We can answer, yes, yes, of course, we have these also. And to the second, we can at least answer, please provide the code as a subclass of machine learning model. 
And please, please, please write some tests because we will have to maintain this funny algorithm afterward. And it will be quite hard. And then we have something we call the machine learning model factory. I said that during the training step, you actually can write the model you trained as a file. And this is left to the underlying implementation of the algorithm. So when we only add SVM, it was quite simple because we only wrote SVM models. So when we had to read back these models, we knew that it was SVM or nothing. But now we've write a file and then we do not know what kind of algorithm we trained and we want to read it back. And we have nine possible models. And the user should not have to know which model he has to load. So to solve this problem, we implemented a simple factory pattern, which I already mentioned that reading and writing models is handled by the underlying implementation of the algorithm. So we have the reading function returns true upon success and the machine learning model factory. It will just try to instantiate each model class and it will return the first that successfully read the model file. So this way, the user has no need to know which model the file actually represents. So now I'll go briefly into some examples. Yesterday I was at a session called What You Can Do With OpenStreetMap. And it's funny because I actually tried something to illustrate this talk. I took Spot 4, Take 5 data. These are something to do like time series which were acquired during Spot 4 and of life. So they tried to get the same revisiting time as the future Sentinel-2 mission. So this data set will allow to prepare for as to prep processing and value-added products. And in this particular example, I focused on, for instance, getting the water bodies mask as best resolution possible. And I ran into some problem because I had no training areas or training samples and I'm not very good at photo interpretation and I'm not an expert in water bodies and land use and all those things. But I knew about OSM. So I tried to simply use the OSM data as supervision for the algorithm. So I derived two classes. One is made of water elements from OpenStreetMap, like pond, riverbank, reservoir, water. And class 2 is made of anything else. I tried to sample some of the other type of classes you can find in OpenStreetMap. And from this, I got a quite large shape file for my area of interest. And I selected 1,500 for each class for training and 3,000 for each classes for validation. So here is what the data set looks like. So you have the acquisition dates. So I selected three images. What's interesting here is we still have snow in the mountain here and we have also some shadows. And then here we have more shadows. And here we have still more shadows. And the multi-temporal information, it will, yeah, okay, I'll try to go very fast. It will allow you to somehow mitigate the problem with those shadows. So here is the training set from OSM. So you see here the river. It's the other class. And here is the validation set. I just need to mention that this polygon is extracted from SRTM water body mask. It's not from OSM. And here are the results for the four classifiers I chose. Well, SVM, RBF, won almost all the time except from this date where Random Forest did a better job. In fact, you have three K-PAC coefficients here. The first is the one estimated during training set, training step. The second is estimated from the validation step. 
And the third is estimated from the validation set, but including the C. Okay. So including the C, it increases the K-PAC coefficient in all cases because it's really simple problem. What you can see also is that there is a slight increase between Kappa-Learn and Kappa-Val-1. This is probably because the learning set is more, it has been randomly selected. It seems to be more difficult than the general problem. So anyway, I selected the three winning classifications. So one from each date. So this is SVM, RBF for the first date. This is Random Forest for the second date. And this is SVM, RBF for the last date. And then I did a majority voting combination of this algorithm, a regularization with radius 1. And here is the final accuracy I got. So I got Kappa-Val-1 up to 0.95. And this is the final classification result. So everything I did here is just launching the command line application. There is nothing specific. Okay. So this is just to give an example of what can be done just with those tools. And then I would like to briefly, briefly describe, very briefly describe something that is coming next. With Playa's data, which are very wide resolution data, I mean 0.7 to 0.5 meter resolution, it's not very relevant to do pixel-based classification because of the heterogeneity of the data. So instead, we can do object-based classification, which means we classify the region segmented for a segmentation of the image. And this can capture more information like statistics or shape or even neighborhood or this kind of thing. Coming next in our filter box, an application to perform the exact large-scale segmentation of a very large image. So it will allow you to have the segmentation as a shape file. So you enter the image, you output the segmentation as a shape file, and it has almost no size limitations. So because Playa's images are very, very huge. And we have the vector layer classification application, which is basically the same as what I showed here, but for vector data. So we classify polygons with their attributes, with the same algorithm and the same workflow. Okay, that's what I said. So here is an example. So this is 1 ninth of a Playa's image, so it's already huge. And this is the segmentation we got. This is the shape file overlaid on the image. And it shows 171,000 polygons here. And we classify those polygons into, I don't remember maybe 10 classes or something like this. So here is just the detail. So this is quite promising and will come in the next release, maybe end of October, something like this. So facts and figure, I don't have time for this, but this is everything you need to know, but I just jump to this. Here is all the resources, so best start is with the websites. And if you want to have a close following, we have a JIRA, we have a dashboard, we have Mercurial, we have all these things. And just the take home message. So now we have LibSVM and all the machine learning algorithm from OpenCV in an extensible framework. We provide all the site tasks and plumbing to apply those algorithms to large satellite images. And you can reach this tool through the C++ API or through the application plug-in, which in turn can be run from Monteverdi to our QGIS. I didn't have time to talk about this, but we'll see this in another talk. And coming next in OTB, the vector equivalent of the RESTAR classification tools and tools for the segmentation of very large RESTARs. Thank you. And sorry for... Thanks very much, Julianne. And we've just got time for one quick question. 
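[Editor's note] Since every algorithm sits behind the same training application, the comparison of classifiers in this example, and the final fusion and regularization, amount to a few more calls along the following lines. The same caveat applies: option names are as I recall the 3.18-era applications, and all file names are placeholders.

```bash
# Train one model per algorithm on the same samples, keeping the logs so the
# reported Kappa / overall accuracy values can be compared afterwards.
for algo in libsvm svm knn rf boost dt gbt ann bayes; do
  otbcli_TrainImagesClassifier -io.il date1.tif -io.vd osm_training.shp \
    -sample.vfn class -io.imstat stats.xml \
    -classifier "$algo" -io.out "model_${algo}.txt" \
    > "train_${algo}.log" 2>&1
done

# Combine the per-date winning classifications by majority voting
# (dempstershafer would additionally take the confusion matrices into account).
otbcli_FusionOfClassifications \
  -il classif_date1.tif classif_date2.tif classif_date3.tif \
  -method majorityvoting -out fused.tif

# Remove isolated pixels by majority voting in a radius-1 neighbourhood.
otbcli_ClassificationMapRegularization \
  -io.in fused.tif -ip.radius 1 -io.out water_mask.tif
```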
If anybody's got any more detailed questions, if you want to grab Julian afterwards over lunch, I'm sure he'd be happy to. Awesome. Question there, please.
Orfeo ToolBox is an open-source library developed by CNES since 2006 in the frame of the Orfeo program, which aimed at preparing institutional and scientific users for the Very High Resolution optical imagery delivered by the Pleiades satellites. It is written in C++ on top of ITK, a medical imagery toolkit, and relies on many other open-source libraries such as GDAL or OSSIM. OTB aims at providing generic means of pre-processing and information extraction from optical satellite imagery. In this talk, we will focus on recent advances in the machine learning functionality that allow the full range of OpenCV algorithms to be used. Historically, supervised classification of satellite images with OTB mainly relied on libSVM. The Orfeo ToolBox provides tools to train the SVM algorithm from images and raster or vector training areas, to use a trained SVM algorithm to classify satellite images of arbitrary size in a multithreaded way, and to estimate the accuracy of the classification. The SVM algorithm has also been used for other applications such as change detection or object detection. But even though it is one of the most used functions of the OTB, the supervised classification function did not offer a single alternative to the SVM algorithm. However, the open-source world offers plenty of implementations of state-of-the-art machine learning algorithms. For instance OpenCV, a computer vision C++ library distributed under the BSD licence, includes a statistical machine learning module that contains no less than eight different algorithms (including SVM). We therefore created an API to represent a generic machine learning algorithm. This API can then be specialized to encapsulate a given algorithm implementation. The machine learning algorithm API assumes very few properties for such algorithms. One method has to be specialized to train the algorithm from a samples vector and a set of target labels or values, and another to predict labels or values from a samples vector. Thanks to templating, these methods handle both classification and regression. Two other methods are in charge of saving the parameters from training and loading them back. The file format for saving is left to the underlying implementation, and the load method is expected to return a success flag. This success flag is used in a factory pattern, designed to seamlessly instantiate the appropriate machine learning algorithm specialization upon file reading. It is therefore not necessary to know which algorithm a trained parameters file refers to. This new set of classes has been embedded into a new OTB application. Its purpose is to train one of the machine learning algorithms from a set of images and a GIS file describing training areas, and to output the trained parameters file. Another application is in charge of reading back this file and applying the classification algorithm to a given image. With these two tools, it is very easy to train different algorithms against the same dataset and evaluate them with the help of another application, which can compute the confusion matrix and classification performance measurements so as to choose the one or several best algorithms along with their parameters. The resulting classification maps can then be combined into a more robust one using yet another OTB application, based on class majority voting or Dempster-Shafer combination. Our perspectives for using and improving this new API are manifold. First, we would like to investigate further the use of the regression mode.
We would also like to investigate the performance of the new machine learning algorithms for other tasks achievable with OTB, such as object detection. Lastly, we would like to evolve the API so as to export any confidence or quality indices an algorithm can output regarding its predictions. This would open the way to the implementation of new active learning tools.
10.5446/15550 (DOI)
Yeah. Okay. Thanks. Um, is this on? I don't know. I don't know if it's on. No, no, it's coming. Okay. Great. Okay. So welcome to my presentation. I'll just, uh, I'm sorry for the slides. I don't get to set the resolution, right? But, um, I'll just give you in the beginning a short overview, um, of the presentation. I'll very short just introduce myself and come to camp the company I'm working for. Um, and then I'll give you, uh, so introduction about Mapfish, about the framework. Um, it's architecture and the second release. And then in particular, I will go into one of the implementations, which is a full-fledged web TIS. I'm known as C2CTO portal. And I'll show you some other examples. Yeah. Just some ideas about it. And I'll talk about upcoming developments and give an outlook. So, camp to camp. We are an open source solution provider of about 45, um, from Lausanne, Switzerland. And we have also, uh, some, one part is in Germany, France and in Vienna. Um, we are working, um, in the geo-spatial domain, but also in, in business ERP systems, in infrastructure, um, for system administration and doing implementation, a lot of implementation, a lot of consulting, support and training in that domain. So what is Mapfish? Um, I don't know if you heard about Mapfish before, who did, who did before? Okay, still some, so you know a little bit. So, um, at least so Mapfish is a web TIS framework. It's a component-based. So, uh, they are on the server side components in, in Pylon or Pyramid Python, um, in, in Ruby and PHP, um, Java and so on. Other libraries, TRL, community, JPL, it takes, et cetera. And on, on the client side, there's a TRIX component, um, with open layer and XJS. And it is open source, um, an official oldest TIO project. Um, yeah. So, uh, the architecture, as I said, there was like, uh, there's a client part with, with XJS and open layers. Um, it's communicating with the server via Mapfish REST protocol, which is JSON based. Uh, and, well, on, on, in parallel to, in parallel to the OTC protocols. And on the server side, we have those different components in Python, PHP, et cetera. So, the second release, uh, it's, uh, there's a demo on, on Mapfish.org. I will put the, the URLs in the very end so you can, um, maybe go and have a look. Um, it shows just some simple application you can do with it. So you can, uh, have information about some features, about factor features. Um, you have, uh, editing tools. Um, you can implement the search tool. You can print, uh, Mapfish print is one of the major components. Um, yeah. So you can, with the editing, you can insert like, uh, the attributes you can have dropped down, et cetera. So yeah, that's the state of, of Mapfish.org. Um, in between, uh, there was some time, um, passing and, uh, we, we developed further application with that framework and we, we integrated other, other components. We, uh, uh, yeah, we did some implementations and among those it's, there's the Mapfish Web GIS, also known as C2C, cheer portal. Um, and, um, this one is, is kind of interesting because it's plugin based. It's not a framework anymore. It's a full web GIS and, uh, you can choose your tools, what you want to use, what you want, how you want to arrange your tools, et cetera. So I'll go a little bit into that one. Um, so as I said, it's a generic web GIS. Um, it's plugin based. It's adaptable and like, sensible. So you can write your own plugins. Um, there are a lot of tools available and, um, there's a user group supporting that and, uh, pushing that forward. 
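[Editor's note] To make the JSON-based MapFish REST protocol mentioned above a bit more tangible: a client queries a server-side layer end-point with paging and spatial parameters and gets GeoJSON back, and editing uses the same resource with the usual HTTP verbs. A rough illustration with a made-up URL and layer name (parameter names follow the published MapFish protocol, but check the documentation for your version):

```bash
# Read: at most 20 features of the "parcels" layer inside a bounding box,
# returned as a GeoJSON FeatureCollection.
curl "http://example.org/mapfish/parcels?bbox=6.50,46.50,6.60,46.60&limit=20&offset=0"

# Create: POST a GeoJSON feature collection to the same resource.
curl -X POST -H "Content-Type: application/json" \
  -d @new_parcels.json "http://example.org/mapfish/parcels"
```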
So the architecture stays more or less the same, except that some parts got developed a bit more, like CGXP, which is a development from GeoExt towards more functionality. There's Sencha Touch, which we integrated as well, along with ExtJS, for the mobile part. And on the server side we use mainly Python libraries and MapFish print. So how does it look? One way it could look, when you're setting it up, is like this. You have a panel on the left side: you can have a themes panel with different layers where you can arrange them and change the order of the layers, and you can have other panels like the query or print tool on that side. You have a lot of tools up on the top panel, and some map tools around, like changing background layers, et cetera. So, a lot of tools. I won't go into all of them; you can try them yourself or ask questions about them afterwards, but I will talk about some of them. There's full text search. You can configure your search fields from PostGIS, PostgreSQL, and you can search for them and you get the results immediately. If you want some more information about some features, you query them, you get the results from several layers, you can show them, you can select them, you can also export them to CSV. Or you can even do complex queries, with several combinations of attributes and spatial restrictions. So you select a layer, you choose how it should match, whether everything should be combined, like the subset of this combination, or whether just one of those arguments should be respected. And there's also a profile plugin: you can choose your profile line and it shows the height profile. You can have an editing interface where you can add data to your PostgreSQL database, with restrictions on some attributes, et cetera. And there's an API also included, so you can integrate that in other pages, or with your portal setup you can give that to others. And there's a simplified mobile interface for small mobile devices. It works in general also on mobile, but if the device is too small, the simplified interface is easier to handle. So there are many, many examples already in use. We recommend you to have a look at them, but of course you can also do your own. It's open source, so just have a look at the GitHub page, have a look at the documentation, try it out, give feedback if you find something.
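As an aside on the full text search mentioned above: it is configured from PostgreSQL/PostGIS tables, so under the hood a search request boils down to a full text query against an indexed label column. A minimal sketch of that kind of query, assuming a hypothetical tsearch table with a tsvector column; the table name, column names and language configuration are my own assumptions, not the actual c2cgeoportal schema.

    import psycopg2

    conn = psycopg2.connect("dbname=geoportal user=www")
    cur = conn.cursor()

    # Hypothetical search table: a label, the layer it belongs to, a geometry
    # and a pre-built tsvector column named "ts".
    q = "lausanne gare"
    cur.execute(
        """
        SELECT label, layer_name, ST_AsGeoJSON(the_geom)
        FROM tsearch
        WHERE ts @@ plainto_tsquery('french', %s)
        ORDER BY ts_rank(ts, plainto_tsquery('french', %s)) DESC
        LIMIT 20
        """,
        (q, q),
    )
    for label, layer, geom in cur.fetchall():
        print(label, layer, geom)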
So I will just show some other implementations of the MapFish project. There's, for example, map.schweizmobil.ch, a touristic application which was made with the basic MapFish framework, or map.geo.admin.ch from the swisstopo implementation, where they are doing a redesign; maybe you saw the presentation of Cédric Moullet yesterday, I think. And, yeah, a campus plan of the technical university of Lausanne, or also time-related data, which is shown here over some time span. You can animate that and integrate that into your web GIS. So where are we going? For the MapFish web GIS we will have some features coming up next. We will integrate snapping for editing. We will add routing based on OSRM, and integrate the time slider to have the time component in the web GIS. We are working hard on a QGIS backend, so you could configure your layers in QGIS, configure the styling and then publish them to the web GIS. And in the long run we are looking forward to OpenLayers 3, integrating that into the web GIS as well. So yeah, I was really fast, sorry for that, but thanks for your attention, and if you have any questions, just feel free to ask. Questions? 15 minutes of them. Yeah. I was interested to see the little bits about the time thing. Could you tell us a little bit more about that, and also when you're planning to implement this time slider? So, I should repeat the question, I think. The question is about the time component, the time slider, and when it will be implemented. The time slider is based on the WMS-T specification, so we will integrate that way of publishing time. I think a first implementation will be towards the end of this year or the beginning of next year, like the basic implementation, and then probably during the first half of next year there will be further developments for interaction with it. But I think it will be something similar to this application, where you can choose your time scale, or the start time and the end time, and then you can animate through it or you can slide through it. So this is already an existing feature? Yeah. This one? Yeah. Is it like a plugin? This one is based on the MapFish framework, so it's not yet a plugin in the web GIS, the full-fledged, full-functionality web GIS, but we are going to integrate that. Does your web GIS also have a geostatistical plugin, like its predecessor covered that? So, does the web GIS also have a geostatistical plugin, in order to show and visualize data with classes you can interact with? There are some plugins which exist, not yet in the main project, but which might get integrated, because they have to get generalized for the main project. You can select, for example, a data set and an area and you get statistics about how much coverage is in that area and so on. But in the beginning, the core application is not doing geostatistical visualization. Which version of GeoExt are you using? So, what version of GeoExt are we using, and how do we contemplate the fact that GeoExt won't be integrating OpenLayers 3?
So, actually CGXP is a derivation of GeoExt. We're not exactly using GeoExt itself, but we got inspired by it and use it similarly to GeoExt. But in the long run, when we will integrate OpenLayers 3, it will probably be with another library, so Bootstrap or, I don't know, still to be defined. Which version of Ext are you using? It's Ext 3. Still Ext 3. Yeah. And probably we will stay on Ext 3 and then make the jump to OpenLayers 3 with another library. Do you think that by using another library the applications can become lighter, and was that part of your reasoning? So in general we will stay for some time on the Ext 3 basis, but one of the reasons why we consider going to another library is that the cross-device aspect is handled more easily. There are a lot of libraries coming up which handle the cross-device aspect, so cross-device is probably the main reason, but also lightening things up and easing the usability. Sure. Thanks. Just to add something about that; I mean, it was just discussed on Friday. How was that? Oh, I suppose you already know that we mostly switched to Angular. It depends; the Swiss geoportal is already using Angular. Okay, so even the discussion yesterday was not conclusive at all. Different efforts are going forward, so most likely some will continue using GeoExt. Actually one of the applications, this one, is based on Angular and Bootstrap. Yeah, that's true, it's going towards this direction, but for the MapFish web GIS we wouldn't say yet that we are really taking this library. We are still evaluating, and probably it will be this one, but maybe we could also stay with Ext; it depends also on what the needs are. So Camptocamp is the leading developer in this project and, from what I understand, also has a major role in OpenLayers. Are you coordinated somehow between OpenLayers 3 and MapFish? Yes, sure. With OpenLayers 3 we go a step further with developing, so we're breaking the API, and the web GIS implementations are kind of coming afterwards. So as soon as the API is fixed, we will for sure change and go towards OpenLayers 3, but not from one day to the next. Anybody else have a question? Okay.
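As a footnote on the time slider question: WMS-T support at the protocol level mostly comes down to adding the standard TIME parameter to WMS GetMap requests, which is what a slider steps through. A minimal sketch in Python; the server URL, layer name and extent are placeholders, not taken from the talk.

    import requests

    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": "measurements",          # placeholder time-enabled layer
        "STYLES": "",
        "CRS": "EPSG:21781",
        "BBOX": "485000,75000,835000,296000",
        "WIDTH": "800",
        "HEIGHT": "500",
        "FORMAT": "image/png",
        "TIME": "2012-06-01",              # a time slider would iterate over these values
    }
    r = requests.get("https://example.org/wms", params=params)
    with open("frame_2012-06-01.png", "wb") as f:
        f.write(r.content)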
The MapFish framework allows to build rich Web GIS Applications in an easy and flexible way. It combines some of the best Open Source Tools in one framework: OpenLayers 2, ExtJS3 and GeoExt4 on the client side, and MapFish print, Ruby or Python modules (especially Papyrus based on Pyramid) on the server side. Besides the OGC-Standard web services, a MapFish protocol adapted to the efficient communication between Client and Server is available. On this basis, complex and high performance web mapping applications have been built. Among them, one MapFish-based project will be presented in more detail in order to show the power of the MapFish Framework: the c2cgeoportal is a complete WebGIS with large set of tools and configuration options. Since its beginning, the plug-in based architecture makes each application unique and adapted to the specific use case. The presentation gives a general overview of the MapFish Framework and demonstrates its possibilities with the c2cgeoportal implementation.
10.5446/15549 (DOI)
Good morning. So my name's Paul Ramsey. I'm from Boundless Geo. I'm a PostGIS developer, but also now a lidar point cloud developer. So for the first third of this year, I worked on building blocks for storing lidar data in PostgreSQL and then leveraging that data for analysis in PostGIS. So that means developing lidar types and functions in the database, and then loading utilities to get the data into and out of the database. The development work's been largely funded by Natural Resources Canada, at least the first chunk of it was. They're planning on using PostgreSQL for their database to store their national elevation data and national lidar inventories. So why would anyone want to put lidar in a database? What's the motivation for doing that? So first of all, you can't just stuff lidar point clouds into existing PostGIS types, like the point type or the multi-point type. There's just too much of it. A county can generate hundreds of millions of points. A state will generate billions. A country will generate trillions. Second, lidar is multi-dimensional. And by multi-dimensional, I don't mean just x, y, z. A dozen or more dimensions per point is common for lidar. That's not unusual. And unfortunately, the multi-dimensionality of lidar is not fixed. It's not like there's always 12 or always 14. Sometimes you have four. Sometimes you have 17. So you've got potentially billions of points with many dimensions, and you can't predict how many. So there's no way to stuff this stuff into existing PostGIS types or columnar tables. But we don't just want to say, oh well, it's too hard. Because lidar point clouds have a geographic location. We can put the points into a place in space, which means if we can get them into a spatial database, then we can mash them up with other spatial stuff. And thanks to Tobler's law, which says everything is related to everything else, but near things are more related than distant things, we get some spatial value out of that. We can do spatial reasoning. So there's value in the exercise of doing it. On the other hand, I am on the record as not wanting to put rasters in the database, and lidar point clouds share a lot of the features of rasters. Lidar data isn't particularly relational. This is a table definition for a standard PostGIS feature table. It's got lots of interesting stuff. It's got geometries, but it's also got lots of attributes, which can be joined to other things. In contrast, a table of point cloud data looks like this. It's just rows of patch blocks. It's basically blobs in the database. There's not a lot of interesting stuff to query there. And then lidar is really big, like billions and billions of points. So this is going to result in really huge tables that are kind of fiddly to manage. They're hard to back up in a database, and they would be far easier to manage on a file system. And finally, lidar is pretty static. Databases are built to handle constantly changing data, but lidar updates aren't granular. You're not changing things a point at a time. You don't have continuous updates. They tend to be these big bulk updates, big overflights, just like raster data. Which means I need a little bit of re-motivation before I can keep going. So what's actually inside these rows and rows of binary blobs? There is actually quite a lot of detailed information. There are all these dimensions per point. And unlike raster, lidar use cases do tend to filter and subset the data.
All these points get filtered and subset individually. So the use cases aren't just all about bulk retrieval. And finally, Tobler's law is still there. So you've got the same motivation that got me to accept that raster was a useful thing in the database. It applies to lidar as well. Once you put it in, you can unlock all sorts of interesting analysis and value by having raster-to-vector, vector-to-raster, point-cloud-to-raster, point-cloud-to-vector analysis going on inside the database. So we decided to do it. How do we store lidar in the database? First of all, we can't just go one point per row. Because if you have a table with billions and billions of rows, this is going to be too big to practically use. The index is going to be too big. The table size is going to be very large, with one dimension per column. In general, there's a cost for a query iterating over each row, so we want to minimize that. So what we do for storage is we organize the points into patches, several hundred to several thousand each. This reduces a table of billions of rows into a table of millions, which is a lot more tractable. So practically, for the implementation, we end up with two new types: the PC point type, one per point, and the PC patch type, which holds these squares, these collections of points. The goal of lidar storage is to keep everything small, because there's so much data. So we pack the data into a byte array, and for each dimension we use as few bytes as possible to represent each value. So we can compare a packed form of a particular point. This has got x, y, z, an intensity, and r, g and b values, packed into just 17 bytes. If you stored that same data using doubles for each value, you'd have 56. So even the fairly simple trick of packing things into bytes saves you a lot of space. Once you pack them into a byte array, you need a description of that packing so you can unpack them to do analysis. So we have a description of how things are packed. It's done using an XML schema document. This is the same schema format that's used by the open source PDAL project that Michael's going to be talking about next. So this is just one dimension, the x dimension, and you can see the scale and offset values. This allows you to efficiently pack large values into a narrower byte space. You combine multiple dimensions in a single schema document that fully describes how a whole point is packed. Each schema document is then stored in a row in the point cloud formats table, which assigns every schema a spatial reference system and gives it a unique PCID. So to recap: we've got PC patches, which are collections of PC points, which are packings of dimensions, which are described in XML schema documents, which are stored in a point cloud formats table. And it's all tied together with the PCID that relates patches and points to the schemas, which you need to interpret them.
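To make that 17-byte figure concrete, here is a hedged sketch of the packing in Python. The dimension order, scale and offset below are illustrative stand-ins for what a real pointcloud schema document would declare; the arithmetic (three 4-byte ints, one 2-byte int, three single bytes) is the point.

    import struct

    # x, y, z as scaled 32-bit ints, intensity as 16-bit, r/g/b as single bytes:
    # 3*4 + 2 + 3*1 = 17 bytes, versus 7 * 8 = 56 bytes if every value were a double.
    SCALE, OFFSET = 0.01, 0.0   # illustrative; a schema stores these per dimension

    def pack_point(x, y, z, intensity, r, g, b):
        xi = int(round((x - OFFSET) / SCALE))
        yi = int(round((y - OFFSET) / SCALE))
        zi = int(round((z - OFFSET) / SCALE))
        return struct.pack("<iiiHBBB", xi, yi, zi, intensity, r, g, b)

    buf = pack_point(562300.12, 5117400.55, 1432.07, 812, 120, 118, 95)
    print(len(buf))  # 17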
So that's the developer's story. Does anyone recognize that? So what? You're not developers, you don't care about that. What if you want to use it? What does it look like when you're using all this new software? You want to enable it, so you build it. Point cloud only runs on PostgreSQL 9.1 and up, so we support installation via the extension method: create extension, alter extension, drop extension. So you enable the point cloud extension. Now you have point cloud. If you want to do spatial analysis with PostGIS, then you also enable the PostGIS extension. These two actually don't depend on each other. They're fully independent. In order to get the integration, you then add the point cloud PostGIS extension on top, which gets you the casts back and forth into the PostGIS domain, so you can do spatial analysis there. We've got a lot of tables and views in our database after enabling those extensions; most of them are from PostGIS, but there are two from point cloud: point cloud formats, which I mentioned, which holds the schema information, and then, like geometry columns, we've got the point cloud columns view. It looks into the system tables and tells you which tables have point cloud columns in them. Now, before we can create any points or patches in our little example here, we need to have a schema that describes the dimensions that we're going to hold. So this is the one we're going to use. It's a simple four-dimensional schema: x, y and z as 32-bit integers, and then an intensity as a 16-bit integer. And we're going to assign it a PCID number, right at the top there, 1, so you'll see the 1 show up. You can see it's actually quite verbose, and this is four dimensions. So now we can create our table. We won't do this for real operations, but for demonstration we're going to create a table that has a PC point column. You'll only use points transiently in operations; you use patches for storage. But we're going to make up a point table here and insert our first point in. You use the make point function: you take an array of doubles and convert it into a PC point. And see, we're telling it that it's PCID 1, so it knows that this is 32-bit, 32-bit, 32-bit, 16-bit. And then we can select the point back out of the table, and we get a well-known binary format. Very well known, see? Actually, if you break it out, it's not that crazy. It looks a lot like the well-known binary format for geometries. We've got an endian flag at the top. We've got to have a PCID everywhere, so we know how to interpret the rest of it. And then we've got the x, y and z. It's little-endian, so the least significant stuff is out here in the front. And then finally, the intensity at the bottom. So that's the computer-readable version. We also have an asText function that returns something that humans can more easily interpret, or at least computers who think like humans. So rather than OGC well-known text for my asText, I've just stuck with emitting JSON. It's more likely that people already have pre-existing JSON parsers to slurp that data down. You can pull any dimension out of a point using the dimension name. This is the gateway for filtering. So for any point, this is pulling z out; the value is 34. If you've got the point cloud PostGIS extension enabled, then you can cast your PC points over to PostGIS points. This is useful for visualization. All the visualizations later were done by casting my data from point cloud into PostGIS and then visualizing it using QGIS. So take the thing, cast it across, and I get a point Z.
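The same round trip can be scripted; a minimal sketch with psycopg2, assuming the four-dimensional schema above is registered as pcid 1. The function names (PC_MakePoint, PC_AsText, PC_Get and the geometry cast) follow the pointcloud extension as I understand it, so check them against your installed version.

    import psycopg2

    conn = psycopg2.connect("dbname=lidar")
    cur = conn.cursor()

    # Build a point against pcid 1 (x, y, z, intensity), print its JSON text form,
    # pull a single dimension out, and cast it to a PostGIS geometry.
    cur.execute("SELECT PC_AsText(PC_MakePoint(1, ARRAY[-127.0, 45.0, 34.0, 4]))")
    print(cur.fetchone()[0])          # e.g. {"pcid":1,"pt":[-127,45,34,4]}

    cur.execute("SELECT PC_Get(PC_MakePoint(1, ARRAY[-127.0, 45.0, 34.0, 4]), 'z')")
    print(cur.fetchone()[0])          # 34.0

    cur.execute("SELECT ST_AsText(PC_MakePoint(1, ARRAY[-127.0, 45.0, 34.0, 4])::geometry)")
    print(cur.fetchone()[0])          # POINT Z (-127 45 34)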
If we add one more point to our points table, so now we have a table with two points in it, we can make an aggregate. First we add the second point, and then we can use the PC patch functions. This is an aggregate function: it aggregates our two points into a single new patch in a new PC patches table. And then we can pull that one patch back out and see what the text representation is. The asText representation of the patch looks a lot like the point, except now we've got two point values inside our patch. So it's the world's smallest point cloud. We did an aggregation with the PC patch function. We can do the opposite, an explosion: take a patch and blow it out into all of its component points. So exploding our patch back out and then taking the asText of those points, you see we get one row per point and the same values back. So there you go. Now you know how to use point cloud. So that's the basics. But when you're actually using it, you're probably going to do something more practical, like work with real data. So what about my SQL? In order to do real world stuff, we need to load real data. So we're going to use PDAL, the open source LiDAR processing tools, which will let us handle multiple input formats, in this case a LAS file, and multiple output formats, although in this case it's the PostgreSQL point cloud format. And if you want, you can also apply processing chains in the middle to muck with the data on the way through. So for the NRCan project, I wrote a PostgreSQL point cloud driver for PDAL. That's now available in the PDAL main source repository. In addition to the reader and writer, we have to use a chipper, which takes the big input file and cuts it up into our small PC patch chips, which are suitable for database storage. So PDAL works on the idea of a pipeline file, which defines the operations and the readers and the writers. The readers go in the middle. We start by reading LAS. Then you've got your operations. Here are the ones we care about: we want to filter it, chipping it. In this case, we're chipping it into 400-point patches. And then we're writing up here to the point cloud. When the load's done, we've got a table like this. We've got a primary key and a whole bunch of patch data. And we can query it out and make sure it's the same as what we put in. Confirm we've got all 12 million points by summing the number of points, and we've got 30,971 patches. And the result looks like this. It's kind of hard to see what's going on, but if you put it in the physical context, it's more interesting. It's Mount St. Helens. And if we look a little bit closer, you can actually see the patch lines. So this load, the chipper, ensured that each patch holds about 400 points, though we could go higher. I'm up to about 600 points without passing the PostgreSQL page size, which I consider a magic number. Michael and I are still going back and forth about what the most efficient patch size is. So it'll be fun to find that out and tell everybody what the right answer is. For now, we don't know.
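For reference, a pipeline along the lines described could look like this. Current PDAL uses JSON pipelines (the 2013-era tool used an XML pipeline file); the stage and option names follow the PDAL documentation as I recall it, and the connection string, table name and SRID are placeholders.

    import json

    # Read a LAS file, chip it into ~400-point patches, write PC patches into
    # PostgreSQL Pointcloud. Run with:  pdal pipeline load.json
    pipeline = {
        "pipeline": [
            "st-helens.las",
            {"type": "filters.chipper", "capacity": 400},
            {
                "type": "writers.pgpointcloud",
                "connection": "dbname=lidar user=lidar",  # placeholder
                "table": "sthelens",                      # placeholder
                "srid": 26910,                            # placeholder SRID
            },
        ]
    }
    with open("load.json", "w") as f:
        json.dump(pipeline, f, indent=2)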
So this is about Mount St. Helens. Mount St. Helens is an odd mountain, right? Because it doesn't have a nice pointy summit like most mountains do; it's got a rim around the caldera. So we're going to answer an analytical question using Postgres Point Cloud: how tall is the rim of Mount St. Helens? To start, I digitized a line. I fired up my handy QGIS and just digitized a line around the rim of the caldera. And what I want to do is calculate the average elevation of this caldera rim. So here we go. It's a multi-step query, so I'm going to use my favorite PostgreSQL syntax sugar, the WITH clause, which allows me to chain together a bunch of queries without having to nest them in subqueries, so you can actually read them sequentially. So first step, we get out our patches. We want to get all the patches that intersect a buffer of that rim geometry I just created. That gives me the first chunk of data I'm going to process, the raw data. Here are the patches that I get back out, and if you look in closer, you can kind of see what's going on. This is the path, here's the buffer, here are all the patches that intersected that buffer. So now, given the patches, I'm going to take them and blow them out, explode them into a set of points. So now I've got a great big tuple set of points, and I want to take them and filter them. So the points are coming in, and I'm going to filter them and find just the points that intersect that buffer. That's going to filter down the patches that were partly in the buffer and only leave behind the points that were fully in the buffer. So what's left is these guys down the middle. And then the last step is to take those points and calculate the average elevation of all of them. Average, count, there's our answer. So the average elevation of the rim of the Mount St. Helens caldera is 2,425 meters, which is problematic, because the source of all truth in the universe says that the elevation of Mount St. Helens is 2,550 meters. So what's going on here? I cannot go against the source of all truth. So let's do a little bit of extra analysis and visualization to figure out the discrepancy. What we can do is explode all the points in the patches and find just the points that are higher than an elevation threshold. So we'll find all the points in our Mount St. Helens area that are above 2,500 meters, and we'll look at those in QGIS. And we start to get an idea. Aha. So there's a tall bit over here. The rim isn't flat. The southern end is the tall bit, and then it slopes downwards towards the north. And we can see that even more clearly by taking the patches. This is casting the patches across to PostGIS geometry; when you cast a patch, you get the square that surrounds it. Take those patches and color them thematically by their internal average point elevation, and then you can really see it. It's not just a nice even circle. In fact, it's quite tall over here and slopes down as you head to the north. So there's a little practical analysis of PostgreSQL point cloud. Now, in order to lower the IO load as you load this data into the database, you want to keep things small, so you want to compress things. It's an important concern. The compression of a PC patch is not just one fixed compression; you can have several different ones. The first compression, if you're going to have a multiple-compression format, is going to be none. All we do for the none compression is byte-pack things. This is basically equivalent to a LAS file. Dimensional compression is the default compression right now. What dimensional compression does is flip the ordering of the data from point, point, point to dimension, dimension, dimension, and then figure out the best possible compression for each dimension. You can pack 400 to 600 points into a single patch without going over the page size. It's about a four or five times compression compared to the uncompressed value. And then we have a third one implemented right now, and Neury, in the third slot, is going to talk about this scheme, the GeoHash tree compression. That takes the points and sorts them into a prefix tree, so it drops a lot of bits which are common to all the points, and it also moves data up to the point in the tree where it's common to all the children. So it's both a compression and an ordering scheme, and it allows you to go into your patches fairly efficiently and pick bits out. We're still figuring out the most efficient and effective way to use it, but it's a really cool trick.
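Going back to the rim question, here is a sketch of the chained WITH query described above, run from Python. The table and column names (sthelens.pa, rim.geom) and the 30 meter buffer distance are my own assumptions, and the function names follow the pointcloud and pointcloud_postgis extensions as I understand them.

    import psycopg2

    conn = psycopg2.connect("dbname=lidar")
    cur = conn.cursor()

    cur.execute("""
    WITH buffered AS (
        SELECT ST_Buffer(geom, 30) AS geom FROM rim        -- digitized rim line
    ),
    patches AS (
        SELECT pa FROM sthelens, buffered
        WHERE PC_Intersects(pa, buffered.geom)             -- patches touching the buffer
    ),
    pts AS (
        SELECT PC_Explode(pa) AS pt FROM patches           -- patches -> individual points
    ),
    inside AS (
        SELECT pt FROM pts, buffered
        WHERE ST_Intersects(pt::geometry, buffered.geom)   -- keep points inside the buffer
    )
    SELECT avg(PC_Get(pt, 'z')), count(*) FROM inside
    """)
    print(cur.fetchone())   # roughly (2425.x, <number of points>)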
I've shown a number of functions in action, but the most important are probably the get function, which allows us to interrogate points; explode, which breaks points into patches, or sorry, patches into points; union, which goes the other way, takes patches and builds them together into bigger patches, so this would be good for clip-and-ship output; the patch function, which unions point sets back into patches; intersects, so you can figure out which patches go together with a particular geometry; and then, really important, these casts back and forth to PostGIS. So you can take your point cloud data and push it across to the analytical side in PostGIS for things which aren't natively implemented in point cloud. That was the only slide I had when I first presented this. We now have even more functions. A lot of these are for speed. So you can now filter a patch, find all the values of a particular dimension that are less than a particular value. Threshold filtering is a really common use case in LIDAR, so less than, greater than, between values. And then it turned out that for efficiency, having the maximum, minimum and average values stored in the header of the patch object made a lot of sense, because this was a very common query: I want to find all the patches that have some dimension higher or lower than a particular threshold. So we can get summary statistics for each patch as well. Future development, stuff that's going to happen over the next few months: these two almost certainly. Transform: right now, once you load your data into a particular schema, there's no easy way to flip it into a different schema, either a different SRID or a different number of dimensions or a different scaling of dimensions, without going out to PDAL, doing your scaling and changing, and running it back into the database. So transform will allow you to do those kinds of operations inside the database. Intersection internal to point cloud will make the sort of classic geometry clipping thing a lot faster. And then a really common use case: given a big point cloud, I want to see a raster visualization of it, upsampling it into particular rasters. So this will be an integration with the PostGIS raster type. You'll upsample things into PostGIS raster, and then PostGIS raster will let you move it out or do more raster stuff with it. The PDAL writer and reader are still very, very simple, and I think when they have contact with the enemy, that is to say the users, we'll probably learn a lot about the extra flexibility and extra functions that are needed to actually make it useful in production. And then finally, although I tried really hard with the dimensional compression, I feel like there are smarter people out there who can do better compressions for sure. Probably not LASzip, because we have unpredictable dimensionality, but something like LASzip that can handle arbitrary dimensionality would give us higher compression ratios, for sure, without having any data loss. So it's out there for real. You can get the source on GitHub for point cloud, and pull the latest PDAL to get the point cloud drivers for reading and writing. And I'll take any questions for the next couple of minutes. Yes, sir?
I'm wondering up to which point this is LIDAR specific, because I'm doing a lot of work where I get point clouds from other sources, and I get these point clouds, but they are not LIDAR clouds. So to what extent is this LIDAR specific? It's not. If you have multi-dimensional data, irregularly spaced samples, then there's a lot of it. So it's really a question of the file format, or the driver. If you get the right driver to read your format in, it'll go fine. The only place where it might maybe not be a good fit is if they happen to be regularly spaced, in which case, hey, you have a raster. At the back. Do you have anything in the works to extract, say, a surface model from the point cloud? Do I have anything in the works to extract, say, a surface model from a raw point cloud? I don't have anything in the works to do that. I think it'll probably be pretty far down the road, because, and you can correct me if I'm wrong, Michael, that tends to be a very tunable operation. It's not an obvious sort of, here's one parameter, apply it to my data; it's 50 parameters of juggling and making sure you get the right stuff, with lots of visual feedback. It feels like something that would be more likely implemented as third-party software reading from the database than right inside the database. Yes? You mentioned those patches. Is there any way to get the extents of the patches out? I mentioned the patches; is there any way to get the extents of the patches? Yes, I think so. They're geometry objects. Well, they're objects with geometry in them. So the dumb way is just to cast them over to PostGIS. But I feel like I actually have extent functions in there, I just don't have the function list in my brain. So yeah, I think it's very trivial to get the extents out, either casting to PostGIS or, I think, there are direct reads on them. Any more questions? Well, thank you very much. You're welcome. Thank you.
How do you store massive point cloud data sets in a database for easy access, filtering and analysis? The new PointCloud extension for PostgreSQL allows LIDAR data to be loaded, filtered by spatial and attribute values, and analyzed via integration with PostGIS. We'll discuss the extension implementation, basics of loading data with PDAL, and how to use PointCloud with PostGIS to do on-the-fly LIDAR analysis inside the database.
10.5446/15547 (DOI)
GML is an OGC standard, KML is an OGC standard. If you're into 3D stuff, CityGML is an OGC standard. We've also got things like shapefiles, which are not really a standard, but they're very popular amongst the geo community. There are other formats, things like CSV, there's XML. Some government departments think PDF counts as a data format; I'll come onto that a bit later. The thing is, and this is where I get a bit heretical, the rest of the world doesn't actually care about GML, KML, shapefiles. You've got the statistics people, they've got SDMX. If you're in finance data, they've got XBRL. And pretty much every little domain out there has got their own ML of some variety. And I think one of the things is that data is kind of interesting when you bring it all together. It's sort of more valuable when you start bringing lots of different data sources together to do something. So, as an example: data.gov published some data about senior civil servants. They also published data about government departments, and Ordnance Survey published data about place.
In the old world these were all silos of data that were stuck on different websites. There were no connections between them, or if there were, you didn't know what those connections were. In the linked data world what you do is you identify each of those things. Vanessa Lawrence has got her own URI that identifies her. This is the URI that identifies Ordnance Survey up here. This is one that identifies Southampton. You identify people, places, organisations with these HTTP URIs and then you link them on the web using HTTP. What you actually do on the linked data web is you qualify what that link means. On the document web there are lots of links between HTML documents, but you don't know what that link means. In RDF you actually see what that is. Here we are saying that Vanessa Lawrence has a post at Ordnance Survey and that Ordnance Survey is based near Southampton. You can imagine, as you start following more and more of these links, you get a big graph of data. I think it's a very simple data model. I'm just going to go on now to talk about some of the linked data work we've been doing at Ordnance Survey. Three years ago we were asked to open up a number of our products. We've created linked data for three of those open data products at the moment. The first one of those is a product called Code-Point Open. Code-Point Open is a product which tells you all about postcodes: where they are, which administrative areas they're in. The other one was the 50K Gazetteer, which I'm not going to talk about too much in this talk; it's basically an index onto our 50K maps. Lastly there is a product called Boundary Line, which has information about all of the civil, voting and administrative areas in the country: all of the constituencies, wards, counties, et cetera. What we've got in the OS linked data, hopefully, my ambition, is to head towards having a URI for every place in Great Britain. For now we have to make do with postcodes and administrative areas. This is the URI for the City of Southampton. This is the URI for the Ordnance Survey headquarters. When you look up those URIs, and this is a screenshot for the postcode of Ordnance Survey HQ, you get back some nice HTML which has a few facts telling you stuff about that postcode. Not surprisingly we've got a map showing you where it is. It's got some information, for example, telling you that this postcode is in the district of Test Valley, it's in the county of Hampshire, and it's also in England. This is what you get if you come from a browser. If you come from a browser it assumes you're a human, or a close approximation, and you want some HTML. Likewise, this is some of the information for the administrative geography. One of the things we put into our linked data, because in the early days linked data wasn't very suitable for geospatial data, since not many of the triple stores, which are the databases that hold RDF, had the capability to do spatial indexing, was to actually pre-compute a lot of the implicit topological relations and put those in the data. For example, if you look up an administrative area it will tell you everything it contains, everything it's within and everything it touches, as long as that makes sense within a particular geography. That's some of the stuff we've got on the OS linked data.
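As a concrete illustration of what looking up one of those URIs involves: a browser gets HTML, but a client asking for RDF via content negotiation gets the raw triples. A minimal sketch; the postcode-unit URI pattern and the Turtle Accept header are how I recall the OS linked data service behaving, so treat both as assumptions.

    import requests

    # Placeholder URI following the data.ordnancesurvey.co.uk postcode-unit pattern.
    uri = "http://data.ordnancesurvey.co.uk/id/postcodeunit/SO160AS"

    resp = requests.get(uri, headers={"Accept": "text/turtle"})
    print(resp.status_code)
    # Triples describing the postcode: its district, county, country, grid reference, ...
    print(resp.text[:500])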
We've got a number of different APIs that you can use to interact with that data, and as we're at an open source conference, this is entirely built on open source software. The first one we've got is a simple search API, which was built on top of Apache Solr. It'll just let you type in a place name and it finds you the URI for that place. You can do some very simple spatial queries with that. Another, more interesting one is the query API: we've got a SPARQL query endpoint. SPARQL is to linked data as SQL is to relational databases; that's the query language of choice. If you're interested, it's built on open source software called Apache Jena and a database called TDB. We've also got a couple of other APIs which I'll touch on a bit later. This is a screenshot of our SPARQL endpoint. What you can do is, in the top window there, type in a SPARQL query. I won't go into SPARQL too much now; that's a whole day's worth of tutorial. You can choose your response format, whether you'd like it back as JSON, XML or TSV, and you can look at the responses here. This thing, which I think is quite useful, actually tells you the GET request it's performing on the API. Should you want to copy that GET request and embed it in your JavaScript or PHP, you can then use that to build your applications. One of the other interesting APIs we've got is something called a reconciliation API. This is a random spreadsheet that I grabbed off data.gov. I think it's the locations of all the libraries in the country. One of the columns, this column here, has got the administrative area that the library is in. As you can see, it's just a string, just a bit of text. The thing is, it doesn't then give you a hook into anything else. If you want to find out what European region that particular unitary authority or county is in, it's not very easy to do that. These are all very niche queries, by the way. If you want to compare the number of libraries in bordering regions, you can't do that sort of thing. With our reconciliation API, you can load a spreadsheet into a tool called OpenRefine, which used to be called Google Refine. And as you've seen, all of that column of place names has now turned blue. What it's done is it's tried to match the string in that column to a URI in the OS linked data. You've now got the URI in the spreadsheet, which means you can then go off and get information from our linked data and use that to enhance the data that you've already got. One perhaps more pragmatic, simple thing is you could go off and grab the lat-long coordinate for that and, if that was a postcode, stick it on a map. It's basically just a way of hooking into the OS linked data.
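Going back to the SPARQL endpoint, here is a minimal sketch of querying it over HTTP and reading the standard JSON results format. The endpoint URL and the spatial-relations namespace are assumptions based on how I recall the OS linked data service and ontology being laid out; adjust them to the live service.

    import requests

    ENDPOINT = "http://data.ordnancesurvey.co.uk/sparql"   # placeholder path
    QUERY = """
    PREFIX spatial: <http://data.ordnancesurvey.co.uk/ontology/spatialrelations/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?area ?label WHERE {
      <http://data.ordnancesurvey.co.uk/id/postcodeunit/SO160AS> spatial:within ?area .
      OPTIONAL { ?area rdfs:label ?label }
    }
    """

    resp = requests.get(ENDPOINT, params={"query": QUERY},
                        headers={"Accept": "application/sparql-results+json"})
    for b in resp.json()["results"]["bindings"]:
        print(b["area"]["value"], b.get("label", {}).get("value"))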
That's just a very brief summary of the OS linked data. There are lots of other people doing linked data around government. The ONS have just published, well, I've just stolen their thunder, so pretend I didn't say that, they will soon be publishing a new linked data site. The Land Registry publish linked data; the Environment Agency; the Met Office soon, hopefully. Who else? Oh, legislation, so there's actually linked data for every piece of legislation in the country. What this is actually starting to do is really join up government. There's the Department for Communities and Local Government, who have been publishing linked data. They've published something called the Indices of Multiple Deprivation, and they've linked that to Ordnance Survey. Companies House has published data, which can be linked to Ordnance Survey. The Environment Agency bathing water data, again, has been linked to the OS. They've also linked it to the ONS, and the ONS have linked this to the legislation. You can see you're starting to form this big web of data across government. This means if you want to ingest all of this data into an application and use it, because it's all linked data, because it's all in RDF, it makes it, I won't say easy, but it makes it easier, because you don't have to worry about translating between the different XML, JSON and random stuff that most APIs give you back. This is just an example. This is a screenshot of the Environment Agency bathing water data. As you can see, there's a reference there to the ONS and to the OS. And this is an API that builds on top of it, so you can actually just put an OS URI, or free text, into one of their APIs and get back a list of all the bathing water quality observations in that particular area. Then you can do a slightly more complex one, which compares it to the bathing water in neighbouring areas. Big data. As we were discussing before the talk, I'm not a big fan of the big data terminology these days. One of the things I think linked data helps with is probably the variety aspect. That's what you actually get: it's basically designed for when you want to integrate data from lots of disparate sources, so you start having this big graph of data. We've been using it internally, so we've actually been trying to see whether we can consume some of the data that the government is releasing, ingest it, and use it to enhance our data and provide value-added services on top of that. Luckily for us, there was data about transport, the environment, local authorities, crime, weather, business, education and health. It had all been released as linked data, and it had all been linked to the postcode. So if you wanted to do some sort of analysis, at least at the postcode level, of different areas around the country, this made it very useful. We actually built an application which, I know it's going to sound very GIS-y in some ways, and it is very GIS-y, but it lets you more easily do queries across a number of data sets, and more complex queries. This simple application was designed for, say, you move to a new area and you want to find a house where you want to live, a very traditional GIS kind of thing, but you want to filter it: you want to find a house in an area that's got low crime, where the schools have got high Ofsted ratings, maybe it's near a pub, hopefully there are low levels of pollutants. You get the idea: you can combine all these data sets to really narrow down the areas you want to find. So again, while you can do that in a GIS, or you could do it in a relational database, I would argue why would you want to, because this just makes it so much easier. Just another point of interest: when I started off, I mentioned that linked data didn't work very well with spatial data. A couple of years ago now there was something called GeoSPARQL, which is standardised by the OGC. GeoSPARQL is a way of embedding geometries, qualitative and quantitative spatial data, into RDF. For those of you that are into GML or WKT, basically the way you put a geometry in RDF is to store it as a big blob of GML or WKT. The idea is that now we've got a standard, the people who create the databases that hold RDF can start to build spatial indexes. I think a certain popular, well, maybe not popular, but a big database vendor has actually implemented it in their latest 12c release. Hopefully some of the open source ones will start to follow suit.
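For reference, a minimal sketch of what that GeoSPARQL-style RDF looks like, with the geometry carried as a WKT literal. The vocabulary terms (geo:hasGeometry, geo:asWKT, geo:wktLiteral) come from the GeoSPARQL specification; the feature URI and coordinates are invented for illustration, and rdflib is only used here to show the snippet parsing cleanly.

    from rdflib import Graph

    ttl = """
    @prefix geo: <http://www.opengis.net/ont/geosparql#> .
    @prefix ex:  <http://example.org/id/> .

    ex:some-feature a geo:Feature ;
        geo:hasGeometry [
            a geo:Geometry ;
            geo:asWKT "POINT(-1.47 50.94)"^^geo:wktLiteral   # lon/lat, illustrative
        ] .
    """

    g = Graph()
    g.parse(data=ttl, format="turtle")
    print(len(g), "triples")
    for s, p, o in g:
        print(s, p, o)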
A nice thing about linked data is that if we start to have URIs on the web which identify the resources that we're interested in, people can start reusing some of these things. I'm going to be a bit biased here, but I can say: if you want to talk about postcodes, use the OS URIs, and other URIs around the web. This means that we don't have the problem of key clashing: if two sets of people are both using 10-digit numbers for keys, you don't get that problem, because you've effectively created a global key. We've got a common data format now across the web, called RDF, and it just makes it easier to integrate data. I think at OS it's starting, well, I'm not really a geographer, I'm not really a GIS person, I'm just interested in data, but I think at OS it's pushing us to think beyond points, lines and polygons, beyond cartography, and actually about the data that we've got and what we can do with it, beyond printing it out, sticking it to the wall and putting some pins in it. So I'm actually trying to think more about data rather than visualisations of that data. Again, I'd argue, at risk of getting into trouble, that spatial isn't really special anymore, it's just data. But it's really, really important; it's not surprisingly been recognised that location is one of the key hubs that everything connects to. In my next talk, I'll show you an example where location has been shown to be a key integration hub that everything joins into, so that we can then query across lots of data via that route. So that is a whistle-stop tour of all things linked data.
A lot of data references some kind of location whether it is a place name, street name, address, postcode or some kind of coordinate. Because of this it is becoming clear that location provides an important data integration hub on the linked data web. This talk gives an introduction to linked data, and will focus on challenges around constructing linked data for geographic and spatial information. Examples will focus on work being done at Ordnance Survey and the wider UK Government.
10.5446/15546 (DOI)
So, my name is David Askov. I'm with Pacific Disaster Center. And Pacific Disaster Center, we apply information science and technology toward decision making and disaster risk reduction. We have a series of software tools to act as a decision support system for emergency managers. And then this is Scott Clark with LMN Solutions. And I can let him, do you want to talk a little bit about your company or should we just dive in? Okay, so we are collaborating together on a project that is called Rogue. And I'm not going to read this whole thing, but you can, I guess, watch it later on YouTube or whatever. And so basically Rogue is being implemented by OpenGeo, who I guess just recently changed their name to Boundless, is that what it's called? So I didn't change the slide, sorry, and LMN Solutions. And then at Pacific Disaster Center, we are what is called the transition manager. So basically we are sort of the entity that agrees to take on all of these things that are developed under the project and hopefully, you know, find some practical use for them. So without further ado. Basically there are four main components to this project that we're working on. And the first two are really driving the majority of the development on this. So the first one is called GeoGit, and I apologize if the title of this presentation drew you here, but we're going to speak a little bit about that, but not exclusively. And basically it's data versioning and replication. And then there's Arbiter, which is a mobile data collection. And then we are working on also this thing that's, we're calling it the GeoServicesRest, which is named after the specification that Esri proposed and then ultimately was not accepted. But basically what it is, is we are working on making GeoServer implement the Esri GeoServicesRest specification so that clients that are built for an ArcGIS server can talk to GeoServer and hopefully think they're getting something useful out of it. And so we're working on that. We have a demo here if the internet cooperates. We'll do some demos. And then we're also working on a KML uploader. So you can actually load KML up into GeoNode and then it will come out of GeoNode and GeoServer as a native GeoNode or GeoServer layer. So those are the things that we're working on. I put in yellow the ones that we'll be talking a little more about today. So as far as the GeoServicesRest integration, PDC has a decision support tool called Disasterware. And it was built. It has some support for WMS. Previously we supported WFS. We found that was rather difficult to support partially because it's a client-side architecture. And so we actually wrote some parsing rules in the client-side. I know it somewhat duplicates open layers. But we also had issues with SLD, which was a whole other spec that we had to support. And so about three or four years ago, we pretty much went to an ArcGIS server implementation. And it still has pretty good support for WMS built into it, but we've more or less abandoned the WFS. And so when we started this project, really the question was how are we going to get the data from GeoServer that we're using for this project into Disasterware. For things that paint a picture, that was easy. We'll just use WMS for features that needed to be delivered into a client-side app. We said, well, do we go back to WFS and do that again? And at the time, there was the, I guess it was a proposed specification, what it's called. And it was ultimately not accepted. But at the time, it was in the proposal stage. 
And so we decided to begin building a tool that would work with that. And ultimately, we did not go with the spec. We ultimately just went with an ArcGIS server. 10.0 is mostly what we've been testing it against. But I'm fairly certain it'll work against 10.1 or 10.2. And so, again, not really working with the specification, but more just building a tool. So this is a screenshot of Disasterware. And as you can see, it has a list of the current hazards that are here. I'm not sure. I think this is some rainfall data that we're looking at. There are earthquake data, a tropical storm over here. So this is our map viewer. And pretty much all of these map layers, except the Google background, are coming from ArcGIS server. And so this is the challenge that we're facing as far as getting data into that. And then it also has some ability to bring in TV news feeds for disaster managers. We try to get up on the Emergency Operations Center. We try to get one of those screens. And then they say, but we need to watch the news. We said, OK, you can do both. And it's pretty interactive. You can hover your mouse over it and get features. And then we also have reports that we bring into it. So as far as this is kind of what we started with, we have Disasterware. We use the ArcGIS server rest interface. And then I put WMS here as a dotted line. I mean, it works fine. ArcGIS server has decent support of WMS with a few little hiccups. But we really have no reason to use this because we can just speak over this rest interface to ArcGIS server. So what we were doing was bringing in GeoServer and trying to figure out how to make that bridge. And so what we decided to do was to use rest for feature data. So any kind of dot, really it's all the vector data. Anything that's not megabytes and megabytes of vector data can be streamed back to the client and rendered there. And so we decided to build that as a rest. And then we just continue to use WMS for anything that's image, whether it's raster or maybe something like contour lines where it is vectors, but you just don't want to stream it to the client. And so really this rest here, the idea is Disasterware. As far as it knows, it's talking to an ArcGIS server. And then the GeoServer, of course, can be based on a PostGIS data store. And then one of the other things of the project that I mentioned is this GeoGit. And so many of you have probably heard of GeoGit. It was, it got some nice billing in the keynote this morning. And one of the great things about GeoGit is, of course, they can be chained together in a distributed and versioned fashion. Okay, and so that's how we can get all of this into Disasterware. And so, you know, all of this work here is great, but this is when we go to our operational demos, our exercises, showing it to the stakeholders and the funding agencies. That's really what they're going to be seeing, so we need to be able to get the data into there. So as far as ArcGIS server, there are really three main interactions that we're working with. The first one is to get Map Service Info, so it just describes the Map Service. It's really fairly superficial. GeoServer doesn't exactly have a concept of a Map Service. It really mostly just goes straight to layers, so we had to kind of fake that a little. Whereas ArcGIS server, you can break it down into folders and different services. So this is the HTML view of the Map Service Info. 
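From the client side, that first interaction is a single GET against the map service URL asking for JSON. A hedged sketch of how a viewer like Disasterware might pull the Map Service Info; the host and service path are placeholders, and the f=json parameter follows the public ArcGIS REST documentation.

    import requests

    BASE = "https://example.org/gs/rest/services/hazards/MapServer"  # placeholder

    # Map Service Info: a fairly superficial description plus the layer ids and names.
    svc = requests.get(BASE, params={"f": "json"}).json()
    print(svc.get("description"))
    for lyr in svc.get("layers", []):
        print(lyr["id"], lyr["name"])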
And then ArcGIS server, as part of its REST interface, you know, the REST can produce HTML or the REST can produce JSON. And so this is the JSON output. They have what they call pretty JSON, which formats it nicely and puts in, you know, indentations and everything, or you can just get it all in one line. And then here we have a screenshot of our server here producing REST output. So this is describing the Map Service, okay? And this is coming from GeoServer. As you can see, there are a few things left to be done. You know, there's a description field here that has some information, and, whoops, I think it might be missing from this one. So, you know, there are a few things left to do. But we do have the IDs and the names of all the layers that are present. The next thing that you do is you go and you get the layer info. And again, you're still working by Map Service. This is sort of the way that the GeoServices REST spec works. But now you go in, and even though you're still working by Map Service, you're asking for a lot more detailed information about the layer. So you want to know the field names, you want to know the extent of it, scale thresholds. And then one of the biggest challenges that we faced was the symbology. So it actually returns the symbology to you. And then if you request it in JSON format, whoops, I pressed the wrong way. If you do it in JSON format, you can see here are many of the things that, you know, were just in the HTML view, except now it encodes that icon. So the, whatever layer this is here, I guess it doesn't really matter. But it encodes it into ASCII text, okay? And so that was one of the biggest challenges that we faced. For many months, we were just dealing with very simple, you know, squares and circles and triangle marker symbols. And so getting that working just in the last couple of weeks was a big victory, because every time we demoed this, they said, that's great, but can we not have circles and squares, you know? So here it is coming from GeoServer. And so you can see the really smart folks over at OpenGeo and LMN got the secret sauce to encoding those images into the REST output. And so really when it connects, it has to connect to this. It gets the list of layers. The layers are all referred to by a numeric ID, and then it has to get the symbology, scale thresholds, et cetera. And so this is really the key: before you can draw anything, you have to get this information. And then the next step is to query the layer. And that's where you pass in a bounding box and you say, well, what police stations are within this area? And you can put in time queries, where clauses, spatial filters — that's your bounding box — et cetera. You know, again, we are not supporting all of this. The Esri REST spec is over 200 pages. We're not going to get there, but potentially with some of your help, maybe we will someday. So we'll talk about that in a little bit. Basically, when you make a request, here is what it would return in the HTML format. And so you can see there are only three fields. These are hospitals. And then there's a geometry. Now, again, no symbology. You've already gotten the symbology from the previous step. Here's what it looks like in the JSON view of the REST output. Same information. And when I said there's three fields, there's actually, I guess, five. I wasn't counting the, you know, like object ID field or whatever it is. And here is a layer. I'm not sure what layer we're looking at. I guess it's fire stations.
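To make those three interactions concrete, here is a rough client-side sketch — not the project's code, just an illustration in Python — of the service-info, layer-info, and query requests described above. The host, service name, and layer id are made up; a GeoServer running the GeoServices REST module would be expected to answer equivalent URLs the same way an ArcGIS Server does.

```python
# Illustrative sketch of the three REST interactions. URLs and names are hypothetical.
import requests

BASE = "https://example.org/arcgis/rest/services/Hospitals/MapServer"  # placeholder

# 1. Map Service Info: a fairly superficial description of the service and its layer ids/names.
service_info = requests.get(BASE, params={"f": "json"}).json()
print([(lyr["id"], lyr["name"]) for lyr in service_info["layers"]])

# 2. Layer info: fields, extent, scale thresholds, and the renderer (symbology),
#    including the marker icons encoded as text inside the JSON.
layer_info = requests.get(f"{BASE}/0", params={"f": "json"}).json()
print(layer_info["drawingInfo"]["renderer"]["type"])

# 3. Query: pass a bounding box (an envelope) and get features back as JSON.
features = requests.get(
    f"{BASE}/0/query",
    params={
        "where": "1=1",
        "geometry": "-87.4,14.0,-87.1,14.2",        # minx,miny,maxx,maxy
        "geometryType": "esriGeometryEnvelope",
        "inSR": 4326,
        "spatialRel": "esriSpatialRelIntersects",
        "outFields": "*",
        "f": "json",
    },
).json()
print(len(features["features"]))
```

The design point the talk keeps coming back to is that the renderer (including the encoded icon images) arrives once per layer in step 2, so the per-feature query in step 3 only has to return attributes and geometries.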
This layer is for Tegucigalpa, Honduras, which is where we did a demo recently. And so this is coming out of GeoServer, and it's returning features that look exactly like a REST output from ArcGIS server. So I'll do a demo if the Wi-Fi cooperates here. I've already pulled it up, actually, so hopefully it won't take long to show. This is an instance — yeah, unfortunately, our screen's gotten kind of compressed here, due to the resolution. This is an instance of DisasterAWARE that we call EMOPS. And this is for registered users only, for emergency managers. And this is exercise data that we put in for Tegucigalpa, Honduras. And so you can see all of the — you know, it's a Google Maps background, but all of the dots on the map here are coming from GeoServer. And as far as the viewer knows, it thinks it's talking to an Esri ArcGIS server. So it's, you know, completely neutral to the vendor. So what do we have here? We've got temporary first aid stations. We were simulating, what was it, a stadium collapse during the parade on, what was it, National Independence Day or something like that. So, you know, they're setting up all kinds of temporary first aid stations. These are the incidents that they entered at the command center. Here are control posts. These are actual existing medical clinics that are permanent. But, I mean, it kind of doesn't matter what the layers are. As far as the technology goes, the point simply is it's talking to a GeoServer, but it really thinks it's talking to ArcGIS server. And I have the screenshot of that just in case the Wi-Fi wasn't working. So as far as the plan scope goes, you know, we've implemented close to as much as we're going to do. There's still more to go, but we're almost there. So basically the three things that I just walked you through: you can get a point, line, or polygon out of it, and we've gotten to the point of doing icon marker symbols. So what is coming up? We still have about another year left in our project. So: multiple point icons per layer — we're talking about, like, categorical or numerical classification on a field, say. We're hoping to release this as a GeoServer community module. Hopefully get other people interested in this. I don't know if any of you work with clients who have built, you know, tools based on the Esri stack, and you might like to integrate with them. So, you know, maybe hopefully get this tool out there. Hopefully get some help developing it a little further. And then we really only support the JSON format right now. So hopefully we'll try to support the HTML format that makes it more user friendly. It's really hard for non-developers to wade through that. And it also gives you hyperlinks that you can click on to follow through the different links. But as I mentioned, you know, the spec is 221 pages. It basically is everything that's in ArcGIS server. We're not going to support all of that, you know. Maybe someday it'll get there. I don't know. And just as an example, there's no way right now, and not even planned, to actually get a map out of the system. We're really only planning to do feature data. So, you know, that might be something somebody else could extend and work on with the base there. The next thing that I wanted to discuss was the KML uploader. And this is something that has really plagued us for a long time. KML is a great way to pass data around from one person to another to visualize it and view it.
It has a lot of challenges in terms of trying to get it into a system in an interoperable way. What if I need to read that KML and figure out if there's some hazardous situation within my area? You know, what if I need to find out all the critical infrastructure that's within the, you know, ShakeMap or whatever it is that's in that KML? And it's just something that at Pacific Disaster Center we have really struggled with, because it's a great format, but it's just very hard for us to get into our system and work with. So, we are working on this as part of this project. I guess LMN started it and then OpenGeo — or Boundless — no, it's the other way around. Okay, Boundless started it and then LMN is going to pick up the ball with it. Uploading KML into GeoNode and GeoServer. And basically when you upload it, it serves it out as a brand new layer. And so the current support — and this is really, we're on the bleeding edge here, so, you know, I don't want you guys to get too excited and run home thinking there's a solution to this — but basically right now it's point features only. There are no styles built into it yet and no network links are supported. So this is really just in the very, very early stages. And so as far as a demo goes, I think we're running a little short on time. So I have some screenshots that I prepared in case the Wi-Fi is bad, which I think will give you the same idea with much less time. So what I did was I just Googled for sample KML files and somebody had made one of the Google campus here. So they put a sample placemark. It's got a yellow push pin and some attribute text in it. This is what has been built. So it's part of GeoNode, and you come in and it has an upload layers screen, and you drop files here — you can drag a KML file right onto that. I don't believe it supports KMZ right now. And so, literally, you know, just drag it on there. There is also a file chooser if you want to go grab something off of your hard drive or the network or whatever. Once you have it, it reads the file and it says, okay, I have this to upload. I believe you can do multiples. You can kind of queue them up and then you say upload files. It's pretty straightforward. And then when it is over, hopefully it will say your layer was successfully uploaded. It doesn't always say that. But if you've done everything according to what we support, it will. And then in this example here, it's very hard to see, but right there is an orange dot, which is our yellow placemark. So, I told you, no style. So, you know, the yellow push pin became an orange dot. And the text there, the name — I think if we expanded that, what it said was "Simple placemark". And so when we come in here, it has much of the information that came with the original KML. So it's pretty simple. Actually, the guy who developed it told me it doesn't support polygons. I actually loaded it in, and this sample from Google had 3D polygons and all kinds of extruded features — there were like 30 samples of all the kinds of crazy KML you could do — and it actually took some of them. So it took some of the 3D polygons and it flattened them to 2D, but it did work. So there are other things that we've gotten to work. And then once you've loaded it in, I showed you here in GeoNode how it's now available as a layer for you to work with. And then it's also available as a layer in GeoServer.
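To give a feel for why point features are the easy first case, here is a small, hypothetical parsing sketch — not the uploader's actual code — that pulls Point placemarks out of a KML file with nothing but the Python standard library. Styles, network links, and 3D or extruded geometries are where the real work starts.

```python
# Illustrative only: extract (name, lon, lat) for Point placemarks from a KML file.
import xml.etree.ElementTree as ET

def local(tag):
    """Strip any XML namespace, e.g. '{http://www.opengis.net/kml/2.2}Point' -> 'Point'."""
    return tag.rsplit("}", 1)[-1]

def read_point_placemarks(path):
    points = []
    for placemark in ET.parse(path).getroot().iter():
        if local(placemark.tag) != "Placemark":
            continue
        name = next((el.text for el in placemark if local(el.tag) == "name"), "") or ""
        coords = next(
            (c.text for p in placemark.iter() if local(p.tag) == "Point"
             for c in p if local(c.tag) == "coordinates"),
            None,
        )
        if coords is None:
            continue  # skip lines, polygons, network links, etc. -- the hard cases
        lon, lat = coords.strip().split(",")[:2]  # KML coordinates are lon,lat[,alt]
        points.append((name, float(lon), float(lat)))
    return points

# Hypothetical path to the Google sample file described in the talk.
print(read_point_placemarks("KML_Samples.kml"))  # e.g. [('Simple placemark', -122.08..., 37.42...)]
```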
Here in GeoServer, as soon as you load it in, you hit refresh and you've got a new layer there, ready to work. And then it supports all of the things that GeoServer natively supports. And then I loaded it back into Google Earth, and where GeoNode showed it as an orange square, this shows it as a kind of turquoise dot. But it actually, you know, came through fairly well for an initial test. And again, the sample that I was using was sort of like, what kind of crazy features can we throw at it? You know, if we're dealing mostly with points, lines, and polygons, I would expect that to be, you know, well-defined rows and columns of data. I'd expect that to be a little easier. So, as far as the next steps: lines, polygons, we need to do symbology. Error handling is very important — you know, you load it in and it doesn't work, and why didn't it work? What do I do to fix this? So that's what we're working on. And then the last thing to mention is, really, we're not really sure how to share this back to the community. I mean, that is the goal, but it spans across, you know, GeoTools, GeoNode, GeoServer, all these different things. So, you know, it's not like a neat little bundle that we can release, but we're working on something. And then, Scott, do you want to talk about this? Oh, right. So, this kind of brings it all back, brings it all home. So, basically, we had recently done an operational demonstration for Rogue. This is back into Rogue, and we used GeoNode at both the SOUTHCOM location, JTF Bravo, and then at Capeco, which is their disaster response organization. Arbiter, the mobile application, was there for collecting data and pushing it up. And then everywhere you see a green arrow is actually GeoGit syncing between the different nodes. So we are able to keep all of the different nodes in sync so they can actually work off their own servers. And if they lost connection, then they still at least had a server local to them, so they didn't lose the data. And we keep the provenance of the data through GeoGit, so that if somebody actually deleted something, you know it got deleted and you can actually see that in the history. And the other part is both organizations can actually edit the data. And then both organizations also had the DisasterAWARE viewer open. So the difference being, in GeoNode we had the editing capability. So that allowed for users to input information, to edit, and to put things in. And DisasterAWARE is really a great situational awareness tool when you're talking about what the emergency management center is looking at, or the decision makers when they want to come in — they just want to see the big picture. And this link down at the bottom is actually Capeco's post on their website about it. That's the minister in the middle and two of the Capeco responders using Arbiter and uploading incidents into the system. So then, again, we used the GeoServices REST to visualize it in PDC. We didn't actually use the KML much for that operational demo, but it's extremely popular, so it will definitely happen in the next one. So that kind of all ties it together. If you have any questions, we can dig into this in a moment. Thank you. And one plug: for the GeoServices REST, the lion's share of the work was done by David Winslow from Boundless. And he's at the desk out there. So if you want to ping him about technical details, if you're a developer, he's the guy to run down. Any questions for the guys?
Before I ask, would it complicate one by all, Josh? Data services are basically available. You know, where or disaster or... So, the question is if the data services from disaster were all public. Actually, we have an application that is sort of a disaster aware public version, which is called Atlas. That's at atlas.pdc.org. It's about, you know, 80% of the layers, I would say. But for the EMOPS website that I showed you, basically the difference between what's in the Atlas and what's in EMOPS, those generally require restricted access. So, you know, anyone, and it's fairly loose, I mean, anyone really working in disaster management can request an account. But, you know, it's really not intended for the public. Absolutely, they could. They are password protected. But, you know, if you authenticate it, absolutely, they're, you know, just ArcGIS server services or as far as the Geo server goes, it's Geo server services that mimic ArcGIS server, if you will. Yeah? Why KML? Where is the source of the KML files? Is it remote sensing or devices in the field? Are you asking why we chose... Yeah, exactly. You know, it's just a problem that we have over and over and over again, you know, when we try to integrate disaster data from other agencies, you know, there's all kinds of data out there, you know, but often KML is the method of interchange, you know. And I think a lot of times disaster agencies are really looking at how do we reach end users. So there's a great feed out there from Japan Meteorological Agency, they have tropical storm tracks, you know, throughout all of the western Pacific. It's great. It's published as KML. It's great for a user to be able to just pull into free tools. It's not so great for us to try to pull into a database and do analysis on it and try to say, okay, what's in this area? And it's just a challenge. And that's just one example. I mean, just countless examples. And so, you know, if all we want to do is display it on the map, that's one thing, but if we're actually trying to pull it in and do any kind of analysis on it, then, you know, we really have to be able to get it into some kind of native format. And apparently somebody will bring you a KML file. I mean, when I was down in Capec, well, I've got this flood layer in KML. You wouldn't have problems at the time that you actually got a history of events in the format. So it's not a single observation, but probably 10 or 20 observations. Yeah, yeah. So you have all that as a time slice in the system? Right now, we're not doing anything like that with our KML. I mean, it's very simplistic, you know, so, yeah, just really development is just kind of barely underway. So any other questions you said? You have one, Ken? No, no, no. Oh, you're just kidding? I think my employer would oblige me to ask a multiple question. Are there any final questions? Okay. Yeah, thank you.
ROGUE (Rapid Open Geospatial User-Driven Enterprise) is a 2-year project funded under the Joint Capability Technology Demonstration (JCTD) Program from the U.S. Department of Defense. It is scheduled to be completed in July 2014. Technical management is provided by the U.S. Army Corps of Engineers, with OpenGeo and LMN Solutions leading its technical implementation and the Pacific Disaster Center (PDC) serving in the role of project Transition Manager. The project’s goal is to improve the abilities of the OpenGeo Suite to ingest, update, and distribute non-proprietary feature data in a distributed, collaborative, and occasionally disconnected environment and then transition it into an operational environment by the end of the project. The charter for the ROGUE JCTD is to enable collaboration on geospatial feature data for distributed organizations and teams. This is being accomplished through a community effort based on the OpenGeo Suite, GeoNode, and GeoGit. While GeoGit provides data producers with a conduit to collaboratively develop and share geographic data, the GeoNode software is also being enhanced to leverage this capability for the discovery, display and dissemination of the data. By integrating these capabilities with Pacific Disaster Center’s DisasterAWARE platform, the DoD and mission partners are better able to plan, analyze, and collaborate using dynamic map data to support humanitarian and disaster response. PDC’s DisasterAWARE system presently supports ArcGIS Server REST format, so another aspect of the project is to develop a prototype of the GeoServices REST 1.0 candidate standard (derived from the “Esri GeoServices REST Specification Version 1.0”) to deliver the content from the OpenGeo Suite to PDC’s DisasterAWARE. This enables clients to ArcGIS Server REST services to consume map layers from the OpenGeo Suite via this new functionality. The ROGUE-enhanced OpenGeo suite will be integrated into PDC operations as well as its DisasterAWARE decision support application at the end of the project. This will greatly facilitate collaborative data development and management with key humanitarian assistance and disaster response stakeholder agencies to more effectively support disaster risk reduction activities around the globe.
10.5446/15544 (DOI)
— of the session, and apologies for delaying your lunch. I'll try to make this a good presentation. And thanks to Marco for letting me use this computer. Yes, okay. So my name is Chris Icamp. I work at the Henri Tudor Institute in Luxembourg. I'm here today to talk about iGUESS, which is our distributed modeling platform based on open web services. And I'm going to talk to you a bit about what iGUESS is, how it works. And then I'm going to have a section on the lessons that I learned — or that we learned — programming it, and then some ideas for advancing the web services ecosystem to make projects like this a little easier. And of course there will be cowboys. So MUSIC is a European project intended to help cities reduce their CO2 emissions, primarily in the areas of energy production and energy consumption. It's a partnership of five European cities and two research agencies. And iGUESS is the modeling platform and decision support system that's a primary component of the MUSIC project. And the types of questions that iGUESS is intended to answer are things like: how much solar potential do we have in the city? How much energy do we have in the city? And I think those are good questions. So we wanted a way that the data and the models could work together. We wanted to simplify access to modeling tools. If any of you have done much modeling, you know, sometimes models can be very complex to configure and run. Now some of that complexity is just inherent in modeling. It's a complex business. But we were looking for ways that we might be able to simplify the process a little bit. And we wanted to create a reusable framework. We wanted to use this code for other projects. We wanted our partners to be able to use it for other projects. And we wanted to have it be open source so anybody could use it. And so the code base had to be distributable. So we took all these things together, and we decided that iGUESS should be designed around open web services and that ecosystem. So this is just a very schematic view of how iGUESS fits into the services that we use. Over on the — say this is the left, the right, this side, the far side of the diagram — you see these are the model services, model servers. So these are WPS servers that can live anywhere on the Internet. We have data servers over here on the right that are also data provided by people on the Internet. And iGUESS is in the middle. iGUESS doesn't host any processes. iGUESS doesn't host any data. It just acts as a middleman, kind of a matchmaker, to help match up data services with modeling. And then it will help serve out the results of the modeling process. So one thing I'd like to point out on this diagram is that there's a gray box around these different services. And the reason that the box encompasses all the services together is because we see these data services as components of, really, a single aggregate, composite data service. We don't think of WFS as different from WCS. I mean, it is different. But what we really think is: these are data servers. So from the modeling perspective, we don't really differentiate between those two. So this is a screenshot of iGUESS. And all this is showing here is just a list of the different models that have been registered with the system currently. And it doesn't matter if these are all on one WPS server or on different WPS servers, the user doesn't care where they are.
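Behind that list of models there is nothing more exotic than a WPS GetCapabilities request to each registered server. A hedged sketch — the endpoint URL is a placeholder — of asking a WPS 1.0.0 server what it can do and listing the process identifiers and titles:

```python
# Illustrative sketch: list the processes a WPS 1.0.0 server offers.
import requests
import xml.etree.ElementTree as ET

WPS_URL = "https://example.org/wps"  # placeholder endpoint

resp = requests.get(WPS_URL, params={
    "service": "WPS",
    "version": "1.0.0",
    "request": "GetCapabilities",
})
root = ET.fromstring(resp.content)

OWS = "{http://www.opengis.net/ows/1.1}"
WPS = "{http://www.opengis.net/wps/1.0.0}"
for proc in root.iter(f"{WPS}Process"):
    print(proc.find(f"{OWS}Identifier").text, "-", proc.find(f"{OWS}Title").text)
```

In WPS terms, GetCapabilities lists the processes; the detailed input and output listing shown on the next screens is normally filled in by a follow-up DescribeProcess request for each identifier.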
The user just has these registered models that they can use and run. So what I'm going to do now is open up one of these, expand one of these tabs, and show you what's on the inside. So here we have a description of the model at the top. Then we have a list of all the inputs that the model requires. And down at the bottom is a list of outputs. All this is, is just a formatted view of the WPS server's GetCapabilities — it's a request you can make to a WPS server that will tell you what it can do — and all this is just a view of that response. But before we can run a model, we have to add data. So I'm going to show you how we get data into iGUESS. So this is our data import screen. Or it's more of a data registration screen, because we don't actually import data. All we're doing is registering where the data lives on the internet. So you put a URL in up at the top there. And the web client will go off and make requests using WMS and WCS and WFS to find out what data is available at these URLs. It pulls back the responses it gets from all these things and creates a little — I don't know, I think of it as an index card — showing what the data is. You get a snapshot of the data. You can see if the data is available for mapping, if the WMS server has responded, or if the data is available as a model input. But probably the most important thing on the screen is the tagging. So when you register a data set, you can assign tags to it. And these tags are drawn from the input identifiers of the different WPS processes. So when you start tagging data sets — when you import a data set and assign a tag — you're essentially associating that data set with the input to a model. And we'll see how that works in just a minute. So this is the actual model configuration screen. This is essentially the same information here that we saw earlier. We've got the model description, the inputs and the outputs. But in this case, we can actually start assigning data sets to each model input. So for example, this top one is a digital surface model. Any data set that we tag with DSM is available as an input in this drop-down box. So we can register a lot of data sets with iGUESS, but if you've only tagged one or two of them as DSMs, those are the only ones you have to worry about deciding between when you're configuring that input. So you can go through, assign input data sets to all the model inputs. If the model — in this case, there's a place where the model requires a string entry — you can type that in. Basically, you configure the model by just specifying what all the inputs are. And then down at the bottom, you can say what you want the output data sets to be called. So we get that all configured. Everything's happy. We provided all the inputs that we need. And we have the option to run the model. So here the model is running. That says running. That's how you know it's running. And it goes. And WPS, the protocol, says that you can — well, what we do here is we start polling the WPS server. And if the WPS server is well behaved, it will tell us its progress as it goes. So for example, in this case, the model is 25% completed and we get that status indicator in the browser. So the model runs away — it runs, runs, runs — and finally, hopefully, it finishes. And then what we can do is, we've created, in this case, two new data sets. We can view those data sets.
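The execute-and-poll cycle just described looks roughly like this in code. This is a deliberately simplified sketch, not iGUESS's implementation: the endpoint, process identifier, and input are placeholders, real inputs would mostly be references to registered data URLs rather than a literal value, and it uses the WPS key-value GET encoding for brevity even though many servers only accept the XML POST form.

```python
# Hedged sketch of an asynchronous WPS 1.0.0 Execute followed by status polling.
import time
import requests
import xml.etree.ElementTree as ET

WPS = "{http://www.opengis.net/wps/1.0.0}"
WPS_URL = "https://example.org/wps"  # placeholder

# Kick off an asynchronous run: store the response and ask for status updates.
resp = requests.get(WPS_URL, params={
    "service": "WPS",
    "version": "1.0.0",
    "request": "Execute",
    "identifier": "solar_potential",   # hypothetical process
    "datainputs": "cellsize=10",       # hypothetical literal input
    "storeExecuteResponse": "true",
    "status": "true",
})
# Assumes a well-behaved server that returns an ExecuteResponse with a statusLocation.
status_url = ET.fromstring(resp.content).attrib["statusLocation"]

# Poll the status document until the process finishes (or fails).
while True:
    status = ET.fromstring(requests.get(status_url).content).find(f"{WPS}Status")
    started = status.find(f"{WPS}ProcessStarted")
    if started is not None:
        print("running:", started.attrib.get("percentCompleted", "?"), "%")
    elif status.find(f"{WPS}ProcessSucceeded") is not None:
        print("done")
        break
    elif status.find(f"{WPS}ProcessFailed") is not None:
        raise RuntimeError("process failed")
    time.sleep(60)  # the talk's once-a-minute polling cycle
```

A successful run leaves the output data sets registered on the server like any other data.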
We can use them as inputs into other processes or download them and analyze them further on the desktop. And so this is just a quick view of our map viewer, a data viewer. There's nothing particularly interesting here for people at this conference. I'm sure you've seen plenty of map viewers. It doesn't, doesn't do anything special. So what I want to talk to you now about is what we learned while we were developing this system. So first we have the good things. So the system is, it's easy to deploy updates. If we fix a model, fix a bug in a model, a data set gets updated, these, you know, by, by any of the data owners, these changes are propagated immediately to all the users of the system. You don't need to install new software, you don't need to download new data. It just happens automatically. Data owners retain control of their data. And that's important for some people. They don't want to give their data set away. They want to feel that they still retain ownership. So by using web services this way, we can use other people's data, but we don't have to take it from them. And we can use data in models that are developed by other people and perhaps put them together in ways that we hadn't anticipated. Someone could register a new model with a system that we hadn't seen before and use it with data that we might not have seen before and come up with a result that we hadn't anticipated. So this, this type of system gives us flexibility to be creative and allow other people to be creative and we, we find that to be an important thing. It's based on open standards. There's no vendor lock-in. I come from a long history of using Esri. So this is a really new thing for me. I like it. And the system is flexible. By plugging in different models and different data sets, you can suddenly do entirely new things. So these are all great, great aspects of this architecture. So not everything is great though. So we have five partners as I said earlier and each of them had, what to me was an interesting problem. So the first one I call the WFWTF problem. And this is basically an observation that we dealt with a lot of very professional, very educated, talented GIS managers and almost none of them had heard of web services. So if you're going to deploy a model with people in the real world who are maybe not the people at this conference, they're not necessarily going to know what web services are, how to set up a WTF server and so on. Now another problem we had was that one city had one of our partners had contracted all their IT functions to a private company. I'm not sure why they would do that, but that's what they did. Now the company was very happy to set up web services for them as long as it was itemizing the contract. To change fees were paid, to change the contract and it took over a year to get everything renegotiated, get the money sorted out and get this all working. I mean just to set up a server. Sometimes in-house IT can be a problem. One of our partners had an IT department that was not very happy to open new ports on the firewall to have servers that were not under their control running. Some IT places in some parts of some places, maybe you've worked in places like this, don't really, they want to control everything. And so if you're dealing with people who are in that kind of environment, it's a difficult issue to overcome. Data security and privacy concerns are a big issue here. One of our partners had a DEM or a DSM data set they purchased from a vendor. 
They couldn't put that service or that data on a public web service because it violated the contract and the copyright of the data. So that was an important issue. They also had data that was utility data. It was aggregated somewhat to the block level, but then there were some blocks that only had one household on them. So even with aggregated data you could still get information about individuals' utility consumption, and there are some really legitimate concerns with publishing data that can be tied back to individuals. So that's another issue with putting data into this kind of ecosystem. And finally, some of our partners' software just has poor support for web services. For example, one of our partners is using an old version of MapGuide, which just doesn't support WCS data, and that's a problem. So the conclusion from this was that sometimes the non-technical issues are bigger than the technical ones. And when you're dealing with partners who are not necessarily on the cutting edge, this is a real issue. So, the ugly. It gets better, right? This is primarily a comment on my feelings towards WCS. I think WCS is a service for providing raster data. And in itself, it's a fine thing. It's just that nobody supports it and not too many people seem to be using it. So there are a lot of libraries — like, we use OpenLayers and GeoExt, and we actually wrote WCS support for these — and I think sometime in the next week or two, for OpenLayers anyway, the WCS code will be rolled into the main release. QGIS, actually — I just learned this yesterday — will be supporting WCS maybe as of today, if they release on schedule. But until today, they didn't support WCS. So if you're building a system that relies on WCS, as we did, this is a problem. It will get better over time, but right now there are still a lot of packages that don't support it. Okay, unstable. There's a lot that can go wrong in this ecosystem. There are a lot of ways things can fail. You're dealing with servers all over the network: you have network issues, you have server issues, servers go down, people take data sets away, things happen there. That doesn't happen on the desktop. I mean, no one sneaks into your office in the middle of the night, probably, and steals your data sets. So there are a lot of things that can go wrong in this environment. Most of the systems are beyond your control. So you might be using iGUESS as your modeling platform, but somebody has changed the version of some software somewhere, or these remote pieces may not support the same versions of the protocols you're hoping are supported, and there's just a lot of stuff that can happen. And because there are so many different pieces of software that people can plug into this ecosystem, it's very difficult to actually test and verify that your system works. And so what often will happen is you get an error somewhere. We do have some good ways of propagating the error back to the user, but oftentimes you get errors that look like this. They show this to a user who's using iGUESS, trying to run a model on solar panels, and, you know, they're not going to know what's wrong with it. I didn't know what was wrong with that. And that actually came from one of our servers. But this was a problem because the WCS server somewhere was misconfigured and it caused the WPS server to fail, and, you know, it's just difficult. So, the undead — eh, the zombie cowboy — what presentation would be complete without a zombie?
So, actually this is, this is primarily a comment on the fact that WPS server is immortal. Now, what I mean by this is not actually immortal, they crash all the time. But what I mean by this is that once you start running a WPS process, you can't kill it. At least within the protocol. I mean, you can unplug the server. But there's no way within the WPS service protocol to kill this thing. Now, we have our solar model. If somebody uploads a reasonably large data set, this thing will run for two or three weeks. It's a huge process. It consumes a lot of resources, a lot of power. It's expensive to run this. Somebody starts running this and says, oh, man, I should have put a two in instead of a three. They stopped running it. Well, okay, I guess it's fine. The process is gone. It forgets about it. But the WPS process just keeps running. So if weeks go by while this thing is in the undead state. And meanwhile, that person may have launched another one and then decided they didn't want that. And, you know, so we can end up with a lot of processes going. It's expensive. There's no way to kill them. And this is a real problem. So the future. These are my ideas for how the ecosystem could be improved in coming years. And I hope we'll be. So the first one is this idea of, I don't know if any of you who worked in great detail with web services, but WFS and WCS, which are the vector and raster providing services, have a thing called get capabilities that describes what the server has. Now, currently, you have to make that request separately for WPS and W, sorry, WCS. And this is like, this is soup to me. It's like web services soup. So I get them confused sometimes. But so you do get capabilities for WFS and WCS. And then you have to maybe do a describe coverage or describe feature requests. And you get all these, make all these requests and you get all those data back. And then you have to try and sort it all out. And hope there's nothing conflicting. And I hope everything looks good. And it's a real pain. And it would be great if there was just a way to say, get me all the metadata for all the data on the server. And just one request. So that would be my first idea is to simplify the process of getting metadata about the data that's on the server. Second one is WPS, this is the model running service. Request has, you know, you have to provide all this data to run a model. But the WPS process does not actually describe what it needs very well. You can provide it, you can get a style sheet that is fairly detailed. But processing this and parsing this and understanding the style sheet is very, very complicated. And we haven't found any good tools for doing it. What would be great is if a model could say, here's the data that I need. And you could take it and describe in detail. And you could send that description off to a data service and say, hey, data service, what data do you have that matches this description? And then the data service could say, well, okay, this data set looks like you could use, you could use this data set as an input. And if there's a way that the WPS could cooperate better with the data providing services, it would make life a lot easier for everybody. WPS also should have some callbacks, I think. Right now, when you start running a model, in this case, it's a three week solar zombie model. The model, it runs and you have to ping it, start pinging the server and say, hey, are you done yet? How are you doing? Are you done yet? We do this every minute. 
Now, if this thing is running for weeks and we're calling it every minute, that seems to me to be a little bit wasteful. I would really like to have the, when the server is done running, it could call you or I'll say, hey, I'm done now. And then we could carry on. And this also applies to very, very short process. You have a process that runs in a second, but you're only checking on it once a minute. That means the user could have an almost instantaneous feedback from running the model, but instead might have to wait a whole minute until your polling cycle goes through. So if the WPS server just had a way of announcing its status to somebody, that would be great. Number four is just want to kill the zombies using your favorite zombie control method. There just needs to be a way for WPS to be terminated. And then the last one is it would be great to augment XML with JSON. And what I mean by that is you do get capabilities requests. The server tells you about itself and it tells you about itself in XML format. Browsers don't understand XML. If you're doing something on the client side, get back XML. Now you need to interpret that XML. That usually requires a fairly big library. And if you're running on a phone or a tablet, there's a lot of JavaScript you need to download in order to make this work. And if any of you have looked into the open layer source code, a lot of it is just XML parsing code. If we could get a response in JSON, browsers can understand JSON natively. We could cut out a bunch of stuff from open layers, simplify it, make life easier for developers, make life easier for users, and I think make everything work a little bit better. So those are my ideas for how things can be improved. And we are now at the end of the presentation. I thank you for your attention. Right. Some tricky questions. We'll let some people get some lunch here. So as I understand the right orchestration service, I go between data processing. So you have a catalog of data from wherever your users are. Okay. The catalog of data comes from the registration process. The user enters a URL of the data service and then kind of builds the catalog on, I guess. So if you want to run the process, does the data not actually send across to the processing machine? No. Well, okay. Well, of course the data, the processing service needs the data in order to run, of course. But it doesn't come through, I guess. What I guess does is sends the URL of that data to the WPS server and the WPS server will on its own download the data set from the data providing. That's fine. That's what I understood. But are there not problems with that? Yes. Yes. Well, I mean, when it works, it's great. When it works and it usually works, it's great. But if somebody has removed the data set since you last saw it or the network is not working. I'm really just thinking about size. When you're going over, I guess, the public internet connection where if you want to be doing stuff to save aerial imagery or satellite imagery, you've got gigabytes of transfer. Yes. Yes, that's true. Yeah. There's no real, I guess. It's unavoidable. If your processing service is different than the guy who has the data, there's no way to avoid that. But you could help the people who have the data to install a local processing service which they can connect to? We could. But by doing that, you reduce the modularity of the system. You start people now starting to install software. 
And we have enough problems just getting our customers or partners data providing services up. If they had to also have now processing services that are going to be sitting there processing for weeks at a time, that's going to be too much. But yes, I think your points are well stated. Yeah, thank you. Really fascinating presentation. You know, the architecture that you've outlined here is fairly generic in the sense that you have some process. You have data coming in. You have data going out. If you're defining data coming from different sources. The OGC has something called OpenMI, which is a kind of, I think it's an attempt to standardize this kind of architecture. And I also came across the, it's a bit of a funny organization, but it's the national rural electric co-op organization in the US which has an R&D. And they're interested particularly in running electric network models. In other words, to be able to determine, you know, fairly detailed stuff about voltages and curve flows, whatever. But it's designed to be a generic model of very similar to the architecture that you outlined here. So, I mean, I think what you've done, this is going to be the general, I mean, this seems to me, this is going to be the general architecture for how we do modeling in the future. And there's always the details of how you deal with these problems, moving data or whatever, there'll be generic ways to do that kind of thing. But I think, you know, this really is the future of how we're going to do modeling exercise, whether it's electric power modeling or the kind of modeling you're doing. Just a comment. Yeah, I agree. I'll go into it. I'm going to bring the microphone back here. Okay. Well, we're using WCS basically in its simplest form. We just grab everything and let the WPS server figure out what it needs to do with it. Because if you want to start adding bounding boxes and clipping and stuff like that, then the interface becomes much more complicated. So, yeah, we're just using it in the most basic raw form. Okay. Okay, everyone, pretty cool today. Thank you so much. Thank you.
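For reference, the "WCS in its simplest form" from that last answer boils down to one GetCoverage request per input coverage, with the WPS server left to figure out what it needs. The sketch below is illustrative only — endpoint, coverage name, bounding box and sizes are placeholders — and it also shows the kind of defensive check the error-handling discussion earlier argues for: OGC servers often report failures as an XML exception document inside a 200 response, and a client that doesn't look for it ends up showing users raw XML.

```python
# Hedged sketch: fetch a whole coverage with a WCS 1.0.0 GetCoverage key-value request,
# and check for an OGC exception report instead of blindly treating the body as a GeoTIFF.
import requests

WCS_URL = "https://example.org/wcs"  # placeholder

params = {
    "service": "WCS",
    "version": "1.0.0",
    "request": "GetCoverage",
    "coverage": "dsm",              # hypothetical coverage name
    "crs": "EPSG:4326",
    "bbox": "6.0,49.5,6.2,49.7",    # placeholder extent
    "width": 1024,
    "height": 1024,
    "format": "GeoTIFF",
}
resp = requests.get(WCS_URL, params=params)
resp.raise_for_status()

# Heuristic: misconfigured or unhappy OGC servers answer 200 OK with an XML exception report.
if b"ExceptionReport" in resp.content[:2048]:
    raise RuntimeError(f"WCS exception instead of a coverage: {resp.text[:300]}")

with open("dsm.tif", "wb") as f:
    f.write(resp.content)
```

The check is only a heuristic, but it is the difference between a readable error message and a wall of XML that a user can't do anything with.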
The integrated Geospatial Urban Energy decision Support System (iGUESS) was conceived as a way to help urban planners explore renewable energy and energy savings potentials to make cities more sustainable and self sufficient. Models that calculate solar, wind, and geothermal energy potential can be complex to build and run, so we felt we could simplify the process by creating a web-based tool that a planner could run from their browser. To maximize interoperability with existing models and data sources, we decided to build the system using existing OGC standards and protocols. iGUESS is a web-based system for connecting data, modeling, and visualisation services distributed across the Internet. Users can leverage data and processing services offered via standard OGC protocols such as WMS, WFS, WCS, CSW, and WPS. iGUESS helps users match data with models, launch model runs, monitor progress of execution, and visualize computed results. iGUESS does not store data or host computation services, but instead relies on data and modeling web services provided elsewhere in the project, by our partners, and by third parties. Developing iGUESS has given us a better understanding of the strengths and weaknesses of a distributed modeling system based on OGC services, and some of the inherent limits of these protocols. The interface allows users to interact with services in real-time, using minimal caching, so it always presents an accurate reflection of what data and modeling services are available. This design has presented us with interesting challenges related to intermittent and unpredictable availability of distributed data and process services that live beyond the bounds of the system. The primary advantages of this distributed modeling system is its modularity and flexibility. Users can run models using input datasets they (or others) may have published for different purposes. Models can be upgraded and improved by their publishers without requiring users to install new software. Finally, running models via WPS can be easier than configuring a local desktop model, and the processing is offloaded onto a computer presumably more suited to handling large, complex calculations. Many of the specific challenges we faced have been related to the limitations of the WPS protocol. It is difficult to precisely specify inputs or describe outputs, and there is no mechanism for prioritzing or terminating a running process. The lack of process control is particularly relevant to the sorts of large, processor intensive models that iGUESS was designed to run. Also, very little of the data our partners need to use is actually available online, and they have encountered a wide range of logistical and institutional barriers to providing it themselves. Lastly, we are still trying to cope with issues related to exposing computationally expensive processes to the Internet. This talk will present a technical overview of the iGUESS system, how it works, alternate approaches we considered (distributed architecture vs. traditional “desktop” approach), and the lessons we learned building it (managing complexity and the risks of oversimplification). It will also explore some of the “real world” hurdles mentioned above, and will offer some ideas and insights into the type of applications that are best suited for the WPS protocol.
10.5446/15542 (DOI)
Hello everybody. You hear me? You wanna sleep? You wanna talk? Bad time for talking. Well, here we are. This is G3M. It's an SDK for developers, designed to build mobile native map apps for any platform and for any device. You hear me? Properly? Closer. Whoa. Can't see. This is what I'm gonna tell you: why we're here, the challenges we have faced, the capabilities of our SDK, the architecture, and at the end you're gonna see it in action. So, first of all, the origin. We started over NASA World Wind, building a framework which was called glob3. We didn't focus it on mobile. But at the same time, people from the University of the Canary Islands were building a tool for 3D scenarios, also open source, which is called Capaware. And somewhere along the route we met, and we started working together, trying to do something for mobility. But we didn't like the results, so a couple of years ago we decided to throw it away and start again with a new development from scratch. And that's what we're going to show you now. The challenges were: mobility, which is now — well, it's evident, it's more than a trend. We are all in mobility, but three years ago, four years ago, it wasn't that sure that everybody was going to use it, and that it was gonna have a strong impact in many industries, like humanitarian aid or agriculture, mining, defense, all this stuff. Now it's evident. Another challenge was fragmentation. And you see there are different platforms — Apple iOS, Android, and others that are going to come in the future. And this fragmentation is about software, about hardware, and it's going to increase in the future, because there are many forks escaping from Android, and we're going to have more and more new devices in many of these environments. So when you build applications for mobility, you have to fight against this, or focus on just one single device, one single platform. Performance: we like maps, we like maps that you can pan, you can zoom, you can obtain data quickly. So it's very difficult to get the performance you want, even with the new devices. They give you more power in the device, but also a better screen, more resolution, so the power is used up by the whole device, and your map is still going a bit slowly. That's another thing we wanted to overcome. And usability, of course. We wanted to do it with a touch screen. It's not that simple. When you build applications for a PC, or web applications, you don't just resize them, you don't reduce them to be able to use them on a mobile phone or on an iPad. You have to rebuild, redesign everything, and that's what we did. These are the capabilities. We said multi-platform, but it's not exactly like that. We made an SDK to allow developers to build native applications very easily and quickly for the different platforms. Okay. In 2D, 2.5D, and 3D, and also scenario maps: from a Google Earth kind of visualization, the whole globe, to a local scenario with the detailed scale and data of a local environment. That's an example you'll see later. We can use it with any kind of data, from raster to digital elevation models, vectors, point clouds, objects, and 3D models. These data are stored on the server, and we translate them into a few formats to enhance the rendering. Developers may also use a very rich symbology, with labeling, markers, and they can use CartoCSS if they want.
It's built to be used online and offline, which is very important for people that do field work in remote areas where there's no signal. You have the same data online and offline. You can capture and save all the things you're watching on your screen to work with them when you're out of signal. You can see that better later because you're a lot of people, 3D objects and all this stuff. There are subsystems in the SDK for task management, cache management, and also for animations. We offer real time, so when a developer needs to build an application that is completely connected to the server, maybe this is the SDK, because changes in the server are immediately received and displayed on the device. A little bit of architecture. To make it stronger and more powerful, we develop in C++ and translate it to Java. So with C++ we're very close to Objective-C for iOS, and with Java for Android and HTML5. These two platforms share about 75% of the code, so it's very fast to code for both platforms for developers. They don't waste a lot of time. They can share most of the things they code for one with the other. This is an example in action. This is what you have to code for an Android app. Press symbol. This is where you can find applications in iTunes and Google Play. This is our web. This is an example of LiDAR point clouds. I think it's 3 million points. 800,000 points. A few points only. How is it moving? This is an object. Flying is a simulation of a plane, but the flight is real. It's following where it is at each moment and managing a lot of information, a lot of tiles, and a lot of information linked to the objects. I'm going to show you now how we build applications. My colleague Manolo is going to explain it. Manolo sounds very Spanish, doesn't it? This is a service built on top of the platform. It's a service for publishing applications on different stores or on the web. What you have to do — well, I'll leave it — this is the first screen, where you can create applications. Every application that you create in this screen, finally, is an application on the Apple Store or Google Play or whatever. Here is the console where you create the different scenes inside the applications. You can create applications for events or whatever. Every application can have as many scenes as you want. In this case, you have five scenes. Here, I created a new scene. You see? There are other scenes. You can set a base layer. You can set an overlay layer. What you are seeing here is the same as what you see on the mobile application. This part of the screen is exactly like the mobile. You can create a new one. The moment that you create the new scene, thanks to the real-time server done with WebSockets, everything that you do here shows up at the same moment on all the telephones. This avoids the problem of deployment, for example, on iTunes, where you have to wait 10 days, 12 days, whatever day Apple wants. You deploy the apps one time. All the data can be changed at the moment that you want. It refreshes in real time. You create a new scene. You can create, you can work. You put in the description — and you have to put in the description, because this is the menu that you find on the mobile application. Below is a viewer that is working with the application, showing what is within the scene. At this moment, I want to change the layer, the base layer. I want to change the layer. I change the OpenStreetMap layer for MapQuest, for the Open Aerial one. I change the layer. I change it. On the mobile, it's just the same thing, without deploying.
You can change other things like color. You can change data or other application properties. For example, the background color of the sky, or more things that we are planning in the future. It changes exactly at that moment. I need another screenshot because it's not the same as what's seen now. I have the new screenshot. Now, the menu: when you click on the menu, you can see the new application. The idea is that you can publish immediately — you can publish immediately the data that you need. In our case, at this moment it's not finished. It's in beta. We hope to have a good production version in a few weeks. To publish, simply, you go to this screen. Select the store where you want to publish. Click on publish. That's all. Thank you. Any question, please, in Spanish, Portuguese, or Russian? What's your license? BSD, 2-clause. Your local cache data, how is that packaged? The cache is an SQLite database. It's the only way to have the data on all platforms. It's a different implementation on each platform, but we have abstract methods to do this. It's in SQLite. We have a piece of software in the repository to translate the data to this database. The same databases are interchangeable. You can have the same data in iOS, Android, or on the web. Actually, on the web, no. Every browser has a different implementation of SQLite. We are fighting with the browsers. It's hard. You call this a platform for mobile 3D — is there any reason it doesn't exist for an ordinary desktop platform, with a browser? I didn't hear. You don't have my strong voice. You have to shout. Well, you've got WebGL — you can use it in an ordinary browser. We're using WebGL only for the browser, because on mobile it doesn't run correctly. So this library is very useful for native apps. Yes, you can use WebGL. It's native only on iOS; for Android and for the web there's a translation, and the web translation is to WebGL. Of course, the WebGL version works on Android too, but it doesn't work on iOS because Apple doesn't let WebGL work on the platform. But yes, that platform is in WebGL. The performance is very good. Is it possible to have more than one globe on the screen at the same time? More than one globe — it's more complicated on iOS, for different reasons, but it's possible too. You have to think that it's an SDK. You can do what you want. You can go from complicated applications, like the publishing service that we have built, to a very simple application with only a map with vectors, or whatever you want. You can do what you want with your code, from the most simple application to a very complex application; it depends on your development. Are there any more questions? Thanks again to...
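The offline cache that comes up in that first question is conceptually simple even if the per-platform SQLite bindings differ. The sketch below is not G3M's actual schema or tooling — just an illustration, using Python's standard-library sqlite3 module, of the basic idea: one SQLite file holding downloaded tiles and resources keyed by URL, which can be copied onto a device and read back when there is no signal.

```python
# Illustrative only -- not G3M's real cache format or schema.
import sqlite3

def open_cache(path):
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS cache ("
        " url TEXT PRIMARY KEY,"
        " content BLOB NOT NULL,"
        " fetched_at TEXT DEFAULT CURRENT_TIMESTAMP)"
    )
    return db

def put(db, url, content):
    db.execute("INSERT OR REPLACE INTO cache (url, content) VALUES (?, ?)", (url, content))
    db.commit()

def get(db, url):
    row = db.execute("SELECT content FROM cache WHERE url = ?", (url,)).fetchone()
    return row[0] if row else None

# Hypothetical tile URL; the same file could be shipped to a device for offline use.
db = open_cache("offline_tiles.sqlite")
put(db, "https://tile.example.org/10/545/374.png", b"...png bytes...")
print(get(db, "https://tile.example.org/10/545/374.png") is not None)
```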
G3M (glob3 mobile) is a new framework developed from scratch by IGO Software, using the know-how acquired developing glob3 and the first version of glob3 mobile. G3M wants to be the reference framework for developing 3D GIS mobile solutions. G3M has been developed taking mobile-only issues into consideration (performance, usability, fragmentation, etc.). In addition, using the same core we have developed an HTML5 version in order to run in standard browsers in the near future. At the moment, we have the following capabilities:
- Multiplatform: iOS, Android, HTML5 - WebGL
- Terrain support
- Efficient tile-based planet rendering
- Raster support
- Vector support (limited): geojson (bson)
- Markers
- Labelling
- 3D models: rendering, and a Blender plugin for exporting any format Blender can read
- Animations subsystem: animated change of position, color, size, etc. for 3D shapes, 3D models, etc.; animated movement of the camera
- Tasks handling subsystem: run tasks in background, periodical tasks, etc.
- Downloader subsystem: download queue with priority per request; cancelable requests
- Downloader cache (interchangeable databases using SQLite)
- Offline maps
The architecture used is one of the key features of this project. The core is developed only in C++ and works on the iOS platform. This code is translated to Java in order to be used in Android and WebGL (GWT). Using well-known object-oriented design patterns (Abstract Factory, Builder, Template Method, etc.) we were able to create an extensible core system that can be ported to new platforms with relatively easy-to-implement native implementations of a few classes. The result is the possibility of building native applications on iOS, Android and WebGL using the very same API. The library is now ready to be released under an open source license.
10.5446/15541 (DOI)
My name is Christopher Helm. I work for Esri. So please, this is my Twitter account. You can haze me all you want on Twitter. You can troll me and all the trolls. I don't care. It's okay. I used to work for a company called GUIQ. We were bought by Esri a year ago and we are sort of in the middle of leading the efforts for Esri to work more openly and things like that. So we're here and talking about all the stuff that we love to talk about. So this is also the third talk today. And I think it's kind of not on purpose, but things will accumulate throughout my talks when I give multiple talks in one day. This is my thought process to just meld them into one. So there's only probably a few of you that actually have seen all three. And so maybe inside jokes and things like that you won't get. But the goal of this talk is to really, you know, ultimately I just wanted to piss people off and make people kind of look at me like I'm either an ignorant or arrogant or completely misinformed JavaScript developer. And that's probably the truth in that I think the world I live in is a bubble. I definitely eat, sleep and breathe JavaScript all the time. And I think it's the answer to everybody's problem and the cure to all the ails everybody. So yeah, so I titled this talk Geospatial JavaScript because the other talk, titles I was thinking of was, well, or Chris Helms Love Affair with JavaScript and the Internet. Or as you might be here for a title called GIS is Not Dead, It's Coming for You and It's Been Drinking JavaScript. And the title of this sort of came when I was thinking about zombies one day. And I actually submitted this talk to have myself as a presenter as well as zombie Christopher Helm. And I had this big zombie theme in mind because there's a friend of mine named Sophia Parafina that is active in the open community who is famous for kind of saying that GIS is dead. And I think that's sort of like this idea that the web has killed GIS. And if we try to take these concepts of GIS to the web, it won't work. And it's not something that we want to do. And I think she's partially right, but this is kind of thinking along the lines of it's not dead, damn it. I love GIS. I'm trained in GIS. It's just changing and it needs to adapt to the evolution of the web. So my goal today is really geo for all. I put this in there this morning because it's the theme of the conference and what does geo for all mean in my sense. And geo for all really, to me, means geo for the masses, GIS for the masses, GIS for everybody. And to me, that happens on the internet. And so my argument today is that the web is central to what we do. It's central to our lives. It's central to our everyday sort of world. I mean, if you don't believe me, look at how this conference has changed in five years. There's Twitter and there's things that were constantly sort of on the web. And so the web is central to my life and JavaScript is essential to the web. And so that's sort of my argument that I'm going to step through a little bit. And I'm probably going to make statements that you don't agree with or maybe I'm just out there. And I think I'll probably admit to them while I'm going along. So I'm giving you lots of fodder to either talk crap about me on Twitter or just storm out. But so really, why is JavaScript missing from this conference? And I think I asked myself this and it feels to me like there's not enough JavaScript. And maybe I'm, again, this is me saying, what the hell? Where is the JavaScript? 
Where's the innovation in the geo space and the JavaScript? And why aren't they here doing this? Why aren't we talking about the things that are happening in the JavaScript environment at this conference? And I will admit today there's been more JavaScript talks than there have been the previous two days. And so I'm just misinformed on this. And that the friend Steven Ottens who's over in Chippy's talk next door on cycle geography, he's talking about real time web mapping today. There's a talk on D3 today. So these are the things I haven't seen until today and thinking, man, wait a minute, like some of the things I'm going to say in my talk are totally off the base because they are here. But before I start digging into the sort of the concepts of how I see the web has evolved and how JavaScript is central to that, I thought it'd be worth sort of lightening the mood a little bit and talking about this great video here called Watt. And in this talk, he mentions sort of the funny bits and pieces of JavaScript and it's really fun oddities. So these are a few that he points out. And that does anyone know what an array plus an array equals in JavaScript? So it's an empty string. And so in his talk, I'm totally stealing this. This is all him. He's great. He's hilarious. He basically flashes up a big screen of Watt. And then so what does an array plus an object equal? What's that? No, just an object, a string object. There's a string, literally. I'm missing the quotes here. It's not just an object, it's a string object. And then, okay, so what's this equal? Right? Ivan is right. Zero. What the hell? So as I sit here and say, man, I think JavaScript is awesome. I love it so much. Keep this in mind that, I mean, everything is taken with a grain of salt that it's really can be an absurd language sometimes. Oh, yeah. And so there's more. So what's an object plus an object equal? Right? Remember, object plus array equal to zero. So this equals, of course, you would guess, nil, of course. So you get around those things when you're falling in love with JavaScript. And it takes being able, it's like any good marriage, right? You learn to compromise. You deal with the faults of whatever you're working with. But so I thought that's funny. I had to throw this in here. I really had to fit in just to sort of like keep people grounded with the fact that it's kind of a crazy language. But the web is a lot of JavaScript. And when I started thinking about this talk, I started thinking about the web and I think about how the web has evolved. And, you know, with the web evolving, our perception of JavaScript also has evolved. And there's a famous quote, I'm really bad at actually attributing quotes. But it was, you know, 10 years ago, if you were a JavaScript developer, you wouldn't be taken seriously. And nowadays, if you don't know JavaScript, you're not really taken seriously. And I honestly believe that. But so much of how we work, in the geospatial industry, is visualization, sharing data. And that requires this at least some knowledge of JavaScript or something that compiles to JavaScript, I guess you could say. So, I mean, in the course of web evolution, you have new tech is born every day, right? There's new things growing. It's this rapidly evolving place. And that's why it's fascinating, right? It's great because you can take six months off and come back and something completely new has changed. 
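For reference, the coercion oddities from the "Wat" examples above look like this when typed at a browser console; the last two results depend on the leading braces being parsed as an empty block in statement position.

```javascript
// Typed directly at a browser console (statement position matters for the last two,
// where the leading {} is parsed as an empty block rather than an object literal):
[] + []   // -> ""                  both arrays coerce to empty strings
[] + {}   // -> "[object Object]"   "" + String({})
{} + []   // -> 0                   what's left is +[], i.e. Number("")
{} + {}   // -> NaN                 what's left is +{}, i.e. Number("[object Object]")
```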
And there's new processing rules, new ways of thinking about how we process video and these industries that are just blazingly fast moving ahead and rethinking the way that they've traditionally done it because of new advancements. Then there's these new data formats. I mean, this is huge stuff in the past five years, like, you know, GeoJSON, what CardiDb guys are talking about with Torq with these time cubes of data. This is this growth that really just comes through solving a problem they had. So, really, what web evolution means is that browsers get fatter, right? This is really what happens in web evolution. But it's a good thing because we have these big desktops with 16 gigabytes of, you know, I mean, just say four cores and 16 gig of RAM, it's a beast. It's awesome. The fatter it gets, the more access to the actual machine it gets, the better. Things like WebGIL, or WebGIL, right? Access to the GPU. This is the pipe into this massive powerhouse that we need. We're using this for visualization, but we're not doing geo processing on it. But why is that? This is what I'm asking myself. And there's few and far between. They like some McGursky, Mike McGursky from Stain and doing a lot of work with WebGIL and exploring ways to render vector data and process it and analyze it on the fly. Things that it's really, really good at. If you look at the gaming industry, I mean, they're so far ahead of us in terms of the things they do and attempt to do that we're still stuck with this sort of relationship of server databases and move it over and, oh, we don't want to buffer things on the client. Like, it's absurd. So then we have things like WebWorkers. It also makes it sort of like, holy cow, WebWorkers are great. I mean, they break the chain of processing the event loop in the browser. We can do big loops in WebWorkers. Throw things back and forth, spawn multiple WebWorkers off, do things in the background, asynchronous development. These things are not mentioned at this conference. Who's talking about WebWorkers? I mean, I think people know what WebWorkers are. I'm not talking about what they are. But no one else is really exploring this concept of projections in WebWorkers or, you know, big, heavy analytics. And again, maybe I'm off base. I mean, this is the reason, come talk to me and punch me in the face or tell me. I'm an idiot because I didn't think of this. But I spend my day thinking about these things and how I want to do these and things like WebSockets, right? WebSockets aren't being talked about. Stephen Autan's talked about WebSockets earlier. So I'm totally wrong. WebRTC. Who's mentioned WebRTC this week? Who even knows what WebRTC is? Right? Right, one guy. This is huge. I mean, this is in the latest versions of Chrome and Firefox, real time communication layers for like doing things like Google Hangouts, right? For like video communication. That's a massive amount of data, way more data being transferred in WebRTC applications than we would probably want to be doing in our vector mapping application, right? Those people have solved these problems. I think this is a call to action for us to explore these things and drive it forward. And maybe next year at Foss4G we have WebRTC, like bleeding, bleeding, bleeding edge technology being explored. So then when the Web sort of becomes an increasingly powerful tool that we have for exploring things, then usage goes up. It becomes part of our lives. I'm addicted to the Internet, right? 
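A minimal sketch of the kind of off-main-thread geoprocessing being called for here, using nothing but the standard Web Worker API; the worker file name and the choice of computation are assumptions, not anyone's actual project.

```javascript
// ---- main.js -------------------------------------------------------------
var worker = new Worker('geo-worker.js');        // file name is an assumption
worker.onmessage = function (e) {
  console.log('path length: ' + e.data.km.toFixed(1) + ' km');   // UI thread never blocked
};
worker.postMessage({ points: [[144.96, -37.81], [151.21, -33.87], [153.03, -27.47]] });

// ---- geo-worker.js -------------------------------------------------------
self.onmessage = function (e) {
  var pts = e.data.points, km = 0;
  for (var i = 1; i < pts.length; i++) {         // the heavy loop lives off the main thread
    km += haversineKm(pts[i - 1], pts[i]);
  }
  self.postMessage({ km: km });
};

function haversineKm(a, b) {                     // great-circle distance, [lng, lat] in degrees
  var R = 6371, rad = Math.PI / 180;
  var dLat = (b[1] - a[1]) * rad, dLon = (b[0] - a[0]) * rad;
  var h = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(a[1] * rad) * Math.cos(b[1] * rad) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(h));
}
```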
Like when I was in London, so like it's, what did Festival of the Nerd say? Like you feel safe when you're on the Internet. It's totally true. I'm super geek, right? I need the Internet to feel this weird feeling. But the Web becomes life when I'm dancing between Wi-Fi hotspots. It's like I lose myself. How much time do I have? How much is left? Oh, all right. So we're going to blaze through this. Give me 15. All right. We don't need questions. There will be no questions. Okay, so. Right. You can grab me. We're just going to go a whole way through, even though it's hot in here. That's why I wanted to move to the bigger room. So on the Web, we use it to disseminate ideas. If you come from an academic background like myself, you go to conferences like this to disseminate your publications and share things and you get paid to do it. And it's about networking and sharing. But that's what the Internet is to GIS right now. It's visualization. It's sharing data and disseminating ideas. So how do GIS and the Web sort of fit? Where does GIS belong on the Web? Is it just going to be this sort of, is the door, I think the door is just swung open. Is it just going to continue to be this sort of like, well, I'm going to do all my processing on the back end and have this big round trip and think about ways to just dance around that issue. But really, no. I mean, GIS has to evolve. It has to change. The questions we ask and the way we ask them has to evolve with that evolution of the Web. And I'm not just meaning, you know, making maps with libraries. It's like asking geospatial questions. And when I go back to this idea for geo for all, it's not just making a map of a visualization. It's exposing things like clips and buffers and things like that. Geo Web, GIS on the Web, in a new way that isn't just bringing a user interface to the Web that's, you know, QGIS on the Web, to me, it's absurd. Right? And this is where people start punching me because talking crap, not QGIS. Don't do that. No. I did that in my last talk and got hammered for it. So in this evolved world, again, my argument, JavaScript is so central. It's everywhere we are. We have things like leaflet and open layers and D3. I mean, diving into D3, I don't think Paul's here. Paul's not here. I was hoping you'd be here. He was in my last talk. So D3 is data-driven documents. Until today, no one mentioned what D3 was. My colleagues at work rip on me all the time for being total D3 fanboy. The self-admitted complete fanboy. But what D3 is, is data-driven documents, but it's really just JavaScript glue around taking advantage of existing Web standards that are proven to be already existing on the Web, like SVG, CSS, and HTML. That's all it really is. It's just manipulating data to inform and build documents around those technologies. So this is a quote from Paul Ramsey earlier this year. It was a tweet, I don't know when, the June 13th. And it was like, D3 is the new flash there. I said it. And I think this really just shows that Paul's a little misinformed. And I think he's, you see, Paul's super boy around here. He's Superman. He's like, oh shit. Your talk is packed. We'll run it again. It's Paul. It's a big deal. But D3 is the new flash. I mean, flash to me is, again, I'm a JavaScript guy, so I'm going to bash flash. What that implies is this idea that it has somewhere to go. 
I mean, and I think really what he's getting down to is this follow-up from someone else who replied to him saying, look at my very beautiful screen, filling one off, data visualization, new instrument, same two. And it's totally true, but it's not just true of D3. Again, it's misinformed. It's not really a statement. You say, well, D3 is exactly like flash because you build this crappy website. You can build a crappy website in anything. And JavaScript can be spaghetti regardless of whatever library and tool chain. Who works with jQuery? It's the most spaghetti-cone language or framework in the world. So really, when I think about D3, I start thinking about freedom for web cartography, freedom to make beautiful visualizations and do it however you want to do it. So I felt like I had to throw D3 into this talk because Jason Davies didn't show up for his keynote the other day. And that totally bummed me out because I was like, I tripped to London, I tripped to England, was like, Jason Davies, going to shake his hand and hang out Jason Davies and fly me here as well. It's like this guy's a rock star. He's totally awesome and it was super bum that he's not here. So he's doing a lot of the projections work behind D3. You see this really awesome stuff. This is all kind of grainy on here, but it's a map projection transition stuff. And this is always like the ooh and aah of the people who don't understand. Yeah. Thank you. Thank you. Thank you. It changes projections on the fly. But this is great because it shows us that one, we don't have to be using Webmercator. Webmercator really sucks. And if you're ever doing something that moves around to the poles, I mean, come on. You can't be serious. So he's got all the work for D3 supports all these geo projections. It's awesome. You can make really ugly maps or really cool maps. But then there's, so now I'm starting to get into like more of the interactive part of the talk. There's less me just standing here spouting off. So now, sort of like to prove my point, I'm just going through examples of things I think are really cool. And so I think, oh man, that's now loading as an H. You're holding. Wait. I know it's a broken link though. Something in my code is, let me see if I can, yeah, yeah, yeah, sweet. That also has a thing called memory. History. So this is really cool work. This is stuff that me, myself, at Esri, I'm actually working on a lot of adaptive composite projection stuff. And this isn't my work at all. None of this actually is my work in this talk, some of it is, but the cool thing about this is it's done as sort of copying an algorithm from a guy named Bernard Jenny who works at Oregon State. He did a lot of work with composite projections stuff. He probably seems to be very similar to this. I'll sit down so you can see it. So as I zoom in, the projections just morphs and changes. It becomes more appropriate for what I want to see. This is the thing I want to see on the client done. I want to see things in appropriate projections and alburs and things that make sense for where you are on the earth. I don't want to see web mercator maps anymore. Web mercator makes sense for when you're zoomed in, right? There's bugs and stuff. All right. Let's move on. So then when I think about the web, I think about the rise of the API. I think that data has become pervasive. Data has become dynamic. And we need to rethink how we think about things on the web because of this fact. Everyone knows what an API is. People interact with the APIs. 
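The projection demos mentioned above boil down to a few lines with D3 v3's geo API (the version current at the time of the talk). A sketch, where the GeoJSON file name and the `<svg>` element on the page are assumptions:

```javascript
// The GeoJSON file name and the <svg> element (e.g. <svg width="960" height="600">) are
// assumptions; everything else is the stock D3 v3 geo API.
var projection = d3.geo.albersUsa()     // an equal-area composite -- not Web Mercator
    .scale(1000)
    .translate([480, 300]);

var path = d3.geo.path().projection(projection);  // turns GeoJSON into SVG path strings

d3.json('counties.geojson', function (error, fc) {
  if (error) return console.error(error);
  d3.select('svg').selectAll('path')
      .data(fc.features)                // the data join: one <path> per county feature
    .enter().append('path')
      .attr('d', path);
});
```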
It didn't exist five, ten years ago. So when the API sort of come out, we start thinking about data. We start also rethinking and tooling our idea of what it means to be a client and what it means to be a server. We start to blur that line. And that's what WebRTC does. That's what WebSockets do. So things like rest hooks do. They make it like one big happy family. And so how does GIS adapt to that? How do we start rethinking our problems? And the answer is really slowly. What we're seeing here is that we don't think about these. We're not on the forefront of how these new technologies work. We're not changing our questions to adapt to how data is transferred on the web. We're not blurring that line enough like we need to be. We need to be marrying post-GIS into clients and things like that. You're thinking about these things. Isn't it awesome, yeah? It's great. That's us. The kid that's us. The cat is the web. I saw him. I was like, yeah, that's it. So then think about if you're growing up today and you're coming out of grad school, there's probably undergrad students here, grad school students. That's what I learned at GEO, undergrad and grad. And if I was growing up today, would I learn Mapsover like I did? Would I be teaching Mapsover in my class? Would I be teaching Leaflet? Or would I be teaching D3? These are changes. I don't think. And this is another misinformed statement probably because my friend Steve is staying in the back. It's like, no, people use Mapsover all the time. And again, I think Mapsover is great. It totally is great. And GEO server is great. But am I going to be teaching class on web and GIS and be teaching that anymore? I don't know. So it's a mystery question. Think about if you were growing up. But then reality check. I'm not talking about just getting rid of things like post-GIS because it's totally hot. It's my favorite tool ever. It's what got me off Esri software and Esri guy. But I am now. But when I was in grad school, I learned post-GIS. And the first time I wrote an SQL statement, it was like my mind was blown. It's awesome. I love it. I now work for Esri, but I think it's great still. Two minutes. All right. So what's missing from this conference? So I think Topo JSON is missing from this conference. If you want to know about Topo JSON, that's a serious problem. I think this conference should have been, there should be 10 talks on Topo JSON. Maybe one would do. But no GIS. Did anyone talk about no GIS this week? Okay. Steve talked about no GIS. Sure. Okay. Several people talked. So I'm just misinformed. I was only able to make it to every talk. So other things we can look at, like learning from no GIS, package management, unbelievable package management and tools that come out of this really fast-moving JavaScript environment called NPM, Grunt, Bauer, Yellman, like scaffolding tools that make JavaScript awesome. So in the end, where's my Archive UJS? That's what I want. I want to go to client-side geospatial analysis. I want to be answering the question not about how we transfer results up to the client, but how do we persist client-based analysis on the server? How do we push things back? That becomes the new problems we start to solve. A few people two years ago had Fossil or Gene in Denver. I was sitting around like Tippi and Steven Ottens and we want to write our new JS. So I put on a conference called Geo. We're going to do it in San Francisco next year. Yesterday I was going to be in Denver. 
Today I was going to be in San Francisco because I've been overruled. It's the JavaScript and geospatial festival of love. It's total get down with each other with JavaScript. We love it. I brought up the idea of code love for people on GitHub in my last talk. So this is total code love. This is Rocky and Apollo hugging. I got to show these things up. This thing called Walkshed. I'm a big fan of this guy in Audible. Walkshed is this beautiful client-side raster-based analysis for computing cost distance pads on a client. You basically build a cost grid and compute distances across that. Absolutely love it. He showed this off at JS Geo last year and it was totally blew my mind. I don't think anybody in the audience actually got why it was so great. So I'll go back. Then there's Shapefile.js. This is the guy who was mentioning the last talk. So Shapefile.js is rendering Shapefiles on the client. It doesn't get more GIS than that. But doing this on the client is great. You're starting to see these things but they're not being talked about at Phos4G which really upsets me. There's JSTS. Has anyone ever heard of JSTS? Good. There's people that know. There's people that are aware of how awesome JSTS is. Basically you can go to this GitHub repo. This is JavaScript topology suite. The only problem with it is that it's literally a port of JSTS and Geos. It could not be any more exactly a port of those two projects. The problem with that is those two projects are great but in JavaScript then that makes really, really obfuscated crazy code. It's totally confusing. It's really hard to extend. The barrier to entry on that project is really high and really annoying. So I wrote something called Shapely.js. Big fail. Doesn't work. It does a few things. It buffers. That's it. And it's a messed up demo. I didn't clean them up. I don't know why the states are in there but I can buffer. Really, really not a good project that's not being supported because we're, well, you know, just go back. We've got a lot of time. But just bear with me very, very quickly. In my last talk I introduced something called EsriCoop. Coop is GeoJSON as feature services which in EsriLand is like, you know, what we do. And so what I want to show with this is in my last talk again, these meld together a little bit. GitHub started releasing this idea of Geospatial GitHub. And they do things like cluster automatically for points over 750 points. So you can zoom in. Oh, sweet. That's awesome. I think clients are clustering. Sweet. This is great. I have some ideas about service side sharing automatic persisted clusters. But this is awesome because it's for the masses. But it's a really crappy way to view a dataset. And I think if we look at something like Terraformer which my colleagues in Portland wrote, it's a client side and node server based JavaScript implementation of JSON format parsing as well as spatial indexing, storing data and doing simple geospatial contains. I think what's happening is we're turning shapely JS into Terraformer, bringing Terraformer to have more things like actual operations like buffer and clip and things like that. So now I have, oh, geo hugs. That was the end. But I want to show this first. So I did this thing. I'll come over here. Hold on. Just shrink it down a little bit so you can see it. Well, that's fine. So this is, this is code I wrote last week just to have something new to show during this talk. And I thought it was pretty cool. It was a user coop. 
It pulls in dynamic data from, live data from GitHub and aggregates it on the client with Terraformer. So what's happening here is it pulls in turboJS and render counties. These are vector counties. They're responsive. If I change it, they're sort of slow. But if I really shrink it down, it will start to shrink. And that's just responsive. It's just recalculating all that. It's not optimized. It's just sort of an example. But so one thing I thought would be cool is I'll do this like sample data sets and say, oh, I can aggregate points. This came down there cached a little bit. So I have some improvements just for demo sake. But they do the aggregation against an archery index of counties in the US. So on the client side, we take this turboJSON, we build it, we shove it into an archery index, and then basically render it. And then as points come down, we just touch each point and do a contains on that and then do an actual intersection just to verify. So we do skiers across the US. I'm a big skier. So I'm going to do Dunkin' Donuts. And I'm slowing it down a little bit, and it's actually still really fast. We do adaptive scaling here where, yeah, I'll explain that later. Then I call it a rain. Yeah, I think then, oh, so yeah, then yesterday I got this idea about doing a WebSocket. So I'm sitting here spouting off, oh, we need to be using sockets and RTC and things like that. So I have this, I have all these, I do a lot of streaming with WebSockets and stuff like that. So I thought it would be cool if we do, you can barely see this, I guess. So we could do a live WebSocket. This is, I think, like UnitedFlights over 24 hours or something. Just a loop. Like it's a really big data set that just you tap into at any point and sit there and have like test data to mess with like streaming. And so this is an example of just, this just persists and goes on. And am I over? All right. That's it. Thanks. It's really hot in here.
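The client-side aggregation demo described above — TopoJSON counties decoded in the browser, then streamed points bucketed into counties — reduces to something like the sketch below. The actual demo uses Terraformer's R-tree; here a per-county bounding box stands in for the spatial index, only simple Polygon rings are handled, the file name is made up, and the topojson client library is assumed.

```javascript
// topology: the parsed TopoJSON file, e.g. loaded with d3.json('us-counties.topo.json', ...)
var counties = topojson.feature(topology, topology.objects.counties).features;

// Precompute a bounding box per county so most point tests are a cheap rectangle check.
counties.forEach(function (c) {
  var b = [Infinity, Infinity, -Infinity, -Infinity];
  c.geometry.coordinates[0].forEach(function (p) {   // outer ring only; simple Polygons assumed
    b[0] = Math.min(b[0], p[0]); b[1] = Math.min(b[1], p[1]);
    b[2] = Math.max(b[2], p[0]); b[3] = Math.max(b[3], p[1]);
  });
  c.bbox = b;
  c.count = 0;
});

// Standard ray-casting point-in-ring test.
function inRing(pt, ring) {
  var x = pt[0], y = pt[1], inside = false;
  for (var i = 0, j = ring.length - 1; i < ring.length; j = i++) {
    var xi = ring[i][0], yi = ring[i][1], xj = ring[j][0], yj = ring[j][1];
    if (((yi > y) !== (yj > y)) && (x < (xj - xi) * (y - yi) / (yj - yi) + xi)) {
      inside = !inside;
    }
  }
  return inside;
}

// Called for every [lng, lat] point that arrives from the GeoJSON feed or WebSocket.
function aggregate(pt) {
  for (var i = 0; i < counties.length; i++) {
    var c = counties[i], b = c.bbox;
    if (pt[0] < b[0] || pt[0] > b[2] || pt[1] < b[1] || pt[1] > b[3]) continue;
    if (inRing(pt, c.geometry.coordinates[0])) { c.count++; break; }
  }
}
```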
This talk will discuss several super kick-ass ways that JavaScript and the web have re-shaped GIS and are changing how we visualize, analyze and share geospatial data with each other and the world. GIS is dead? No, it's not, and it's coming to find you and spatially kick your ass with a big bag of JavaScript. The world changes fast (hello, Internet). Yet our industry (map making in one form or another) is stuck, and has generally shown itself to be slow to react to new ideas and paradigms that grow rapidly in other spaces. But there is still hope! GIS is coming back, and it's being re-tooled with lots of shiny new software and geo-weapons. It's going to make an assault on all of our previous notions of its old self. Of course this new and shiny GIS resembles its former self in many ways, but it's also full of new ideas about how we experience maps and data on the web. As we witness a massive resurgence in JavaScript (hello D3 & node.js), and more emphasis placed on the web in general, we see that there are actually still large holes that should be filled in the geospatial stack. New waves of JavaScript developers have, and will continue to, fill these gaps.
10.5446/15539 (DOI)
Well, this is my presentation. I mean, so I'm going to talk to you about performance of J.O. Jason in the browser. And but you know, there are more important topics in life. And so actually, I'm going to talk to you about death, famine, war, taxation. No, this is a joke. I think I just, I will never get the soulcast award ever, but yeah. Okay, so a bit of background. I work at the University of Melbourne at the O-ring project. And we are building this system and part of the e-resort group, which means that we do e-science. So basically, it's an IT applied to science. And we're trying to build sort of a laboratory in a browser for urban researchers, which is pretty vague because there's no such thing as a urban researcher. But the idea is that people like epidemiologists, urban planners, traffic analysts, people share an interest in the same set of data and tools because all of them are working on the same space, like yeah, urban space, built-up areas. So, and we are building a software to do exactly this. So to provide them with data collected from various sources across Australia and tools like R modules and Java modules and whatever. So they combine tools and data in the browser, they can upload data and everything should is supposed to work smoothly. Now, me personally, I had this issue because it was decided at the beginning of the project to use GeoJSON vector graphics on the client. Why? To give the best possible user experience. So you can change color of the mouse on the fly, you can use brushing, you can tool tips, you can highlights, polygons, stuff. Good. Problem is that you may end up with something like this. So there are 2,200 polygons across Australia for this particular statistical areas, which is just a subdivision of Australia into this homogeneous statistical areas level 2. As you may see, Australia is a big country, but all the population is here, which means that you can have polygons like this, pretty big, but pretty simple. So just a few points. At the same time, you have very small polygons, very detailed polygons, more ones. Of course, you need to have all those polygons sent to the browser in an efficient manner. So this is the problem statement. Now, being a statistician by training, I want you to build a model of this. So let's start thinking about what the factors affecting performance could be. Right. The factors affecting performance that I could control. So anywhere, of course, the size of response in bytes, which I suppose it was one of the most important things, but it's one of the factors. The server, DBMS performance, of course. The protocol used, actually, we were forced to use HTTPS for some reasons, but that really would like to understand what is the, what I was losing in terms of performance, using HTTPS as opposed to plain HTTP. Now, this sounds like a statistical model, but this is just for the size. Now, if you take the size itself, it can be thought as a combination of these factors. Compression, because that's what we tried to compress to G-zip, compress the output. Decorate the position, which I tried that as well, just to reduce, so instead of adding we're dealing with coordinate geography coordinates. So just reduce the number of digits after this number of points and see what happens. The format of response, Geo-json and top-json, there are a number of features and the number of points. Because you can have few polygons, but very detailed or very many polygons, which not much detail in them. So basically, it's just a number of polygons. 
Oh, by the way, we're there only with polygons. So with lines, points, we didn't really try, but I think polygons are the most complex feature that you can send to a browser. So little bit more about factors. Of course, top-json is supposed to be some faster because it reduces the size of the output. HTTPS is a factor because, okay, we'll see it later, but I didn't think that it was a factor affecting. Because after the end checking, usually the connection is kept alive because HTTP11, so it should be fast enough. That's what I my understanding. Okay, average of both sizes tends to 100 kilobytes just to give you an idea. We tested two data DBMSs, CouchDB and PostGIS, whatever. Of course, the number of points, this can be reduced as well. So it's not a given because you can use generalization just to reduce, the complexity of a polygon. How? Okay, I think that you should be familiar with that. So this is the variable Douglas-Poker algorithm. Basically, you simplify a line, and of course, polygon in a sense is a set of lines, by just drawing connected successive points. Vertices along the line and setting a threshold. If that is bit here from this segment to this point is less than a threshold, this point gets deleted. So this is a way to simplify two drop points without altering too much the shape of the polygon. Now, that's fine if you have one polygon. If we had polygons which are contiguous to each other, then the generalization in one polygon may be different from the polygon, the generalization of the polygon adjacent to it. So you will end up having gaps or overlappings. You don't want to have that. So that's why we use the SESimplify Preserved Topology function of PostGIS. Of course, the old special pre-processing was that in PostGIS. Well, nothing topoges adjacent. I know many of you are familiar with it, but basically is geogescent using topology, as the name may suggest. So you can define a polygon, because in topogescent, every polygon is defined by itself, like an island. Which means that when you have two adjacent polygons, you are actually replicating data, replicating arcs. So if you take another view as a collection of arcs, you can share the same arc between adjacent polygons. Like in this case, you define a polygon as a collection of arcs. There is a vector of arcs, so you can have another polygon and you can reuse the same arc. So made the second polygon point to say, this polygon, this arc here, is reduced size. So generalization, we chose, for no particular reason, two level of generalization based on the degrees, because we had all geographical coordinates. We should translate it into one kilometer, roughly in five kilometers, at the highest level of generalization. Then we had two more details levels, but I didn't use those data for these experiment. Now, the test procedure we used, we had one interim at our university, which patiently simulated what the user does. So with Selenium, you record us like 200-pound zoom operations. Then that was duplicated in order to have roughly 1,000 actions. Then those 1,000 actions, every action being a pound or a zoom or something, was played back using Selenium. It was played back using different combinational factors, different database, compression yes, compression no. So that we ended up with about 17,000 different observations. Of course, we built open layers too, as more front end with open layers too, and as more back end with no JS, connected to CouchDB and Boschies. 
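The back end just described (Node.js in front of PostGIS) issues queries along these lines. A sketch using node-postgres, combining the topology-preserving simplification and the coordinate-precision reduction discussed above; the connection string and table/column names are invented, while the PostGIS functions are standard.

```javascript
var pg = require('pg');    // node-postgres, as used for the test back end

function fetchSimplified(bbox, tolerance, digits, done) {
  var sql =
    'SELECT sa2_code, ' +
    '       ST_AsGeoJSON(ST_SimplifyPreserveTopology(geom, $5::float8), $6::int) AS geometry ' +
    'FROM   sa2_boundaries ' +
    'WHERE  geom && ST_MakeEnvelope($1::float8, $2::float8, $3::float8, $4::float8, 4326)';
  pg.connect('postgres://aurin@localhost/aurin', function (err, client, release) {
    if (err) return done(err);
    client.query(sql, bbox.concat([tolerance, digits]), function (err, result) {
      release();                               // return the pooled connection
      done(err, result && result.rows);
    });
  });
}

// e.g. the coarse setting from the talk: ~0.01 degree tolerance, 4 decimal digits
fetchSimplified([144.5, -38.2, 145.5, -37.5], 0.01, 4, function (err, rows) {
  if (err) throw err;
  console.log(rows.length + ' simplified geometries returned');
});
```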
So in order to reduce the number as much as possible, the number of variability in our observations, we run this test nighttime or during weekends. We had a dedicated VM, we Windows VM, we will Firefox on it. We reduce the bandwidth to a megabit, just to test your listing environment. Of course, there was no caching on the browser, because every time my little obligation service sends something to the client, it said that there was no caching, so the headers. So no caching was allowed, neither on the client, neither on the server. Yeah, that's it. So we tried to reduce the noise to a bare minimum. Still, we had a bit of weird results. So different times for the same operation, weird. So I did a little bit of cleaning. So actually, I noticed that the variability was variance was much higher when there were more than 400 geometries returns. So I did a little cleaning. I dropped 14 percent of points, because otherwise I wouldn't be able to do proper modeling. Despite these results were, so this is time, the frequency. Of time, you see, there's a peak here in the zero-three seconds or something. But there is still a long queue, which I didn't expect to be honest. Size, yeah, it's not exactly. That's supposed to be expected, because the density of the geometry is different from one part of the straight up to the other. Number of geometries, yeah, same as this. These are a measure of throughput. Geometries per second, which is what we are interested in finally, because we just want to put as many polygons as possible in the shortest amount of time. This is roughly Gaussian, which is hertaining. Now, first and foremost, we wanted to use top adjacent, but it was not supported by the layers too. By the way, now it does, but we did this work in a few months ago. So I will use the model to give you an estimate on how much top adjacent could help produce improved performance using the statistical model which we developed. But no real data on it. Well, I shall rewrite it using either D3 or open layers 3, I think. Okay. So yeah, database factor. First, we ruled out use of couch DB. Actually, we use couch DB and we're pretty happy with that, but not for this kind of stuff because the current implementation of geo couch is slower than poskies. How much slower? Well, five bit I would say from 50 percent to 150 percent. This was done just using bounding box queries. On the same data loaded in poskies and in couch DB. Yeah, we did a little bit of testing, but so for now on, we focused only on poskies because we found out that geo couch was not yet up to speed. Actually, I tried a few things to make geo couch work faster. So I played with list functions, I tried different views and different type of views, the story in the geometry in different way, but didn't work out. Yeah, geo-jason versus top adjacent. What I did is just to grab some data in geo-jason, then to convert to top adjacent using a common line utility. I found out that consistently, at least for our data. Basically, the size reduction was dramatic. So it was reduced to 30 percent of the original size. So if we had some polygons, which were say 100 kilobytes, then you may expect that to be reduced to 30 kilobytes. Which is pretty good. So for the statisticians among you, this is a standardized quantile and you see that the little experiment and they are roughly Gaussian. So I'm pretty happy with these results. So now size model. So these are the way statisticians are modeled the word. So I presume that size was influenced by precision. 
Precision means the number of tickets after the decimal point. We tried with four and 15. 15 is a four precision, four is a reduced one, but some zoom levels is still good enough because the user won't notice the difference. Precision generalization, remember the introduction of number of points, the polygon plus E, is it whatever I haven't considered yet. So it's supposed to be a white noise. So first modeling the size and then we'll use a size to model performance with two models. Now, with a little bit of a, oh yeah, then I use size per geometry because that was more useful for permitting performance. So I'm not actually modeling size but size per geometry. Because size of course it depends on the number of geometry. So using the analysis of variance, what you get is this effect, this is the mean effect, and so this is an effect which is the same throughout all the conventional factors. Then if you have precision of four, you add pizzenes to 422. If you have generalization of 0.01, then you add to 278. 422, this one. So you have basically three things to add, and in order to have the expected, the predicted value for that particular combination of factors like this. What is the expected size of a hundred geometries with a precision of four, a digitalization level of 0.01? It is a hundred geometries per multiplied by, so as I said before, so roughly 174 kilobytes. What if I want to have a precision of 15? Well, we get the biggest size of course, actually a 45 percent increase. So it's a predictive model. Yeah. Performance model almost done. Performance model is geometry per second. So it is based on factors now of course size per geometry and protocol which is HDB versus HDBs and the compression. Of course, what noise? Now, the results are interesting, but this hurt anything as well, because we found out that I use a linear model for this, not in the analysis of variance actually, but the compression is not relevant. It's far too high, that's the number over there. The protocol plays a part, and of course the size per geometry plays a part. So basically, these are predicted values, the green line HDBs and the blue line HDBs, which means that performance in geometries per second throughput decreases when the size per geometry is increased. Okay. That's obvious. HDBs plays a part in it, which was kind of surprised me because I didn't expect that. So we are losing something when using HDBs, we as our project, and we are losing this bit of throughput. So now I can quantify that. I can say look, we decided to use HDBs fine, but remember that we are using this much, with this much in terms of performance. Another thing that this hurt me, is that you may notice there is a lot of variability variance. You see here, points are very much dispersed. So I don't know what happened to be honest. I tried to get all the factors in there, but there is still something which I haven't considered. Could be net or latency, could be the client. I tried to get rid of all possible external factors to have a controlled environment. But still, because you may notice, the R-square is just 0.17 for this model, which is pretty low. So performance model, so these are the predictions of this model. Basically, I'm using the first second for the sets for the geometry, then I use it to compute the throughput, with the parameters. 
So if I have a generalized asia 005, the forward-divs position, I will get for a 96 geometries, which is the average response size in thermogeometry of polygons, which we have from our data, you will get the time of 0.3 seconds. If you use another combination of factors, like 001 and 15 degrees precision, then you will get an eight times worse performance, 2.6 seconds. This gives you an idea how these important destructors are. Now, last slide. Particularly the impact of tau-vejson, same model as before. The only thing that decides the geometry, we know it is reduced by 70 percent. So I introduced the underlying factor there, and what I get is that it will roughly double the performance. So by using tau-vejson instead of j-vejson, I'm expecting our system to be double as fast. This is the prediction, I hope that it will turn out to be true. So let's just slide. Okay. Generization is a positive impact, precision as well, protocol plays a part despite my first thought. Compression has no impact. So it's pointless to jizz it things. Then there is of course a positive impact because it's relevant because posgis is way faster than it should be, or it's the current version of how it should be, and tau-vejson is expected to give us a performance boost. That's it. Questions? Yes. Is the other tests available somewhere? So I could test some machine. Yes, sure. Actually, they are on GitHub by using private repository. I'm going to make it available. Yes, sure. Test data and R scripts, these are a lot. So yeah, sure. Other questions? Okay.
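The two chained models described in the talk have roughly this shape: an additive model for bytes per geometry feeding a linear throughput model with an HTTPS penalty and no significant compression term. The coefficients below are placeholders rather than the fitted values; as a sanity check, the talk's own worked predictions were about 0.3 s for 96 average geometries at generalization 0.05 with 4-digit precision, versus about 2.6 s at 0.01 with 15 digits.

```javascript
// Additive model for the payload: bytes per geometry as a baseline plus one effect per factor.
function sizePerGeometry(effects) {
  return effects.baseline + effects.precision + effects.generalization;
}

// Linear throughput model: geometries/second falls as geometries get heavier, with an extra
// penalty for HTTPS; compression was not a significant term, so it does not appear at all.
function geometriesPerSecond(bytesPerGeometry, https, coef) {
  return coef.intercept - coef.perByte * bytesPerGeometry - (https ? coef.httpsPenalty : 0);
}

function predictedSeconds(nGeometries, effects, https, coef) {
  return nGeometries / geometriesPerSecond(sizePerGeometry(effects), https, coef);
}
```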
In order to deliver a rich user experience, features (attribute data and geometries) have to be sent to the client for mouse-over visual effects, synchronization between charts, tables and maps, and on-the-fly classifications. GeoJSON is one of the most popular encodings for the transfer of features for client-side map visualization. The performance of client visualizations depends on a number of factors: message size, client memory allocation, bandwidth, and the speed of the database back-end amongst the main ones. Large GeoJSON-encoded datasets can substantially slow down loading and stylization times, and can even crash the browser when too many geometries are requested. A combination of techniques can be used to reduce the size of the data (polygon generalization, compression, etc.). The choice of an open-source DBMS for geospatial applications used to be easy: PostGIS is a powerful, well-supported, robust and fast RDBMS. On the other hand, unstructured data such as (Geo)JSON may be better served by a document-oriented DBMS such as Apache CouchDB. The performance of PostGIS and CouchDB in producing GeoJSON polygons was tested with different combinations of factors that are known to affect performance: compression of GeoJSON (zip) to reduce transmission times, different levels of geometry generalization (reducing the number of vertices in transferred geometries), precision reduction (the reduction of the number of decimal digits encoding coordinates), and the use of a topological JSON encoding of geometries (TopoJSON) to avoid redundancy of edges transferred. We present the results of a benchmark exercise testing the performance of an OpenLayers interface backed by a persistence layer implemented using PostGIS and CouchDB. Test data were collected using an automated test application based on Selenium, which allowed us to gather repeated observations for every combination of factors and build statistical models of performance. These statistical models help to pick the best combination of techniques and DBMS, and to gauge the relative contribution of every technique to the overall performance.
10.5446/15536 (DOI)
It's about scaling. Many people talk about scaling things up. It's always complicated. And I just want to make an example because, of course, you can also scale up your Postgres database. You just have several instances, put a proxy in front, have your data sharded, and so on. But the real problem with scaling up is the operations. What happens if one server goes down? So let's take this example. You have three documents; you just put them on your cluster. And in this case, they just distribute equally across those three servers. And of course you want to have, so these are the original documents, you of course want to have copies. You want to have replicas on the servers as well, in case something goes wrong. So now one server goes down. And of course, you can't access C anymore, the document C, because it's down. So you just go to the admin interface, click one button, and say, activate the replicas. So now the C replica gets activated. And now you have access to this document again. You have to imagine it's a live application. You have thousands of users accessing it. And it basically just keeps on running smoothly. There might be just a short downtime when the server goes down. But once you say, activate the replicas, it runs smoothly further and you don't have any downtime. The system just keeps on running. And then of course, you say, well, I now don't have a backup of C anymore, and if something bad happens, I might lose data. So therefore, you just click another button. It gets reshuffled again. So now we have two servers. Here you have the copies on the different machines. So again, if something goes down once more, you still have access to all the data. And the operation just keeps running smoothly all the time. But now, to the point: this talk is about GeoCouch. I don't want to stay on this slide, so I keep on going. GeoCouch works with Apache CouchDB and with Couchbase. But of course, CouchDB has different goals than Couchbase has. So I also see the goals for GeoCouch in different spots. For Apache CouchDB, I see the point of publishing fast. As I said, you just store your documents as JSON. And I was in a project where they had the data in Access spreadsheets. So we just extracted them into JSON. And then you put it in your CouchDB, build your indexes on it, build an OpenLayers application that would access the JSON. You can even put the OpenLayers application in your CouchDB and you're done. That's it. You don't need any GeoServer or any other application server in between. This is all you need to do. This is what I mean by publishing fast. And this is where I want to see GeoCouch being used. Because it's so often that government agencies just want a quick way to publish the data they already have, but setting up the whole stack just takes a lot of time, a lot of knowledge. And yeah. Couchbase, as I said, is about scaling up. So this is where I really want it to be a distributed, highly scalable geo database. This is basically what I said in the example in the beginning. If you have, say, large imagery or big data and you want to index this, this is where you would use Couchbase for it. And what features does GeoCouch support? It's again funny that we had the talk previously from MongoDB, because it's kind of the same. GeoCouch doesn't support many features. But I agree with them that this is probably 80% of the use cases. And it's for the same reasons. Because yeah, it's just easier to scale up if you don't have so many features.
And it's just simpler and therefore faster and so on. So we have our geometry types. We can store polygons, points, line strings, everything that JTS supports. And on the query side, you of course have bounding box search. And this is already probably 50% of the applications. You just have a web mapping application and want to show something from a database. Then there's polygon search. This is currently not in the Couchbase version, but in the Apache CouchDB version. It's on a private branch. So what I want to say is it's finished. It's done. And it works. But it might be some effort to get it running if you want to play with it. It uses, in the background, GEOS, because GEOS does the hard work. It's existing, working. And yeah, there's no point in re-implementing it myself. Then there is k-nearest-neighbour search. This one also works, but it's kind of a sad story, because it's implemented, but I haven't published it yet, because it was actually a student that worked with me together on getting it done. He ported some code from PostGIS to make it work on a sphere. But he just ported some algorithms from PostGIS. PostGIS is GPL. And you might consider it a derived work. So I'm not sure if I can use it or not, and I just don't want to get into legal trouble. So if you want to have k-nearest-neighbour search, contact me, get the code. I'm happy to give it to you, but I just haven't published it yet, because I couldn't be bothered to think about legal stuff. It will be a problem, because you're under an Apache license, right? Yes. You'd have to then re-release your code. Yeah, but the point is that he ported it from C to Erlang. So is this still a derived work, or did he just read the algorithms? And so it's, er, yeah. Anyway. And of course, what the talk really is about, as I also said in the beginning, is the multidimensional search. This is currently working with Apache CouchDB only. I will make a release soon. I wanted to make it for the FOSS4G, but on the train I didn't have enough time. I slept on the train. So there will be a release soon for the new Apache CouchDB 1.4 version, which was released one or two weeks ago, which will then contain the multidimensional search. And I might just put in the geometry search as well. What multidimensional search means is really that you can build up indexes with any numeric value you like. So let's say one dimension is the geometry, which is only two dimensions. Then, so this example is from a trade office, they want to say, OK, I want all bakeries, which is another dimension, that opened in 2010, which is another dimension, which have a certain size. So we would have a six-dimensional query. And yeah, this is what it supports. So from a technology perspective, GeoCouch is mostly Erlang. We're currently porting things to C for performance reasons. The algorithms — this is for the geeks among you. The single inserts use the revised R*-tree, which is from the same guy who did the R*-tree, which is basically the algorithm you normally use. It's used in Oracle Spatial, it's used in PostGIS. So this is the way to go, but this is an even better version. And for bulk loading, I use a paper called sort-based query-adaptive loading of R-trees. And I've just put it there so you can click on it and get the paper and read all about the geospatial stuff. It's a really interesting thing. And this is what I currently implement for the C version of GeoCouch. And finally, the future. As I said, it's about scale.
It's about performance. So I think I'm really at a point — I've worked on the project for five years already, and now I think I have a good understanding of R-trees. So finally, the performance. My goal is to be faster than PostGIS. It should be quite simple. The reason is not that I'm so smart; it's because, again, PostGIS does a lot more than GeoCouch does. And if you do a lot more, of course, you have a lot more overhead. So it shouldn't be too hard to be faster. I want to use an LSM R-tree. There's already a paper from 10 years ago about it. And there's an upcoming paper, which was promised to be out in October, from a university in California that does the LSM R-tree. From the source code it looks promising. I'm keen to read the paper. This is what I really want to implement it with. Then, for the geometry stuff, I use GEOS. But I'm not happy about it, like many people, because this LGPL is always an issue. And I really hope I can gather people to just create another geometry library, under a BSD or MIT license, that doesn't have those limitations. Because, yeah, it just sucks. And one thing is that the multidimensional index should also support strings, so it can also search on string attributes, for example. This is, as far as I'm concerned, an unsolved problem. I've never seen a hypercube which supports strings. So if anyone has a hypercube that supports Unicode, let me know. And, of course, as it's called GeoCouch, it should do spherical calculations. Thanks for your attention.
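To make the query side described earlier concrete, a minimal bounding-box request against GeoCouch might look like this, following the conventions in the GeoCouch README (a design document with a "spatial" function that emits each document's GeoJSON geometry, then a GET on the _spatial endpoint). The database, design-document and view names are made up.

```javascript
// Design document "_design/main" stored in the "places" database:
//   { "spatial": {
//       "points": "function(doc) { if (doc.geometry) { emit(doc.geometry, doc._id); } }"
//   } }
var http = require('http');

var bbox = [144.5, -38.2, 145.5, -37.5].join(',');   // west,south,east,north
http.get('http://localhost:5984/places/_design/main/_spatial/points?bbox=' + bbox,
  function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
      JSON.parse(body).rows.forEach(function (row) {
        console.log('hit:', row.id);                 // each row is a document inside the box
      });
    });
  });
```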
Databases that support spatial queries are often limited to three dimensions, but the requirements are increasing. You might want to query in more dimensions, for time ranges or other attributes like trajectories. Documents are represented as JSON. The values that will be stored in the index can be extracted from anywhere within such a JSON document. Even conversions like reprojections are possible. Apache CouchDB and Couchbase are document databases, and hence belong to the non-relational space which is also known as “NoSQL”. One of the strengths of Apache CouchDB is its (multi-master) replication. You can keep the data from several different instances easily in sync, even if you change the data on different instances. The replication isn't limited to Apache CouchDB; there is a whole ecosystem around it. It's even possible to sync with your web browser and store the data in its offline storage. This way the user can access the data offline, without the need to be always connected to the server. In contrast, Couchbase's strong point is working at scale. The data gets automatically sharded across machines. Adding and removing servers at a later stage can be performed through a simple web interface. If a server goes down, the system can still work without any interruption. GeoCouch, Apache CouchDB and Couchbase are open source and licensed under the Apache License 2.0.
10.5446/15533 (DOI)
you where we were doing I would say database software so list reports list reports really boring stuff and then in 2002 I joined a company so I'm from Belgium I joined a small company called Ionic software which in their history have been acquired by Erdas then by Integrauff so I've been working for 10 years in commercial JIS company and then beginning of this year I've changed and I've joined Geometis it's a company located in south of France nice place and they have a lot of components open-source components and they are building solutions mainly for scientific domain and industrial customers so it's it's a set of open-source component like geo API geo toolkit constellation and we have a set of solutions based on those components started also in 2005 so at the time I was still in Ionic I've worked a lot for DOGC and for ISO mainly for all cataloging and metadata working groups I've been chairing some groups I've been the editors of some specification and since 2010 I'm also a member of the OTC architecture board so that's that's more or less my my background so what is a special database I won't explain you really what is special database I suppose you know it but the kind of information that you would like to store into database that could be look especially located are for example bars or beer location where you have simply position latitude longitude or you can have also box and all this kind of information do not really require what we call a special database many database can store numeric values and you can do simple operation on those on those information like finding where is the point compared to my position or things like this but when you are dealing with more complex information more complex shapes or more complex discovery operations you need a specific structure in your database so a specific way of storing the information and a specific way of doing the query on this information and to have efficient queries you need to build some specific indexes and that's where you really need a special database so on the market what are the solutions that exist for special database of course oracle post GIS special light but for no sequel database there are not a lot of I would say real special solutions Neo4j, CoachDB and MongoDB have some special extension where you can store some geometries you can do some sort of filtering on them but it's not it was not as I would say as powerful as we were looking for so for that reason we decided to think about a component that could be plugged as a cartridge on more or less any database any storage and what what was required if you have a no sequel database you need to define a special model for storing your information some operations on those special information and index those information to have efficient discovery so we needed to find a way to build those components and be able to plug them into no sequel components so we build those components those architecture and then we add also to define what kind of geometry we would like to support so for this we use JTS which is a well-known library and we decided to support all the JTS object all the JTS geometries which is already far more complex than simply a point or a box then we needed to to define a set of operations so either we looked at other special database and copied the operation names to have a kind of compatibility or we looked at what exists as a standard for those special operations and those special extension and we decided to use an ISO standard so SQLMM which have a specific 
So this is an ISO standard that defines the list of methods and functions that you can apply to geographic geometry objects. You have a set of transformation operations, functions to transform from WKB, from well-known text, from GML into a geometry object, and you also have a set of methods that you can apply to your geometry to do some discovery: intersects, the distance between two points or two objects, checking whether an object is within another object, or contains, and things like this. All those methods have a standardized name, which means that anyone who implements the same ISO standard will have the same query language. The third component is the index, the spatial index, and there are a lot of different spatial indexes that you can build. After some comparisons and some implementations we decided to implement the Hilbert R-tree index, which is based on Hilbert curves; it's a bit slower for insertion, but it's really faster for retrieval of information. If I have internet access I can show you a curve. Next we had to decide on which NoSQL database we would do the tests, on which one we would try to plug our cartridge, and there you have a lot of choice: currently there are more than 150 different NoSQL databases, and there are different paradigms that exist for NoSQL databases. We had to choose one which matched, I would say, the geospatial requirements, and we really liked the graph model, because life can always be represented as a graph of concepts, of components; but the document model was also a really good match, for example for all the documents that you can have in the geospatial domain, like metadata, files, things like this. So we decided to check, among the multi-model databases, whether there was one using both the graph and the document paradigm, and OrientDB was one implementing those two paradigms, so we started playing with it. We were in contact with the CEO of the company that builds OrientDB and they were interested in this cartridge. So we used it because it's a mix of graph and document; it's also really fast, it's amazing how fast it can be; and it's NoSQL as in not only SQL: there is an SQL layer that you can use to query your database, so it's not a complete change from a relational model, and someone who is using PostGIS, for example, can easily switch to OrientDB with the spatial cartridge; the change is really small. It can also be used as an embedded database, which is also useful for mobile and things like this. So we took the OrientDB project and modified it a bit to allow pluggable components, so we could plug in our data cartridge; we had some development to do on it and we collaborated a lot, so someone from our company is now a committer on OrientDB, which is interesting for us because our work can go into the project, and of course it's interesting for them also to have someone on board who manages the spatial domain. What we did with the SQL: we modified the code and the whole parser layer to allow parsing SQL statements with the different operations, and developed this kind of pluggable adapter to support the geospatial functions. And that's the kind of query that you can do on a NoSQL database, OrientDB with the spatial cartridge: you can insert geometry using the operations to create your point, your polygon, your line, things like this, and the query is really like what you are doing in PostGIS, for example, so you do SELECT statements and in the WHERE clause you can use the different methods that apply to the geometry objects.
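To make that concrete, here is a minimal sketch of issuing such statements through OrientDB's Java client, roughly as in the 1.x API of that era. The Place class, its fields, the connection details and the spatial function names are assumptions; the function names are modeled on the SQL/MM naming discussed above rather than on GraphGIS's exact implementation.

```java
import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;
import com.orientechnologies.orient.core.sql.OCommandSQL;

public class SpatialQuerySketch {
    public static void main(String[] args) {
        // Connection URL and credentials are placeholders.
        ODatabaseDocumentTx db =
                new ODatabaseDocumentTx("remote:localhost/gis").open("admin", "admin");
        try {
            // Insert a geometry; ST_GeomFromText follows the SQL/MM-style naming.
            db.command(new OCommandSQL(
                    "INSERT INTO Place SET name = 'beer location', "
                  + "geom = ST_GeomFromText('POINT(7.4386 46.9511)')")).execute();

            // Discovery in the WHERE clause, PostGIS-style; distance units depend on the SRS.
            Object result = db.command(new OCommandSQL(
                    "SELECT FROM Place WHERE ST_Distance(geom, "
                  + "ST_GeomFromText('POINT(7.44 46.95)')) < 0.01")).execute();
            System.out.println(result);
        } finally {
            db.close();
        }
    }
}
```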
The geometries are stored in the database using WKB, and the index is kept in a file, so the index is not stored in the graph database itself. At the beginning we tried to use the graph database to store the index as well, which in some use cases was interesting; for example, when you have to insert a node you do not have to rebuild the whole list, you simply insert the node between two other ones, which in a graph is pretty efficient. But for some other parts of the work it was more difficult, so we decided to keep the index out of the graph model. Other kinds of things you can do: you also have the Java API in OrientDB, which means that using Java you can build your own representation of a geometry. Here you have three samples, so WKT, GML and WKB for example, and you can easily execute SQL commands using the Java language on OrientDB to insert geometries into the database. Of course we also support coordinate transformation, so you can store your geometry in any SRS in the database and you can transform coordinates; we also have the EPSG database in the graph, so we use it to do the coordinate transformations. Then there are other kinds of queries, for discovery, using methods for example to do a union or to calculate the distance between two points. So it's really like PostGIS, it's more or less the same, but it's a NoSQL database, which means that you have all the advantages of a NoSQL database for deployment, cluster deployment and things like this. This is the list of operations that you can apply to your geometries. So after having built these components we had to select a use case to test them. We had a list of use cases that we could do, or that we would like to do, and we tested it on different use cases. For example, the first one we tried was to store rasters in the database: following the ISO 19123 coverage model we store the raster into the database, which is pretty useful also for the pyramids that are created, so you also have a pretty efficient raster database using this. As I've said, the EPSG database (ISO 19111) is also stored in this graph database, which is easy to embed into an application and makes it easy to retrieve coordinate transformation operations. As I've said, I've been working with metadata and catalogues for a while now, and I've implemented several catalogues in my previous companies, and metadata are always a set of documents that are all related to each other. If you take the big picture of what can be done in GIS, and I've been involved mainly in the Earth observation domain, you have a set of concepts that can be stored into a catalogue or registry, that you always need to link together, and that you would like to retrieve based on some other criteria. So, for example, the big picture is: you have datasets, which can be a satellite product, can be vector, anything like this; you have services, which usually publish those datasets; you have collections of datasets, also called dataset series; you have rights on those concepts; you have acquisitions of those datasets, which can be either satellite or in-situ sensors, things like this; you have the rendering of those datasets, either grid coverage or vector data, you need to render them; then you have some portrayal rules that you have to store or define somewhere; and more and more you define semantics, because people are trying to be more accurate in the description of the metadata, so it's not simply full text now, people are using more and more ontologies and things like this to describe the metadata. So all those concepts are interesting to have
in a central catalog or central database and all those concept are linked together so that asset is acquired by a sensor that asset is gathered into collection is published through a service collection are also published through service all of them can be portrayed they have also rights that you need to manage on the data set on the service and of course semantic can be added to any other concept so here you really see it's a graph it's a graph of object and I've implemented those kind of concept into relational database and you always come into the problem on how to normalize this structure into relational database if you're playing with a graph and document database it's more natural you do not have to break structure you keep it as it is and you can traverse your graph to find whatever you want coming from whatever you are so it's really really interesting use case and when you are dealing about semantic of course link data is just a step beyond this is really a graph is dbp here and you see that graph is really a natural model for storing those kind of information the last use case which was interesting is open street map data open street map model is really a graph you have note that relates to way you have way that relates to relation and when you download for example the osm row data it's really a graph of object so usually people take those data transform them into a relational database so they lose the the original structure so what we did to to be able to do some analysis on those data is that we put osm data directly into the graph database without breaking the original model so we keep the structure and I will I will show you what what can be done okay so this is a simple application of viewer it's not optimized for visualization it's simply a data viewer this layer is the osm tile so here we connected to the tile server of osm and retrieved the map if I hide this one so what I have on my laptop is a graph database where we have stored the osm data as it is using the same graph model and I'm retrieving the data as a graph and I'm applying SLD rendering on the fly so it's vector data with SLD rendering directly from the database so you will see you can find it it's pretty slow but I would say that even with retrieving all this information as the original model is pretty fast so there is no optimization it's simply taking the graph model of osm applying rendering so it means that the purpose is not for display because if you zoom out you can see it can be pretty slow because you are retrieving a lot of information but the purpose is to show that keeping the original model of osm you can do data analysis after which is pretty interesting and this is using the graph database it's not so slow okay so just disclaimer to be honest so the purpose is not to show you how far it is or it's not just to say that for data analysis it's really interesting so what we've done on our RionDB as I've said we have improved the component to accept plugins because we did not want to break the original structure and put our components everywhere we wanted to have a component where you simply plug this spatial data cartridge in it we propose it as a get up for currently so RionDB is working on the next release and we will work to align for the next release pretty soon and so the graph.js cartridge has been developed as a DB agnostic and in the title of the presentation I put for graph database between bracket because this could be applied to not graph database also the roadmap I would say of graph.js 
is to integrate with future versions of OrientDB; to work on the query engine for big data management, because big data is really big (here we have done some tests with OSM on some countries, and if we want to load the full world it's a bit more complex, so we also need to optimize the query part for that); to test this cartridge on other NoSQL databases, other candidates, because it could be useful to check whether it works, or what has to be adapted in those components to plug this cartridge in; and also to validate deployment in a cluster, because that's one of the advantages of those NoSQL databases, that they can be deployed easily in clusters and you can, for example, do some geospatial sharding to get more performance. Thank you very much. I must admit that at the beginning it was really hard; we had to change many parts at the beginning, because it was not so stable and it was not designed to be extended, so we had a lot of work to do, and that's the reason why we asked to become a committer on the project, because there were a lot of things to change, but it was useful for them also. But yes, you're right, it was not as easy as it could have been. And sorry, one quick other one: the OSM database, have you got any use cases where you get value by walking the graph rather than a traditional query? I would say not yet. The purpose of OSM was, because it's a big data use case, the amount of information: as I've said, metadata is interesting, but you never reach the number of metadata records that you have in OSM data, so OSM was mostly to have a large dataset of concepts to test the database. As I said, for the metadata, yes, it's useful to traverse the graph, because, if I take the slide, you have this structure, and the typical use case is: I would like a service that provides data acquired by, let's say, SPOT 5, or acquired by a sensor which has a resolution of less than this. You do not always have this information on the dataset itself, so you need to traverse the graph from the service to the data to the sensor, because you have to go to the last node to find the information. What's the name of the project on GitHub, OrientDB? Right now it's a fork of the code of OrientDB; if you search for an OrientDB project there is a fork of the code that makes OrientDB pluggable. And right now we are discussing with Luca Garulli, the leader of the project, how to integrate this work into the next version, but right now the discussions are about how we bring this big chunk of code into the 1.7 version or the 2.0, because this is a huge change, we altered the whole SQL model; it's more performant, but there is a lot of change that the team doesn't know yet, so they have to have a deeper look at the code and at exactly how it works. This is very impressive; how many of you were working on this project? For the whole spatial package, three or four people, depending on our workload. Just to check that I understood correctly: you mentioned geospatial sharding, meaning that one area is on one shard and some other data is there and it's on the other
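As a closing aside on the index choice mentioned earlier: a Hilbert R-tree orders entries by their position along a Hilbert curve. The sketch below shows that ordering in isolation, mapping a grid cell to its distance along the curve so that nearby cells usually receive nearby keys; it illustrates the curve itself, not the cartridge's actual implementation.

```java
public class HilbertSketch {
    /** Distance along the Hilbert curve of a cell (x, y) on an n-by-n grid, n a power of two. */
    static long xy2d(long n, long x, long y) {
        long d = 0;
        for (long s = n / 2; s > 0; s /= 2) {
            long rx = ((x & s) > 0) ? 1 : 0;
            long ry = ((y & s) > 0) ? 1 : 0;
            d += s * s * ((3 * rx) ^ ry);
            if (ry == 0) {                       // rotate/flip the quadrant to keep locality
                if (rx == 1) { x = n - 1 - x; y = n - 1 - y; }
                long t = x; x = y; y = t;
            }
        }
        return d;
    }

    public static void main(String[] args) {
        // Two neighbouring cells: their curve positions tend to stay close, which is
        // what makes the value usable as a one-dimensional index key for 2D data.
        System.out.println(xy2d(1024, 512, 512));
        System.out.println(xy2d(1024, 513, 512));
    }
}
```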
Driven by the major players of the Web like Google, Facebook and Twitter, NoSQL databases quickly gained real legitimacy in handling large data volumes. Starting from a simple key-value concept, NoSQL databases have quickly evolved to handle recurring relationships between entities or documents. The graph/document paradigm provides flexibility that facilitates the representation of the real world. Beyond representing information from social networks, this data model fits very well the problems of geo-information, its variety of data models and the interconnections between them. The emergence of cloud computing and the needs driven by the Semantic Web have led publishers of geospatial solutions to consider other ways than those currently used to store and process GIS information. It is in this perspective that Geomatys has developed GraphGIS, a spatial cartridge for OrientDB, the graph-oriented NoSQL database. This solution provides support for geographic vector, raster and sensor data, in multiple dimensions, and their associated metadata.
10.5446/15531 (DOI)
Thank you, Alex. Can you all hear me? Those in the back? Is this okay? Can you speak up a little bit? Yeah. Okay. Now, can everybody hear me in the back? Is it okay? Okay. I won't use the microphone. Okay. Welcome to today's talk. I'm Marco Turkovich. I work as a developer in I-GIA from Croatia. It's part of IN2 group. I won't bother you too much with the numbers, but we are the largest software development company in Croatia. We have offices across Eastern and South Europe. And maybe to say that I-GIA is a competent center for GIS. So we do a GIS development. Today's presentation and a solution is part of Kosovo Cadestral Agency spatial data infrastructure that we have implemented recently. It is an ongoing project, so we have about two or three months of development left. It is an integrated Web GIS solution for spatial data definition and maintenance. And it consists of several functional software modules that were separate projects, a geoportal, cadastral and land information system, and address registry for address data, maintenance and definition. One of the specialties of this project was that we were forced to use Microsoft software for a database management system and for operational system because of contract of Republic of Kosovo and Microsoft. So we couldn't choose that, of course. If we could, we would go with Post-GIS, but that was not a question. And the other thing that modules that would be developed in this project would facilitate the next big software projects that we are part of. So this diagram shows the component architecture of the whole system. So from the user's perspective, we have a geoportal as a data dissemination point. You will recognize a couple of those logos there. We use that all. So this down there is an internal system. It's an internet application for KCA users where they define and maintain the data. This module deals with address register data, and this module deals with cadastral data. So this is the central part of the whole system. It offers viewing and editing capabilities for the data. So that's the point of the system that consumed most of our development time. This is the short overview of a KCLS graphical component. We call it KG, of course. It's integrated solution for definition and maintenance of KCA's data space. Spatial data sets, both address register and cadastral. And it's a basis for development of other modules that would work with other KCA information system. Of course, we use the geoportal to disseminate the data toward the public. Its architecture is a classical three tier architecture. On a database level, we have SQL Server 2008. On the middle tier, we have Geo Server with Geo Webcache and.NET MVC3 business application. Client side is developed in open layers, GeoXt and XJS, standard solution. And the components, of course, base to SOAP principles communicate via services, WMS and WFS and JSON. Couple of functions that our KG application offers. It's, of course, the main component is a viewer editor. It's the main point for the spatial data viewing and editing, then the feature class tool that enables us to define other feature classes and import data into our system later. It rests on Geo Server Rests up before publishing new feature classes on Geo Server once they are collected. Then we have import tool that we use for importing the data into our system. Currently, we support GML shape, DWG, Interlis and GeoT for Raster data. And then there is styling tool which we use to dynamically style our geometries on a client side. 
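Going back to the feature class tool for a moment: publishing a newly defined feature class through the GeoServer REST API comes down to a POST of a small XML fragment, as sketched below. The workspace, datastore and layer names, the server URL and the credentials are all placeholders.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PublishFeatureTypeSketch {
    public static void main(String[] args) throws Exception {
        String body = "<featureType><name>parcels</name></featureType>";  // hypothetical layer
        URL url = new URL(
                "http://localhost:8080/geoserver/rest/workspaces/kca/datastores/kcls/featuretypes");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/xml");
        conn.setRequestProperty("Authorization", "Basic " + Base64.getEncoder()
                .encodeToString("admin:geoserver".getBytes(StandardCharsets.UTF_8)));
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("GeoServer responded with HTTP " + conn.getResponseCode()); // 201 on success
    }
}
```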
It's client side. This is a short GeoPortal overview. Of course, as I said before, this is a central public oriented software for dissemination of spatial data sets. And you can see the user interface here on this picture. This is architecture of GeoPortal. It's very similar to KG application, except you will notice we have LIFRE portal enterprise here that wraps client side functionality and access proxy on middle layer for proxying WFS and JSON requests to business application. GeoPortal enables us to view spatial and related open numeric data. It offers search and discovery services, ordering and download of data sets and product, and upload of data sets by data providers and enables users to give feedback on data quality so that can be checked. This is the URL for GeoPortal, so check it out. So the next part of the presentation is about challenges that we have met during the development process. I have listed 308 challenges, so we'll go one by one until the end. So I better hurry. The first challenge that we knew it will be a big problem. This is a very bad joke. So it's OK. If you're not laughing, it's called bad joke yield. We had to make a tool for curve digitization. So what we did, we implemented quadratic curve handler, so that's open-air development based on quadratic-bezier curve parametric equation. So we, in the end, approximated with line geometry, and this is how it works in practice. So you digitize the first point, the last point, and then you use the apex to reshape the curve. This is the next cool handler that we did. This is a line curve-curve-switcher handler that enables us to seamlessly switch between arc digitization and linear segment digitization. So it enables fast digitizing. This is 4G. OK. So when it comes to polygons digitization, we also implemented several cool features. We basically, what we did, of course, was to extend open-layers draw feature control. We used JSTS on a client side for geometry validity and topology checks and pull operations. So the first tool that I will show you is a polygon splitting tool. The second is a remove polygon area, and then adjacent polygon digitization. So this is a demonstration of polygon splitting tool. It's a done client side, so it uses JSTS for splitting the polygon. And we can see here the one multipolygon has appeared in this configuration. It supports polygons with holes, so it's quite cool. And it's done client side. This is a tool that we can use to draw multipolygons on a client side with open-layers. So we just add geometry while tool is active, and it will add to existing geometry. We can also subtract the geometry from a polygon and get a cool-looking robot. This is a very cool feature. It's inspired on the GIS software. And what we have here is a first polygon, and then we digitize a line around it, and it just finishes a polygon automatically to close the minimal surface with the first polygon. You get the idea. These are cat-like tools for perpendicular and parallel construction. What they do first is they segment the line from point to point so that perpendicular and parallel construction would even make sense. Then when we select the segment that we want, they just programmatically draw a parallel and perpendicular line, and then you can reshape it by dragging. This is a nice example of a tool that guides the user how to use the application. It has five steps. So basically, what we want to do is to digitize a point that lines on an intersection of a circle and a line. 
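Before the steps of that tool are walked through, a quick aside on the curve handler described above: the quadratic Bezier parametric equation B(t) = (1-t)^2*P0 + 2*(1-t)*t*P1 + t^2*P2 is sampled into a polyline. The client code is OpenLayers JavaScript with JSTS; the sketch below shows the same sampling in Java, and it treats the dragged apex directly as the Bezier control point, whereas the real handler may first map an on-curve apex to a control point.

```java
import java.util.ArrayList;
import java.util.List;

public class QuadraticCurveSketch {
    /** Sample a quadratic Bezier curve defined by a start point, a control point and an end point. */
    static List<double[]> approximate(double[] p0, double[] p1, double[] p2, int segments) {
        List<double[]> pts = new ArrayList<>();
        for (int i = 0; i <= segments; i++) {
            double t = (double) i / segments;
            double a = (1 - t) * (1 - t), b = 2 * (1 - t) * t, c = t * t;
            pts.add(new double[] {
                    a * p0[0] + b * p1[0] + c * p2[0],
                    a * p0[1] + b * p1[1] + c * p2[1] });
        }
        return pts; // these vertices become the line geometry that approximates the curve
    }

    public static void main(String[] args) {
        for (double[] p : approximate(new double[]{0, 0}, new double[]{5, 8}, new double[]{10, 0}, 16)) {
            System.out.printf("%.2f %.2f%n", p[0], p[1]);
        }
    }
}
```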
So what user does first is digitizes start point of the line, end point of the line, center of the circle, enters radius, and then selects the appropriate point that he wants to use. This is a very cool feature. It's topology preserving modify feature. It's basically extended modify feature, promote layers, and what it does is when we drag a point that coincides with other points, that point goes with it as well. So the topology of our section is preserved. It also has a validity check. We don't allow self-intersecting polygons, and it has a rollback. So when we do an error on geometry, it just fixes itself to the last valid state. As you can see, I think it resides on a JSTS for all the operations. It could use some modifications like insert of tool that would insert vertex on all coincident geometries and delete tool, but that's a work in progress. OK. Basically, our application is a full-featured web editing application. And we knew we will have performance issues when we decided to use open layers, because the large number of features will always overburden the browser. It has limited capabilities, limited memory, so we had to think of something that would be useful for us. Something that would ease this up. So what we did was to extend open layer strategy bounding box so that it implements seamless switching between vector and WMS layers based on a zoom level. So on a large scale, you have vector features, and on a small scale, when you zoom out, you have WMS features. So that means that you can draw on the full extent, but you can select anything. So if you want to select features, you zoom in to 1 to 5,000 or whatever you want, whatever you set it. And then you can select the edit data and the data that is edited and selected that doesn't get uploaded when you zoom out again. So it's a pretty cool feature. And the bounding box does everything, keeps track of the WMS layer, so you don't have to worry about anything. Of course, this is this actual concept is inspired by edit session on desktop GIS tools. And what we did was implemented one set of controls that can work on all layers. So we don't have to duplicate controls. What was missing in Opal layers that to work out of the box was that some controls don't have implemented set layer. So we added set layer method for every control that we use. And then when user activates edit session for some layer, all controls get rewired to that layer, so they can work on it. One of the requirements of Project was to enable user to dynamically style vector and WMS layers, so to be able to change the styling information about everything. So for this, we used the Geosolutions style editor component. And what we do is when a user edits the SLD, through the graphical interface, we create a new layer. Through the graphical interface, we create vector styles from that SLD, dynamically reload the icons for each layer node in the layer tree. And besides that, we have also overridden OpenLayer Surrender and OpenLayer Format SLD classes to add support for label rotation, which we needed as well. Multilingual support was a big issue for us. We implemented in several layers in the application. On client side, we have I18n language files for JavaScript. And then in the database, we have multiple attributes for each language. We have an attribute in the database. But how to style geometries? We asked Geosolutions to develop a new function for that cause. It uses a variable and substitution. And it's called property. 
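Before coming back to how that property function is used, here is the geometric core of the five-step circle and line tool described above: intersecting the digitized line with the circle. A minimal sketch in plain coordinates rather than OpenLayers geometries:

```java
import java.util.ArrayList;
import java.util.List;

public class CircleLineIntersection {
    /** Intersection points of the line through (ax,ay)-(bx,by) with a circle centred at (cx,cy), radius r. */
    static List<double[]> intersect(double ax, double ay, double bx, double by,
                                    double cx, double cy, double r) {
        double dx = bx - ax, dy = by - ay;          // line direction
        double fx = ax - cx, fy = ay - cy;          // from circle centre to line start
        double a = dx * dx + dy * dy;
        double b = 2 * (fx * dx + fy * dy);
        double c = fx * fx + fy * fy - r * r;
        double disc = b * b - 4 * a * c;
        List<double[]> out = new ArrayList<>();
        if (disc < 0) return out;                   // no intersection
        double sq = Math.sqrt(disc);
        for (double t : new double[] { (-b - sq) / (2 * a), (-b + sq) / (2 * a) }) {
            out.add(new double[] { ax + t * dx, ay + t * dy });
            if (sq == 0) break;                     // tangent: only a single point
        }
        return out;                                  // the user then picks the point they want
    }

    public static void main(String[] args) {
        // Line through (0,0)-(10,0), circle at (5,0) with radius 3: expect (2,0) and (8,0).
        for (double[] p : intersect(0, 0, 10, 0, 5, 0, 3)) {
            System.out.println(p[0] + " " + p[1]);
        }
    }
}
```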
And you can send in request the name of the attribute that you want to use for geometry styling. That's pretty cool. It's a very serious and official system. So of course, we had a lot of demands on spatial data quality control. We have a set of business rules and data naming convention that we have to follow. And of course, we check geometry validity on the client side, that performance is OK. So it's good. But for several others, complicated topology and special layer-to-layer topology checks and hierarchy checks that form hierarchy, we use the database spatial data quality control via procedures and such. On the middle tier, we use JTS port and NTS for several other checks. So we could say we have a three-way quality control on our system. So because you remember the graph from before, all those systems are interconnected. And so as the Geoportals disseminates the data for internal applications, we don't want to overburden our internal infrastructure with public access. So what we did was to replicate the database and GeoServer catalog from one system to another. For that, we used Microsoft SQL server integration services and GeoServer raced up before reconfiguration of data stores when we moved them from one storage to another. I think this is the last one. So bear with me. Of course, as I mentioned in the start, we had to use Microsoft software for operating system and for DBMS. And also, we use an application server for Microsoft IS 7 and the.NET framework for and.NET MVC 3 for our business application. And for Geoportals, we use a LifeRay portal at Enterprise platform. And it wraps our client-side functions in portlets. So that was one of the. OK, so we have plans for improvement of that system. The development of the Cadaster component is still in progress, so it's an ongoing project about a couple of months more, and it will be over. Then we thought it would be cool to offer some kind of a data preview when importing our data, but that still needs to do something. More flexible language and security would be nice also for application. Which security, especially. And then the native support for curve geometries from GeoServer and JTS and Geotools and OpenLayers that would probably need a tremendous amount of time and effort from the community to be implemented. But I'm sure it will be implemented one day. So of course, it is our duty, and we would be very nice that we try to contribute more to the open source community. So in the end of the project, we'll contact some of the developers from OpenLayers and see if there's any interest in modules that we have developed for this application. So in the end, to wrap things up, we have learned that Phos4G is mature enough and flexible enough to implement very complex SDI implementations and business applications. So of course, the Big Plus is a great community, very large community that helped us in every way that it could. In the process of development, the software is very versatile, flexible, scalable, all the nice things. And even if we want to, we can easily integrate it with proprietary software as we had to do. Of course, we can only conclude that it was the right choice for this particular project and implementation. OK, so that's it from me. I think we have enough time. Thank you. So feel free to ask something. Yes, sir? We've got five minutes for questions. Hi, I'm Mark. Very good talk, honestly. Thank you. And I'm a part of the OpenLayers. So I'm very...
The presentation covers experiences and challenges encountered during the implementation of the Kosovo Spatial Data Infrastructure. The SDI consists of GeoPortal, Cadaster and Land Information System and the Address Register, all implemented on the FOSS stack and interconnected via OGC services.
10.5446/15530 (DOI)
Speaker is Mohamed Sayed, so I would then invite him to go on the stage and start his presentation. He is from here, which in Dutch is strange because it means he is from here. He lives here, but he is working for a company I guess called here. By the way, I'm not the one on the nerd, so I do want you to switch off your phones, or put them to silence, please, because it's annoying for the speakers. Thank you very much. Hi, good morning. So, yeah, I work for Heel, which is Nokia's location and commerce, but I'm not here for Heel, I'm just here on my own, and I put this presentation together, and I hope that it will be a contribution to the community. So, this is Agenda, and a couple of the screeners, one you already heard and just one more, some goals and motives, and a little bit of historical background, definition of glass computing, so we can talk about the same thing. And some use cases for phosphor G, and then I'll talk about AWS, the components and service at a high level, and if you want to be in the cloud, what you're going to be doing, and how you're going to be doing it, and then some common-foss tasks. If you are building an SDI in the cloud, you're going to have to import some data, you're going to do some rendering, some geocoding, and so on. And then hopefully we'll have time for questions. So, you heard the first one I worked for here, I'm a senior architect in the core platform group for Nokia's location and commerce, but again, this is just personal work, personal effort. I'm also not affiliated with AWS other than being a customer, so I use them to make sense, if somebody else makes more sense, I'll be using them, especially if they give me a bell-a-bell for the buck. This is still work in progress, I hope to be doing this once a month, or at least once a quarter, so you'll knowledge my vary, but I've tried to document it as much as I could. So why did I want to do this? I wanted to maybe validate some ideas with you, and maybe you can validate some ideas with me. I've done quite a few services in the past. I've worked for Yahoo before I went to Nokia, and so I also did www.yahoo.com to adopt my Yahoo. So I have a little bit of background in that area. Maybe I'll get some feedback from you, and we would like to see you try. Hopefully I'll help you save some money. There was a lot of frustration while I was doing this, so hopefully you don't have to go through it, and there's already stuff that some artifacts are produced, so maybe you can use some of that. In the process, I've discovered some problems and issues, and maybe everybody knows about them. I'm not very strongly connected to the PhosphorG community, although I've been an open source guy for a very long time. So I'd like to hear if somebody knows if there's work in progress to address the risks when we bring them up. And why I wanted to do this is basically because I think right now we are at a stage where everybody's talking about open data and geo and location, and I think this is a very good opportunity where you may be called upon to contribute, either in your organization or in your community or university, using open source technologies, maybe save some taxpayers some dollars, and at the same time have some fun. So I think this is kind of my main motivation is that I think this is an opportunity where we can do some disruption with open source in the geo-location space, and I think I can help a little bit. So a little bit of background. 
So cloud computing a couple of years ago was just a buzzword, and I'm like, yeah, everybody's talking about this, and it's kind of emerged out of that stigma of being a buzzword into a reality where people actually use it and they can deploy services to it. It had, you know, immensely lowered the bar for entry, the barrier to entry for start-up companies, for nonprofit organizations and so on. But this goes back a few years, maybe a little bit over a decade. I think it all started with virtualization. I'm not going to go into the ancient history before VMware and virtualization on the X86, but VMware and PaL used to ship commercial products, and they had some customers, a lot of people used them in lab environments and so on. But it wasn't massively adopted. Solaris, you know, back when Sun existed, they also did Solaris zones and containers, but it wasn't really until Zen before KVM came up, and they came with this power virtualization technique which really helped the performance, and this became a very viable solution to run things in a production environment. And so that was kind of like the disruption. I think Zen really tipped the scale here. At the same time, we were having problems with hardware, so we were not able to deliver any faster processors. Problems with cooling, problems with power, and the solution from the hardware vendors was to go for multicore, so you don't have faster processors, you just have more. So that was one thing, and then they caught on, that if they wanted people to run things on multiple computers, multiple processors, the software at the time was not quite ready, so how do you utilize that? So virtualization was a natural solution, but because of the performance, it was still not quite up to snuff. Some hesitation was there, and the hardware vendors started supporting virtualization into the chipset. So first AMD came up with nested page tables, and then Intel did extended page tables where you can virtualize the page table so the virtual machine doesn't have to contact switch all the time. Later on, there was IOL offloading, so TCP offloading, for example. You can just process things in the neck without having to go up to the main processor anymore. Same thing for storage. And then the storage and network vendors started thinking about how could they support this from an infrastructure standpoint. So there was virtualization in the storage. They kind of called everything that they had done before virtualization, even if it wasn't really new, but volumes became virtual volumes, and slices became virtual slices, and so on. But yes, so it started to gain momentum from there. And then on the consumer side, we started seeing smartphone stabilizers and multi-screens, and people wanting to use services and be able to access the same thing from everywhere. So now it became the idea, okay, we don't want to re-store things on desktops anymore, and maybe we store them on servers, but the servers have to be accessible, and this is where cloud computing kind of crystallized. AWS had really been pushing this for a long time, so they have really done a lot of work there, and they're way ahead of everybody else, as far as I know, in terms at least in breadth of coverage. OpenStack is trying to catch on. I think they do great work as well, but it's just, you know, they deliver software, and now there are companies which are trying to take the software and build infrastructure and services around it. 
So my definition of cloud computing, and this is just a definition, it's a computing paradigm where it's composed of abstractions, a set of primitives, and some interfaces and tools around them. The idea is that you try to hide the physical stuff, the stuff that's hard to move, the stuff that you don't want to be tied to, so you want to abstract that as much as possible, and then you have a new set of primitives, some of them not necessarily very new, but images, for example, is a primitive. Snapshots, volumes, a region, availability zones, they may have other terms for other providers, but they basically talk about the same thing, they're trying to abstract the data center, the actual computer or the actual hard disk away from you. And then tools and administrative utilities around that. What happens is that once cloud computing kicked off and people started deploying virtual machines and cloud and so on, things spiraled out of control really fast, and it wasn't in a very good shape to begin with, so the tools and automation also really helped set that path, so puppet chef, any other configuration, CF Engine 3, any other configuration management, but the idea is that you have the primitives, you have the abstractions, and you have the tools to manage them. So this is kind of like a block diagram, so at the very bottom you have the physical stuff, and then the primitives sit on top, and you have the tools and APIs at the highest level. We can even go further up, so if you look at things like Heroku, for example, they abstract even more where you just deploy to a platform, so you're very far away from everything else, you just have a command, you run it, you got a service. So this is kind of a clean representation of what it looks like. This is open stack implementation, so there's quite a few areas going back and forth, and this is kind of what it looks like in real life, so this is clock computing, and that machine is very important, because if that gets unplugged, the whole thing goes to shit. Alright, so AWS is a public cloud, so the same kind of diagram, but it would just be a little bit more specific, so we talked about compute as an EC2 instances storage. So EC2 instances is just virtual machines, they have a predefined set of configuration, so you cannot change or tweak the CPU or memory settings, you can just choose one of the models, you can attach drives as you wish. Then a set of storage, so S3 is like a storage over HTTP or HTTPS, and they have elastic block storage, which is kind of a NAS, or a SAM idea, and Glass-Hero is kind of a long-term archival. They have the foundation, you know, the regions, the actual brick-and-mortar implementation, the data centers, the power, the cooling, all the stuff that we don't want to think about. And networking, so we're with 53s DNS service, elastic load balancing, CloudFront is a caching service, and a set of tools around security, so identity management security groups. You go up one level, you see the simple queuing service, or search as a service, or redshift, which is post-glace SQL, and storage, kind of, so if you don't want to run a cluster, and you don't want to manage it, then they will do it for you, and you just put your schema, you connect to it, and you treat it just like a post-guess, or post-gray. Unfortunately, at the minimum, they don't support spatial, so it's only post-gray SQL. And there's more. There's simple email service, simple modification service, and so on. 
And the management layer is the API, is the outer scale, the CloudFormation, and that configuration, and so on. So what kind of use cases we can do with Phosphor Gene in the Cloud? Well, for start, disaster recovery backup, so it's very simple, very easy to just dump a table, or archive your SQL dump, encrypt it, ship it over to an S3 bucket, get it back when you need it, hopefully you never need it. So this is a very straightforward use case. The other use case is static logic free web publishing. So if you just have some vector data, or roster data, or any kind of static data, where you're not doing any logic, anybody who can make a request can get back that response. You can just publish this using S3 and CloudFront. I'll show you an example, a diagram. You don't have to run a web server, you don't have to run load balancer, you don't have to do anything. You just publish it, and you will pay for the request as they come in, but you're not going to have to maintain any infrastructure. Obviously, online Phosphor Gene, so you can do geocoding, or tiling, or routing, and so on. So any of the software that is available to us under a public license, you can just run it. If you run a GPL license software, you have to make sure that this you're compiling is that license. Data transformation drops. If you have a set of tiles, or a set of data, and you want to just transform them from one format to another, or maybe you have four or five different formats, and you have to do this overnight, you don't need to borrow a bunch of machines and just have them sit for the rest of the day, so you can just fire a job, get it done, and shut them back. Consecuration batch processes, so again, the same kind of concept, if you're collaborating with other people and you want to have some central storage, where they can upload their files, and maybe you can do some processing, put it back, and so on. So this is like a blueprint. If you wanted to do this static logic-free content using AWS, this is your content, and you can just put it to this S3 bucket. You can configure a platform distribution, which will point to this bucket, and you publish it, and you make your DNS-seaning areas to this zone that you are going to configure here, and that's it. Now users will go request your data, they will get seen into the CloudFront zone, and based on some telemetry and some other magic, they will get routed to the closest cache edge to them. If you want to have logs, you can also configure logs to go to an S3 bucket, where you can just retrieve them back later. So how do you build it? If you wanted to do this maybe for a university or for a district or just your company, how do you do that? So I think there are some architectural patterns, and you'll not see this in books, I kind of came up with this overnight, and I just wanted to share them with you, so you're aware of them, they fit some things better than others, sometimes you have to mix them, everything in the world is almost polyglot, so this is not a holy book. So the cookie-cutters, the idea is that you have a machine, the machine has everything that you would need, and you just manufacture them, you just have 10, 20, 100, as many as you need, or as many as you need. They have everything together, so they have the application and they have the data, and they scale horizontally, so if the traffic is actually growing, then you can just scale up, if the traffic is dying down, or on a low point, you can just shrink them. 
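Before the trade-offs of that cookie-cutter pattern, here is a sketch of the static, logic-free publishing blueprint described above, using the v1 AWS SDK for Java of that era. The bucket name, key and file path are made up, and the CloudFront distribution pointing at the bucket plus the DNS CNAME are assumed to be configured separately.

```java
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.CannedAccessControlList;
import com.amazonaws.services.s3.model.PutObjectRequest;
import java.io.File;

public class PublishTileSketch {
    public static void main(String[] args) {
        // Credentials come from the usual SDK mechanisms (environment, config file, instance profile);
        // exact constructor behaviour varies a bit across v1 SDK versions.
        AmazonS3Client s3 = new AmazonS3Client();

        // Upload one static tile and make it publicly readable; CloudFront then caches it at the edge.
        s3.putObject(new PutObjectRequest("my-static-tiles", "tiles/10/545/361.png",
                new File("/data/tiles/10/545/361.png"))
                .withCannedAcl(CannedAccessControlList.PublicRead));
    }
}
```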
The data is accessible to the machine itself, they are not connected to each other in any way, so if in case they fail, the failures look like. So very simple, scales very well in some use cases, so simplicity is one of the pros, and scales horizontally was laid, and localized failure impact, these are the main points. Problems is that poor support for right-oriented service. If you look at here, if you have so many machines, and everyone has a copy of the data, and the data has to change, then you have to push this back somewhere, somehow. And if the users are allowed to change the data, that's even worse, because now one machine is going to change, and it's going to have to replicate, then that doesn't work very well. It's a coarse grain scalability, so when you scale, you scale everything, or you shrink everything. If you actually have a service where your data layer is very, very fast, but your web service is not so fast, so you need a web service, or web applications, you can't do it, you're going to have to scale the whole thing together. So it's a cookie cutter. The load capacity has vertical scalability issues, so if you have data growing, or if your memory consumption is growing, you're going to hit the ceiling at some point, where you can't just grow anymore within that box. So there is a vertical ceiling, and how much you can process per node. When there's a centrist approach, basically you take the data out, and you let the application run on the nodes, you can have a second copy of the data as a backup disaster recovery, and this is, you know, now the data is centralized in this database, these are the clients of this, and then the user is out there, they're the clients of your web service. Works okay in a lot of cases. Scales well for mid-level loads, so if you're doing 20, 30, 40 new requests per second or so, it probably works. It has some other issues. So the pros first, you know, you can actually scale the web service by itself, or you can scale the database by itself, and that's a big advantage over the other approach. Five minutes. Okay, and we have to run with the slides. So the replicator, the master of colonies, where you can have masters of freckles distributed over the world, and they can replicate to each other as read-only slaves, and that works pretty good for read scalability. You're going to have to do some culture changes. You know, you release engineering, this has to be in really good shape. You have to adopt automation. You really need to think about agility. You need to think about using the primitives that are available to you. You need to make sure that you get a buy-in from the stakeholders. These are very, very key things. Some process changes. And some of the things that you have to remember, the leader implications, don't try to scale in the cloud as you would in a brick-and-mortar situation. So don't try to go for a, let's cluster these machines together and have a big cluster and four or five clusters. Just things fail and just plan for it, and it's okay, and just think about how you could recover as fast as possible. It may take one or two tries. You probably get it right on the third time. But the old approach, you know, trying to connect things and make sure they are going to be reliable. It doesn't work the wrong way. I did some other work in this process. I'll go through it really quick. When I started, I wanted to see if I can profile a renderer, a geocoder, a router in the cloud. And then I hit the first problem. I wanted to get data in. 
And then I started reading about people taking ten days to get the OSM data set. I thought that was horrible. I didn't want to get a synthetic data set with 200 megs and say, yeah, this works and I'm sure it will heal. So I really wanted to get the OSM data in. And I did. So first I did some tests. I looked at a bunch of countries, small countries. This is the time it took in seconds versus the size of the data set. So up to 3.2 gigs. We are within a 30 minutes or 35 minutes range. I started collecting some stats around this. And I provisioned different infrastructures. So I looked at the local drive, how long it took. I looked at provision IOPS, which guarantees IOPS performance. I looked at SSD, which is very expensive. And so guess how long it took. I went with the SSD one because I wanted to finish as fast as possible. Any guesses how long it took? Ah, you're right. An hour, I wish. It took 35 hours. But this is 250 gig. This is not so bad because a lot of people spend six or seven days to get this done. So this is actually not so bad at all. But guess what I did after I just finished this? No, I made a copy. So I made a copy. I built a RAID zero set over provision IOPS. And I created a logical volume on top. And I kicked off a data copy. And guess how long that took? Wow, it's short. Okay, it took two and a half hours. So actually, this is a file system copy, right? So I just shut down the database and then I copy the volume over. And then I archive it, of course. And this took two and a half hours. So this is what it took to do a provision IOPS to SSD. It took five and a half hours. It took about seven hours to do a SQL dump to SSD. SSD took provision IOPS to two and a half hours. SSD to SQL dump took four and a half hours. OSM to PG SQL to SSD took 35 hours. So guess where the problem is? So this is a profile for OSM to PG SQL. You can see the rank hash nodes get 15%, Y max 10%, copy to table 10%. So there's a lot of things going on before the data actually is stored in the database. And this is part of the problem. And I wanted to talk about that a little bit more. So you can read these notes later. Then I did some profiling with Matta and Matnik. One thread, three threads. So if you run this four threads, if you run this four rendering threads, you actually get six threads. So two threads do the bookkeeping for threads due to actual work. You're going to have to read this unfortunately. I'm going to run to the G-server part a little bit. So G-server, single layer, I took a small country called Finland. And zoom level 15 from 0 to 1 to 15. And this I did around this. And this is about 100 tiles per second. You can do about 100 tiles per second in that kind of setup. And this is kind of a ceiling because this is RAM disk. So it's not going to get any faster. Well, it could, but it would be very expensive. Trankation is very slow. So doing Tranky, your G-web cache, try to publish your data as version data, as version layers. This is much better if you can help it. Stano-known G-web cache will work a lot better. So try to think about just yanking that GW cache out and put some G-web servers behind it. I'll show you a new point. There's some possibly waste conditions in thread writing tiles. And this is kind of an example deployment where you can take the G-web cache out and you put G-servers behind them. And you put a load balancer that can do URL persistence. So these nodes are not coherent. They are incoherent by design. The idea is that you will go to the node that has the tile. 
If there is no node that has the tile, one will get selected and then from there on, it will be persistent. You can mix the disks so you can mix a fast disk and slow disk in a volume. That will probably give you a very good performance. How much did all this cost? $866 and two weeks. And then I have a backlog where these snapshots to the public. So I actually have the data now in AWS. I'm going to make it public so you can just import it. It should hopefully help out. I'm going to do some G-coding profiling in an OSRM profiler. And I'm open to suggestions as well. Thank you very much. So you took off all of your time. So I'm afraid you don't have time for any questions. I'm sure your slides will go up on the ELO geoplatform.
This presentation will show methods of working with AWS to design, deploy and tune Open source software with an end goal to bring up various geo-oriented full stacks. This includes databases, tile renderers, geocoders, routers with all dependencies. It will cover choosing the components, the deployment posture, prototyping, designing for cloud scalability, performance benchmarking and ongoing maintenance. Most of the concepts will lend themselves well to other public or private cloud situations.
10.5446/15528 (DOI)
Thank you. We're now going to talk about the farm-up online for short-term called GPI, and another program called Regional Environment Program called RMP. Both of our programs are a success in Norway. They're used a lot and they work very well. Open Source has been a main contribution to that success as we see it. A couple of words. I work at the Norwegian Forest and landscape institute, and they provide information about soil, forest, and do some research around that. I am a developer and my name is Lars Obsal. First we're going to say a couple of words about applications so you know what we talk about. The first application is RMP, GPI, the farm-up online application. It's a web client. It's a major tool for the farmers to collect information about their farms. That means area information, information about what kind of soil it is, and so on. It also contains information about ownership and it has different reports for printing, etc. Here's a short picture of the application. You see here, you have kind of area numbers, you can probably see it, but it's there. And there's the ownership information. Is it an anti-couple here? Is it a pointer? There was one, okay. I'll try this one. Here you see different ownership information, you have advanced printing of properties, and you can do a search on any property in Norway. I said it was mainly used by farmers, but it's also used by private persons because it contains information about all properties in Norway. It's open, so it's easy to use for everybody. It's also used a lot by community employed or governmental employed that work with agriculture. One thing in Norway is that farms seem to be bigger and bigger, so we need to group properties together. Because we rent a neighbor farm or something. So that is new functions that are coming now, that you construct your own farms, you group together all kinds of farms until you have the correct area that you actually are growing on. Yes, the next application is called the Regional Environment Program. That is also an application that takes information from many different sources and presents it to the user through a web client. This is only used by farmers, and they use that to apply for subsidizes. This application replaced all paperwork this summer, so a farmer cannot use paper to apply for subsidizes anymore. It has to do it on a web client. That of course means if you are going to apply for money, we need to know who you are, so we have to integrate with mean ID or as a national system for authentication. This system is used by external systems. So external systems connect to the system to get the information that the farmers draw on the maps. I can show you a short overview here. This is some of the complicated part. In Norway there are about 100 different ways to apply for subsidizes for. From region to region that varies. In one region you may have 50, in another region 70. This application is the same application used all around Norway. It behaves differently according to where your farm is. According to which region your farm is located in, you get different choices for applications. Here you have applied for grass decked one way. Everybody knows what that is. That's a kind of subsidizing. You have many different choices there. Where you can apply for that kind of subsidizing is kind of difficult to know. Even for farmers and even for the government employee. It was a problem earlier because they are not actually asking if they can apply here or here or here. 
You can only apply at this area. The system creates a legal area for the farmers to draw on. If you actually made a road like here, he draws the road. If he tries to draw an outside, it's capped off. That helps the farmers to make correct applications. It saves the community a lot of work to control the application afterwards. Of course you can edit the geometries and delete them or do whatever you want with them. That until a certain date because after a certain date the application is closed and the farmers get their money back. Now back to 2-10-12. We had an old static HTML client and they wanted that fixed. They wanted a more dynamic and faster and more functionality in that client. We had the requirements specification ready. We had 4 months of development time and we had about 4-5 programmers available. Some of them are sitting here. Software investments had to be clarified early in the project. Since the short time frame we had to work on client and server in parallel. At SkogelandSkup where I work, there is a use open source if that's available. But now we're going to turn this situation around. Let's say use only commercial software. Let's see what will happen with the project. We also wanted to install by only certified engineers. So we wouldn't take any risks on anything. The workflow would look like something like this. First we had to use some time to figure out what kind of software to use. Then we had to wait for price quotas from the vendors. The price varies from case to case. It's quite impossible to tell the price before you have the correct runtime environment you're going to run on. We had to wait for internal discussions regarding prices. You really mean that you need 100,000 bucks to buy this? In our organization, that would be impossible. Then you need handling licensing misunderstandings. You mean you run this on VMware which has 64 CPUs? Then you have to pay for 64 CPUs even if you only use two CPUs on your host OS. Then we had to wait for the engineers to install the software. Then we could just start the code. While you know what the result would be, it would be failed. One reason the coding would be laid by weeks and another reason it wouldn't be money for it. We would just continue to run the old application with no changes and that would continue to run until it wasn't possible to use it anymore. Another reason it would have been too expensive. We had to rob a bank to get money for that software we needed for that project. There's a silver lining which means a hidden benefit here. As a developer, I wouldn't be blamed for it, the project failure. Because I was only waiting on external resources, money, so it wasn't my fault. I would have a nice, quiet spring and walk around. Some might say that is bad planning. You should know that you needed 1 million bucks this spring to software. But that's the opportunities you get when you work with open source. If you should stop in 2011 with the project you worked on, then we had to get a team together, start to plan this project and find out what kind of software we needed, what kind of protocols we needed. That would mean an interruption in the current working project. But with open source, we can do it like this. Here's your project. Hopefully you find open source that you can use and then we don't have to take all the budget discussions that they are already taken or we have the amount of money we need because we don't need so much money. Let's do a comparison. Yes. 
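Returning briefly to the drawing behaviour described at the start of this part, where a farmer's drawn geometry is kept only where it overlaps the precomputed legal area: the core operation is a geometry intersection, sketched here with the JTS library. The coordinates are invented, and the talk does not say exactly where in the stack this clipping happens, so treat this as an illustration rather than the application's actual code.

```java
import com.vividsolutions.jts.geom.Geometry;
import com.vividsolutions.jts.io.WKTReader;

public class ClipSketch {
    public static void main(String[] args) throws Exception {
        WKTReader wkt = new WKTReader();
        // "Legal" application area precomputed for this farm (hypothetical coordinates).
        Geometry legalArea = wkt.read("POLYGON((0 0, 100 0, 100 100, 0 100, 0 0))");
        // What the farmer actually drew, partly outside the legal area.
        Geometry drawn = wkt.read("POLYGON((80 80, 140 80, 140 140, 80 140, 80 80))");
        // The part outside the legal area is "capped off" at the border.
        Geometry kept = drawn.intersection(legalArea);
        System.out.println(kept); // roughly the 80,80 to 100,100 square
    }
}
```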
So we did go for open source, and these are some of the server side components we used. We use CentOS on all the servers. We used PostGIS, Postgres, GeoServer and MapServer, Java Spring and Hibernate Spatial. Hibernate Spatial is not very often mentioned, but it's a very good tool and it works very well. If you work on, say, 10,000 different tables and you never work on the same table twice, Hibernate Spatial is not the right tool or package. But if you work on a small set of tables and you know you are going to do a lot of work on them, Hibernate Spatial is quite nice. It works very well. And another thing with Hibernate is that a lot of consultants know Hibernate. So if you mention GeoTools to consultants, they don't really know about it, but they will know about Hibernate. The new GPI application was launched in June 2012 and the development was done Scrum-based. We used SVN and we used Maven. The Maven tool is very efficient, as you probably know; it solves a lot of problems, creates some, but solves most of them. And people were developing on Windows, Fedora, Ubuntu — that was no problem. So it works very well. To get the system up, we had a client interface ready in about two weeks, and that was done by using XSDs and Spring to generate web services. We store temporary data in Postgres using generic data tables, and that works okay because we are storing data that looks pretty much the same but is of different kinds. So we didn't have to create ten tables; we created one table, and we had metadata about what the data in it was. We could of course have used an in-memory database there, like MongoDB or whatever, or Berkeley DB, a hash table database. But the reason why we use Postgres for temporary data is that those data are going to be viewed in GeoServer and MapServer later, and it's much easier to connect GeoServer and MapServer to Postgres. We ran quite a bit of integration tests and stress tests — and stress tests are of course important. We actually ran many thousands of farms through, so we got the system well tested before it went to production. And since farms can be put together from many farms and many properties, we are talking about many hundreds of properties. And to compute ownership you have to check properties against properties, so it's quite a complicated check and it takes some time. So we have to use a lot of concurrency to get through it, or else we would have to wait for minutes. And of course you have to handle the geodata both in SQL and in Java. Let's do a comparison between commercial and open source. Here's the commercial solution, if you go for Microsoft or Oracle Enterprise, ArcGIS, WebLogic and TopLink; on the open source side, you know it. We go for this very simple solution to save money. It wouldn't work though, but it's simple. So we have two servers, one database, one application server and one WMS/WFS server. This would of course cost something. And here's the price, for open source at the bottom and commercial software on the top. There's quite a percentage difference there. And I can't guarantee the prices, because if you read the Esri price page it's like: hey, we can't say what the prices are because there are so many different factors. So it's quite impossible to put the price on the web; they actually have to see your servers and find out what the price is. And it's pretty much like that with many of the systems, if you want to find the price.
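To make the "one generic table plus metadata" idea for temporary data a bit more concrete, here is a hedged sketch. The table layout, column names, connection string and SRID below are all invented for illustration — the talk does not describe the real schema — but it shows why Postgres was convenient: GeoServer and MapServer can render such a table directly.

```python
# Sketch (assumed schema, not the project's real one) of storing temporary
# data of different kinds in one generic PostGIS table, with a "kind" column
# acting as metadata, instead of creating ten separate tables.
import json
import psycopg2

conn = psycopg2.connect("dbname=gpi user=gpi")  # hypothetical connection
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS temp_data (
            id         bigserial PRIMARY KEY,
            session_id text NOT NULL,             -- which user session owns the rows
            kind       text NOT NULL,             -- what the rows represent (metadata)
            attrs      jsonb,                     -- attribute payload, varies per kind
            geom       geometry(Geometry, 25833)  -- assumed SRID (UTM 33N)
        )""")
    # One temporary result; GeoServer/MapServer can later read this table
    # directly, which is why Postgres was preferred over an in-memory store.
    cur.execute(
        "INSERT INTO temp_data (session_id, kind, attrs, geom) "
        "VALUES (%s, %s, %s, ST_GeomFromText(%s, 25833))",
        ("abc123", "grouped_farm_area", json.dumps({"area_daa": 152.4}),
         "POLYGON((0 0,100 0,100 60,0 60,0 0))"))
```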
Okay, thanks. Yeah, we go on. There's also one difference between governmental and non-profit organizations and private organizations. Heavy load on private sites usually means more revenue, but for governmental and non-profit organizations more traffic does not really mean higher revenue. Of course, you may get some more hardware, but there's no automatic relation between traffic and revenue. See — was that the previous slide? Yeah. Okay, how can we say that open source increased stability and performance? Because we could scale up the system with no extra cost. If we had gone for the first solution, we would get to August, where the traffic is high — we were up to more than 500,000 requests in total — and we would have to ask for more money and get an engineer to install the software. It would be a failure. So it was not difficult to scale up the system from the beginning, because it didn't cost us anything. It's easy to install: just take the servers, put them up, and we are ready for the traffic in August. Like, if you buy an extra Oracle license for 64 CPUs, you hope you will actually use it. We also did a lot of horizontal scaling. It's easy on this system because of the nature of the data and the usage. Here's the system as it is now. There are some more servers, but we won't go into details here. There's one more here. Another thing with open source is that it makes it easy to add new functionality. If you buy a system, you cannot just buy an enterprise-sized system with everything in it; we have to add new functions all the time, and that costs money. But here it's usually very easy to just find a plugin, dump it into your system and you are up and running. Like, we needed JSON support. It was very easy: we just took the Jackson package from FasterXML, ran it on the Java objects, and we suddenly had JSON. It was very easy. For the integration with MinID we used OpenOn. Yes, these functions were added after the GPI project. These are the different servers in the system and the interfaces. One important thing in this slide is standards. Without standards it would be impossible to do this. So the importance of the open source community supporting standards is like: you should do more, you should do more. Because if proprietary software uses its own standards, there's no way to get in there. Like, this is written in LibreOffice — try to open it in Microsoft Office and, well, it looks like hell. So you have to support standards. Are there any drawbacks with open source? Yes, there are of course some drawbacks, but from my point of view there are no real drawbacks. You can say that with commercial software you can call a guy and say: hey, I bought some software from you, it doesn't work. Usually the answer is: it isn't my fault, it's a fault with your disk or your network, or you used the software wrong. But the farmer at the end doesn't really care if you run open source or commercial software, or if it's a programming fault on your side. The cause of the fault doesn't matter to the end customer. So if you see it from the corporate or organizational level, it's not important. You have to take some chances yourself — there is nobody to call and blame — but usually there is no problem. I see much fewer faults in open source, and of course you have to test it. That's part of your job, to test it.
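The JSON point above — drop Jackson onto the Java objects and you suddenly have JSON — is easy to mirror in any language. This is only a rough Python analogue of that idea, turning hypothetical feature records into GeoJSON; it is not the project's actual code.

```python
# Rough analogue of "add a JSON library and you have JSON": serialise simple
# feature records as a GeoJSON FeatureCollection. Field names are invented.
import json

def to_geojson(features):
    """features: iterable of (id, properties_dict, geometry_dict) tuples."""
    return json.dumps({
        "type": "FeatureCollection",
        "features": [
            {"type": "Feature", "id": fid, "properties": props, "geometry": geom}
            for fid, props, geom in features
        ],
    })

print(to_geojson([
    (1, {"kind": "grass_covered_waterway"},
     {"type": "Point", "coordinates": [10.75, 59.91]}),
]))
```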
And some say that you may get a refund, but as far as I know it's very seldom that you get a refund from Oracle because there is a bug in Oracle. And you have paid so much money already that you will never get back the amount you paid in. And who will help you fix any problems? There are plenty of people out there who will help you, and that works very well. We have contributed some ourselves, but there is also almost always an answer. And I've very seldom been the first person to have an error; usually there is a person before me that had the error, and Google finds it for me. We've been through these points — hopefully you agree on them. Open source helped us a lot on all of those points and made it possible to develop these applications. Thanks. You mentioned that you're running Postgres and PostGIS on CentOS. We have done that in our own organization, and it could be difficult to set up. Was that your experience? No, it works without any problems. Easy to install. Yeah, sure.
We present GPI, the primary application used to view and maintain information and geo-data of farms, farm geometries and farm properties throughout Norway - enabling farmers to create and update some farm data records themselves directly.
10.5446/15525 (DOI)
Until we reach the top, and then at last, at the top, everybody now and then is being served again as the base of the food chain. So let's go back to the first picture of the ecosystem, and I want to try to map the FOSS4G community on top of that picture. So first of all I detected we have two stakeholders, two living organisms, which are developers of course, because that's what we all are, and we have users, using the things the developers are creating. But there's more than that. You also have researchers, because open source is about vision, about the future, and we do research about technology. And researchers are working together with developers, because researchers have an algorithm and the developers are producing the most efficient implementation of it. And then the stuff is being used by solution providers, who are presenting the solutions to the users. And we have a lot of stuff in the open source community that can be reused, so we have integration providers, integrating all this nice reusable stuff into working things that users can consume. So this is a first view — it's more complex than that, but that's enough for today. The question I want to ask here is: how does that come about? What are the driving forces behind that mechanism? And this is what I investigated a little bit. So the first thing I did: there is a very nice presentation given by Just van den Broecke, how to get rich and save the planet with open source. This was a very intriguing title and I wanted to know a little bit about that. So there we go. When he prepared his presentation his friends were already saying: okay, creating software, giving it away for free, how is that possible? They didn't understand. So he gave a presentation out of that, and I tried to build further on that presentation. So what I did was look at how others do business. And I searched on the internet for the business model of Esri, and there is a clear statement about Esri: the core business is creating and distributing proprietary GIS. Leaving out "proprietary", this is what we all do, isn't it? And then they have sales, to sell products using their own sales staff. They have support. Up to there I can follow everything. And then there are some strange things. They have a geographical strategy. They have direct sales through account managers, business partners, technical sales engineers in the US. And then outside of the US, in the rest of the world, they have international distributors and they have plenty of business partners. Okay, strange. I don't understand that part, but the first part I understand. So the next thing I tried was to map that business model onto the ecosystem. So they talk about users, and then on top of the users you have the authorized business partners, the international distributors, the sales staff, and then you have Jack. Okay, so back to the question: how to get rich and save the planet with open source. Yeah, before we can delve into that, I want to tell some things about open source business models. This is a graphic design from Cascados, a European project. I'm not going to tell you all about that; I'm just going to summarize, for those who know, how to do business with open source. So this is the value model of a software chain, and there are different models that we can use. You have dual licensing, which you can use if you have control over the development part.
If you are the owner of the software, then you can make it open as open source and you can sell it as proprietary software — not as proprietary software, sorry, as licensed software. Big difference. Another model is support selling. Most of us are doing support selling: you are an expert in one or another technology and you specialize in training or in support. You don't need to be bothered about ownership or whatever. Another model is being a platform provider. You can be a platform provider, where you concentrate on packaging. You make sure that all the objects that you use, all the things from the open source community, work together with the right releases into products that can be reused. It helps if you are in control of the development also. Another model is consulting — I'm going to go a little bit faster here because this is not the purpose — and accessorizing: you have all the t-shirts of FOSS4G, this is accessorizing. And then at last you have software as a service. You can provide services using open source, and then it also helps if you have control over the development. So having said that, we can start with the real topic of this presentation. And there I have an observation: some people get rich. Some examples: you have Bill Gates, Larry Ellison, Jack Dangermond. Other people were rich, like Steve Jobs — I'll come back to that later. What do they have in common? Well, back to the business model of Esri. Okay. One thing that is particular is that they sell products and do sales and marketing. Sell products. Now I'm talking to the people of the open source community, of course. Open source is about libraries. We are building tools. We are creating things. There was no company before Linux was created; GeoTools was created before there was a company; Geomajas was created before there was a company. So selling products is hard for people from the OSGeo community. And well, we sell hours. We are consultants. We are experts. And that's great, but it's not sustainable enough. We should do more. And I think in the open source community we also need recurrent funding, like the people who become rich have. So a second point — I already mentioned it — is proprietary. Being proprietary gives these vendors a very strong mechanism to protect the software. Okay. And there are a lot of companies in the software business protecting their software. There is even an organization called BSA, the Business Software Alliance. And this is the list of companies who are a member of that organization. And they come together to protect, to stand up for their rights. But they are very interesting: they produce documents and they write interesting things, many more than what I indicate here. For instance: properly licensed software has a positive impact on national economic activity that is more than three times the impact of pirated software. And so on. So, properly licensed software. They don't write proprietary software. No, properly licensed software. So, a couple of comments on that. OSGeo software is properly licensed, because that's what OSGeo is for. You only get an OSGeo project if your license is correct, if every contributor is known, if you know what the source of the software is. So it's properly licensed. So another thing that BSA states — I didn't mention it, but it's on the previous slide — is that physically protecting software, by hardware dongles and whatever, is a very expensive way of protecting your software. So one of the benefits of that industry is maybe the value in the protection area.
But I think we can better focus on new features, on better software, on other things, rather than on physically protecting things. On the other hand, I think that protection, even in open source environments, is necessary. But we shouldn't bother about physical protection; I think that pure legal protection should be sufficient. Because it's about respect: if we have a good legal license, then others should respect it and pay you when they do not agree to the open source license schemes. And why is protection important? Well, that's about entrepreneurship. I think we are all coming here because we are looking at open source software, we are committing to open source software. But it should be fun, it should be challenging. Open source software is innovative; it's, in many ways, creating new innovations. It's the basis of the future, of saving the world. There are different projects going on around the environmental challenges that we have to cover, and it's very hard work. And I think that if we want more entrepreneurs entering into that market, then protection is the key. We should have a way to protect their ideas, to claim respect if others don't respect them, to be able to compete with the closed source proprietary business that's out there. How can we do that? Well, I think we should work together. A very nice picture from the film Nemo, where all the little fishes form a very big fish. I think that's something that we have to work on. And that's why we are here: to talk to each other, to form alliances, to have better projects, to not reinvent the wheel again, to work together, and to try to bring all the software pieces together into better software. And we should do that in balance. I think that we should find a balance between protection and flexibility. One of the big advantages of open source software is its flexibility, its interoperability. So I think that we should build in protection mechanisms so that we can create sustainable systems that can save the planet. It was already mentioned by the previous speaker: I think IPR is a protection that is very strong, and that should be sufficient. When I look at systems like Microsoft or whatever, they have control over your hardware from the operating system to the end user product, to the office application. We talk about free and open source software — free in the sense of freedom, being in control of your hardware, being in control of your algorithms. I think this F for freedom is very important. I think IPR as a protection should be sufficient to create that, so that new open software can be... Okay. So the idea is that it can be interesting to have closed source systems on top of open source software stacks, giving new ventures the opportunity to protect themselves for a certain amount of time. Not like Microsoft, not like Esri, which have 40 years of protection so that you almost can't beat them anymore. We can if we work together — one company can't do that, but together we can do that. But we may not allow the same mistake to be made again. Protect the upper layer, and when that solution becomes strong enough — when the protection has been interesting enough that they have created a new venture — others can make it open source again and you can create a new layer on top of it. So that's the whole idea. Okay. So in the last five minutes, I want to use this as an example of how the Geomajas community has done that and is continuing to do that. So back to the first picture: the Geomajas community as an ecosystem.
Well, we've... okay, these are all the stakeholders I mentioned before, and then we created the commercial organization Geosparc, which is here at booth S1 and which is responsible for the commercialization of things coming out of the Geomajas community — but, very importantly, working together also with other communities. Okay. One use case: a researcher has a very good new idea. You should protect him, so he should be able to protect his IPR. What we do at Geosparc is: if he uses software that we own, that we can license, or that we can license through partnerships with other communities, this researcher can create a protected algorithm inside a closed source solution. With that solution, he can valorize his idea in the market by going to a solution partner. The solution partner can use that value, but if he wants to use that value on top of open source, well, he also has to protect the IP of his additions, and therefore he should pay a commercial license to Geosparc in this case. Having that protection, the solution partner can sell his solution to users and sell it to integration partners who use it in other solutions. And this gives the solution partner the ability to grow — not becoming an Esri, not becoming a Microsoft, but enough to invest in that venture. Integration partners can go to users creating integrated solutions. They need experts. Well, a second thing that we do at Geosparc is providing the experts. We are not providing them all by ourselves — for that we are too small as a company — but we are contracting developers from the community, paying them and reselling them to the integration partners, so that we have a working community. And then the integration partners are creating solid, sustainable systems, and they need support. So we provide support level agreements to the end users, for which they pay Geosparc — oops — and Geosparc uses and reinvests a lot of that money back into the community. So this is one use case, one example of how you can create a sustainable business ecosystem with open source. And to conclude, three important things: we should sell products; protection is important, through licenses; and we should bundle commercial effort to have a bigger fist against the proprietary industry of today. Thank you for your attention. Thank you. Okay. So the question is: you started at research, what should a researcher do? Well, it depends on what research you do, but much research also involves algorithms, and you have two possibilities: you can publish everything and make it open. But one of the key points is you have to valorize your results, and if you want to have the possibility to valorize your results, it's not always good to make everything open. You should protect some things. If you want to protect some things, you need proprietary software to do that. We provide developer licenses. So that you... I think I understand afterwards, but I don't get the... You can publish the content of your research, but you are not obliged to publish an algorithm. And many times you also create an algorithm to prove what you did. So you can publish the results of your PhD and then you can hire a developer, because it's always a project. But you don't know in advance as a researcher whether that can... No, that... It's difficult to predict. That's why... So we should protect everything. And that's why a developer license is really very cheap. But it gives you the opportunity not to open the software...
In many cases you use open source software, but there are two types of licenses: you have licenses like Apache and BSD, where you can do whatever you want, and you also have other licenses like AGPL or GPL, where you have more possibilities to create business. What we want to do is to use GPL licenses so that we can create the business, so that we can earn money, so that we can reinvest in LGPL projects. And for instance, for a researcher, we want to provide very cheap developer licenses so that you can protect, for as long as needed, the things that you have created. And then you have the choice: either you turn it into a commercial project and it can migrate to another level, or you say, okay, it was an idea, it was a PhD, but it has no commercial value up to today, and then you are obliged to give it back to the community. Or you would have to keep paying that low license fee, but you don't want to do that, so you give it back to the community. So that's the whole idea. Is this an answer to your question? Okay. Okay. Okay. Thank you. That's a big thank you, guys.
How to make money with free and open source software ... that's the question! Often the easy answer is “by delivering services to the clients using the software”. A more nuanced answer could be: “to be open in your business model, to cooperate with other FOSS project communities and to provide a sustainable service offer with quality assurance to the clients”. Dirk Frigne, co-founder of Geosparc and spiritual father of Geomajas will share his experience with open source adepts and business people interested in starting doing business in an open and transparent way.
10.5446/15524 (DOI)
Thanks very much, Yost. Well, thanks very much again for coming this afternoon. As you said, my name is Pascal Coulon, from a company called SCISYS. We are a British-based company, a system integrator, and really our drive is to integrate GI or geospatial solutions within business or enterprise solutions. So really, today I'm going to try to take you through the various steps that at SCISYS we follow to try to design and implement disconnected GI mobile on an open source platform — or actually, probably more an open architecture. And really I should start by addressing what the main drivers and considerations were for us to try to move towards an open architecture. Well, the first aim really was, like for all of us over the last few years, that we need to really reduce cost. But it's not just a question of reducing cost for the implementation of the solutions; it's also in terms of licensing, support, data, all these key elements of the solutions. As you probably know, in Europe and particularly in the UK we've got a strong emphasis from the governments to push us towards the use of open data and open source solutions. So that was a key driver for us. We also needed to look at interoperability. But not just at the level of data: we needed to start considering interoperability at the level of the technology, the software, the hardware. What can we do to essentially future proof the solutions and ultimately try to reduce the risk of being vendor locked-in? So I like to take a bit of an analogy when I talk about an open architecture. Yes, as Wikipedia will tell you, it's a set of computer software which is there to help you to quickly swap components. But when you compare that with LEGO — and I think it's a reasonably good analogy — you end up being able to swap components, and from those little blocks being able to build pretty much anything. Because essentially, it's a bit of an open format: you're able to change the component easily, from a red block to a green block, from a small to a large block. And the key thing is that none of those blocks has got any business logic in it. And that's one of the key elements that we had to really govern as we designed and implemented the solution: remove any business logic from the client or data tiers. The business logic should sit in the middle. And combining an open format with decoupled components — that's what drives the interoperability of your components, and ultimately what will help you to reuse a number of components across your solutions and therefore drive the cost down as you implement them. What I'll try to do today, over the next few slides, is take you through five key points to try to increase the openness of your architecture. And each time we go through one of these key governance points, we'll get one star, and so on, and see whether or not it is practical to put them in place and what the pros and cons are of implementing those values, governance and rules. So the first one is open source. Yes, I'm sure we're all sold on the idea of using open source, particularly at FOSS4G. But we still have to sell open source to our clients, and it's sometimes quite difficult. But I think in our case it was quite obvious that open source was a strong contender. It's really well suited to being deployed on large numbers of devices. We are in the GI industry, and the GI industry has benefited from open source products for a long time. And with open source we are very good at following open standards.
So that's, again, another element which, glued together, really gives us a strong way to future proof the solutions. But again, when it comes to open source, yes, we've got the ability to really understand the level of security, because we've got access to the source. And it is proven, reliable technology: any major defect will be fixed very quickly by the community. And it goes without saying as well that now, across the UK and Europe more widely, there are more and more recommendations — direct recommendations from the government, like I mentioned here, where they actually directly tell us: well, now go for this type of open source product. But is that really just a clear answer? Maybe not. We need to be a bit mindful. And what I always use as an example when I'm saying, yes, open source is a very credible product: we've got two of the key mapping agencies across Europe, IGN in France and Ordnance Survey in the UK, who are not only actively using open source products, but are actually funding the development of those products to make them, for instance, INSPIRE compliant across Europe. So there is clear evidence here that, yes, open source is the way forward and should strongly be considered. Interoperability — that's going to give you two stars. And this is all about how you are going to glue the various components together. We've got all those Lego blocks, again, that are used as an analogy to start with. And this is very much what will help you to disseminate the data between the various components of your solutions. Because we're going to be using WMS and WMTS, we are fully able to actually swap between GeoServer, MapServer, OpenLayers, Leaflet. That's the beauty of it. We are not constrained to stay with one product, as one might actually disappear — because open source doesn't mean that it's going to stay around forever. We can swap to a more up to date technology. Now, there's a lot of debate around, particularly, obviously, in the mobile industry: should we go for native or non-native, the like of HTML5 technology? In our instance, I think HTML5 gave us a strong head start and really helped us to implement solutions which could be easily deployed on multiple types of device — that means different operating systems, obviously, as well as different screen sizes, with responsive design. Now, again, HTML5 brings us additional advantages, particularly in the GI industry. We've got the ability to do caching; a small amount of data can be cached and it can still be beneficial. We've got the geolocation API. So all these elements combined bring us a lot of advantages. And it goes without saying as well that there are a number of JavaScript APIs — we've, I think, got a few presentations afterwards which highlight that fact — which are really well integrated with HTML5 and give you rapid development with good end user components, helping you to drive very nice design. Optimised storage — that has been one which caused us a lot of challenge. We had to be really mindful, obviously, of the fact that we were in a disconnected environment. Our end users are not working in urban areas; they're actually working mostly in remote areas where there is very poor connectivity. I was looking this morning at the sort of combined mobile coverage map of the UK, and actually, in most parts of the UK there is very limited or poor 3G coverage, even nowadays. And that's combined with the fact that we are looking at GI applications — those types of applications are hungry for bandwidth.
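The interoperability point above is worth making concrete: because the client only speaks standard OGC WMS, GeoServer and MapServer are interchangeable behind the same request. A minimal sketch, with made-up endpoints and layer names:

```python
# The client builds a plain WMS 1.3.0 GetMap request; which server answers it
# (GeoServer, MapServer, ...) is an implementation detail that can be swapped.
from urllib.parse import urlencode

def getmap_url(base_url, layer, bbox, width, height, crs="EPSG:27700"):
    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "LAYERS": layer, "STYLES": "",
        "CRS": crs, "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width, "HEIGHT": height, "FORMAT": "image/png",
    }
    return base_url + "?" + urlencode(params)

# Same client code, two interchangeable back ends (hypothetical URLs):
print(getmap_url("http://localhost:8080/geoserver/wms", "os:mastermap",
                 (440000, 330000, 441000, 331000), 512, 512))
print(getmap_url("http://localhost/cgi-bin/mapserv", "mastermap",
                 (440000, 330000, 441000, 331000), 512, 512))
```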
So we really need to enable the end user with all the data on the device. And I'm including in that the like of what we call OS MasterMap in the UK, this really fine detailed mapping, which can take a large amount of data — and we only have a few gigabytes of space on the device. We also need to start thinking about the business data, which in our case was millions of records that needed to be displayed — as the client would really want, with the right styling — but which also needed full access to all the attributes so they could be queried. So really, the key story here is that you need to engage with your stakeholders very early, and you need to understand what the priorities are. Do you really want to display MasterMap with the full colour coding, with the topographic area as we call it, or would just the topographic lines be sufficient? Are you looking at generalising the data? All those questions need to be asked very early on with your end users. You want to make sure that you engage with your end users, otherwise there is a high risk that they will not start using the application as it goes live. You also need to look at interoperable data formats, the like of ST_Geometry and so on, so that you've got the ability to swap, again, the server components used for serving the data to the rest of the application. Now, one of the key challenges we had was the provision of the base mapping. We decided to use the like of TileCache. TileCache is great: it's an open format and very efficient, because you don't even need any sort of server layer to provide the data — you can access it directly through a web-accessible folder. Now, there is a bit of a downside to it. Yes, the smaller the tile is, the quicker it is to provide the data, but actually the more space it takes on your hard disk, because at the end of the day those little tiles, those little files, will go under the threshold of the block size on your hard disk. So you need to be very mindful of performance versus storage space. And we found out, actually, particularly when trying to cache the like of MasterMap data, especially in rural areas where there's very little information, that 512 by 512 pixels was the best for optimal performance but also for storage. Now, really, the next one is all about user interface. That is a key element, again, when designing a mobile application. You need to be very mindful of where the user is going to access the application — are they in more extreme environments, or in other areas? So you need to make it simple and decluttered, with large icons and lots of user feedback. And indeed, again, that goes along with the idea of reducing the cost: you need to really embrace the concept of responsive design, which goes well with HTML5 indeed. So let's have a look at the solution building blocks. We were fortunate enough that the first device we got our hands on was a Windows 7 tablet. Those tablets are actually very powerful: dual CPUs, a large amount of RAM. So we decided to go for something which might sound a bit audacious: we decided to store the data in Postgres with PostGIS, which gave us a strong way of managing the data and loading it in a convenient way. Serving the data was actually done by GeoServer, all loaded onto the local device. One thing to consider — lots of people are thinking, wow, GeoServer, really, on a mobile? But yes, why not? You've only got one user. And those tablets are very powerful.
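Coming back to the tile-size trade-off above, here is a rough back-of-envelope of why fewer, larger tiles can win on disk even though small tiles are quicker to serve: every tile file occupies whole filesystem blocks. All the numbers below are illustrative assumptions, not measurements from the project.

```python
# Back-of-envelope only: compare tile counts and on-disk size for 256px vs
# 512px tiles. Extent, block size and average PNG size are assumed values.
import math

map_px_x, map_px_y = 200_000, 200_000   # hypothetical cached extent at one zoom level
block = 4096                            # typical filesystem block size (bytes)
avg_png_256 = 6_000                     # assumed average compressed 256px tile

for tile_px in (256, 512):
    n = math.ceil(map_px_x / tile_px) * math.ceil(map_px_y / tile_px)
    avg_png = avg_png_256 * (tile_px / 256) ** 2   # size roughly scales with area
    blocks_per_tile = math.ceil(avg_png / block)   # each file pays whole blocks
    on_disk_gb = n * blocks_per_tile * block / 1e9
    print(f"{tile_px}px tiles: {n:,} files, ~{on_disk_gb:.1f} GB on disk")
```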
And I can guarantee you that we're capable of serving several hundred thousand features within sub-seconds to the end users. So this is a credible solution. But again, we wanted to remain as open as possible, so that if GeoServer was not performing to the right level we could actually consider alternative options. So all of the data would be served through WMS, WFS, WMTS. And again, the usual suspects really in terms of implementing the client application: we're looking at the like of OpenLayers and jQuery Mobile, very convenient types of product, allowing us to quickly implement the solution, integrate it into existing forms or even integrate with the local GPS on board the device. But the key thing, as I mentioned, is that we've got the ability to actually swap Postgres with SQLite. There's no problem; you can do that, and that's still going to work, and you don't have to re-engineer the whole solution. Maybe you would rather have Leaflet rather than OpenLayers, in combination with MapServer — yes, you can do that as well. Now, as I suggested early on, yes, open source is great, but you need to be mindful that it's not always possible to go full open source in an organization. So you need to think about the touch points with other commercial products, which is actually one of the challenges we had here. We needed to enable touch points with the like of a full-on Esri enterprise architecture. But we also needed to make it simple for the end user to upgrade the base mapping as well as their business data. With just a simple click, they needed to have the data downloaded and dispatched. So that was actually enabled with the like of TileCache, indeed, again, to generate the base mapping. And GDAL/OGR has proven a really good, strong ally for us in terms of synchronising the data with our central data store. But again, because we decoupled all the components in the solution, we have the ability to actually queue the various data captured on site and synchronise it at our convenience, using open components. So we've learned a lot of lessons, and the key one is around open source. It was new to our client to start using open source — you've understood, they were primarily using Esri. But yes, open source is a great way to reduce licensing: you don't have to pay anything when you deploy it across your mobile devices, and there is no maintenance cost as such, as you get the patches whenever you want and you apply them whenever you want. However — and this is a strong message, a very valuable one to actually demonstrate the power of open source — you don't have to be alone with open source. There are commercial solutions to support it, and that was a key enabler to demonstrate the power of open source. You've understood that the map cache was something we had to work quite hard on, and it's really about trading off performance versus storage size. But the use of a map cache should not just be considered for the base mapping — that's what we discovered as we started to performance test the application. Actually, for reasonably static business data, don't hesitate to cache it as well. That will obviously dramatically improve the performance of the application, particularly in the case where you've got end users who keep using the application in the same location time after time. They might be surveying water meters in a street for days and days, and the data will be there for them.
It will be fetched once and be available for the duration of their survey. So it's a great way to really improve the performance of the application. And it goes without saying that, because we use open GIS formats both at the level of storing the data and serving the data, we've really enabled openness in the architecture. And yes, HTML5 — there's a lot of debate. We've seen the big players like Facebook and LinkedIn all swap to native platforms. But we still believe that HTML5 was a key enabler for us to quickly implement the solution, particularly on the client side, enabling us to deploy it on tablet PCs, Windows tablets, iPads, and so on. However, we're mindful that if we start needing more low-level functionality, we would probably need to start considering a more native implementation. So thanks very much again for coming this afternoon. If you've got any questions, please do ask. Otherwise, you've got my contact details, and feel free to contact me at a later stage for any discussions or questions. OK. OK. Thank you very much, Pascal, for reminding us of the usefulness of open source and some of the issues with mobile development. And we have time for at least two questions — this gentleman here. For an average performing current day phone, what kind of performance are you getting? Are you quite satisfied with it? Yeah, very much so. I mean, we're able to zoom from UK level all the way down to MasterMap with the same performance as if you had Google Maps in front of you. Essentially, you pinch the screen and you just zoom, zoom, zoom all the way down to MasterMap. And even MasterMap at the level of, let's say, Nottingham at a fairly detailed scale, I'll get about two seconds, three seconds max to get the data. And we've got large amounts of point data — 170,000 points, thinking about it — and it takes us about 5 to 10 seconds to get national coverage coming back in one go. So yeah, GeoServer was a bit of a scary thought at first, but actually it does work for one user, because those tablets are these days very powerful. So we should not discard it. And are you doing any feature editing, or considering doing that? We do, yes. We've got the ability to scribble. But the key thing, again, for us: even if it is connected at some point for synchronising, we are decoupling everything. So every single feature captured is stored as the like of GeoJSON, and then used by a service-oriented architecture on our central infrastructure, enabling us to do long transactions or versioning and so on. So actually, we have some time for more questions. Just a gentleman over here. Yeah, a couple of questions. Have you actually got a tablet that you could show us? I mean, not now, but the sort of thing. Yes, I could show you something, probably, yes, on an iPad later on. I could probably show you something. On an iPad? But I thought you said it was a Windows 8 tablet. Yes, but we've already ported the front end onto an iPad. Oh, OK. Because I was particularly interested in the combination of technology. Yeah, it was. I could come and talk to you later on, and I'll show you something. What was the deployment like? I'd imagine that would be pretty complex, putting the whole stack on and rolling out the software. You're right, it could have been challenging, but we've got a very good build manager back at base who's done a fantastic job, and he's put everything into a simple executable.
So the ICT guys just run a simple Windows installation — install it, and it then deploys Chrome automatically, because it works primarily on Chrome. And the whole of GeoServer, Postgres and the data is already there automatically. So it took a bit of effort, but it is simplified, yes. Just one question or comment. You talked about the issue with the tiles and not using space efficiently. I wondered if you looked at the MBTiles format, which sort of gets over that to some degree. There's a slight performance overhead, but... Yes, I did look into this, and I think at the time — I might need a bit of a discussion with you — I did not find the right way to serve it into the like of OpenLayers. I wanted to really bypass the like of GeoServer and so on for serving my base mapping; I really wanted direct access. Yeah, I think the tiles bit is something important. OK, that's useful. Nothing. I mean, one key thing to note is that this was thought out and designed around about a year ago. And one key observation I've had over the last three days, with the AGI conference and now FOSS4G: there is a massive evolution of technology. We're now looking at querying data directly from HTML5 to the backend database. So yeah, I think there's a leap forward. The whole architecture probably needs a rethink, but there are some strong ideas here. Yeah, we go on with questions. The next session will start at 3. Hi, Pascal. I'm from the Environment Agency. You mentioned the iPad port, which is of great interest. But what kind of software stack did you use to do things like feature data collection on an iPad? Because running GeoServer on the iPad ain't going to work very well. No, we're looking at the front end particularly, which is using the like of OpenLayers. And the key thing for us: because we have the ability to connect to a backend infrastructure, everything is decoupled. So we are storing the data locally in an open format, the like of GeoJSON, whatever really is convenient for OpenLayers. And then that information is queued back into our internal infrastructure and ultimately committed to the central database. Next. So we have time for one question. You were speaking, for performance, about size, and I was wondering how you deal with latency on mobile. I know that you were speaking about the storage, but my main concern is about the latency you have on the network on mobile. How do you deal with this, or how do you evaluate this use case? Well, the first thing to note — you mentioned the network — is that we are in a fully disconnected environment, so we don't have any latency on that front. And in terms of technology, in terms of storing the data, we've used solid state storage, as with any modern tablet. OK, thanks again, Pascal.
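As a footnote to the GDAL/OGR synchronisation and the capture-and-queue approach described above, here is a hedged sketch of what pushing locally captured features back to a central PostGIS store can look like with ogr2ogr. Paths, connection details, layer and table names are all invented; the talk does not describe the actual commands used.

```python
# Sketch: append locally captured features (e.g. from an on-device SQLite /
# SpatiaLite database) into a central PostGIS staging table via ogr2ogr.
import subprocess

def sync_captures(local_db, pg_conn, layer):
    subprocess.run(
        [
            "ogr2ogr",
            "-f", "PostgreSQL", f"PG:{pg_conn}",  # destination: central PostGIS
            local_db,                              # source: on-device database
            layer,                                 # layer to push
            "-nln", f"staging.{layer}",            # load into a staging table
            "-append",                             # queueing model: append, reconcile later
        ],
        check=True,
    )

sync_captures("/data/field_captures.sqlite",
              "host=central dbname=assets user=sync", "water_meters")
```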
We present the challenges of building a disconnected geospatial mobile solution and devise five simple rules for the success of your app. This paper will look at the following key issues: Rule 1 - Data Storage. Streaming GI data requires good bandwidth; by implementing a caching mechanism the end-user will always have access to the data for a given area. Rule 2 - Use Open Source. Free and Open Source software for GIS has evolved significantly in recent years and in some cases faster than commercial alternatives. The mobile field is a bit different and few experts are using free and open source mobile GIS, despite the good tools that exist. Rule 3 - Use Open Standards. In combination with the use of Open Source products, Open Standards can help future proof the solution. Rule 4 - Simplify User Interfaces. The time of the stylus is gone and users now expect to use their finger for driving the application. Specific attention must be paid to designing simple and clear user interfaces. Rule 5 - Implement Non-native Solutions. Should separate solutions be developed for iPhone and Android? Could the answer be instead to actually develop non-native solutions, reducing development and maintenance costs. Armed with these rules we will look at the challenges on the road ahead to implementing your GI Mobile solution.
10.5446/15522 (DOI)
The fact is that we do have to pay for hosting or servers or support or maintenance. That's a bounce to 50% penetration. Doesn't use it for anything else — that's a massive lost opportunity. But it's out there, it's being used. Open source is working in government. Okay. So, if we go back to what this digital disruption stuff says, and we look at where the emerging technology cycles are, and we look at all these great ways to drive digital business, we start to do this thing called adjacency.
This is our tablet application using augmented reality, integrating open source terrain models, freely downloadable geographic data, positions and 3D models to create these visualisations that we see here. We have to rely on some hardware which doesn't really fall into the scope of this conference. But then on top of that we write our own applications. We use things called Mono, MonoTouch and MonoGame, which have open source components. We use continuous integration, we use hosted services, we use a whole bunch of open source components to build the actual application. But essentially the thing that enabled us to build this business was the provision of this Ordnance Survey open data. If it wasn't for the Terrain 50 data set, which is the Ordnance Survey providing national coverage of a 50 metre terrain model — it used to be called Panorama, and has just been renamed, or relaunched as a completely new product rather than just a rebranding, called Terrain 50 — we wouldn't be able to build this new business. We then build a whole bunch of stuff based on this nearly free data. We still have to use hosted services but they don't cost us very much. We can then implement open source software. Again we use MapGuide to do our map delivery because it's what we understand.
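As an aside on the Terrain 50 open data just mentioned: the tiles are distributed in GDAL-readable grid formats, so sampling elevations — the basic ingredient for line-of-sight and visual-impact work — takes very little code. A minimal sketch, with an invented file name and viewpoint:

```python
# Read an elevation out of a (hypothetical) Terrain 50 tile with GDAL.
from osgeo import gdal

ds = gdal.Open("terrain50_nt27.asc")      # invented tile name
band = ds.GetRasterBand(1)
gt = ds.GetGeoTransform()                  # origin and pixel size

def elevation_at(easting, northing):
    px = int((easting - gt[0]) / gt[1])    # column from easting
    py = int((northing - gt[3]) / gt[5])   # row from northing (gt[5] is negative)
    return band.ReadAsArray(px, py, 1, 1)[0, 0]

print(elevation_at(327500, 672500))        # hypothetical OSGB easting/northing
```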
This all comes together — take some clever people, put them in a room for about 18 months, and they come out with some solutions that really aren't possible given the barriers to entry of commercial software and commercial data. To finish, part of our core focus is really about providing better ways to enable a communication platform to go from the web to these devices. It's about mobilising our access to geographic information. Our citizen consumer isn't a map expert; they're just the person who wants to see what's relevant to them. It's not always about the map. You can have a dramatic change in the way people understand information, from a top-down map visualisation to a heads-up augmented reality visualisation, using exactly the same GI tools and techniques we use at the moment. I think that is me, which is pretty much on time. Thank you very much. We've got time for a couple of questions — anyone have one? Sorry, I just blasted you with information there. For the augmented reality, out in the wilds of Scotland, are there a lot of people with tablets and things like that? So where are we? The product that I showed you the visualisation from launched in July this year. It has started to be used on its first commercial applications, so no, there aren't — Scotland isn't full of clansmen with mobile tablets yet. AR is a technology waiting for a problem to solve. We think visualisation, and geospatial visualisation specifically, is one of these problems. I think it's a really good fit. The challenge was to use the right tools to solve that problem. Anyone else? OK, well, thank you. Thank you.
This paper introduces a new digital and in-field mobile solution for landscape visual impact analysis (VIA) with in-field mobile visualisation using GIality (the convergence of 3D models, sensors including location, and spatial data) to provide new and engaging, contextual and personal access to information. By taking planning data for spatial analysis off the map and into intuitive app-based mobile systems we will discuss how traditional plan-based representation is not always the best communication tool. Maps may remain a tool for experts and professionals but the future of GI representation is no longer limited by physical media. For public understanding – and the democratisation of data – we must understand and embrace new technology trends and opportunities in consumer devices. We will explain how, using modern technology drivers including devices such as mobile phones and tablet computers, combined with geospatial positioning, spatial data and services, GIality can bring a new dimension of democratisation and community engagement with planning & renewables data. Especially related to planning and renewable energy development, visual impact is one of the primary aspects in the consideration of acceptance under local and national guidance. This is most reported where the impact of wind turbines on the landscape has split political, environmental and consumer opinion. However the current mechanisms and procedures for visual impact assessment (VIA) are based on traditional printed off-site analysis which limits their context, scope and use. A new approach will be demonstrated with a case study in Scotland. The trends for mobile work and play, combined with integrated sensors and social coordination, provide the availability and accessibility of tools for both professionals and citizens to democratise and personalise data. The augmentation of as-planned models and geospatial data with device location, attitude and orientation allows individual places of residence, work and play to be equally fairly, rigorously and unambiguously assessed for visual impact and create cost-effective solutions.
10.5446/15520 (DOI)
Let's talk a little bit about the future, and I have some questions that I'd be interested in thoughts on. So first of all, modellers and model domains. Numerical modellers are very interested in their algorithms and their solvers, and they relate their models to the real world, but the relation is somewhat interesting. There's an old tale that I was told when I was a physics student about a physicist who was asked to help farmers in their dairy farms, and his first response was that they were going to consider a spherical cow; a spherical cow would have a constant input of grass, and that was the world, and then they could start modelling. So that's the kind of attitude that we work with. So this is an example of a limited area numerical model that we've plotted. This is plotted over a standard plate carrée projection, and you can see that the model domain is a slightly interesting shape. I've plotted each one of the cells individually on this model, and it's quite a coarse model. I'll come back to some more details on exactly this projection in just a moment. This is taking the world the other way around: in this example we're looking at the world on a plate carrée projection and then drawing the model data as best we can over the top of it. This is looking at the model domain and then plotting the coastlines relative to the model space. So these two are exactly the same data, just flipping the point of reference around. So while plotting is very important to the people who we work with, another thing that's very important to them is resampling. So some of this is to do with interpolation, and people are very interested in the interpolation schemes and what's being used, and this can have quite an effect on the kinds of modelling that they want to do. And then there are a number of different types of resampling which need to be addressed for some of the more esoteric problems. So the picture I've shown you here is a global model, and if we just zoom in over the UK there is a limited area model nested on top of this. So these domains are defined quite differently, and for this particular use case what we're looking at is the global model temperature and the limited area model temperature. The global model is a baseline, and people want to do an anomaly plot of the difference between the temperature in a particular run and some anomaly set across the globe. So we have to do a quite careful resampling of one data set onto the other before we can do the simple numbers, one minus the other, to do the anomalies. So that's a little bit about the kind of challenges we're facing. I'd like to talk now about the coordinate reference systems themselves. So the first one that we're looking at here is what's called a rotated pole. So they've taken a normal geographic coordinate system and then they have moved the north pole, and the reason they've done this is that on a plate carrée projection there are a very large number of squares near the north pole and near the south pole which means that they have a very, very small area, and that upsets the numerical solvers. You get sinks at the place where the cell area tends to zero and it really causes them problems. So they have to deal with it in their global models, but for the local models they move that to somewhere they don't care about and just look at the part where the grid is nice and square. So here we're looking at the UK and the North Atlantic.
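Returning to the resampling point above — putting the limited-area model field onto the global grid cell centres before differencing — here is a deliberately crude sketch using nearest neighbour in lon/lat space on synthetic data. Real anomaly workflows use far more careful (often area-weighted) interpolation; this only shows the shape of the problem.

```python
# Crude nearest-neighbour resampling of a limited-area model (LAM) field onto
# global grid cell centres, so the anomaly becomes a simple cell-wise subtraction.
# All arrays are synthetic; metric distortion in lon/lat space is ignored.
import numpy as np
from scipy.spatial import cKDTree

lam_lon = np.random.uniform(-15, 10, 5000)        # LAM cell-centre longitudes
lam_lat = np.random.uniform(45, 62, 5000)         # LAM cell-centre latitudes
lam_temp = 280 + np.random.randn(5000)            # LAM temperatures

glon, glat = np.meshgrid(np.arange(-15, 10, 0.5),
                         np.arange(45, 62, 0.5))  # global cell centres over the UK

tree = cKDTree(np.column_stack([lam_lon, lam_lat]))
_, idx = tree.query(np.column_stack([glon.ravel(), glat.ravel()]))
lam_on_global = lam_temp[idx].reshape(glon.shape)

global_temp = 281 + np.random.randn(*glon.shape)  # stand-in for the global baseline
anomaly = lam_on_global - global_temp             # now just one minus the other
```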
So everything is square around us and what happens in Central Asia doesn't matter because the solver isn't going to go there. So I picked up a little diagram just to try and show you what we've done. We've taken a geographic coordinate system as Frank was just talking about and then we've added two new parameters to it. One is the location of the north pole. We've done that in the old coordinate system. This one happens to be moved by 90 degrees but different models are moved by different amounts and then some people also do a further rotation of the globe so that they can shift the meridian or to give them a nice model space where they want it. So what I've just been looking at are what I would term parameterised coordinate systems. We've got a simple set of numbers. We can do a small transformation and everything can be handled by the maths. The other kind of problem that we have is what I've termed a translated coordinate reference system. I don't know if that's a widely used name but in this case these people have decided that the most important areas for them are over the Arctic Ocean and the various other oceans around the globe because they happen to be ocean modelers. So to do this they've created two north poles rather than just one. One over Canada and one over I suppose central Russia. It's probably a little bit south of that. But this presents us with quite an interesting domain to work with and as you can see there are areas where the domain simply isn't defined at all and there are areas where there's significant warping taking place. So in this case what we have is that each cell in the model domain is defined in terms of its centre and its corners. We also have the area of a cell parameterised. They give us some information about the connectivity, which cells connect to what and perhaps in what direction. But this doesn't give us a full picture. So one of my questions and one of the things that we've been puzzling over slightly is: is this really a coordinate reference system? Is it helpful for us to try and treat it as that? Because the kind of work that we want to do with these is very similar to what we do with a rotated pole, is very similar to what we do with a geographic coordinate system or a projected coordinate system. So we can see how the requirements fit together but do we have enough information? Is this a worthwhile approach? I've shown you the ocean model grid which is their tripolar grid. There are a whole suite of those for different resolutions for different purposes and that's being used by a lot of climate research and weather forecasting, ocean modelling communities now. But the thought process about how you're going to build a numerical domain is going on in a lot of other areas and I've got two more examples here of ways of defining a domain which are gaining credence particularly amongst the people who are developing numerical algorithms. One of the reasons for this is that as computing power and storage potential increases then the modelers want more. The limiting factor for most of them is time. They have a certain amount of time on a piece of hardware to run a model and the model has to solve if you want to give a six hour forecast and you're starting at midnight and you want to give a forecast for noon. It doesn't really help if your forecast returns back at six o'clock in the evening because nobody cares any more. So windows of time are very important.
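The "two new parameters" of a rotated-pole coordinate system (the position of the moved pole, plus an optional further rotation of the meridian) show up directly when such a CRS is constructed in code. A minimal sketch with cartopy follows; the pole position and sample point are assumptions, not values from the talk.

```python
# Sketch: converting a rotated-pole coordinate back to true longitude/latitude.
# cartopy exposes exactly the parameters described above: where the pole moved
# to, and an optional further rotation (central_rotated_longitude).
import cartopy.crs as ccrs

rotated = ccrs.RotatedPole(pole_longitude=177.5,
                           pole_latitude=37.5,
                           central_rotated_longitude=0.0)
plain_latlon = ccrs.PlateCarree()   # x/y here are ordinary degrees of longitude/latitude

# A point near the centre of the rotated grid (rotated lon/lat, in degrees).
rlon, rlat = 0.0, 0.0
true_lon, true_lat = plain_latlon.transform_point(rlon, rlat, rotated)
print(true_lon, true_lat)
```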
So every time you give them a new supercomputer with an order of magnitude more processing power they increase the resolution of their grid in time and space by that proportion so it fits into the same model time. As they've done that the Plate Carrée projection, or sorry, the Plate Carrée coordinate reference system, has more and more problems particularly around the poles so they're all wanting to move away from it and I've got a couple of examples here that are being talked about. I quite like the yin-yang grid, that's quite pretty. The cubed-sphere domain stretched across onto the globe is heavily researched at the moment and there are some models being run in the Met Office in some experimental configurations using that particular one. The reason I show these is from my perspective these share a lot of characteristics with the ocean model grid. The models just tell us a cell and they give us some information about that cell and that's all you get back from the model. It says: here I've got a data value, and then they want to plot that on the map, they want to re-sample that, there's a number of other operations they want to do. As these become more popular they're coming to the tool builders and saying we want you to help us support this. So I showed you earlier a nice plot of the rotated pole data in its own domain with the coastlines warped around to give you a nice coastline plot. This is a plot of the ocean model, seawater potential temperature, I think it's near the surface. You can see kind of where the world is but because we don't currently treat this as a true coordinate reference system we can't use the same tool to plot the coastlines and there are some interesting features particularly around Africa. If you know your African geography then that's not a good representation. So I've talked quite a bit about transforming coordinate reference systems, that's one of the key requirements that as tool builders we're being presented with. We use Proj4 extensively, we find it a very nice toolkit. We've written a Python library which hooks into Proj4 and connects that to the Python matplotlib library which is what we're making the graphics with and that provides a really useful set of tools for handling our rotated pole grids and a number of other requirements that we have. There's an ob_tran operation in Proj4 which does everything we need for the rotated poles. But when it comes to what I've called these translated coordinate reference systems we're not aware of how to do this, we haven't approached this problem and from one perspective I think that there is an interesting idea that says we should be approaching it in this way and this is the kind of functionality that we want but I'm not convinced and I haven't seen all of the answers yet so that's one of the reasons I've come here is to get some more opinions. Is this a good idea or is this a whole world of pain that I will come back in a year's time saying I wish we'd never gone down that road, I really, really wish we hadn't. The other requirement that we have as well as the functional processing is about specification so people are creating data. There's two different perspectives here which I'll just touch on very quickly. From the weather forecasters perspective they make a forecast, they'll release it at four o'clock tomorrow morning and it will have a forecast for 12 o'clock, six in the evening, 12 o'clock maybe up to five days, maybe up to seven days.
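For the tripolar and cubed-sphere style grids just described there is often no parameterised CRS at all, only 2D arrays of cell-centre (and corner) latitudes and longitudes. Those arrays are still enough to draw the cells on a map. A sketch follows, with synthetic arrays standing in for the real model output.

```python
# Sketch: a curvilinear grid delivered only as 2D lat/lon arrays can still be drawn,
# because pcolormesh accepts 2D coordinate arrays directly. All values are synthetic.
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

ny, nx = 60, 120
lon2d, lat2d = np.meshgrid(np.linspace(-180, 180, nx), np.linspace(-80, 89, ny))
lat2d = lat2d + 5 * np.sin(np.radians(lon2d))                   # fake warping of the grid
sst = 20 * np.cos(np.radians(lat2d)) + np.random.rand(ny, nx)   # fake field

ax = plt.axes(projection=ccrs.PlateCarree())
# transform=PlateCarree says "these coordinates are plain lon/lat degrees".
ax.pcolormesh(lon2d, lat2d, sst, transform=ccrs.PlateCarree(), shading="auto")
ax.coastlines()
plt.show()
```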
When they do the same thing 12 hours later the last one doesn't really matter. All anybody wants is the latest information. The climate researchers are in a very different position because they're looking at doing runs out to 100, 150 years, managing many, many different scenarios. They're very interested in the data archiving problem and they've got requirements to go back to data that was created 10, 15 years ago and be able to use that now and compare it to things that they're doing. There's a huge project called the Coupled Model Intercomparison Project which has hundreds of terabytes of data from the last 10 years of climate research which is being used for the Intergovernmental Panel on Climate Change report which is just being published at the moment and will be all over your newspapers with various opinions around it. We need to be able to specify these and specify these to high quality. Particularly for the climate researchers we need standards and we need some way that we can trust will still be there in 10 years' time. The EPSG registry, which Frank was talking about earlier, we found quite difficult because as the modellers have come up with new ideas it's been quite hard to work out how you get new coordinate systems in, particularly when there's a fast pace, and when it comes to the rotated poles it's a parametric definition. The rotated pole is easy to define but each modeller wants to be able to give you new parameters and the idea of EPSG codes giving you an exact answer doesn't really fit that model for us. We've been looking at well-known text and particularly I unpacked the alphabet soup underneath but I had to make it quite small, I'm sorry about that. But there is a special working group in the OGC looking at revamping the coordinate reference system well-known text and I'm interested about whether that provides the long-term mechanism and whether that's really going to hit what we need. I've started having discussions with some of those people about giving them the rotated pole use case to say could you please support this because at the moment we can't put that in well-known text. One other thing I wanted to mention, there's a very widely used standard within the climate research community called the Climate and Forecasting Conventions for NetCDF which is shortened to CF-NetCDF. This has been quite domain specific, it's been very focused on climate research because of the kind of requirements I've been talking about. They've come up with their own method for specifying coordinate reference systems and there has been some work recently looking at how they can try and either use another methodology or map that methodology onto another one. This is very heavily in use in our community and it's not clear how that gets out into the geographic communities or how it might be able to be used with GIS software or other such things. I've got some questions so I'd like to take a little bit of time if we've got a little bit left to open up the floor to discussion. I'm going to put my questions up in front of you. I'm very happy to take any other questions you have on what I've talked about but this is one of the reasons that I came here. I'd like to find out more information about this. The first question I have is about how do we standardise the definition of coordinate reference systems? What's our best approach for this? My second one is: is it useful to treat these translated coordinate reference systems as true coordinate reference systems?
Should we be trying to push these kinds of definitions into tools that we already use, particularly Proj4, which has been very effective in delivering to a lot of our use cases? As I said, any other questions that you have? I'm really happy to try and discuss. Thank you. Maybe first some questions from the audience. I have a question which is also maybe on your second question. If I read it like that, is it useful to consider translated coordinate systems as true coordinate reference systems? As stated it's a non-question. No, of course not, because it's a coordinate system that you translate, and translation is already well defined. But I think your term translated coordinate system is a bit confusing in this. I think it's confusing. I've tried four or five different terms. I'm yet to find a good one that seems to... You see what the confusion is in this one. If you read it like this, anybody having to do with coordinate systems would say no, of course not, because it's a coordinate reference system that's been defined and then you translate. So in that sense... Translation is a perfectly legitimate way of defining another coordinate system. You could have a Transverse Mercator with a false easting of zero, and you could have another one with a different false easting, but the coordinates of the one translate into the other. Clear. But that's presumably not what he means, because then it's nothing new or different. Well, I'm not sure. If I could add a comment, I would like to say that in the raster world I'm used to having one set of modelling which is how we get from our raster space to some sort of georeferenced coordinate system space, and then we get from that coordinate system space to some other coordinate system space. So I would normally treat these as situations where there's maybe a geolocation array for my raster that takes it into the sort of normal coordinate system, and then the next stage, or RPCs, or various different modelling mechanisms. But I'm used to trying to treat the phase where you go from a regular raster into sort of a regular georeferenced coordinate system as one distinct operation which I don't try and represent in the same well-known text. It's actually a completely different thing in my mental model. They could all be put together, but it's easier for me to understand what's going on when I treat that step as distinct from the sort of reprojection stuff or the other kind. Yeah. Okay. Interesting. Because I think one of the challenges that we have is that when we get data out of the models, particularly these ocean models, then the dataset will give us, if you just consider a surface field, a 2D array of data, a 2D array of latitudes and a 2D array of longitudes and a 2D array of areas and then an array of the four corners of each cell. So for any data value, I can give you a cell: I can tell you the centre, whatever you mean by centre, the four corners, which may give you a slightly odd shape, and then some parameters about that cell. So I can put that on a map and I can do a drawing and I can draw some nice shapes. But if I want to go the other way around and say, well, okay, can you take something that exists in space like a coastline and tell me how that looks in index space, we can't do that. We don't have a methodology for doing that at the moment, and that's something that we can do very neatly for the rotated poles because we've got a mathematical transform that gives you a rotated pole dataset.
I'll just draw the coastlines behind it and that works beautifully. So we've shown our ocean modellers that bit of functionality and they've gone, oh, can we have that, and we went back a couple of months ago and said, no, not now. I mean, so GDAL has a geolocation transformer which is roughly that, except without the corners, it's just the centre of each pixel, and that is loosely speaking invertible. Now, there are some cases where it might get lost, because it assumes that there's some regularity to it, and interruptions will have problems. But we have done some work, and I shouldn't say it was luck, but anyway, with the understanding that there are regularities, where you might have to go to a more expensive solution, it seems like it could be made sort of invertible. And in fact, the way I've done it in the warper is I take the geolocation array that goes one way and then I actually sort of resample that geolocation array into a new one which is the inverse of it. Okay, so like lookup tables or interpolators or something like that. I can imagine there's pathological cases where that's going to fall apart, but for things that are reasonably regular, that's been adequate for my... Okay. So you've used the GDAL warper for that. That's not something that you try to do with Proj4, is that correct? Okay. So that's why I say I have that raster to coordinate system space as sort of a separate thing. So then the NetCDF driver first is going to read the lat/long arrays, which become a geolocation array in GDAL, and so there's a transformer that uses that. I normally only use it for things that were somewhat irregular but not interrupted, and I'm not confident, the current one might actually fall apart in some of your more interesting cases. If it gets lost it would fail to resolve the inversion. But it seems like that's how I would be inclined to approach it. Interesting. Okay. Okay. Hello. So I came across some rotated pole data recently and was trying to figure out how to get it into QGIS. Is that something you have tools for? The people I work with don't really care about the details of the model, they just want to see what the data shows. Yeah. So I think there's... I'd give two answers to that. One is if they just want to see some data, they just want a picture, then I'd resample the data. I'd try and resample from the rotated pole onto something that QGIS would understand. So we've got Python tools which we've written specifically to meet some of these use cases. So I mentioned one called cartopy. There's another tool that we've written called Iris which is a data management tool. But that would give you some tools to do a resampling onto a coordinate system that they could work with. If they particularly wanted to use QGIS, if they just want a picture, then the Iris and cartopy tools will do that and just give you pictures, and it uses a plotting engine. The pictures that I was showing earlier are coming straight out of those tools. So I can provide you information about those. Those are free and open tools. In fact, they're on the OSGeo Live DVD that you've got from the conference. So you can experiment with them there and have a little play around. And if you are going to go down the route of resampling the data, then the people who are doing it do need to be aware that you are changing the data at the point where you're resampling it.
So you need to apply a little bit of caution, particularly if you're changing the resolution at the same time. So don't assume that you're getting exactly the same numbers. It doesn't quite work like that. So if you take them from a NetCDF file that has the latitude and longitude arrays and resample them into another NetCDF file, don't expect to get exactly the same numbers. Thank you very much.
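For readers who want to try the geolocation-array route discussed in the exchange above, the gdalwarp approach looks roughly like the sketch below. The file and variable names are placeholders, and whether GDAL picks up the 2D latitude/longitude variables as a GEOLOCATION array automatically depends on how the NetCDF file is written.

```python
# Sketch: warp a curvilinear NetCDF field onto a regular lat/lon grid using the
# geolocation arrays ("-geoloc"), producing something QGIS can display directly.
from osgeo import gdal

# Placeholder subdataset name; the NetCDF driver exposes variables this way.
src = gdal.Open("NETCDF:ocean_model.nc:sea_surface_temperature")

gdal.Warp(
    "sst_latlon.tif",                     # regular output grid
    src,
    options="-geoloc -t_srs EPSG:4326",   # use the geolocation arrays, target plain lat/lon
)
```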
Geoscience modellers develop numerical models where constraints are placed on how the model domain and sampling relates to location in the modelled world and the real world. The shape of the model domain is a significant factor for numerical algorithms and computational solvers. This leads to a number of interesting definitions of coordinate reference systems. I will summarise some requirements the modelling community have for specifying and working with coordinate reference systems. Post processing and presentation of analyses are important factors; archiving for future use is a crucial consideration. I will present examples of horizontal and vertical coordinate system definitions in common use in the meteorology and oceanography domains and the challenges they may bring. The conclusion will be a discussion how specifications and tools for defining, interpreting and transforming coordinate reference systems, such as Well Known Text (WKT), European Petroleum Survey Group (EPSG) and PROJ.4 are able or unable to meet the requirements of a geosciences modeller.
10.5446/15517 (DOI)
Hello everybody. Yes, I want to show you something about a new idea, a geospatial CMS, Cartaro it is called. In the abstract we've written it's a new kid on the block. So it's a rather young project, but it's also a project that has something to do with OSGeo because it's also part of the OSGeo Live DVD. But first to start, a short introduction of what we are doing. We are a company based in southwestern Germany, mainly working in Germany and in Switzerland, doing all kinds of geospatial development, geodatabases, PostGIS, WebGIS, but also stuff like business solutions, so where GIS is only a small part of it, and whereas all the other points are normally customer projects, there's also this branch we call the geospatial CMS, so something like an open source product we have developed and want to propagate and implement within projects. We have always been active in the fields of open source, so always, which means since more or less 11 years ago when the company was founded, we have only built projects based on open source and are also contributing to open source projects, for example to OpenLayers we have committed code, or maybe you know the OpenLayers Editor, so that's a small library that allows editing features based on OpenLayers in a web application. If you're interested you can also check our GitHub account. But now back to Cartaro, what's the idea behind it? Let's start with the usual features of a content management system. A content management system of course allows to manage content. It normally brings very good user and role administration, at least for the better CMS; you can go into detail as far as role administration and privileges are concerned. It always brings some internationalization and localization, so you can use it in multiple languages. Most of the CMS also bring versioning built in, so you don't lose old contents, but you can always go back to them. Then, I think very important for a CMS, it should bring some layout and templating system, because in the CMS it's normally much more important than in the normal geospatial world to create attractive GUIs for good look and feel and for very individual applications. So we can make use of these templates. Another nice feature most CMS bring are editorial workflows. So built-in workflows that allow some groups of people to enter data, to edit data, and to have other groups of people that are the publishers that are responsible for publishing. And depending on your CMS, these workflows can be very simple or you can have many different roles that all have to contribute to the content before it's being published. And of course, most bigger CMS have some system of modules or plugins that allow for easy extensibility of it. Now all these points are, as I said, for general CMS, very typical. And I would say all these features would be very attractive for a geospatial tool. So we also want to manage our geospatial data. We may want to attribute some privileges to it, who is allowed to edit and who is allowed to publish and so on. So all these points would be very nice. And so that's why the idea was created to build some geospatial CMS. So we want to extend the features of a normal CMS to geodata in the way that we need possibilities to edit the data, to persist data, and of course to display and maybe analyze them. We need means for data capture. So be it ways to import data or to create geodata by geocoding or of course by direct editing within the application.
So of course we are not the first with the idea of a geospatial CMS. There are many CMS that allow to store spatial data and to enter spatial data in some way. But normally it's always done in the form of simple numbers or texts. So we do not find many CMS that really have some geometric, geometry data types within them. So for us it was clear that especially as far as persistence of data is concerned, we need ways to make sure that our data are consistent, that we can use spatial indices to allow the system to be used even if it's growing, and to store the data in a way that guarantees the long-term availability of them. As far as output is concerned, we needed a system that allowed spatial queries. So stuff like finding every feature within another feature. And to us it was also important not only to show the data on the map within the application, but also to integrate or to offer OGC services for data publication. But of course also the visualization within the application was important. The idea we had in mind where we wanted to position the geospatial CMS was between a simple website that has only one or two maps included and some complex business applications as you see it on the right. So it's clear if you just want to include one or two maps within a website, then you don't need the overhead of some geospatial CMS. And on the other hand, if you have a very complex application, then the system we have in mind may also not be the right one, because then you will have to do so much coding anyway that you can't make use of the advantages of the CMS features. But everything that is in the middle, so every application that has a lot to do with maps, especially applications that have the requirement that different people may enter data and other people may publish data, all these applications in between could be good candidates for the use of a geospatial CMS. So what's the architecture behind our geospatial CMS Cartaro? As you can see here, it uses well-known components. OpenLayers, of course, as the library for everything that's displayed in the client. Then in the back, GeoServer for publication of web services mainly. And in the back end, PostGIS as the database that allows the real storage of geometry data. All that is managed from Drupal. So Drupal is probably known by most of you. Drupal is one of the biggest, most widespread open source CMS, based on PHP. And our idea was to put Drupal in front of all the other components. So we have the possibility, yeah, of course, to use Drupal to present the content to the user, to create the interface of the application, but also to use the Drupal configuration GUIs to allow the configuration of all other components. Cartaro is a so-called Drupal distribution. That means it's not one fixed package of software, but it's a combination of different modules. These are partly modules that have existed before, before we had the idea to create Cartaro, and on the other hand, modules that have been specially created. So for example, OpenLayers is also a Drupal module that uses the OpenLayers library in the back, but can be configured through Drupal. Cartaro is a distribution, that means it packs together different modules, but also different themes, so the look and feel of the application, and different libraries, and guarantees that they work together in a good way. But of course, you could also use many modules separately. How does it look like?
So if you use the standard installation, you get this blue look that's very typical, that's a normal Drupal interface, but it has also integrated an OpenLayers map. So OpenLayers is one of the most important components within it. You know well what OpenLayers can do, but within Cartaro you have the possibility to define the OpenLayers maps simply through the Drupal administration GUI. You can define layers, can of course combine the layers into maps, and you can in a simple way also define the styles that your map should use. So all this is done without coding, but through the interface. For example, this is how the OpenLayers interface looks like, that's the one that shows all the layers that have been defined within your application. And as you can guess from the second column, we support a whole range, a whole list of different layer types, web feature services, Google Maps, WMTS maps. So more or less all that OpenLayers can display is also available within Drupal. Here another example, you can define different styles that should be applied to your layers. It can look like this, but you have the possibility to add as many styles as you like to. And then of course you can define how your map should look like, and everybody who knows how to program OpenLayers gets an idea about the parameters you can set, so you can enter them all here in the administration interface. What you can also do with the OpenLayers module, you can define so-called behaviors. That's weird wording, but we've got used to it, so it has been defined like this before Cartaro. So behaviors, that means you can define which functions should be available in your OpenLayers map. You can also call them widgets, which widgets should be available in your map. And again, everybody who is familiar with OpenLayers will recognize many of these things. You can for example define a layer switcher or can define how the navigation in your map should work. Okay, the next important component, GeoServer. It's used for the OGC services. GeoServer is running in the back, and again it's important that the normal user of Cartaro has no need to access the GeoServer interface directly. But all the important functionalities are again available through the Drupal module. Here for example you can define the URL to your GeoServer installation. Name any workspace that should be used. And here then you can define different layers that you want to use from GeoServer within your Cartaro installation. You can define styles for GeoServer. At the moment we support only SLD, so there's no graphical user interface to define styles. But there are plenty of other tools available that allow you to create the SLD. You can copy paste it into Cartaro then. And the nice thing, the integration between Drupal and GeoServer is very tight, especially as far as the privileges for data and service access are concerned. That means within Drupal you can define the access rights to your data, and the same rights can be applied to the services of your GeoServer in the back. And I think that makes it really nice because you have one place where you define the access rights to the data, and then it's guaranteed, if you access for example WMS or WFS through QGIS, as you can see it here, you will also have to pass the credentials that are required within the Drupal installation. So that's one place, and one place that can easily be managed even by people who do not want to do any configurations in text files or do any programming.
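Because the Drupal permissions are pushed through to the GeoServer services, any OGC client has to present the same credentials. A sketch of what that looks like from a script, using OWSLib; the URL, user and layer name are invented for illustration.

```python
# Sketch: talking to the GeoServer WFS behind a Cartaro site with the Drupal user's
# credentials. Endpoint, credentials and type name are placeholders.
from owslib.wfs import WebFeatureService

wfs = WebFeatureService(
    url="https://example.org/geoserver/cartaro/wfs",
    version="1.1.0",
    username="editor",     # the Drupal user
    password="secret",
)

# List the feature types this user is allowed to see, then fetch one as GML.
print(list(wfs.contents))
response = wfs.getfeature(typename=["cartaro:capitals"], maxfeatures=10)
with open("capitals.gml", "wb") as f:
    f.write(response.read())
```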
PostGIS, the last very important component, of course for storage of true geometry types, for spatial queries and for the usage of spatial indices in order to keep our stuff really fast. Again, several screenshots of how it looks. That's the usual Drupal interface that is used to define new content types. So in this example, if you want a content type in your Drupal installation that shows the capitals of the world, you have this interface to define different fields of your content type. And now the new thing with Cartaro is you also have the possibility to define geodata fields for your content type. And as soon as this is done, the data will be stored as geometry fields within PostGIS. Here you define more details for your geometry field. So for example, here we chose that we want a multipolygon geometry type. We can also define the projection that is used. And if this is defined, you will automatically get the user interface for data editing that allows either to enter only points or to enter polygons or whatever geometry type you have defined here. So once again, the components that are used within Cartaro: PostGIS, with the PostGIS Drupal module, GeoServer plus the corresponding module, then a GeoServer security extension, so a Java component, that is needed to do the synchronization between the users within Cartaro and GeoServer. So as I said, we apply the privileges from Cartaro, from Drupal, automatically to GeoServer, but then of course you have to guarantee that the users defined on both sides are identical. Then OpenLayers plus the module, and then, rather new, there's a module that allows to import all types of geodata formats, shapefiles, KML, GeoJSON, whatever, directly within your Cartaro installation. And then the nice thing, Drupal has over 20,000 different modules available that are not focused on geodata, but that may also be very interesting for use within your Cartaro installation. And I think that makes the real power of this solution compared to other geospatial frameworks, that you really can profit from this huge community of people that are using the CMS and that are contributing modules, and you only have to find the right module and apply it to your geodata as well. And some things I would recommend that are really nice modules for Cartaro is everything that has to do with workflows. There's a whole lot of different modules that allow you to do different things with workflows for data publication and that also allow things like, let's say, email notifications when some content is edited or published. And then another important module within Drupal is the Views module. So that's a module that allows for example to create, let's say you want to search for your data, then the Views module gives you the possibility to create complex search masks that also apply to the geodata. So you can easily offer some fields to your user that allow him to define which features he is interested in. And then the last very important module, the Feeds module, that's the thing that has always been in use within Drupal whenever you wanted to synchronize contents between different data sources. And it allows you to import feeds or also to import files into your Drupal installation, and through our new module now it's also possible to import the geodata into it. So my time is almost over, just to tell you in which direction we want to go.
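Since the geodata fields end up as real geometry columns in PostGIS, they can also be queried spatially outside Drupal, with the usual index support. A sketch with psycopg2; the connection string, table and column names are hypothetical, not Cartaro's actual schema.

```python
# Sketch: a spatial query against a hypothetical "capitals" table with a
# PostGIS geometry column of the kind a geodata field produces.
import psycopg2

conn = psycopg2.connect("dbname=cartaro user=cartaro password=secret")
with conn, conn.cursor() as cur:
    # && uses the GiST index for a quick bounding-box test, ST_Within is the exact test.
    cur.execute(
        """
        SELECT name, ST_AsText(geom)
        FROM capitals
        WHERE geom && ST_MakeEnvelope(%(w)s, %(s)s, %(e)s, %(n)s, 4326)
          AND ST_Within(geom, ST_MakeEnvelope(%(w)s, %(s)s, %(e)s, %(n)s, 4326))
        """,
        {"w": -10, "s": 35, "e": 30, "n": 60},
    )
    for name, wkt in cur.fetchall():
        print(name, wkt)
```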
And this mentioned integration of the Views module will be extended so that you have easier and better ways to query your data. A new thing that's already finished now, we also have a printing module that is based on MapFish Print and that allows you to print every map you have in your Cartaro application. In the area of geocoding there are ongoing developments so that your Cartaro installation can offer geocoding services, so it can ingest geocoded data but can also offer geocoding for other applications. In the area of privileges there's already very much done but there are still some functionalities open, especially what we cannot do at the moment: we cannot differentiate read and write privileges for the services as it would be possible within GeoServer. We can only say access yes or no, but we are working in order to extend this. And then in work are also better tools for symbolization so that you get a graphical user interface for it. And then of course more map widgets, because at the moment the functionality, so these behaviors that Cartaro offers, are more or less only the standard OpenLayers controls and there are not so many fancy nice looking widgets available at the moment. But I'm sure development will go on, so we have the first interesting installations, larger installations for several clients now, so we have great motivation to go on with this. And another source of motivation also comes from OSGeo because Cartaro is also part of the OSGeo Live DVD. Since version 6.5, so still rather young, and now version 7, and version 7 contains the first stable release of Cartaro, version 1.0. So if you're interested to play around with Cartaro there's on the one hand our online demo available. There you can play around with data but you can also play around with the configuration, you know, to see how everything works, or you can try the quick start on the OSGeo Live DVD. If you want to try Cartaro on your own system you can download all the components from the Drupal sites or from cartaro.org. But be warned, it's not so easy to make everything work together, to bring GeoServer to talk to the Drupal installation and to talk to PostGIS. It's a little bit tricky but everything is documented and I think if you have the technical skills it should be feasible to install it yourself. Okay, thank you. Yes it would. So that's a normal feature of Drupal, that you have support for many languages, and so we support it automatically because Drupal supports it. Yeah. No. So if you want a real tight integration between the Drupal administration interface and your map server then this works only with GeoServer. Because GeoServer brings the user and role management that MapServer does not have. And yeah, GeoServer also has the HTTP API that we use from Drupal and that works only with GeoServer, not with MapServer. Of course if you would like to replace GeoServer by MapServer it's feasible but you would have to do a whole lot of programming again, at least as far as the management of privileges is concerned. I don't see a way to do it directly with MapServer. Okay. Okay.
Cartaro is a new web mapping platform that makes the power of some of the best open source geospatial components available in a content management system (CMS). Cartaro allows to set-up and run small websites or complex web applications with maps and geodata. It is also suitable for geoportals and spatial data infrastructures whenever there is the need to get everything up and running without much individual programming. The geospatial software stack used in Cartaro consists of PostGIS, GeoServer, GeoWebCache and OpenLayers. The whole stack is managed from within the CMS Drupal. The geospatial components bring professional aspects of geodata management into the CMS. This is namely the ability to persist data as true geometries, thus allowing for complex and fast queries and analyses. It does also mean supporting a whole range of data formats and the most relevant OGC standards. For the latter Cartaro can extend the handling of user roles and permissions, which already exists in Drupal, to define fully granular read and write permissions for the web services, too. In the presentation we will first explain our basic motivation behind Cartaro: that is bringing geospatial functionality to the huge community of CMS developers and users. This community, which is of course much larger than the classical FOSS4G community, has a great potential to make more and better use of geodata than it was possible with most existing tools. We will then demonstrate how far the integration with the CMS reaches and present the Drupal user interface that allows to configure most features of Cartaro. We will show how to create, edit and map geospatial content with Cartaro and we will demonstrate the publication of this content as an OGC web service. We will also go into some details concerning the architecture of Cartaro and explain how we tackled specific problems. A glimpse of the some use cases will demonstrate the real potential of Cartaro. It will also show how the focus and functionality of a Cartaro based application can be extended with the installation of any of the Drupal modules that exist for almost every task one could imagine. The presentation will close with the future perspectives for Cartaro. From a technical point of view this includes the roadmap for the next months. But it also includes a discussion of our ideas about Cartaro's role as self-supporting bridge between the geo and not-so-geo world of open source software.
10.5446/15513 (DOI)
I'm going to give my presentation on Boost Geometry. In the schedule it says it is also by Mateusz Loskot, but unfortunately he could not make it because of his girlfriend, it is due in a few weeks, so he did not want to leave her alone for a whole week, and that is totally understandable. So I will do it on my own now. This is our programme. It is a C++ library, so this is C++ code. If you don't use C++ that is not really a problem, because I won't really go into all the details. So first some introduction, then a slide or two on how to use it, that is in C++, and then a larger set of slides about the algorithms and something about testing and the community. Boost Geometry has algorithms for geometry. On the next slide I have a few comparable libraries; many people know such a library and then you can place it better. It is based on generic programming. So you are not forced to convert things to the library's own types. What is the key selling point, the unique selling point? It does not impose its own point model. That is also important when working with other libraries: you are not tied down, you can simply use your own point or polygon. It is a low-level library, you can customise it and you have more control over various aspects; we will see that too. And it is header-only, so you don't have to link it or set up anything complex; it all just compiles and runs. Comparable libraries: probably everyone knows the Java Topology Suite, which has existed for a long time, that is in Java, and GEOS. They have a similar, not exactly similar of course, but a similar interface. And also the .NET library that is used in SQL Server. There are more comparable libraries, GPC, Clipper and others. We follow the standard library that most C++ programs use; there are many users of that. And we are part of the Boost community, so of course we also follow the Boost conventions. And OGC: our interface is based on OGC, the Open Geospatial Consortium. This is our team. I took the initiative, as mentioned. Bruno Lalande is from France, he now lives in the UK, but he could not be here either. Mateusz Loskot and Adam Wulkiewicz also joined. And there are also some contributions from other people. The community: we have our own Boost Geometry mailing list with about 150 subscribers and traffic nearly every day. There are sometimes also questions on the general Boost mailing list. Actually, I have not yet introduced Boost itself. Boost is, for C++, a collection of libraries that everyone can use. It is completely free. It is not GPL; they have their own software licence and they have a review process. It is not like SourceForge where you can put anything in; first you have to be accepted, and then it is in the collection. There is a lot around Boost: Boost also has a ticket system where you can file bugs, and there is an IRC channel. There are sometimes questions on Stack Overflow as well. So it is used more and more. The history goes back to 1995. I worked at Geodan at that time, like some people here. We had a library that was not based on templates. In 2008 we updated and modernised the library and thought it would be good for open source. We did a preview for Boost and they liked it, but more was still needed.
In November 2009 it was reviewed and accepted as Boost Geometry. After that there was still quite a bit to do, and in 2011 it was included in a Boost release for the first time. Now a new version of Boost comes out roughly every quarter, about every four months, and there are new releases of Boost Geometry with it. The review period was a heavy period with a lot of mail traffic about all the opinions. In the end there were fourteen reviews and most of the votes were in favour, so it was accepted, with various conditions. People were also quite positive about what we had made, and Bruno Lalande deserves special credit for the design, because we changed a lot compared to that first preview. The intention is to build it generically; we will see that there are many possibilities, and we need the next slide to cover them all. It has to be fast and it also has to be robust. And we also limit the scope, because geometry is a big subject: you could spend your whole life on such a library, so you have to decide what is important. And at the same time there are many users to satisfy. The basic concepts follow OGC, OpenGIS, Open Geospatial, so it is all about a point, a linestring and a polygon, or the multi versions: a multi-linestring, for example a highway, or a multi-polygon. Those pieces belong together, sharing for example an attribute. These are more or less described by OGC, so we follow that. So you will miss here, for example, a circle, an ellipse, an arc; that exists in some extensions, we have it, but it is not yet included in the public library. So we follow the OGC conventions here. We also have helper geometries: a linestring is built from segments, so we also need the segment; a polygon is based on rings, so we need the ring; and we need the box, and so on. OGC also has a geometry collection, a collection of different things. We do that differently, because we use a Boost variant, which can be one thing or another but is still type-checked, so you use a variant, or you don't. On the first slide we said that we are agnostic with respect to many things. For example, we are agnostic with respect to orientation: a polygon can be clockwise or counter-clockwise, and it can also be open or closed. In the first version that was not a choice, but people said they could not use it because they had the other orientation, so we made it agnostic and now we support all of it. Well, let's look at the basic structure. Who here can program C++, so that I get an impression? Okay, most of you, can, or can a little bit. This is actually one of the key points of the library: you have your own struct, you don't have to invent it for the library, this is just an example, and you probably have one already, because most people doing GIS have their own struct. So, for example, the struct looks like this. Then you adapt that struct, that is explained, I won't really go into the details, it is explained later, and then you can simply use it. So you can use a standard vector, a C++ vector of those points, include a bit of Boost Geometry, and take the length of the line. So that is one of the unique selling points of the library: you can use your own struct.
And you can also use a really old-style C array, for example, next to a vector of structs. So these are two different structures, but they can still be handled. For the library, this is the whole programme. It also illustrates that with Boost Geometry you can include just this one header, so it is really lightweight, and you can use only a small part of it if you want. If you only need the length, you can use just that, and you have no big library to link. So you can use it for only one or two things, or of course for everything, whatever you want. This is another example, a bit more complex. We also have our own models of course, because people can use their own point model but don't have to. So here is a polygon of our own point model, in two dimensions with x and y. We have WKT, everyone knows that, and we support it. That is handy for users, but also for ourselves to test with. It is broken across lines here because it was too long for the slide. We declare a standard library container in which we can receive the output, and call the spatial intersection on it. And then we print it, but we can also do other things with it. So this is also a complete programme. This is what we have seen, most of the things I already mentioned: the geometries can be of any type; the length, or the perimeter. There is a make function, which acts as a kind of constructor. We can also do input and output, and whether that can be formatted more like the JSON format, I don't know if we have an example of that; we cannot do that yet, it is a good question and a good idea to support it. We have a few ASCII formats, indeed WKT; we can do SVG; we can read and write WKB. But JSON is also widely used. So our own geometries are generic. We have seen x and y, but we also have the coordinate systems; this one is Cartesian, and we will come back to that later. Or we have x and y as Boost tuples, which are also widely used; they are now also in the standard library, in the new version of C++. You can even have colour points, in a red-green-blue colour cube, and compute colour distances with them; that is also possible. So, let's start with distance, because most programmers have at some point in their life written a function to calculate the distance between two points. So of course we have that. But the generic distance that we have can also calculate the distance between a point and a polygon, which is really good. And besides the Cartesian distance, we can also calculate the distance over the sphere. That went a bit fast, but as you know, over the sphere it is not really a straight line: a straight line there is a great circle, so that is the distance, and we can calculate the distance over that. Or over the globe, because the globe is not really a sphere but a spheroid, and we have different ways to calculate that. Five minutes? That is always... well, all right, I have checked the time, it was probably much more. The distance from Nottingham to Bangalore: with different strategies you get different distances. What is the actual distance?
Probably this one is the most accurate, but it is also slower. So you can select what you want: a fast calculation that is less exact, or a more precise calculation that is less fast. Well, one more on distances: programmers often leave out the square root, because that is very fast, and often, when you are comparing distances, you don't need it. We can do that for Pythagoras, of course, but we can also do it for the haversine, and then that is also quite a bit faster. So you have a fast comparable distance over the earth as well. The same holds for within: a point in a polygon is a famous algorithm, but you also have to determine whether a line is within a polygon, or a segment, or a polygon within a polygon, or a polygon within a multi-polygon, or a point on a sphere within a polygon, because if it is really large you cannot use straight lines, you really have to use the sphere. Simplify: this is not an OGC algorithm, but we have implemented it, using Douglas-Peucker. It makes a complex shape simpler: you specify a maximum distance, for example this distance, and a line that originally had, say, 500 points has 10 points afterwards. And we can also specify a strategy: you can, for example, use not the distance but a maximum number of points. So it is a bit low level, if you want that. We also have the overlay, the boolean operations as they are sometimes called. So if you have two polygons, or a polygon and a multi-polygon, you can calculate the intersection. It also works for a polygon and a linestring. Or both together, the union; or what is in one but not in the other; or what is in one of them and in both. The implementation probably deserves a few words. Internally we now have a spatial index, more on that in a moment, but here internally we use partitioning. So if you have two multi-polygons, one green and one blue, and you want to calculate the intersection points between the two, we first divide the space in two. Then we have three groups: one on the left, one on the right, and one group that falls in both halves, and that one has to be compared with both. So you make it simpler and faster, and we repeat that at the next level. It is a bit like a quadtree, but not exactly like that. We keep dividing until we have a reasonably small part, for example four elements, or more, six or twelve, and then we compare everything within a part together, and the group that falls in both halves is always compared with both of the others, and then we calculate the intersection points. That way we can calculate it much more efficiently. And it does not work only for intersection points; it is a generic algorithm, so you can also use it for assignment, or to do other things. We also have sectionalize, yes, sectionalize, and that splits the polygon up into monotonic sections. So this is one section: if you are here, you only have to think about that part and can forget the rest of the polygon, because it is out of scope. It is now exactly 20 minutes. Is sectionalize part of the indexing? No, it can be used together with the spatial index, but it is not part of it, so it is more or less independent. We have a few more algorithms. This is one of the last slides: the centroid.
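The spherical-distance and "comparable distance" ideas described above are easy to illustrate outside the library. The following is plain Python, not Boost Geometry code; the coordinates are approximate and only for illustration.

```python
# Illustration of two ideas from the talk:
# 1) distance over the sphere via the haversine formula rather than Pythagoras;
# 2) a "comparable" distance that skips the final sqrt/asin, which is enough
#    when you only need to rank candidates by distance.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000.0  # mean sphere radius; a spheroid would give different values

def haversine(lon1, lat1, lon2, lat2):
    """Great-circle distance between two lon/lat points, in metres."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(h))

def comparable_haversine(lon1, lat1, lon2, lat2):
    """Monotonic in the true distance, but cheaper: no sqrt, no asin."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    return sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2

# Nottingham to Bangalore, with very rough coordinates.
print(haversine(-1.15, 52.95, 77.59, 12.97) / 1000, "km")
```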
Or generating a point on the border, that can sometimes be interesting, or a point on the surface, which has to be inside the polygon; the centroid is not always inside the polygon, as you probably know. Or generating the convex hull, so that is a convex polygon, exactly in between the polygon itself and the envelope. Or the minimum bounding circle; that is not implemented yet, which is why it is shown differently. And it would be nice if that could also just be called envelope, because the return type already makes the distinction, so it could also return a circle. The circle is for now an extension, so this is not all there yet. And this is also in progress, but it shows how the buffer will look. We can specify a distance strategy, for the left and for the right, so that we support an asymmetric distance. And also the end strategy, so that we have flat ends or, I don't know what they are called, square ends or round ends. And also specify the join strategy, so that we have round corners or sharp, mitred corners. Or you can specify your own strategy, to do things exactly the way you want. So that is the low-level, flexible part of the library. I already said this: the spatial index is now there, so that complements the library, and of course it uses the library, we use the same concepts. So we have a spatial index library that can be built with many polygons, or linestrings or multi-linestrings, or points, and you can query for intersection with a box: whether it intersects, or is disjoint, or within; those are all OGC terms. And one more about distance: we also have a distance-info version that returns not only the distance from a point to a linear geometry, but also the projected point, which is often asked for, and also on which segment it lies and the fraction along that segment. For polygons it is the same, but then it also says that the point is within, and it still returns the distance. I am ready for the questions. Yes, a few minutes. Boost Geometry is a generic library and there are many types, but sometimes you just have one type and you want to do a generic calculation; do you have to convert between all these different types, or is there something you can do? We do not convert. We support many point types, but we do not convert: you have your own point type and, with a sort of traits, adaptors, you can use Boost Geometry with it, so there is no need for serialisation or conversion. I meant whether one object can be either a polygon or a linestring. That is the variant then, yes, the Boost variant; I don't think it is a single type, you have options for it. Two questions. What is the licence of this? The other is more technical: do you want to support more complex objects, like 3D meshes and so on? A good question. Boost has its own licence; the Boost Software License is very open, so you can reuse it, commercially or non-commercially, and for binary forms you don't even have to include the Boost licence. So the licence is open. And yes, we plan to support more features, for example the circular features. And 3D: the design of the library supports 3D by intention, and we already have some algorithms, for example distance, but many algorithms are not yet implemented in 3D. And 3D meshes are further away; there are no direct plans, but I think it will come at some point.
So I think that will be a plan for later. Yes, one more question, the last question. Can you ask more questions afterwards? I am not going away, so you can ask more questions later. I had the last slide; well, actually I have many more slides. Ha! Okay.
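The buffer end-cap and join strategies, and the one-sided (asymmetric) buffering described in the talk, have close analogues in Shapely, which wraps GEOS, one of the comparable libraries named earlier. The sketch below only illustrates the concept; it is not the Boost Geometry API.

```python
# Sketch: buffers with different cap (end) and join (corner) styles, plus a
# single-sided buffer as an analogue of the asymmetric left/right distances.
from shapely.geometry import LineString

line = LineString([(0, 0), (4, 0), (4, 3)])

round_buf = line.buffer(1.0, cap_style=1, join_style=1)   # round ends, round joins
sharp_buf = line.buffer(1.0, cap_style=2, join_style=2)   # flat ends, mitred joins
left_only = line.buffer(1.0, single_sided=True)           # buffer on one side only

print(round_buf.area, sharp_buf.area, left_only.area)
```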
The first part of the presentation gives an accessible introduction to Boost Geometry. The second part focuses on some algorithms in detail. Boost.Geometry is a generic library written in C++ providing concepts, geometry types and algorithms developed for solving problems in computational geometry. Boost.Geometry is using modern and portable C++ generic programming techniques and is built upon the foundation of the C++ Standard Library and Boost C++ Libraries. Boost.Geometry follows the OGC Simple Features standard. The Boost Geometry library kernel is designed as agnostic with respect to dimensions, coordinate systems, and types, which makes it generally applicable. A set of geometry models is delivered already by Boost Geometry. This set can be complemented through adaptation of user-defined geometry types, following the concepts defined by Boost Geometry. Boost.Geometry is developed since 2008 by Barend Gehrels and Bruno Lalande, and Mateusz Loskot. The library is peer reviewed by the Boost Community, and accepted into the well-known Boost collection in November 2009. Since 2011 it is released as a standard part of Boost, and immediately available for the majority of C++ programmers. The library is licensed under the (non restrictive) Boost Software License. A Spatial Index, developed by Adam Wulkiewicz, will be released as a standard part of the library in the next release of Boost. The Boost.Geometry library can, because it is a concept based library, following OGC Simple Features, easily be fit into for example Spatial Databases or existing projects using (probably legacy) Object Models. The presentation is dedicated to developers who are interested in receiving practical overview to the Boost Geometry library.
10.5446/15511 (DOI)
All right. Thanks everybody for coming. I appreciate the opportunity to talk here at the conference. I hope some people caught Eric's session earlier. We've got a bit of a triathlon or a quadrathlon of open layers talks going on. Eric spoke just before the break and give a really good introduction to open layers. I'm talking now and I'm going to basically just give you a lot of demos of what the library can do and show you what the API looks like. Following me, Tom will talk about some of the internals and give you a look under the hood of open layers and finally Cedric will follow up and discuss a little bit about how we launched this fundraising campaign and how that really helped us tackle this development effort. I previously had titled my talk, application development with open layers. It's now called working with open layers. We've reshuffled to accommodate having these four talks in a row. There's going to be some redundant material for those that were here at Eric's talk, but I hope it's not too much overlap. I work for Boundless Geo. This is formerly Open Geo and I just wanted to thank my employer for allowing me to participate in open layers development for the past six years. The open layers three development effort has really been a collaborative effort and I'll leave you to come to Cedric's talk to hear a little bit more about that. But it's mainly an effort that we fundraise for and we have developers participating from a number of companies, primarily camp to camp and Boundless, also terrestrious and we encourage developers from all different places. So I typically start open layers talks with a bit on the history of open layers and I find myself actually going pretty long and talking about how we've been around for seven years and I show graphs of how the project has grown and how we've brought on new contributors and added new features and I always find myself too short at the end on time for showing demos. So I wanted to sort of flip that around and say, okay, forget the history, let's dispense with that and go straight to the demos. And I don't mean that we're forgetting about our history or I don't think that's important. It actually is very significant that we have taken seven years of experience with working with open layers too and though we have thrown away all the code and started over, we've really learned a lot from that experience and embedded that knowledge in the development of this new library. So it's a bit of a risk, I think, to have an entirely demo driven talk but it seems like the connection is working out and kudos to the conference organizers for making that possible. The way I'll drive this is just by setting up a simple goal. So we'll look at an objective pretending you're sitting down to develop an application. This is something you might want to do, just put a simple map on a page. Then I'm going to show you what it looks like in open layers code. So this is your first introduction to the open layers 3 API. One thing to notice is we have this separation of a map and a view. A map is the primary object that you interact with when working with an open layers application. But it's very important to know about the view. This example shows the construction of a 2D view. And all the examples I'll show you use that same view. We've designed open layers to be able to, open layers 3, to be able to accommodate different views. So we will have a 3D view eventually. 
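A minimal sketch of that hello-world in code, using the class names of the released ol namespace (the early betas demoed in this talk used ol.View2D for the 2D view; the element id and coordinates here are just placeholders):

    var map = new ol.Map({
      target: 'map',  // id of a <div> in the page
      layers: [
        new ol.layer.Tile({source: new ol.source.OSM()})
      ],
      view: new ol.View({
        // center is given in the view projection (spherical Mercator by default)
        center: ol.proj.transform([-1.18, 52.95], 'EPSG:4326', 'EPSG:3857'),
        zoom: 10
      })
    });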
And a lot of the initial impetus for developing open layers 3 was to be able to provide 3D views, to be able to provide kind of limited 2 and a half D views, oblique views with extruded buildings or extruded terrain. But also potentially to deal with more complex 3D views. So in this example, I've given a view center. It's just an array of values, two values in this case, and a zoom. A view in general takes resolution and rotation values as well. And zoom is sort of a shorthand convenience for indicating which resolution, assuming the defaults of the default tile levels for a spherical indicator tile set. Then I construct a map and I point it to some viewport in the DOM. This map string is the identifier of a div element, let's say, in the DOM. And I give it a list of layers and I give it that view. And let's see what that looks like. So there we have your basic hello world from open layers. I can double click to zoom in here, drag to pan. I've got the expected sort of default controls, a zoom button here to do the same thing. So pretty basic, something you'd expect to accomplish with any mapping library. And that's how we do it in open layers 3. So moving on from there, the next thing you want is to use different sources. That example I showed you used an OSM data source. And that, I didn't show how it was created in that previous example, but this is how you would do that. You'd use a tile layer. And then you give that layer a source. So again, this is an example of the separation of concerns. Eric mentioned in the previous talk, and I think Tom will be giving a little bit more justification for in his talk. But my quick explanation of it is that a layer represents the view, how you want the data source to be rendered in the map. And the source represents how to fetch that data. So for, I'll be talking in a few examples about vector data. In these examples, it's using tiled raster data. So this source knows about the URL for those tiles, the URL scheme, and knows how to get them. And those tiles might come from different tile providers. It could come from local storage, or you could be using an entirely different protocol, not HTTP, to gather those tiles. But that's what the source does. You can use that same layer. So this is again using the tile layer, but in this case it's using a Bing Maps source. So the OSM source doesn't take any special configuration. This Bing Maps source requires that you provide a key. You go register for an API key for Bing Maps, and then you have to choose which style. And here's the aerial imagery layer with labels. So I'll show an example demonstrating a couple of different tile sources. In the first example, we saw OSM. Here's MapQuests rendering of OSM, Bings, Arials. You can use tiles from other providers. Stamen is a nice design company that provides nice tiles available for use. People probably know of MapBox. This is their geography class layer. This is a tiled JSON layer. So it uses this tile JSON protocol, kind of community protocol, to go out and fetch information about the metadata, the origin of the tiles and what the tile lattice is. And then it configures the source for you. So this is an interesting example. It's actually something that was a bit awkward in OpenLayers 2 to do. But these sources can be asynchronously configured. So in this case, the application developer doesn't know everything about the configuration of this tile grid or the tile matrix set. But you just specify how you can get that information. 
This source goes out, fetches the capabilities, and then it configures that source. And when it's available, it will be rendered in your map. So a lot going on behind the scenes there. But the important part for you is it's just easy to use. And of course, you can just use any general XYZ layers. So I want to play around with the map and show off a couple other features here. I already showed you this animated zooming when I click. I shouldn't be centering on my home over here. You guys help me find where we are now. One thing that you should be able to see as I zoom in, we have, pardon? Oh. OK. No, no. You want to do something. I'd rather just fly around the map and not really care where I end up, I guess. So I'm going to zoom in here, and because of the latency here, you can watch the tiles load. I've given talks and tried to demonstrate this before, and you have a very fast connection. Of course, you can't see that it's not as pronounced. But it does demonstrate something that I want to show you. So I've zoomed in here over Birmingham. And now I'm going to pan over to the side. And you can see, instead of going off the map and seeing what's beyond the edge of the world, you see these lower resolution tiles. And then, as I said, and wait, the tiles at my target resolution are loaded. So this provides a really nice effect, particularly with imagery. So I will push this map out a little bit and look at tiles that are already got loaded. But you can see as I pan over here, I get these lower resolution tiles that are shown sort of as a placeholder, and then the tiles at the appropriate resolution come in. And there's some really sophisticated tile handling going on behind the scenes here as well. There's a tile queue that determines the priority of the tiles that are fetched. And it is prioritized around the focus of my mouse. So as I zoom in to this location, tiles where my mouse is over on that side will be prioritized. So those will be loaded first, and then these other tiles come in after that. So some really nice tile handling. I hope I haven't stolen too much of Tom's thunder. It's largely kudos to him, but I think that's really great. And it provides a really nice experience for the user. I also am not going to demonstrate it in this example, but it's important to note as well that we're using these standard resolutions, standard zoom levels, for these XYZ tiles that are available here. OpenLayers 3 has a capability to fetch tiles at a resolution that is available on the server, and then display them at any resolution on the client. And that's sort of what you're seeing when you're seeing these lower resolution tiles displayed in the background. But you don't have to just restrict your map to the standard zoom levels that are available on your server. You can zoom to any resolution, and we will scale those raster data sources at the display that you've chosen on the client. So that's a little bit about layers and sources working with raster sources in particular. In the previous examples, I've talked about some of the interaction that you want to do and shown the controls that are on the map. You might want to provide more control to your user than we provide by default. So it's important to know about these two concepts, the interaction and control. 
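Before moving on to interactions and controls, here is roughly what swapping sources under the same tile layer type looks like in the released API; the Bing key, imagery set and URL template below are placeholders:

    // OpenStreetMap tiles, no special configuration needed
    var osm = new ol.layer.Tile({source: new ol.source.OSM()});

    // Bing Maps aerial imagery with labels (requires an API key)
    var bing = new ol.layer.Tile({
      source: new ol.source.BingMaps({
        key: 'YOUR_BING_MAPS_KEY',
        imagerySet: 'AerialWithLabels'
      })
    });

    // Any general XYZ tile service following the usual z/x/y URL scheme
    var xyz = new ol.layer.Tile({
      source: new ol.source.XYZ({
        url: 'https://tiles.example.com/{z}/{x}/{y}.png'
      })
    });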
In OpenLayers 3, interactions are things that have no visible representation or no representation in the DOM, and they literally just take browser events and compose them into higher level events or take some action and, let's say, move your map. So the drag interaction is just a very basic thing that takes browser events and composes them into a drag sequence, drag start, drag, drag end. And that can be used to do things like drag rotate the map. I'll show you that in a second. You've seen drag panning as I go through. There are also touch-related interactions. So touch zoom works on mobile devices, and then keyboard interactions that can make your map accessible. So you can allow the user to focus your map with a tab key or something and then use the keyboard to interact with the map. Controls, by contrast, are things that do have a representation in the DOM. So the zoom control provides those buttons where I was zooming in and out on the map. The scale line control, I'll show you in just a second, displays a scale line on your map, et cetera. So we're trying to maintain that distinction, and we're sort of testing this division in the architecture as we go. Maybe that we have, and our idea is to have interactions that are reusable so you can have a control that orchestrates a number of different interactions and turns them on and off and takes advantage, reuses these interactions in multiple controls. So this example has a couple non-default controls built in. The first example I showed you, I didn't touch controls when I created a map. In this example, I have extended the default controls with custom ones. So the one up here in the top is this scale line control. If you've got really good eyes and watch that, you should see as I zoom in and out, that scale line is animated with the resolution on the map. So it's showing the resolution at the center of the map at all times. If I zoom way out here, you should see that even as I change the center, the scale line changes because we're changing resolution as we go further north in that case. So that's the scale line control. I mentioned the zoom controls, those let you zoom in and out. Another control that we have included but that is not enabled by default is this geolocation control. So I'm going to see if I can locate ourselves here. And that gets our location and bounces in. I'm going to zoom in a bit further here, see if it was close. At least I saw Nottingham there. Anybody start recognizing campus? All right, so I can explore around here, hit locate again and see if it takes us down. It seems like it did find us accurately. So one thing I mentioned but haven't shown yet is this rotation, drag rotation interaction. So I'm going to Alt, Shift, Drag and I can rotate the map around the center. If you're looking carefully up at the far top right, you can see a little slider there that changes as I rotate the map around. And this is an HTML5 has added the range type input. So this is just an input that takes a value, a rotation value. And if you provide this range type, supported browsers will actually show you a little slider like this. And what I'm showing you here is this two-way binding between that input element and the map's map view rotation. So as I move this slider, the map rotates. As I rotate the map with the drag rotation interaction, the slider moves. And the library allows you to do that sort of two-way binding with things like input elements or other elements in your application. 
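A sketch of extending the defaults the way this example does, again with the released control and interaction names (the geolocation piece of the demo is left out here):

    var map = new ol.Map({
      target: 'map',
      layers: [new ol.layer.Tile({source: new ol.source.OSM()})],
      controls: ol.control.defaults().extend([
        new ol.control.ScaleLine(),   // animated scale bar
        new ol.control.FullScreen()   // full screen button
      ]),
      interactions: ol.interaction.defaults().extend([
        // shift+drag rotation and zoom; alt+shift+drag rotation is already
        // part of the default interactions
        new ol.interaction.DragRotateAndZoom()
      ]),
      view: new ol.View({center: [0, 0], zoom: 2})
    });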
I'm already in full screen, but if you weren't in full screen, we provide a full screen control that expands your map to full screen. It gives a really nice immersive experience. And so those are some custom controls that you can add to and extend the defaults with. OK, next goal might be to work with vector data. All I've shown you so far is working with raster data with raster tiles. But big focus in OpenLayers 3 is working efficiently with vector layers. So previously, I was showing you tile layers and tile sources. In this example, it's a vector layer. We're using the base vector source. And here, I've just given it the URL path to my data. I told it this is going to be GeoJSON. We currently have support for GeoJSON, TopoJSON, GML, KML, GPX, vector data sources. Did I miss any? And we're expanding that list. So we plan to have feature parity with OpenLayers 2 in terms of the wide variety of formats supported there. And again, this source describes where the data is coming from. And then I tell it what format it's in, and the library will parse it for you, transform coordinates on the fly, and then render those. So looking at the vector layer example, here is just a countries data set. Is it a zoom in here? You can see labels coming in. One thing I want to mention about the vector rendering, this isn't, you can see it's pretty low quality data here. But one of the things that's worked into the vector render right now is this internal tiled rendering. So while you're animating, while I'm zooming in and out or doing this animated pan, I'm not actually going back to the data and re-rendering with every move. I'm using cached vector tiles from the previous render and using those to display while we're in this animated transition between resolutions. Maybe that we changed the strategy for this, we're experimenting with strategies that are going to be the most efficient for rendering large amounts of data and still providing really nice interactivity. So that was a pretty boring example. It didn't have much style, just white polygons with blue outlines. The next thing you might want to do is give your layers some style. So the style handling in OpenLayers 3 is pretty sophisticated. We intend on making it as convenient as possible. This example shows creating a style with two symbolizers. So this is a fill symbolizer and a stroke symbolizer. And if you give it polygon data, that will render your polygons with a fill and stroke. If you were to give that line data, it would just stroke them. And if you were to give it point data, that wouldn't render them at this point. I can talk to people afterwards about how to render point data. I don't have an example of that. So that was just saying render all my data in the same way. Maybe what you want to do is provide some sort of selector or some filter. I want to render this portion of my data in one way and the rest of my data in another way. So in CSS, these are selectors and then your style declaration. In OpenLayers, we have rules. So rules have a filter which represents your selector. In this case, I'm saying I just want the highways to be rendered with this symbolizer. So a three-pixel stroke for the highways. And then I could provide another set of symbolizers, I would say, and the rest of my data rendered them with a different set of symbolizers. So you can stack these rules in here and use these filter expressions to determine which features are selected in rendering your data. 
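The vector layer, source and style just described look roughly like this; the exact class names shifted during the OpenLayers 3 betas, so this is the shape the released API settled on, and the file path, colors and attribute name are placeholders:

    var countries = new ol.layer.Vector({
      source: new ol.source.Vector({
        url: 'data/countries.geojson',
        format: new ol.format.GeoJSON()
      }),
      // one fill and one stroke applied to every feature
      style: new ol.style.Style({
        fill: new ol.style.Fill({color: 'rgba(255, 255, 255, 0.6)'}),
        stroke: new ol.style.Stroke({color: '#3399CC', width: 1})
      })
    });

    // Rules/filters can also be expressed as a style function that picks the
    // symbolizers per feature (the 'type' attribute here is hypothetical).
    var roadStyle = function(feature) {
      var isHighway = feature.get('type') === 'highway';
      return [new ol.style.Style({
        stroke: new ol.style.Stroke({
          color: '#ffcc00',
          width: isHighway ? 3 : 1
        })
      })];
    };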
So I grabbed a simple, this is a New York streets data set and rendered these. I couldn't differentiate that. The data didn't have much variety in terms of attribute values, but there's a highway here running through Manhattan. And then these surface roads are shown in yellow. If you're looking closely, you can see that these roads are cased. So there are two symbolizers. There's like an eight-pixel wide stroke and then a six-pixel wide stroke over it. We intend to provide really robust control for Z-index support. Right now you can see I have this one example of a road going under, another example of a road going over. In some cases, the line cases don't overlay properly, but we want to give you that control so that you can say, I want the wider stroke to be underneath all. Let's say I have a lower Z-index value and the narrower stroke to have a higher Z-index value. So next goal would be to let users interact with data. This is an example of just using an overlay. There's not going to be any real data here. I've got to really speed up because I'm out of time. This is just an example showing Bootstrap popovers to add to your map. So I'm clicking on the map and I'm showing the location where I clicked in this popover. OpenLayers doesn't provide a pop-up class itself, but if you're using something like Bootstrap that does provide this jQuery plugin for popovers, you can use that. And what we do for you is anchor that at the location that you set. So as I rotate around here, I can see that pop-up stays in position. As I zoom in and animate, it moves smoothly with the map. That was just really interaction with my location. Really you want to typically interact with the data. In this case, I'm interacting with vector data that I have rendered on the client. This is sort of a nauseating example to look at, but that's intentional. The idea is to show you that our goal is to efficiently render far more features than you actually want to show people. So here are 20,000 points and they rendered quickly as you saw. And as I'm mousing over, I'm showing you the feature identifiers for those points. So you should get the impression that this is very responsive, one nice bit of functionality here too. My mouse is hovering over one, two, three, four, five, six, seven, eight features and then you get the feature identifiers for all of those back. So it's not just about the features on the top there. As I zoom in, you can see that I get nice animated zooming still with those 20,000 points and they're rendered efficiently. Final goal I'd like to show is just to allow editing. And this is very experimental work at this point or unstable work, I should say. Not necessarily unstable or experimental. But we have these two interactions that are in a branch right now being worked on, the select interaction and the modify interaction. And these interactions will be composed in an editing control. And I'll just show you some of the behavior that those might have. So I'm looking at a states layer rendered on the client here and I'm going to zoom in and show you what the select interaction looks like. So I'm clicking, I'm going to double click there, clicking to select the layer that's shown in a different style. As I hover close to the layer, you can see this point that should show up as I get close to vertices in this vector data. And then I can just drag those. So this is pretty basic vector editing functionality. You can see that I'm destroying the topology in my data when I'm doing this sort of editing. 
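Backing up a moment, the popover anchoring and the hover hit-detection from the earlier demos look roughly like this (the element id and the popover plumbing are placeholders):

    // Anchor an arbitrary DOM element (e.g. a Bootstrap popover) at a coordinate
    var popup = new ol.Overlay({element: document.getElementById('popup')});
    map.addOverlay(popup);
    map.on('click', function(evt) {
      popup.setPosition(evt.coordinate);
      // ...then show or update the popover content here
    });

    // Collect every vector feature under the pointer, topmost first
    map.on('pointermove', function(evt) {
      var ids = [];
      map.forEachFeatureAtPixel(evt.pixel, function(feature) {
        ids.push(feature.getId());
      });
      // ids now holds the identifiers of all features under the cursor
    });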
So what we want to be able to provide is the ability to maintain this topology without without you having loaded a topological data structure or providing a bunch of rules, but just by interpreting the user's interactions. So if I get rid of those edits, select a different feature, I'm going to shift click to select a number of other features. Now I have shared vertex editing going on. So as I drag around these points, I'm changing the vertices of the adjacent polygons and maintaining the topology of my data there. So I decide I want to reshape Colorado here. It shouldn't be as big as it is. I like these mountains down here. And I can efficiently have this sort of shared vertex editing with just some simple user interactions. I didn't have to specify my topology rules. I just let the user select a number of features and move the vertices together. So the final goal I hope that I've motivated you to engage in is to get involved in the library. We encourage people to become contributors. Please find the code on GitHub. I saw some people make a mistake in a sprint earlier. We're working in the OL3 repository under the Open Layers organization. And then there's a mailing list. And soon we will have an updated website at this ol3js.org location. And if you're interested in playing around with these examples, you can see this whole slide. The slide's up at this URL at tshub.net working with OL3. And thanks, sorry for going a little bit over. I hope I've got time for a few questions.
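For reference, the select-and-modify pairing demoed above — still in a branch at the time of this talk — ended up looking roughly like this in the released API:

    var select = new ol.interaction.Select();
    var modify = new ol.interaction.Modify({
      // edit whatever features the select interaction currently holds
      features: select.getFeatures()
    });
    map.addInteraction(select);
    map.addInteraction(modify);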
OpenLayers 3 is a complete rewrite based on the latest in browser technology. This talk will focus on best practices for application development with OpenLayers 3. Covering simple maps in a page, integration with popular MV* frameworks, and native-wrapped mobile apps, we'll look at strategies for building mapping functionality into your applications. OpenLayers 3 aims to provide a high performance library with a wide breadth of functionality. Come learn about how it differs from OpenLayers 2, what makes it stand apart from other alternatives, and how you can best leverage its functionality.
10.5446/15507 (DOI)
Hello everybody, for those that were here before, my name is Andrea. I work for GeoSolutions, which is an Italian-based company contributing to GeoTools, GeoServer and a number of other projects. And in this presentation, I'm going to show you some examples of how to use GeoServer SLD and GeoServer CSS, which is a new extension, to solve common cartographic issues when drawing maps. And yeah, all the examples will, or almost all of the examples, show both the SLD syntax and the CSS syntax, just so that you can have a quick comparison of them. For the SLD, I'm going to only show the important part of the style sheet, because the style sheets are always very long, so I'm going to focus only on the part that I want to show you. The equivalent CSS is always shown in full instead, because it's so compact that I can almost always stick it into the slide fully. SLD and CSS have their pros and cons. As I said, CSS is very compact, it's very expressive, but it's not a standard. SLD is. So if you go for an interoperable approach, you go for SLD; if you want to quickly make maps, you probably want to go with the CSS instead. So I have an example map, which is a sort of a real world map. I'm going to show you some pictures of this map. So we start up with some digital elevation model and the city borders. We start showing the roads, then more details about the roads and so on, the buildings, then the parcels and so on. As you can see, also the styling of the roads changed as I switched representation. So this is kind of what you would expect from a multi-resolution map. That is, never fill it with too much information, and change the styling of the various elements, hide and show them depending on the current scale, so that you have appropriate content for the scale where you're looking at the map. So the first thing I'm going to talk about is raster styling, which, I mean, raster styling is generally pretty easy. You either have an RGB aerial or satellite image, which you have to show as is, and you basically don't have to do anything. Or you have a digital elevation model or any kind of other geophysical parameter, pressure, temperature, whatever, concentration of a pollutant, and then you have to set up a color map. A color map in SLD 1.0 is a list of color map entries where you map a color, with an optional opacity — in this case opacity 0 means transparent — to a quantity. And then you provide a list of values with a list of colors and GeoServer linearly interpolates between the various combinations. So you basically are giving it tie points and any value in between is interpolated linearly between the two colors that you gave for those tie points. The CSS expression is just slightly more compact. This is one case where CSS and SLD are similar. I just switched the kind of representation. Instead of having it display the linear interpolation between the tie points, in this case, I modified the type of approach to intervals. In this case, it's a solid color between one tie point and the other. So you get basically a polygon representation instead of a continuous display, which makes sense in case, I don't know, you have a pollutant and you only want to show the areas where the concentration is above a certain limit. Then as I said, scale-dependent rules are the basics, the one thing that you really have to master the day you want to start making maps. It's all too often forgotten or little used, yet very important. Data exposed on the web is multi-resolution.
It's not your old paper map that has a fixed resolution and you decide on a display and that's it. A web map is about showing data different resolutions and different resolutions you want to show data in a different way or you want to show different data. So the styles need to take that into account and progressively show details because otherwise the map gets too crowded and nobody can actually read it. Also for performance reasons, you don't want to have to display 100 million lines in a tiny map which would just result in a black blob, but would also take a lot of time to do that. This is one case where the same tool gives you at the same time a good-looking map and high-performance one. This is an example of scale dependencies. As you can see, I progressively show more and more detail and I end up hiding some of the layers. Sorry. So we start with the digital elevation model showing and at some point we turn it off because the grid cell of the digital elevation model gets unsuitable for a high-resolution display. There are two cores and I keep on adding that at first, the highways, then the roads, then the buildings as I zoom in and also the display of the roads goes from single line to case line and again I do it when I change the scale, when it makes sense to actually change the display. So they'll turn. Okay, so in SLD, this is controlled by two elements, mean scale denominator and max scale denominator. In CSS, there is a property that you, a pseudo property that you can filter on for a rule. So this part is actually a filter and it says, well, when the scale actually the scale denominator is below 75,000, then apply this which applies a labeling which I would otherwise not show at one to one thousand, for example. And the SLD does the equivalent thing. Another thing that you might want to do is alternative rendering. So I don't hide the data, but I change the way in which I displayed and as you see, as I zoom in, I change the way I display the roads from single line to case line. In the case of SLD, I'm playing with both mean and max scale denominator to turn on and off the rules. And same here, I do play with the max scale denominator so that I turn off the case line display while this one goes from 10,000 to 75,000. So single line goes from 10,000 to 75,000 when I'm above 10,000, then I display a case line. And this is just some chunks of the full style. This is instead the full style in CSS. So again, when the scale is between 10,000 and 75,000, you can almost read it. I can use a gray line two pixels wide and look at this one. This is cascading. The ability of CSS to combine rules that are active at the same time, which SLD doesn't have. So basically for any road, when the scale is less than the scale denominator is less than 1,000 to 75,000, then I apply the labeling, where I pick the label from an attribute, label name, and I apply some parameters that I'm going to talk about later when we talk about styling. When the scale is less than 10,000, instead, I go for the case of display, which is done by specifying the stroke more times than one, so I say first gray than white, outer line inner line, the stroke width 17 pixels and then 12 pixels. So the gray one will be 17 pixels, the white one will be 12 pixels, and then I apply the Z index so that all the gray lines are painted first and then all the white lines are painted second. So this is important to get the proper display crossings. And I ask for round line cap and round line join. 
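Reconstructed from the description above, the CSS in question looks roughly like this; the attribute name, font settings and exact scale breaks follow the talk, so treat it as a sketch rather than the exact style sheet shown on the slide:

    [@scale < 75000] {
      label: [label_name];
      font-fill: black;
    }
    [@scale > 10000] [@scale < 75000] {
      stroke: gray;
      stroke-width: 2;
    }
    [@scale < 10000] {
      stroke: gray, white;      /* outer casing, then inner line */
      stroke-width: 17, 12;
      z-index: 0, 1;            /* paint all casings before all inner lines */
      stroke-linecap: round;
      stroke-linejoin: round;
    }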
So as you can see, this is very compact and allows me to change the display depending on the scale without having to repeat the labeling portion, which is common between the two styles. Another thing that it's often missing or misused is the use of hatches, patterns, dashes, and plates. So in both SLD and CSS, you can fill polygons with the solid color and that's something that we can do easily. Or you can fill polygons by repeating symbols, filing them, so that we have a different kind of display like this. In the case of SLD, we have this very long-winded syntax to say, oh, OK, use this little image to fill me the polygon. In CSS, this is the full style. This is just part of it, where I say, OK, when the MPFCC attribute, which is a classification attribute, is this value, which matches cemeteries, which is also in SLD, but it was so long that I had to cut it. I'm going to use this mark, this image, to fill it. So as you can see, two lines are done. We can also fill with the true type fonts. If you come from an Asri background, it's common to have lots of simple libraries as true type fonts. G-server can refer to them. What I'm doing here, if the code is, again, the one of the cemeteries, paint me a light green background plus apply this symbol. I'm referring to the font and I'm giving the code of the character. Plus I'm saying, oh, OK, while you're at it, please add an 8 pixel margin around the symbol to space it out of it, so that all the crosses don't get touching each other, which is an extension. It's also available for SLD, of course. Hatches are supported via extended well-known names in SLD. SLD provides the notion of well-known marks, which are five or six names like circle, square, cross, and so on. G-server has extended that concept and we have the ability to add new kinds of markers by code. One of them is called times, which is a cross. That's how you do it in CSS. I'm saying, oh, OK, fill me with the times symbol. Now, there's a catch. When I'm filling with a graphic symbol like a mark, I also have to specify the color and the size of the stroke and how do I fill it if it's an area. To do that in CSS, we have to say something like this. Column fill, which means inside that fill, please use this stroke, which is this color, and give me a size of eight. The bigger the size, the coarser the pattern gets because the X becomes bigger. This is some of the other symbols that we support built in. You cannot more. So you have the basic catches there. Then you can have dashes. That is the ability to display a dotted line or a dot line, whatever, support. In CSS, you would say, OK, give me a dash array of two, which means two pixels span down, two pixels span up, two pixels span down, two pixels span up, and repeat and so on. You can have more than one value. If you wanted to have a dot line, you would say, two pixels down, 10 pixels up, 10 pixels down, and 10 pixels up again, and then repeat from the beginning. That would give you a dot line representation. Again, the CSS representation is quite compact for this. One twist that GSOvr adds, which is not fully part of the specification, not the DSLD1O, and it's more flexible than what the SLD1-1 supports, is the ability to use a mark and repeat it along the line, but also specify a dash array for it. The idea is that I'm repeating a circle along the line, but I'm using the dash array to space it so that it's repeated with some space within. 
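Two of the CSS fragments being described, written out as separate sketches — mark name, colors and sizes are examples, and the :fill pseudo-selector is the "colon fill" mentioned above:

    /* dashed line: 2 pixels on, 2 pixels off */
    * {
      stroke: #333333;
      stroke-width: 2;
      stroke-dasharray: 2 2;
    }

    /* hatch fill using the extended 'times' well-known mark */
    * {
      fill: symbol("shape://times");
    }
    :fill {
      stroke: #808080;
      size: 8;
    }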
Then I use another line, which is also a dashed line, and I'm using the dash offset to synchronize the two patterns so that they don't overlap with each other, but they alternate with each other. The ability to use a dash array with a symbol to be repeated is unique to GeoServer. SLD 1.1 has the idea of a gap between the symbols, but it would be a uniform one. GeoServer allows you to do non-uniform stuff because it uses the dash array, which can have more than one number. Road plates. The idea is that I want to put a label on a map, and I want to have maybe a rectangle, maybe an icon below it, and maybe adapt it to the size of the label. In this case, GeoServer has an extension on top of the basic SLD, which allows you to put a graphic element into a text symbolizer, which is something that you normally wouldn't be allowed to do. I'm basically saying, oh, okay, for this label, please use a mark. I didn't specify the name of the mark, but it's a square. Then there is another set of options that say, oh, okay, with the mark, please resize it to match the proportions of the label. So I'm stretching it, so it's no more a square, but it's a rectangle. And I'm also adding a graphic margin, that is, some spacing between the label and the rectangle. And that's the result. Now, this is a very long style, and this is the equivalent in CSS. I'm doing basically the same thing; in CSS, the graphic is called shield, and I'm saying, oh, use a square. And I'm setting the resize mode and the margin here. So this is how you do it in CSS. When it comes to point symbology, I have some sort of thematic map made with points. I have points which represent locations, and I want to display each one with a different symbol depending on the nature of the location. Some are shopping centers, or there are schools, or there are government buildings, and so on. And I have, again, them categorized by that MTFCC code. I have to put together a quite long-winded filter to specify the shopping centers, because they are something like, oh, MTFCC is C3081, and then the name of the thing has to have "shopping" inside. And then I choose this particular icon, and I have to repeat this like 16 times, resulting in — no, sorry, six times, resulting in 600 lines of SLD to create the various symbols and specify a label for them. When I do it with CSS, this is the only one where I could not fit the CSS in the slide. It's 70 lines, but the idea is that, again, I'm using the power of cascading. I'm saying for any — the star means match any — for any of those points, please use this label, full name, and if I don't say otherwise, please use a black circle to display it. But if I'm matching a shopping center, then use this icon instead. So I'm basically setting the basics and then overriding them only in the parts that I need to override, with the result that 600 lines of SLD become only 70 lines of CSS. And then if I'm willing to modify my data, I can shrink it further. The idea is, why do I have to set up all these filters in the SLD? Basically I could put the name of the icon that I want to use directly in the data and pick it from the attribute instead. So this is some SQL that I run to do that. I basically added a new attribute and I'm sticking into the data the image that I want to use. This is, again, a GeoServer extension where, instead of using the full path as before, I have this parameter, dollar image, where I'm saying, okay, this last part of the path, you pick from the image attribute.
So I only have to display one, sorry, to prepare one rule instead of six. And this works, of course, also in CSS and reduces the overall styles to 15 lines, so much more contact. And this is the result that I get. When it comes to labeling, geoserver has lots and lots of vendor options to control how the labels are displayed. Just to make you some examples, this row is following the line, this, sorry, not this row, this label. And this one is actually being wrapped on two lines. And this is a set of vendor options that control this. In this case, for line labeling, I'm saying, vendor option name, follow line, true, there is curved labels. Repeat the label every 250 pixels, so if I have a very long line, I display it multiple times. I'm grouping the labels because most of the time, this MapleTen Avenue is actually made of one feature, another feature, and another, and another, and another, and another, and another, because they are broken at crossings. But for the sake of display, I would like to have the whole line instead. And this group vendor option makes geoserver figure out which lines have the same label and it fuses them. And then there is max displacement that allows the system to move the label along the line a bit in case the place where you wanted to put it is already busy with another label to avoid visual conflicts. The equivalent, the fully equivalent style in CSS we already saw, the only thing to point out here is how we specify the vendor options, which is here, same as above. Point labels, for point labels, we don't need to do much, but some of them are very, very long and when I'm labeling a line, it's fitting in a sense that the line is long, so I have a long label along it, right? But when it comes to a point, I don't want a point to have a label that takes half of the map. I can have geoserver automatically wrap it to a certain length, so this is a vendor option named autofrap at 100 pixels. So if the label goes beyond 100 pixels, geoserver wraps it up for me automatically in one or more lines to satisfy the maximum length. For polygon labels, again, we are using autofrap to have the label be packed in a small space. We are applying the max displacement and then there is one parameter which is probably not very well known to even to a long time geoserver users, which is the goodness of fit. Basically geoserver want to try to display a label if the label is much bigger than the polygon itself. So basically geoserver is trying to compute how big the label is compared to the polygon. And the label is going to be displayed if 70% of it, by default, fits into the polygon. So we allow the label to go a bit outside but not much. With 0.9, I'm actually allowing 90% of the label to go outside the polygon because in this map I actually wanted the labels to be visible even if the polygon was small. We can also apply the concept of label obstacles, which is the idea that some polygons, some lines and some points should not be overlapping labels. So I'm basically saying that any of these symbols is a label obstacle so that I don't have this kind of situation where Boulder Country label overlaps with this symbol. And here we go. I have CTL showing but the Boulder County one is not because that point has been marked as a label obstacle. Transformations, we have two kinds of transformation in geoserver geometry transformation and rendering transformation. I talked about rendering transformation a bit in the presentation before. 
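Pulling the labeling vendor options just mentioned into one place before moving on to transformations, a TextSymbolizer using them looks roughly like this; the attribute and values are examples, while the option names are the documented GeoServer ones:

    <TextSymbolizer>
      <Label><ogc:PropertyName>FULLNAME</ogc:PropertyName></Label>
      <!-- line labels: follow the geometry, repeat, group and allow sliding -->
      <VendorOption name="followLine">true</VendorOption>
      <VendorOption name="repeat">250</VendorOption>
      <VendorOption name="group">yes</VendorOption>
      <VendorOption name="maxDisplacement">50</VendorOption>
      <!-- wrap long labels and relax the polygon goodness-of-fit check -->
      <VendorOption name="autoWrap">100</VendorOption>
      <VendorOption name="goodnessOfFit">0.9</VendorOption>
    </TextSymbolizer>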
The idea is that before rendering the map I might want to apply some change to the geometry that I'm displaying or some change to the whole layer that I'm displaying. The idea of geometry transformation is that I take the geometry and I apply an offset function on top of it to offset it a bit to do what? Generate a simple shadow effect. So I'm moving the geometry, painting it with a darker color and then on top of it I painted a normal geometry. There is a very large number of functions that can be applied to geometries to extract vertices, to extract the beginning and ending of a line and so on. Here is the CSS version of the same style. See how much easier it is to apply the offset. This one you can actually read, the offset, the attribute which is called djom by these offsets. This is another example in SLD of a geometry transformation. In this case I'm extracting the end of lines because I want to add an arrow at the end of lines and I'm actually doing some magic which is probably, oh yeah, the angle. The rotation of the arrow has to be aligned to the end of the line. I'm applying another function which is called end angle to get the ending angle of that line and use it as the rotation of the arrow so that the arrow aligns with the end of the line. Then we have rendering transformation which is the concept of applying some spatial data processing on the fly to provide a different representation of the data. In this case I'm calling the GS contour WPS process on the fly, extracting a number of isolines that I'm displaying on the map. This is the only one feature that is missing from CSS. In CSS to date you cannot express a rendering transformation yet. There is some work going on to add the syntax for this. This is very powerful because as I said before, this kind of transformation are applied only on the area that I'm looking at and only at the resolution I'm looking at so that they are actually very fast because they are not processing the whole dataset, they are not processing the dataset at its native resolution, they are processing it at the resolution I'm looking at. In a look for our blog on the internet in a few days I'm going to share the presentation material and the Azure Server Data Directory that does all the SLD styles or all the CSS styles and the data that you have seen on the screen. This is it. Yes? With the labeling context, is there some way to give priority to some labels? The option was always there but I never actually talked about it. Let's see here, SLD priority and I'm giving it a number. The higher the number, the higher the priority for that label. Yes, that can be an expression. You can pick it from the database if you want. Yes? Anything else? Please? Using CSS, not at the moment. You have to do it within the user, the GeoServer user interface. But we are moving towards making CSS and SLD peers. At the moment CSS is actually turned on the fly in SLD before GeoServer uses it. We are working towards making them interchangeable. At that point you will be able to upload and CSS.
Various software can style maps and generate a proper SLD document for an OGC compliant WMS like GeoServer to use. However, on most occasions, the styling allowed by the graphical tools is pretty limited and not good enough to achieve good looking, readable and efficient cartographic output. Topics that will be covered are as follows: - Mastering multi-scale styling, choosing the appropriate style and content for the various map scales - Using GeoServer extensions to build common hatch patterns - Line styling beyond the basics, such as cased lines, controlling symbols along a line and the way they repeat - Leveraging TTF symbol fonts and SVGs to generate good looking point thematic maps, line and fill patterns - Using the full power of GeoServer label layout tools to build pleasant, informative maps on point, polygon and line layers alike, including adding road plates to your map - Leveraging the labelling subsystem conflict resolution engine to avoid overlaps in standalone point symbology - Blending charts into a map - Dynamically transforming data during rendering to get more informative maps without the need to pre-process a large number of views, such as on-the-fly contour extraction, heat maps, and wind maps from raster data - Leveraging the analytic power of spatial databases to build dynamic thematic maps based on SQL views - Performing cross-layer filtering and parametrizing it to perform informative cross-layer containment and neighborhood searches. The presentation aims to provide the attendees with enough information to master SLD documents, allowing them to produce amazing-looking maps on their own. At the end of the presentation the SLD will no longer be the cartographer's enemy.
10.5446/15502 (DOI)
Thank you. So welcome to this talk on 3D web services and models. Actually, we have been working on 3D web diffusion for a couple of years and there are a lot of things available and a lot of things moving on in this field, and I wanted to take the opportunity of this talk to give a panorama view of what exists. So I will start with a couple of examples. So actually, 3D data, 3D visualization on the web, it's not only something we can think of as a dream, it's already there. Big players do it. For example, here we have the HERE maps, or the Nokia map. This is based on WebGL rendering and it's online, you can use it. There is also the new Google Maps web map that uses WebGL 3D rendering. And as we can see in this example, it's not just 2D and a half, it's real 3D, as you can see through under the bridge. So we have real 3D data displayed on the map. So besides the big players, there are also open source solutions, and I will show one. I chose to show OpenWebGlobe because Martin is one of the core developers of OpenWebGlobe, and also because if you look at the rendering of this image — it's based on aerial imagery, WMTS requests, plus a JSON tile that is the digital elevation model, and the rendering is done in the browser — and if you look at the quality of the image, you couldn't distinguish it from a picture you had taken yourself from the opposite mountain. The quality is really, really great. So let's just take a couple of minutes to have a look at the rendering of a WebGL globe in 3D. So here we have the globe and when you zoom in, you have tiles that come in, you have tiles with the digital elevation model, and if you go to some nice place in the Swiss Alps, for example, you have real 3D data displayed. So this shows that not only big players can implement such solutions, but also the open source community has the knowledge and the ability to develop great rendering 3D globes. Not only terrain and imagery can be displayed but also, for example, textured buildings like at the EPFL, just give it the time to load the data. So you have here 3D buildings that are actually KML buildings, displayed on top of this WebGL globe. So let's go back to the talk. So actually, there are a lot of standards efforts ongoing out there in this geo 3D web, but actually what do we want? That's the question: what do we actually want? So we want to be able at least to display 3D scenes. So aerial imagery for sure, textured on digital elevation models. We want buildings with their texture. We want labels. We want markers to query. We want interaction with the map like navigation, selection, pop-ups, measure and so on. And one thing, we also want a large perimeter, let's say a worldwide perimeter. Maybe 3D worldwide doesn't make that much sense, but we want to be able to be local in a 3D context and navigate to another local place in a 3D context without losing the link. So we need to be able to go back onto the globe to go to another place. So there is a continuum in the navigation. So for sure in 2013, we want a web solution that runs without any plug-in. It should be cross-platform and of course also cross-device. And as we all stand for open source, it should be open standards, open formats and open source code. So what's available? There is one thing I would say now that is for sure, there is a standard, this is WebGL, that's the way to display 3D graphics on the web.
So WebGL, this is for Web Graphics Library and it's a JavaScript API that allows to render graphics using the power of the graphic processing unit. We don't use the processor, the CPU to process the image, we use the GPU and it's dedicated for this task so it's very efficient and the result is you can navigate very smoothly on a web application through WebGL. So when you write a WebGL program, so you've write some JavaScript code for the interaction but also some shader code that actually assign orders to the graphic card. And WebGL, we said we want a solution that runs without any plug-in and WebGL is implemented in every browser and even in the IE Internet Explorer health version. We've done some tests with OpenWebGlob and OpenWebGlob is rendered well in IE 11. That means we are really on the good way, on the good, the great to have a broad usage of WebGL. And also it's mobile device ready, there are still some performance issues due to the hardware but with this trend, it's going to be fine also. So WebGL, this is for sure what we need for the rendering but also we need standards in order to have interoperable solutions. And so as we come from the geo world, we look at the OGC and here there are already some standards for this. At least there are efforts ongoing in this direction. So there is the 3D Portrayal Service standard which has two proposals that are ongoing now. The WebView Service kind of WMS like and the W3DS Service like kind of WFS like. The W3DS Service was used for example by the Brondenburg project that I showed you earlier. So you can ask, you can request the data through an OGC standard, that's okay. There is for example KML standard that allows to display 3D buildings. We've seen this at the EPFL but maybe it's not okay to display for the whole world as KML buildings. There are other standards like city GML but that's not for the Web. It's the standard to, for storing the data but it's not dedicated to transfer this data to the Web. So actually the OGC defined the Web Service the way to query 3D data but there are other guys around the world from other fields that work also towards 3D. So for example there are guys from the Web like Web 3D group that have defined a data format to send 3D data to a Web browser and that's for example X3D data format. That's interesting because this could already be used in the Brondenburg project with a geo server. And there are other guys from the graphic world that have issued the WebGL standard and that have also issued Colada, also a standard to store data but not to transfer it and they are working now on the GLTF transfer format that should allow actually to transfer very efficiently data to WebGL, to OpenGL, to OpenGL on mobiles. So there are a lot of efforts going on on this and we will dig into some of the highlighted standards. So I give you Martin the microphone. Thanks everyone.
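For reference, the entry point that everything above builds on is just a canvas element and a rendering context; a minimal sketch, with a placeholder element id:

    var canvas = document.getElementById('globe');
    var gl = canvas.getContext('webgl') ||
             canvas.getContext('experimental-webgl');
    if (!gl) {
      // this browser has no usable WebGL support
    } else {
      // all drawing goes through calls on `gl`, with the per-vertex and
      // per-pixel work written as GLSL shader programs compiled for the GPU
      gl.clearColor(0.0, 0.0, 0.0, 1.0);
      gl.clear(gl.COLOR_BUFFER_BIT);
    }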
In the past years, numerous open source projects have started to display 3D globes and 3D data on the web. Standardizing web services, data format and representation models is, therefore, a very hot topic. There are in particular ongoing efforts on the OGC side as well as on the W3C side. The OGC has released a draft candidate for a 3D web service W3DS, the ISO X3D standard proposes an XML-based file format for representing 3D computer graphics and the W3C is considering adding X3D rendering into HTML5. Other projects implement their own web services and formats. On the implementation side, Geoserver supports W3DS and X3D, the X3DOM library prototypes a possible implementation of X3D HTML5 integration and last but not least, browsers with WebGL support are fully able to handle the representation of 3D data on the client side. The talk is going to detail the mentioned elements, show demonstrations of existing implementations and try to suggest a possible path into the 3D web for the FOSS4G community.
10.5446/15367 (DOI)
with Blue Labs. Blue Labs is an analytics and technology company that spun out primarily from the Obama 2012 campaigns data science team. And we apply data science techniques to improve social good in the areas of civic engagement, healthcare, and education primarily. The examples today will mostly be from politics, since that's what we know best, but then we'll try to draw those threads into other areas as well. So basically, we're all here because ultimately we'd like us or our constituents or our customers to be able to make smarter decisions. And really, people are actually pretty good at making smarter decisions already. So what we really want to do is scale decisions that are too small for a person and provide people with the data that they need to make decisions that are appropriately sized for a person to make. So sort of, you know, the example is that you can't run a political campaign with a human being just not looking at each voter's profile and saying, yes, I think this person is persuadable. No, this person lives with three Republicans in a Republican district voted in every Republican primary. This person is not persuadable. You need a machine to do that for you. So we looked at what decisions can we influence with geospatial data? And what we got to is at the top, do we keep playing in Arizona? The answer is no. Do we, and then all the way down at the bottom, which doors do we knock on? Everything in between. What I find interesting is that the top area of that chart is all about analyst supported decisions. That's something that a human being sits at a sequel prompt or sits at a data prompt and crunches a bunch of data and says the data says this or the data says that. Down at the bottom is self-service tool supported. It's either a computer is making the decisions or non-technical users making the decisions based on self-service data tools. So how do we, so this is actually shifting gears a little bit specifically to predictive analytics. So identify a granular convenient unit of analysis. Within politics, that's a voter, in healthcare, it's a patient, lots of places it's a customer, a prospective customer, a student. Find all the data that you can about people at that unit. And build a coherent view, usually of a person, but it can be of a building, it can be of a neighborhood, it can be of a road. And then build models that predict the behavior at that atom of analysis based strategy based on those things. So sort of this is what it looked like for us. Where you have general analytics databases, which is something like in our case Vertica, but you know, but Redshift, Teradata, Oracle, Postgrass, and PostGIS is our geospatial database, supporting, you know, contacting the right people, custom analysis, data exploration tools. So what does propensity modeling look like? You call up a bunch of people, you ask them what they think, you figure out what the, what the characteristics are that predict someone's support, and then you go ahead and apply that to your entire file of people. However, we can do a bit better than that. That tells you how likely someone is to support your candidate. What it doesn't tell you is how likely someone is to be persuaded by a message. So to do that, we do persuasion modeling, which we take a set of constituents, split them in half, deliver a message to half those people, actually call up the other half. So one half, our candidate is great for the economy. The other half is actually just a support question. 
It's, do you support this person or that person without identifying our candidate? The purpose of that second one is to make sure that we're not just modeling on people who are reachable via the phone. So we have to talk to some real-life person at the other end of the phone to take consider the person, either a treatment or a control. Then a while later, call up everybody again, ask them what they think. Hi, I'm calling with ABC Research. If the election were held today, would you vote for this person or that person? And then predict who it is who moves based on the message that they received, versus who it is who moves with no message at all. What's really interesting is that there's actually the possibility that some folks, this is true in retail too, this is true in education, it's true across the board, some folks reaching out actually has a negative persuasion of that. Sort of, you know, we tend to call that sleeping dogs. Folks who it's better off just leaving this group alone. So, what's interesting, so again, we're at a geospatial conference. What's interesting is a whole bunch of those models are geospatial. So one example is drive time to a polling location. A person is more likely to vote in an election if they don't have to drive very far to get to the polls. And so, you know, so one area we did that with a straight radius, just said, you know, number of miles to a polling location. Then spent some time looking at, okay, how does that compare to drive time? And what you'll see is that not all of these look like circles. Like this is very much not a circle, this is very much not a circle. Even this clusters up along freeways. This was built using open street map data, grass, and some really naive ways. Just, you know, we think people drive 60 miles an hour on a freeway and 15 miles an hour on one type of road and 20 miles an hour on another. I think the routing algorithm we used would, you know, would allow someone to drive up an AMRAAM. So drive up an off-ramp rather. So it's not a absolute perfect model. It's a naive, let's spend a few days building some routing bolligons based on the data that we have. And then validate it using maps. And we were like, yep, we think, you know, this takes longer to get there than this. So in the end, what you end up with is a list of people with a model score. Contact these people, not these people. So we're done. We'll call all those folks end of the campaign. So not every strategy is at an individual level. Some things happen at a group, TV ads, locations of offices, that sort of thing. So once we have the density of our supporters, we go back and say, where should we place offices that are near the most supporters? This was a fairly simple greedy algorithm. It took the office that can reach the most people, put an office there, but took the office that could reach the next most people, put it there, next most people, put it there, so on and so forth. I'm sure that there are some retail analytics folks who could walk through a bunch of other systems, but really the goal here was how fast can we support as many decisions as possible with a reasonably small staff and a modest budget. So here is going back to the persuasion model. Here's the list of all the people who we modeled as being persuadable nearby Richmond. Turns out that a whole bunch of those folks, it wasn't actually worthwhile to reach. Just walking down a really long driveway isn't worth your time. 
And the way that we did this was an iterative algorithm where, within each volunteer's area, we took a few of the most dense areas, said let's grab everybody within a certain distance of this dense area, then said, okay, within that set of people, let's grab a bunch of people that are near those people. And walking it out, you can see, like somewhere around here, it becomes apparent — it's hard to see in this specific area — these outliers are actually huge apartment buildings. They're not errors. So if 200 people live in one place, it's worthwhile to just drive out to that building. Finally, TV advertising. Using set-top box data and location data, find the programs where the most persuadable people are watching. What you'll notice is there's almost no prime time here. And where that comes from: the naive approach is you think people who are interested in politics must watch the news, so we're going to play on prime-time news. It turns out everybody advertises on prime-time news. You can get some efficiency by advertising in places where the buying power required is smaller but the aggregate persuasion effect is still large. So the next part about it is that most of the slides that I showed were screenshots of an application that basically everybody who worked for the campaign had access to. This is the Virginia Terry McAuliffe race. It's white-label — this is the Democrats' tool, but it's the same tool that's available in a few different situations. So just walking through a little bit. Let's see if we can actually just get the browser up. So basically what the tool lets you do — the projector goes up over the screen — we wanted to keep this as simple as possible. We said let's let folks specify a choropleth layer; we called it a shading layer. Let's let them specify a dot layer; we called it a point layer. It might not have liked switching screens. So basically from here folks could select what dots they wanted to see. They could correlate dots and demographic variables. In this case, we're looking at density of GOTV targets by office, by where our field offices were. And really it was an approach of how simple can we make a GIS tool and still allow folks to find their own correlations that we hadn't thought of. We also added a tool so that anybody who knew SQL could push layers into this. They didn't need to be a GIS guru. If they returned a query that had latitude and longitude, it showed up as a dot layer. If they made a query that had a number from zero to seven and some sort of a geographical identifier, usually a FIPS, it would detect what type of FIPS it was and bring it in as a choropleth layer. Let's see. So what we're looking at now is where it is that our volunteers had contacted voters. Green means they reached someone, yellow means nobody was home, and red means they didn't contact the person. This updated twice a day. So folks could say, hey, there's a big red blob here — what's going on? We thought we'd canvassed this. Somebody lost a walk packet. Or this is a rough neighborhood; we need to send our best volunteers. So finally, there weren't any BI tools or advanced analytics platforms or really much of any sort of large-scale enterprise software. It was lots of software bolted together to do specific tasks. So we had a SQL training once a week. Most of our interns knew SQL. They could log into the database and answer questions using SQL. A good chunk of the staff knew how to connect to our PostGIS server using QGIS. They could log in and make their own custom maps.
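A minimal sketch of the "figure out the layer type from the SQL result" behavior described above: latitude/longitude columns become a point layer; a numeric value plus a FIPS-like identifier becomes a choropleth, with the FIPS level guessed from its length. The column names and the exact rules are assumptions for illustration, not the tool's real logic.

```c
#include <stdio.h>
#include <string.h>

/* Guess what kind of map layer a SQL result should become, based only on
 * its column names and a sample geographic identifier. Purely illustrative. */
static const char *fips_level(size_t len)
{
    switch (len) {
    case 2:  return "state";
    case 5:  return "county";
    case 11: return "census tract";
    case 15: return "census block";
    default: return "unknown geography";
    }
}

static void classify(const char **cols, int ncols, const char *sample_geoid)
{
    int has_lat = 0, has_lon = 0, has_value = 0, has_geoid = 0;
    for (int i = 0; i < ncols; i++) {
        if (!strcmp(cols[i], "latitude"))  has_lat = 1;
        if (!strcmp(cols[i], "longitude")) has_lon = 1;
        if (!strcmp(cols[i], "value"))     has_value = 1;
        if (!strcmp(cols[i], "fips"))      has_geoid = 1;
    }
    if (has_lat && has_lon)
        printf("-> point (dot) layer\n");
    else if (has_value && has_geoid)
        printf("-> choropleth layer shaded by %s\n", fips_level(strlen(sample_geoid)));
    else
        printf("-> don't know how to draw this\n");
}

int main(void)
{
    const char *doors[]   = { "voter_id", "latitude", "longitude" };
    const char *shading[] = { "fips", "value" };
    classify(doors, 3, "");
    classify(shading, 2, "51087");   /* a county-length FIPS, for example */
    return 0;
}
```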
So then what is the role of software engineering become? What is the role of high level systems engineering? It becomes much more about coordination. Get data synced into a database, document that data, build open data portals, provide APIs, provide a pre-installed stack. Things like we use Geo server pretty extensively with layers for lots of information people could want and tools for pushing data both between systems. So I found a group of volunteers that I really want to contact. How do I turn that from a SQL query, push it into a list in our CRM so those volunteers get contacted. And then tools for pushing data out to field staff. Things like Explorer, the data exploration tool I showed is based on the idea of I have a SQL query that shows Geospatial data. I want all my staff to be able to access it. So that what all those tools allowed is in 2012, it doesn't look like it over here, but there were 150 people on the analytics team. In 2013, there were eight of us. So being able to go to that means that folks need to spend their time analyzing data. And we think that means you need a BI tool. What it actually means is you need all your data up to date and accessible via SQL. You need a way of spitting out an SAS or R file or whatever statistical analysis tool you need. Once you build a model, you need somewhere that I can push that model right away. So I think I'm pretty darned out of time. So very briefly, applying that to some other industries, primarily healthcare and education, looking at building the same sorts of agile data infrastructures that support those other industries. Cool.
Predictive modeling is used throughout organizations to predict behavior and outcomes; organizations use those predictions to efficiently allocate resources. This talk will cite examples from social organizing and healthcare to show how geographical data can be used to enhance predictive analytics work and drive more efficient and effective programs.
10.5446/15365 (DOI)
ability goals of WANProxy, which is pretty much anything POSIX and then a few other systems. So long story short, the decision was made to begin by porting the FreeBSD stack to userland, to get that done — to get that transparent TCP proxy for a large number of connections done. And we figured that was a really good platform to build upon, both for transparent proxies and for other features that were of interest later. And the main reason we went with the FreeBSD stack is it's stable, obviously, has been for a long time, and is used in a lot of mission-critical systems. It's very widely used. There's a lot of active development work being done on it today. And of course, the license. That made it of really wider commercial interest than, say, looking at Linux or something with a GPL-related license. Just to make sure we're on the same page with the context here, this is one definition of a transparent TCP proxy. So that's one that can proxy connections between a client and a server and maintain addressing — server addressing from the client's point of view and client addressing from the server's point of view. So at the frame level, you shouldn't be able to tell if there's a proxy involved. One of the easiest-to-understand motivations for wanting a transparent proxy is that, if you have a transparent proxy, you don't have to worry about the details of the protocol that you're proxying on top of it. Some protocols — we can argue whether they're poorly designed or whatever — but the fact is they send, as part of their protocol, addressing information from the server's or client's side to the other end. Those types of things tend to cause trouble with NAT or any other sort of address translation layer in your proxy. That's just one example. Another way to look at the utility of this is that if you have something that can impersonate the addresses of the servers on the one side of the proxy and the clients on the other side of the proxy, it then becomes easier to think about how you're going to build proxies that handle large numbers of subnetworks and plug into different network addressing schemes. And it's easier to think about how you're going to architect that product. So that's a transparent proxy in a nutshell. By scalable, we simply mean it can do this for a large number of connections with arbitrary addressing. So tens of thousands of connections, hundreds of thousands of connections; maybe some of those connections are coming in on VLANs or nested VLANs, some are on different subnets, what have you. So the decision was made to port the TCP/IP stack from FreeBSD to userland. And the overarching goals were going after scalability, clearly. The choice was made to go with a non-blocking, event-based API. WANProxy is based on an event system, and that's also just the way to go if you're looking at scaling to things like handling many, many, many connection contexts. We also looked at scaling out in terms of how do you scale across interfaces? How do you handle all those connection contexts? And so the targets there were to be able to scale through multi-threading within the application using libuinet, and also to take advantage of the fact that, now that we've got a stack ported into user space, you can run multiple instances. Every stack instance that you boot in an application linked with the library is its own isolated stack. So you can then architect whatever your application is to take advantage of that and split up your traffic for management that way. So as a library, it's tightly coupled to the application.
We want to keep everything in process. There's been other ports of TCP stacks to user land that have had slightly different goals and wanted to expose all their functionality as a service to other clients through sockets or some other IBC mechanism. None of those that I'm aware of really comport with the performance aspect and the scaling aspect that we're going after. So that wasn't a goal. We're just focused on keeping everything in process. And since we have a callback and event-based API, there's other opportunities for enhanced functionality performance. Once you're not emulating the syscall layer, we're not saying we're trying to make a drop in replacement library that you could just link an existing application to. What we're going after is something that provides functionality you can't get anywhere else. And for that, you're willing to rewrite your application to it. But once you've crossed that river, you then have other opportunities for building features that wouldn't be really feasible if you were trying to deliver them through something like the existing syscall API between user land and kernel for networking. On the portability side, the initial target is positive environments. When we look at re-emplementing kernel facilities a little later in user land, that's what we're looking at initially is using POSIX to give us portability to FreeBSD, Linux, Mac OS. And we want to do this in a maintainable way, because in my point of view, the saddest thing that can happen is you do all this exciting stuff. There's this burst of development. And the effort's been organized in such a way that it becomes hopelessly stale and maintainable, hard to bring up to speed with a new version of the network stack. And then that's the death knell for a lot of these types of projects. So definitely want to avoid that. As I said, we weren't going after providing a drop-in replacement in Socus Library, where you could just relink, and it would look like the BSD Socus API exactly at the binary level. Not even looking at having a set of headers that you can compile against to give you exactly the same API, because there's things we're looking at delivering that really just don't fit in that model. And we didn't think there was any value in emulating syscalls and getting the exact behavior that you currently get running a user-land program using the library interface of the kernel network stack, getting that to have all the same exact behavior. And I think I also already mentioned we're not interested in creating a daemon process that exposes the user-land networking stack services to other processes. Of course, you can write that into your application if it suits, but not a goal for Libyoin it, to provide natively. So there were some alternatives that were considered at the outset. It was talked about to maybe start out with something that seemed a little less complex than the free BSD code base. So a lightweight, independent TCPIP stack, there are some out there. So the problems are that they're lightweight and independent. So you tend to run the issues with them being feature poor, or there's a small user base. You're not mature, the project could become implementing features for TCPIP that are already in the free BSD stack, but weren't there because we went lightweight to begin with. Or we basically hitched ourselves to a project that really has small exposure, and we become the maintainers of it. 
The huge benefit of going with a stack like FreeBSD's is that we're reusing all the TCP/IP functionality that's already there in the kernel. None of our work has to focus on improving or maintaining that. We're going to build new features on it. We're going to make it available in a slightly different package. But all of the tremendous engineering effort that's already gone into having a modern, full-featured stack is reusable to us. There was another project called libplebnet. I'm not sure if that's how you pronounce it — I've only ever seen it in print. That was a userland port of an 8-series stack that had slightly different goals, like having a daemon service to export networking, to fully emulate the syscall API, things we weren't interested in. It was apparently abandoned when I found it; I hadn't seen any new development on it in several years. But parts of it were used to seed the libuinet port. It actually served as a pretty decent roadmap for what kernel facilities would have to be re-implemented in user space to support the stack. There's also the rump kernel, which is a NetBSD project. It's a framework for running NetBSD kernel components in userland, including the stack. But it's not focused on just the stack — it also includes file systems and other kernel components — and it has slightly different API goals. On paper, when you list the things we were after with libuinet, there's a lot of overlap. But when we looked at it, it seemed like it was going to need non-trivial work anyway. And since it had a lot of aspects to its framework that really weren't relevant to libuinet, we figured, well, we'll put that non-trivial adaptation effort into something based on the FreeBSD stack instead and get something that's really tailored to our goals. So the approach to porting the stack was, of course, to re-implement what kernel facilities we needed. This is a similar approach to what would be taken by any of the other efforts you'll find out there, like even the rump kernel. And the idea is, the kernel facilities — like threading, memory allocation, locking — get re-implemented: you have the same API, you have a userland implementation underneath, and then we can reuse the network code untouched until we get to the point where we're adding new features to it. Because again, we're trying to leverage as much as possible all of the wisdom and hard-won experience and effort that's gone into the stack as it is today for libuinet. Another one of the goals was that whatever new features we put in should be able to completely disappear from the source base by turning off an ifdef. We wanted to, at any point, be able to say, I'm compiling the stock kernel stack, by not defining the set of guard defines that wrap all the new feature code that we've introduced. And we wanted to target some of these features so that they could actually be used inside the kernel. That doesn't necessarily directly serve the goals, the initial motivations, of bringing the stack to userland. But as we're implementing some of the new functionality, it's clear that there could be a use case for having those built into the kernel directly. So I kept that in mind as I went, and tried to introduce as few new interfaces between the application and the stack as possible to get the job done.
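The guard-define discipline is easy to picture: all new feature code is wrapped in its own preprocessor guard, so building without that define compiles exactly the stock stack. The macro and function names below are invented to illustrate the pattern; they are not libuinet's actual guards.

```c
#include <stdio.h>

/* A made-up illustration of the guard-define discipline described above;
 * the macro name and functions are not libuinet's real code. */

struct packet { int flags; };

static int stock_handling(struct packet *pkt)
{
    (void)pkt;
    puts("stock stack path");
    return 0;
}

#ifdef MY_NEW_FEATURE
/* All new-feature code is wrapped, so building without -DMY_NEW_FEATURE
 * compiles exactly the stock stack. */
static int my_new_feature_handle(struct packet *pkt)
{
    (void)pkt;
    puts("new feature path");
    return 0;
}
#endif

static int input_hook(struct packet *pkt)
{
#ifdef MY_NEW_FEATURE
    if (pkt->flags & 0x1)
        return my_new_feature_handle(pkt);
#endif
    return stock_handling(pkt);
}

int main(void)
{
    struct packet p = { .flags = 1 };
    return input_hook(&p);
}
```

Building this toy with and without -DMY_NEW_FEATURE shows the point: the feature vanishes completely when the guard is absent.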
And I made it possible to include LibLevNet and FreeBSD base, just in terms of how I structured the port, trying to do all the user land support work to the side of the kernel source tree, even within my project structure, for no other reason than to make it possible to, perhaps, one day, ship LibLevNet with FreeBSD base. So as you can see, the source structure has all the kernel sources under the Sys directory in the project. If you go to the current GitHub location for LibLevNet, this is what you'll find. Under Sys, sub directory is all the sub trees that contain files needed by LibLevNet. Not all the files that you'll find in there under the project are actually needed. But the approach taken was to pull in whole sub trees to make merging future versions easier. So instead of cherry picking files out of directories, out of kernel directories for LibLevNet, we've just imported whole sub trees just to make the merge to later releases the stack a bit easier. Under the lib directory in the project, LibLevNet contains all the user land re-implementation of the kernel facilities, the UNNet API, and either support code that's part of the library. Anything else you find in the lib directory is application support code or just support for something to have a programs. Probably the most interesting thing in there is a fork of LibEV, which is an event system that's been around for a while, is pretty highly performant and widely used. And that fork contains a new watcher type. So you can combine the event loop, access to kernel sockets in the host OS, as well as UNNet sockets, as well as time or whatever else the event system gives you access to. And then Bin has all the sample programs for exercising functionality, pretty straightforward stuff. So here are the layers. Mostly, it's hard to draw these diagrams and really show every single relationship between the subcomponents. So you can be pedantic and say, I'm missing something here, for sure. But I think I've captured all the major relationships here. So at the top of the stack is the UNNet API. So that's what applications written using this library will use. There's a couple of things going on there. One is, of course, we're building API entry points to give you access to the features of interest that are available inside the networking stack. But one of the other big purposes of the API and the way it's built is to give you a clean namespace, because we want this to be portable to be used in applications of other operating systems. We also want to be able to have a different version of the previous DTCP IP stack inside the UNNet, than perhaps is on the host operating system. So just using the user land networking API headers for constants, structure definitions, that sort of thing, is sort of a non-starter. Because if you take that API and then bring it over to Linux, or take it over to Mac OS, things aren't going to build. Because although there's a lot of similarity in different implementations of the BSD Socket's API and the associated constants and structures, they're not identical everywhere. So one of the things that's going on in the API is just namespace laundry. Just giving you a clean, generic version that comports with what's inside the library of all those constant structures and entry points. So that API, though, is built on top of kernel Sockets. And the goal here is to integrate with an event system of one sort or another. So our main focus was on non-blocking Sockets and running things in an event-based manner. 
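The namespace-laundering idea amounts to giving applications the library's own constants and structures, expressed purely in basic C types, and translating at the API boundary so neither host nor kernel networking headers leak into application code. The UINET_-prefixed names below illustrate the pattern; they are not guaranteed to match the library's actual header.

```c
#include <stdio.h>

/* --- what a laundered public header might look like (illustrative) ----- */
#define UINET_SOCK_STREAM  1          /* the library's own constant ...       */
#define UINET_SOCK_DGRAM   2          /* ... never the host's SOCK_STREAM     */

struct uinet_in_addr { unsigned int s_addr; };   /* the library's own struct  */

/* --- inside the library, translate to whatever the embedded stack uses --- */
#define STACK_SOCK_STREAM  1          /* stand-ins for the embedded stack's   */
#define STACK_SOCK_DGRAM   2          /* internal values                      */

static int translate_socktype(int uinet_type)
{
    switch (uinet_type) {
    case UINET_SOCK_STREAM: return STACK_SOCK_STREAM;
    case UINET_SOCK_DGRAM:  return STACK_SOCK_DGRAM;
    default:                return -1;
    }
}

int main(void)
{
    /* The application only ever sees UINET_* names, so the same source
     * builds on any host OS regardless of that host's sockets headers. */
    printf("UINET_SOCK_STREAM maps to internal %d\n",
           translate_socktype(UINET_SOCK_STREAM));
    return 0;
}
```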
So the kernel sockets API — I didn't have to go any further than that, right? Because you can run kernel sockets non-blocking and you have upcalls, which are sort of a bare-bones event interface for kernel sockets. So that's pretty much everything that you'll find in uipc_socket.c in the stack, and that's the kernel sockets layer that we're using. The only difference, I would say, is that we've also pulled in some code from uipc_syscalls.c, in particular for accept. If you look at soaccept in the kernel, it's really bare bones — it's a really minimal amount of work. If you look at kern_accept, that's handling the syscall: it's handling, you know, taking new sockets off the queue and doing some other error checks and housekeeping details that you really need to get done anytime you're accepting a new socket. So what's the uinet API? It's pretty much kernel sockets exactly, with some amount of the kernel side of the syscall interface, just with file descriptors removed, because we've completely avoided file descriptors here. So below that, the net plus netinet — that's the stack, right? That's just my shorthand for everything in the TCP/IP stack. So that's just kernel sources. We'll see on the next slide — I'll show you where everything comes from, what's been re-implemented. And then on down, we've got relevant kernel facilities that we need to keep that all afloat. These little legs up here are just showing that there are some things outside of the kernel sockets API that show up on the API for the application. There are, you know, network interface configuration entry points that get exposed through the API. And there's currently access to the UMA zone allocator through the API, so applications can access those pool allocators if they wish. Underneath all those kernel facilities is something called the UINET host interface. So that's an abstraction layer that sits between those re-implemented or partially re-implemented userland versions of the kernel facilities and the host OS, and it's serving two purposes. One is it's giving us portability. Even using POSIX threading and POSIX locks and standard C library routines to re-implement some of these kernel interfaces, there are interplatform differences. And we also have a similar issue there with namespace. Everything in the UINET host interface is being called into from, like, kernel code context. I have another slide where we'll highlight this in detail. But, you know, we can't have the pthreads header pulled into a file that's also got the kernel kthread header, in kernel mode, because in general you're just going to have namespace collisions, right? It's just not buildable. It's the architecturally wrong thing to do. So one thing that gets done in the UINET host interface is another namespace cleansing process. Every symbol and constant used in the interface is completely based on basic C types and doesn't pull any baggage from the host OS. The remaining piece here is on the left, the new packet interfaces. So once we've got this userland stack, we've got an API for the application to talk to it. The question is where the packets are coming in and out. So there's a set of packet interfaces that are just ifnet interfaces, just like they would be in the kernel, except they're tying into other things that are available to userland. So for example, you can have a packet interface that's talking to netmap, and then you can access anything that netmap can access for packet IO.
You can talk to PCAP. You can talk to DPDK. You can talk to Unix, kernel, domain socket, clay tablets, I mean, whatever suits the application. If there isn't in a packet interface that suits, it's a pretty straightforward exercise to write one. Right, so here's, this shows where the sources for all these things are coming from. That nice shade of FreeBSD red is unmodified kernel code. Everything in blue is showing things that are created new, entirely new for LibuINet. And this is my attempt at a purple that's halfway between that red and blue. And those are all the kernel facilities. And what you'll find if you look in there is, depending on the facility and the subroutine in that facility, there's either been a wholesale re-implementation, there's been essentially a copy made of what's in the kernel with some slight modifications, or in some cases almost entire reuse of the kernel code because some of these facilities are built on top of exclusively other existing kernel facilities. So once we've ported the other ones, we get those other kernel facilities for free. All right, this is just a summary of the namespace issues I was talking about. In terms, from a development standpoint, when you're working in any of these layers, you have to keep in mind what environment you're really in. You're technically all writing user land code except that the build environment for everything here in red is the same as if you were running kernel code. Because we're all, everything in the, underneath the UNNAT API is written as if it was written in the kernel because you're talking to the kernel sockets API, you're talking to the kernel networking stack, you're talking to kernel facilities, and then on down. If you're coding inside the UNNAT host interface, that's a host OS environment. All the code you're writing there is like a normal user land program, you've got Pthreads, C library, everything at your disposal. You'll see the packet interfaces are typically split between the two because they have to implement an IFN interface, you have to interact with the kernel facilities, but to get your packets in and out, you're at the end of the day plugging it to some host OS facility, whether it be Netmap or Unix domain socket or something else. So what you'll find in the typical IFN implementation is the driver split into two separate files. One is built in a kernel environment and one is built in the host environment. And then there's an API, a clean API that doesn't have any external dependencies for simple definitions that goes between the kernel and host parts. And also there's things that go into the host side of one of these packet drivers that could be pulled into the UNNAT host interface if they're generic enough. Sometimes they're just drivers specific and they live in there. The character of everything that falls in here inside one of these drivers is exactly the same the character in there. And in some cases it's just sort of a purely discretionary call as to whether a routine can be found in say the host portion of a Netmap driver or whether it was pulled into the more general UHI. The UHI itself of course can be used anywhere because as a completely generic interface is not dependent on any host or kernel headers. In general though, if you're inside kernel code, you use a kernel facility first and only use UHI interface if you have another option. An example of that would be you're writing a new feature inside the kernel part of the libUI net and you need a thread. 
You could call the UHI create thread interface which would work but the more proper thing to do is call the kthread interface which is using UHI underneath because the kthread interface is doing additional things to keep that thread properly initialized and organized within the kthread kernel context that wouldn't be happening just by calling the UHI net thread create routine. All right, so as I've said, the API itself is intended for use in non-blocking event-driven applications. You get pretty much just by virtue of the fact that it's based on the kernel socket interface, you get blocking support almost by default because it's already there but there's currently no way to weight on groups of sockets in this implementation because we've completely done away with file descriptors. We're not emulating file descriptors. A UHI net socket is really just an opaque pointer to a socket structure. It's not wrapped in anything else. So there's no, and because we're only interested initially in event-driven applications, there's been no facility equivalent to like pole or select implemented for UHI net sockets. So it's a direction that could be gone into but it's just not on the direct roadmap for the UHI net right now. So the initial goal of the API was to integrate with WAN proxy right because that's where the whole project started. WAN proxy has its own event system. So the API was tailored where necessary to interact with that but the idea is that we provide enough tools to in general integrate it with any event system. We're trying to capture generically what's required for integration with event systems. I think I've done it because we've integrated not only with WAN proxy but also with LibiV and they're just two, they're both event systems but the integrations, they just look very different. There's a number of details that can differ in the implementation of event systems that have different requirements they place on things like a library that provides non-blocking sockets and a callback based interface for events and I think between these two integrations that have been completed so far, we have a pretty general interface. If we want to integrate with another event system that was application specific for another project or was Lib event or one of the other extant event systems, I think we have all the tools in place already to do it. One of the motivations for integrating with LibiV was that although we expose essentially all the kernel sockets API functionality which includes non-blocking sockets and up calls and you can write your application directly to it, I think most people will be happier not having to do that because the kernel up calls mechanism has quite a bit of a learning curve to it and there's a common hazard involved that involves the fact that kernel up calls are invoked. Kernel up call is a callback that you can attach to a socket and say when there's read activity, call this function, when there's write activity, call this function. Those functions are called with the lock on the sock buffer inside the socket held and typically what you want to do in integrating with your application through one of these callbacks is inside that callback routine that you've supplied, you'll grab some other application specific lock to then do something in your application under that lock. And then what typically happens is there's some other part of the application that wants to hold that lock while doing some sort of socket operation. 
Well, the socket operation is gonna grab the sockbuf lock to do its work, and you have a lock order reversal then, because when the upcalls are called, you've got the sockbuf lock then your application lock; when, somewhere else in your application, you lock your application lock and then call into an API routine, you've got the locks being acquired in the reverse order. There's usually a way around that, but what seems to happen is it's common for the initial implementation of trying to integrate with upcalls to run into the lock order reversal problem and then have to solve it, and then you find out, when you solve it, that what you're really doing is writing an event system, because you're like, oh, I have to queue these event notifications to some other thread or context and then deal with them with the right locking order. Okay, you're starting to write an event system. So why not just provide integration with a widely known and used event system, so you can hit the ground running and not have to worry about details like that. There's currently two packet interface implementations in the source base. One is for netmap. It was written to an earlier version than current, so it does zero copy on receive up to some fraction of the available ring buffers. So when it's feeding packets to the bottom of the TCP/IP stack, it'll do that zero copy up until one half or three quarters or some fraction that you can choose of the netmap ring buffers are outstanding to the stack, and once you pass that threshold, it'll start doing copies so that you don't wind up using all the ring buffers up, handing them all to the stack — the stack hangs onto them and now you've stalled your receive path. So, I say it builds with the current netmap, but it's not yet taking advantage of some of the more recent features, which are the ability to expand the number of receive buffers beyond the ring size for the adapter. That gives us a much wider zone where we can stay in zero-copy receive mode when feeding packets to the stack. And there's also functionality on the transmit side that allows us to do zero copy coming from the stack. Currently everything coming out on the transmit side is copied from an mbuf into a ring buffer in netmap and then sent. But the functionality is currently there in netmap; I just have to update that packet interface to use it and we'll see some increased performance on that packet interface. There's also a PCAP interface, which I've used less widely. I've mainly used it for feeding pcap files to the stack for testing. It's a really handy feature to have, because you can take a canned capture and then feed it to the stack to develop a new feature, reproduce a bug, et cetera. Of course you can also use it to deal with real network interfaces, not just pcap files. So it's useful for doing portability work to operating systems that don't currently have netmap support, say Mac OS. But packet interfaces in general could be anything. You just have to put an ifnet interface on the top of it, and what's underneath could literally be anything. It's not a comprehensive list. So there's a couple of, I'd say, major open issues with the port so far. One is with locking. There's a pretty diverse set of locking primitives in the kernel that have different semantics and different behavior in certain circumstances. In userland, going to POSIX, we've got a much smaller set of primitives, and in some cases the behavior differs. I think that one of the most relevant issues in this port has to do with read-write locks.
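Backing up to that upcall hazard for a second: the pattern the speaker lands on — don't take application locks inside the upcall, just queue a notification and let your own event thread take application locks later — can be sketched as below. The types and names are stand-ins for illustration, not libuinet's or the kernel's actual upcall API.

```c
#include <pthread.h>
#include <stdio.h>

/* Stand-in for "a socket became readable"; real code would carry the socket. */
struct event { int socket_id; struct event *next; };

static struct event *head, *tail;
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  q_cv   = PTHREAD_COND_INITIALIZER;

/* Called in upcall context (conceptually: with the sockbuf lock held).
 * It touches only the queue lock, never application locks, so there is
 * no sockbuf-lock / app-lock ordering to get wrong. */
static void readable_upcall(struct event *ev)
{
    pthread_mutex_lock(&q_lock);
    ev->next = NULL;
    if (tail) tail->next = ev; else head = ev;
    tail = ev;
    pthread_cond_signal(&q_cv);
    pthread_mutex_unlock(&q_lock);
}

/* Event-loop thread: dequeues notifications, and only here takes whatever
 * application locks are needed, with no stack lock held. */
static void *event_loop(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&q_lock);
    while (!head)
        pthread_cond_wait(&q_cv, &q_lock);
    struct event *ev = head;
    head = ev->next;
    if (!head) tail = NULL;
    pthread_mutex_unlock(&q_lock);

    printf("socket %d readable: safe to take app locks now\n", ev->socket_id);
    return NULL;   /* single-shot for the sake of the example */
}

int main(void)
{
    pthread_t t;
    static struct event ev = { .socket_id = 7 };
    pthread_create(&t, NULL, event_loop, NULL);
    readable_upcall(&ev);
    pthread_join(t, NULL);
    return 0;
}
```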
Because right now, everything in libuinet's re-implementation of the kernel lock facilities is a mutex. They're POSIX mutexes. They're either configured for recursive operation or not, depending on what the remapped kernel call was really asking for, but there's not a real read-write lock. So right now the built-in expectation is there's a lot bigger chance for lock contention in libuinet than you'd find in the kernel for similar traffic flow through the stack. One of the issues with the pthreads interface is that, while it has a read-write lock, it doesn't support the recursion semantics that are defined for the FreeBSD kernel rwlock. The recursion behavior that's not supported by pthreads is an optional feature of the kernel rwlock interface, but the locks of interest, like the inpcb locks that are used for managing connection contexts in the stack, want to use that recursive behavior. So something has to be done there; it remains on the to-do list to build something. It might be possible to build something around the pthreads rwlock, but it might have to be built from mutexes and queues and things. PCPU is a kernel facility for per-CPU data. There's a number of optimizations in the kernel today that use PCPU. The whole point is that you can keep context on a per-CPU basis, and so components of the kernel can be implemented to cache things or maintain state on a per-CPU basis. So when you're trying to access a certain facility — like allocate memory, or allocate memory in a UMA zone allocator, or do packet processing work through netisr — you can keep that processing or context on a given CPU and take advantage of warm cache effects. Or, for the memory allocator, you can keep local caches of objects that are per-CPU, so wherever the allocation happens, you can try and allocate it out of the local cache first and not have to grab a lock that might be contended across CPUs, which is an expensive operation. In general, what PCPU is providing and how it's used in the kernel is of value, and it's something we wanna be able to take advantage of in libuinet, but we really can't currently — there's more work to be done there. Part of the issue is there's no userland way to disable preemption. You know, in the kernel there's two routines called critical_enter and critical_exit, right? And those are really inexpensive ways to keep a currently running thread from being preempted on whatever CPU it's running on. There's just no userland equivalent. That's not required for all the uses of the PCPU infrastructure in kernel code, but it is for some important ones, like the UMA zone allocator. It uses critical_enter and critical_exit to protect access to its per-CPU cache of zone objects. So, currently, you could emulate that by saying, okay, critical enter and exit are gonna be a mutex, but now you're sort of defeating the purpose: the per-CPU caching and preemption disabling going on in the kernel is in part done to prevent acquiring mutexes and contending on mutexes across processors, right? So that's not really a reasonable way to emulate it and expect performance. And although — we'll talk a little bit more about performance later — it's certainly clear that some of the PCPU optimizations aren't able to deliver their intended benefits with the current state of the port as it is, I think it's actually worse: in some cases they become pessimizations.
Like I said, the UMA zone allocator is heavily used throughout the stack. You have a question? Yeah, so what about mapping the PCPU data to a thread — an actual thread — you know, like thread-local storage, instead of to a CPU? Right — well, really, so there's two things that we can do and, let me back up: in thinking about this problem, there's different approaches that may not 100% give you what PCPU does in the kernel, right? But you can still get the benefits, or some portion of the benefits, it's trying to provide, using the same infrastructure. And the suggestion is saying, well, if what we're really trying to do is keep threads from contending with each other for the same resources across CPUs, we don't have to quite literally keep everything on a per-CPU basis, right? We can reduce inter-thread contention for resources by having per-thread resources instead of per-CPU. And that's what the bullet item is saying: oh, can we make things per-thread instead of per-CPU? You know, an example that many people might be familiar with is the way jemalloc uses arenas for its allocation, right? So that's certainly one idea, and might be an answer or the answer here. Another way to look at it is using thread pinning. That's currently how things are — in the functional port that's done today, it will work better when you have threads pinned than not. So the way that currently works is, if your thread is pinned to a CPU, any of these per-CPU accesses will go to the context for the CPU that that thread is pinned to. If the active set, the CPU set, for a thread doesn't have a single CPU in it — in other words, you're not pinned to a single CPU — it'll just go to context zero. So it's kind of like anything you haven't explicitly pinned will all fight for the same PCPU context, context zero. But if you start pinning your threads, all the accesses to PCPU data will go to a CPU-specific location that corresponds to the CPU that you're pinned to. So that's another way to, kind of, get some slice of the functionality, but you're not getting the full benefit. And the analogy there for the thread pinning approach would actually be the way that netisr currently uses the PCPU stuff, because netisr is really a pool of worker threads, each one pinned to a CPU, and it uses PCPU data in that way: because it's accessing that data from inside a worker thread, it relies on the fact that it's already pinned. Okay, so now on to the extras. So the first new feature work that was done once we had the stack ported to userland and functioning and delivering all this existing functionality — you know, TCP, UDP access; anything that works in the stack in kernel, after the port, works in userland. There's some things we haven't tried yet — like we haven't stood up SCTP support, so there's probably some corner cases in the kernel abstraction re-implementation that we need to fix. But in general, we're using the whole kernel source unmodified, we provide a lot of facilities, and everything that works in the stack in the kernel should work in userland. We've just most heavily exercised the TCP part of it.
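As a footnote to that exchange, the "per-thread instead of per-CPU" idea — the same spirit as jemalloc's arenas — can be sketched as a thread-local free-object cache whose fast path touches no shared lock at all. The object type and cache bound below are made up for illustration; this is not how libuinet currently implements anything.

```c
#include <stdio.h>
#include <stdlib.h>

/* A made-up object type standing in for, say, a zone-allocated buffer. */
struct obj { struct obj *next; };

/* Per-thread free list: the fast path touches no shared state at all,
 * trading the kernel's per-CPU caches for per-thread caches. */
static _Thread_local struct obj *tl_cache;
static _Thread_local unsigned   tl_count;

static struct obj *obj_alloc(void)
{
    if (tl_cache) {                       /* fast path: thread-local hit */
        struct obj *o = tl_cache;
        tl_cache = o->next;
        tl_count--;
        return o;
    }
    return malloc(sizeof(struct obj));    /* slow path: global allocator */
}

static void obj_free(struct obj *o)
{
    if (tl_count < 64) {                  /* keep a bounded local cache  */
        o->next = tl_cache;
        tl_cache = o;
        tl_count++;
    } else {
        free(o);
    }
}

int main(void)
{
    struct obj *o = obj_alloc();
    obj_free(o);
    o = obj_alloc();                      /* served from the local cache */
    printf("cached objects after realloc: %u\n", tl_count);
    free(o);
    return 0;
}
```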
But the first new bit of functionality is aimed at the original motivating case, which is building transparent proxies that can handle a large number of connections, that can handle a huge diversity in addressing across those connections, and also handle, you know, lots and lots of VLANs in that diversity of addressing. So promiscuous sockets is the term I'm using to describe the ability to set up a listen socket that has a lot more control than usual over what kind of connections it'll capture. By capture, I mean if it sees a SYN come in, it'll match that SYN and say, that's a connection for me; it'll do the three-way handshake and establish the connection with the client. So a promiscuous socket allows you to listen on any IP address, any port, any VLAN tag stack. And when I say any VLAN tag stack: you can specify, I'm only gonna match connections that have this many levels of VLAN tag stack with these specific tags in each level; you can say, I wanna capture connections that match all the other criteria on any VLAN; or you can say, I wanna insist that there's no VLAN tags, I wanna match untagged traffic. And you can wildcard on any of them. So you can say any combination: like I said, listen on a specific VLAN or any VLAN or no VLAN; you can say I wanna listen on a specific IP or any IP; you can say I wanna listen on a specific port or any port. And all the combinations are supported. So you can really go from very targeted listens to wide open: I'm gonna create a connection, do a three-way handshake, for any SYN that reaches this interface. By the way — I'll just make a quick side note — if you're actually building something that is going to respond to every SYN that reaches it, you should think carefully about the network that you're plugged into. Because even on switched networks, right, address table misses will cause SYNs going between two other endpoints that are unrelated to your network segment to show up on your interface. That's something I became aware of early in development when all my X terminals disappeared. So that's on the listen side. On the active side, promiscuous sockets give you control over your personality on the network. So you can say, all the frames that I send are gonna have this VLAN tag stack, this source and destination MAC address, this source IP and source port. Of course, the destination is controlled through connect, so there's nothing new there. So that's promiscuous sockets: those specific pieces of functionality, plus the supporting infrastructure for that. The supporting infrastructure includes an ability to bypass routing on network interfaces, on input and output — we'll talk about why that's interesting a little later. That's done using something called connection domains, another invented term, abbreviated CDOMs. We'll talk more about those. There's a new interface mode for implementations of network interfaces, for the drivers, that provides additional handling of L2 info. Normally that stuff is stripped off the packets after they're passed up from the Ethernet layer to a higher layer, but some of these functions — you'll see later we wanna have access to L2 info at a higher level in the stack, at the socket level — require preserving some of that information. And also some of the steering for connection domains gets handled by that mode. And then finally, something called SYN filters.
And that's an ability to do some analysis on each arriving SYN packet to decide how you wanna handle it, whether you wanna feed it to the stack or not. I'll of course go into detail on that. So connection domains are really a way to map connection handling — TCP/IP socket behavior — to physical ports in the machine. Well, I say to network interfaces: the network interfaces I'm talking about are, in the stack of course, the virtualized concept; they may or may not map to physical ports in the machine, depending on what kind of packet interface you're using and what your architecture is. But normally on packet output, there's gonna be a routing lookup that's gonna decide which interface to send to. Connection domains are a way to bypass that — to set everything up so that you can say all traffic sent from this socket is gonna go out through this network interface, all the time. That kind of behavior fits architecturally with some of the use cases for this stack enhancement. But it's also something that's desirable from the standpoint of: if you weren't originally architecting things that way, but you wanna handle hundreds of thousands or more simultaneous connections, you need to start considering whether you really wanna feed all that stuff through the routing infrastructure, for performance and scalability reasons. Because a lot of times — it seems to me, at least from my perspective of how this is being used — the routing infrastructure isn't actually necessary. So it didn't make sense to feed everything through there, even though it would, quote, work. Because if you've got 100,000 sockets and there's a diversity of addressing such that they basically all wind up with their own individual routes, you've got 100,000 routes in the box. But your architecture might be that all the traffic is coming in these two interfaces and going out those two interfaces; there's no purpose in doing all that. So that's one of the motivations for this whole connection domain approach. So the way it works is: every packet interface belongs to a connection domain, and all established connection contexts — inpcbs in the kernel; you can think of that as roughly equivalent to a socket — all belong to a connection domain. On receipt of a packet, that packet is tagged with a connection domain depending on the interface: it just inherits the connection domain of whatever interface it arrived on. And a given received packet can only match a connection context within its connection domain. So that's where the whole term comes from. So when that packet's received — it's a TCP/IP packet — it makes it up through the IP layer, and the first thing that's done is a lookup is performed to figure out which existing connection, if any, that packet matches. Is it something of interest that should be processed further by the stack? Connection domains are a way to segment the scope of that matching. So instead of matching against all connections that are present in that entire instance of libuinet, associating interfaces with a connection domain gives you the ability to have an independent pool of connection contexts when it comes to matching. And that lets you do things like build a box that can be connected to multiple, fully independent networks that may be using the same addressing. All right, apparently I need to speed things up here. That's something you can't currently do very easily without promiscuous sockets.
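Going back to the listen-side matching rules from a moment ago, the semantics — every field either a specific value or a wildcard, including the depth and contents of the VLAN tag stack — can be captured in a few lines. This models the matching as described, with invented types; the real libuinet listen criteria are configured through socket options rather than a struct like this.

```c
#include <stdio.h>
#include <string.h>

#define ANY        0xffffffffu     /* wildcard marker for IP/port fields  */
#define VLANS_ANY  (-1)            /* match any tag stack, or none        */
#define MAX_TAGS   16

struct syn {                       /* the fields of an arriving SYN we care about */
    unsigned ip, port;
    int ntags;
    unsigned short tags[MAX_TAGS];
};

struct listen_spec {               /* promiscuous-listen criteria (illustrative) */
    unsigned ip, port;             /* exact value or ANY                   */
    int ntags;                     /* exact depth, 0 = untagged, VLANS_ANY */
    unsigned short tags[MAX_TAGS];
};

static int matches(const struct listen_spec *l, const struct syn *s)
{
    if (l->ip   != ANY && l->ip   != s->ip)   return 0;
    if (l->port != ANY && l->port != s->port) return 0;
    if (l->ntags == VLANS_ANY)                return 1;   /* any VLAN, or none */
    if (l->ntags != s->ntags)                 return 0;   /* depth must match  */
    return memcmp(l->tags, s->tags, (size_t)l->ntags * sizeof(l->tags[0])) == 0;
}

int main(void)
{
    /* "any IP, port 443, exactly VLAN 10 inside VLAN 200" */
    struct listen_spec l = { ANY, 443, 2, { 200, 10 } };
    struct syn s1 = { 0x0a000001, 443, 2, { 200, 10 } };
    struct syn s2 = { 0x0a000001, 443, 0, { 0 } };         /* untagged */
    printf("s1 %s, s2 %s\n", matches(&l, &s1) ? "captured" : "ignored",
                             matches(&l, &s2) ? "captured" : "ignored");
    return 0;
}
```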
So the way this shakes out is: CDOM zero is the default. If you don't configure anything, everything will be in CDOM zero. It's special from the standpoint of connection domains, but really it's the default behavior of the stack. You can have multiple interfaces in CDOM zero. Outbound packets in CDOM zero are actually routed; they don't go through this fixed-interface transmit path. And the non-zero CDOMs are what are used for promiscuous sockets. So when you create a promiscuous socket, you assign it a CDOM, and that basically identifies which packet interface in the system that socket is gonna handle traffic for. And as I said before, once you do that, no outbound packets are passed through the routing infrastructure — they always go to the interface. So for people familiar with the internals: a lot of these characteristics, and maintaining these properties for interfaces and connection contexts, inpcbs, is already in place in the form of the FIB number properties that are in there. So we leverage that plumbing in the implementation to make CDOMs work. So this is just a quick sketch of what I'm talking about. You can see CDOM zero is the normal case. These green rectangles represent a connection — you can think of it like a socket. Inbound traffic from the interface comes into connection lookup and gets mapped to a connection on input. Anything going outbound goes through the routing infrastructure and then gets shunted out of the proper interface. For non-zero CDOMs, it's much simpler: all inbound stuff goes through lookup, but only within that connection domain, and everything outbound always maps to a single interface. I mentioned there's a new interface mode that works in concert with promiscuous sockets. There's a new flag — IFF promiscuous-inet — that, when set on the interface, causes the VLAN tag stacks to be removed. It supports arbitrary tag stacks, I mean up to some defined constant — I think it's currently 16, which I think is more than necessary in terms of handling anything you'll see in the wild. But it'll remove the tag stack, and remove the MAC addresses, and save them in an mbuf tag on the packet, so that they will be available for analysis at any other layer in the network stack, up to and including the SYN filter. So a SYN filter is something you can install on a promiscuous listen socket. It's a callback that'll get invoked for every SYN that arrives that matches that socket's criteria. It provides two main pieces of functionality. One, it allows you to do more complex matching than is possible with just the specific-or-wildcard ability of the individual fields of the promiscuous listen. So you can look for complex subsets of VLANs or IP addresses or ports using a SYN filter. The SYN filter will get called on the SYN packet, and you then return a status that says: yeah, accept this and pass it through the existing machinery; reject it silently; reject it with a reset; or, I'm gonna defer the decision — which is basically saying, hey, don't submit that to the SYN cache machinery that already exists, I'm gonna hold it aside and at some later point I'll resubmit it with a disposition. And that last piece of functionality is particularly useful in making well-behaved proxy applications.
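That defer-and-resubmit mechanism is what makes the well-behaved proxy flow described next possible; as a sketch, it is a tiny state machine per held-aside SYN. The verdict and result names below are invented stand-ins, not the real SYN filter API.

```c
#include <stdio.h>

/* Invented dispositions mirroring the ones described in the talk. */
enum synf_verdict { SYNF_ACCEPT, SYNF_REJECT_RST, SYNF_REJECT_SILENT, SYNF_DEFER };

/* Outcome of our own connect() attempt toward the real server. */
enum upstream_result { UP_CONNECTED, UP_RESET, UP_TIMEOUT };

struct pending_syn { int id; };          /* stand-in for the held-aside SYN */

/* SYN filter callback: don't complete the client handshake yet, just kick
 * off the upstream connection and defer. */
static enum synf_verdict syn_filter(struct pending_syn *syn)
{
    printf("SYN %d: starting upstream connect, deferring\n", syn->id);
    return SYNF_DEFER;
}

/* Called later, when the upstream connect resolves: resubmit the deferred
 * SYN so the client sees the same outcome the server gave us. */
static enum synf_verdict resolve(struct pending_syn *syn, enum upstream_result r)
{
    switch (r) {
    case UP_CONNECTED: printf("SYN %d: accept, let the SYN cache answer\n", syn->id);
                       return SYNF_ACCEPT;
    case UP_RESET:     printf("SYN %d: mirror the server's RST\n", syn->id);
                       return SYNF_REJECT_RST;
    default:           printf("SYN %d: drop silently, emulate a timeout\n", syn->id);
                       return SYNF_REJECT_SILENT;
    }
}

int main(void)
{
    struct pending_syn s = { 1 };
    if (syn_filter(&s) == SYNF_DEFER)
        (void)resolve(&s, UP_RESET);     /* pretend the server reset us */
    return 0;
}
```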
Right, a couple of things to keep in mind when you're implementing a SYN filter. You have to take into account the fact that the same SYN may arrive multiple times due to retransmits, especially if you've told the first instance you're gonna defer the decision and it's taking you a while to make that decision — you may get another copy from the client in the meantime — so you have to structure your SYN filter implementation accordingly. And nothing in promiscuous sockets or SYN filters defeats the existing functionality, or directly defeats or precludes the use of the SYN cache or SYN cookies. They both still work fully in the way they were intended, except that, depending on your implementation of the SYN filter, you can subvert their benefits. You know, if your SYN filter, the first thing it does every time it runs, is allocate a large amount of context to do whatever sort of decision-making you're going to do, you've pretty much subverted the point of the SYN cache, which was not to allocate a lot of memory on every SYN that arrives. This shows the enhanced proxy behavior you can get with a SYN filter. And what this is showing is, you know, a normal proxy without a SYN filter — these green dots represent connection establishment, right? So normally a client's gonna send a SYN, it's gonna go into the SYN cache, the SYN cache will send the SYN-ACK. When the final ACK in the handshake arrives from the client, then the socket will be created and it'll be available through accept in the proxy. And at that point you say, oh, I've got a new connection from a client — that's your application's first point of awareness — and you then initiate the connection with the server. The problem is, if this part doesn't work out, between here and whenever you've had a timeout, or a FIN arrives from the server that says this isn't gonna work out, you've had an open connection from the client, right? The client may be sending you things because it thinks you're the server. Architecturally, it presents some challenges: do I queue the data, do I ignore it, does the client care how fast I respond once the connection is opened, et cetera. With the SYN filter, your SYN filter runs when the SYN comes in. At that point, you can initiate the connection to the server you're proxying to. And when the server responds with its SYN-ACK, that's when you get connection establishment on your outbound connection, right? And at that point, you can say, ah, that worked out, I'm gonna submit my deferred decision for the SYN filter invocation up here back to the stack. And then that's gonna submit to the SYN cache, which will send the SYN-ACK and complete the connection with the client. If anything happens in here — maybe you got a reset from the server, or timed out — you have no established client connection, the client application hasn't proceeded in any way, and your emulated behavior is now exactly what their experience would have been in terms of connection establishment. It's the same as what it would have been if they were connected directly to the server instead of the proxy. So that's the end-to-end relationship that you've maintained. Yeah — have you seen this actually matter in some specific application? Because, as I see it, in one case the client gets a connection closed, and in the other case the client gets a connection reset. So if you have your SYN filter, you can actually convey the reset all the way down to the client. Right — that's a different error for the same scenario. Well, in this case, you're saying if you send the server a SYN and it sent you a FIN in response — or, for the reset —
If it sends you a reset, then you actually know that, because you get that status from your connection attempt. You know how the server responded to you. You can distinguish — you've done a connect call here, and based on the way the connect effort terminates, you can actually distinguish, through the normal mechanisms, without any special changes — we haven't changed anything — whether you got a reset, you got a close, or you got a timeout. And then you're able to tell the SYN filter, when you submit this for decision: you can say, reject that SYN silently, which would emulate the timeout, or you can say reject it with a reset, which would emulate a reset. I don't think, in the flow here, you can get anything other than a reset or a timeout in this part of the sequence. Maybe my question would be: so in the first case, the proxy without the SYN filter, you get a reset from the server, but by the time you get your reset, we have already established a connection with the client. And so when you get the reset from the server — let's say after you send your SYN, like in the first case, we're talking about this case here — yeah, yeah — then the only thing you can do is close. Oh, right, yes, exactly. I mean, that's part of the benefit of having the SYN filter implementation. Right — in that case, without the SYN filter, you can't actually emulate all the behaviors of the server back to the client. Right, so the end-to-end thing — the famous end-to-end folks are really crazy about that sort of stuff, and I've personally never seen an application where that's actually useful. You know, the web browser doesn't care whether you're resetting or you're closing. I guess my question to you is, have you ever seen a case where this kind of behavior actually matters? Yeah — the actual answer to that is, I don't know the specific real-world installation example where that's the case. All I can tell you is that this was one of the required features when I was approached to do the work. Because I'm doing this work under a contract, and I don't have exposure to the customer side in this project, right? So I don't have a real-world use case that can tell you, oh, in this installation with this client and this protocol, this makes all the difference. I think the way to look at it is that the main value is it allows you — if you're talking about inserting a proxy in some situation to provide some sort of behavior — I mean, just look at it from the point of view of being a startup and creating a new product and saying, I can deliver feature XYZ by proxying all your traffic, right? And then — this is just a basic product development benefit — someone can nitpick and say, ah, well, you're changing things, you're really visibly getting in the way between my clients and my servers. And then you have to have an argument: does that really matter? Like you're saying, web browsers don't care — oh, do they? Technically, this is transparent, right? But then it never ends, right? Because then you have your sequence numbers that you have to align. Well, that's like, how transparent do you wanna be? Then there's timing, then there's, you know — it's like, do you wanna be invisible? So this is true. This is transparent down to addressing and, like, you know, connection establishment details, right?
Beyond addressing and connection establishment, nothing else is being aligned — you know, timing, sequence numbers, all the odd behaviors of stacks that can fingerprint. I mean, this isn't invisible sockets; that's not the goal. So I think we're one minute over now. Okay. Close — I apologize. How about give me two more minutes and I'll wrap it up, because this is really, I think, the bulk of the interesting piece. I'll skip the walkthrough of the API, but the flavor here is just that the interface for doing this looks a lot like the interface for normal socket operations. I've tried to implement all the functionality through additional socket options, as opposed to new API calls. That just goes to, you know, what if we wanted to use this in kernel as opposed to in userland? That makes that more possible. Just a comment on scalability. The way scalability was handled is that all the connection contexts are still kept in one giant hash, right? But everything that comes through the promiscuous interfaces and promiscuous sockets is hashed with a more expensive hash that takes into account VLAN tag stacks, source and destination, in order to get good distribution. So if you're handling a million connections, all those connections can be on a million different VLANs and the same IP. They can be on a million different IP and port combinations. You know, you can slice it however you want. You'll still get good hash distribution. You won't get, you know, performance degradation from long chains. And the way it's done is that everything that's in connection domain zero, that's not being processed in promiscuous mode, still uses the existing hashes, the smaller, faster hashes. It goes into the same hash table. The lookup path is different, because it knows whether it's doing connection-domain-zero or promiscuous-sockets lookups. So you only pay the penalty when you want promiscuous functionality, but you can still use all the natural stack functionality without additional expense. And this has been tested up to, you know, basically two million sockets in the box. You have one million active connections, one million listen sockets, addressed however, and all those sockets function and pass data correctly. So the second feature that was added — a lot simpler to talk about — is called passive receive. And that's the ability to run TCP reassembly and socket operations using a copy of a packet stream between two endpoints. So you can be connected to a SPAN port or some other layer in your architecture to deliver the copy of the packet stream, and you can get a pair of sockets that you can read from that'll have the fully reassembled TCP streams that are present in the actual connection between those two endpoints that you're monitoring. We'll have to save the rest of that, I think, for online reading. But one of the couple of things to think about before we part ways is: when you're doing passive receive, you're not actually participating in the TCP protocol, you're only monitoring it. So if there's any packet loss in the path between you and the packet stream that you're observing — I mean, if you're using a SPAN port, this is a big problem, because SPAN ports are really lossy, at least on the equipment I have access to. Even between virtual machines, which is all in memory, they're really lossy. There's no way to get a retransmit of that. The client isn't gonna retransmit a packet because it went missing to you, the passive receiver, if the other side saw it and ACKed it.
So there's additional functionality that's been built in to handle the case where there are missing packets: you get holes in the data, you might have missed FINs closing connections down. So you need support in the mbuf system for hole data, support in the receive function to tell you where hole boundaries are, whether you've had them or not, so your application can give up or continue accordingly. And you also need to have timers to make sure connections don't live indefinitely. You can either use the existing idle timers to kill connections after inactivity, or the application can have its own timer to handle the case where you missed the connection close-down. Okay, so, API. And really, performance — in terms of time performance instead of scaling performance; we've talked about scaling — we're just really getting into the phase where we're starting to look at time performance. But in quick and dirty tests, if you're doing something like a netcat-like transfer of a large file, throughput through the userland stack is currently about 70% of what you get, with everything else the same, doing it in kernel. So on the one hand, it's not like 10x slower or 2x slower. On the other hand, there's clearly a gap that we have to close. And I think there are some obvious places to look. We talked about locking and per-CPU issues earlier; that'll account for part of the difference. Using the netmap interface, we'll get some benefit from reduced packet copies when I bring up the current revision of netmap. And we'll close with the list of future work. Maybe the most relevant will be the fact that right now, for promiscuous sockets, it's not fully plumbed through IPv6. There's no technical reason why not, other than there was an IPv4 requirement. This is one of those examples of — this was a scratch-the-itch project that had an IPv4 requirement and then a requirement to get passive receive working. So plumbing everything through IPv6 for promiscuous sockets is lower on the list. But if you actually look at the code, it's partially done. You just have to replicate exactly what's going on on the IPv4 side into some of the IPv6-specific equivalent functions. There's technically nothing new going on, just code that has to be written. Currently, the TCP/IP code in there is from 9.1. So a near-term effort we're looking at, maybe over the summer, is to upgrade that to the 10 series, to get some of the improvements that have happened since 9.1. And that's probably all that's of interest. And a couple of quick acknowledgments. Of course, to Juli Mallett, who was really persistent in connecting me to this work, was a good sounding board throughout, and certainly suffered through the first libuinet integration with WANProxy. And of course, to the sponsors who paid for all this, all of whom are listed on the slide. If you click on their logos, it'll take you to their websites. For some reason, nobody wants to talk about work like this. The good news is it's open source, so we can talk about the details. I think it's the same old story: for some reason, the source code is perceived to have no marketing value, but what you do with it is. So we can talk about the source code all day and night, but nobody wants to advertise what they're building yet, right? And this was a nice Canadian scene from west of town — well, Vancouver Island — but I think we've covered the Q&A and ran out of time, so — all right, well, thanks for sticking around, guys. Sorry I ran over. And these slides, this deck, will get posted.
It'll be linked to on the speaker page at some point.
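As a rough illustration of the "more expensive hash" mentioned in the scalability discussion above — folding the VLAN tag stack in with the usual address/port 4-tuple so that connections spread across a million VLANs still distribute well — here is a generic sketch. The FNV-1a mixing function and the struct layout are assumptions for illustration, not libuinet's actual lookup code.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Generic FNV-1a; the real implementation could use any good mixing function. */
static uint32_t
fnv1a(const void *data, size_t len, uint32_t h)
{
    const unsigned char *p = data;
    while (len--) {
        h ^= *p++;
        h *= 16777619u;
    }
    return h;
}

struct conn_key {
    uint32_t vlan_tags[2];   /* outer/inner VLAN IDs, 0 if absent (nested tags) */
    uint32_t laddr, faddr;   /* local/foreign IPv4 addresses                    */
    uint16_t lport, fport;   /* local/foreign ports                             */
};

/* Promiscuous-mode lookup hash: unlike the classic 4-tuple hash, the VLAN tag
 * stack participates, so connections that share an IP/port pattern but live on
 * different VLANs still land in different buckets. */
static uint32_t
promisc_conn_hash(const struct conn_key *k, uint32_t nbuckets)
{
    uint32_t h = 2166136261u;
    h = fnv1a(k->vlan_tags, sizeof(k->vlan_tags), h);
    h = fnv1a(&k->laddr, sizeof(k->laddr), h);
    h = fnv1a(&k->faddr, sizeof(k->faddr), h);
    h = fnv1a(&k->lport, sizeof(k->lport), h);
    h = fnv1a(&k->fport, sizeof(k->fport), h);
    return h & (nbuckets - 1);   /* nbuckets assumed to be a power of two */
}

int main(void)
{
    struct conn_key a = { {100, 0}, 0x0a000001, 0x0a000002, 80, 40000 };
    struct conn_key b = a;
    b.vlan_tags[0] = 200;        /* same 4-tuple, different VLAN */
    printf("bucket A = %u, bucket B = %u\n",
           promisc_conn_hash(&a, 1u << 20), promisc_conn_hash(&b, 1u << 20));
    return 0;
}
```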
libuinet is a userspace library version of the FreeBSD TCP/IP stack that also includes extensions to the base stack functionality that make it particularly useful in network infrastructure equipment. This talk will cover its design goals, implementation, current and potential uses, and performance. libuinet was originally conceived as a way to bring highly scalable transparent proxy functionality to the free, portable TCP proxy WANProxy (http://wanproxy.org). To this end, libuinet extends the base FreeBSD TCP/IP stack feature set to include 'promiscuous sockets', which allow listens to capture connection attempts across VLANs (including nested), any IP address, and any port, admit/ignore those attempts based on an application-supplied filter, and retrieve the complete L2 and L3 details of admitted connections. Promiscuous socket functionality also allows active connections to fully specify their L2 and L3 identity. In this mode, libuinet has been shown to scale to 1 million active connections concurrent with 1 million listen sockets, with those million connections distributed in multiple ways across the VLAN and 4-tuple TCP addressing space. Implementation of another extension to the stack, 'passive sockets', is currently underway and targeted for completion by the end of 1Q2014. Passive sockets provide for reassembly of both data streams in a TCP connection, along with a missing-frame notification mechanism, based on a copy of the packet stream flowing between the connection endpoints (e.g., via a SPAN port).
10.5446/15359 (DOI)
The thing that we could probably all agree on is that one of the reasons you version data is because keeping the history of that information is very important. And here are a couple of examples that I like to talk about. On the left we have a map of the former Yugoslavia, and right now we have six sovereign nations; that's a result of the war in the 90s. The map on the right is also kind of striking: this is the ethnic distribution of Bosnia before and after the war. So it's important to keep track of these changes — be those political, physical, natural — in the way we represent our information. And that's what a lot of people think about when they think about versioning information: they think about historical information. But that's not the only reason we may want to version information. We create versions of data for many different reasons. People might be collaborating on the same dataset concurrently, for instance, or you may want to see a different view of the world, of reality, and you may want to prove a point based on that. We argue that we need better tools for this. The state of the art in the industry is good, but there are new models, new ways of working that require new tools, and we'd like to talk about those today. So the old way looks something like this. You have a lot of people that are trying to collaborate. They are part of a team. Usually they work with data; they modify data. And what usually ends up happening is that you have groups of people that are collaborating through a big relational database, usually. That's where the versioning happens. Another group of people do something kind of similar, and this other person is left wondering: how can I extract some of those changes from those two groups? Maybe not all of them, because I may not be interested in every change that happened, but how can I pick and choose the changes that I'm interested in? Do that in real time and when I want to do it, not when it happens in the database. This kind of workflow becomes a little bit cumbersome with a single point of versioning, which is a database. We can think of it in a new way. You can think of this as a peer-to-peer network. Everyone is exchanging information about their versions, and everyone has a full copy of the history of the data sets. That might sound like a lot of information, but the way we've solved this with GeoGit is as optimal as it can be, so with every new version you're not really copying a lot of data — you're just copying the changes that happened to that data. So it can become fairly efficient, and we can all exchange different pieces of information with each other. There is no single central point where the versioning happens. Everyone is participating in the versioning. We can exchange information with our peers. We can also institute some sort of centralized versioning system — that becomes an organizational decision, not a technical limitation. We argue that this is better for many reasons, and I'd just like to offer these three. First of all, it doesn't have a single point of failure. These days with relational databases, we've gotten really good at guaranteeing uptime and fault tolerance, et cetera, but it can still go down. You can still find that a data center goes down, that a system is not available, and in that case, you can't work. You can't just connect to somewhere, do your versioning, keep adding versions, et cetera. So with this distributed model, we don't have a single point of failure.
You can still keep adding versions to your data locally. There is no single source of truth for geospatial information with this model. This actually kind of scares a lot of people and is a little bit controversial, because we hear a lot about authoritative data sets and people owning information and making the canonical copy of something, but the reality of it is that data just wants to be free. Data wants to be copied, it wants to be used for different purposes. And once that happens, I probably want to create my own version of that data for whatever reason. My copy of the road network might be different from the transportation planning department's. So it's good that we can actually do this and keep track of the provenance of the information. And these two aspects actually result in a better model for sharing and collaboration on spatial information. So our approach to these problems and to this idea is a project called GeoGit. It's an open source project, it has a BSD license, and it's built in Java. And that's the website in case people want to check it out and take a look. There's a lot of documentation. You can download the software. There are a couple of tutorials and workshops that you can go through and take a look at. It is part of the LocationTech working group of the Eclipse Foundation. We're really excited about that. We're going through that process now. And this is with the intention of providing a tool that's in a vendor-neutral space. We would like to see this concept grow, and we would like to see different implementations on top of different software. So how many of you are developers? How many developers here? Okay, quite a few. So this is the easy audience. How many have worked with Git? All right, still quite a few. So GeoGit actually follows the Git model quite closely, and there are a few new concepts that you have to be aware of. And as I'll show later, you don't have to know this very precisely if you're using other types of tools. But if you're working with the low-level command line interface, you should know at least what this means. So usually we have spatial data somewhere. That's in the form of files — they could be shapefiles, GeoJSON files, et cetera. Usually you'll also have something like a spatial database, be that PostGIS or Oracle Spatial, et cetera. And what GeoGit does is keep track of the changes that you make to these data sets. And in order to do that, it has to import that information into what we call the working tree. This is where you're going to be modifying information. And then there's a sort of two-stage saving mechanism where first you add the information to the staging area. At that point, you're flagging that information as versioned, and GeoGit will keep track of those changes. And then whenever you're ready, you do what is called a commit, and a new point in the history gets created. So that's a new version, basically, in your repository. As we saw from the previous slides, the nice thing about this model is that it can be remote. So there are also a few operations to work with remote repositories. You can push changes to a remote repository or pull changes from remote repositories and keep all these data sets in sync, basically. So the power of this model is enabled by these concepts of branching and merging, which may sound alien to GIS analysts or to GIS people, but it's actually fairly simple. The idea is we have a main line, a main version with its history points, represented on the left side by that master branch.
That's my canonical copy of my data. And whenever I want to make a change, whenever someone wants to make a change, what they do is they create a branch. A branch is a divergence in the history of that information, of that data set. And I can make changes in that branch. I am isolated from what happens on master. Master can actually get new edits, new changes, so history keeps advancing and the versioning keeps advancing. And whenever I'm ready to bring these changes on this branch back to master, I do what is called a merge. At that point, GeoGit will detect whether there are any conflicts with those edits, because I may have modified a polygon that someone else also modified, or maybe deleted, or maybe they changed the attributes. So GeoGit will tell you what conflicts you may have, and you have to solve those. GeoGit doesn't solve them automatically; this tool is not designed to do that, because sometimes you don't know. Sometimes it requires human intervention. We could always add some scripting on top of this — that's certainly possible — but GeoGit will not make a decision for you on which of the two copies is the good one. That's something that you have to implement in a workflow. So I get asked this question a lot; a lot of people say, okay, why don't you just use Git with GeoJSON files? And it's a good solution if you are willing to cope with the limitations of that format. GeoJSON is a great format for representing geospatial information on a web map, but it's not really a format that will support versioning for large datasets. Git itself wasn't built with large binary files in mind. So we did try to actually implement GeoGit on top of Git directly, and this just didn't scale, didn't work. And also this workflow has pretty big gaps in integration with other tools. There's no integration with desktop tools, other GIS servers, et cetera. So by all means, if you can use this, do it, because the Git tools are pretty awesome and they can actually help you with your workflow, but we're trying to add to that idea by supporting bigger files and more specialized workflows. So why do this? Why would we want to version information, and why do we want to keep those versions around? I'm going to offer three reasons. There are many more, but I'm just going to offer three here today. The first one is a classical GIS operation where you're studying different alternatives for, let's say, a road that's being built, and you want to study their impact — maybe their environmental impact or something like that. So you may want to keep those versions around. You may want to run models on those versions separately, and it's very easy to change from one version to the other, because that's just an operation that's called a checkout, and they all live in the same repository. What usually ends up happening is that when you're done with this process, you throw away the alternatives that didn't get chosen, or you store them somewhere. With this model, they can just live on a branch and you can access them at any time. Another classical example of why you would want to do versioning is crowdsourcing. It's a hot topic these days. A lot of people want to do this. It presents some challenges for your IT infrastructure and your systems people. They will probably freak out when you tell them that you're going to open up a database and that people are going to write to it.
That means that they'll have to implement a lot of safeguards and security protocols, and there's always a concern that that database is going to be corrupted, et cetera. So an easy way to approach this problem is to just offer a branch for crowdsourcing. And someone, or some process, can actually pick and choose whatever changes are good for your production database and bring those into your organization. The other one is OpenStreetMap. How many people have worked with OpenStreetMap data? Quite a few. OpenStreetMap is a worldwide crowdsourced vector street map. It's very detailed. Sometimes it's better than the commercial providers in some areas. And a lot of people want to use it within their organization, which is something that sounds like a no-brainer, but it's actually not that easy. So with GeoGit, this is what you do to actually ingest OSM data into your GeoGit repository. It's just a couple of commands, as you can see here. The first one is just downloading a section of OSM. I'm just passing a bounding box — that's actually the greater London area — and it'll just clip that area and download everything. At that point, I create a new repository, and everything that I do to it is going to be versioned. So I can keep track of the changes that I do locally to my OSM data. But I also want to keep track of what the community keeps doing to the data set. I want to get the updates that the crowd is making in OSM. So that's what the second command allows me to do. It'll just bring in whatever changes happened in OSM, calculate the differences, and put those into my GeoGit repository. And actually, if there are some conflicts between the changes that I did and the ones that the crowd did, it'll also tell me, and I'll decide what to do with those. We can actually filter as well. This is a mapping file. It follows the Overpass API language. So I can not only get everything in a certain area, but I can also say, maybe I'm only interested in the highways, or only the bridges, or something like that. And finally, we can also contribute back to OSM. If we create new data in GeoGit, there's an export command that will allow me to produce a PBF file that I can then take into an OSM editor and put into the crowdsourced map, the public map. So it's a nice workflow for using and also giving back to OSM. So right now, GeoGit has released a few versions. We just released, about a month ago, version 0.8. We're actively working on releasing 0.9. There's a complete command line interface tool that comes with GeoGit when you download it. I like to say that it's feature complete. By that I mean that it's usable. You can start using it. It's being used in production in some projects, like we'll see in the next talk. But we're still working on tweaking things — internal structure, optimizing some operations, performance, et cetera — and we hope to reach 1.0 in a couple of months or so. Here are some examples of the command line, just to give you a sense of what they look like. A log will show me the commits, the different points of history. The second one is kind of the feature that matters here, which is a geospatial diff. It's not complete because I had to cut the screen, but you can see already there's a polygon there. So it's detecting changes in geometry. And I'll show that in a second in the demo. So that's the core GeoGit project. At Boundless we're working on some additional tools around this core library.
We're adding a Python library that would allow you to automate some of these processes in an easier, more accessible language such as Python. We're also implementing a high-performance GeoGit server that will expose many of these operations through a RESTful API that outputs JSON and GeoJSON, like you see on the right-hand side. We are also working on a QGIS plugin, so GIS analysts will be able to access this functionality without touching the command line, without having to know about GeoJSON and those things. We're also adding web components for web mapping applications. And we're also working on supporting larger deployments, larger repositories, performance optimizations, et cetera. And I am going to try to give you a short demo of some of these things. The resolution is not very good — I apologize for that — but you can get an idea. This is the GeoGit plugin that we are working on for QGIS. So here you can see the history of my repositories, all the things that I did to those data sets. I'm working with this buildings layer from OSM. This is actually in Ethiopia. So as you can see, I have a Bing layer. It's a bit dark. This is aerial photography, and there are a few buildings that have been digitized already. So this tells me here — I don't know if you can read this — that we are currently in the edits branch. So I created a branch. I'm editing information. And I'm just going to add a few, or maybe one, polygon here. So hopefully the internet works for the base map and I can do something meaningful. This data is actually currently in a PostGIS database that's running locally. So I'm going to create a polygon here. And forgive me for my lack of precision, but I'm just going to digitize something like this. All right. So I created a new polygon. I click OK. It's a new entry in my database. I'm going to save the edits. And the GeoGit plugin is going to ask me if I want to import this into the GeoGit repository to create a new version, which I'll say yes to. And it will immediately ask me why. Why did I do this? What's my commit message? What is the operation that I did? So I'm going to say: add a building. And as you'll see, the list gets updated. This is a new entry in my history list, a new point in my versioning. And it's in my edits branch. So what I'm going to do now is actually change my branch. I'm going to go to master. That's my main version branch. You can see that that last commit is not here. And I'm going to merge that in from the other branch. And this is one person working on two branches, but you can see where this is going. You can have people collaborating this way and adding changes. So there's my new commit that I just created. The other thing we can do is — there's, as I said, a server that we are also working on. So that looks something like this. This is the same repository exposed in the form of several layers. And as you can see, this is a web map display with the same information. And what I'm going to do is push these changes to the server so I can view them in a web display. For that, I can just do a sync with the server, which is already configured for this repository. And what it'll do is first pull the changes from the remote repository in case someone else made some changes, and then it's going to push my changes to it. And if that is successful — which it looks like it was — I'm going to redraw this. And you see this polygon just showed up here, which wasn't there before.
We also have some tools to see all this history, all these commits, and to be able to actually view the changes before and after a commit was made. This one was just one polygon — it's not very interesting — so I'll offer you another view that's a bit more complete, which is probably this one. So for instance here, the green ones are polygons that were added, the red ones were deleted, and I think down here is one that was modified, the blue one. You can see the blue one was just slightly moved; maybe it wasn't digitized in the right place. So that's what we are working on: GeoGit and its growing ecosystem. Again, I'd like to stress that we think this is a better model for editing, versioning, and collaboration, because of the reasons that I offered before. And thank you very much for your time. I'll be happy to chat with you later if you have any questions or want to know more. Thank you. Thank you.
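As a toy illustration of the merge behavior described in the talk — GeoGit itself is written in Java, so this is only a language-neutral sketch of the three-way rule, not its actual implementation — a per-feature merge decision can be thought of as comparing each branch's version of a feature against the common ancestor:

```c
#include <string.h>
#include <stdio.h>

/* Outcome of merging one feature (one row/geometry) from a branch into master. */
enum merge_result { TAKE_ANCESTOR, TAKE_OURS, TAKE_THEIRS, CONFLICT };

/* Three-way rule on a per-feature basis: compare each side's version of the
 * feature (here just an opaque string, e.g. serialized attributes + geometry)
 * against the common ancestor. If only one side changed it, take that side;
 * if both changed it in the same way, there is no conflict either; if both
 * changed it differently, a human has to decide -- the tool reports a conflict
 * rather than picking a winner automatically. */
static enum merge_result
merge_feature(const char *ancestor, const char *ours, const char *theirs)
{
    int ours_changed   = strcmp(ancestor, ours)   != 0;
    int theirs_changed = strcmp(ancestor, theirs) != 0;

    if (!ours_changed && !theirs_changed) return TAKE_ANCESTOR;
    if (ours_changed && !theirs_changed)  return TAKE_OURS;
    if (!ours_changed && theirs_changed)  return TAKE_THEIRS;
    return strcmp(ours, theirs) == 0 ? TAKE_OURS : CONFLICT;
}

int main(void)
{
    /* Both branches moved the same building polygon to different places. */
    printf("%d\n", merge_feature("POLYGON A", "POLYGON B", "POLYGON C")); /* 3 = CONFLICT  */
    /* Only the edits branch touched it. */
    printf("%d\n", merge_feature("POLYGON A", "POLYGON B", "POLYGON A")); /* 1 = TAKE_OURS */
    return 0;
}
```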
Everyone working with geospatial data eventually faces the problem of managing their information and assets as they change over time. Versioning of geospatial data has been an issue for any workflow that involves more than one individual. Questions like who changed what and when become hard to answer, and while versioning approaches have existed for a while, they are cumbersome to use and utilize old paradigms. GeoGit takes concepts and lessons learned from the open source programming world and applies them to management of geospatial information, allowing better and decentralized management of versioned data and enabling new and innovative workflows for collaboration. In this 2 hour workshop, we'll walk through core procedures in managing version history and inter-operating with preexisting spatial software tools.
10.5446/15356 (DOI)
Let's get started. I'm Matt Ahrens. I helped create the ZFS file system back at Sun Microsystems starting in 2001, and now I work for Delphix. So first, before we get to the details of the talk, I have a few questions for you guys, which may result in you receiving a free, awesome t-shirt that looks just like this — a bunch of FreeBSD-based companies' logos on the back that are using OpenZFS. So these all apply to using ZFS in production on whatever platform is your favorite one — FreeBSD, I assume. So who has the biggest storage pool that they're using in production? How many terabytes? Hundreds? 100 terabytes? 127. Yeah? Anybody more than 120? About 256. Wow, that's a lot of drives, huh? Congratulations. Do you want a t-shirt? Do you already have one? I already have one. Would you like a t-shirt? What size? Large. I have a very few number of larges, so you are in luck. Extra large will work. Extra large will work? All right. We'll give you that. Extra large. So how about an all-flash pool? Somebody must have, what, 10 terabytes of all flash? No? Not too many people. 60 terabytes of all flash. Think. Can anybody do better, either in terms of the number or the reliability of the number? The laptop has 480 gigabytes of flash. All right, so not too many all-flash pools. T-shirt? Sure. What size? Whatever you have. Medium work for you? Sure. Sure. And how about the largest number of file systems or zvols in a pool? I'm sure somebody has 1,000, right? 1,000 file systems? More than 1,000? No? You're just going to win all of them, and you already have a t-shirt. 1,144 would be the number. How about 500? People don't keep track of these things. So it just works so well, you don't have to worry about how many file systems there are, right? So you don't know. Total number of snapshots in a pool? Somebody's got to have like 10,000. What? No. 5,000? 3,000 in months. 3,000? I have like several hundred thousand. Several hundred thousand? I have like 109,000. Wow. Do you want another t-shirt? I know you had one last time. How about the most number of pools in one system? So this is a little bit of an unusual use case, because I think typically people have maybe one or two storage pools — one for booting, one for your data. Anyone using more than that on one system? I have three. Three. One for booting, one for the database, one for the system — the file system. Gotcha. Cool. Anyone else? Yeah? 8. 8? What are you using them for? One-short, one-short. One-short. I basically have everything. So I'm actually not getting services, because the one that's in the project is. So for fault isolation, it sounds like. How many do you have? 40. 40 different storage pools in one system. Also for fault isolation? What size t-shirt? What's next? OK, so who has the most memory? Thank you. Yeah. Who has the biggest memory system that they're using with ZFS? What's that? 144. Somebody has more. 256. Yeah, 256. Do I hear 512? 256. 256. No, nobody with 512? Yeah, we have 512. You have 512. Anybody with more than 512? Another 512. Yeah, 512. What size t-shirt? All right. Maybe I scared you guys off of the larges. I do actually have two larges, if you'd prefer. Thanks. And who has the biggest L2ARC? So the largest cache devices, or the most numerous. I'm guessing somebody's got a terabyte? No? A terabyte of L2ARC? 500 gigs of L2ARC. Yeah? You have 500 gigs? You have 500 gigs in one drive? We have a bunch of different drives. Four SSDs? Cool. What size t-shirt? Large? All right, cool.
So now that we're done with the game show part of the presentation — if you're really sad that you didn't get a t-shirt, come up to me afterwards; I do have a few more here. So first, I wanted to give just a little bit of overview. Probably most of you are familiar with this, but: what is the ZFS storage system? So ZFS is storage software. It incorporates the functionality of both file systems and volume managers. That means that we can create a lot of different file systems from a lot of different pools. And we do that in a very flexible way, allowing you to create file systems on demand. They use only the amount of space that they're using, and as you free space from file systems, that space gets released back to the storage pool immediately. So you don't have to statically configure: this file system is going to use this disk, and this other file system is going to use this other disk. You just have a pool of all the disks, and the file systems allocate and free space from the pool as they need it. We have a transactional object model, so everything is always consistent on disk. You never have to run fsck, because everything is always consistent. And you can use ZFS for all sorts of storage systems — for both files and block devices, exporting them out over NFS, SMB, iSCSI, Fibre Channel, Samba, all this kind of stuff. And we have end-to-end data integrity. We checksum all of the data before writing it to disk, and then after we read it from the disk, we verify the checksum. And all the checksums also cover the checksums below them. So essentially everything in ZFS is a tree, and the checksum verifies everything that's below that point in the tree. This is called a Merkle tree. So this allows us to detect and actually correct silent data corruption. All of the metadata in ZFS is stored redundantly, with at least two copies, in addition to whatever redundancy you've configured at the storage pool level, like mirroring or RAID-Z. So when we read data from disk, we verify that checksum. If the checksum is bad, then we can go find another good copy of it, read the good copy, and then correct the bad copy. And if there are no good copies, then at least we can tell the application that we don't have this data for you, sorry — rather than just giving you whatever the disk happened to give us and letting that silently get corrupted up through your whole application stack. And lastly, but I think one of the most important things about ZFS, is that it's easy to administer. So when we started the ZFS project back in 2001, one of the main goals was just to end the suffering of system administrators. We saw how hard it was to use file systems and software volume managers together, or file systems and fibre channel and big, expensive storage boxes together. So we wanted to create an administrative interface that allows system administrators to concisely express their intent. And one of the ways that we did that is with inheritable properties. So you can easily group things together that logically are similar. For example, you might have a system with a lot of home directories, and then some databases or some video files. You can say: all of my home directories should be compressed, all of my video files should not be compressed, and all of my database files should be stored with an 8K record size. This is very easy to do with ZFS property inheritance. And lastly, we wanted to create scalable data structures.
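To illustrate the end-to-end integrity idea described above — every block pointer records the checksum of the block it points to, so a read can be verified and, when a redundant copy exists, repaired — here is a schematic sketch. It is not the actual ZFS read path: the checksum here is a trivial stand-in (ZFS uses fletcher4, SHA-256, and friends), and the "copies" are just in-memory buffers.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Trivial stand-in checksum, just for the sketch. */
static uint64_t
toy_checksum(const void *data, size_t len)
{
    const unsigned char *p = data;
    uint64_t sum = 0;
    while (len--)
        sum = sum * 31 + *p++;
    return sum;
}

/* A (very) reduced "block pointer": where the copies live plus the expected
 * checksum of the data. The checksum lives in the parent block, which is what
 * makes the whole pool a self-validating Merkle tree. */
struct toy_blkptr {
    void    *copy[2];         /* up to two redundant copies (mirror/ditto)    */
    size_t   size;
    uint64_t checksum;        /* checksum recorded when the block was written */
};

/* Read through a block pointer: try each copy, verify it against the checksum
 * in the pointer, repair a bad copy from a good one if possible, and only hand
 * verified data back to the caller. */
static int
toy_read_verified(const struct toy_blkptr *bp, void *out)
{
    int good = -1, bad = -1;

    for (int i = 0; i < 2; i++) {
        if (bp->copy[i] == NULL)
            continue;
        if (toy_checksum(bp->copy[i], bp->size) == bp->checksum)
            good = i;
        else
            bad = i;
    }
    if (good < 0)
        return -1;                                /* no valid copy: report an error */
    memcpy(out, bp->copy[good], bp->size);
    if (bad >= 0)
        memcpy(bp->copy[bad], bp->copy[good], bp->size);   /* self-heal the bad copy */
    return 0;
}

int main(void)
{
    char good[4] = "ZFS", bad[4] = "zfs", out[4];
    struct toy_blkptr bp = { { good, bad }, sizeof(good), 0 };
    bp.checksum = toy_checksum(good, sizeof(good));
    if (toy_read_verified(&bp, out) == 0)
        printf("read \"%s\", damaged copy now \"%s\"\n", out, bad);
    return 0;
}
```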
So scalable data structures — I actually put this under the category of administration rather than explicitly performance, because I think that with a lot of the limitations that existed in earlier file systems, the main impact is not so much performance, because people learn how to work around them. But then all those workarounds become additional things you have to remember every time you're configuring a storage system. So you have to remember: well, I can use this file system, but not if I have a disk that's bigger than a terabyte, because then it gets slow. Or I can use this file system as long as I don't put more than a couple hundred files in each directory, so I have to figure out how to break up my directories. We wanted to create scalable data structures so that no matter how you're using ZFS, the performance is going to be very good. So this is showing where ZFS fits into the software stack — the different software components in the kernel. At the top, you have requests coming in over NFS, maybe SMB, local file accesses. All that gets funneled through the virtual file system layer, VFS. So in a traditional file system model, the VFS accesses files in the file system. The file system then accesses blocks from a block device, either from a volume manager directly, or maybe out over iSCSI or fibre channel to an external volume manager that's implemented in a very expensive hardware appliance. And then that, in turn, talks blocks to the actual storage. So across this interface, a lot of information gets lost. For example, the volume manager doesn't necessarily know what data is being used by the file system and what isn't. It only knows what you've written at some point in time. And you have this isolation where each file system is attached to its own disks. With ZFS, ZFS basically subsumes the role of both the file system and the volume manager, as I mentioned. Again, we have file operations coming in from the VFS layer, but we can also service block-type operations for SCSI targets, like an iSCSI or fibre channel target — so being able to export volumes out over iSCSI or fibre channel. That all comes into ZFS. And then we've redrawn the boundaries between these software components. So the top level of ZFS deals with the operations that are specific to files or volumes. The volume layer is very simple, but the file layer deals with things like file ownership and permissions and file length and directories, things like that. And then these layers talk to the data management unit using simple atomic transactions on objects. So the data management unit provides objects. Objects are kind of like flat files. So the POSIX layer can create a transaction that says: OK, I'm renaming this file, so I need to remove the entry from this directory, add the entry to this other directory, and modify the file to say where its parent is, for example. Then within ZFS, we have this layer between the DMU and the storage pool allocator. And this layer is where we can request blocks to be allocated and read and then freed. So the DMU has some piece of data that the user has asked us to write, and we send this data down to the storage pool allocator, and we say: OK, great, here's the first 128 kilobytes of the file that the user asked us to write. Please write that somewhere to disk, or some number of disks — I don't really care. Just you figure it out and then give me a token that I can use to read that back later on.
The storage pool layer deals with allocating space on disk for that, compressing it, checksumming it — and maybe, if we're doing mirroring, writing two copies, or with RAID-Z, writing out the parity information — and then tells the DMU: great, here's the location that I wrote on disk; whenever you want it back, let me know. The DMU can then read that, and it can also free it by specifying that same block pointer. Cool. Any questions about this? By the way, if you have questions throughout this, just raise your hand or shout out. I think we have plenty of time to take questions throughout the talk. Cool. So how did we get to this point of ZFS having all these features, being available on all these platforms? So as I mentioned, we started working on ZFS back in 2001 at Sun Microsystems. We open sourced the code in 2005. Shortly thereafter, Pawel worked on porting it to FreeBSD, and it was released in FreeBSD 7 back in 2008. So things are going along great, and then there is this really concerning event where Oracle acquired Sun Microsystems and stopped contributing source code for ZFS. So in the community, there's this really big question, because up until this point, almost all of the contributions and source code changes to ZFS were coming from Sun Microsystems. Despite it being open source — everybody was using the source code, it was in FreeBSD — the vast majority of the actual contributions and code changes were coming from Sun. So when Oracle bought Sun and turned off that tap of source code changes, people were asking: well, what's going to happen? Is ZFS just going to become yet another proprietary Oracle technology that is just going to wither and die in the open? So in 2010, in response to that, a bunch of people who had formerly worked at Sun formed the illumos community as a replacement for OpenSolaris. The reason that I call this truly open source software is because the previous model under Sun was basically that one company controls everything that's happening with OpenSolaris. But under the illumos community, it's much more like the FreeBSD community, where there are many different companies, nobody has any more stake than any other company, and everyone's trying to work together to make the operating system as good as possible — everyone working on an equal footing, with a variety of people contributing, not just one company. More recently, ZFS became available for Linux, in the Linux kernel — as a kernel module, I should say. And then very recently, this past year, we started the OpenZFS community. So the point of OpenZFS — well, I'll get to that in a moment. And then just this year, OpenZFS for Mac OS X has launched. So this is an add-on from a developer who's ported ZFS to Mac OS X, so you can use it on all of your laptops. So what is the point? Any questions? Is it bootable? It is not bootable on Mac OS X. Yes. Yeah, yes. It's still very early days for the Mac OS X port. Any other questions? OK, cool. So what's the point of OpenZFS? The goal of OpenZFS is to foster community development. It's a community of developers from all these different platforms — FreeBSD, illumos, Linux, and Mac OS X. The goal of it is to raise awareness, make sure that people know: hey, ZFS is alive and well in the open. It's not just a proprietary Oracle technology. It's actually being used by a lot of companies to create products based on the open source version of ZFS.
Secondly, to make sure that people who are working on ZFS on each of these different platforms are talking to one another and sharing code between, say, Linux and FreeBSD. And also to ensure that there are consistent feature sets on each of these different platforms, and good performance on all the platforms — basically to make sure that all of the work that's going into making ZFS better is available to as many people as possible on as many platforms as possible. Any questions? Cool. So those are all great goals. What have we actually done to try and accomplish them? So like any good open source project, the first thing that we did was create a web page. It's open-zfs.org — don't forget the dash. And we created a mailing list. The point of the mailing list is for developers to be able to talk about and review platform-independent code changes. This isn't intended as a replacement for the platform-specific mailing lists, which are largely used for discussion of how to use ZFS — how do I work around some problem, how do I install ZFS, how do I get ZFS running. This mailing list is primarily for developers, people who are actually working with the source code. And you'll see that the focus of the OpenZFS community is largely around developers: making sure that developers have the resources that they need and are working with other developers from other platforms who they might not normally come into contact with. So we want to simplify the development process — and really I should say here, we want to simplify the ZFS development process — and make sure that people can get their code changes from one platform to another. I'll have an example of how we're doing that in a minute. To ensure the quality of ZFS on all the platforms, we're working on creating a cross-platform test suite. We have a test suite that's been available on illumos for a long time. First it was called STF, for Solaris Test Framework; now it's called Test Runner. And that's been ported to FreeBSD, but it's not part of the usual workflow of people developing on FreeBSD. So we want to work on making that more available and an easier thing to run on every platform. We want to reduce the code differences between the platforms so that it's easy to continue porting changes between the different platforms, so that all the patches apply cleanly on all the platforms. Obviously we can't do this 100%, because there are differences between platforms. For example, at the VFS layer, where the ZFS file system interfaces with the operating system's virtual file system layer and, say, the virtual memory subsystem, there are differences there. But the vast majority of the ZFS code is not interfacing with the VFS and virtual memory systems. Most of it is actually platform-independent and could be the same on every platform. And we're also holding office hours. This is an online event, roughly once a quarter, where we'll have an expert in ZFS hold an online chat, over IM and video chat, where anybody can call in and just ask questions about what you're working on, how ZFS works on your platform. And so we're trying to get people from all the different platforms to participate in this and host office hours. So if you are a ZFS developer, especially from FreeBSD — because we have not had someone from FreeBSD yet — it would be great to schedule an office hours session that you would host. Usually what I do is kind of an interview style, because people are a little bit slow to get warmed up and start asking questions.
So basically I will ask you questions about what you're working on, what your favorite things about ZFS are, what you're most looking forward to. And then people from the audience will chime in and ask about their favorite pet project or pet feature. So we know that ZFS is available on all these platforms, and there's a lot of activity across all of the platforms. These numbers are a little bit out of date, but I think they serve to demonstrate that we have hundreds and hundreds of commits, and about 100-ish people actually contributing to ZFS across all of these platforms. And the stability, quality, and availability of ZFS on these platforms has enabled a lot of software companies — software and hardware companies, I should say — to create products based on open source ZFS. These include both companies that do kind of traditional storage, where they're selling either hardware or something on top of other people's hardware, like, say, Nexenta or iXsystems, but also companies that wouldn't be thought of as traditional storage companies but that need reliable storage. So for example, Wheel Systems — they make a security product. The primary thing that they're selling is not storage; it's not a storage solution. But they need to be able to store all the information about the data that passes through their VPN-type product. And so they chose to use ZFS to do that, because it's very reliable, very secure, and they know they'll be able to get the data back. Another kind of interesting company that I'd like to mention is Cloudius. They actually make a new operating system — which is why I didn't put them in one of these buckets — called OSv. It's actually written mostly from scratch in C++, which is certainly a new trend in operating systems. But one of the few technologies that they did not write from scratch was ZFS. And their operating system is designed for use in the cloud, so it only runs on top of other hypervisors. But they still need file systems, networking, et cetera, and they're using ZFS for that. Any questions about the great companies who are using OpenZFS and their products? So in the next section of the talk, I'm going to share a little bit about what we've actually done in OpenZFS that makes OpenZFS different from and better than ZFS as it was in 2010, when Oracle essentially forked ZFS for their own proprietary version. Yeah. So questions about kind of OpenZFS in general, what we're doing, what the goals are? No? Good. Everybody's on board. So one of the first things that we realized, when we realized that there was no longer going to be one company controlling everything that happens with ZFS, is that we need some way to coordinate changes that are being made to ZFS by different companies. One of the most critical things is that we maintain on-disk version compatibility of ZFS on every platform, on every distribution. This is what allows you to take a storage pool that was created on FreeBSD and move it to Linux, or take a storage pool that was created on illumos and move it to FreeBSD. So in the initial development model, as I mentioned, all the changes essentially had to go through Sun, so there was just a linear version number. What that means is that there was ZFS version 20, and then when you had version 21, version 21 knew about everything from version 20, version 19. Everybody agreed what version 21 meant. It meant that we added, say, RAID-Z3 support to the on-disk format. In OpenZFS, we created feature flags.
So feature flags allow the on-disk format to self-describe what features are being used. This allows me at my company to create an on-disk format change and say: OK, great, I've created something, but it's not version 22, because you might also be developing something — you might think to call it version 22 as well. I created something, and with feature flags it's called, say, org.freebsd: — what's it going to do? — background destroy. So this allows us to destroy file systems in the background rather than having to wait for them. So I can develop that feature; I don't need to coordinate with anyone while I'm developing it. You can develop your own feature; you don't have to coordinate with anyone while you're developing it. But we can still both contribute our changes to the common OpenZFS code. Cool. And while we were at this, we realized that there's a lot of rigidity to the existing on-disk format descriptiveness. So with feature flags, you can actually describe features that are backwards compatible in a read-only fashion. You can say: OK, I'm creating this new feature, but maybe it's just adding some new fields that need to always be updated, but older software doesn't need to know about them. Essentially, it can read the pool, and it doesn't have to know about the fact that we added some new type of accounting, for example. So this allows you to take a pool that's actually using newer on-disk features and bring it back to an older release of software and still open it. We also added support for the pool being able to self-describe which features are actually in use versus which features are simply enabled. With the old version number, once you upgraded to a new version, that was it. It doesn't matter if you were actually using RAID-Z3 or not — once you've gone to the version that has RAID-Z3, you can't take that pool to any older software. Versus with feature flags, if you had some equivalent feature that added a new type of RAID, you could enable that feature, and then it wouldn't be until you actually created a RAID-Z4-type device that we would mark that feature as being in use. And we actually have the ability for features to become enabled, then in use, and then to go back to being enabled. So for example, the background destroy feature that I mentioned: you can enable background destroy, you destroy a file system, and then in the background we keep track of how far we've gone with that destroy. When the destroy finishes, the feature can go back to simply being enabled, and you can take that pool back to an older system — an older software release that doesn't know about background destroy. Questions about feature flags? Yes. What's the status of compatibility with Oracle ZFS? So the question is about compatibility with Oracle ZFS. Oracle hasn't released any information about their on-disk format changes. So if you need compatibility with Oracle, then you should create your pools at version 28, which is the last version that was open sourced from Sun. That'll allow you to take your pool between Oracle and OpenZFS, but obviously you wouldn't be able to take advantage of any of the new on-disk features in OpenZFS or the new on-disk features that are in Oracle ZFS. Do you know if Oracle is actually doing any private development? Yeah. So Oracle is still investing pretty heavily in ZFS. They still have a pretty large team of software developers. And they're pushing, I think, mainly the storage appliances.
So like the ZFS storage appliance, the ZFS backup appliance. So they're definitely still investing in it. And we would love to coordinate with them to work on, at least, on-disk format compatibility for later versions. But unfortunately, their development model so far has been to keep everything completely under wraps. And there are no leaks — well, leaks don't really help us if we can't legally use that information. So any other questions? OK. So actually, this is kind of interesting, because, as I mentioned, if you use OpenZFS and you need to take those storage pools to Oracle ZFS, then you wouldn't be able to use any of the new on-disk format features. But we've actually done a lot of work in OpenZFS that doesn't impact the on-disk format at all, and this is an example of that. So in older ZFS — well, let me back up for a moment. There's this inherent problem of storage systems, which is that if the application wants to write data more quickly than the storage hardware can absorb those writes, then at some point the file system has to delay those write operations. In a very simple file system, this might be done, for example, by taking each write from the application and immediately writing it to disk, and waiting for that disk write to complete before letting the application continue. In ZFS, if the application doesn't explicitly request that the data be on disk immediately, then we will buffer those writes in memory and then flush them out at a later point in time. So this gives much, much better performance for those writes than if we were to do every write synchronously. But if we do this, then there's a limit to how many of these writes we can buffer in memory, because we don't have unlimited amounts of memory. So at some point, we have to delay the writes. And in older ZFS, the way that this was done was essentially: we can buffer this much in writes, so we start filling up the buffer, we let the application go really quickly, everything's great — and then when you hit the limit, everybody stops. All application threads that are coming in to do writes have to block until we've flushed out that entire amount of dirty data. The end result is that everything goes blindingly fast until you just slam into that brick wall and all operations stop. So in terms of performance, this is very bad for interactive performance, obviously, because you're running along, everything's great, and then everything just stops. So we wanted to quantify this impact. This is a histogram showing the speed of each operation. We see that the vast, vast majority of all the operations complete very, very quickly, in a matter of microseconds. And this is a log-log graph, so this is way, way, way more than this, keep in mind. So everything's great — but then we see that there are hundreds of these operations that are taking seconds to complete. Basically what that means is we're waiting for several seconds for anything to make progress. So we're stuck out here, and we see that the outliers are taking like 10 seconds or more. The way we're measuring the outliers here is: 99.9% of all the operations completed in less than 10 seconds. Really not great. So in OpenZFS, we implemented a smoother write throttle. Basically, as we fill up that buffer of dirty data, we say: OK, great, everything can go fast, everything can go fast, everything's great.
But then once we start getting close to being full, we delay each operation just a little bit. And then as we get more and more full, we delay a little bit more and a little bit more. So depending on the number of threads and the frequency of the requests, we'll reach a natural, self-balancing amount of delay for each operation. This is much fairer, in that we penalize every operation just a tiny bit rather than penalizing a few operations to the extreme. So the orange line shows a histogram of the write latencies under the new, smoother write throttle. And here we can see that the outliers are much, much smaller: we completed 99.9% of the operations in less than 30 milliseconds. So this is really great for consistency. One of the most important things with storage is consistent performance, even more so than ultimate bandwidth — although in this case, we actually also increased the ultimate speed from 5,600 IOPS to 5,900 IOPS. Is that on FreeBSD or on illumos? The tests that I did here were on illumos, but this change is available on FreeBSD and Linux as well. Is that a deficiency in the scheduling of the I/O in the underlying device layer, or some other reason? In terms of the improvement in IOPS, or the —? Well, the reason you do it in the first place is the big tail — is it the illumos scheduler producing such crazy results, or is that —? This is all ZFS. So this is all inside ZFS, where these delays are going on. So this is not like an I/O scheduling issue. It's essentially: as operations come in — like, the application does a write, we get a VOP_READ, we get a VOP_WRITE — we have to figure out, OK, great, I can copy your data into the kernel's address space and buffer it, but I can only do that for so long until I run out of memory. So it's ZFS that has to worry about when to delay that writer. So this is until it pushes it to the device, right? Exactly. Exactly. Thank you. Sorry. Any other questions? Is this in the FreeBSD 10 release? Yes, I'm pretty sure this is. Anybody know for sure? I'm pretty sure that this is in the FreeBSD 10 release; we did this quite a while ago. And if you're interested in some more details about how this is implemented, my colleague Adam Leventhal wrote a couple of blog posts on both the original write throttle and this new write throttle. So another cool thing that we have in OpenZFS is LZ4 compression. The previous best compression algorithm was LZJB. So ZFS has built-in compression: we can take each block — maybe it's a 128K block — and compress that down. Depending on the type of data, you often see something like 2-to-1 compression. But that compression has some cost, right? It takes some CPU to do that compression. So in OpenZFS, we have a lot of different types of compression, and we have new support for a new compression algorithm, LZ4, which is much faster, especially in terms of decompression. So the red bars here are LZ4, and we can see that, compared to LZJB, the previous default, decompression is almost twice as fast. So in practice, what this means is that on the vast majority of workloads, you can turn on LZ4 and there will be basically no performance degradation. In fact, for a lot of workloads, performance gets better, because the amount of data that we're reading and writing is less, because it's compressed. So basically, LZ4 is better in all respects, in that compression and decompression speeds are faster, and also it compresses a little bit smaller than LZJB did.
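For a feel of what the LZ4 trade-off looks like from user code, here is a small standalone example using the liblz4 C library (link with -llz4). This is only an illustration: the in-kernel ZFS integration is different, and ZFS additionally records the compressed size in the block pointer and falls back to storing the block uncompressed when compression doesn't pay off, which the example mimics.

```c
#include <lz4.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* A compressible 4 KB "block" (ZFS records are commonly up to 128 KB). */
    static char block[4096];
    memset(block, 'A', sizeof(block));

    char compressed[LZ4_COMPRESSBOUND(sizeof(block))];
    int clen = LZ4_compress_default(block, compressed,
                                    (int)sizeof(block), (int)sizeof(compressed));
    if (clen <= 0 || clen >= (int)sizeof(block)) {
        /* No gain: store the block uncompressed, as ZFS would. */
        puts("stored uncompressed");
        return 0;
    }
    printf("compressed %zu -> %d bytes\n", sizeof(block), clen);

    char roundtrip[sizeof(block)];
    int dlen = LZ4_decompress_safe(compressed, roundtrip, clen, (int)sizeof(roundtrip));
    printf("decompressed back to %d bytes, %s\n", dlen,
           memcmp(block, roundtrip, sizeof(block)) == 0 ? "match" : "MISMATCH");
    return 0;
}
```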
Any questions about this? It will be the default at some point. We were actually just discussing that at the BOPF yesterday. I think it would definitely make sense to make this the default rather than LZJB. Cool. So now I'm talking a little bit about some work in progress. So those are two great features that are in OpenZFS today. But what's coming in the future? So one of the things that we're working on is related to ZFS send and receive. So send and receive is a ZFS feature. It's used often for remote replication. It allows you to serialize the contents of a snapshot and send it to another system and restore it there. And the most important piece of ZFS send and receive is that you can do incremental sends. So you can send the incremental changes from one snapshot to another snapshot over to remote systems. So this is kind of like R-Sync, but it's much, much more efficient because it uses the internal ZFS data structures to know what blocks have changed. So the whole ZFS send operation only takes time proportional to the amount of data that was actually changed. Rather than with R-Sync, it needs to examine every file, whether it was changed or not. And then if the file was changed, it needs to examine every block of that file to determine if the block is different in the local versus remote system. So send and receive is great. And one cool thing about OpenZFS that is not in Solaris ZFS is that we have progress monitoring. So when you do the ZFS send, it estimates how big the send stream is going to be, and then can report periodically how much progress has been made so that you know when you start that send and you come back the next morning, it's still running. Is it almost done? Or do I still have another 24 hours to wait? But there is this problem, which is that if, for any reason, the send and receive processes are interrupted, maybe one of the system reboots or the network connectivity is lost, then we have to restart this send process from the very beginning. So if you are sending a terabyte of data and you're almost done and then the network dies, then you have to restart from the very beginning and send that whole 0.9 terabytes again. So we're working on a solution to this where the receiver will remember how much data has already been received. And then we can communicate that back to the sending system so that the sender can resume from exactly that point. So if we lose network connectivity, you can just restart the send. It'll restart from exactly where it left off and resume that operation. Any questions about this? Hopefully, this will be in free BSD by the end of the year. I know we've been talking about this for quite a while. So I mentioned a little bit earlier the goal of making it easier to share ZFS enhancements between different platforms. So the main way that we're going to make this easier is by creating a platform independent code repository for OpenZFS. So let me actually go to the diagram. So the current kind of de facto situation is that changes are made on all different platforms. But the only way that changes typically get moved between platforms is from Alumos to Linux in free BSD and then from Linux to macOS 10. So if some fix is made on macOS 10, typically those changes don't make their ways into free BSD, for example. And the reason for that is largely because in order, say, macOS to contribute changes to free BSD, those developers would need to have a free BSD system. 
They need to know about the FreeBSD procedures, how to submit patches, all that kind of stuff. But they might not necessarily care about FreeBSD. They just care about their platform. So we want to make it as easy as possible for them to share those changes from macOS to FreeBSD, and likewise between all these other platforms. So the goal is that we would still be able to inject changes, so make changes on any platform in any of these code repositories. But each platform would be able to push their changes into OpenZFS and then pull changes from other platforms from OpenZFS down into their platform. So the way that this will be better than the current model is that we'll be able to actually test the OpenZFS code on any platform and have a reasonable expectation that it will work on every platform. So we'll be able to test this OpenZFS code in userland, as a regular user process, on any of these platforms. So in other words, if you want to get your changes from FreeBSD into Illumos, you won't have to install an Illumos system and understand the Illumos development model and submit your changes. You'll just have to understand the OpenZFS development model, which will be much simpler. And it will be testable on the FreeBSD platform. Questions about this? Yes. What's the size of the operating system part that you depend on? Yeah, so especially relative to the whole of ZFS. So actually, I have a little point on this. So because we want to test this code on any platform, we're only going to include code that we can test in userland. So in practice, it's around 80% to 90% of the code that we think we'll be able to include off the bat in the OpenZFS repo. So it's basically everything except for, as I mentioned here, the ZPL, the POSIX layer, and the vdev disk layer. So the vdev disk layer is very small. The POSIX layer is substantial in size, but still the vast minority of the total ZFS code. So if I had to guess, I think the whole ZFS code is around, let's say, 150,000 lines of code. The ZPL is probably less than 20,000 lines of that. Cool. Great. So we're also working on a whole bunch of other new features. And many of them, the ones in green here, have actually already been implemented, and a few of them integrated. And I'll come back to this in a minute, because I don't have time to explain every single one of these, so I'll take questions on it in a moment. So the last thing that I wanted to leave you with is how to get involved with OpenZFS. So first of all, if you're making a product based on OpenZFS, let us know. We'd love to hear from you about how ZFS is helping your company. And we'd love to get your logo on our website with a little description of what your product is and why you chose ZFS, and get your logo on the back of our t-shirts that we give out. If you're a user or a sysadmin, please help us spread the word about the fact that open source ZFS is alive and well. We're doing lots of great work to enhance it and make it better for you every day. And if you're working with the code base, please join the developer mailing list. And we're just announcing this week the second annual OpenZFS developer summit. It'll be this November in San Francisco. And we're accepting talk proposals now; those will be due in September. Also, the mailing list is a great place to get feedback on code changes or if you have questions about how the ZFS code works. The OpenZFS mailing list has developers from all different platforms, all different kinds of expertise.
So that's a great place to interact with those other developers. Cool. So we have, I think, about a little less than 10 minutes. And I cannot get through every single one of these. So are there any particular items that you see here that you might like to know more about? Otherwise, I'm just going to pick ones that I think are cool and start telling you about them. No? Yes? Device removal? Yes, device removal. So this is a project that I'm actually going to be starting to work on very soon. So some of you may recall the Mythical BP rewrite project from the Open Solaris days, which the goal of which was to be able to change any, basically, to change any on-disk data in any arbitrary way. So move it from one place to another, compress it after the fact, enable encryption on it, or whatever you want. So that method turned out to be extremely complicated and very low performance. So, and I should mention, the connection here is that this BP rewrite project would have enabled device removal. But it was never really completed because of those things that I mentioned. So the device removal project that we're going to be working on now takes a little bit of a different approach. It's focused just on device removal, not on being able to solve all problems for all people. And so the way that we're going to implement it is more like we call it an indirect VDEV. So when you want to remove that device, rather than changing all of the block pointers that reference it, we will take the data that's in that device, move it into the other disks, but keep track of the mapping from the old locations in the device that's being removed to the new locations on the remaining devices. So the performance is going to be better while we're doing this. And the impact to performance after the fact, we think, can be pretty minimal given some tricks that we can play there. And this is something that will probably be in beta by the end of the year, I'm guessing. Yes. Do you have some experience with the application features other than the user journey? I don't know of anyone working on that. I think at every conference I've been to, at least two or three people have asked me about ZFS native encryption. But no one I know of is working on it. So this is a great, I think that this would be a great area for someone to contribute. The way that encryption was implemented at Oracle, I think is perhaps unnecessarily complicated. So I think that going for a feature parity exactly with the Oracle encryption might not be the best goal, but something like a whole disk, a whole pool encryption that's native to ZFS. I think would be a pretty easily implementable thing. The actual impact on ZFS would be fairly minimal. I think that the main complication there is key management, which is an area that I'm not really familiar with. So this is something where somebody who's more familiar with the encryption techniques and key management probably needs to take the helm there. But I'd definitely be more than willing to help figure out how to integrate that with ZFS. I think we could put it in the server and put it on the storage. I think it's another way to put it. I think it's the previous C-fold caps. It's a system of previous C-fold caps that encrypts the files of the NALI data and all of that site layer above ZFS. So the file that ZFS sees is actually a random-name file or the file that you encrypted as well. And the data is encrypted. And that way, if you send a sent replication with another machine, it stays encrypted. 
So yeah, you replicate your input from the browser. Yeah. So I think we're getting in touch. Yeah. Question. Some performance analysis tools other than standard like in place. For example, Oracle has a few works. Something like this. So not really. So we kind of see like, so FishWorks is the team that created the ZFS storage appliance at SunMicro Systems. It has a really cool graphical user interface that allows you to, it basically wraps around Dtrace and KStats to show you graphically like what's happening with the system and to be able to drill down on like, OK, I see this is the latency of the I-Ops. Then let me drill down on things that are just for this file system or just from this other host, which is really, really cool. Yeah, like your hardware actually see aggressive. Yeah. And like, I think that it would be great to implement something similar like that in open source. But just like at FishWorks, I don't think that it would necessarily be part of core ZFS. It would be part of something built on top of it. I don't know of anyone doing exactly that. I know there's a lot of companies that are making storage appliances like Freenaz, Nexenta. I imagine that they have similar kinds of things. I haven't seen anything as featureful, as rich as the FishWorks analytics. But I think that that would be a good place for that innovation to happen. Yes. Did SANA Oracle license their documentation under the same or at least the license to let you steal it also? I believe so. But I'm not 100% certain on that. I'm pretty sure that it's also licensed under the CDDL. But I'm not 100% sure. The man pages definitely are. But like, say, the ZFS admin guide, I'm not sure about that. Because you're starting with the verge enough. Because right now we see that the whole reason is that the ZFS is so terrible. Like, you're diverging enough now where you don't want to be going there. Do you want to rewrite it all from scratch? Yeah. So I think this is a case where the FreeBSD Handbook chapter on ZFS is a great starting point. And we need to, like, we can combine efforts that are happening on all the different platforms. As long as we keep ZFS kind of roughly the same, we'll be able to get a lot more people working on that. So I don't know if there's any new features that I don't know about. I'm used to free now. So I'm kind of behind. Is there a endless way of growing pools? And I don't mind downtime. Yeah. So you actually, ZFS has always supported online edition of space to the storage pool. So if you have two disks, say two disks in a mirror, you can add another two disks in a mirror. And that happens all online while the system is running. You don't even need any downtime. The one thing that people often asked for, which is not supported, is like, what if I have my home system and I have five disks and I have a RAID, RAID Z of those five disks? Can I just add one more? ZFS doesn't support that. Yeah. It's a huge issue. Well, if you can't do that, it would just be a very bad idea. I mean, you could jump through a lot of, lot of hoops to. No, you have a RAID Z. You can add a second beat up of a single device. Yeah. But that's not what you want. Yeah. So you don't want to maintain the RAID Z. Yeah. You want to maintain the parity. But you need to rearrange all the data to make that happen. So there's no way to do that right now. Yeah. Yeah. I mean, I have also been realistically, I understand, just an invitation to the technology. 
Like, if you want RAID Z to, you probably have to do things in pairs or whatever. Yeah. Cool. So unfortunately, we're out of time. I'll be here for a few minutes until the next speaker comes and kicks me out. So thanks so much for your interest in Open ZFS. And if you didn't get a t-shirt and you'd really like one, please come see me. I have a few more left. Thank you.
The OpenZFS project provides a common development hub for all platforms working with open source ZFS code. Currently, it is easy to pull changes from Illumos into FreeBSD, but it is more difficult to submit changes from FreeBSD to Illumos. This talk will discuss how OpenZFS will enable ZFS code and ideas to flow easily between the Illumos, FreeBSD and ZFS on Linux communities. In addition, I will present several important features and performance enhancements to ZFS in FreeBSD, and also discuss forthcoming enhancements that are in the planning phase.
10.5446/15355 (DOI)
The first one is the long-end. Next one is the long-end. I'm going to clarify it. I'll see if there's a variant on the line. You can wire it. Well, I must be important with that many microphones. I'm sitting here. Three o'clock. So, welcome. I'm Henning. I'm opposed to what many people think. I did not just write PF. I also did a little bit more. And one of the things is OpenBGPD, which just turned 10 years, which is a great opportunity to recycle the talk from, I mean, to look back. And, well, so that's what we are going to do. There's a nice picture from the birthday party. The cake was very yummy. So, background. Who am I? I run an ISP, which explains why I care about BGP. The company goes back to 1996. It's a slightly different form. So, we are an ISP since 1998. That's 16 years by now. Holy crap. Have been using OpenBSD. Large scale since about 2000. And around that time, our core authors were running OpenBSD, which is good. But they were also running Zipraw for BGP, which is not good. So, Zipraw back then pretty much was the only BGP implementation to run on a Unix system. There were some tiny projects that you couldn't really use. But that was the one that was really, really important for the development of OpenBSD. So, I think, that's the one that was really important for the development of OpenBSD. And so, it was important to be able to run it on a Unix system. There were some tiny projects that you couldn't really use. But that was the one at that time. It was written by a guy from Japan, which in itself is not a problem at all, except that he wrote all the comments in the code and all the documentation in Japanese. I couldn't quite make sense of that. If it's been only that, I had been fine, but unfortunately, the software design was utterly wrong. Do I have this on the next slide or do I have to tell you to load? Yes, I do. So that zebra thing is a prime example on how not to design a BGP daemon. First mistake, you know this famous saying, right? There are three kinds of bugs, my bugs, your bugs, and threats. You can make this even worse, cooperative threats. Combined with the model of a central event queue, this becomes much, much, much worse. So a session coming up or a session going down is an event. Receiving a routing update is an event. In BGP, you have to send keep-alives every now and then to inform your peers that you are still alive. Now, the event I need to send a BGP keep-alive also goes into the centralized event queue, which is processed in a FIFO order. So if your central event queue is very busy because you just lost sessions and they sent you another 500,000 routes, well, that event I need to send the keep-alive to my other peer kind of starts at the end, but you don't send the keep-alive, at least not in time. So what happens? Your peer drops the session because he thinks you're dead. The session, pretty quickly, will come up again and he'll send you another 500,000 routes, making the thing worse. So, not the smartest design. And I said the documentation was in Japanese. So I tried to cope with that. I patched out the worst bugs because the thing also liked to crash. I got a reasonably stable at least, but it was still dog-slow. Again, we were talking 10 years ago where we didn't have the... I'll solve the problem by throwing out a solution that everybody is employing today. So it was still slow. 
A little bit later, actually after I started BGPD, the author tried to commercialize it, and that went as it usually goes when an author tries to commercialize open-source software. Some frustrated users tried to fix it by forking it, calling it Quagga, which is still around. But, well, the design is still wrong, so this was entirely fruitless. So, starting with BGPD. The biggest problem: Theo got me drunk in Calgary. That's not a problem. That's what you think. At some point that evening — I was there to do the OpenBSD release work for whatever release it was 10 years ago, and this was a release that was kind of painful because we had some last minute bugs, and after we got it out of the door, we went drinking, and this is what happened. I made the mistake of mentioning how fed up I am with that Zebra thing, and I mentioned, well, I could write my own, but this is so much work, and I don't know how to do it, I can't code properly anyway, and I don't know how sockets work. Well, that's a lie, but... It was clear this wasn't a monster task, but, well, unfortunately he kept nagging, and in late 2003 — this was like two or three months after that getting drunk thing, and of course I never ever got drunk again. What did I do? You just earned the first banana. So two or three months later, I started hacking. Basically by locking myself into my office and not leaving for a week or so — no, that's not true, I went home to sleep, but it was interesting times. I started hacking by writing the session handling, because, well, that's where Zebra sucked most, and at the point where I had a daemon that could establish sessions to other BGP speakers, do the parameter negotiation, and keep the session alive by sending the keepalives while ignoring all payloads, like ignoring all routing data, I started to show the prototype to a couple of people, and there was one guy at that time without an OpenBSD account, Claudio, who was interested and joined, so we kept hacking on that outside the OpenBSD tree. In December of the same year, we imported the almost-working-at-that-point BGPD into the OpenBSD tree, and at the same time imported Claudio as well, which I think was a smart move. And, well, so, the protocol. Couldn't argue with you all. Anyway. The Border Gateway Protocol, BGP, is defined in an RFC which only has two major bugs — not the protocol, the RFC. BGP is used between ISPs, well, and for spamd synchronization now, but the intent is that ISPs talk BGP to each other to inform each other which networks are reachable through them. So everybody tells his neighbor, hello, if you want to reach 10 slash 8, send it my way. And they'll announce it on to their peers, upstream, downstream. Since it would not scale to announce or look at all the networks individually, they're summarized into autonomous systems, and typically one ISP is one AS. Some big ones are more than one, but this is typically the result of mergers or Canadian worships. In BGP, you're not looking at routes in the IP sense, really. It doesn't look like a traceroute, because that would be too complicated and too static. You describe the path by just describing the ASes, the autonomous systems, that you pass to reach a certain destination network.
The BGP speakers typically announce the networks that are directly connected to them — that's the smaller setups — or that they learned through some kind of internal routing protocol — that's the slightly bigger setups. Depending on configuration, you might also announce networks that you learned from your peers. An AS path is written by just writing the AS numbers from left to right. So if our destination is cvs.openbsd.org, that's in AS 22512, and from my network this is reached by first going through 174, then through 812, and then reaching those. Kind of easy. BGP is actually kind of simple, it only knows about four message types — it has been slightly extended later, but originally it was only four message types. There is open, which is sent right after you set up the TCP connection. The open messages contain things like: this is my AS number, these are my timer values — I expect a keepalive every so often, and I'll drop the connection after 90 seconds without seeing a keepalive. We have keepalive, which just tells your peer: I'm still alive. We have updates, which contain the actual routing information, and we have notifications, which really are fatal errors. When you receive a notification from your peer, you have to drop the connection and drop all the routes you learned from him.
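For reference, those four message types and the fixed 19-byte message header look like this in C. The numeric type codes come from RFC 4271; the struct and names below are my own sketch, not the bgpd source.

#include <stdint.h>

#define BGP_MARKER_LEN   16
#define BGP_HEADER_LEN   19      /* marker + 2-byte length + 1-byte type */
#define BGP_PORT         179

enum bgp_msg_type {
	BGP_OPEN         = 1,    /* session parameters: AS number, timers */
	BGP_UPDATE       = 2,    /* the actual routing information */
	BGP_NOTIFICATION = 3,    /* fatal error, tears down the session */
	BGP_KEEPALIVE    = 4     /* "I am still alive" */
};

struct bgp_msg_hdr {
	uint8_t  marker[BGP_MARKER_LEN]; /* all ones */
	uint16_t len;                    /* total message length, network byte order */
	uint8_t  type;                   /* one of enum bgp_msg_type */
} __attribute__((packed));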
So, designing bgpd: for some interesting reason, I thought threads were not an option. So I went for three processes. One of them is the session engine, which does nothing else but making sure that the goddamn sessions stay up and the keepalives are sent in order, and to make sure that this session engine never gets too busy, it will never ever look at the update messages — it just takes those and passes them on to a separate process which deals with the actual payload, the content, the routes, what BGP is all about. I called this the route decision engine. That's the one that takes up all your memory, because it holds the BGP tables, and that one is the one that decides which of the paths it learned to a given destination is best. The third process is the parent process, which is the one that talks to the kernel; it forks off the others. Yeah, that's what it mostly is, but we'll get to that later. That's the model — this is an awesome picture I made ten years ago. I think it only has one mistake. So the master process, the parent process, runs as root. It needs root permissions to adjust the kernel routing table — surprise, as a regular user you're not allowed to do that. It needs root to do IPsec and TCP MD5 to protect the sessions; we'll get to that later. It forks off the two other processes, the processes talk to each other over socketpairs, and the master passes the configuration information to the other processes — so it parses the config file and sends the config over. The session engine talks to all the peers and passes the actual routing updates on to the route decision engine, and of course it has to inform the route decision engine when a peer goes away, so that goes over too. The route decision engine does all the magic of deciding which route is best, and eventually feeds that to the master process. Now of course you don't want to feed the master process a route that is unreachable for some reason, so it asks the master process to validate that the IP next hop — that's an IP address, the gateway — is actually reachable. The master knows the kernel routing table and verifies we can reach it. BGPD employs something that we do all over the place on OpenBSD, especially today: the principle of least privilege — run everything with the lowest privileges that are really required. The route decision engine doesn't need anything special, so it runs as the unprivileged user _bgpd and chroots to /var/empty, which, as the name indicates, is empty. The session engine needs root to bind to port 179; we will get to how we worked around that later. The parent needs root, there's no way to work around that. As said, to modify the kernel routing table you need root, and this is not just to open the routing socket — this is checked on each and every routing update. It also needs root for the IPsec, obviously. As said, to bind to a low port you need root permissions, and since we don't want the session engine, which after all is the network-facing part of BGPD, to run as root, we don't have the session engine opening that socket — we have the parent open it and then pass the file descriptor over to the session engine. The parent has to keep track of which file descriptors the session engine has opened, because otherwise you try to open them twice, and that leads to very interesting results that took a while to get right. And now that the session engine does not have to do anything special anymore, we can happily run that as _bgpd as well — an unprivileged user, of course — and chroot it to /var/empty as well. So, I'm getting back to threads here — we don't want them — but if we want to have one process talking to a lot of peers, we obviously cannot go for the classic way of dealing with sockets, because they block, and we will not read from the others when one is blocking. So the obvious solution is to go for non-blocking sockets, which sounds very easy but has consequences. When you call, for example, write or sendmsg — it doesn't really matter here, anything that writes to the socket; the same is true for reads, but let's use writes as an example — when you call write, sendmsg and the like on a blocking socket and it cannot get rid of the entire payload at once, it will block until it's done, and when the syscall returns, you know that all data has been written out, or an error has occurred. There's no other way. On a non-blocking socket, the syscall returns as soon as it would block on a blocking socket, which can mean that it wrote half of the payload out, and the other half you have to retry later. So you have to do all the buffer management yourself.
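A minimal sketch of that partial-write bookkeeping — just the idea, not the OpenBSD buffer API itself: remember how much of the pending payload has already been written, and retry the rest once poll() reports the socket writable again.

#include <sys/types.h>
#include <errno.h>
#include <stddef.h>
#include <unistd.h>

struct outbuf {
	unsigned char	*data;	/* pending bytes, owned by the caller */
	size_t		 len;	/* total bytes queued */
	size_t		 off;	/* bytes already written */
};

/* Try to flush; returns 1 when done, 0 when more is pending, -1 on error. */
int
outbuf_flush(int fd, struct outbuf *b)
{
	while (b->off < b->len) {
		ssize_t n = write(fd, b->data + b->off, b->len - b->off);
		if (n == -1) {
			if (errno == EAGAIN || errno == EINTR)
				return (0);	/* retry when writable again */
			return (-1);		/* real error */
		}
		b->off += (size_t)n;
	}
	return (1);
}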
For that, I wrote a framework which I called imsg, for internal messaging, on top of a buffer API which hides the "I have to write the remaining 15 bytes later". It hides all the complications of dealing with non-blocking sockets. This really, really paid out. I just did this because it felt right. However, as it later turned out, this was very useful to many, many other daemons that we wrote later in OpenBSD. There's a list here which is certainly incomplete — these are the ones using imsg in OpenBSD which were easy and obvious to find — and I know of a couple of other projects outside OpenBSD using the framework now. So that was a good idea. Lesson learned: proper abstraction, proper API — it might get reused. The internal messaging, of course, is a core component in any form of privilege separation. You have multiple processes that need to talk to each other somehow, and instead of reinventing the wheel all the time, do it once and do it right. Today, BGPD has 66 different message types, so it's very heavy on messaging. And the imsg framework, initially planned just for the communication between the three processes — well, it turned out to be pretty easy to just run this over different kinds of sockets, like a TCP connection or a UNIX domain socket. So bgpctl, which, as the name indicates, is a separate little control program, talks to BGPD using exactly the same framework. You could even talk TCP to a different machine. Is it in any way safe? No, it isn't. But that would be easy to fix. So, let's look at the session engine again. As said, it maintains the sessions, makes sure they never ever drop. Once the session is established, it takes care of the keepalive handling: it receives the keepalives from the neighbors, and of course, if it doesn't see the keepalives, it has to drop the connections. So it has to run a bunch of timers. It does not deal with routing or route messages at all — it just passes those on. It is very, very lightweight. Typically, it will consume less than 5 megabytes of RAM. If you see it consuming more memory, that means that one of your peers is very, very slow and it buffers a lot. That typically means you're talking to a Cisco. No kidding. I mean, if you're spending 200,000 euros on a router, you cannot expect to get a CPU worth more than $2. So, the route decision engine — that's where the magic happens. It maintains what in BGP lingo is called the routing information base. That's shitloads of tables; the two most important ones are the prefix tables — prefixes are the routes, 10 slash 8 is a prefix — and the AS paths that are associated with the prefixes. You can have one prefix and typically multiple AS paths to it, so that's kept in separate tables and heavily interlinked. The filters run there — you don't want to accept your downstream customer announcing to you the routes to Microsoft.com. Well, maybe you do. It has to take a decision on which of the paths it learned is the best for a given prefix, and it also has to generate the update messages to feed the routing information to your peers. The layout there is many, many tables which are heavily linked. The goal here is to avoid table walks. Once again, this is learning from mistakes others made — do I have to mention this again? In other commercial BGP implementations, it's very common to have a periodic job, like a cron job every 15 seconds or a minute or so, to walk the tables and look for changes. This, of course, is horribly inefficient, which is very nice when you work on a $2 CPU. We did not want to make the same mistake. This doesn't need any periodic table scanning at all. The decision process — keep in mind, when I say the best route, it's the best route according to that algorithm; it's not necessarily what we would consider the best route. The decision process always compares two routes, and it does that multiple times until it has figured out which is the best one. First step, you check whether that route is actually valid, like you can reach it — the IP next hop is reachable. Then there's a parameter called local preference — you can set this in your configuration, the bigger the better. If you don't manually interfere, those will be equal, which means we go on and compare the AS path length, shorter is better. These days, they are often equal because everybody has the same peering points, at least in Europe. Peering — you might have heard about this. Canadians. North Americans. I know. If they're equal, we are looking at the origin. Origin describes whether the route has been learned from a peer or is coming out of an internal routing protocol — the lower the better. Then there's something called the multi-exit discriminator. The idea here is: if two networks peer in multiple locations, you can use the MED to indicate, peer, drop me the traffic at this point and not at the other point where we are peering. Traditionally, this is only comparable between the same neighboring AS, but there have been creative uses of that, and most BGP implementations today allow you to compare MEDs between different ASes as well. External BGP is cooler than internal, so that your own BGP speakers talking to each other don't consider each other the best — of course, that's a loop, obviously. Weight is an extension we did. This is to tip a routing decision: for traffic engineering you want to make sure that your uplinks are loaded kind of equally, and you want a way to tip the decision towards one a little bit without taking the hard decision by setting the local pref. That's why we added weight — bigger is better — so when they are otherwise equal, you can influence which one is preferred. The rest becomes a little bit academic. The next step is route age — older is better, because that means it's more stable. This is an extension we did; it's actually off by default because it's non-standard. If we still could not take a decision, we are getting into bullshit land, because we have to take a decision. The lowest BGP ID wins. What's the BGP ID? It's the lowest IP address on the router, numerically. That's a very good factor to decide which route is best. Afterwards, you're looking at the cluster list — ignore it, it doesn't make sense at all. If that still doesn't lead to a decision, the numerically lowest peer IP address wins. If that doesn't work, it will spit out an error message and die, because that's unreachable. This is all about weight, which I already explained.
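The pairwise comparison at the heart of that decision process, as a sketch following the order just described. struct rt_path and its field names are invented for illustration; the real bgpd code handles more cases (weight, route age, cluster list) and far more detail than this.

#include <stdint.h>

struct rt_path {
	int      nexthop_valid;   /* next hop reachable? */
	uint32_t local_pref;      /* bigger is better */
	uint16_t aspath_len;      /* shorter is better */
	uint8_t  origin;          /* lower is better */
	uint32_t med;             /* lower is better (same neighbor AS) */
	int      ebgp;            /* eBGP beats iBGP */
	uint32_t bgp_id;          /* last-resort tie breaker */
};

/* Return >0 if a is better, <0 if b is better, 0 if we truly cannot tell. */
int
path_compare(const struct rt_path *a, const struct rt_path *b)
{
	if (a->nexthop_valid != b->nexthop_valid)
		return (a->nexthop_valid ? 1 : -1);
	if (a->local_pref != b->local_pref)
		return (a->local_pref > b->local_pref ? 1 : -1);
	if (a->aspath_len != b->aspath_len)
		return (a->aspath_len < b->aspath_len ? 1 : -1);
	if (a->origin != b->origin)
		return (a->origin < b->origin ? 1 : -1);
	if (a->med != b->med)
		return (a->med < b->med ? 1 : -1);
	if (a->ebgp != b->ebgp)
		return (a->ebgp ? 1 : -1);
	if (a->bgp_id != b->bgp_id)
		return (a->bgp_id < b->bgp_id ? 1 : -1);
	return (0);
}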
The parent process — that's responsible for talking to the kernel. It has to do the next hop validation. I keep harping on this because it is actually more complex than you'd think: once again, you don't want to install routes into your kernel where the next hop is not reachable while you have an alternate route in BGP. The carp problem got fixed with the carp bug. What got fixed? The carp bug. Which carp bug? So that might have been the case. That's half a year ago. Come on. I don't remember this, to be honest, but apparently. To do so, the parent process maintains its own copy of the kernel routing table. That means that on startup, using the sysctl interface, it fetches the entire kernel routing table, which hopefully is small — before BGP runs, it typically doesn't have that many entries. However, if you're in a big network and your internal routing protocol is up, it might not be all that small. On top of the routing table, it keeps the interface list, because you do not want to install routes pointing to an interface which is down. To keep that copy in sync — you don't want to call into the kernel every time you want to validate a next hop, of course, that's kind of expensive, that's why we have the copy — it's obvious that it has to be kept in sync. This is why we listen to the routing socket. The routing socket is the interface to the kernel routing table. Any change you do to the kernel routing table is a message on the routing socket. The kernel will relay that message to all other listeners on the routing socket, so everybody has a chance to stay informed about changes. That also means that if you manually install routes into the kernel routing table, we will notice and cope with that. Once again, I have to point this out because certain other vendors don't get this right. Certain other vendors. The same goes for the interfaces: interface link state changes and the like also get announced on the routing socket, and that table is kept in sync the very same way. It also means that, since we are looking at the link state, we will notice when you pull the cable — we don't have to wait for the keepalive timeout to kick in, we can remove those routes immediately and replace them with alternate ones. Once again, thanks to that, we don't have to walk the next hop table every couple of seconds and check whether they are still reachable, like certain other vendors.
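A minimal example of the routing-socket mechanism described above, on a BSD system: open a PF_ROUTE socket and read routing messages as the kernel reports changes. Real code would parse the attached sockaddrs and handle errors and partial reads properly; this only shows the shape of it.

#include <sys/types.h>
#include <sys/socket.h>
#include <net/route.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	union {
		struct rt_msghdr hdr;
		char space[2048];
	} buf;
	int s = socket(PF_ROUTE, SOCK_RAW, 0);

	if (s == -1)
		return (1);
	for (;;) {
		ssize_t n = read(s, &buf, sizeof(buf));
		if (n <= 0)
			break;
		switch (buf.hdr.rtm_type) {
		case RTM_ADD:
		case RTM_DELETE:
		case RTM_CHANGE:
			printf("kernel routing table changed (type %d)\n",
			    buf.hdr.rtm_type);
			break;
		case RTM_IFINFO:
			printf("interface or link state changed\n");
			break;
		}
	}
	close(s);
	return (0);
}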
This copy of the kernel routing table can be coupled and decoupled from the kernel. Why? Because I could. During the development phase it was really, really convenient. And if you run BGPD as a route server, or for something which is not about routing at all, like spamd blacklist distribution, there is just no point in inserting anything into the kernel routing table. The blacklist into the kernel routing table? No. So the next hop of the heading specific server, you will be sort of, what is the direct connection to that? No. You don't want that. So there are certain uses — that's one, and route servers are the other typical use — where you don't want to update the kernel routing table at all, so you need a way to run decoupled anyway. And since it was so easy to couple and decouple at runtime, I implemented that. It was surprisingly fast — under 10 seconds usually with a full table, which means half a million prefixes these days, on a not-too-beefy machine. And on an AMD64 machine, we just need about 32 megabytes for 400,000 routes, which I find quite impressive, once again because certain other vendors, who also apply arbitrary memory limits in their hardware, don't get this right, not remotely. There's a big debate right now on NANOG because several commercial routers will not be able to deal with more than half a million prefixes, which we are just nearing. So, it's kind of obvious that these TCP sessions between the BGP speakers are critical. If somebody manages to attack a TCP session and make it drop, you will delete all the routes you learned from that peer. If the attacker manages to do this with all your upstreams, you're offline. So this is not good. There was this — when was this? — this was around the same time, 2004, 2005, the RST attack on TCP, where an attacker could smuggle an RST onto an existing TCP connection by guessing the window correctly. This is especially bad for BGP, as I said, because it can take you offline. So you want to protect your TCP sessions. There is a standard for that, TCP MD5, which basically just adds an MD5 signature over the payload and a shared secret. And we even had code for that, from around 4.4BSD. This code was very impressive, because there's no way that ever worked. So after looking at it, the decision was clearly to just delete it, because fixing it was almost impossible — pointless, because it was horrible. I'm not saying the joy again. Damn, I did. The signatures in TCP MD5 are somewhat similar to IPsec AH, right?
So I actually implemented this as a kind of special algorithm in IPsec, using the very same userland-kernel interfaces, which also means that I had to add a PF_KEY interface to BGPD. PF_KEY is the standards-body-defined interface between userland and kernel to modify IPsec flows and keys. And well, since this has been designed by a committee — which means by people who never wrote a single line of code, or did so 25 years ago and unfortunately forgot about it — it's horrible. That was super painful. The slide claims I drank a lot of beer, but since I never drank again — I told you about this — that can't be true. One of the advantages of having gone through that pain: I already have the PF_KEY interface, and, well, TCP MD5 was nice, but we all know MD5 is not exactly the algorithm of choice today, right? So since you already have the interface, how about implementing real IPsec? Oh, before that, I have to talk about FreeBSD. FreeBSD, at the same time, instead of just taking our code, once again had to reinvent the wheel and write their own TCP MD5 implementation. This was brilliant. They did attach MD5 signatures to outgoing TCP packets, even correctly, but they never bothered to verify incoming ones. So what, again, was the point? You will not detect any modification. Well, your peer might detect that your packet had been tampered with, yes, but you cannot possibly figure out that the packets from the peer to you were tampered with. So, entirely pointless. Another completely unexpected way to screw this up has been demonstrated by a certain commercial vendor — surprise, it's Cisco. They do the MD5 signature check before doing all the cheap checks. You kind of want to check that the sequence number fits the window before starting to do the comparatively expensive MD5 check. Which, once again, is especially smart when you are shipping $2 CPUs in 200,000 euro routers. Apparently, at least at that time, only Juniper and OpenBSD got this right. I find this really surprising, because this is a kind of simple problem. But well, if you have to buy a commercial router, don't pick a Cisco.
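To make that ordering point concrete: do the cheap check (does the sequence number even fall in the window?) before paying for the MD5 computation. This is only an illustration — the function and parameter names are made up, the window test is simplified compared to real TCP, and md5_signature_ok() stands in for the actual RFC 2385 verification.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for the real TCP MD5 (RFC 2385) verification. */
bool md5_signature_ok(const void *segment, size_t len, const char *secret);

/* Unsigned arithmetic handles sequence-number wraparound. */
static bool
seq_in_window(uint32_t seq, uint32_t rcv_nxt, uint32_t rcv_wnd)
{
	return ((uint32_t)(seq - rcv_nxt) < rcv_wnd);
}

bool
accept_segment(const void *seg, size_t len, uint32_t seq,
    uint32_t rcv_nxt, uint32_t rcv_wnd, const char *secret)
{
	/* Cheap sanity check first: spoofed garbage is rejected early... */
	if (!seq_in_window(seq, rcv_nxt, rcv_wnd))
		return (false);
	/* ...and only plausible segments pay for the expensive MD5 check. */
	return (md5_signature_ok(seg, len, secret));
}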
So, as I said, since I already had to write the PF_KEY interface, doing real IPsec was kind of easy. First step: do it with static keys. Not too hard to do. All that needs to be done: BGPD has to load the SAs — which is pretty much the key information, the password, the key — into the kernel, and BGPD will set up the flows. So no manual configuration whatsoever, because BGPD has all the information: it knows the addresses of the peers and it knows which ports the connections are on, right. And it turns out that Juniper can do this as well, and we are perfectly compatible — that works fine. At that time, I actually had a session from Hamburg in Germany, where I am, to the Juniper lab somewhere in the US, because they were also interested in making sure this is compatible, and this worked really brilliantly and just fine. On Ciscos, you still can't do this. Perhaps you can buy some feature set to enable this, but who knows. Unfortunately — and I really hope for this to change, but it didn't happen — this is not being used in practice. In practice, most TCP sessions don't even have MD5 signatures. Most network admins think they don't have to protect those sessions, that they are safe because they are only on one network segment. Right. Nothing I can do about it. Sorry? One. You don't throw food. So, static keys are nice, but then you are using the same keys for years. So how about going for IKE, the Internet Key Exchange? At that time, isakmpd — because we are talking IKEv1. We have not extended this to IKEv2, have we? I don't think so. Do we have to? Isn't this transparent to BGPD? Yes, it is. No, yes, no. Almost. Should be. No, this should just work. Anyway — so isakmpd, the IKE key management daemon we already had, can do the keying for us and everything. Instead of the usual setup where it also takes care of the flows, BGPD does that, because, once again, it has all that information. BGPD just asks the kernel for an unused pair of SPIs, which are identifiers for those flows, and uses them, sets up the flows. And since isakmpd only does the keying, you don't have to get into the KeyNote nightmare, which is their policy engine thing. It's even better: isakmpd in that setup doesn't need any configuration. You just have to copy the generated key files over, obviously, and run isakmpd -KA. Then — you can tell that I gave this talk in Japan last — this is "cheers" in Japanese. Then you can go for beer while the other commercial router administrators still fight their setups. Without any form of protection on these TCP sessions, big TCP windows are very risky, because it becomes easy for an attacker to guess a sequence number that fits into your window. So in BGPD, we keep the TCP window at the default size, which I think is 16K. I think it's 16K. However, when there is a form of protection on the TCP session, like TCP MD5 or IPsec, we will raise that window to 64K. The conclusion is: IPsec makes your network faster. Since I also had some bits in PF, there is integration with PF in BGPD. As mentioned, BGP is very efficient at distributing prefixes — networks, or IP addresses; a slash 32 prefix is just an IP address. So this can be used in many creative ways. Our talks are really in the wrong order, unfortunately, but the most obvious, most prominent, and actually thought-about-from-the-beginning use is exactly what Peter talked about: distributing spamd blacklists. You're not adding this feature? The abstract said including spamd blacklists. And it was about five or seven years before I thought of it. You were slow. But... It's not new, because I didn't... I was about to add this; opposed to me, he did it. So, the PF integration with BGPD: BGPD can talk to PF and add prefixes it learns — those that the filter language matches — into a PF table. And in PF, those tables can be used for pretty much everything. You can use them to drop packets from any IP address or any network in that table. You can use them to redirect traffic to spamd, or redirect it to your real mail server, or redirect it to your favorite victim. You can also use that to do quality-of-service processing: so, all prefixes you learn from a specific peer get put into a very, very low bandwidth queue, and you tell your customers... for Netflix, if you want money from them? Or... Good idea! Netflix is a good example, or YouTube, or whatever. You can also use this to slow down a specific peer network and tell your customers, well, I told you he's slow. This is the tool for providing engine problems. Is it? The tool for providing engine problems? You can use it for... You can use it for... For better or bad? For better or bad? Yes. You can have me build a... You just supply the tools. So now you're an offspring? Well, there is no way to prevent that kind of bad use, and I honestly think there is no justification to even try to prevent that. I mean, we provide software, period.
So, route labels. I extended the kernel routing tables to be able to attach a label to a route. A label is a string, kind of — which really means it's just a bunch of bytes which you can attach to a route — and the kernel routing table and other daemons, other users, can see this label and act based on that extra information you attached to the route. So, for example, you can tag all routes that you receive from a specific neighbor AS with the name of that ISP, and in the kernel routing table you'll see — well, this is a good example of how, of course, they don't match, but... If you look at the kernel routing table, it'll print out the label for you, and in the very same way other daemons can see that label and act based on it — and I'm running out of time here. PF can also filter based on these labels, so you can drop all traffic from that ISP or AS, or put it into a queue which coincidentally is called "really slow". So combining this BGP information with PF is really, really powerful, because suddenly you have BGP information to take filtering decisions on, or QoS decisions, or whatever — PF is very, very powerful there. Just one example: you can limit the number of connections source IP addresses can open to your backend service based on which AS number, which ISP, they are coming from. So if you get all your attacks from a specific ISP in a third world country — the US — you can limit the number of connections each of the IP addresses in that network can open to 10, while the rest of the world can happily open as many as they want. When you are fighting large-scale distributed denial-of-service attacks, this is extremely powerful, and I made a lot of use of that. Carp. We made BGPD aware of the carp master/backup state, which is kind of simple, because that's the link state for a carp interface. Sessions that are marked as depending on a carp interface are forcefully held in idle state while the carp interface is not master — they won't even try to reach the neighbor. And when the carp interface becomes master, they will immediately try to connect to the neighbor, which means that the failover time decreases dramatically. It works the other way around as well, in that bgpd can influence the demotion counter in carp — this is to make sure that a freshly rebooted router doesn't become carp master before it actually has all the routes. IPv6, my favorite topic. Please just read the commit message; I have nothing to add. But I mean, seriously: 128 bits of addressing are not enough, so we are adding another 32. We have 160 bit addresses, but damn, in the sockaddr we only have space for 128 bits. But on those addresses where we are adding those 32 bits, we have two bytes that are zero by definition, so we can use those two bytes to store the lower two bytes of the extra scope ID. The upper 16 bits of the scope ID will always be zero, right? No, there is no such guarantee. Give me a break. Another example: this function in BGPD takes a netmask and gives you the prefix length. So 255.255.255.0 leads to 24, because that's a slash 24, right? This is the IPv4 version, which only has four lines of code, because the default route is a special case. I tried to show you the IPv6 version, but unfortunately it does not fit on the screen here. It's much longer and it's incomprehensible. To most of you, it looks like line noise, doesn't it? It does to me, almost.
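The IPv4 helper he describes, reconstructed from that description rather than copied from the bgpd source: take a netmask and count the leading one bits, so 255.255.255.0 gives 24 and 0.0.0.0, the default route, gives 0.

#include <stdint.h>
#include <netinet/in.h>
#include <arpa/inet.h>

uint8_t
mask2prefixlen(struct in_addr mask)
{
	uint32_t m = ntohl(mask.s_addr);
	uint8_t len = 0;

	while (m & 0x80000000U) {	/* count leading one bits */
		len++;
		m <<= 1;
	}
	return (len);
}

The IPv6 equivalent has to do the same over sixteen bytes of address, which is why, as he says, it no longer fits on a slide.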
So, filters. You don't want to accept random prefixes announced by peers. As said, you don't want Mr. Small Guy to announce to you that, hi, I'm Netflix. Well, maybe you do. Typically you don't. So, given my background, I tried to make the filter language as PF-like as possible. That means one big filter set with last match wins, which was a mistake in PF already. Opposed to all the other — not all, but opposed to most other implementations, specifically the commercial ones — the filter language is a designed language and not an accident that happened along the way. The filters are especially important on exchange points, because everybody peers with the route server, thus putting a lot of trust into the route server. On some exchange points, the filters are automatically generated from the IRR databases. Typically, on exchange points, they're either automatically generated from the IRR databases or they don't filter — there basically is no middle ground. We're still going... Three IXes in the calendar, we're still doing it by hand, at least three. There is no middle ground on, say, IXes. Because that's ridiculous and of course doesn't scale. I said, we're getting there. Yes. You're getting taught how the internet is designed. So that leads to very, very big filter sets. I don't have current numbers, but the DE-CIX filter set — DE-CIX is the exchange point in Frankfurt, being the biggest in the world — they were at something like 300,000 filter rules, which is kind of massive. So, since we went for this one sequential big rule set, you have to... since it's last match wins, you have to walk them all. And this is a performance problem. It would be much better to have smaller filter blocks which are then applied on a per-peer basis. Unfortunately, we didn't go that way. My desire to make this PF-like, in this case, really went way too far, and the filter performance really is our biggest problem. And this cost us a lot of deployments. When we came up with OpenBGPD, many, many exchange points immediately jumped onto the train and started deploying it, and a fair share of them have now moved on to something else because our filters are too slow. I was kind of surprised to learn that we still have 30% market share on exchange points. Market share. You get the idea. So, that was a mistake.
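To make the cost of that design choice concrete, here is the shape of a last-match-wins evaluation. The types are invented; only the loop matters: every update has to visit every rule, so a route server with a few hundred thousand rules and full tables of prefixes ends up doing billions of rule evaluations whenever a session bounces.

#include <stddef.h>

struct filter_rule {
	int (*matches)(const struct filter_rule *, const void *update);
	int  action;			/* e.g. allow or deny */
};

int
filter_decide(const struct filter_rule *rules, size_t nrules,
    const void *update, int default_action)
{
	int action = default_action;
	size_t i;

	/* Last match wins, so we cannot stop at the first hit. */
	for (i = 0; i < nrules; i++)
		if (rules[i].matches(&rules[i], update))
			action = rules[i].action;
	return (action);
}

Per-peer rule blocks, as the talk suggests, would shrink nrules dramatically for each update and make the cost proportional to the rules that can actually apply.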
I have to add something else, because everybody asks for portable versions of BGPD and blah, blah, blah. A Unix machine does not suddenly become a good router just by adding a BGP speaker. We did not just do the BGP thing in userland and be done with it — we extended the kernel, too. We extended the kernel routing table. We added OSPF and OSPF6 speakers. We have a DVMRPD for multicast routing, which is completely irrelevant in practice. And we even wrote — I didn't do it — we even wrote a RIPD for those who cannot let RIP go. RIP is a very old routing protocol which doesn't even know about netmasks; RIP2 does. Still, it's horrible and you don't want to run it. And as I mentioned, we had a lot of changes in the kernel. We added route priorities. We have multipath routes now, for load balancing. We have multiple routing tables and routing domains, which is a separate talk. We even have an MPLS stack and the associated label distribution protocol, even. I didn't do that either, but I absolutely welcome that. Another argument we are frequently confronted with is hardware versus software routers. So if you go to vendor C or vendor J and spend a lot of money, you get a little box and think you have a hardware router. This is in most cases just not true. Most of the so-called hardware routers are just PC-like architectures running software. There basically is no difference. To get a hardware router that does the packet processing in a separate data plane, separated from the control plane, you have to spend at least 100,000 euros — that's what, 150,000 Canadian. So up until somewhere around 10 gigabits, software routers are not really a problem. They reach their limit there, and you want some headroom, so if you actually handle 10 gigabits you want to run one of the big things, but most sites handle way, way less traffic. So for all those cases, a PC running OpenBSD and bgpd is fine. The limit, of course, keeps increasing, because the hardware is getting faster and we keep tuning stuff. So I suspect there's nobody in this room who would have a problem being limited to 10 gigabits. On the core, I suspect that might be wrong. The flexibility you get from running an OpenBSD box instead of one of the commercial routers is so much higher. The case that settles it for me alone is the ability to run tcpdump to diagnose problems. The other thing is that you can just install your favorite monitoring or route modification tools — I mean, it's a Unix box, you install whatever you want. So the flexibility you get is much better. I'm not sure that you know you can run tcpdump on a Juniper, right? On a Juniper, yes. On a Cisco, you can't. So, status. I'm running out of time anyway. The actual BGPD is rock solid. I have not seen it die in at least five years in my setup. The only reports of it dying we got were really... I can't even remember a single one. Not for normal use cases, certainly not. There are some that probably might work on a BGPD. That must be bizarre. Bizarre meaning IPv6. You know, even more bizarre. Yes? I don't know if it would be BGPD. I'm going to say it was about a year ago. One day, there was some big stink. I remember that. Right. I don't know exactly what it was. There was a specific version. Yes. And there was one alleged route being announced by someone in Italy. Not me. No, your cousin. Come on, everybody has a cousin in Italy. So, that's your cousin. And you're right. We did not handle this correctly either. All the others didn't either. I got away with it because, on the paths where this route came in, the others filtered it out for me already. But yes, you're right. So, there was one case. However, it's still rock solid. What I'm really trying to get down to: the reliability is at least on par with the commercial vendors. You know, one thing that saved me in that case was that I was running multiple versions of OpenBSD, with multiple versions of OpenBGPD on them. And one of them got taken out and the other one kept going. That's interesting, because that bug was there from day number one on. Maybe it was what route it was coming in with. I don't know what. But that did protect me. You somehow got lucky. Yeah. But this is actually a good point with OpenBSD and OpenBGPD: it's much easier to have two routers in a failover setup, which is hard to impossible with the commercial vendors. We are pretty much feature complete. That includes BGP MPLS VPNs and multiple RIBs, which is basically a kind of router virtualization. Don't make me explain — out of time. It is in use by many ISPs and exchange points worldwide. Actually, there's one very, very big ISP using it internally; they showed me their RIB with millions and millions of entries. I was very impressed. It's not just small ISPs.
It's also been used by the multinational giants. You all know I'm not allowed to tell names here. Of course, acquisition and operation is much, much, much cheaper than buying the commercial ones. And there is commercial support available. By the way, Paul is a consultant. He forgot to mention this in his presentation. So that's it. Any questions? Just a last slide. Twenty percent of the ISPs in the world use it? The new ISPs? Twenty percent of all ISPs in the world? Thirty-something, in the quarter... There's a burden of seventy-five percent. There's still some kind of share of that. Not that. IXPs, internet exchange points. There was a video of a talk on YouTube which was meeting with my bias period. Some people, in fact, they were talking about the deployments of the BGPD code. I know that it's widely used, but I don't have those numbers. There was a talk at the time about our directness to Swedish money. So I'm going to go back to the slide with the BGPD. I'm too far. Am I wrong? Yeah. I'm not wrong. Off the top of my head? I don't know. Anybody? How to put it, at least: from, what was it, the 2005 or 2006 time frame, it was six hundred thousand packets a second, I believe? On not exactly fast hardware of that time. It's kind of a fair reason we passed on hardware. I think so, but at the minimum, how much is that? Oh, of course. Of course. For routers, anything forwarding traffic, bandwidth is not all that interesting. It's packets per second. The point here is, you're pretty immune to that, as you can build something approaching a 10-gigabit router out of what you need. To use your words, that is a lie. No, sorry, I'm being quoted incorrectly. I said you want headroom. So you won't run this if you actually run 10 gigabits, but up until, whatever, five, six, seven, you'll be fine. The six hundred thousand is from a couple of years ago, that hardware. Since then, we did a number of changes that improved performance a lot. So today, with all these improvements and current hardware, it should be at least twice as much. I can believe it's about twice as much. And I'm certainly a year old gigabit now. Let's see what we are comparing with. A Cisco 12,000, which costs more than 100,000 euros, cannot handle 5,000 packets a second if they don't manage to hit their fast path. Exactly. Which is trivial to do when I'm the attacker. Trivial. Actually, all you need to do is to set a random IP option, which has to be ignored by the spec. They will not hit the fast path and they are fucked. You're using hardware rather than data. No, I'm using the same methods here that they do, to make the numbers somewhat comparable. So I was running OpenBSD, whatever it would be, as the core router for the numbers. And I was seeing well over 5,000 packets. Sure. I mean, there will be a difference of packet size if there is something above a gigabit. But everybody in here who doesn't know: the real issue is... Packets per second. Right? Yes, absolutely. Absolutely. The cost is per packet, not per byte. Absolutely. Now, if I'm giving the numbers for 64 byte packets and everybody else is giving the numbers for 1500, this... Yeah, I think you don't want to get their life. You don't have a life. So, you know... Lies, damn lies, benchmarks, marketing. Thank you. First, I was just going to ask, what do you use for the... testing and the... Since we are not modifying it all that much anymore, I actually don't... I have several other routers around. Last but not least, I just run it in production.
I'm not very afraid of trying something on the core routers either because it's redundant anyway, and there's seamless failover. So, good enough. Claudio has a little left as well. I think he forgot to take this off. That's a very good question. We've wanted to do this for several years and every time when I sit down and want to do it at some hackathon, there's something else that interrupts us and then we do something else. So I'm not making any prediction there. What should I do about it? Case by case. There's no universal answer to implement everything that's been proposed or to reject everything that's been proposed. It's really a case by case thing. We're feature complete with the stuff people ask for. The last big thing that we've been asked for that we did not have was the BGP MPLS VPN stuff, which we have for a long time now. I don't remember any feature request to implement some standards thing or the like over the last couple of years. I have one. Yes, you wanted to code that. But you wanted to code that. You said you would code that. Remember in Tokyo? That's what I might have thought about. Which means you're writing it. I know this approach. He attempts it. He sends a horrible patch and expects us to fix it. We apply the same technique. It usually works. Doesn't sound like a bad idea. Well, I'm on the other side of the fence. We provide the data about that. RPKI, that's basically signed route origin data. So to verify that the guy who's announcing the route actually is allowed to do so makes a lot of sense. Not for the moment. Because nobody's implementing it. It's like DNSSEC. No, it's different because DNSSEC completely sucks. Thank you. So... No, I mean, sorry. For DNSSEC, we want the feature, but the proposed solution is bullshit. I understand, but from my point of view, I maintain the IPv6 course for the RIPE NCC and the DNSSEC course. And I work on RPKI, so all the three things I do in my life are shit, basically. No. I said RPKI makes sense. What was the first? The DNSSEC, IPv6. Yeah, so one out of three is good. Refocus. So instead of handing this incomprehensible bullshit out, implement RPKI. It's not all that hard. It's just C code. Any more questions or can we go for beer yet? Oh, I don't drink. Can we go for soda? Tea. Okay, thank you then.
The Border Gateway Protocol, BGP, is used on the internet between ISPs to announce reachability of networks. Routers build their routing tables using this information. The global IPv4 routing table has about 470000 entries today. In 2004, I was upset enough with the implementation we were using back then, zebra, to start writing my own. After showing an early prototype, other developers jumped in and helped. Quickly thereafter we had a working BGP implementation that not only I have used ever since. We'll look at OpenBGPD's design and how it differs from other implementations, the frameworks established and later used for other purposes, and the lessons we learned over the last 10 years.
10.5446/15354 (DOI)
I'm going to take a kind of a hitchhike that of what we've been doing at Esri since I've been there for about two years. Previously it was CTO of GeoCommons where we were focusing on building a crowdsourced open geospatial repository in which people had to go and find data, scrape it, and contribute it essentially to us. We had pretty good results to that. To date I think there's over 300,000 data sets that have been crowdsourced. Obviously to varying level of qualities, maintenance, sustainability. And what we learned from that was that one, there's a huge drive for doing it. There's a lot of people that want to share data and have data made accessible and available in a lot of different formats. Two that the data on the web are crazy and wild and change format a lot. A lot of handwritten XML and JSON out there. It's surprising when you talk to OGC about that, like no, no, everyone uses an XSLT or verification document, no. They write their KML and PHP by hand and you hope you can parse it. And so it's pretty crazy and wild up there. And just over time it's, if you don't have the owner involved in sharing the data, it's just not going to be sustainable. So what we've been doing is trying to do a lot of things in Esri since we joined here is in terms of really making the platform that's used pretty much across the majority of government institutions and a lot of other corporate organizations make it more easy and accessible through a lot of different ways. So relevant to this panel I think in location tech is the open source aspect of it, which is definitely something people aren't used to when they hear about Esri. A lot of people are critical or skeptical of, but I'll tell you it's for real. And we're really changing how we work. Obviously there's a lot of things that have been changing at Esri. So there's a lot of reasons why there's a lot of meat behind this, but also, hey, hold us to what actually happens, not what we're saying. So we have four different initiatives. I'm not going to go through all of them here, but we are heading up open source initiative as well as better open standards. So the ones that you have to check the box, but also the ones that everyone actually uses because sometimes those are different things. Open data that I'll drive into a lot here and then a lot of open content. All of our education materials is all released under Creative Commons, share a like license or attribution I think only. A lot of our algorithms documentation are all openly accessible as well. A lot of other data and information we have out there have been accessible for years. And we're trying to really highlight and find out what other people want. So we're open to open source and just kind of get what's been going on here is first we've been doing open source for years. We've been waiting for hours at Esri, but not there were like two or three projects in source forage and code plex in places that weren't as popular today. So we converge everything to GitHub. And when we started two years ago in this particular initiative, there were zero projects in Esri's GitHub account. And there were maybe one or two people as part of the org. To date after, this is actually from about a month or two ago, there are 718 Esri engineers, Esri employees that are now part of our GitHub organization. In terms of they are, maybe they were on GitHub or we've kind of brought them to GitHub and expose them to what that means in terms of being able to collaboratively work on code in the public. 
We are also, we leverage GitHub internally as well for all of our core products as well. So we're really changing how we collaborate and making this really part of the way we do business. We have over 300 open source repositories, over 3000 forks, which is the beginning of getting into some of the metrics in terms of where we've done it. We've published it. That's great. But why is it more than a marketing exercise? How do we actually change how people feedback on this? And again, I won't dive into too much here, but it does underpin some of the things we're thinking about is how do we leverage tools and give tools to people to enable them to be flexible, to really solve the problem that they are an expert at and not worry about the rest of the things they don't want to deal with, right? Really focus on those things. For example, one of the first things we actually open source, which was kind of amazing was one of our core geometry engines. So you can go and download for free under an Apache license. Some is more open license than the JTS that was out there. A geometry engine, which you can now go and deploy across a thousand machines and never ever talk to us as a company again, because it's a really good, really fast core geometry engine because we want to enable people to go and explore doing big data analytics. What happens when they want to go and analyze agriculture regions across the world? We don't want them to have to buy thousands of licenses here. Take this geometry engine. We're even building Hive and other spatial query engines on top of it, Pig. And they're really easy for people to deploy this and then leverage either back in our platform or just take and go and build their own applications with it. Similarly, we've been building client side data transformation engines like Terraformer. We've been integrating into popular libraries like Leaflet. So if someone wants to use Leaflet instead, great. We can help you, we'll actually help you do that and help you work in what you're most comfortable using. Geoportal has been one of those that's been out there for several years, I think eight years. That's been open sourced in terms of being able to do metadata standards, CSW, all of the OGC tick boxes. That's an open source application that's been very popular and deployed pretty widely. And then Koop will actually dive into here, powers some of the magic behind what we're doing with open data. So this way to think about is this is the part that everyone has seen and usually trusts and pays for and gets trained up on and things like that. What we're doing now is we're open sourcing all of these pieces around it. We're letting people really go and build a lot of capability without ever talking to us. They can go and build almost completely open stacks or essentially open stacks here. They can have data processing engines to databases, to visualization engines, to even ways to manage your SSH keys across EC2. Most open source companies, when they go open source, open source things that support their core business. We're actually open sourcing parts of our core business to enable some of the domain expertise that exists out there. So why? Why are we doing this? What's really interesting is sometimes when I decompose what ESRI stands for and even the Environmental Systems Research Institute, it's really about enabling people to understand location geography, enabling them to understand that and improve their own lives. 
As Jack Dangermond said, and I was amazed when I found this quote, it's actually from Jack a number of years ago: technology is bringing people closer to their worlds and empowering them to define a future that reflects their values, hopes, and dreams. So we can talk and get all wonky about technology and formats and tools and things like that. But in the end, these people live on the street. It's where the kids will grow up. It's where they're worried about crime and air quality, house prices, transport, how they're getting to work every day, who their neighbors are. How do we empower them to have the most capability and understanding and really ownership of their space around them, how do they become active participating citizens, how do they become actively participating, engaging with science and climate change? How do we make open data part of the infrastructure? Like we provide a road. Like the city provides this basic infrastructure and doesn't presume what's going to go across that road but that the road's always there and the citizens and the businesses and the developers can trust that the road will always be there. And now I'm going to put different trucks and vans and trains and things like that along it and the city is going to maintain that for me. So that gets into what we're really trying to do with this open data initiative for Esri, and hopefully collaborate with other people. So it's great. So it's about government providing tools for their citizens. That's what I really think is valuable. But then why are you here as a business? Why are we all wearing jackets and talking about this as well? There's a greater potential here. So you're probably familiar with, if you're not, I highly recommend going and reading it, the recent McKinsey Global Institute report that came out I think last October which highlighted the potential for open data for business value realization. They highlighted six major sectors and about two sub sectors where open data is going to have a tremendous impact over the near future. So things like natural resources, health, education, transport, right? So a lot of these different sectors that we kind of know and can think about and kind of understand that yes, I can see how open data helps those, and then, relevant to everyone in this room at least, and it's not always the case when I talk to groups but you get it, that location is this common underpinning behind all of these sectors. How do you understand how they're related to one another? What are the correlations and potentially causations in terms of how you address things like health through better transportation, through better education? How do you use natural resources to improve utilities, or there's another meeting just down the hallway about energy efficiency, right? And how can our concepts and technology all together as a community enable those businesses to operate better? Because again, what's really interesting is this is a three trillion dollar market per year globally. The ability to leverage open data to improve business processes, to make better decisions, to be more efficient, to engage citizens to live better lives is a huge business opportunity as much as it is about just improving overall society. So how do we build that public square? How do we help governments and organizations to go and build a public square in which people will come and live and hang out, right? It's not about saying
I have to come into my building like a museum or some other esoteric thing where you have to go to, but how does it become part of what I walk by? How do I get people to hang out there so it tracks more businesses to talk to the citizens? How to get more citizens to show up? How do we then encourage more infrastructure to build that out? And really what it's not a hard thing to do, it's being part of the web. This is actually going back when in the beginning of the whole gov2o, egov hype is actually what Gartner cited it as in 2007, in which they said web-oriented architectures have much greater potential effect on the ability to transform government than anything else in the web2o world. It's not necessarily bi-directional. It's not about rounded corners or bubbly gradients or things like that. It's about just leveraging what the web is very good at. The web came out of a research institute, which is meant to enable the open sharing of science data. That's what really kicked off the web, and that's really what the underpinning and what we have the opportunity to do. So people talk about not having silos. It's really actually not about just not having silos, but actually leveraging the fabric of the infrastructure of the web that's out there. So we thought about this in four different ways in terms of the core principles we always keep thinking about. It's making data discoverable. People usually are going to go to their most popular search engine to find data. They're not going to come to your site or your portal or your organization. They're going to say water quality in the Chesapeake, right? In a search engine, and hopefully these things show up, it's not just a way to go and get some advertising or some water testing kit. They can actually go learn about what's really going on and explore that data and information. It can make it exploreable. So once they find it, they can actually do something with it. It's not just here's a link. So go drop into your favorite NetCDF desktop app. I don't know what that means. I'm on the web, but let me leverage the web to play with and explore this data right away. Make it accessible in terms of easily understandable, in terms of terminology and vocabulary. People don't know what things like NetCDF are. They probably don't know what services even really are in mean. That means like 311 maybe. Maybe it means like the service at my local restaurant. Let's talk to them in vernacular. They understand. They're experts in what they do, not in what we build. So let's make sure our technology speaks to them and the way they work. And then make them open formats and then also make it collaborative. Make these things bidirectional. Engage collaboratively here. Citizens are really want to engage. We've had a lot of proof points where open collaboration feedback systems around geospatial data have been extremely successful. Something we've talked about a lot is make it one click downloadable. None of these forms and disclaimers and give me your email address click, go in common formats. And then as I mentioned, creating positive feedback loops. So open government, open data has not necessarily a new thing. It's been going on for quite a while. In fact, DC, where we're in, had one of the first pretty good web open data catalogs. You go to data.dc.gov. Really easy to remember URL. I could always just type this in. I'd have to remember something weird and esoteric. Just getting the lists of data was great. Drove into it a lot of different formats. 
But the problem is, and if you can see the screen at all, is that while the top data sets are on crime, some of the live data sets are well maintained. The ones that are not live have become dramatically drastically out of date. Last update of 2012, 2011, 2007, for human service locations. I have a pretty good feeling that something's happened in the city. DC's changing pretty dramatically in a positive way that the data is out of date and inaccurate at this point. But the problem is they've separated themselves from how they publish the data, from how they maintain the data. And where they actually maintain the open data with their services, for example, the crime data, it's very up to date. And when their database went down, citizens and developers knew. And so it's an example of how do we make those things live to those core connection points. So coming back to this project I briefly hovered over, this is a project called Coup, where we have our GSCerver, in our case, where it's being deployed pretty widely again across all levels of government. And that's great. And people can already leverage that. But we also wanted a way to do simple export and transforms of that data into different formats, into different common formats, as well as leverage connecting to other types of data sources, which I'll highlight in a second. I don't see Ben here, but I'm going to steal a little bit of his thunder in terms of how we can even connect to GitHub. The premise here is don't move your data. Keep your data where it is, maintain it the way you maintain it, and then we'll help you share it out. So we do this as we leverage the fact that organizations have their databases. And then we can leverage in a click of a button, deploy an open data application, which exposes out to the web in web aligned ways. In which citizens can then come and do a basic search on the web. They can, businesses can explore. Database sets and developers can discover APIs. And people can operate all around the same core data without moving the data around constantly, just where it lives and resides, exposing out all these different interfaces depending on who the user is. So I'll show you just kind of a quick highlight of what this actually looks like. That's always a lot more fun. So we've been working with a number of institutions to start already sharing their data that's powered here. So this is one we've highlighted a lot. It's data-driven Detroit. It's a small NGO that wanted to share data to help a city, which is, to be honest, having a little bit of trouble helping itself. A lot of very empowered citizens are trying to work and improve the government, but it really have to take their own ownership. This is an organization that can do that very easily. They can go and spin up a site. You can go and search for schools and education data sets. So I can go and find different data they've uploaded. So I can go, for example, and dive into Detroit schools, where they're all located, explore the data. So things we talk about like metadata. We don't really, we have one link here, says metadata, but just kind of saying who provided it? When did they provide it? Very common vernacular for these things. If you don't like this data set, here's other recommended data sets. Kind of apply the shopping cart concept. People are familiar with Amazon. If you like schools, you might also like parks. But make it easy for people to go and grab, click download, grab a spreadsheet, and they're done. Or they can also grab KML files. 
They don't pull it into their favorite 3D globe viewer. Shapefiles are popular. And then API is GeoJSON. People love the D3 now. Or GeoService, if they want to go and do other GeoIS and processing workflows. So that's cool, but that's great. That's a little bit kind of like a glorified FTP. How do we still make things explorable? So people can pull up a table view. We found out very, very quickly that people like to view tables of things. They get it. In fact, it's different, maybe, than an alphabet, but you know what? The map might be optional. Just because it's geographic doesn't mean it's cartographic. What I need to say is that people can understand this. They can look at the school data, the spreadsheet data. They can go and look through different zip codes. So for example, I want to just download to, let me just zoom into this one, right, this one zip code. I now have eight schools here in this one area in downtown Detroit. I can view just these features. I can then go and download just this data set. So right away, people are able to explore the data without having to leave the browser, without having to even know Excel, let alone the GIS. Download that out and now go run with it. And even possibly subscribe to it if there's any updates to these data sets. They can also go and embed this into Facebook and Flickr and to Twitter. So embedding it to tools they're already using. And then kind of behind the scenes, some things we're doing that are subtle and really interesting to explore with other people is how each feature itself has a durable URL. So what does it mean when now the city is maintaining the canonical database of all schools that has a durable URL and now everyone can point to and link to? What might that mean to in terms of maybe the emergence of the semantic web? So that's just a few examples of things we're doing. We're seeing organizations like the Forest Service use this. We're seeing DC use this. So this is an example of instead of those static FTP sites, is the ability for them to go and quickly maintain, deploy this, and over time keep updating all of these data sets. So it's not just about uploading FTP sites, but it's about them maintaining and working the data workflows they understand and do, and then making these available without them having to do really any more work, which is pretty amazing. Maryland and then this is an example where I mentioned about right now, this is assuming everyone's using ArcGIS, which is a lot, and a lot of data is maintained there, but not all data is maintained there. And just as much as I said about not taking data out of the database, if it's already existing and has a workflow somewhere great, keep it there. Keep those workflows. Keep the data timely and up to date and accurate. So City of Philadelphia is maintaining some of their data in GitHub. Awesome. Cool. If they got a workflow, cool. So we've used this KOOP project, can actually link to GitHub and pull this data out and bring it in as a service. So now I'm linking from instead of this GeoJSON file or the CSV file, which I'll leave behind to show some of the really awesome tools GitHub's done. I can now bring it to the same interface. I can go and download this and go back out and I was a spreadsheet KML shapefile. So it does this conversion. So maybe they uploaded it as GeoJSON or something else, but I want other formats back out. 
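To make the one-dataset-many-formats idea concrete, a developer-facing view of such a site might look like the hypothetical commands below; the URLs are invented purely for illustration and do not point at a real deployment.

    # the same dataset, pulled down in whichever format the consumer asks for
    curl -O https://data.example.gov/datasets/schools.csv
    curl -O https://data.example.gov/datasets/schools.kml
    curl -O https://data.example.gov/datasets/schools.geojson
    # a durable URL for a single feature might look like this
    curl https://data.example.gov/datasets/schools/features/42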
People that want to go and view a table and have these other ways to explore the data, filter it down, do different things with it, can bring this in, keeps the data where it's at and pulls it out live. And that's kind of what this KOOP project does. It just really is a big web proxy for APIs and it's all open source. So go crazy with it. So that's just a few examples. I think I have time in my quick 15 minutes. So my point is here is stop moving the data. It's burdensome, it's unsustainable, it's going to get broken. Instead, play with the pieces. See what you can build by pulling them together and all these loosely coupled pieces, pull them together, build things and share them freely. So thank you, everyone. I look forward to more Q&A. Thank you.
Government agencies are responsible for managing updated and authoritative data as part of their operations. Recently 'open government' has become a popular movement that promises transparency, engagement, and efficiency for agencies and citizens. Increasingly, open government mandates are also requiring organizations to make their data freely available through the web. GIS is already at the heart of the majority of data management in government and also accounts for most of the open data that is intended for release. ArcGIS Open Data leverages your existing infrastructure to make your data discoverable, explorable, and accessible without any migration or additional workload. By aligning government open data to existing workflows as well as the architecture of the web, it's possible to readily fulfill these mandates and achieve agency vision.
10.5446/15346 (DOI)
We are a DC and San Francisco based company in the geo space. I'd like to talk about how we at Mapbox leverage open source and open data to build a business. Open source is winning. This is a map that we launched recently. It's a, we call it Mapbox Outdoors. It's a map that is specifically designed for outdoors activities like hiking, walking, running, cycling. That's a map that we've built with open source tools on open data. As you look at this map closer, you'll see, I don't know, this is a little bit harder to see here. You see an incredible detail here on this map, layers with contour lines down to the highest zoom levels, labeled contour lines that we've sourced from a variety of open data sources like OpenStreetMap, but also a variety of government data sources, especially when it comes to terrain data that we're using. You see here details like ski lifts, for people that are using ski applications. And the map is, it's all of our maps, global down to the highest zoom levels. This is really the type of data and the type of technology that allows us to go up against some of our biggest competitors. And it makes us actually look incredibly good in comparison to them. As you can see in these side-by-side comparisons, this would be, sorry, you were here. This would be like Hawaii here, looking at the Apple and Mapbox maps. So we're a company built entirely on open source. We like to think of ourselves as a Lego company, as a company that creates those building blocks that developers and designers can use to assemble their own maps and map-based applications. But I also said we are entirely open source. So how do we do this? How does this work? Really what it comes down to for us at Mapbox and the way we work as a business is that we put open source software out there for folks to use free of charge, build communities around it, but we also provide services in that space that do some of the hard things like serving maps. So some of our clients would be Foursquare, who's using a Mapbox-powered OpenStreetMap-based map of the world for showing check-in locations, you know, like here at the ski station, what we call the ski station on Dupont Circle. I've checked in from like this weekend, or the Financial Times here using our satellite base layers based on NASA and DigitalGlobe data for showing where Iran's hiding the nukes, right? This would be here an example of how our geocoding services power Evernote's data, where Evernote's application, or here's a quick example of how the San Francisco based car2go-type service for scooters is using our directions API. This is all again geared towards developers and designers. You go to mapbox.com slash developers and really what you see there is a lot of really good documentation that gets you started fast on using our tools. So we're really geared towards this type of audience and this is also why we're looking at our documentation as a type of a product. So back to my starting point, how are we at Mapbox using open source and open data for building a business? There's a couple of themes and a couple of aspects that are recurring for us that I'd like to quickly highlight. We use open source at Mapbox very specifically to level the playing field. We're using specifically OpenStreetMap for sourcing global base data that is usually very expensive.
Some of our competitors send cars around the world and have driven so far, you know, 5 million or more kilometers or miles to actually capture the same type of data. So it's a similar quality of data that we have here through a project like OpenStreetMap, the Wikipedia of maps, collected in a much more efficient manner. Over 20,000 people log into OpenStreetMap every month and update the map. What you're seeing here is a quick recording of what is actually live data, a live dashboard showing OpenStreetMap editors updating the map as we speak. As Mapbox, we make very specific investments in this space and actually put a lot of resources towards that. As you can see, for instance, in this quick animated gif of the OpenStreetMap editor that we have funded and built as a team together with the OpenStreetMap community. Here would be another example of how we are opening up DigitalGlobe imagery to the OpenStreetMap community specifically for tracing. Another recurring theme for us is winning the thought space. A good example of that would be our TileMill open source map design studio, which is again a free of charge download from mapbox.com, and you can start designing maps like this one here that Pinterest did with the Stamen team for their base maps for pins, for pin boards, and really this is about changing the space and how map design works and how people interact with map design. And that is also connected to details like the exchange formats that we developed for actually exchanging styles and map data between organizations. So this is sort of establishing itself as the go-to tile exchange format now. And again, this is because it is out there as an open format, but it's also because we have open source software around it that supports that format, and that gets adopted much quicker than if this type of software was proprietary software that was sold for a license. Next up, collaboration. That's sort of a no-brainer aspect of open source, really. As an example I want to offer here real quick Leaflet JS that we work with. This is a JavaScript library that powers a lot of the zoomable and pannable maps out there. And Leaflet is a project that we have actually taken over from the open source space, that we have sort of adopted by hiring the key contributor of Leaflet, Vladimir. And an aspect that I want to highlight here is how Leaflet is actually a community of contributors. In this case here, I showed you before this little snippet of Evernote's application, and you can see here how the markers cluster nicely, and that's a plugin called the MarkerCluster plugin, not one that Mapbox built; it was built by this company called SmartRak, that did this for better representing all the assets that they are tracking on maps. We didn't need, you know, a governing body or coordination body for creating this type of open source collaboration. It sort of happened by people scratching their own itch, and this is very, very powerful, as this is collaboration out there in the open without the need for any NDAs or larger business agreements. Lastly, we are very clear and level-headed about, you know, where open makes sense.
Here's a quick example of how we're combining open data and proprietary data, as a demo that we've built together with the microsatellite company Skybox, and you can see here in this quick animated gif how we show Skybox video from space together with OpenStreetMap data as an overlay for contextualizing what I'm actually looking at. I'm looking here, in this specific case, at an airport in China, and I'm seeing a couple of planes taking off here and I'm seeing traffic in that area. We're looking similarly at the software that we're building for hosting our maps, so all the glue code that we create for cobbling these open source components together that make up our online map services. This is proprietary code that we deploy on our web platforms. So this is really where I want to leave it off with you. I just wanted to quickly offer this as sort of a conversation starter here and I'm looking forward to questions later.
Open source and open data has been at the core of Mapbox's fast launch from its beginnings three years ago to powering maps for customers like Foursquare, Pinterest or the Financial Times. Alex Barth will explain Mapbox' open source strategy and show how Mapbox leverages open source software and data to launch new mapping services fast.
10.5446/15343 (DOI)
Oh, hello everybody. Welcome to my talk about transitioning mechanisms. Just before we start, how many of you have IPv6 in their network already? Wow. Okay. That's good. That's good. Impressive. How many of you have tried one of the transitioning mechanisms? How many of you have a Hurricane Electric tunnel? Yeah, so you tried one. Okay, so basic disclaimer before we start. I work at the RIPE NCC. I'm a trainer. But although you can see from the slides that there's the NCC logo, this is not really official from the NCC, because we just give you, generally in our courses, an introduction on the mechanisms. We don't go into detail about that. So I decided at one point that it would have been good to be able to go a little deeper into that. I got permission to come here and talk about this and use these slides. But what I'm going to say, it's not really completely official from the NCC. How many of you know what the NCC does? RIPE NCC. Only two. How many of you know what ARIN does? A lot less. A lot less. I know. In fact, they don't have a training department. They don't do meetings as big as we do. There was the RIPE meeting this week with about 500 people, operators, content providers, everything. Whoops. So, well, you know what's happening in IPv4. IANA, which stands on top of ARIN, RIPE, APNIC, and all of them, ran out of addresses in 2011. There was a nice ceremony. Of course, they decided to do it in Miami, because you don't do that in Alaska. We got a nice, the RIPE NCC got a nice gadget, 185 slash 8, to bring home. So there's a picture of the ceremony with all the RIRs. Everybody got his own last slash 8. And at that point, runouts started. So first was APNIC. APNIC, Asia Pacific, they started allocating from the last slash 8. Then in 2012, in September, we started allocating from the last slash 8. And who do you think is coming next? We still have three. ARIN. Well, before ARIN, actually, LACNIC; and ARIN only recently, in April this year. The problem about ARIN and LACNIC is that they don't have policies specific for dealing with the last slash 8. So far, the only one with more than a slash 8 available is AFRINIC. So there are a lot of companies popping up in Africa now to get address space there. The only problem is that you have to announce it from that region. But how is this going to evolve? Policies are in place. For example, in the RIPE region, in Europe, you can only get, if you're an operator and if you're a newcomer, you can only get a slash 22. That's it. No more, no less. You get 1,000 IP addresses, independently of you being Deutsche Telekom with 100 million subscribers or the latest provider that just popped up, you get 1,000 addresses. So there's not much space. And there is a policy, actually, in ARIN land that says there's some space that's set apart for dealing with the late newcomers, for transitioning. But out of that space, you will only be able to get small chunks like a slash 28, a slash 29, a slash 25. And as you might know, these are not routable on the internet because people filter them out. People filter down to the slash 24. So what can you do at this point if you need more space? You can do transfers. So you can... I'll give you a second to put this on here. Sorry. This is a double recording thing. Okay. I feel important. Whoa, with two microphones. You're connected objects, though. Well, we already are connected objects all the time. Okay. So you can transfer address space, especially if you live in this region.
You can transfer address space also from the Asia Pacific region. But this comes at a cost. How many of you know the basic cost, the average cost of an IPv4 address right now if you want to transfer it? It's around 10 euros. 10 to 12. 10 to 12 euros per address. So it's a costly operation. And what you're doing at that point is just delaying the change. So what you... It's just putting this off for later, while still spending money. One second. Let me fix this. One on top of the other. Do they... Okay. So the clip doesn't work here. Ah, yeah. The clip is broken. Okay. I'll just keep it in mind. Can do this. Sort of. So we're just delaying the change. We're just postponing it. Well, we've been doing this for years, trying to come up with policies to show the need for addresses and coming up with smaller and smaller time frames for which the RIRs will hand out addresses and so on. But this is just delaying the change for some more time. So why do we want to get into IPv6? Well, you all know these numbers. But let me ask you a question. How many of you have a smartphone here? How many of you have two? How many have three? Well, no, I don't have three. We get some... In some courses, we get people at the fourth. Like, how many of you have four? Yes, they're still showing up their hands. I still have... I have four smartphones. We are getting to the Internet of Things. We have heard of projects. There's the German government. They want to put sensors on the trash bins so that trucks don't pass by a trash bin if it's not full. Or Porsche has sensors already on the cars. So if the car is going to break, you just get a notification. Just go to the repair center, because the car is going to break in 15 minutes. Or the German... It's going to be passed. Sorry. Or the German railway system, producing now trains that need 32 addresses per seat. And they have 400 seats on average for every train. And they have to be unique in the whole system. Yeah, 32 addresses per seat for the comfort services, for the video, for the climate, for all the different systems you can have per seat. So imagine this is going to need lots and lots and lots of addresses. And we have to find a way to fit this in the space we have. So we have, though, in v6... How many of you know about the multiple addresses and the different kinds in v6? So basically what we're going to use is global unicast. But we have some parts that are already reserved. We just point out three here in the scheme: 6to4, Teredo, NAT64, for the transitioning mechanisms. So basically these are parts of address space that for now are reserved for being used for those. And there is also, well, unique local... Just let me give you an introduction about it. It's meant for hosts that are not going to be connected to the internet. It's like private address space, but since there's no NAT, this is only for... for hosts that are not going to be routed through the internet. And link local, if you have a laptop in front of you with any flavor of Unix, you will just have one of those on any interface you have if you enable v6. So we will concentrate mostly on these three during the talk. So what are we trying to solve with transitioning mechanisms? As I said, APNIC was the first RIR to run out of addresses. So in that region, there are already some islands where some people are only implementing v6 in their networks because they can't get addresses.
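For reference, the reserved ranges the slide points at are the standard allocations; the list below is a summary added for the reader, not part of the talk:

    2000::/3      global unicast, the space the RIRs hand out
    fc00::/7      unique local addresses, not routed on the public internet
    fe80::/10     link-local, auto-configured on every IPv6-enabled interface
    2002::/16     reserved for 6to4
    2001::/32     reserved for Teredo
    64:ff9b::/96  the well-known NAT64 translation prefix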
So what we have to do is find a way to allow our users, or the users of our customers, to access those sites, to access those services, while keeping the actual address space running. How can we do that? We can do it with tunneling or translating addresses. So let's see the first one. 6in4, first mechanism: tunnel brokers, Hurricane Electric, SixXS, we all know about them. And I saw that a lot of you already used it and implemented it. So it's stable and predictable, but if you have a huge residential market or if you have different networks, if you have many hosts to manage, it's not practically feasible because you would have to set it up manually on each and every one of those hosts. But it works by encapsulating v6 into v4. So you would have an IPv6-capable host that talks over an IPv4-only infrastructure. You get to a tunnel server run by a tunnel broker and you get on the IPv6 Internet. So your IPv6 packet gets encapsulated into v4 here, goes over the Internet, gets decapsulated here and goes v6, and comes back still through the tunnel server. This works. A lot of people use it every day, but it's not scalable and, well, it gives you problems with the MTU, but it's static. So there's a second way of using tunnels, that's 6to4. 6to4 is exactly the same concept, but the address space that we use for establishing the tunnels is anycasted. There's a special part of the address space in v4 and v6 dedicated to 6to4. That's anycasted to the Internet. So basically your CPE or your host brings up a tunnel to one of the tunnel servers operated by somebody on the Internet. We like to say it could be the NSA wanting to intercept your traffic, but anybody who can speak BGP in their network could run one of the 6to4 tunnel servers. What is the problem here, though? I send out a packet, it gets encapsulated here, gets out from the anycasted tunnel server that's closest to me, gets on the Internet via IPv6, and now here we have a host somewhere, but the host at one point will send back an answer that gets actually to the closest anycasted tunnel server to him, to the host. So we have asymmetric routing most of the time using this mechanism, which is not really what we want, because we cannot really control what the flow is going to be. So for this reason, in France, a guy called Rémi Després created a transitioning mechanism. He invented it, and Free in France was the first provider to use it. Guess what this mechanism is called? It's 6RD, RD from Rémi Després, and then later it was reinterpreted as rapid deployment. So the concept is still the same, exactly the same. So we have a tunnel server, but in this case, this is run by the provider providing the service. So what we have is a hard-coded address in the CPE, so we use the public IPv4 address on the CPE. The CPE establishes a tunnel to the tunnel server here, and what they get is a slash 64, so a small address space, that gets delivered here over the tunnel. So the home user gets a slash 64, the first 32 bits are generally provider space, the second 32 bits are the actual IPv6-encoded IPv4 address, the public IPv4 address that the user has here. Yes, we have a question. So actually now it's a slash 60 that you get from Free. Okay, well, that's an implementation they had, but in general, you get a slash 64. But because they have, I didn't check, but for that you need a larger address space, actually, because you need a slash 28 to do that. Yeah. That looks like one subterranean slash. Okay. So do you have it at home? Can you watch YouTube?
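To make the 6RD address arithmetic described above concrete, here is a worked example with documentation values rather than anything from the talk: the provider dedicates a /32 to 6RD, the subscriber's public IPv4 address supplies the next 32 bits, and together they form the /64 delegated to the home network.

    6RD prefix of the provider : 2001:db8::/32             (example value)
    subscriber WAN IPv4        : 192.0.2.1 = c0 00 02 01   (in hex)
    delegated prefix           : 2001:db8:c000:201::/64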
I know about this. Okay. So what you get then at home is sort of native v6 on your local network, it gets encapsulated and gets out of the 6RD tunnel server. The difference between this and using 6to4 is that, of course, it's all managed by your provider and you're using space from your provider, which makes it more usable in a business environment, because you can control everything and you can ask your provider to check everything and you have the same level of service generally that you have on a normal dual stack network. Yes? Does the provider need to assign a public IPv4 to the end user? Well, not really, inside your network you could still use private address space, because what you have to do is actually establish the tunnel to the CPE here. But yeah, what is generally used is your public IP address that's put into the space there. This is the mechanism that was used by Free initially and was used, about a year ago, by Swisscom, and now Belnet and Belgacom. In Switzerland, Swisscom pushed the button, implemented 6RD from one day to the other and they put 800,000 users directly on v6 from one day to the other, jumping from nothing percent IPv6 usage in the country to 10 something percent. And Belgium, Belgacom, is now leading with 14 percent, 12, 14 percent of users using the same transitioning mechanism, because what you can do with 6RD is you just need a CPE that runs it and then everything in the infrastructure you don't have to touch. You don't have to touch anything else. Question? It's not exactly the same level of service because... Well, it gets encapsulated, I know. No, the thing is you can get the slash 48 from 6RD or 8.9 and you can have your own reverse, which they do provide for v4 but not v6. Well, if they don't provide the reverse, well, they could provide the reverse because... It's already morning. Yes, and yeah, exactly. Not the same exact level of service, I'm sorry, but because here you get the slash 64, but I mean in terms of using v6, you get the slash 64 or 60, well, you get 16 slash 64s. That's true. You could get larger address space but this is... If you were to get a larger address space, then they would have to... Well, you could still do that. You could still route that through the tunnel. Yes, it depends on how the provider does it. But in this case, yes. But what I meant is that basically everything is controlled by the provider so you can expect to have some check on the quality of service they're providing. What I wanted to say here was... Is there something else? Well, so basically this is what's being used most of the time by the European providers to provide v6 natively. Now what you have here though is the home user in this case will get IPv6 without noticing. There's a mechanism actually that's called Happy Eyeballs to make sure that the system is going to use either v6 or v4, preferring v6 if it works. If it doesn't work, it will just fall back to v4. Basically the users are not going to notice it that much that there's v6 enabled. I have a few friends in Switzerland which are on Swisscom and they didn't realize that they were using v6 until I told them. It kind of just works. Then we switch the paradigm. We've seen an IPv4-only infrastructure, but NAT64 in this case is completely the opposite: we have an IPv6-only infrastructure, so we change completely. In this case what we have is the home user only gets IPv6 and doesn't get IPv4. The way it works is by having a translation box and then a DNS box that does some mangling with the DNS answers.
But what happens is the home user tries to reach a website. They go through, they put out a DNS request. The DNS goes either v6 or v4 over the internet, queries the DNS. If it gets a response that contains a v6 address, a quad-A record, it will just send it back to the user. The user goes natively on v6. Nothing happens. Everything's fine. But if the answer from DNS contains only v4 records, we have a problem here because the network here is v6 only and we still want to reach it. What the DNS64 will do is translate, take that v4 answer and put it into a specific v6 prefix that's a slash 96. We still have 32 bits, and an IPv6 address is 128 bits. So we'll embed that IPv4 address into a v6 address and send it as an answer to the home user, which at that point will use that as destination for his outgoing packets, which will go through the NAT64 box. They will get translated, go over the v4 internet, and when they come back, they hit it again and they get translated again with the native v6 address with the IPv4 address embedded. This requires having both the DNS64 and NAT64 boxes running all together. The problem we have here, what can you imagine? DNSSEC is broken, of course. Well actually no, there is an RFC showing you how you could do NAT64 and DNS64 with a specific daemon on the home user's computer that still does DNSSEC. Yes, it's a little complicated so you don't want to do it for your end users, but DNSSEC is going to be broken because you have something in the middle that rewrites your queries and answers. Do you know of any service that doesn't work on v6? How many of you use Skype or Twitter? Well you were going to say that. But you know that those are broken. So if you have a home user that wants to use Skype, it will not work. So there is a patch on top of this. It's called 464XLAT. We add another patch on top of the transitioning mechanism where we actually, this scheme is a little bit convoluted, we'd say. So basically the mobile user, this is the case for, we highlight the mobile user because NAT64 is implemented on some mobile networks here in the US like Sprint and Verizon. Sorry, we're not in the US, on this continent. Sorry. The Canadian providers are way dumber. So what do you expect them to do at this point? Because they don't understand BGP. Well you don't need BGP to run NAT64. Well whatever. So basically what you do at this point is have a mobile user or a user that runs a small daemon that creates a local network that's v4. v4 gets translated by this little daemon into v6, sent over to the NAT64 box and then again on the v4 internet. So basically you add a layer of translation again. Oh my goodness, yes. It's already in there, it works out of the box on the recent versions of Android, and there's a daemon as well, we'll see that in a moment, that works on Linux and Unix variants. So yes. This is how things are at the moment if you need to access v4 content on a mobile network where you just have v6 and NAT64. And DS-Lite is the last one. DS-Lite is similar to 6RD but what we have is a v6 infrastructure and we tunnel v4, because we have a private network here, we tunnel it over here and we NAT44 on the box. There are two variants of DS-Lite, one is the plain DS-Lite and one is MAP, A plus P, where actually instead of just NATing here you reserve a small chunk of ports for every single user. So users share just port numbers for a single IP address and they're only going to use a small or reduced number of them on the NAT box there.
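As a concrete illustration of the DNS64 synthesis described above, with documentation values that are not from the talk: if the only record for a name is an A record, the resolver embeds those 32 bits in the low part of the NAT64 /96, and the NAT64 box later recovers the IPv4 destination from them.

    A record returned       : 192.0.2.1
    well-known NAT64 prefix : 64:ff9b::/96
    synthesised AAAA record : 64:ff9b::c000:201   (the same address can be written 64:ff9b::192.0.2.1)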
There are actually many, many more, because the idea people have here is, oh, that idea is broken, and there are so many I don't want to choose one of them, so I'll create my own, and then we end up with N transitioning mechanisms plus one. So the last time I tried to count them I stopped at 37 or something, RFCs. This is just a little collection of the most frequently used and seen ones today on the Internet. So now let's see, 6in4 on BSD. It's just a tunnel, we spoke about it. So if you go to Hurricane Electric or SixXS, they'll just give you copy and paste instructions to just go and create it. That's pretty easy, neat. Most of the time Hurricane Electric will give you a slash 48 out of the box and then you end up with a problem about subnetting, but that's not part of this talk. With SixXS, you have to go through different steps, since you have to get certified to get bigger address space. So you start with this slash 64 and if you want more you have to go through a process to get more space. But let's take a look at 6to4. 6to4, same as 6in4, but the tunnels are dynamically created. So for example in FreeBSD you just have to add this to your rc.conf. You just set up a network interface, in this case USB Ethernet 0, and you set up a default router. This is actually the anycast address that's inside the 6to4 address space. So basically you use that as your gateway, and you tell the stf interface, doesn't mean what you're thinking about, the stf interface, you set up your WAN interface IP address here so that the system can establish tunnels. So what the system is going to do at that point is try to connect to the closest 6to4 tunnel server and try to establish a tunnel to send out your packets from. And then what you can do at that point is configure the local interfaces, but we'll see that later. Has anybody ever tried this? Yes? A long, long time ago. Okay. Did it work? It did as well. I tried it recently and it works. Okay. Question? Is there support for different netmasks? Well, yes. You have a slash 16 and you can use the address space from 6to4, or you can use your own address space behind it and route it through 6to4 to get out of that. The ISP I use has a slash 24. Yep. Well. Yeah, it's going to support that. Well, wait, the slash 24, that's the allocation they have. So maybe you don't want to use the whole slash 24 on the, on the 6to4 tunnel here. The slash 24 and my IP address? No, that's 6rd, not 6to4. That's later. And then, well, actually that's just what I'm, you need to know for that the tunnel server address, and they might have a dedicated address range for that. So probably they have a slash 24 but they might have dedicated, say, a slash 32 to 6rd. Actually, there's a nice story behind this. In the RIPE region, a provider can get a slash 32 out of the box if they ask for space, but they can just send an email and ask to enlarge the address space to a slash 29. We won't ask any questions. Why this? Because there was a provider who made a little mistake in their addressing plan and they ran out of addresses before they could even think about it. They ran out of IPv6 even though they had a large address space. So they changed the policy. They didn't change their addressing plan. They changed the policy to allow for a slash 29 to be distributed to end users, to end users, to LIRs. But one of the reasons the policy got through was that it could also be used for 6rd in an efficient way. So it permitted other providers to come up with a meaningful addressing plan for using 6rd, so that you can play the game of using a slash 32 basically dedicated to 6rd and then get another 32 bits out of the public IP address in v4 that your user already has, and then you can assign a slash 64 for your users.
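The slide contents aren't captured in the transcript; as a rough sketch of the two setups just described, with documentation addresses, and with the caveat that the exact rc.conf knobs vary between FreeBSD releases, they might look something like this:

    # 6in4 to a tunnel broker, roughly the copy-and-paste commands a broker hands out
    ifconfig gif0 create
    ifconfig gif0 tunnel 198.51.100.1 203.0.113.1               # your public v4, the broker's v4 endpoint
    ifconfig gif0 inet6 2001:db8:1::2 2001:db8:1::1 prefixlen 128
    route -n add -inet6 default 2001:db8:1::1

    # 6to4 via an stf(4) interface in /etc/rc.conf, for a WAN address of 198.51.100.1
    cloned_interfaces="stf0"
    ifconfig_stf0_ipv6="inet6 2002:c633:6401::1 prefixlen 16"   # 198.51.100.1 is c6.33.64.01 in hex
    ipv6_defaultrouter="2002:c058:6301::1"                      # the 192.88.99.1 anycast relay mapped into 2002::/16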
So it permitted other providers to come up with a meaningful addressing plan for 6rd: you can play the game of dedicating a /32 to 6rd, appending the 32 bits of the public IPv4 address your user already has, and ending up with a /64 to assign to each of your users. So how does this work? Basically you create a gif tunnel to the tunnel server, with the address you get from your provider, and you route your IPv6 traffic through that endpoint. So simple, it's not rocket science. Rocket science comes a little bit later with NAT64 and DNS64. But the problem is that many providers, we contacted them, want their CPE to do it for you, because they want to have everything under their own control, so they might not allow you to do it on your own. And there is a provider, I can't remember the name and I tried looking it up recently, that actually gives you all the data to do it yourself. So you can do that, but that's a US-based provider. I don't remember the name.

So, NAT64. For NAT64 you instead need two components: the translation and the DNS. Let's take a look at the translation in PF. Peter, correct me if I'm wrong or if this is wrong, but there's an af-to keyword that's built into PF already. So basically what you need to do is just put this one line into PF to have NAT64 done at that level, and that's already in OpenBSD. Actually I used the /96 that's reserved for NAT64, but you could use any of your ranges as long as it's a /96: if you have address space inside your network, you can assign a /96 for this purpose and just put it here. But for now, in my example, I prefer to use the NAT64 dedicated address space.

The second way is using Tayga. It's a small daemon, something like 86 kilobytes of sources. Basically you just provide five lines of config file and that's it. I also used the NAT64 prefix here. It's available in the ports, so you can just install it and use it out of the box. It does the translation in user space using a tun device, so it might be easier for you to do this in user space rather than in PF.

Now, DNS64 is actually easier than you might think, because from BIND 9.8 up, DNS64 is built in. So basically you just provide dns64 in the options. It points to the same /96 that we defined earlier, and you say which clients the synthesis applies to: I use the documentation prefix, but you can put your own address space in here and have it resolved that way for those clients. When an answer comes in that only has IPv4 records, it will translate it into v6, embedding the address into that prefix. Okay, I didn't put Unbound here because there isn't support for that yet, but if you want you can ask Björn to provide you patches for it.

And for 464XLAT, this is quite recent: it was posted on the RIPE lists just recently. I haven't had time to test CLATD because it was only announced on Monday on the RIPE lists; we had the RIPE meeting this week and they presented it there. It's written in Perl, so it should be easily portable, even though they wrote it on Linux. It's based on Tayga, so it needs Tayga working alongside it to provide the final translation. So I will probably follow up on this in some time, but you just have to know it's there. You can try it; it's on GitHub, so you can take a look at it.

And the last thing is radvd, the router advertisement daemon. So far we have looked at the wide area network part, towards the outside world.
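For reference, the one-line PF rule mentioned above might look like this on OpenBSD. It's a minimal sketch: em0 and the IPv4 source address 198.51.100.1 are placeholders, and 64:ff9b::/96 is the well-known NAT64 prefix.

    # /etc/pf.conf -- NAT64 with PF's built-in address family translation
    pass in on em0 inet6 from any to 64:ff9b::/96 af-to inet from 198.51.100.1

Anything a v6-only client sends towards that /96 gets rewritten into an IPv4 packet sourced from that v4 address.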
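The Tayga alternative really is just a handful of lines in tayga.conf. Again a sketch: the tun device name, the dynamic pool and the data directory are plausible placeholders, not values from the talk.

    # tayga.conf -- user-space NAT64 through a tun device
    tun-device nat64                  # tun interface tayga will create
    ipv4-addr 192.168.255.1           # tayga's own IPv4 address inside the pool
    prefix 64:ff9b::/96               # NAT64 prefix to translate
    dynamic-pool 192.168.255.0/24     # private v4 addresses mapped to v6 clients
    data-dir /var/db/tayga            # where the address mappings are stored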
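And the DNS64 part in BIND 9.8 and later is the dns64 statement in named.conf. In this sketch the documentation prefix 2001:db8::/32 stands in for whatever client range you would actually use.

    // named.conf -- DNS64 built into BIND 9.8+
    options {
        dns64 64:ff9b::/96 {             // prefix used to synthesize AAAA answers
            clients { 2001:db8::/32; };  // only synthesize for these v6 clients
        };
    };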
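Since the last piece, the router advertisements, is also only a few lines, here is what a minimal radvd.conf might look like; the interface name and the /64 are placeholders, and on FreeBSD the stock rtadvd daemon (rtadvd_enable="YES" in rc.conf) plays the same role.

    # /etc/radvd.conf -- advertise the local /64 so clients autoconfigure
    interface eth1 {
        AdvSendAdvert on;               # actually send router advertisements
        prefix 2001:db8:1:1::/64 {      # the /64 clients will pick up
        };
    };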
But inside your network you have to enable router advertisements for it to work. So this is just: enable radvd, and this is the /64 that you want to use on your local network. What this daemon will do is send out router advertisements so that your clients pick up the local network, install an address on their interface, and start using v6.

Well, just one quick conclusion. The advice we tell everybody is to just go fully dual stack, but there might be constraints in doing so. One of the reasons Swisscom, for example, didn't do native v6 to the end users is that they have some DSLAMs which don't do v6; or you might have some devices in the middle that you don't want to reboot to change the software version, because you know that software version is broken on v6, or you need some newer version of the software. So there might be constraints. But there are plenty of choices, and it's pretty easy to set them up, as you could see: they just require a few lines of configuration to run. So this is one of the things that we do at the RIPE NCC, try to get people so excited about v6 that they go back to their offices and start implementing v6 as soon as they can. So this was my goal for today: to make sure you know that there's something out there you can use to implement v6 in your network right out of the box, even if your network admin doesn't do his job correctly and only gives you v4.

Any questions? No? Yeah? What are your suggestions for, like, a service provider, not an end user but like a... Okay, as a transition mechanism? Well, yeah. Like, say we only support IPv4 right now. Uh-huh. So should we go native IPv6, or is there something... If your infrastructure can do that out of the box, I would say go dual stack, because then you avoid any problem about MTU, any problem about DNSSEC being broken, or managing extra boxes and adding layers of translation. If you can, just go dual stack. You might have limitations, as I said, some box in the middle that doesn't do v6, but if you really need a transition mechanism I would go with 6RD, because that's the easiest one, the one you can control, and the one that goes over v4 without much hassle. So I would suggest using that, but as much as we can, we suggest dual stacking from the ground up. Any other question? Well then, thank you. Thank you very much.
The growth pace of IPv6 adoption is still slow, but constantly increasing, as more providers and networks migrate. There is one aspect of the adoption that is still underestimated, and that is transition mechanisms, which enable networks speaking different protocols to talk to each other. In May 2013, Switzerland jumped to the top of IPv6 utilisation in the world just by having its incumbent operator enable one of these for a large base of its users. This talk will first introduce a handful of different transitioning mechanisms in use, picking the most widely used amongst the plethora of ones available. In the second part, a live demonstration will show the audience how to set up some of them using native tools on OpenBSD and FreeBSD.